• 0 Posts
  • 18 Comments
Joined 8 months ago
Cake day: March 3rd, 2024

  • you have to do a lot of squinting to accept this take.

    so his wins were copying competitors, and even those products didn’t see success until they were completely revolutionized (is Bing in 2024 a Ballmer success? is .NET becoming widespread his doing?). one thing Nadella did was embrace the competitive landscape and open source, with key moves like acquiring GitHub and open sourcing .NET. i honestly don’t have the time to fully rebut this hot take, but i don’t think the Ballmer haters are totally off base here. even if some of the products started under Ballmer are now successful, it feels disingenuous to attribute their success to him. it’s like an alcoholic dad taking credit for his kid becoming an actor. Microsoft is successful despite him.


  • these days Hyprland but previously i3.

    i basically live in the terminal unless i’m playing games or in the browser. these days i use most apps full screen and switch between desktops, and i launch apps using wofi/rofi. this has all become very specialized over the past decade, and it almost has a “security by obscurity” effect where it’s not obvious how to do anything on my machines unless you have my muscle memory.

    not that i necessarily recommend this approach generally, but i find value in mostly using a keyboard to control my machines and minimizing visual clutter. i don’t even have desktop icons or a wallpaper.
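
    for a concrete taste of what this looks like, here’s a minimal hyprland.conf sketch (the binds and terminal emulator are illustrative, not my actual config):

    ```
    # keyboard-driven basics in ~/.config/hypr/hyprland.conf
    $mod = SUPER

    bind = $mod, RETURN, exec, alacritty     # drop into a terminal
    bind = $mod, D, exec, wofi --show drun   # launcher instead of desktop icons
    bind = $mod, F, fullscreen               # run the focused app full screen
    bind = $mod, 1, workspace, 1             # jump between desktops...
    bind = $mod, 2, workspace, 2
    bind = $mod SHIFT, 1, movetoworkspace, 1 # ...and send windows along
    ```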



  • All programs were developed in Python language (3.7.6). In addition, freely available Python libraries of NumPy (1.18.1) and Pandas (1.0.1) were used to manipulate data, cv2 (4.4.0) and matplotlib (3.1.3) were used to visualize, and scikit-learn (0.24.2) was used to implement RF. SqueezeNet and Grad-CAM were realized using the neural network library PyTorch (1.7.0). The DL network was trained and tested using a DL server mounted with an NVIDIA GeForce RTX 3090 GPU, 24 Intel Xeon CPUs, and 24 GB main memory

    it’s interesting that they’re using pretty modest hardware (i assume they mean 24 cores, not 24 CPUs) and fairly outdated dependencies. also, having their dependencies listed out like this is pretty adorable; it has academic-out-of-touch-not-a-software-dev vibes. it makes you wonder how much further a project like this could go with decent technical support. like, all these talented engineers are using 10k times the power to work on generalist models like GPT that struggle at these kinds of tasks, while promising they’ll work someday and trivializing them as “downstream tasks”. i think there’s definitely still room in machine learning for expert models; it sucks that they struggle for proper support.
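
    if anyone wants to try reproducing their environment, here’s a quick sanity-check sketch (mine, not from the paper) that confirms an install matches the versions quoted above:

    ```python
    # check installed library versions against the ones listed in the paper
    import importlib

    expected = {
        "numpy": "1.18.1",
        "pandas": "1.0.1",
        "cv2": "4.4.0",        # installed as opencv-python
        "matplotlib": "3.1.3",
        "sklearn": "0.24.2",   # installed as scikit-learn
        "torch": "1.7.0",
    }

    for name, want in expected.items():
        have = getattr(importlib.import_module(name), "__version__", "unknown")
        status = "ok" if have.startswith(want) else f"mismatch (have {have})"
        print(f"{name:>10}: want {want} -> {status}")
    ```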






  • language is intrinsically tied to culture, history, and group identity, so any concept that is expressed through a certain linguistic system is inseparable from its cultural roots

    i feel like this is a big part of it. it reminds me of the Sapir-Whorf hypothesis. search results and neural networks are susceptible to bias just like a human is; “garbage in, garbage out”, as they say.

    the quote directly after mentions that newer or more precise searches produce more coherent results across languages. that reminds me of the time i got curious and looked up Marxism on Conservapedia. as you might expect, the high-level descriptions of Marxism are highly critical and include a lot of bias, but interestingly, once you dig down to concepts like historical materialism, it gets harder to spin, since popular media narratives largely ignore those details and any “spin” would likely be blatant falsehood.

    the author of the article seems to really want there to be a malicious, conspiratorial effort to suppress information, and while that may be true in some cases, it just doesn’t seem feasible at scale. bias is good to call out, but i don’t think the people who dedicate their lives to researching and advancing language technology are sleeping on the fact that it exists.


  • it’s super weird that people think LLMs are fundamentally different from neural networks, the underlying technology. neural network architectures are constantly improving, and LLMs are just the product of a ton of research and of capabilities that emerged once the transformer architecture was discovered. what LLMs have shown us is that we’re definitely on the right track in using neural networks to solve a wide range of problems classified as “AI”.
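
    to make that concrete, here’s a toy PyTorch sketch (my own, not any particular model’s code) of a transformer block; it’s built from ordinary neural network pieces like linear layers, attention, and normalization:

    ```python
    import torch
    import torch.nn as nn

    class TransformerBlock(nn.Module):
        """one block; an LLM is mostly a deep stack of these plus token embeddings"""
        def __init__(self, dim: int = 64, heads: int = 4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm1 = nn.LayerNorm(dim)
            self.ff = nn.Sequential(
                nn.Linear(dim, 4 * dim),
                nn.GELU(),
                nn.Linear(4 * dim, dim),
            )
            self.norm2 = nn.LayerNorm(dim)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # self-attention with a residual connection, then a feed-forward net
            out, _ = self.attn(x, x, x)
            x = self.norm1(x + out)
            return self.norm2(x + self.ff(x))

    x = torch.randn(1, 16, 64)          # (batch, sequence length, embedding dim)
    print(TransformerBlock()(x).shape)  # torch.Size([1, 16, 64])
    ```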









  • it’s not a password; it’s closer to a username.

    but realistically, being tied down and forced to unlock my phone isn’t part of my personal threat model. everyone with windows on their house should know that security is mostly about how far an adversary is willing to go to try to steal from you.

    personally, i like the natural daylight, and i’m not paranoid enough to brick up my windows just because they’re a potential ingress.