The meme only says “if … then …”. It does not imply the reverse relationship of “if not … then not …”.
Oh awesome, thank you so much!
Seconding this. Legitimately better than Google photos in a lot of ways, even if you don’t care about the data ownership aspect. If you’ve ever been annoyed at how Google Photos handles face detection / grouping, you’ll love Immich.
I’d love to know what font was used for the big “Saturday” there!
Eh, that’s a bit of a stretch. There’s more awareness by default here because of GDPR and such, but I wouldn’t say people really care that much more here.
Now please explain to me how C works.
That’s not what they’re asking. It’s not about how C works, it’s about how specific APIs written in C work, which is hard to figure out on your own for anyone who is not familiar with that specific code. You’ll have to explain that to any developer coming new into the project expected to work with those APIs, no matter their experience with C.
Congrats, you completely missed the point. Maybe read the actual article before going on a rant that’s only tangentially related?
It is an algorithm that searches a dataset and when it can’t find something it’ll provide convincing-looking gibberish instead.
This is very misleading. An LLM doesn’t have access to its training dataset in order to “search” it. Producing convincing-looking gibberish is what it always does, that’s its only mode of operation. The key is that the gibberish that comes out of today’s models is so convincing that it actually becomes broadly useful.
That also means that no, not everything an LLM produces has to have been in its training dataset, they can absolutely output things that have never been said before. There’s even research showing that LLMs are capable of creating actual internal models of real world concepts, which suggests a deeper kind of understanding than what the “stochastic parrot” moniker wants you to believe.
LLMs do not make decisions.
What do you mean by “decisions”? LLMs constantly make decisions about which token comes next, that’s all they really do. And in doing so, on a higher, emergent level they can make any kind of decision that you ask them to. The only question is how good those decisions are going to be, which in turn entirely depends on the training data, how good the model is, and how good your prompt is.
That kind of window has been around for a long time already. Also, let me introduce you to window awnings.
It still protects you from your passwords being compromised in any way except through a compromise of the password manager itself. Yes, it’s worse than keeping them separate, but it’s also still much better than not having 2FA at all.
You might be misremembering / misinterpreting a little there. This behavior is not intentional, it’s just a side effect of how the algorithm currently works. Showing you longer videos doesn’t equate to showing you more ads. On the contrary, if you get loads of short videos you’ll have way more opportunities to see pre-roll ads, but with longer videos, you’re limited to just the mid-roll slots in that video. So YouTube doesn’t really have an incentive to make it work like that, it’s just accidental.
Here’s the Spiffing Brit video on this, which I think you might have gotten this idea from: https://youtu.be/8iOjeb5DTZI
Edit: to be clear, I fully agree that YouTube will do anything to shove ads down our throats no matter how effective they actually are. I’m just saying that this example you’ve brought is not really that.