- cross-posted to:
- technology@lemmy.world
Today, a prominent child safety organization, Thorn, in partnership with a leading cloud-based AI solutions provider, Hive, announced the release of an AI model designed to flag unknown CSAM at upload. It’s the earliest AI technology striving to expose unreported CSAM at scale.
The model I use (I forget the name) popped out something pretty sus once. I wouldn’t describe it as CP, but it was definitely weird enough to really make me uncomfortable. It’s the only thing it ever made that I immediately deleted and removed from the recycling bin too lol.
The point I’m making is that this isn’t as far-fetched as you believe.
Plus, you can merge models. Take a general-purpose model that knows what children look like, a general-purpose pornographic model, merge them, then start generating and selecting images based on Thorn’s classifier.
You can’t merge a generative model and a classification model. You can run them in series to get a bunch of false positives/hallucinations, but you can’t make it generate something from the other model.