What tools are missing to moderate an LLM bot?
I am a mod. I receive a report from a user, check it, and ban. All done with the tools we currently have.
That’s all manual effort currently. Someone has to report a problematic post, you have to manually look at it, manually decide if it should be removed, and manually remove it. I’m not the biggest fan of automatically removing content, but when someone posts 500 posts all at once, that manual effort is a pain.
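For the 500-posts-at-once case, even a small bulk-removal script would cut the pain. A rough sketch in Python; the base URL, endpoint paths, and payload fields below are guesses rather than the real Lemmy API, so treat it as pseudocode to adapt to your instance:

```python
import requests

# Hypothetical instance URL and endpoints; the actual Lemmy API may differ.
API = "https://lemmy.example/api/v3"

def remove_all_posts(jwt: str, offender_id: int, reason: str) -> None:
    """Fetch one page of an account's posts and remove them in one pass."""
    # Assumed listing endpoint and parameter names.
    resp = requests.get(
        f"{API}/user",
        params={"person_id": offender_id, "limit": 50},
    )
    for item in resp.json().get("posts", []):
        # Assumed mod-removal endpoint and payload shape.
        requests.post(
            f"{API}/post/remove",
            json={
                "post_id": item["post"]["id"],
                "removed": True,
                "reason": reason,
                "auth": jwt,
            },
        )
```

The point is that a mod still makes the judgment call once, on the account, instead of 500 times, on each post.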
This is how we did it on Reddit too.
But what if Elon Musk makes 20,000 Lemmy accounts to give away free crypto?
This doesn’t scale.
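This is where some automated flagging could help even if the removal itself stays manual. A minimal sliding-window sketch, assuming you can observe each new post's author and timestamp; the thresholds are made up:

```python
from collections import defaultdict, deque
from time import time

# Illustrative thresholds: flag anyone exceeding MAX_POSTS within the window.
WINDOW_SECONDS = 600
MAX_POSTS = 20

_recent: defaultdict[str, deque] = defaultdict(deque)

def should_flag(author: str, now: float | None = None) -> bool:
    """Record one post event; return True once the author looks like a flood."""
    now = time() if now is None else now
    events = _recent[author]
    events.append(now)
    # Evict timestamps that have slid out of the window.
    while events and now - events[0] > WINDOW_SECONDS:
        events.popleft()
    return len(events) > MAX_POSTS
```

A human would still review whatever gets flagged; the script only puts the account in a queue instead of waiting for user reports.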