Transcript:
Prof. Emily M. Bender (she/her) @emilymbender@dair-community.social
We’re going to need journalists to stop talking about synthetic text extruding machines as if they have thoughts or stances that they are trying to communicate. ChatGPT can’t admit anything, nor self-report. Gah.
She clearly didn’t read the article. That’s exactly what it’s about.
https://archive.ph/gsavP
And I’ll defend the use of the word “admit” here (and in the headline), because it makes clear that the companies are aware of the danger and are trying to do something about it, but people are still dying.
So they can’t claim ignorance — or that it’s technically impossible to detect, if the dude’s mom was able to elicit a reply of “yes this was a mental health crisis” after the fact.
This is the second time in recent days that I’ve seen Lemmy users criticize journalists for reporting on what a chatbot says. We should be very careful here not to let LLM vendors off the hook for what their chatbots say just because we know the chatbots shouldn’t be trusted. Especially when the journalists are trying to expose the ugly truth of what happens when they are trusted.
At first, people were annoyed that every single answer began with “as a large language model, I don’t have thoughts or feelings, but…”, so now some have overcorrected, I guess.