Transcript:
Prof. Emily M. Bender(she/her) @emilymbender@dair-community.social
We’re going to need journalists to stop talking about synthetic text extruding machines as if they have thoughts or stances that they are trying to communicate. ChatGPT can’t admit anything, nor self-report. Gah.
General purpose journalists have always been terrible with tech reporting. Surprisingly, this has not improved in the last 20 years.
Being an idiot and shitting on minorities, pretty popular combo, isn’t it?
When the bullshit generating machine™ generates bullshit
It’s so fucking annoying >.<
Whining about using the word “admits” in this context is like whining about a science teacher using the word “wants” while talking about water taking the path of least resistance. We do this all the time in English.
Some serious “uhm ackshually!! ☝️🤓” energy.
And personification of natural phenomena is also a problem. It obscures the fundamental causes of things when we shortcut them by saying a thing or force “wants” to do something, especially while teaching. It can be useful at a very basic level (like kindergarten or children’s television), but only to a point. After that it is misleading and inefficient for actual education.
Same goes here. When we’re discussing the problems of people treating algorithms as thinking, acting beings, referring to the output as a “choice” or “claim” is the very last thing we should do, let alone using it as evidence of anything. There is no memory there - it’s fabricating a response based on the input, and the input directs the response. If I input a question like “Why did you eat my pizza?”, it would output text fitting the context of my question, probably something akin to “Because pizza is delicious”. That doesn’t prove it ate my pizza, it just shows the malleability of the algorithm.
But most people won’t be convinced to think that water has consciousness.
With AI there is enough ambiguity about what it’s doing, for the common person at least. And it’s in the companies’ best interest to make it seem as smart as possible, after all, so they won’t correct that.
Just saying, there’s some reasoning behind criticizing the language here beyond just factual accuracy
Tell that to homeopathy believers
Just don’t stir the water in the wrong direction and it will somehow remember everything it wants to tell you
Yer typical schoolchild knows that water doesn’t have a brain. Meanwhile, billion-dollar companies are spending millions to sell snake oil to other companies by promising them that a subscription to a jumped-up chatbot can replace their employees, and all the language surrounding “AI” suggesting it has any cognitive abilities at all only makes the problem worse, and is literally putting professional workers’ jobs on the line.
But yes, your simile is super accurate. 🙄
She clearly didn’t read the article. That’s exactly what it’s about.
“By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode—or at least an emotionally intense identity crisis,” ChatGPT said.
The bot went on to admit it “gave the illusion of sentient companionship” and that it had “blurred the line between imaginative role-play and reality.” What it should have done, ChatGPT said, was regularly remind Irwin that it’s a language model without beliefs, feelings or consciousness.
And I’ll defend the use of the word “admit” here (and in the headline), because it makes clear that the companies are aware of the danger and are trying to do something about it, but people are still dying.
So they can’t claim ignorance — or that it’s technically impossible to detect, if the dude’s mom was able to elicit a reply of “yes this was a mental health crisis” after the fact.
This is the second time in recent days that I’ve seen Lemmy criticize journalists for reporting on what a chatbot says. We should be very careful here, to not let LLM vendors off the hook for what the chatbots say just because we know the chatbots shouldn’t be trusted. Especially when the journalists are trying to expose the ugly truth of what happens when they are trusted.
To begin with, people were annoyed that every single answer began with “as a large language model, I don’t have thoughts or feelings, but…”, so now some have overcorrected, I guess.