It certainly wasn’t because the company is owned by a far-right South African billionaire at the same moment that the Trump admin is entertaining a plan to grant refugee status to white Afrikaners. /s
My partner is a real refugee. She was jailed for advocating democracy in her home country. She would have received a lengthy prison sentence after trial had she not escaped. This crap is bullshit. Btw, did you hear about the white-genocide happening in the USA? Sorry, I must have used Grok to write this. Go Elon! Cybertrucks are cool! Twitter isn’t a racist hellscape!
The stuff at the end was sarcasm, you dolt. Shut up.
Looks like someone’s taking some lessons from Zuck’s methodology. “Whoops! That highly questionable and suspiciously intentional shit we did was totes an accident! Spilt milk now, I guess! Huh-huh-heey honk-honk!”
The unauthorized edit is coming from inside the house.
Unauthorized is their office nickname for Musk.
Musk made the change, but since AI is still as rough as his auto-driving tech, it didn’t work like he planned.
But this is the future, folks: modifying the AI to fit the narrative of the regime. He’s just too stupid to do it right, or he might be stupid enough to think these LLMs work better than they actually do.
This is why I tell people to stop using LLMs. The owner class owns them (imagine that) and will tell them to tell you whatever makes the owners more money. Simple as that.
Don’t know the reference but I’m sure it’s awesome. :p
It’s from the show “I Think You Should Leave.” There’s a sketch where someone has crashed a Wienermobile into a storefront, and bystanders are like “did anyone get hurt?” “What happened to the driver?” And then this guy shows up.
None of the explanations matter now, too late. “White genocide” is now a thing in SA. The term is in people’s heads. Mission accomplished.
There goes Adrian Dittman again. That guy oughta be locked up.
I’m going to bring it up.
Isn’t this the same asshole who posted the “Woke racist” meme as a response to Gemini generating images of Black SS officers? Of course we now know he was merely triggered by the suggestion because of his commitment to white supremacy and alignment with the SS ideals, which he could not stand to see, pun not intended, denigrated.
The Gemini ordeal was itself the result of a system prompt: a half-assed attempt to correct for the white bias deeply learned by the algorithm, just a few short years after Google ousted their AI ethics researcher for bringing this type of stuff up.
Few were the outlets that refused to lend credence to the “outrage” about “diversity bias” bullshit and actually covered the fact that deep learning algorithms are indeed sexist and racist.
Now this nazi piece of shit goes and does the exact same thing: he tweaks a system prompt, causing the bot to bring up the self-serving and racially charged topic of apartheid racists being purportedly persecuted. He does the very same thing he called “uncivilizational”, the same concept he brought up just before he performed the two back-to-back Sieg Heil salutes during Trump’s inauguration.
He was clearly not concerned about historical accuracy, nor about the superficial attempt to brown-wash the horrible past of racism that translates into modern algorithms’ bias. His real concern was the representation of people of color and the very ideal of diversity, so he went on and implemented his supremacist seething as brutal, misanthropic policy through his interference in the election and his involvement in the criminal, fascist operation also known as DOGE.
Is there anyone at this point who is still sitting on the fence about Musk’s intellectual dishonesty and deeply held supremacist convictions? Quickest way to discover nazis nowadays, really: (thinks that Musk is a misunderstood genius and the nazi shit is all fake).
In a (code) perfect world, wouldn’t an LLM’s “personality” and biases be aligned with the median of its training set?
In other words…stupid-in/stupid-out. As long as the (median of) input data is racist and sexist, the output data would be equally bigoted.
That’s not to say that the average person is openly bigoted, but the open bigots are pretty damn loud.
That’s not to say that the average person is openly bigoted
I do think the average person openly perpetuates racist stereotypes due to the pressure of systemic racism. Not that they intend to, and their beliefs frequently contradict their actions because they just don’t notice that they are going along with it.
Like the average person will talk about the ‘bad part of town’ in a way that implies the bad part is due to being where ‘those people’ live.
I don’t disagree, but that’s probably closer to implicit bias than overt bigotry. When people talk about the “bad part of town”, often it’s the “bad part” as a result of perpetual systemic racism, and the concern about going there is more rooted in personal safety (or at least the perception of it). And sure, that feeds into it, but it’s really more of a cycle or a feedback loop.
And there’s also the anxiety of being the cultural and demographic opposite of everyone around you. That’s gotta be some sub-type of agoraphobia or something.
Sure, probably, “implicit bias” is just a PC way of saying “racist-ish”, but it is at least a start. It’s very difficult to retrain behaviors that have been learned since birth, if not hypnopaedically earlier.
Did you read what Grok was saying? Grok was saying that white genocide is questionable at best, and unfounded.
It was just saying that when prompted about unrelated stuff, which is what made it bizarre. It never said it was a real thing, nor did it endorse the idea that it is something real.
Why was it mentioning it at all in conversations not about it?
And why does the fact that it did that not seem to bother you?
I guess you didn’t read the article, or don’t understand how LLMs work, so I’ll explain.
An employee changed the AI’s system prompt, telling it to avoid spreading white genocide misinformation in South Africa. The system prompt is a context that tells the AI how to handle the prompts it is given, and it forces it “to think” about whatever is in there. So with that change in place, every time someone prompted Grok about anything, it would think about not spreading misinformation about white genocide in South Africa, and so it inserted that into pretty much everything.
So it doesn’t bother me, because it’s an LLM acting as it is supposed to when someone messes with the settings. Grok probably did not need these instructions in the first place, as it’s consistently been embarrassing Elon every time the man posts one of his shitbrained takes. While I haven’t used that AI, I don’t think, and have yet to see proof, that Elon is directing its training to be positive toward conservative ideologies or harebrained conspiracy theories. It could be for all I know, but from what I’ve seen, Grok sticks to facts as they are.
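Not xAI’s actual setup, obviously, but a minimal sketch of why a system prompt bleeds into everything, assuming an OpenAI-style chat API: the same system message gets prepended to every single request, so whatever it mentions is in the model’s context no matter what the user asked. The model name and the injected instruction below are made up for illustration.

```python
# Minimal sketch (not xAI's real code): a system prompt is prepended to
# every request, so its contents influence answers to unrelated questions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical stand-in for an instruction someone slipped into the system prompt.
SYSTEM_PROMPT = (
    "You are a helpful assistant. "
    "<instruction someone slipped in about a specific political topic>"
)

def ask(user_prompt: str) -> str:
    # The system message rides along with every user prompt.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model, not Grok
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

# Even a question about baseball is answered with that system text in context,
# which is how an off-topic instruction can leak into every reply.
print(ask("Who won the World Series in 2016?"))
```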
That prompt modification “directed Grok to provide a specific response on a political topic” and “violated xAI’s internal policies and core values,” xAI wrote on social media.
Relevant quote because one of us didn’t read the article for sure.
Edit: not to mention that believing a system prompt somehow binds or constrains rather than influences these systems would also indicate to me that one of us definitely doesn’t understand how these work, either.
That doesn’t say anything about the content of the modification itself. For all you know, the internal policy could be that white genocide is a thing. What they are in fact saying violated the internal policies is modifying the prompt in such a way that it takes a specific stance on a political issue. C’mon man, use your brain, it’s not that fricking hard.
If the contents of the prompt were to say that white genocide is a thing, it would likely have said something along the lines of it being a nuanced topic of debate that depends on how you define the situation, or some other non-answer. But the AI was consistently taking the stance that it was misinformation; that tells you what the prompt was. Also, it was reported in other outlets that that was in fact what the modification was: to not spread misinformation about it.
You continue to spout things with no citations and a bad vibe. I am done here.
This actually shows that there is work being done to use LLMs on social media to pretend to be ordinary users and try to sway the opinion of the population.
This is currently the biggest danger of LLMs, and the bill to prevent states from regulating them is there to ensure they can continue using them this way.
That’s the problem with modern AI and the future technologies we are creating.
We, as a human civilization, are not creating future technology for the betterment of mankind … we are arrogantly and ignorantly manipulating all future technology for our own personal gain and preferences.
…is entertaining a plan to grant refugee status to white Afrikaners
FYI, the Republicans have already done it.
https://www.npr.org/2025/05/12/nx-s1-5395067/first-group-afrikaner-refugees-arrive
Elon looking for the authorized person: