I mainly use it for fact-checking sources from the internet and looking for bias. I double-check everything, of course. Beyond that, it's good for rule checking in MTG Commander games, and for deck building. I mainly use it for its search function.
Talking with an AI model is like talking with that one friend who is always high and thinks they know everything. But they have a wide enough set of interests that they can actually piece together an idea, most of the time wrong, about any subject.
I am sorry to say I can frequently be this friend…
Isn’t this called “the Joe Rogan experience”?
This, but for Wikipedia.
There’s an easy way to settle this debate. Link me a Wikipedia article that’s objectively wrong.
I will wait.
The obvious difference being that Wikipedia has contributors cite their sources, and can be corrected in ways that LLMs are flat-out incapable of.
Really curious about anything Wikipedia has wrong though. I can start with something an LLM gets wrong constantly if you like
Why don't you then go and fix these, quoting high-quality sources? Are there none?
There are plenty of high quality sources, but I don’t work for free. If you want me to produce an encyclopedia using my professional expertise, I’m happy to do it, but it’s a massive undertaking that I expect to be compensated for.
Many FOSS projects don’t have money to pay people
Because some don't let you. I can't find anything to edit the Elon Musk article, or even a way to suggest an edit. It says he is a co-founder of OpenAI, but I can't find any evidence to suggest he has any involvement. Wikipedia says co-founder, though.
https://openai.com/index/introducing-openai/
https://www.theverge.com/2018/2/21/17036214/elon-musk-openai-ai-safety-leaves-board
Tech billionaire Elon Musk is leaving the board of OpenAI, the nonprofit research group he co-founded with Y Combinator president Sam Altman to study the ethics and safety of artificial intelligence.
The move was announced in a short blog post, explaining that Musk is leaving in order to avoid a conflict of interest between OpenAI’s work and the machine learning research done by Tesla to develop autonomous driving.
He’s not involved anymore, but he used to be. It’s not inaccurate to say he was a co-founder.
Interesting! Cheers! I didn't go further than the OpenAI Wikipedia article, tbh. It didn't list him there, so I figured the claim was inaccurate. It turns out it is me who is inaccurate!
This, but for all media.
Well, yes, but also no. Every text is potentially wrong, because authors tend to incorporate their subjectivity into their work. It is only through inter-subjectivity that we can get closer to objectivity. How do we do that? By making our claims open to the scrutiny of others: by citing sources, publishing reproducible code, and making available the data on which we base our claims. Then others can understand how we came to a claim, find the empirical and logical errors in it, and thus formulate very precise criticism. Through this mutual criticism we, as a society, move ever closer to objectivity. This is true for every text whose goal is formulating knowledge rather than just stating opinions.
However, one can safely say that ChatGPT is designed far worse than Wikipedia when it comes to creating knowledge. Why? Because ChatGPT is non-reproducible. Every answer is generated differently. The erroneous claim you read in a field you know nothing about may not appear when a specialist in that field asks the same question. This makes errors far more difficult to catch, and thus they “live” for far longer in your mind.
Secondly, Wikipedia is designed around the principle of open contribution. Every error that is discovered by a specialist can be directly corrected. Sure, it might take longer than you expected for your correction to be published. On ChatGPT's side, however, there is no such mechanism whatsoever. Read an erroneous claim? Well, just suck it up and live with the ambiguity that it may or may not be spread.
So if you catch errors in Wikipedia, go correct them instead of complaining that there are errors. Duh, we know. But an incredible amount of Wikipedia consists not of erroneous claims but of knowledge open to the entire world, and we can be grateful every day that it exists.
Go read Popper, Karl Raimund. 1980. “Die Logik der Sozialwissenschaften” [“The Logic of the Social Sciences”], pp. 103–23 in Der Positivismusstreit in der deutschen Soziologie (Sammlung Luchterhand). Darmstadt/Neuwied: Luchterhand, if you are interested in the topic.
Sorry if this was formulated a little aggressively; I have no personal animosity against you. I just think it is important to stress that while both ChatGPT and Wikipedia may have their flaws, Wikipedia is nonetheless far better designed for spreading knowledge than ChatGPT, precisely because of the way it handles erroneous claims.
What topics are you an expert on and can you provide some links to Wikipedia pages about them that are wrong?
I’m a doctor of classical philology, and most of the articles on ancient languages, texts, and history contain errors. I haven’t made a list of those articles, because the lesson I took from the experience was simply never to use Wikipedia.
The fun part about Wikipedia is that you can take your expertise and help correct the information; that’s the entire point of the site.
Can you at least link one article and tell us what is wrong about it?
How do you get a fucking PhD but you can’t be bothered to post a single source for your unlikely claims? That person is full of shit.
Do not bring Wikipedia into this argument.
Wikipedia is the Library of Alexandria, and the amount of effort people put into keeping its pages as accurate as possible should make every LLM supporter ashamed of how inaccurate their models are, given that they use Wikipedia as training data.
With all due respect, Wikipedia’s accuracy is incredibly variable. Some articles might be better than others, but a huge number of them (large enough to shatter confidence in the platform as a whole) contain factual errors and undisguised editorial biases.
It is likely that articles on past social events or individuals will have some bias, as most articles on those matters do.
But almost all articles on aspects of science are thoroughly peer-reviewed and cited with sources. This alone makes Wikipedia invaluable as a source of knowledge.
Idk, it says Elon Musk is a co-founder of OpenAI on Wikipedia. I haven’t found any evidence to suggest he had anything to do with it. Not very accurate reporting.
The company counts Elon Musk among its cofounders, though he has since cut ties and become a vocal critic of it (while launching his own competitor).
Isn’t co-founder similar to being made partner at a firm? You can kind of buy your way in, even if you weren’t one of the real originals.
That is definitely how I view it. I’m always open to being shown I am wrong, with sufficient evidence, but I believe you are accurate on this.
TBF, as soon as you move out of the English language, the oversight of a million pairs of eyes gets patchy fast. I have seen credible reports about Wikipedia pages in languages spoken by, say, fewer than 10 million people, where certain elements can easily control the narrative.
But hey, some people always criticize Wikipedia as if there were some actually 100% objective alternative out there, and that I disagree with.
Fair point.
I don’t browse Wikipedia much in languages other than English (mainly because the English pages are the most up-to-date), but I can imagine there are some pages that simply need to be in other languages. And given the smaller number of people reviewing edits in those languages, they can be manipulated to say whatever someone wants.
I do agree on the last point as well. The fact that literally anyone can edit Wikipedia takes a small portion of the bias element out of the equation, but it is very difficult not to have some form of bias in any reporting. I mostly use Wikipedia as a knowledge source for scientific topics, which are less likely to be biased.
If this were true, which I have my doubts about, at least Wikipedia tries, and has the specific goal of doing better. AI companies largely don’t give a hot fuck as long as it works well enough to vacuum up investments or profits.
Your doubts are irrelevant. Just spend some time fact checking random articles and you will quickly verify for yourself how many inaccuracies are allowed to remain uncorrected for years.
Small inaccuracies are different from just being completely wrong, though.
Most of my searches have to do with video games, and I have yet to see any of those AI generated answers be accurate. But I mean, when the source of the AI’s info is coming from a Fandom wiki, it was already wading in shit before it ever generated a response.
I’ve tried it a few times with Dwarf Fortress, and it always hallucinated horribly wrong instructions on how to do something.
If it’s being designed to answer questions, then it should simply be an advanced search engine that points to actual researched content.
The way it acts now, it’s trying to be an expert based on “something a friend of a friend said,” and that makes it confidently wrong far too often.
I use ChatGPT for suggestions, like an aid to whatever it is that I’m doing. It either helps me or it doesn’t, but I always have my critical thinking hat on.
Same. It’s an idea generator. I asked what kind of pie I should make, saw one I liked, and then googled a real recipe.
I needed a SQL query for work. It gave me different methods of optimization. I then googled those methods, implemented them, and tested the query.
I think AI has now reached the point where it can deceive people, even though it’s not equal to humanity.
I just use it to write emails: I give the LLM the facts and tell it to write an email based on them and the context. Works pretty well, but it doesn’t really sound like something I wrote; it adds too much emotion.
That sounds like more work than just writing the email to me
Yeah, that has been my experience so far. LLMs take as much work as, or more than, the way I normally do things.
This is what LLMs should be used for. People treat them like search engines and encyclopedias, which they definitely aren’t
I love that this mirrors the experience of experts on social media like Reddit, which was used for training ChatGPT…
Also common in news. There’s an old saying along the lines of “everyone trusts the news until they talk about your job.” Basically, the news is focused on getting info out quickly. Every station is rushing to be the first to break a story. So the people writing the teleprompter usually only have a few minutes (at best) to research anything before it goes live in front of the anchor. This means that you’re only ever going to get the most surface level info, even when the talking heads claim to be doing deep dives on a topic. It also means they’re going to be misleading or blatantly wrong a lot of the time, because they’re basically just parroting the top google result regardless of accuracy.
There’s an old saying along the lines of “everyone trusts the news until they talk about your job.”
This is something of a selection bias. Generally speaking, if you don’t trust a news broadcast then you won’t watch it. So of course you’re going to be predisposed to trust the news sources you do listen to. Until the news source bumps up against some of your prior info/intuition, at which point you start experiencing skepticism.
This means that you’re only ever going to get the most surface level info, even when the talking heads claim to be doing deep dives on a topic.
Investigative journalism has historically been a big part of the industry. You do get a few punchy “If it bleeds, it leads” hit pieces up front, but the Main Story tends to be the result of some more extensive investigation and coverage. I remember my home town of Houston had Marvin Zindler, a legendary beat reporter who would regularly put out interconnected 10-15 minute segments that offered continuous coverage on local events. This was after a stint at a municipal Consumer Fraud Prevention division that turned up numerous health code violations and sales frauds (he was allegedly let go by an incoming sheriff with ties to the local used car lobby, after Zindler exposed one too many odometer scams).
But investigative journalism costs money. And it’s not “business friendly” from a conservative corporate perspective, which can cut into advertising revenues. So it is often the first line of business to be cut when a local print or broadcast outlet gets bought up and turned over for syndication.
That doesn’t detract from a general popular appetite for investigative journalism. But it does set up an adversarial economic relationship between journals that do carry investigative reports and those more focused on juicing revenues.
One of my academic areas of expertise way back in the day (late '80s and early '90s) was the so-called “Mitochondrial Eve” and “Out of Africa” hypotheses. The absolute mangling of this shit by journalists even at the time was migraine-inducing, and it’s gotten much worse in the decades since then. It hasn’t helped that subsequent generations of scholars have mangled the whole deal even worse. The only advice I can offer people is that if the article (scholastic or popular) contains the word “Neanderthal” anywhere, just toss it.
I’m curious. Are you saying neanderthal didn’t exist, or was just homo sapiens? Or did you mean in the context of mitochondrial Eve?
Scientists confirm it: we are living in a simulation!
Are you saying neanderthal didn’t exist, or was just homo sapiens? Or did you mean in the context of mitochondrial Eve?
All of these things, actually. The measured, physiological differences between “homo sapiens” and “neanderthal” (the air quotes here meaning “so-called”) fossils are much smaller than the differences found among contemporary humans, so the premise that “neanderthals” represent(ed) a separate species - in the sense of a reproductively isolated gene pool since gone extinct - is unsupported by fossil evidence. Of course nobody actually makes that claim anymore, since it’s now commonly reported that contemporary humans possess x% of neanderthal DNA (and thus cannot be said to be “extinct”). Of course nobody originally (when Mitochondrial Eve was first mooted) made any claims whatsoever about neanderthals: the term “neanderthal” was imported into the debate over the age and location of the last common mtDNA ancestor years later, after it was noticed that the age estimates of neanderthal remains happened to roughly match the age estimates of the genetic last common ancestor. And this was also after the term “neanderthal” had previously gone into the same general category in Anthropology as “Piltdown Man”.
Most ironically, articles on the subject today now claim a correspondence between the fossil and genetic evidence, despite the fact that the very first articles (out of Allan Wilson’s lab and published in Nature and Science in the mid-1980s) drew their entire impact and notoriety from the fact that the genetic evidence (which supposedly gave 100,000 years ago and then 200,000 years ago as the age of the last common ancestor) completely contradicted the fossil evidence (which shows upright bipedal hominids spreading out of Africa more than a million and a half years ago). To me, the weirdest thing is that academic articles on the subject now almost never cite these two seminal articles at all, and most authors seem genuinely unaware of them.
It’s much older than Reddit: https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect
I was going to post this, too.
The Gell-Mann amnesia effect is a cognitive bias describing the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies.
40% seems low
LLMs are actually pretty good for looking up words by their definition. But that is just about the only topic I can think of where they are correct even close to 80% of the time.
I’ve been using o3-mini mostly for ffmpeg command lines, and a bit of sed. It hasn’t been terrible; it’s a good way to learn stuff I can’t decipher from the man pages.
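For a sense of what I mean, here are the kinds of one-liners involved; these are my own illustrative examples (filenames are placeholders), not output from the model:

    # Re-encode a video to H.264 with AAC audio at a reasonable quality
    ffmpeg -i input.mkv -c:v libx264 -crf 23 -c:a aac output.mp4

    # Extract the audio track without re-encoding it
    ffmpeg -i input.mp4 -vn -c:a copy audio.m4a

    # Replace every "foo" with "bar" in a file, editing it in place (GNU sed)
    sed -i 's/foo/bar/g' notes.txt

Exactly the sort of thing the man pages technically document but don’t make easy to piece together.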
In my experience, plain old googling is still better.
I wonder if AI got better or if Google results got worse.
Bit of the first, lots of the second.
True, in many cases I’m still searching around because the explanations from humans aren’t as simplified as the LLM’s. I’ll often have to be precise in my prompting to get the answers I want, which one can’t be if they don’t know what to ask.
And that’s how you learn, and learning includes knowing how to check if the info you’re getting is correct.
An LLM confidently gives you an easy-to-digest bite, which is plain wrong 40 to 60% of the time, and even if you’re lucky, it will be worse for you.

I’m in the kiddie pool, so I do look things up or ask what stuff does. Even though I looked at the man page for printf (printf(3), I believe), there was nothing about %*s, for example, and searching for these things outside of asking LLMs is sometimes too hard to filter down to the correct answer. I’m at 2 lines of code per hour, so I’m not exactly rushing.
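For anyone else who hit the same wall: %*s takes the field width from an argument instead of hard-coding it in the format string. A minimal bash example of my own (the same syntax comes from C’s printf(3)):

    # Right-align "hi" in a field whose width (10) is passed as a separate argument
    printf '%*s\n' 10 "hi"
    # prints: "        hi" (8 spaces, then "hi")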
Shell scripting is quite annoying, to be sure. Thinking of learning Python instead.
Are you me? I’ve been doing the exact same thing this week. How creepy.
we just had to create a new instance for coder7ZybCtRwMc, we’ll merge it back soon
Totally didn’t misread that as ‘ffmpreg’ nope.
One thing I have found it to be useful for is changing the tone of what I write.
I tend to write very clinically because my job involves a lot of that style of writing. I have started asking ChatGPT to rephrase what I write in a softer tone.
Not for everything, but for example when I’m texting my girlfriend, who is feeling insecure. It has helped me a lot! I always read through it to make sure it did not change any of the meaning or add anything, but so far it has been pretty good at changing the tone.
I also use it to rephrase emails at work to make them sound more professional.
DeepSeek is pretty good, tbh. The answers sometimes leave out information in a way that is misleading, but targeted follow-up questions can clarify.
Like leaving out what happened in Tiananmen Square in 1989?
It censors 1989 China. If you ask it not to say the year, it will work.
You can get an uncensored local version running if you’ve got the hardware, at least.
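For anyone curious, one common route (assuming you have Ollama installed and enough RAM/VRAM; the model tag here is just an example) is one of the distilled DeepSeek-R1 variants:

    # Download and run a distilled DeepSeek-R1 model locally via Ollama
    ollama run deepseek-r1:7b

The small distilled variants are noticeably weaker than the full model, though, so temper expectations.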
You must be more respectful of all cultures and opinions.
The number of people who don’t realize this is satire reminds me of old Reddit.
Not everybody has heard every joke, buddy.
Is it though? I really can’t tell.
Poe’s law has been working overtime recently.
Edit: saw a comment further down that it is a default DeepSeek response for censored content, so yeah, a joke. People who don’t have that context aren’t going to get it.
It got me, for whatever that’s worth.
Ah dun wanna 😠
Are we calling the Communist Party of China, with their history of genocide and general evil, some kind of culture now?
Can’t believe how hostile people are against Nazis; we should have respected their cultural use of gas chambers.
Communism was never the problem, authoritarianism is the problem
The CPC is and has always been the definition of authoritarianism, and now it’s hypercapitalist authoritarianism.
In my opinion it should have been the politburo that was pureed under tank tracks and hosed down into the sewers instead of those students.
It really is so convenient: there are so many CPC members, but they all happen to be near a conveniently placed wall, which is more than enough.
The western narrative about Tiananmen Square is basically orthogonal to the truth?
Like, it’s not just filled with fabricated events like tanks pureeing students; it completely misses the context and the response, to tell a weird “China bad and does evil stuff cuz they hate freedom” story.
The other weird part is that the big set pieces of the western narrative, like tank man getting run over by tanks headed to the square, are so trivial to debunk: just look at the uncropped video. Yet I have yet to see one Lemmiter actually look at the evidence and develop a more nuanced understanding. I’ve even had them show me compilations of photos from the events and never stop to think, “Huh, these pictures of gorily lynched cops, protesters shot in streets outside the square, and burned vehicles aren’t consistent with what I’ve been told; maybe I’ve been misled?”
Classic lemmy.ml.
I just read the entire article you linked, and it seems pretty in line with what I was taught in school about what happened. And it definitely doesn’t make me sympathetic to the PLA or the government.
Is this a reference I’m not getting? Otherwise, I feel like censorship of a massacre is not morally acceptable regardless of culture. I’ll leave this here so this doesn’t get mistaken for nationalism:
https://en.m.wikipedia.org/wiki/List_of_massacres_in_the_United_States
It’s by no means a comprehensive list, but more of a primer. We do not forget these kinds of things in the hope that we may prevent future occurrences.
https://en.m.wikipedia.org/wiki/List_of_massacres_in_the_United_States
Huh, I used to make a joke about how there’s never been a “Bloody Monday” in history. I learn something new every day …
It’s a fucking joke, FFS. It’s the standard response from DeepSeek.
Oh, gotcha. Yeah, I’m not on board with that. Thanks for clarifying. I thought you were being sincere for a moment. This is good satire. Carry on, please.
Thank you, that provides context that was missing for the joke to land.
How dare they ask!
Exactly my thoughts.
If you want an AI to be an expert, you should only feed it data from experts. But these are trained on so much more. So much garbage.
This is not correct. Even if trained purely on peer-reviewed and published math papers, it will still make math errors.
Which one?
Which of what category?
I’m confused. Are you saying all AI models are bad at math, or one in particular? You’re speaking broadly, so I assume the former.