I usually tell it “using only information found on applicationwebsite.com <question>”. That works pretty well, at least to get me in the ballpark of the answer I’m looking for.
Reminder that all these chat-formatted LLMs are just text-completion engines trained on text formatted like a chat. You’re not having a conversation with it; it’s “completing” the chat history you’re providing it, by randomly(!) choosing the next text tokens that seem like they best fit the text provided.
If you don’t directly provide, in the chat history and/or the text completion prompt, the information you’re trying to retrieve, you’re essentially fishing for text in a sea of random text tokens that seems like it fits the question.
It will always complete the text: even if the tokens it chooses only minimally fit the context, it picks the best text it can, but it will always complete the text.
This is how they work, and anything else is usually the company putting in a bunch of guide bumpers to reformat prompts into coaxing the models to respond in a “smarter” way (see GPT-4o and “chain-of-thought” reasoning)
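To make the “randomly choosing the next token” point above concrete, here’s a toy sketch in plain Python. This is not any real model’s code; the scores and the function name are made up for illustration. The core mechanic is just a softmax over scores followed by a weighted random draw:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Toy sketch of next-token sampling: softmax over the scores,
    then a weighted random draw. Lower temperature sharpens the
    distribution toward the highest-scoring token."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random draw over the token indices
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Three hypothetical candidate tokens with scores 1.0, 2.0, 3.0:
# the third is most likely, but any of them can be picked.
print(sample_next_token([1.0, 2.0, 3.0]))
```

The draw really is random, which is the commenter’s point: the model always returns *some* token, whether or not any candidate fits well.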
They were trained on reddit. How much would you trust a chatbot whose brain consists of the entirety of reddit put in a blender?
I am amazed it works as well as it does. Gemini only occasionally tells people to kill themselves.
chatgpt has been really good for teaching me code. As long as I write the code myself and just ask for clarity or best practices, I haven’t had any bad hallucinations.
For example, I wanted to replace a character in a string with another one, but it gave some error about data types that was way out of my league. Anyway, apparently I needed to run list(string) first, even though string[5] will return the character.
However, that’s in Python, which I assume is well understood due to the ton of Stack Overflow questions and alternative docs. I did ask it to do something in Google Apps Script once, and it had no idea what was going on and just hoped it worked. Fair enough, I also had no idea what was going on.
The reason why
string[5] = '5'
doesn’t work is that strings in Python are immutable (cannot be changed). By doing list(string)
you are actually creating a new list with the contents of the string and then modifying the list. I wonder if ChatGPT explains this or just tells you to do this… as this works but can be quite inefficient.
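A minimal demonstration of the point above (the string and index here are just example values):

```python
# Strings in Python are immutable, so item assignment fails:
s = "hello"
try:
    s[1] = "a"
except TypeError:
    pass  # TypeError: 'str' object does not support item assignment

# The workaround from the thread: convert to a list (a new object),
# mutate the list, then join it back into a new string.
chars = list(s)      # ['h', 'e', 'l', 'l', 'o']
chars[1] = "a"
s2 = "".join(chars)
print(s2)  # hallo
```

Note that the original string is never changed; list() and "".join() each build a new object, which is where the inefficiency mentioned above comes from.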
To me this highlights the danger with using AI… sure you can complete a task, but you may not understand why or learn important concepts.
Yeah, it’s a gift and a curse for exploring a new domain. It can help you move faster, but you’ll definitely lose some understanding you’d get from struggling with those topics longer.
who the fuck is scraeming ‘RTFM’ at my house. show yourself, coward. i will never r any fm
What are you talking about? We mention this on a daily basis. That’s the #1 complaint about ChatGPT when used for factual purposes
when used for factual purposes
I think the point of the post is that anyone who uses it for this is a fucking moron.
Literally the only use I’ve found for it that’s better than any other alternative is describing a thing to it that you can’t remember the name of. It’s usually right, and when it’s wrong you were probably never gonna find the thing on your own anyway.
But I don’t go to it first, only when I can’t figure out how to find the name any other way.
I only use it for complex searches with results I can usually parse myself, like ‘‘list 30 typical household items without descriptions or explanations, with no repeating items’’ kind of thing.
it’s because everyone stopped using it, right?
at least months ago?
great value for all that energy it expends, indeed!
Gippity is pretty good at getting me 90% of the way there.
It usually sets me up with at least all the terms etc. I now know to google, whereas before I wouldn’t even know what I was looking for in the first place.
Also, not gonna lie, search engines are often even worse than gippity for accuracy.
And I’ve had to fight with so many cases of garbage documentation lately that gippity genuinely does the job better, because it has all the random comments from issues and solutions in its data.
Usually once I have my sort of key terms I need to dig into, I can use youtube/google and get more specific information, and that’s the last 10%
Because in a lot of applications you can bypass hallucinations.
- getting sources for something
- as a jump off point for a topic
- to get a second opinion
- to help argue for or against your position on a topic
- get information in a specific format
In all these applications you can bypass hallucinations because either its task is non-factual, or it’s verifiable while prompting, or you’ll be able to verify it in whatever task comes next.
Just because it makes shit up sometimes doesn’t mean it’s useless. Like an idiot friend, you can still ask it for opinions or something and it will definitely start you off somewhere helpful.
so, basically, even a broken clock is right twice a day?
No, maybe more like, even a functional clock is wrong every 0.8 days.
https://superuser.com/questions/759730/how-much-clock-drift-is-considered-normal-for-a-non-networked-windows-7-pc
The frequency is probably way higher for most LLMs though lol
Yes, but for some tasks mistakes don’t really matter, like “come up with names for my project that does X”. No wrong answers here really, so an LLM is useful.
How is that faster than just picking a random name? No one picks software based on its name.
And yet virtually all of software has names that took some thought, creativity, and/or have some interesting history. Like the domain name of your Lemmy instance. Or Lemmy.
And people working on something generally want to be proud of their project and not name it the first thing that comes to mind, but take some time to decide on a name.
Wouldn’t they also not want to take a random name off an AI-generated list? How is that something to be proud of? The thought, creativity, and history behind it is just that you put a query into chatgpt and picked one out of 500 names?
Maybe it’s just a difference of perspective, but that’s not only not a special origin story for a name, it’s taking from others in a way you won’t be able to properly credit them, which is essential to me.
I would rather avoid the trouble and spend the time with a coworker or friend throwing ideas back and forth and building an identity intentionally.
I suppose AI could be nice if I was alone nearly all the time.
great value for all that energy it expends, indeed!
The energy expenditure for GPT models is basically a per-token calculation. Having it generate a list of 3-4 token responses would barely be a blip compared to having it read and respond entire articles.
There might even be a case for certain tasks where a GPT model is more energy efficient than making multiple Google searches for the same thing, especially considering all the backend activity Google tacks on for tracking users and serving ads. Complaining about someone using a GPT model for something like generating a list of words is a little like a climate activist yelling at someone for taking their car to the grocery store while standing across the street from a coal-burning power plant.
… someone using a GPT model for something like generating a list of words is a little like a climate activist yelling at someone for taking their car to the grocery store while standing across the street from a coal-burning power plant.
no, it’s like a billion people taking their respective cars to the grocery store multiple times a day each while standing across the street from one coal-burning power plant.
each person can say they are the only one and their individual contribution is negligible. but get all those drips together and you actually have a deluge of unnecessary wastage.
Except each of those drips is subject to the same system that privileges individualized transport
This is still a perfect example, because while you’re nit-picking the personal habits of individuals who are a fraction of a fraction of the total contributors to GPT model usage, huge multi-billion dollar entities are implementing it into things that have no business using it and are responsible for 90% of LLM queries.
It’s similar to castigating people for owning ICE vehicles, who are not only uniquely pressured into their use but also account for less than 10% of GHG emissions in the first place.
Stop wasting your time attacking individuals using the tech for help in their daily tasks, they aren’t the problem.
Also just searching the web in general.
Google is useless for searching the web today.
Not if you want that thing that everyone is on about. Don’t you want to be in with the crowd?! /s
All LLMs are text completion engines, no matter what fancy bells they tack on.
If your task is some kind of text completion or repetition of text provided in the prompt context LLMs perform wonderfully.
For everything else, you’re wading through territory you could probably cover more easily using other methods.
I love the people who are like “I tried to replace Wolfram Alpha with ChatGPT why is none of the math right?” And blame ChatGPT when the problem is all they really needed was a fucking calculator
The fucking problem is they stole my damn calculator and now they’re trying to sell me an LLM as a replacement.
LLMs are an interesting if mostly useless toy (an excessively costly one, though; Eliza achieved mostly the same results at a fraction of the cost).
The massive scam bubble that’s been built around them, however, and its absurd contribution to enshittification and global warming, is downright monstrous, and makes anyone defending commercial LLMs worthy of the utmost contempt, just like those who defended cryptocurrencies before LLMs became the latest fad.
Big businesses know; they even ask people like me to add extra measures in place. I like to call it the Concorde effect. You’re trying to make a plane that can shove air out of the way faster than it wants to move, and that takes an enormous amount of energy that isn’t worth the time saved, or the cost. Even if you have higher airspeed when it works, if your plane doesn’t make it to the destination, it isn’t “faster”.
We hear a lot about the downsides of AI, except that doesn’t fit the big corpo narrative, and people don’t really care enough. If you’re just a consumer who has no idea how this really works, the investments companies make into shoving it everywhere make it seem like it’s not a problem, and it looks like there’s only AI hype and no party poopers.
It’s usually good for ecosystems with good, plentiful docs. Whenever docs are scarce, the results get shitty. To me it’s mostly a more targeted search engine without the crap (for now)
I’m convinced people who can’t tell when a chatbot is hallucinating are also bad at telling whether anything else they read is true. What are you reading online that you’re not fact-checking anyway? If you’re writing a report, you don’t pull the first fact you find and call it good; you find a couple of citations for it. If you’re writing code, you don’t just write the program and assume it’s correct; you test it. It’s just a tool, and I think most people are coping because they’re bad at using it
Yeah. GPT models are in a good place for coding tbh, I use it every day to support my usual practice, it definitely speeds things up. It’s particularly good for things like identifying niche python packages & providing example use cases so I don’t have to learn shit loads of syntax that I’ll never use again.
In other words, it’s the new version of copying code from Stack Overflow without going to the trouble of properly understanding what it does.
Pft, you must have read that wrong, it’s clearly turning them into master programmers one query at a time.
I know how to write a tree traversal, but I don’t need to because there’s a python module that does it. This was already the case before LLMs. Now, I hardly ever need to do a tree traversal, honestly, and I don’t particularly want to go to the trouble of learning how this particular python module needs me to format the input or whatever for the one time this year I’ve needed to do one. I’d rather just have something made for me so I can move on to my primary focus, which is not tree traversals. It’s not about avoiding understanding, it’s about avoiding unnecessary extra work. And I’m not talking about saving the years of work it takes to learn how to code, I’m talking about the 30 minutes of work it would take for me to learn how to use a module I might never use again. If I do, or if there’s a problem I’ll probably do it properly the second time, but why do it now if there’s a tool that can do it for me with minimum fuss?
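As an illustration of the point above about leaning on a module instead of hand-rolling a traversal: the comment doesn’t name the module in question, so this sketch uses the standard-library ast module as a stand-in, where ast.walk does the tree traversal for you.

```python
import ast

# Instead of writing a recursive visitor by hand, ast.walk yields
# every node in the parsed syntax tree, so counting nested function
# definitions needs no custom traversal code at all.
source = "def f():\n    def g():\n        pass\n"
tree = ast.parse(source)
n_funcs = sum(isinstance(node, ast.FunctionDef) for node in ast.walk(tree))
print(n_funcs)  # 2
```

The formatting question the commenter raises is real, though: you still have to learn what shape of input the module wants (here, source text for ast.parse), which is exactly the 30 minutes of work being weighed.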
The usefulness of Stack Overflow or a GPT model completely depends on who is using it and how.
It also depends on who or what is answering the question, and I can’t tell you how many times someone new to SO has been scolded or castigated for needing/wanting help understanding something another user thinks is simple. For all of the faults of GPT models, at least they aren’t outright abusive to novices trying to learn something new for themselves.
I fully expect an LLM trained on Stack Overflow is quite capable of being just as much of an asshole as a Stack Overflow user.
Jokes aside, while I can see that “not going to the trouble of understanding the code you got” is mostly agnostic as to whether the source is Stack Overflow or an LLM (Stack Overflow naturally has more context around a solution, including other possible solutions, while an LLM can be interrogated further to try and get more details), I think only time will tell whether using an LLM ultimately makes for less well-informed programmers than being a heavy user of Stack Overflow.
What I do think is more certain is that figuring out a solution yourself is a much better way to learn that stuff than getting it from an LLM or Stack Overflow. Though I can understand that often the time isn’t available for that more time-consuming method; plus, that method is an investment that only pays off if you face similar problems in the future, so sometimes it’s simply not worth it.
The broader point I made still stands: there is a class of programmers who are copy & paste coders (no idea if the poster I originally replied to is one or not) for whom an LLM is just a faster to query Stack Overflow.
There will always be a class of programmers/people that choose not to interrogate or seek to understand information that is conveyed to them - that doesn’t negate the value provided by tools like Stack Overflow or chatGPT, and I think OP was expressing that value.
Remember when you had to have extremely niche knowledge of “banks” in a microcontroller to be able to use PWM on 2 pins with different frequencies?
Yes, I remember what a pile of shit it was to try and find out why xyz is not working while x, y, and z work on their own. GPT usually gets me there after a few tries. Not to mention how much faster most of the code gets there, from A to Z, with only a little tweaking to get it where I want (since I don’t want to be hyper-specific, and/or it gets those details wrong anyway, as would a human without massive context).
You have to understand it well enough to know what stuff you can rely on. On the other hand nowadays there are often sources there, so it’s easy to check.
They’re trying not to lose money on the developments
Probably because they’re not checking them