Hello, recent Reddit convert here and I’m loving it. You even inspired me to figure out how to fully dump Windows and install LineageOS.
One thing I can’t understand is the level of acrimony toward LLMs. I see things like “stochastic parrot”, “glorified autocomplete”, etc. If you need an example, the comments section for the post on Apple saying LLMs don’t reason is a doozy of angry people: https://infosec.pub/post/29574988
While I didn’t expect a community of vibecoders, I am genuinely curious about why LLMs provoke such an emotional response from this crowd. It’s a tool that has gone from interesting (GPT3) to terrifying (Veo 3) in a few years, and I am personally concerned about many of the safety/control issues in the future.
So I ask: what is the real reason this is such an emotional topic for you in particular? My personal guess is that the claims about replacing software engineers are the biggest issue, but help me understand.
I doubt most LLM haters have spent that much time thinking about it deeply. It’s part of the package deal you must subscribe to if you want to belong to the group. If you spend time in spaces where the haters are loud and everyone else stays quiet out of fear of backlash, it’s only natural to start feeling like everyone must think this way - so it must be true, and therefore I think this way too.
This take is so broke your soul probably has socks with holes in them
AI hasn’t become so widespread because it’s actually that interesting. It’s all manufactured, forcibly shoved into our faces. And given the negative things AI is capable of, I have an uneasy feeling about all of this.
I think a lot of it is anxiety: being replaced by AI, the continued enshittification of the services I loved, and the ever-present notion that AI is “the answer.” After a while, it gets old, and that anxiety mixes in with annoyance – a perfect cocktail of animosity.
And AI stole em dashes from me, but that’s a me-problem.
Yeah, fuck this thing with em dashes… I used them constantly, but now, it’s a sign something was written by an LLM!??!?
Bullshit.
Fraking toaster…
Not to be snarky, but why didn’t you ask an LLM this question?
I’m not opposed to AI research in general and LLMs and whatever in principle. This stuff has plenty of legitimate use-cases.
My criticism comes in three parts:

1. Society is not equipped to deal with this stuff. Generative AI was really nice when everyone could immediately tell what was generated and what was not. But once it got better, it turned out people’s critical thinking skills go right out the window. We as a society started using generative AI for utter bullshit. It’s making normal life weirder in ways we could hardly imagine. It would do us all a great deal of good if we took a short break from this and asked ourselves what the hell we are even doing here, and whether some new laws might do any good.

2. A lot of AI stuff purports to be openly accessible research software released as open source, and it gets published in scientific journals. But it often comes with weird restrictions that fly in the face of the open-source definition (like how some AI models are “open source” but have a cap on users, which makes them non-open by definition). Most importantly, this research is not easily replicable. It’s done by companies with ridiculous amounts of hardware, shifting petabytes of data they refuse to reveal because it’s a trade secret. If it’s not replicable, its scientific value is a little bit in question.

3. The AI business is rotten to the core. AI businesses like to pretend they’re altruistic innovators taking us to the Future. They’re a bunch of hypemen, slapping barely functioning components together to come up with Solutions to problems that aren’t even problems. Usually to replace human workers, in a way that everyone hates. Nothing must stand in their way - not copyright, not rules of user conduct, not the social or environmental impact they’re creating. If you try to apply even a little bit of reasonable regulation - “hey, maybe you should stop downloading our entire site every 5 minutes, we only update it, like, monthly, and, by the way, we never gave you permission to use this for AI training” - they immediately whinge about how you’re impeding the great march of human progress or some shit.

And I’m not worried about AI replacing software engineers. That is ultimately an ancient problem: software engineers come up with something that helps them, biz bros say “this is so easy to use that I can just make my programs myself, looks like I don’t need you any more, you’re fired, bye”, and a year later the biz bros come back and say “this software I built is a pile of hellish garbage, please come back and fix it, I’ll pay triple”. This is just Visual Basic for Applications all over again.
It’s a hugely disruptive technology that is harmful to the environment, being taken up and given center stage by a host of folk who don’t understand it.
Like the industrial revolution, it has the chance to change the world in a massive way, but in doing so, it’s going to fuck over a lot of people and notch up greenhouse gas output. In a decade or two, we probably won’t remember what life was like without them, but lots of people are going to be out of jobs, have their income streams cut off, and have no alternatives available to them whilst that happens.
And whilst all of that is going on, we’re being told that it’s the best, most amazing thing that we all need, and it’s being stuck into everything, including things that don’t benefit from the presence of an LLM - and sometimes, where the presence of an LLM can be actively harmful.
I’m mixed on LLMs and stuff, but I agree with this.
I am not an AI hater, it helps me automate many of the more mundane tasks of my job or the things I don’t ever have time for.
I also feel that change management is a big factor with any paradigm shifting technology, as is with LLMs. I recall when some people said that both the PC and the internet were going to be just a fad.
Nonetheless, all the reasons you’ve mentioned are the same ones that give me concern about AI.
To me, it’s not the tech itself, it’s the fact that it’s being pushed as something it most definitely isn’t. They’re grifting hard to stuff an incomplete feature down everyone’s throats, while using it to datamine the everloving spit out of us.
Truth be told, I’m genuinely excited about the concept of AGI and the potential of what we’re seeing now. I’m also one who believes AGI will ultimately be a progeny and should be treated as such, as a being in itself; and while we aren’t capable of creating that yet, we should still keep it in mind and mould our R&D around that principle. So, in addition to being disgusted by the current-day grift, I’m also deeply disappointed to see these people behaving this way - like madmen and cultists.
The people who own/drive the development of AI/LLM/what-have-you (the main ones, at least) are the kind of people who would cause the AI apocalypse. That’s my problem.
Agreed - the last people in the world who should be making AGI are the ones making it. Rabid techbro nazi capitalist fucktards who feel slighted they missed out on (absolute, not wage) slaves and want to make some. Do you want Terminators? Because that’s how you get Terminators. Something with so much positive potential, which is also an existential threat, needs to be treated with far more respect.
Said it better than I did, this is exactly it!
Right now, it’s like watching everyone cheer on as the obvious Villain is developing nuclear weapons.
I’ll just say I won’t grant any machine even the most basic human rights until every last person on the planet has access to enough clean water, food, shelter, adequate education, state-of-the-art health care, peace, democracy, and enough freedom not to limit the freedom of others. That’s the lowest bar, and if I can think of other essential things every person on the planet needs, I’ll add them.
I don’t want to live in a world where we treat machines like celebrities while we don’t look after our own. That would be an express ticket to disaster, like we’ve seen in many science fiction novels before.
Research toward AGI for AGI’s sake should be strictly prohibited until the tech bros figure out how to feed the planet, so to speak. Let’s give them an incentive to use their disruptive powers for something good before they play god.
While I disagree with your hardline stance on prioritisation of rights (I believe any conscious/sentient being should be treated as such at all times, which implies full rights and freedoms), I do agree that we should learn to take care of ourselves before we take on the incomprehensible responsibility of developing AGI, yes.
For me personally, the problem is not so much LLMs and/or ML solutions (both of which I actively use), but the fact this industry is largely led by American tech oligarchs. Not only are they profoundly corrupt and almost comically dishonest, but they are also true degenerates.
I’m not part of the hate crowd but I do believe I understand at least some of it.
A fairly big issue I see with it is that people just don’t understand what it is. Too many people see it as some magical being that knows everything…
I’ve played with LLMs a lot, hosting them locally, etc., and I can’t say I find them terribly useful, but I wouldn’t hate them for what they are. There are more than enough real issues, of course, both societal and environmental.
One thing I do hate is using LLMs to generate tons of online content, though, be it comments or entire websites. That’s just not what I’m on the internet for.
My biggest issue is with how AI is being marketed, particularly by Apple. Every single Apple Intelligence commercial is about a mediocre person who is not up to the task in front of them, but asks their iPhone for help and ends up skating by. Their families are happy, their co-workers are impressed, and they learn nothing about how to handle the task on their own the next time except that their phone bailed their lame ass out.
It seems to be a reflection of our current political climate, though, where expertise is ignored, competence is scorned, and everyone is out for themselves.
Ethics and morality do it for me. It is insane to steal the works of millions and re-sell them in a black box.
The quality is lacking. It literally hallucinates garbage information and lies, which scammers now weaponize (see Slopsquatting).
Extreme energy costs and environmental damage. We could supply millions of poor people with electricity, yet we decided a sloppy AI that can’t even count the letters in a word was a better use.
The AI developers themselves don’t fully understand how it works or why it responds with certain things, which proves there can’t be any guarantees of quality or safety in AI responses yet.
Laws, judicial systems, and regulations are way behind; we don’t have laws that can properly handle the usage or integration of AI yet.
Do note: LLMs as a technology are fascinating. AI as a tool could become fantastic. But now is not the time.
> It’s a tool that has gone from interesting (GPT3) to terrifying (Veo 3)
It’s ironic that you describe your impression of LLMs in emotional terms.
You’ll find a different prevailing mood in different communities here on Lemmy. The people in the technology community (the example you gave) are fed up with talking about AI all day, each day. They’d like to talk about other technology at times and that skews the mood. At least that’s what I’ve heard some time ago… Go to a different community and discuss AI there and you’ll find it’s a different sentiment and audience there. (And in my opinion it’s the right thing to do anyway. Why discuss everything in this community, and not in the ones dedicated to the topic?)
I work in software as a software engineer, and being replaced by an LLM any time soon is the least of my concerns.
- I don’t hate LLMs. They are just a tool, and it does not make sense to hate an LLM, the same way it does not make sense to hate a rock.
- I hate the marketing and the hype, for several reasons:
  - You use the term AI/LLM in the post’s title: there is nothing intelligent about LLMs if you understand how they work.
  - The craziness about LLMs in the media, the press, and business brainwashes non-technical people into thinking that there is intelligence involved and that LLMs will get better and better and solve the world’s problems (possible, but when you make an informed guess, the chances are quite low within the next decade).
  - All the LLM shit happening: automatic translation on websites without even asking me whether stuff should be translated, job losses for translators, companies hoping to get rid of experienced technical people because of LLMs (and we will have to pick up the slack after the hype).
  - The lack of education in the population (and even among tech people) about how LLMs work, their limits, and their usages…
LLMs are at the same time impressive (think of the jump to GPT-4), a showcase of the ugliest forms of capitalism (CEOs learning that every time they say “AI” the stock price goes up 5%), helpful (generating short pieces of code, translating other languages), annoying (generated content), and even dangerous (companies with the money can now literally and automatically flood the internet/news/media with more bullshit, faster).
Everything you said is great except for the rock metaphor. It’s more akin to a gun in that it’s a tool made by man that has the capacity to do incredible damage and already has on a social level.
Guns ain’t just lying around on the ground, nor are LLMs. Rocks, however, are - like, it’s practically their job.
LLMs and generative AI will do to us what social media did, but a thousand times worse - all that, plus the nightmarish capacity for pattern matching at an industrial scale. Inequality, repression, oppression, disinformation, propaganda, and corruption will skyrocket because of it. It’s genuinely terrifying.
I think a lot of ground has been covered. It’s a useful technology that has been hyped to be way more than it is, and the really shitty part is a lot of companies are trying to throw away human workers for AI because they are that fucking stupid or that fucking greedy (or both).
They will fail, for the most part, because AI is a tool your employees use, not a thing to foist onto your customers. Also, where do the next generation of senior developers come from if we replace junior developers with AI? Substitute in teachers, artists, copy editors, and others.
Add to that people who are too fucking stupid to understand AI deciding it needs to be involved in intelligence, warfare, and police work.
I frequently disagree with the sky-is-falling crowd. AI use by individuals, particularly local AI (though it’s not as capable), is democratizing. I moved from Windows to Linux two years ago, and I couldn’t have done that if I hadn’t had AI to help me troubleshoot a bunch of issues. I use it all the time at work to leverage my decades of experience in areas where I’d otherwise have to relearn a bunch of things from scratch. I wrote a Python program in a couple of hours, having never written a line before, because I knew what questions to ask.
I’m very excited for a future with LLMs helping us out. Everyone is fixated on AI generation (image, voice, text), but it’s not great at that. What it excels at is very quickly giving feedback. You have to be smart enough to know when it’s full of shit, which is why vibe coding is a dead end. I mean, it’s cool that very simple things can be churned out by very inexperienced developers, but that has a ceiling. An experienced developer can also leverage it to do more, faster, at a higher level, but there is a ceiling there as well. Human input and knowledge never stop being essential.
So welcome to Lemmy and discussion about AI. You have to be prepared for knee-jerk negativity, and the ubiquitous correction when you anthropomorphize AI as a shortcut to make your words easier to read. There isn’t usually too much overtly effusive praise here as that gets shut down really quickly, but there is good discussion to be had among enthusiasts.
I find most of the things folks hate about AI aren’t actually the things I do with it, so it’s easy to not take the comments personally. I agree that ChatGPT written text is slop and I don’t like it as writing. I agree AI art is soulless. I agree distributing AI generated nudes of someone is unethical (I could give a shit what anyone jerks off to in private). I agree that in certain niches, AI is taking jobs, even if I think humans ultimately do the jobs better. I do disagree that AI is inherently theft and I just don’t engage with comments to that effect. It’s unsettled law at this point and I find it highly transformative, but that’s not a question anyone can answer in a legal sense, it’s all just strongly worded opinion.
So discussions regarding AI are fraught, but there is plenty of good discourse.
Enjoy Lemmy!