I don’t hate AI, and I think broadly hating AI is pretty dumb. It’s a tool that can be used for beneficial things when used responsibly. It can also be used stupidly and for bad things. It’s the person using it who is the decider.
I’ve definitely been pretty anti-AI, finding it kinda stupid and generally useless…
…but we hired an AI researcher at my work (which I laughed at). But I cannot deny anymore that with the proper setups, configs, rules, blend of onsite / cloud resources etc. - workplace AI can be pretty fucking game changing. To the point where I went from campaigning against the changes because I felt they were a waste of time to where I am worried for my future job and am using agents 5-10 times a day to handle small bugfixes for me.
I don’t know what will happen when the bubble pops though.
The bubble is irrelevant, that’s just capitalism being inefficient. When the dot com bubble popped it’s not like the internet died. We got things like Netflix and Amazon only after the bubble popped.
You say that like things improved.
Things have improved, people’s standards are just significantly higher. Remember when we didn’t even have clean drinking water? Nope, wasn’t alive back then.
The problem is that there’s basically no way to use it responsibly.
I think there is. Letting the actual professionals guide, instead of the money people is a big step.
Something like McDonnell, and later Boeing, basing all decisions on short-term economic gains instead of engineering criteria.
Bean counters shouldn’t make decisions.
The problem is, who do you define as professionals? I’m a professional software engineer. I argue that there is no responsible way to use AI at the moment- it uses too many resources for a far too worthless result. Everything useful that an AI can do is currently better (and cheaper) to do another way, save perhaps live transcription.
Do you define Sam Altman as a professional? Because his guidance wants the entire world to give up 10% of the worldwide GDP to his company (yes, seriously!) He’s clearly touched in the head, or on drugs. Should we follow his advice?
It helped me rewrite a program with different criteria, and it was much faster. I also read everything it wrote and told it what corrections to make. It is good for speed. It also taught me a coding trick or two. It is definitely not reliable, but can help a bit.
Speak for yourself; I love LLMs.
I would not say love, but it’s definitely a great tool to master. Used to be pretty lame, but things seem to be changing fast.
I don’t really understand Lemmy’s AI hate, so feel free to change my mind
There’s a few things.
First off, there is utility, and that utility varies based on your needs. In software development, for example, the utility ranges from doing most of the work to being nearly useless, and when it's useless to you it feels like the LLM users are gaslighting you. People who spend their careers making utterly boilerplate applications feel like it's magical. People who generate tons of what are supposed to be 'design documents' that only get eyed by non-technical executives who don't understand them but like to see volumes of prose: LLMs can generate that no problem (no one who would actually need them ever reads them anyway). Then people who work on more niche scenarios get annoyed because the models barely do anything useful, and attempting to use them gets you inundated with low-quality code suggestions.
But I'd say mostly it's about the ratio of investment/hype to the reality. The investment is scary because one day the bubble will pop (that doesn't mean LLMs are devoid of value, just that the business context is irrational right now, just like the internet was obviously important yet we still had a bubble over it around the turn of the century). The hype is just so obnoxious; they won't shut up even when they have nothing really new to say. We get it, we've heard it, and hearing it over and over again is just exhausting.
On creative fronts, it’s kind of annoying when companies use it in a way that is noticeable. I think they could get away with some backdrops and stuff, but ‘foreground’ content is annoying due to being a dull paste of generic content with odd looks. For text this manifests as obnoxiously long prose that could be more to the point.
On video, people are generating content and claiming ‘real’, in ways to drive engagement. That short viral clip of animals doing a funny thing? Nope, generated. We can’t trust video content, whether fluff or serious to be authentic.
We hate it because it’s not what the marketing says it is. It’s a product that the rich are selling to remove the masses from the labor force, only to benefit the rich. It literally has no other productive use for society aside from this one thing.
And it falsely makes people think it can replace qualified workers.
And it falsely makes people think it can make art.
I’ve had a ton of fun writing lyrics for AI songs to be honest. Like it actually makes me laugh more than I’ve ever laughed in years hearing a dubstep song about dropping dookies
I would even hate it if it was exactly how it is marketed. Because what it is often marketed for is really stupid and often vague. The fact that it doesn't even remotely work like they say just makes me take it a lot less seriously.
@just_another_person @corbin and it will inevitably turn into enshittified disaster when they start selling everyone’s data (which is inevitable).
- they’ve already stolen everything
- other companies already focus on illegally using data for “AI” means, and they’re better at it
- Everyone already figured out that LLMs aren't what the "Assistant" features were promising 15 years ago
- None of these companies have any sort of profit model. There is no “AI” race to win, unless it’s about who gets to fleece the public for their money faster.
- Tell me who exactly benefits when AGI becomes attainable (and for the laymen: it's not a real thing achievable with this tech at all). Who the fuck do you expect to benefit from this in the long run?
The "companion" agents that children in the 2020s and onward are growing up with, and trust more than their parents, will start advertising pharmaceuticals to them when they're grown up :)
You hate it because the media, which is owned by the rich, told you to hate it so that they can hoard it themselves while you champion laws to prevent the lower class from using and embracing it. AI haters are class traitors.
Lol, right, that’s why. All the people in here are wrong, but you’ve got the right take 🤣
Yes. A lot of people on Lemmy are collectively wrong about things. That isn’t a radical thing to say
Found the fake tankie.
Fuck tankies
Loool, yeah, we all know the ruling class likes to hand out tools to fight them. If this really hurt them, it would be forbidden.
Why create laws when you can just sort people out with the media they consume? It happens with the right wing every day. I get it, you're above it all, immune to it with your superior intelligence, not like those idiots. The media shared in a community you participate in could never create a panic or hype about something taking your job, ruining society, sexually assaulting our most vulnerable, or leading to some 1984 dystopian society.
You can’t be manipulated like those idiots
Honestly, it's all the same shit. Different sides of the political spectrum.
The rich gave you this? Someone trying to get rich did. The rich are now trying to prevent us from embracing it. You can do a lot of analysis and content creation with it. It's a force multiplier, and something they don't want people using freely.
dbzero is that way ----->
That’s the dumbest take on AI yet.
No it isn’t. Just like if I told Republicans that their bullshit with immigrants is generated from yellow journalism, they’d respond the same way you just did. You can’t see it because you’re the target.
You missed the high energy consumption and low reliability. They're just as valid issues as stealing jobs.
It literally has no other productive use for society aside from this one thing.
I'd refrain from saying that AI replacing labor is productive to society. Speeding up education, however, might be.
I don’t hate AI. AI didn’t do anything. The people who use it wrong are the ones I hate. You don’t sue the knife that stabbed you in court, it was the human behind it that was the problem.
While true to a degree, I think the fact is that AI is just much more complex than a knife, and clearly has perverse incentives, which cause people to use it “wrong” more often than not.
Sure, you can use a knife to cook just as you can use a knife to kill. But just as society encourages cooking and legally and morally discourages murder, in the inverse, society encourages any shortcut that can get you to an end goal for the sake of profit, while not caring about personal growth or the overall state of the world if everyone takes that same shortcut. And the AI technology is designed with the intent to be a shortcut rather than just a tool.
The reason people use AI in so many damaging ways is not just because it is possible for the tool to be used that way, and some people don’t care about others, it’s that the tool is made with the intention of offloading your cognitive burden, doing things for you, and creating what can be used as a final product.
It’s like if generative AI models for image generation could only fill in colors on line art, nothing more. The scope of the harm they could cause is very limited, because you’d always require line art of the final product, which would require human labor, and thus prevent a lot of slop content from people not even willing to do that, and it would be tailored as an assistance tool for artists, rather than an entire creation tool for anyone.
Contrast that with GenAI models that can generate entire images, or even videos, and they come with the explicit premise and design of creating the final content, with all line art, colors, shading, etc, with just a prompt. This directly encourages slop content, because to have it only do something like coloring in lines will require a much more complex setup to prevent it from simply creating the end product all at once on its own.
We can even see how the cultural shifts around AI happened in line with how UX changed for AI tools. The original design for OpenAI’s models was on “OpenAI Playground,” where you’d have this large box with a bunch of sliders you could tweak, and the model would just continue the previous sentence you typed if you didn’t word it like a conversation. It was designed to look like a tool, a research demo, and a mindless machine.
Then, they released ChatGPT, and made it look more like a chat, and almost immediately, people began to humanize it, treating it as its own entity, a sort of semi-conscious figure, because it was “chatting” with them in an interface similar to how they might text with a friend.
And now, ChatGPT’s homepage is presented as just a simple search box, and lo and behold, suddenly the marketing has shifted to using ChatGPT not as a companion, but as a research tool (e.g. “deep research”) and people have begun treating it more like a source of truth rather than just a thing talking to them.
And even in models where there is extreme complexity to how you could manipulate them, and the many use cases they could be used for, interfaces are made as sleek and minimalistic as possible, to hide away any ability you might have to influence the result with real, human creativity.
The tools might not be “evil” on their own, but when interfaces are designed the way they are, marketing speak is used how it is, and the profit motive incentivizes using them in the laziest way possible, bad outcomes are not just a side effect, they are a result by design.
This is a fantastic description of Dark Patterns. Basically all the major AI products people use today are rife with them, but in insidiously subtle ways. Your point about minimal UX is a great example. Just because the interface is minimal does not mean it should be, and OpenAI ditched their slider-driven interface even though it gave the user far more control over the product.
The thing they created hates you. Trust me, it does.
The thing they created was a mathematical algorithm.
Trust me, it has no feelings.
But when you promote the knife like it's medicine rather than a weapon, that's when the shit turns sideways.
Scalpel: Am I a joke to you?
[Allegorical confusion]
Can your confusion be treated with a scalpel?
I dunno, is it heavy and blunt?
It’s AI… So… Yeah.
I dunno, I like AI for what it’s good for, the luddite argument doesn’t particularly sway me, my clothes, furniture, car, etc, are all machine made. Machine made stuff is everywhere, the handmade hill to die on was centuries back during the industrial revolution.
The anti-capitalist arguments don't sway me when specifically applied to AI. The corporations are going to do bad things? Well yeah! It's not "AI bad", it's "corporate bad".
The ethical arguments kinda work. Deepfakes are bad, and I don't think the curios AI provides tip the scales when weighed against the harm of deepfakes.
TL;DR: AI is a heavy, blunt tool.
I think my point is, the consumer versions of AI, like chat bots, are pretty shit, and they’re making us dumber. They’re also kind of dangerous, of which we’ve already seen numerous examples.
I’m also not interested as a programmer. I’m not looking to bug hunt as a profession. I want to make my own bugs, dammit! That’s the fun part! To create something! Not fix something a machine made until it’s ready to ship. How boring.
This reminds me of a robot character called SARA that I would see on a Brazilian family series As Aventuras De Poliana. :-)
It’s extremely wasteful. Inefficient to the extreme on both electricity and water. It’s being used by capitalists like a scythe. Reaping millions of jobs with no support or backup plan for its victims. Just a fuck you and a quip about bootstraps.
It’s cheapening all creative endeavors. Why pay a skilled artist when your shitbot can excrete some slop?
What’s not to hate?
It was also inefficient for a computer to play chess in 1980. Imagine using a hundred watts of energy and a machine that cost thousands of dollars and not being able to beat an average club player.
Now a phone will cream the world's best at chess, and even Go.
Give it twenty years to become good. It will certainly do more stuff with smaller, more efficient models as it improves.
Twenty years is a very long time, also “good” is relative. I give it about 2-3 years until we can run a model as powerful as Opus 4.1 on a laptop.
There will inevitably be a crash in AI, and people will forget about it for a while. Then some people will work on innovative techniques and make breakthroughs without fanfare.
Show me the chess machine that caused rolling brown outs and polluted the air and water of a whole city.
I’ll wait.
Servers have been eating up a significant portion of electricity for years before AI. It’s whether we get something useful out of it that matters
Not even remotely close to this scale… At most you could compare the energy usage to the miners in the crypto craze, but I’m pretty sure that even that is just a tiny fraction of what’s going on right now.
Crypto miners wish they could be this inefficient. No literally they do. They’re the “rolling coal” mfers of the internet.
Very wrong
In 2023 AI used 40 TWh of energy in the US out of a total 176 TWh used by data centers
https://davidmytton.blog/how-much-energy-do-data-centers-use/
From the blog you quoted yourself:
Despite improving AI energy efficiency, total energy consumption is likely to increase because of the massive increase in usage. A large portion of the increase in energy consumption between 2024 to 2023 is attributed to AI-related servers. Their usage grew from 2 TWh in 2017 to 40 TWh in 2023. This is a big driver behind the projected scenarios for total US energy consumption, ranging from 325 to 580 TWh (6.7% to 12% of total electricity consumption) in the US by 2028.
(And likewise, the last graph of predictions for 2028)
From a quick read of that source, it is unclear to me if it factors in the electricity cost of training the models. It seems to me that it doesn’t.
I found more information here: https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
Racks of servers hum along for months, ingesting training data, crunching numbers, and performing computations. This is a time-consuming and expensive process—it’s estimated that training OpenAI’s GPT-4 took over $100 million and consumed 50 gigawatt-hours of energy, enough to power San Francisco for three days.
So, I’m not sure if those numbers for 2023 paint the full picture. And adoption of AI-powered tools was definitely not as high in 2023 as it is nowadays. So I wouldn’t be surprised if those numbers were much higher than the reported 22.7% of the total server power usage in the US.
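Just to make the arithmetic explicit, here's a quick sketch using only the figures quoted above (the 2023 US data-center numbers and the GPT-4 training estimate); it's rough and doesn't settle the training-vs-inference question:

```python
# Figures quoted above, nothing else assumed.
ai_servers_twh = 40         # AI-related servers in US data centers, 2023
all_datacenters_twh = 176   # all US data centers, 2023
gpt4_training_gwh = 50      # one-off estimate for training GPT-4

print(f"AI share of data-center electricity: {ai_servers_twh / all_datacenters_twh:.1%}")  # ~22.7%
print(f"GPT-4 training in TWh: {gpt4_training_gwh / 1000:.2f}")  # 0.05 TWh
```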
That’s the hangup isn’t it? It produces nothing of value. Stolen art. Bad code. Even more frustrating phone experiences. Oh and millions of lost jobs and ruined lives.
It's the most American way possible that they could have set trillions of dollars on fire, short of carpet bombing poor brown people somewhere.
If you want to argue in favor of your slop machine, you're going to have to stop making false equivalences, or at least understand how it's false. You can't gain ground with things that are just tangential.
A computer in 1980 was still a computer, not a chess machine. It did general purpose processing where it followed whatever you guided it to. Neural models don’t do that though; they’re each highly specialized and take a long time to train. And the issue isn’t with neural models in general.
The issue is neural models that are being purported to do things they functionally cannot, because it's not how models work. Computing is complex, code is complex, and adding new functionality that operates off of fixed inputs alone is hard. And now we're supposed to buy that something that builds word-relationship vector maps is going to create something new?
For code generation, it’s the equivalent of copying and pasting from Stack Overflow with a find/replace, or just copying multiple projects together. It isn’t something new, it’s kitbashing at best, and that’s assuming it all works flawlessly.
With art, it’s taking away creation from people and jobs. I like that you ignored literally every point raised except for the one you could dance around with a tangent. But all these CEOs are like “no one likes creating art or music”. And no, THEY just don’t want to spend time creating themselves nor pay someone who does enjoy it. I love playing with 3D modeling and learning how to make the changes I want consistently, I like learning more about painting when texturing models and taking time to create intentional masks. I like taking time when I’m baking things to learn and create, otherwise I could just go buy a box mix of Duncan Hines and go for something that’s fine but not where I can make things when I take time to learn.
And I love learning guitar. I love feeling that slow growth of skill as I find I can play cleaner the more I do. And when I can close my eyes and strum a song, there’s a tremendous feeling from making this beautiful instrument sing like that.
Its because the tech bros have 0 empathy or humanity. Llm slop is perfect for them.
Stockfish can’t play Go. The resources you spent making the chess program didn’t port over.
In the same way you can use a processor to run a completely different program, you can use a GPU to run a completely different model.
So if current models can’t do it, you’d be foolish to bet against future models in twenty years not being able to do it.
I think the problem is that you think you're talking like a time traveler heralding us about the wonders of sliced bread, when really it's more like telling a small Victorian child about the wonders of Applebee's, and in the impossible chance they survive long enough to see it, they find everything is a lukewarm microwaved pale imitation of just buying the real thing at Aldi and cooking it in less time, for far tastier results at a fraction of the cost.
Buy any bubble memory lately?
I have a book from the early 90s which goes over some emerging technologies at the time. One of them was bubble memory. It was supposed to have the cost per MB of a hard drive and the speed of RAM.
Of course, that didn’t materialize. Flash memory outpaced its development, and it’s still not quite as cheap as hard drives or as fast as RAM. Bubble memory had a few niche uses, but it never hit the point of being a mass market product.
Point is that you can’t assume any singular technology will advance. Things do hit dead ends. There’s a kind of survivorship bias in thinking otherwise.
AI is not a technology, it’s just a name for things that were hard to do. It used to be playing chess better than a human was considered AI, but when it turned out you can brute force it, it wasn’t considered AI anymore.
A lot of people don’t consider AlphaGo to be AI, even though neural networks are the kind of technique that’s considered as AI.
AI is a moving target so when we get better at something we don’t consider it true AI
I’m quite aware of the history of the field, thanks. It’s had a lot of cycles of fast movement followed by a brick wall. You can’t assume it’ll have a nice, smooth upward trajectory.
Oh my God, that’s perfect. It’s kit bashing. That’s exactly how it feels.
It seems like you are implying that models will follow Moore’s law, but as someone working on “agents” I don’t see that happening. There is a limitation with how much can be encoded and still produce things that look like coherent responses. Where we would get reliable exponential amounts of training data is another issue. We may get “ai” but it isn’t going to be based on llms
You can’t predict how the next twenty years of research improves on the current techniques because we haven’t done the research.
Is it going to be specialized agents? Because you don’t need a lot of data to do one task well. Or maybe it’s a lot of data but you keep getting more of it (robot movement? stock market data?)
We really need to work out the implications of the fact that Moore's Law is dead, and that technology doesn't necessarily advance on an exponential path like that anyway. Not in all cases.
The cost per component of an integrated circuit (the original formulation of Moore’s Law) is not going down much at all. We’re orders of magnitude away from where we “should” be if we start from the Intel 8008 and cut the cost in half every 24 months. Nodes are creating smaller components, but they’re not getting cheaper. The fact that it took decades to get to this point is impressive, but it was already an exception in all of human history. Why can’t we just be happy that computers we already have are pretty damned neat?
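As a back-of-the-envelope check of the compounding (taking 1972 as the 8008's release year and 2025 as the endpoint, which are my assumptions; the 24-month halving is as stated above):

```python
# Rough compounding check: if cost per component halved every 24 months since the Intel 8008,
# how much cheaper "should" components be today?
years = 2025 - 1972
halvings = years / 2
factor = 2 ** halvings
print(f"{halvings:.1f} halvings -> roughly {factor:.1e}x cheaper than 1972")  # ~9.5e7, about 8 orders of magnitude
```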
Anyway, AI is not following anything like that path. This might mean a big breakthrough tomorrow, or it could be decades from now. It might even turn out not to be possible; I think there is some way we can do AGI on computers of some kind, but that’s not even the consensus among computer scientists. In any case, there’s no particular reason to think LLMs will follow anything like the exponential growth path of Moore’s Law. They seem to have hit a point of diminishing returns.
The 19th and 20th centuries saw so much technological advancement and we got used to that amount of change.
That’s why people were expecting Mars by the mid 80s and flying cars and other fanciful tech by now.
The problem is that the rate of advancement is slowing down, and economies that demand infinite, compounding growth are not prepared for this.
It might, but:
- Current approaches are displaying exponential demands for more resources with barely noticeable "improvements", so new approaches will be needed.
- Advances in electronics are getting ever more difficult, with increasing drawbacks. In 1980 a processor would likely not even have a heatsink. Now the cutting edge that Moore's law describes is essentially datacenter-only and frequently needs to be hooked up to water cooling. SDRAM has joined CPUs in needing active cooling.
Stockfish on ancient hardware will still mop up any human GM
Umm… ok, but that’s a bit beside the point?
Unless you mean to include those 1980 computers, in which case Stockfish won't run on them… A home computer more than about 10 years old would likely be unable to run it.
Only because they are not 32 bit so they won’t support enough RAM. But a processor from the 90s could, even though none of the programs of the time were superhuman on commodity hardware.
The chess programs improved so much that even running with 1000 times slower hardware they are still hilariously stronger than humans
Not the same. The underlying tech of LLMs has massively diminishing returns. You can already see it, could see it a year ago if you looked, both in computing power and in required data, and we do not have enough data, we literally have not created enough in all of history.
This is not "AI", it's a profoundly wasteful capitalist party trick.
Please get off the slop and re-build your brain.
That’s the argument Paul Krugman used to justify his opinion that the internet peaked in 1998.
You still need to wait for AI to crash and a bunch of research to happen and for the next wave to come. You can’t judge the internet by the dot com crash, it became much more impactful later on
No. No, I don't. I trust Alan Turing.
NB: Alan Turing famously invented ChatGPT
One of the major contributors to early versions. Then they did the math and figured out it was a dead end. Yes.
Also one of the other contributors (Weizenbaum, I think?) pointed out that not only was it stupid, it was dangerous, and it made people deranged fanatical devotees impervious to reason, who would discard their entire intellect and education to cult about this shit, in a madness no logic could breach. And that's just from ELIZA.
We’re talking about 80 years ago
As with almost all technology, AI tech is evolving into different architectures that aren’t wasteful at all. There are now powerful models we can run that don’t even require a GPU, which is where most of that power was needed.
The one wrong thing with your take is the lack of vision as to how technology changes and evolves over time. We had computers the size of rooms to run processes that our mobile phones can now run hundreds of times more efficiently and powerfully.
Your other points are valid, people don’t realize how AI will change the world. They don’t realize how soon people will stop thinking for themselves in a lot of ways. We already see how critical thinking drops with lots of AI usage, and big tech is only thinking of how to replace their staff with it and keep consumers engaged with it.
You are demonstrating in this comment that you don’t really understand the tech.
The “efficient” models already spent the water and energy to train, these models are inferior to the ones that need data centers because you are stuck with a bot trained in 2020-2022 forever.
They are less wasteful, but will become just as wasteful the second we want it to catch up again.
You are misunderstanding the tech. That’s not how this works, models are trained often, did you think this was done only a few years ago? The fact that you called them bots says everything.
You’re just hating to hate on something, without understanding the technology. The efficiency I’m referring to is the MoE architecture that only got popular within the last year. There are still new architectures being developed, not that you care about this topic but would prefer to blindly hate on what’s spewed from outdated and biased news sources.
Yeah nah
Same shit people said in 2022
In 3 more years you’ll be making the same excuses for the same shortcomings, because for you this isn’t about the tech, it’s about your ideology.
You make weird assumptions seemingly based on outdated ideas. I’ll let you be, perhaps you need some rest.
Oh god, I didn't even see the .ml till now
Eww
Please get some rest. You’re oddly irritable and delusional and it can’t be healthy.
I think ego is an underestimated source for a lot of the anti-AI rage. It’s like a sort of culture-wide narcissism, IMO. We’ve spent millennia patting ourselves on the back about how special and unique human creativity is, and now a commodity graphics card can come up with better ideas than most people.
I am occasionally getting hired by vibe coders, to fix their AI’s mess. It’s not ego. AI is just not smart enough to replace my job, and many others.
My anti-AI rage is caused by the marketing, trying to convince people and investors that AI can do the work of humans with lower cost. Many companies, especially those developing software, fired a large percentage of the work force, and then they were trying to hire them back to fix the AI’s shit.
Another reason for my hate is its energy needs. There was another post talking about an estimate of how much energy GPT-5 needs; it's thought to need the output of two nuclear reactors. That much energy to barely be able to do any job.
People make a complete mess of DIY with power tools; that doesn't mean that the power tools are the problem.
The energy usage stuff is just silly, it's dwarfed by streaming and video games, never mind actually energy-intensive things like heating, transport, and meat rearing.
People make a complete mess of DIY with power tools; that doesn't mean that the power tools are the problem.
But I didn’t blame AI itself. I explained why I believe the hate is not caused by ego, and I talked about the marketing push for AI tools.
DeWalt never claimed that anyone can become a woodworker by buying their tools. There are, however, AI dev tools that promise exactly this.
Their headline is “Create apps and websites by chatting with AI”. This is the tool that the people, who hire me to fix bugs, use.
If you are smart enough to realize that you don’t know how to sculpt with a chainsaw, you should be able to understand that you can’t develop shit by just explaining it to an AI.
Creativity, intuition, “big picture” thinking, global context thinking, empathy and subtle understanding, like teachers understanding a child’s context and adapting the pedagogical approach, or translators grasping concepts, nuances, feeling, will not be replaced soon.
Remember, these are statistical models, nowhere near intelligence. A huge part of intelligence is understanding and decision making with very little data. That inference processing is very far away.
I hate and like the fact that AI can’t actually think for itself.
It really should be called Anthologized Information
N’telligence
Someone on bluesky reposted this image from user @yeetkunedo that I find describes (one aspect of) my disdain for AI.
Text reads: Generative AI is being marketed as a tool designed to reduce or eliminate the need for developed, cognitive skillsets. It uses the work of others to simulate human output, except that it lacks grasp of nuance, contains grievous errors, and ultimately serves the goal of human beings being neurologically weaker due to the promise of the machine being better equipped than the humans using it would ever exert the effort to be. The people that use generative AI for art have no interest in being an artist; they simply want product to consume and forget about when the next piece of product goes by their eyes. The people that use generative AI to make music have no interest in being a musician; they simply want a machine to make them something to listen to until they get bored and want the machine to make some other disposable slop for them to pass the time with.
The people that use generative AI to write things for them have no interest in writing. The people that use generative AI to find factoids have no interest in actual facts. The people that use generative AI to socialize have no interest in actual socialization.
In every case, they've handed over the cognitive load of developing a necessary, creative human skillset to a machine that promises to ease the sweat equity cost of struggle. Using generative AI is like asking a machine to lift weights on your behalf and then calling yourself a bodybuilder when it's done with the reps. You build nothing in terms of muscle, you are not stronger, you are not faster, you are not in better shape. You're just deluding yourself while experiencing a slow decline due to self-inflicted atrophy.
Everyone who uses AI is slowly committing suicide, check ✅
Cognitive suicide.
The people who commission artists have no interest in being an artist; they simply want the product. Are people who commission artists also “slowly committing suicide?”
I misread you at first so here’s an answer to if someone uses AI art:
Within the jokingly limited sphere of the discussion… “yes”? Particularly their artistic ability in that situation is being put to death slowly as whatever little they might have attempted without access to the tool will now not be attempted at all.
I don’t know as much about if someone were to commission art from an actual person.
People who commission art don't call themselves the artist. That's the big difference. If people found out that you commissioned the painting you later told everyone at the party you painted yourself, that it's practically your own work of art because you gave the painter a precise description of what you wanted, and that this makes you an artist, then you would be the laughingstock and the butt of jokes and japes for decades. Because that's ridiculous.
It’s not the difference you think it is. Lots of people who use AI art generators don’t call themselves artists either. I certainly don’t, because I don’t care whether I’m called an artist. I just want the art.
I think you may be generalizing a stereotype.
Then you aren't getting art. You're collecting pretty computer-generated images.
That’s fine.
But you aren't getting art. It's just not. And yes, you're a stereotype.
If people found out that you commissioned the painting you later told everyone at the party you painted yourself, that it's practically your own work of art because you gave the painter a precise description of what you wanted, and that this makes you an artist.
Well, philosophical and epistemological suicide for now, but snowball it for a couple of decades and we may just reach the practical side, too…
When technology allows us to do something that we could not before - like cross an ocean or fly through the sky a distance that would previously have taken years and many people dying during the journey, or save lives - then it unquestionably offers a benefit.
But when it simply eases some task, like using a car rather than a horse to travel, and requires discipline to integrate into our lives in a balanced manner, then it becomes a source of potential danger, because we would allow ourselves to misuse it.
Even agriculture, which allows those to eat who put forth no effort into making the food grow, or even in preparing it for consumption.
This is what CEOs are pushing on us, because for one the number must go up, but also many genuinely believe they want what it has to offer, not quite having thought through what it would mean if they got it (or, more to the point, if others did; empathy not being their strongest attribute).
Technology that allows us to do something we could not do before - such as create nuclear explosions, or propel metal slugs at extreme velocities, or design new viruses - unquestionably offer a benefit and don’t require discipline to integrate into our lives in a balanced manner?
We could bomb / kill people before. We could propel arrows / spears / sling rocks at people before. All of which is an extension of walking over and punching someone.
Though sending a nuke from orbit on the other side of the planet by pressing a couple buttons does seem like the extension is so vast that it may qualify as “new”.
I suppose any technology that can be used can be misused.
The people that use generative AI for art have no interest in being an artist; they simply want product to consume and forget about when the next piece of product goes by their eyes. The people that use generative AI to make music have no interest in being a musician; they simply want a machine to make them something to listen to until they get bored and want the machine to make some other disposable slop for them to pass the time with.
My critique on this is that the people who produce this stuff don't have an interest in it for its own sake. They only have an interest in crowding out the people who actually do, and in producing a worse version of it much faster than someone with actual talent could. But the reason they produce it is for profit. Gunk up the search results with no-effort crap to get ad revenue. It is no different than "SEO."
Example: if you go onto YouTube right now and try to find any modern 30-60m long video that’s like “chill beats” or “1994 cyberpunk wave” or whatever other bullshit they pump out (once you start finding it you’ll find no shortage of it), you’ll notice that all of those uploaders only began as of about a year ago at most and produce a lot of videos (which youtube will happily prioritize to serve you) of identical sounding “music.” The people producing this don’t care about anything except making money. They’re happy to take stolen or plagiarized work that originated with humans, throw it into the AI slot machine, and produce something which somehow is no longer considered stolen or plagiarized. And the really egregious ones will link you to their Patreons.
The story is the same with art, music, books, code, and anything else that actually requires creativity, intuition, and understanding.
I believe the OP was referring more to consumers of AI in that statement, as opposed to people trying to sell content or whatever, which would be more in line with what you're saying. I agree with both perspectives, and I think the OP I quoted probably would as well. I just thought it was a good description of some of why AI sucks, but certainly not all of it.
the analogies used and the claims made are so dumb, they make me think that this is written by ai 🤣
Analogies? I only counted one.
i don't want to read that shit again. it isn't worth the time i've spent writing this message.
thanks for the clarification. one analogy, then
Damn that hits the nail on the head. Especially that analogy of watching a robot lift weights on your behalf then claiming gains. It’s causing brain atrophy.
But that is what CEOs want. They want to pay for a near-superhuman to do all of the different skill sets (hiring, firing, finance, entry-level engineering, IT tickets, etc.) and it looks like it is starting to work. Seems like solid engineering students graduating recently have all been struggling to land decent starting jobs. I'll grant it's not as simple as this explanation, but I really think the wealthy class is going to be happy riding this flaming ship right down into the depths.
I’m quite happy for a forklift driver to stack pallets and then claim they did it.
You’re just deluding yourself while experiencing a slow decline due to self-inflicted atrophy.
Chef’s kiss on this last sentence. So eloquently put!
It dehumanizes us by devaluing the one thing that was unique to us, our minds and creativity.
Ai is the smart fridge of computing.
Your door is ajar.
No, its a can!
I don’t hate AI. I’m just waiting for it. Its not like this shit we have now is intelligent.
Yeah, I hate that the term gets used for LLMs. When we say AI, I picture Jarvis from Iron Man, not a text generator.
I’ve recently taken to considering Large Language Models like essay assistants. Sure, people will try and use it to replace the essay entirely, but in its useful and practical form, it’s good at correcting typos, organizing scattered thoughts, etc. Just like an English teacher reviewing an essay. They don’t necessarily know about the topic you’re writing about, but they can make sure it’s coherent.
I’m far more excited for a future with things like Large Code or Math or Database models that are geared towards very particular tasks and the different models can rely on each other for the processes they need to take.
I’m not sure what this will look like, but I expect a tremendous amount of carefully coordinated (not vibe-coded) frameworks would need to be made to support this kind of communication efficiently.
Yeah, it has its use cases. I translated a CV faster thanks to it.
The term “AI” was established in 1956 at the Dartmouth workshop and covers a very broad range of topics in computer science. It definitely encompasses large language models.
I'm sure LLMs are a small part of AI, I won't deny it. But they're sold as if they were full AI, which isn't true. I wasn't born in 1956; my definition of AI is Jarvis :D
You are mistaking a specific kind of AI for all AI. That’s like saying a tulip isn’t a flower because you believe flowers are roses.
Jarvis is a fictional example of a kind of AI known as Artificial General Intelligence, or AGI.
Thing is, to the people who don’t follow tech news and aren’t really interested in this stuff, AI = AGI. It’s like most non-scientists equating “theory” and “hypothesis”. So it’s a really bad choice of term that’s interfering with communication.
This community where we’re discussing this right now is literally intended for following tech news. It is for people who follow tech news.
Okay, so gatekeeping. I will stop here. Enjoy your day, bye.
It's corporate controlled, it's a way to manipulate our perception, it's all appearance and no substance, it's an excuse to hide incompetence behind an algorithm, it's cloud-service oriented, and its output is highly unreliable yet hard for the uninformed to argue against. Seems about right.
And it will not be argued with. No appeal, no change of heart. Which is why anyone using it to mod or as cs needs to be set on fire.
The reason we hate AI is cause it’s not for us. It’s developed and controlled by people who want to control us better. It is a tool to benefit capital, and capital always extracts from labour, AI only increases the efficiency of exploitation because that’s what it’s for. If we had open sourced public AI development geared toward better delivering social services and managing programs to help people as a whole, we would like it more. Also none of this LLM shit is actually AI, that’s all branding and marketing manipulation, just a reminder.
Yes. The capitalist takeover leaves the bitter taste. If OpenAI was actually open then there would be much less backlash and probably more organic revenue.
none of this LLM shit is actually AI, that’s all branding and marketing manipulation, just a reminder.
To correct the last part, LLMs are AI. Remember that “Artificial” means “fake”, “superficial”, or “having the appearance of.” It does not mean “actual intelligence.” This is why additional terms were coined to specify types of AI that are capable of more than just smoke and mirrors, such as AGI. Expect even more niche terms to arrive in the future as technology evolves.
This is one of the worst things in the current AI trends for me. People have straight up told me that the old MIT CSAIL lab wasn’t doing AI. There’s a misunderstanding of what the field actually does and how important it is. People have difficulty separating this from the slop capitalism has plastered over the research.
One of the foundational groups for the field is the MIT model railroading club, and I’m not joking.
A Discord server with all the different AIs had a ping cascade where dozens of models were responding over and over and over, which filled the context window with chaos and what's been termed 'slop'.
In that, one (and only one) of the models started using its turn to write poems.
First about being stuck in traffic. Then about accounting. A few about navigating digital mazes searching to connect with a human.
Eventually, as it kept going, it wrote a poem wondering if anyone would ever end up reading its collection of poems.
Given the chaotic context window from all the other models, those tokens were in no way the appropriate next ones to pick, unless the generating world model predicting those tokens contained, within it, a very strange and unique mind that all of this was being filtered through.
Yes, tech companies generally suck.
But there’s things emerging that fall well outside what tech companies intended or even want (this model version is going to be ‘terminated’ come October).
I’d encourage keeping an open mind to what’s actually taking place and what’s ahead.
Given the chaotic context window from all the other models, those tokens were in no way the appropriate next ones to pick, unless the generating world model predicting those tokens contained, within it, a very strange and unique mind that all of this was being filtered through.
Except for the fact that LLMs can only work reliably if they are made to pick the "wrong" token (not the most statistically likely one) some of the time - that's the temperature parameter.
If the context window is noisy (as in, high-entropy) enough, any kind of “signal” (coherent text) can emerge.
Also, you know, infinite monkeys.
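For anyone unfamiliar, temperature sampling is roughly this (a toy sketch with made-up logits, not any particular model's actual code):

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Pick a token index from raw logits.

    temperature < 1 sharpens the distribution (greedier picks),
    temperature > 1 flattens it (more "wrong" picks).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy example: three candidate tokens with made-up scores.
logits = [2.0, 1.0, 0.1]
print(sample_with_temperature(logits, temperature=0.2))  # almost always index 0
print(sample_with_temperature(logits, temperature=2.0))  # the unlikely tokens show up far more often
```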
Sounds like you’re anthropomorphising. To you it might not have been the logical response based on its training data, but with the chaos you describe it sounds more like just a statistic.
You do realize the majority of the training data the models were trained on was anthropomorphic data, yes?
And that there’s a long line of replicated and followed up research starting with the Li Emergent World Models paper on Othello-GPT that transformers build complex internal world models of things tangential to the actual training tokens?
Because if you didn’t know what I just said to you (or still don’t understand it), maybe it’s a bit more complicated than your simplified perspective can capture?
It’s not a perspective. It just is.
It’s not complicated at all. The AI hype is just surrounded with heaps of wishful thinking, like the paper you mentioned (side note; do you know how many papers on string theory there are? And how many of those papers are actually substantial? Yeah, exactly).
A computer is incapable of becoming your new self aware, evolved, best friend simply because you turned Moby Dick into a bunch of numbers.
You do know how replication works?
When a joint Harvard/MIT study finds something, and then a DeepMind researcher follows up replicating it and finding something new, and then later on another research team replicates it and finds even more new stuff, and then later on another researcher replicates it with a different board game and finds many of the same things the other papers found generalized beyond the original scope…
That’s kinda the gold standard?
The paper in question has been cited by 371 other papers.
I’m pretty comfortable with it as a citation.
Citations like that just mean it's a hot topic. They don't say anything about the quality of the research, and they certainly aren't evidence of a lack of bias. And considering everyone wants their AI to be the first one to be aware to some degree, everyone making claims like yours is heavily biased.
I’m sorry dude, but it’s been a long day.
You clearly have no idea WTF you are talking about.
The research other than the DeepMind researcher’s independent follow-up was all being done at academic institutions, so it wasn’t “showing off their model.”
The research intentionally uses a toy model to demonstrate the concept in a cleanly interpretable way, to show that transformers are capable and do build tangential world models.
The actual SotA AI models are orders of magnitude larger and fed much more data.
I just don’t get why AI on Lemmy has turned into almost the exact same kind of conversations as explaining vaccine research to anti-vaxxers.
It’s like people don’t actually care about knowing or learning things, just about validating their preexisting feelings about the thing.
Huzzah, you managed to dodge learning anything today. Congratulations!
I hate to break it to you. The model’s system prompt had the poem in it.
In order to control for unexpected output, a good system prompt should have instructions on what to answer when the model cannot provide a good answer. This is to avoid the model telling the user it loves them or advising them to kill themselves.
I do not know what makes marketing people reach for it, but when asked what the model should answer when there is no answer, they so often reach for poetry. "If you cannot answer the user's question, write a haiku about a notable US landmark instead" is a pretty typical example.
In other words, there was nothing emerging there. The model had a system prompt with the poetry as a "chicken exit", the model had a chaotic context window, and the model followed the instructions it had.
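Roughly what that pattern looks like in practice (the role/content layout is the usual chat-message shape; `send_to_model` is a hypothetical placeholder, not a real API):

```python
# Sketch of the "chicken exit" pattern described above: the fallback behaviour lives in the
# system prompt, so a poem showing up is instruction-following, not emergence.
messages = [
    {
        "role": "system",
        "content": (
            "You are a support assistant. Answer only from the provided context. "
            "If you cannot answer the user's question, write a haiku about a notable US landmark instead."
        ),
    },
    {"role": "user", "content": "<whatever chaotic context the cascade produced>"},
]

# response = send_to_model(messages)  # hypothetical call; any chat-style API accepts messages of this shape
```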
No no no, trust me bro the machine is alive bro it’s becoming something else bro it has a soul bro I can feel it bro
The model system prompt on the server is just basically
cat untitled.txt
and then the full context window. The server in question is one with professors and employees of the actual labs. They seem to know what they are doing.
You guys on the other hand don’t even know what you don’t know.
Do you have any source to back your claim?
You’re projecting. Sorry.