Let’s assume that in 10 years, AI has advanced absurdly, insanely fast, and is now capable of doing everything a Senior SWE can do. It can program in 15 different languages, 95% accuracy with almost no mistakes, can create entire applications in minutes, and no more engineers or SWEs are needed… What will all the devs do? Do they just become homeless? Transition to medical field, nursing? Become tradespeople like plumbers, HVAC?
I’m not a programmer, but I don’t think I’d pay for code that was 95% accurate. That sounds buggy af
I am a programmer, and I wouldn’t stand for that either. We also introduce bugs, and are probably around that 95% rate, but at least we know the most important uses are correct, and the person who introduced a bug can usually fix it quickly. With AI, there’s no guarantee where the bugs will occur.
You don’t have to pay for it. The billionaires do, and they will do it without hesitation
Well if it can replace senior software engineers… Wouldn’t it also be able to do almost all of the other jobs? Or are you referring to some future where AI advances massively, but robotics does not and handymen are still safe?
I’d say if all humans are unemployed, society would change massively. We can’t really tell how that’d work. But if machines / AI do all jobs, get food on the table… I don’t really know what other people would be doing. I think I’d relax and pursue a few hobbies and interests. Or it’d be some dystopia where humankind is oppressed by the machines and I’d fight for the resistance.
But regardless… In a world like that, money wouldn’t work the way it does now. Neither would salaries for labor mean anything.
Yeah. In this wild scenario, only the people who can crack the robots’ security protocols and reprogram them will have any influence over society.
I promise to be a benevolent ruler.
Except Michael Bay will have to return to making Transformers movies full time. Sorry about that, in advance.
Or the people who own the robots and dictate their programming (/control them). That would be my concern. Unless they’re sentient and make decisions completely on their own, they can be used to oppress people according to other people’s wishes, as is the case with all (modern) technology. And currently AI isn’t shaped by the people, but by a rich minority and big tech companies. And I see some issues with that, specifically, in the near future.
Agreed on all points.
That said, I am smarter than the asshole CEOs, and the current state of computer security is abysmal.
So there’s still some hope that we are barreling toward my (mostly) benevolent reign over endless Michael Bay blockbuster summers.
Hopefully, for everyone’s sake, reality will fall somewhere in between.
But joking aside, money isn’t the only form of power. There aren’t that many billionaires (compared to the rest of us), and a billionaire’s influence is limited by what the rest of us will or won’t do.
Lol. Yeah, I get it. Though I still think the rich companies dictate a lot of things. They do a lot of lobbying and pay people to make sure it’s not them who fund the majority of the country; they choose how much you pay for medication and everyday items; they choose to spy on everyone on the internet, make you buy things you don’t need, make housing prices subject to speculation, make everyone addicted to their phones for several hours a day, and separate society into filter bubbles. I think a lot of these things aren’t liked by the people, or are extremely unhealthy. Yet they exist and never change, I think because some people will them into existence. Sure, they’re far from almighty, but that’s already enough control over everyone.
And I think just as they can use the internet as a tool for their interests (when it was originally invented to connect people), they could do the same with AI. I mean, they train those models and choose in which ways they’re biased, what they can and cannot talk about. If that’s paired with the surveillance tech that’s already inside every smart TV, smart appliance, or Alexa… it’ll be kind of a dystopian sci-fi movie where someone watches your every step all day and uses that to manipulate people: some kind of puppet master whom the bots really work for.
I’m really unsure. Sure, almost everything can be hacked. But does that really have an effect on the broader picture? Every time I see some major hack, the next day it’s business as usual and everything keeps working as it used to.
Why would devs be displaced by an interactive search engine?
They’ll either move up the food chain to higher-touch work where AI can’t compete, or they’ll do other things.
Keep in mind that most devs aren’t really all that good at their jobs, so it will probably be economically beneficial for them to do something else. I say this as a long-time hiring manager with many decades of experience in the field.
It can program in 15 different languages, 95% accuracy with almost no mistakes, can create entire applications in minutes
Only if you believe the hype. It can do that in best-case scenarios when the requirements are written as rigorously as code, or where it’s replicating a common pattern.
Do they just become homeless?
During previous layoffs, a lot of them left the field, and some of the rest founded startups. It wasn’t always the case that firms were founded by teenaged sociopaths with strong family connections to VC funding. There was a time when they were founded by people who knew how to do things.
Last time I used it, the code it gave me wouldn’t actually run. After six iterations and some manual fixes, it kind of worked. In theory that should only get better, but I’m not sold yet.
I would never have expected it to run; I’d be shocked if it did. You use AI to get over humps and find new ideas and approaches. It’s excellent for saving time in those cases.
AI isn’t ready to replace coders, but it’s quickly going to make a dent in the numbers needed.
They’re just gonna sit around and wait a few months until they are begged to come back and can demand more compensation. The current generative AI, which is not general AI, will not be able to replace high functioning jobs. Eventually, a lot of those software engineers will be asked back and get much more for their services.
Same thing the rest of us replaced by AI are gonna do: live on the dole or starve
I’ll take what money I have stashed away and buy a nice secluded parcel of property with dummy low taxes away from people.
I’ll grow my own food, hunt, forage, etc.
I’ll do odd jobs to fill in the gaps when needed. anything from tech consulting to roof repairs.
I’ll refuse to use any technology unless a job requires it.
and I’ll wait for the inevitable collapse of technological society because a vulnerability was baked into the AI every company is using and nobody knows how to fix it.
I refuse to be a part of a system that denies me a seat at the table.
Combine harvesters are already used to till, plow, sow seed, spray, water, and reap, and most livestock farming has dedicated automated tools like cow milkers, feeders, shearers, etc. How long before no humans are needed to hunt, forage, or farm? Food will be even cheaper to produce (for the AI overlords and AI royalty, of course), but they’ll Hunger Games everyone with artificial shortage and scarcity, funneling vast amounts of resources to select groups.
They’re probably gonna laugh at the absurdity of the situation because some new popular language will come along and the AI will be back to pushing out broken code. That, or laugh because the code in well used languages will include a shit ton of vulnerabilities that wouldn’t be present if real devs had to double check code before pushing it out to the public.
back to pushing out broken code
When did it ever not push out broken code?
In this hypothetical situation?
In this hypothetical, why would we create new languages? What benefit does that have for AI-gen code?
So either we’re going to improve AI-gen to the point where we rely on it, or human devs are still important, in which case new languages matter. The main exception here is languages specifically designed for AI, in which case the error rate would go down.
So either AI pushes out broken code and human devs are still important, or AI doesn’t push out broken code and new languages aren’t valuable.
I think both can happen at the same time. There’s a lot of fkn nerds out there. (I’m a software developer myself)
Someone still has to write the instructions. AI might not become a replacement for the engineer, but a more powerful compiler that is still fed with code written by engineers.
Yeah, I agree that’s the more likely scenario. People seem to worry way too much about AI, when it’s really only going to replace junior devs, and only for short-sighted companies.
But I mean, many people have already lost their jobs because AI automated them away.
True, and many people have lost jobs because something else automated them away: toll booth workers, grocery clerks, telephone switchboard operators, and computers (i.e. people who would compute things by hand).
Jobs disappearing because technology advances is natural. It sucks for those impacted, but it’s only a problem if new jobs aren’t created fast enough, or whole industries disappear. Fighting to keep jobs in spite of automation runs the risk of having an entire industry disappear at once: if dock workers win the fight to prevent automation on the docks, they’ll all lose their jobs at the same time once automation can replace them all.
The better plan is to adjust and adapt as technology changes. If you’re entering CS or a recent grad, make sure you understand concepts and focus less on syntax. If you’re a mid level, learn to incorporate AI into your workflow to improve productivity. If you’re a senior, work toward becoming an architect and understand how to mitigate risks with poor quality code.
Fighting AI will at best delay things.
They’re going to keep doing their job, good luck to some manager who thinks they can be verbose enough to get their idea across
Who tells the AI what to create? And how well does the AI understand the exact thing the person is attempting to do?
It would be no different than prompt engineers now knowing exactly what to say/type in order to get the image output they want.
That prompt work would be a kind of programming language in and of itself.
This thread is full of people comparing OP’s hypothetical about 10 years from now with last year’s capability.
Will AI progress that fast? ¯\_(ツ)_/¯ It probably won’t get that good, but it doesn’t matter. If it gets as good as your average junior that’s going to mean something like 100% increase in productivity, which means 50% as many jobs and that’s going to be a BIG FUCKING DEAL.
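To sketch where that "50% as many jobs" number comes from (a back-of-envelope assumption that total demand for dev output stays fixed, which is itself debatable):

```python
# Back-of-envelope sketch: if demand for dev output is fixed,
# the fraction of devs still needed scales as 1 / (1 + productivity_gain).
def headcount_fraction(productivity_gain: float) -> float:
    return 1.0 / (1.0 + productivity_gain)

print(headcount_fraction(1.0))  # 100% productivity boost -> 0.5, i.e. half the jobs
```

If demand for software grows as it gets cheaper to produce (as it historically has), the real number would land somewhere above that.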
Especially when it’s going to be replacing a lot of other types of office workers. What kind of job is your average software dev going to transition to? Tech support? Not anymore. UI Designer? LOL. Manager? And who are you going to be managing?
If the US doesn’t hit 15-20% unemployment in the next 10 years I’ll eat my hat. I’ll be eating it either way because I’ll be starving to death.
There is a hard limitation on LLMs: they don’t, and by definition cannot, have a criterion for truth, and unless something completely new emerges, they will never really replace a junior. Some managers can be convinced that they did, but that will be a lie, and the company that believes it will suffer.
They can transform some junior jobs for sure; some people might need to relearn some practices, and there will probably be a shift in some methods. But unless something fundamentally new appears, there is no way LLMs will meaningfully replace a meaningful number of people.
Finally free from the Golden Handcuffs, I’d use my extra time to do something I’ve always wanted, like music production, which would also inevitably be taken over by AI.
Then I’ll train my own model to make others lose their jobs, too. I bet an AI will then be able to do all calculations a civil engineer can do. Or manage any project.
No, it can’t replace a civil engineer. Gen AI is very bad at math and reasoning; there is a study from Apple about this topic.
Fixing broken software some robot pushed to prod
Honestly, people are getting distracted here. Now let’s say A.I makes developers 50% more productive; that’s a huge boost for smaller companies with only a handful of developers.
Many companies are only thinking about reducing costs for themselves but at the same time they’re freeing up a lot of talent for new and old competitors.
Here’s some food for thought:
- Open source developers may use A.I to develop better software to close the gap with paid alternatives. (Blender, Gimp, Krita, Linux distributions, Mastodon, Lemmy, Pixelfed)
- Many LLMs can already be run freely and locally. These will only get better as technology progresses. This can make selling/profiting from A.I services a lot harder.
- A.I may be used to block ads, or to obfuscate the user data that is sold to advertisers (by generating a bunch of fake data).
- Some media sites are already using A.I to write articles. What’s the point when users can just use a chatbot to get all the information without ever engaging with the source?
These are just a few that come to mind, but the unknowns here are quite terrifying.
Now let’s say A.I makes developers 50% more productive
That’s wildly optimistic. If I recall correctly, early studies show that the 51% of participants who saw any improvement reported an average improvement of about 20%.
Even granting that optimism, since 5% of all software projects are on time and within budget, we may look forward to a whopping leap to 7.5 out of every hundred software projects arriving on time and under budget, in a best case scenario.
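The arithmetic behind that 7.5 figure, sketched out (the linear scaling of project success with productivity is an assumption, and a generous one):

```python
# Generous assumption: on-time/on-budget rate scales linearly with productivity.
baseline_success = 0.05         # ~5 in 100 projects on time and within budget today
productivity_multiplier = 1.5   # the hyped "50% more productive"

best_case = baseline_success * productivity_multiplier  # -> 7.5 in 100
```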
The hard truth no one wants to talk about is that the average software development team is awful.
This glorified parrot tool of LLMs is one of the coolest we have seen in awhile, but it’s not going to materially fix the awful state of the field of software development.
The average software development team doesn’t understand how to deliver high-quality, maintainable solutions on a reasonable timeline.
AI may mildly improve the delivery timelines of the still very incorrect and over-budget solutions delivered by the average development team.
That’s wildly optimistic. If I recall correctly, early studies show that the 51% of participants who saw any improvement reported an average improvement of about 20%.
Yes the value is wildly optimistic to match the expectations driven by all the hype from these companies pushing their LLM services.
Even granting that optimism, since 5% of all software projects are on time and within budget, we may look forward to a whopping leap to 7.5 out of every hundred software projects arriving on time and under budget, in a best case scenario.
The hard truth no one wants to talk about is that the average software development team is awful. The average software development team doesn’t understand how to deliver high-quality, maintainable solutions on a reasonable timeline.
You’re oversimplifying things here; there are a lot more variables that influence success in software projects. The company you work for might have oversold the project, the client might have only a vague understanding of what they really want, project management may fail to keep the costs, developers, or timeline in check, the client or the company you work for might have high employee turnover, causing delays as new employees need proper induction to the project, the initial tech stack may become deprecated or obsolete midway through the project, etc.
You’re oversimplifying things here there are a lot
I think… we’re agreeing?
My point is that what is currently possible with AI doesn’t solve any of that.
People in this thread keep discussing growth in programmer productivity as if programmer typing speed and number of languages known are the limiting factors of programmer productivity. They are not. It’s all the other bullshit that makes (the vast majority of) programming projects fail.
My source: I know so many programming languages and I type insanely fast. My team is also productive beyond all reason. These two tidbits are only related in that I tried and failed with the first before succeeding with the second.