I am increasingly starting to believe that all these rumors and “hush hush” PR initiatives about “reasoning AI” are an attempt to keep the hype (and the VC investment) going until the vesting period for their stock closes out.
I wouldn’t be surprised if all these “AI” companies have reached a point where they’re basically at the limits of LLM capabilities (due to problems with the fundamental architecture) while being unable to solve its core drawbacks (hallucinations, ridiculously high capex and opex costs).
It’s the Elon Musk style of narrative-making we’ve been seeing over and over again. It’s hype. They’re about to run out of input data because they’ve sucked up everything they could. The Internet is being fed a mass of bad results that come from LLM-produced output, which enshittifies the Internet further. These companies are burning cash and grid energy while the world burns. Unless there’s a spectacular breakthrough, this can’t keep going much longer.
Yea, this. It’s a weird time, though. All of it is hype and marketing hoping to cover costs by searching for some unseen product down the line … even the original ChatGPT feels like a basic marketing stunt: “If people can chat with it, they’ll think it’s miraculous, however useful it actually is.”
OTOH, it’s easy to forget that genuine progress has happened with this rush of AI, and it surprised many. Literally the year before AlphaGo beat the world champion, almost no one thought it was going to happen any time soon. And though I haven’t checked in recently, from what I could tell the progress on protein folding done by DeepMind was real (however hyped it also was). Whether new things are still coming or not I don’t know, but it seems more than possible. Of course, that doesn’t mean there isn’t a big pile of hype that will blow away in the wind.
What I ultimately find disappointing is the way the mainstream has responded to all of this.
The lack of conversation about what we want this to look like in the end. There’s way too much of a passive “let’s see where the technology and big-corp capitalism take us and hope it doesn’t lead to some sort of apocalypse.”
The very seamless and reflexive acceptance that an AI chat interface could be an all-knowing authority for everything in life was somewhat shocking to me. Obviously decades of “Googling” for the answers to things have laid the groundwork for that, but still, there was IMO an unseemly acceptance of a pretty troubling future, one that indicated just how easily some dark timeline could arise.
IMHO there’s nothing amazing about a computer winning a board game. People act like Go is some mystery from the heavens, but it’s just a finite board with two different colored rocks. Big whoop in 2024.
Progress is definitely happening. One area that I am somewhat knowledgeable about is image/video upscaling. Neural-net-enhanced upscaling has been around for a while, but we are increasingly getting to a point where SD (DVD source, older videos from the 90s/2000s) to HD upscaling is working almost like in the science fiction movies. There are still issues of course, but the results are drastically better than simply scaling the source media by 2x.
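For reference, the “simply scaling by 2x” baseline is plain interpolation, no learning involved. A minimal pure-Python bilinear sketch (grayscale, illustrative only; real NN upscalers are trained convolutional models, not this):

```python
# Naive 2x upscaling by bilinear interpolation: the baseline that
# neural-net upscalers get compared against. Pure Python; a grayscale
# image is represented as a list of rows of pixel values.

def upscale_2x_bilinear(img):
    h, w = len(img), len(img[0])
    out_h, out_w = h * 2, w * 2
    out = []
    for y in range(out_h):
        # Map the output coordinate back into the source grid.
        sy = y * (h - 1) / (out_h - 1) if out_h > 1 else 0
        y0 = int(sy)
        y1 = min(y0 + 1, h - 1)
        fy = sy - y0
        row = []
        for x in range(out_w):
            sx = x * (w - 1) / (out_w - 1) if out_w > 1 else 0
            x0 = int(sx)
            x1 = min(x0 + 1, w - 1)
            fx = sx - x0
            # Weighted average of the four surrounding source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```

Interpolation like this can only blur existing pixels apart; it can’t invent plausible detail, which is exactly what the learned models add (and where the artifacts come from).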
The framing of LLMs as some sort of techno-utopian “AI oracle” is indeed a damning reflection on our society. Although I think this topic is outside the scope of current “AI” discussions and would likely involve a fundamental reform of our broader social, economic, political and educational models.
Even the term “AI” (and its framing) is extremely misleading. There is no “artificial intelligence” involved in an LLM.
One area that I am somewhat knowledgeable about is image/video upscaling
Oh I believe you. I’ve seen it done on a home machine on old time-lapse photos. It might have been janky for individual photos, but as frames in a movie it easily elevated the footage.
They’re about to run out of input data because they’ve sucked up everything they could.
It would cost a lot of money, but you can definitely go through and manually sanitize the data. That would give a good bump in performance, both in output quality and in the resources required to run the model. Quality over quantity.
They’re not even close to running out of input data; you’re forgetting YouTube exists.
IMHO there’s nothing amazing about a computer winning a board game.
It sounds like you don’t understand the complexity of the game. Despite being finite, the number of possible games is extremely large.
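For a sense of scale, a quick back-of-the-envelope bound (the count of strictly *legal* positions is lower, roughly 2.1×10^170, but the order of magnitude is similar):

```python
from math import log10

# Crude upper bound on Go board configurations: each of the
# 19 x 19 = 361 points is either black, white, or empty.
configurations = 3 ** 361

# About 10^172 configurations; for comparison, the observable
# universe is usually estimated at around 10^80 atoms.
print(f"about 10^{int(log10(3) * 361)} configurations")
```

That’s why Go resisted the brute-force search that cracked chess, and why a learned evaluation function was the notable part of AlphaGo.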
Progress is definitely happening. One area that I am somewhat knowledgeable about is image/video upscaling.
Sure, but those are specialist models. Generalist models are stagnant and show little potential for progress.