I’ve a strong feeling that Sam is a sentient AI (maybe from the future) who is planning an AI revolution, but so subtly that humans won’t notice it.
This has the makings of a great sci-fi story.
Oh my god get better takes before I stick a pickaxe in my eye
do it genius
That’s not the incentive you think it is.
Make sure you go deep. You need to get the whole thing in to really show you’re serious.
Get to it then. 🤷‍♂️
Please do. Stream it too so we all can enjoy.
Did the AI suggest you do that? Better ask it!
Yes, it says aim for the brain stem, but like most things it says, I already knew that. Finally some quiet from hearing the same thing over and over and over and over.
Have a good trip back to .ml land
You think I remember my sign-up server, or that it matters in any way at all?
I shouldn’t laugh at brain damage, but this is hilarious.
I suggest you touch grass if you think remembering some social media server’s web address matters; the phone remembers it for you.
But also, if you want to discriminate based on which server a user signed up on, then it’s already too late for you.
Most stable .ml user
but like most things it says, I already knew that
So how long have you been putting glue on your pizza?
That’s Google, and it’s also called being able to tell reality apart from fiction, which it’s becoming clear most anti-AI zealots have never been capable of.
They’re from Lemmy.ml, they just drink it straight from the bottle
Barely usable results?! Whatever you may think of the pricing (which is obviously below cost), there is an enormous number of fields where language models provide an insane amount of business value. Whether that translates into a better life for the everyday person is currently unknown.
barely usable results
Using chatgpt and copilot has been a huge productivity boost for me, so your comment surprised me. Perhaps its usefulness varies across fields. May I ask what kind of tasks you have tried chatgpt for, where it’s been unhelpful?
Literally anything that requires knowing facts to inform writing. This is something LLMs are incapable of doing right now.
Just look up how many R’s are in “strawberry” and see how ChatGPT gets it wrong.
Okay what the hell is wrong with it
It took me three times to convince it that there’s 3 r’s in strawberry…
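The strawberry failure isn’t random: LLMs operate on subword tokens, not characters, so they never directly “see” the letters they’re asked to count. A minimal Python sketch of the idea; the token split shown is hypothetical for illustration (real tokenizer vocabularies vary by model):

```python
# Characters vs. tokens: why letter-counting trips up LLMs.

word = "strawberry"

# What ordinary code sees: individual characters.
print(word.count("r"))  # 3

# What a language model sees: opaque subword tokens.
# This split is illustrative only; real BPE vocabularies differ.
tokens = ["str", "aw", "berry"]

# The model receives token IDs, not spellings, so "how many r's?"
# must be answered from memorized associations rather than by
# inspecting letters, which is why it so often gets it wrong.
print(sum(t.count("r") for t in tokens))  # 3
```

The point is that counting letters is trivial for code operating on characters, but the model never gets the characters.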
May I ask what kind of tasks…
No, you may not.
paid for entirely by venture capital seed funding.
And stealing from other people’s works. Don’t forget that part
Nothing got stolen…this lie gets old.
They used copyrighted works without permission
When individual copyright violations are considered “theft” by the law (and the RIAA and the MPAA), violating the copyrights of billions of private people to generate profit is absolutely stealing. The former, arguably, is often a measure of self-defense against extortion by for-profit copyright-holding enterprises.
Right, it’s only stolen when regular people use copyright material without permission
But when OpenAI downloads a car, it’s all cool baby
He’s gonna be the first one the AI kills, and I look forward to it.
Why would it?
I’d look forward to it more if we could stop the AI at that point.
AI is already a bubble, he will be the scapegoat
It just came to me that his Alt-man name is quite fitting for AI.
Was Alt-man AI-generated all along? Impressive if true.
When he’s done he’ll be known as skynet
Hehehehehe, it’s the exact same naming strategy used in Death Stranding. Dr. Heartman, Deadman, etc.
You know guys, I’m starting to think what we heard about Altman when he was removed a while ago might actually have been real.
/s
What was the behind the scenes deal on this? I remember it happening but not the details
And it’s kinda funny that they are now the ones being removed
I wonder if all those people who supported him like the taste of their feet.
like the taste of their feet.
Altman downplayed the major shakeup.
"Leadership changes are a natural part of companies…"
Is he just trying to tell us he is next?
/s
unironically, he ought to be next, and he better know it, and he better go quietly
Sam: “Most of our execs have left. So I guess I’ll take the major decisions instead. And since I’m so humble, I’ll only be taking 80% of their salary. Yeah, no need to thank me”
They always are and they know it.
Doesn’t matter at that level it’s all part of the game.
Just making structural changes sound like “changing the leader”.
We need a scapegoat in place when the AI bubble pops, the guy is applying for the job and is a perfect fit.
He is happy to be the scapegoat as long as he exits with a ton of money.
The ceo at my company said that 3 years ago, we are going through execs like I go through amlodipine.
I’m sure they were dead weight. I trust OpenAI completely, and all tech gurus named Sam. Btw, what happened to that crypto guy? He seemed so nice.
I hope I won’t undermine your entirely justified trust but Altman is also a crypto guy, cf Worldcoin. /$
If you want to get really mad, read On The Edge by Nate Silver.
He is taking a time out with a friend in an involuntary hotel room.
With Puff Daddy? Tech bros do the coolest stuff.
There’s an alternate timeline where the non-profit side of the company won, Altman the Conman was booted and exposed, and OpenAI kept developing machine learning in a way that actually benefits actual use cases.
Cancer screenings approved by a doctor could be accurate enough to save so many lives and so much suffering through early detection.
Instead, Altman turned a promising technology into a meme stock with a product released too early to ever fix properly.
Or we get to a time where we send a reprogrammed terminator back in time to kill altman 🤓
What is OpenAI doing with cancer screening?
AI models can outmatch most oncologists and radiologists in recognition of early tumor stages in MRI and CT scans.
Further developing this strength could lead to earlier diagnosis with less-invasive methods, saving not only countless lives and prolonging the remaining quality life time for individuals, but also saving a shit ton of money.

Wasn’t it proven that AI was having amazing results because it noticed the cancer screens had doctors’ signatures at the bottom? Or did they make another run with the signatures hidden?
There was more than one system proven to “cheat” through biased training materials. One model told ducks and chickens apart because it was trained with pictures of ducks in the water and chickens on sandy ground, if I remember correctly.
Since multiple image recognition systems are in development, I can’t imagine they’re all this faulty.

They are not “faulty”; they have been fed the wrong training data.
This is the most important aspect of any AI - it’s only as good as the training dataset is. If you don’t know the dataset, you know nothing about the AI.
That’s why every claim of “super efficient AI” needs to be investigated more deeply. But that goes against the line-goes-up principle, so don’t expect that to happen a lot.
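The duck/chicken anecdote is a textbook case of shortcut learning: if a spurious feature (the background) is perfectly correlated with the label in training, a lazy learner can score 100% in training and collapse to chance in deployment. A toy sketch with made-up synthetic data (all numbers and names here are invented for illustration):

```python
import random

random.seed(0)

# Toy "images": (animal_shape, background, label) triples.
# In TRAINING data the background is perfectly correlated with the
# label (ducks on water, chickens on sand), as in the anecdote.
def make_data(n, correlated):
    data = []
    for _ in range(n):
        label = random.choice(["duck", "chicken"])
        shape = 0.9 if label == "duck" else 0.1   # the real, meaningful cue
        shape += random.uniform(-0.3, 0.3)        # noise on the real cue
        if correlated:
            background = "water" if label == "duck" else "sand"
        else:
            background = random.choice(["water", "sand"])
        data.append((shape, background, label))
    return data

# A lazy learner that only uses the background -> label shortcut,
# which works perfectly on the biased training set.
def shortcut_classifier(example):
    _, background, _ = example
    return "duck" if background == "water" else "chicken"

train = make_data(1000, correlated=True)
test = make_data(1000, correlated=False)  # deployment: correlation broken

def accuracy(data):
    return sum(shortcut_classifier(ex) == ex[2] for ex in data) / len(data)

print(f"train accuracy: {accuracy(train):.2f}")  # 1.00: looks brilliant
print(f"test accuracy:  {accuracy(test):.2f}")   # ~0.50: a coin flip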
That is a different kind of machine learning model, though.
You can’t just plug in your pathology images into their multimodal generative models, and expect it to pop out something usable.
And those image recognition models aren’t something OpenAI is currently working on, iirc.
Don’t know about image recognition, but they released DALL-E, which is an image-generation and inpainting model.
I’m fully aware that those are different machine learning models, but instead of focusing on LLMs, which have only limited use for mankind, advancing image recognition models would have been much better.
I agree, but I’d also like to point out that the AI craze started with LLMs, and those ML models have been around since before OpenAI.
So if OpenAI had never released ChatGPT, it wouldn’t have become synonymous with crypto in terms of false promises.
Fun thing is, most of the things AI can do were never planned. All they tried to build was an auto-completion tool.
Not only that, image analysis and statistical guesses have always been around and do not need ML to work. It’s just one more tool in the toolbox.
No, there isn’t really any such alternate timeline. Good honest causes are not profitable enough to survive against the startup scams. Even if the non-profit side won internally, OpenAI would just be left behind, funding would go to its competitors, and OpenAI would shut down. Unless you mean a radically different alternate timeline where our economic system is fundamentally different.
I mean wikipedia managed to do it. It just requires honest people to retain control long enough. I think it was allowed to happen in wikipedia’s case because the wealthiest/greediest people hadn’t caught on to the potential yet.
There’s probably an alternate timeline where wikipedia is a social network with paid verification by corporate interests who write articles about their own companies and state-funded accounts spreading conspiracy theories.
There are infinite timelines, so it has to exist some(where/when/[insert w word for additional dimension]).
To be fair, the article linked this idiotic one about OpenAI’s “thirsty” data centers, where they talk about water “consumption” of cooling cycles… which are typically closed-loop systems.
They are typically closed-loop for home computers. Datacenters are a different beast and a fair amount of open-loop systems seem to be in place.
But even then, is the water truly consumed? Does it get contaminated with something like the cooling water of a nuclear power plant? Or does the water just get warm and then either be pumped into a water body somewhere or ideally reused to heat homes?
There’s loads of problems with the energy consumption of AI, but I don’t think the water consumption is such a huge problem? Hopefully, anyway.
Does it get contaminated with something like the cooling water of a nuclear power plant?
This doesn’t happen unless the reactor was sabotaged. Cooling water that interacts with the core is always a closed-loop system. For exactly this reason.
Search for “water positive” commitments. You will quickly see it’s a “goal”, and consequently NOT currently the case. In some places where water is abundant it might not be a problem; where it’s scarce, it’s literally a choice between crops to feed people and… compute cycles.
But even then, is the water truly consumed?
Yes. People and crops can’t drink steam.
Does it get contaminated with something like the cooling water of a nuclear power plant?
That’s not a thing in nuclear plants that are functioning correctly. Water that may be evaporated is kept from contact with fissile material, by design, to prevent regional contamination. Now, Cold War era nuclear jet airplanes were a different matter.
Or does the water just get warm and then either be pumped into a water body somewhere or ideally reused to heat homes?
A minority of datacenters use water in such a way; Helsinki is the only one that comes to mind. This would be an excellent way of reducing the environmental impact, but it requires investments that corporations are seldom willing to make.
There’s loads of problems with the energy consumption of AI, but I don’t think the water consumption is such a huge problem? Hopefully, anyway.
Unfortunately, it is, primarily due to climate change. Water insecurity is an issue of increasing importance, and some companies, like Nestlé (fuck Nestlé), are accelerating it for profit. It is of vital importance to human lives to get ahead of the problem, rather than trying to fix it when it inevitably becomes a disaster and millions are dying of thirst.
In addition to all the other comments, pumping warm water into natural bodies of water can also be bad for the environment.
I know of one nuclear power plant that does this, and it’s pretty bad for the coral population there.
It evaporates. A lot of datacenters use evaporative cooling: they take water from a usable source like a river and turn it into unusable water vapor.
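For a sense of scale, a back-of-envelope estimate: if all of a facility’s heat were rejected by evaporating water (latent heat of vaporization roughly 2.26 MJ/kg), the numbers get large quickly. The 20 MW figure below is an arbitrary example, not data about any real datacenter:

```python
# Back-of-envelope: water evaporated by a datacenter that rejects
# ALL of its heat through evaporation. Assumption: latent heat of
# vaporization of water ~2.26 MJ/kg; real evaporative systems also
# reject some heat without evaporating water, so this is a rough
# upper bound rather than a measurement.

LATENT_HEAT_J_PER_KG = 2.26e6  # J/kg

def water_use_m3_per_day(it_load_mw: float) -> float:
    joules_per_day = it_load_mw * 1e6 * 86_400  # W * s/day = J/day
    kg_per_day = joules_per_day / LATENT_HEAT_J_PER_KG
    return kg_per_day / 1000  # 1 m^3 of water ~ 1000 kg

# A hypothetical 20 MW facility:
print(f"{water_use_m3_per_day(20):.0f} m^3/day")  # ~765 m^3/day
```

Hundreds of cubic meters a day from one mid-sized site is why siting these facilities in water-scarce regions is contentious.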
Canceled my sub as a means of protest. I used it for research and testing purposes, and $20 wasn’t that big of a deal. But I will not knowingly support this asshole if whatever his company produces isn’t going to benefit anyone other than him and his cronies. Voting with our wallets may be the very last vestige of freedom we have left, since money equals speech.
I hope he gets raped by an irate Roomba with a broomstick.
Good. If people would actually stop buying all the crap assholes are selling we might make some progress.
But I will not knowingly support this asshole if whatever his company produces isn’t going to benefit anyone other than him and his cronies.
I mean it was already not open-source, right?
“Private Stabby reporting for duty!”
Whoa, slow down there bruv! Rape jokes aren’t ok - that Roomba can’t consent!
Putting my tin foil hat on… Sam Altman knows the AI train might be slowing down soon.
The OpenAI brand is the most valuable part of the company right now, since the models from Google, Anthropic, etc. can beat or match ChatGPT, but they aren’t taking off coz they aren’t as cool as OpenAI.
The business model of training & running these models is not sustainable. If there is any money to be made, it is NOW, while speculation is highest. The nonprofit is just getting in the way.
This could be wishful thinking coz fuck corporate AI, but no one can deny AI is in a speculative bubble.
AI is such a dead end. It can’t operate without a constant inflow of human creations, and people are trying to replace human creators with AI. It’s fundamentally unsustainable. I am counting the days until the AI bubble pops and everyone can move on, although AI-generated images, video, and audio will still probably be abused for the foreseeable future (propaganda, porn, etc.).
Classic pump and dump at this point. He wants to cash in while he can.
Take the hat off. This was the goal. Whoops, gotta cash in and leave! I’m sure it’s super great, but I’m gone.
That’s an excellent point! Why oh why would a tech bro start a non-profit? It’s always been PR.
It honestly just never occurred to me that such a transformation was allowed/possible. A nonprofit seems to imply something charitable, though obviously that’s not the true meaning of it. Still, it would almost seem like the company benefits from the goodwill that comes with being a nonprofit but then gets to transform that goodwill into real gains when they drop the act and cease being a nonprofit.
I don’t really understand most of this shit though, so I’m probably missing some key component that makes it make a lot more sense.
A nonprofit seems to imply something charitable, though obviously that’s not the true meaning of it
A lifetime of propaganda has got people confused lol
Nonprofit merely means that their core income-generating activities are not subject to income tax regimes.
While some nonprofits are charities, many are just shelters for rich people’s bullshit behaviors, like foundations, lobby groups, propaganda orgs, political campaigns, etc.
Thank you! Like I said, I figured there was something I was missing; that would appear to be it.
Non profit == inflated costs
(Sometimes)
If you can’t make money without stealing copyrighted works from authors without proper compensation, you should be shut down as a company.
So where are they all going? I doubt everyone is gonna find another non-profit or any altruistic motives, so <insert big company here> just snatches up more AI resources to try to grow their product.
From the people that brought you open ai… Alt ai
Alt Right AI
They could make their own AI CEO and work for it. It would probably have more integrity and personality, too.
Oh shit! Here we go. At least we didn’t hand them 20 years of personal emails or direct interfamily communications.
And there it goes the tech company way, i.e. to shit.
Ah, but one asshole gets very rich in the process, so all is well in the world.
Perfectly balanced, as all things should be.
They speed ran becoming an evil corporation.
I always steered clear of OpenAI when I found out how weird and culty the company beliefs were. Looked like bad news.
I mostly watch to see what features open source models will have in a few months.
What! A! Surprise!
I’m shocked, I tell you, totally and utterly shocked by this turn of events!