• kalleboo@lemmy.world · 3 days ago

    They literally don’t know. “GPT-5” is several models, with a routing model in front that chooses which one to use depending on how “hard” it thinks the question is. They’ve already been tweaking the front-end to change how it cuts over, and they’re definitely going to keep changing it.
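
A hypothetical sketch of that kind of gating (the model names, the heuristic, and the threshold are all invented for illustration; the real gate is presumably a learned classifier):

```python
# Toy difficulty-based router: a cheap "hardness" score picks a backend model.
def estimate_hardness(prompt: str) -> float:
    # Stand-in heuristic: longer prompts and reasoning keywords score higher.
    signals = ["prove", "derive", "step by step", "algorithm"]
    score = 0.1 * len(prompt.split()) / 20
    score += sum(0.3 for s in signals if s in prompt.lower())
    return min(1.0, score)

def route(prompt: str, threshold: float = 0.5) -> str:
    # Cheap queries go to the small model, "hard" ones to the big one.
    if estimate_hardness(prompt) >= threshold:
        return "big-reasoning-model"
    return "small-fast-model"

print(route("hi"))                                      # small-fast-model
print(route("prove this theorem step by step please"))  # big-reasoning-model
```

Tweaking where the cutover happens is then just a matter of moving `threshold`, which would explain why behavior keeps shifting.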

  • Transtronaut@lemmy.blahaj.zone · 3 days ago

    If anyone has ever wondered what it would look like if tech giants went all in on “brute force” programming, this is it. This is what it looks like.

  • devfuuu@lemmy.world · 3 days ago

    How can anyone look at that face and trust anything that madman has to say?

  • Saledovil@sh.itjust.works · 3 days ago

    It’s safe to assume that any metric they don’t disclose is quite damning to them. Plus, these guys don’t really care about the environmental impact, or what us tree-hugging environmentalists think. I’m assuming the only group they are scared of upsetting right now is investors. The thing is, even if you don’t care about the environment, the problem with LLMs is how poorly they scale.

    An important concept when evaluating how something scales is marginal values, chiefly marginal utility and marginal expense. Marginal utility is how much utility you get from one more unit of whatever; marginal expense is how much it costs to get one more unit. What an LLM produces is the probability that a token T follows a prefix Q, written P(T|Q) (read: probability of T, given Q). This is computed for every known token, and then, based on these probabilities, one token is chosen at random. That token is appended to the prefix and the process repeats, until the LLM produces a sequence which indicates that it’s done talking.
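
That sampling loop can be sketched with a toy probability table (the numbers and vocabulary are illustrative only; a real LLM computes P(T|Q) with a neural network):

```python
import random

# Toy next-token distributions: P(T | Q) for a handful of prefixes Q.
NEXT_TOKEN_PROBS = {
    (): {"the": 0.6, "a": 0.4},
    ("the",): {"cat": 0.5, "dog": 0.3, "<eos>": 0.2},
    ("the", "cat"): {"sat": 0.7, "<eos>": 0.3},
    ("the", "cat", "sat"): {"<eos>": 1.0},
    ("the", "dog"): {"<eos>": 1.0},
    ("a",): {"dog": 0.6, "<eos>": 0.4},
    ("a", "dog"): {"<eos>": 1.0},
}

def generate(max_len=10, seed=None):
    rng = random.Random(seed)
    prefix = ()
    while len(prefix) < max_len:
        probs = NEXT_TOKEN_PROBS[prefix]          # P(T | Q) for every token T
        tokens, weights = zip(*probs.items())
        token = rng.choices(tokens, weights)[0]   # pick one token at random
        if token == "<eos>":                      # model signals it's done talking
            break
        prefix = prefix + (token,)
    return " ".join(prefix)

print(generate(seed=0))
```

Note that the random draw is the point: even with perfect probabilities, an unlucky sample goes into the prefix and conditions everything after it.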

    If we now imagine the best possible LLM, the calculated value of P(T|Q) would be the actual value. However, even this already displays a limitation of LLMs: with the ideal model we’re still just a few bad dice rolls away from saying something dumb, which then pollutes the context. The larger we make an LLM, the closer its calculated P_calc(T|Q) gets to the actual P(T|Q). A potential way to measure this precision would be to take the difference between P(T|Q) and P_calc(T|Q) and count the leading zeroes of the error, essentially counting the number of digits we got right. Now, the thing is that each additional digit provides only a tenth of the utility of the digit before it, while the cost of each additional digit grows exponentially.
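
The digit-counting measure can be made concrete (a toy sketch; the function name is mine):

```python
import math

def digits_correct(p_true: float, p_calc: float):
    """Count the leading zeroes of the error |p_true - p_calc|,
    i.e. roughly how many decimal digits of P(T|Q) the model got right."""
    err = abs(p_true - p_calc)
    if err == 0:
        return math.inf  # perfect agreement
    return max(0, math.floor(-math.log10(err)))

# An error of about 2e-4 means roughly 3 digits recovered correctly.
print(digits_correct(0.3183, 0.3185))  # 3
```

Going from 3 correct digits to 4 shrinks the error tenfold, which is exactly the "each digit is worth a tenth of the previous one" observation.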

    So, exponentially decaying marginal utility meets exponentially growing marginal expenses. Which is really bad for companies that try to market LLMs.

    • Jeremyward@lemmy.world · 3 days ago

      Well, I mean, also that they kinda suck. I feel like I spend more time debugging AI code than getting working code out of it.

      • SkunkWorkz@lemmy.world · 3 days ago

        I only use it if I’m stuck. Even if the AI code is wrong, it often pushes me in the right direction to find the correct solution to my problem. Like pair programming, but a bit shitty.

        The best way to use these LLMs for coding is to never use the generated code directly and to atomize your problem into smaller questions you ask the LLM.

      • squaresinger@lemmy.world · 3 days ago

        That’s actually true. I read some research on that and your feeling is correct.

        Can’t be bothered to google it right now.

    • Event_Horizon@lemmy.world · 4 days ago

      I wonder if at this stage all the processors should simply be submerged into a giant cooling tank. It seems easier and more efficient.

      • IsoKiero@sopuli.xyz · 3 days ago

        Or you could build the centers in colder climate areas. Here in Finland it’s common (maybe even mandatory, I’m not sure) for new datacenters to pull the heat from their systems and use that for district heating. No wasted water and at least you get something useful out of LLMs. Obviously using them as a massive electric boiler is pretty inefficient but energy for heating is needed anyways so at least we can stay warm and get 90s action series fanfic on top of that.

          • IsoKiero@sopuli.xyz · 3 days ago

            There are experimental storage systems where heat is pumped into underground pools or sand, but as far as I know there are also heat exchangers and radiators to the outside, so the majority of excess heat is just wasted outdoors. But the absolute majority of them are closed-loop systems, since you need something other than plain water anyway to prevent freezing in the winter.

    • Tollana1234567@lemmy.today · 4 days ago

      Those are his lying/making-things-up hand gestures. It’s the same thing Trump does with his hands when he’s lying or exaggerating: he does the weird accordion hands.

  • threeduck@aussie.zone · 4 days ago

    All the people here chastising LLMs for resource wastage, I swear to god if you aren’t vegan…

    • Bunbury@feddit.nl · 3 days ago

      Whataboutism isn’t useful. Nobody is living the perfect life. Every improvement we can make towards a more sustainable way of living is good. Everyone needs to start somewhere and even if they never move to make more changes at least they made the one.

        • Bunbury@feddit.nl · edited · 3 days ago

          I did a quick calculation and got around 500 queries per quarter pounder. Lots of guesstimation and rounding, but I’m pretty sure I got close enough to know that you’re off by quite a lot.

          Edit to add: I used 21.9kg CO2 per 1kg of beef and 4.32 grams per ChatGPT query for my rough estimate.

          However, that 4.32 figure is already over a year old. Chances are it’s way outdated, but everyone still keeps quoting it. It definitely does not take into account that ChatGPT often “thinks” now, because chain of thought is likely as expensive as multiple queries by itself. Additionally, the models are more advanced than a year ago, but also more costly, and the CO2 figure everyone keeps quoting doesn’t even mention which model was used. If anyone can find the original source of this number I’d be very curious.
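
For what it’s worth, the arithmetic can be re-run directly. The per-kg and per-query CO2 figures are the ones quoted above; the ~100 g of cooked beef per quarter pounder is my own assumption (a quarter pound is about 113 g pre-cooked):

```python
# Back-of-the-envelope: how many ChatGPT queries equal one quarter pounder?
CO2_PER_KG_BEEF_G = 21.9 * 1000   # g CO2e per kg of beef (figure quoted above)
CO2_PER_QUERY_G = 4.32            # g CO2e per ChatGPT query (figure quoted above)
BEEF_PER_BURGER_KG = 0.1          # assumed ~100 g of beef per burger

burger_co2_g = CO2_PER_KG_BEEF_G * BEEF_PER_BURGER_KG
queries_per_burger = burger_co2_g / CO2_PER_QUERY_G
print(round(queries_per_burger))  # 507, i.e. "around 500 queries per burger"
```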

    • Saledovil@sh.itjust.works · 3 days ago

      Animal agriculture has significantly better utility and scaling than LLMs, so it’s not hypocritical to be opposed to the latter but not the former.

      • threeduck@aussie.zone · 3 days ago

        The holocaust was well scaled too. Animal ag is responsible for 15–20% of the entire planet’s GHG emissions. You can live a healthier, more morally consistent life if you give up meat.

          • thatcrow@ttrpg.network (banned) · 3 days ago

            Not really. It shows that he used scale in the appropriate manner.

            I think you’re just grasping at straws by saying what sounds nice in your head instead of engaging in a legitimate argument.

            • Saledovil@sh.itjust.works · 3 days ago

              I said “scaling”, not “scale”. “Scaling” refers to how the output and expenses of an enterprise behave as it becomes bigger or smaller. Threeduck seems to think it means “big”, and then immediately brings up the holocaust for some reason.

              Though, the term is broad, hence why I asked them about how they interpret the term.

    • lowleekun@ani.social · 3 days ago

      Dude, wtf?! You can’t just go around pointing out people’s hypocrisy. Companies killing the planet is big bad.

      People joining in? Dude just let us live!! It is only animals…

      big /s

    • UnderpantsWeevil@lemmy.world · 4 days ago

      I mean, they’re both bad.

      But also, “Throw that burger in the trash I’m not eating it” and “Uninstall that plugin, I’m not querying it” have about the same impact on your gross carbon emissions.

      These are supply-side problems in industries that receive enormous state subsidies. Hell, the single biggest improvement to our agriculture policy was when China stopped importing US pork products. So, uh… once again, thank you China for saving the planet.

      • 3abas@lemmy.world · 4 days ago

        It’s not, you’re just personally insulted. The livestock industry is responsible for about 15% of human caused greenhouse gas emissions. That’s not negligible.

        • zbyte64@awful.systems · 3 days ago

          you’re just personally insulted.

          I swear to God this attitude is why people don’t like what you’re saying. I am all for weighing the two against each other but the “I am more moral than thou” is why I left the church.

        • k0e3@lemmy.ca · 3 days ago

          So, I can’t complain about any part of the remaining 85% if I’m not vegan? That’s so fucking stupid. Do you not complain about microplastics because you’re guilty of using devices with plastic in them to type your message?

          • 3abas@lemmy.world · 3 days ago

            Yes, I’m a piece of shit for using a phone made by a capitalist corporation that contributes to harming the planet. I don’t deny that I live in a horrible society that forces me to be a bad human just to survive.

            I also don’t call people stupid for telling me my device is bad for the environment. I still eat meat, I’m not a vegan, but I understand and completely agree that it’s terrible for the environment. By recognizing it, I can be conscious of my consumption and reduce it.

            I also use LLMs conservatively, I use them where they add value and I don’t use them frivolously to generate shitty AI slop.

            I’m conscious of its dangers and that drives my consumption of it.

            But I don’t pick and choose. I don’t eat animal products three meals a day and bitch about someone using an LLM to edit a file instead of manually working on it for five hours.

            Just be consistent is the message they were communicating, not that you shouldn’t complain about 85%.

            • k0e3@lemmy.ca · 3 days ago

              Same, I’m very aware that my selfish actions cause harm to the environment and I do try to be conservative about meat, electricity, and water usage. I don’t even own a car.

              But “I swear to God, if you aren’t vegan,” which is what OP said, is hardly the same as “keep it consistent.” It feels like they’re telling us both that our efforts are pointless because we aren’t vegan. They could have said, try cutting meat from your diet to help more, or give veganism a thought. It comes off as insufferably arrogant, you know?

              I’ll end my rant now, haha. Sorry.

              • 3abas@lemmy.world · 3 days ago

                They were trying to be funny, don’t be too literal.

                I think (that’s how I interpreted it, I don’t know) the intent was to reflect the insufferably arrogant tone of most people who exclusively complain about AI as if it has no benefits and will be the sole destroyer of our society.

          • thatcrow@ttrpg.network (banned) · edited · 3 days ago

            You can, but you also open yourself up to criticism of your hypocrisy.

            Right now, your cognitive dissonance is flaring up because you have to simultaneously criticize someone for contributing to a problem while contributing to it yourself.

            I’m not even vegan, but I absolutely love watching meat-eaters squirm because it reinforces my notion that the average person is a complete idiot and should not be taken seriously.

            • k0e3@lemmy.ca · 3 days ago

              I’m very much aware of my hypocrisy, but I do my best in other ways to help.

              If people like OP are going to be dismissive of my efforts, then I guess they’re free to think that, just like I’m free to think that kind of gatekeeping is stupid.

          • threeduck@aussie.zone · 3 days ago

            Imagine complaining about someone tipping out fresh water while eating a burger, when a single kilo of beef uses between 15,000 and 200,000 liters.

            Like, until you stop doing the worst thing a single consumer can do to the planet, for literally nothing but greed and pleasure (eating meat instead of healthier alternatives), you have no leg to stand on.

      • stratoscaster@lemmy.world · 4 days ago

        What is it with vegans and comparing literally everything to veganism? I was in another thread and it was compared to genocide, rape, and climate change all in the same thread. Insanity

  • fuzzywombat@lemmy.world · 4 days ago

    Sam Altman has gone into PR and hype overdrive lately. He is practically everywhere, trying to distract the media from seeing the truth about LLMs. GPT-5 has basically proved that we’ve hit a wall and that the belief that LLMs will just scale linearly with the amount of training data is false. He knows the AI bubble is bursting and he is scared.

    • rozodru@lemmy.world · 3 days ago

      Bingo. If you routinely use LLMs/AI you’ve seen it first hand recently: all of them have become noticeably worse over the past few months. Even when used as a basic tool, it’s worse. Claude, for all the praise it receives, has also gotten worse. I’ve noticed it starting to forget context or constantly contradicting itself, even Claude Code.

      The release of GPT-5 is proof that a wall has been hit and the bubble is bursting. There’s nothing left to train on, and all the LLMs have been consuming each other’s waste as a result. I’ve talked about it on here several times already due to my work, but companies are also seeing this. They’re scrambling to undo the fuck-up of using AI to build their stuff. None of what they used it to build scales. None of it. And you go on LinkedIn and see all the techbros desperately trying to hype the mounds of shit that remain.

      I don’t know what’s next for AI but this current generation of it is dying. It didn’t work.

      • Tja@programming.dev · 3 days ago

        Any studies about this “getting worse”, or just anecdotes? I do routinely use them and I feel they are getting better (my workplace uses the Google suite, so I have access to Gemini). Just last week it helped me debug an IPv6 RA problem that I couldn’t crack, and I learned a few useful commands along the way.

      • BluesF@lemmy.world · 3 days ago

        I was initially impressed by the ‘reasoning’ features of LLMs, but most recently ChatGPT gave me a response to a question in which it stated five or six possible answers separated by “oh, but that can’t be right, so it must be…”, and none of them was right lmao. It thought for like 30 seconds to give me a selection of wrong answers!

    • Saledovil@sh.itjust.works · 3 days ago

      He’s also already admitted that they’re out of training data. If you’ve wondered why a lot more websites will run some sort of verification when you connect, it’s because there’s a desperate scramble to get more training data.

    • Tollana1234567@lemmy.today · 4 days ago

      MS already reported that their AI doesn’t make money at all; in fact it’s costing too much. Of course he’s freaking out.

    • Saledovil@sh.itjust.works · 3 days ago

      Current genAI? Never. There’s at least one breakthrough needed to build something capable of actual thinking.

    • xthexder@l.sw0.com · 4 days ago

      Most certainly it won’t happen until after AI has developed a self-preservation bias. It’s too bad the solution is turning off the AI.

  • Optional@lemmy.world · 4 days ago

    Photographer1: Sam, could you give us a goofier face?

    *click* *click*

    Photographer2: Goofier!!

    *click* *click* *click* *click*

    • cenzorrll@piefed.ca · 4 days ago

      He looks like someone in a cult. Wide open eyes, thousand yard stare, not mentally in the same universe as the rest of the world.