• sudneo@lemm.ee
    3 days ago

    There is a bunch of research showing that model improvement is marginal compared to the energy demand and/or the amount of training data. OpenAI itself, about a month ago, mentioned that they are seeing smaller improvements in Orion (I believe) over GPT-4 than there were between GPT-4 and GPT-3. We are also running out of quality data to use for training.

    Essentially what I mean is that the big improvements we saw in the past seem to be over; now improving a little costs a lot. Considering that the costs are exorbitant and the gains are small, it’s not impossible to imagine that companies will eventually give up if they can’t monetize this stuff.

    • theherk@lemmy.world
      2 days ago

      Surely you can see there is a difference between marginal improvement with respect to energy and not improving.

    • icecreamtaco@lemmy.world
      2 days ago

      Compare Llama 1 to the current state-of-the-art local AIs. They’re on a completely different level.

      • sudneo@lemm.ee
        1 day ago

        Yes, because at the beginning there was tons of room for improvement.

        I mean, take OpenAI’s word for it: ChatGPT 5 is not seeing the improvement over 4 that 4 saw over 3, and it’s costing a fortune and taking forever. A logarithmic curve, it seems. Also, if we run out of data to train on, that’s it.