• kaffiene@lemmy.world
    1 month ago

    I think part of the difficulty with these discussions is that people mean all sorts of different things by “AI”. Much of the current usage treats AI = LLMs, which changes the debate quite a lot.

    • Rogers@lemmy.ml
      1 month ago

      No doubt LLMs are not the be-all and end-all. That said, especially after seeing what the next-gen ‘thinking models’ like o1 from ClosedAI OpenAI can do, even LLMs are going to get absurdly good. And they are getting faster and cheaper at a rate beyond my most optimistic guess from 2 years ago; hell, even 6 months ago.

      Even if all progress on the software side stopped tomorrow, purpose-built silicon would make them even cheaper and faster. And that purpose-built hardware is coming very soon.

      Open models are about 4–6 months behind in quality, but probably a lot closer (if not ahead) for small ~7B models that can be run locally on low/mid-end consumer hardware.

      • kaffiene@lemmy.world
        1 month ago

        I don’t doubt they’ll get faster. What I wonder is whether they’ll ever stop being so inaccurate. I feel like that’s a structural feature of the model.

        • keegomatic@lemmy.world
          1 month ago

          May I ask how you’ve used LLMs so far? I hear that type of complaint from a lot of people who have tried to use them mainly to get answers to things, or more broadly to replace their search engine — which, in my opinion, is not what they’re best suited for.