• renegadespork@lemmy.jelliefrontier.net
    19 days ago

    If you think of LLMs as something with actual intelligence you’re going to be very unimpressed… It’s just a model to predict the next word.

    This is exactly the problem, though. They don’t have “intelligence” or any actual reasoning, yet they are constantly being used in situations that require reasoning.

    • sugar_in_your_tea@sh.itjust.works
      19 days ago

      Maybe if you focus on pro- or anti-AI sources, but if you talk to actual professionals or hobbyists solving real problems, you’ll see very different applications. If you go into it looking for problems, you’ll find them; likewise, if you go into it looking for use cases, you’ll find those.

    • Tgo_up@lemm.ee
      18 days ago

      What situations are you thinking of that require reasoning?

      I’ve used LLMs to create software I needed but couldn’t find online.

      • renegadespork@lemmy.jelliefrontier.net
        18 days ago

        Creating software is a great example, actually. Coding absolutely requires reasoning. I’ve tried using code-focused LLMs to write blocks of code, or even some basic YAML files, but the output is often unusable.

        It rarely makes syntax errors, but it will do things like reference libraries that haven’t been imported or hallucinate functions that don’t exist. It also constantly misunderstands the assignment and creates something that technically works but doesn’t accomplish the intended task.
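        To make that concrete, here’s a made-up illustration (not an actual model output) of the hallucinated-function failure mode: the generated code is syntactically valid Python, but it calls an API that simply doesn’t exist, alongside a working version.

        ```python
        import json
        import tempfile

        # A typical hallucination (hypothetical example): json has no
        # load_file() function, so this raises AttributeError at runtime
        # even though the syntax is perfectly fine:
        #
        #     data = json.load_file("config.json")
        #
        # The working equivalent uses the real API, json.load():
        def read_config(path):
            with open(path) as f:
                return json.load(f)

        # Quick demonstration with a temporary file
        with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
            f.write('{"retries": 3}')
            path = f.name

        print(read_config(path)["retries"])  # prints 3
        ```

        The point being: nothing about the broken line looks wrong at a glance, which is exactly why these errors slip past people who don’t already know the library.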