• aaron@lemm.ee
    2 days ago

    I’m not going to parse this shit article. What does interference mean here? Please and thank you.

    • filister@lemmy.worldOP
      2 days ago

      That’s a very toxic attitude.

      Inference is, in principle, the process of generating the AI's response. So when you run an LLM locally, you are using your GPU only for inference.
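      For example, here is a minimal sketch of local inference, assuming the Hugging Face transformers library and a small stand-in model (not any specific model from the article):

      ```python
      # Minimal sketch: load a local causal LM and use the GPU only to
      # generate (i.e. run inference on) a response.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_name = "gpt2"  # stand-in; swap in whatever model you run locally
      tokenizer = AutoTokenizer.from_pretrained(model_name)
      model = AutoModelForCausalLM.from_pretrained(model_name).to("cuda")

      prompt = "What does inference mean?"
      inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
      # This generation step is the inference the article is talking about.
      outputs = model.generate(**inputs, max_new_tokens=50)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))
      ```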

      • aaron@lemm.ee
        2 days ago

        Yeah, I misread because I’m stupid. Thanks for replying, non-toxic man.