Am I missing something? The article seems to suggest it works via hidden text characters. Has OpenAI never heard of pasting text into a utf8 notepad before?
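
If it really did work via hidden text characters, stripping them would be trivial. Rough sketch of what I mean — the character handling here is my own assumption, not whatever OpenAI actually embeds:

```python
# Rough sketch: strip zero-width / invisible format characters that a
# hidden-character watermark would have to rely on. Illustrative only,
# not an inventory of any vendor's actual scheme.
import unicodedata

def strip_hidden(text: str) -> str:
    # Drop anything in Unicode category Cf (format controls), which covers
    # zero-width space, zero-width joiner/non-joiner, word joiner, BOM, etc.
    return "".join(ch for ch in text if unicodedata.category(ch) != "Cf")

print(strip_hidden("wat\u200bermarked\u200d text"))  # -> "watermarked text"
```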

  • PenisDuckCuck9001@lemmynsfw.com (OP) · 3 months ago

    So if you're cheating on homework, use self-hosted models only, then. Cool. I mean, they can't possibly apply that algorithm to every model on Hugging Face, especially if I don't tell anyone which one I use. I'm done with school after this semester anyway; I feel sorry for everyone in the future who has to complete assignments in the age of AI warfare.

    • brucethemoose@lemmy.world · 3 months ago

      You have full control of your logit outputs with local LLMs, so theoretically you could “unscramble” them. And any finetuning would just blow that bias away anyway.
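
      Minimal sketch of what I mean, using a local model through Hugging Face transformers — the model name ("gpt2"), the +2.0 bias, and the green list are all made-up stand-ins, not any vendor's real scheme:

      ```python
      # With a local model you hold the raw next-token logits, so a fixed
      # "green list" logit bonus (the usual text-watermark trick) can in
      # principle be subtracted right back out before sampling.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_name = "gpt2"  # stand-in for whatever model you actually run
      tok = AutoTokenizer.from_pretrained(model_name)
      model = AutoModelForCausalLM.from_pretrained(model_name)

      ids = tok("The homework answer is", return_tensors="pt").input_ids
      with torch.no_grad():
          logits = model(ids).logits[0, -1]  # raw logits over the vocab

      # Hypothetical watermark: +2.0 bonus on a pseudo-random half of the vocab.
      green = torch.randperm(logits.numel())[: logits.numel() // 2]
      watermarked = logits.clone()
      watermarked[green] += 2.0

      # Because the logits are visible, the exact same bias can be undone.
      unscrambled = watermarked.clone()
      unscrambled[green] -= 2.0

      print(tok.decode([int(torch.argmax(unscrambled))]))
      ```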

      OpenAI (IIRC) very notably stopped exposing the logprobs of their models. They did this for many reasons, most of which boil down to “profits” and “they are anticompetitive jerks,” but another reason is that it enables watermarking methods just like this.
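
      For reference, this is roughly how a green-list watermark detector works (in the style of Kirchenbauer et al. 2023 — the seeding, GAMMA, and vocab size below are assumptions, definitely not OpenAI's actual method):

      ```python
      # Kirchenbauer-style detection sketch: count tokens that land on the
      # green list seeded by their predecessor, then z-test against chance.
      import hashlib
      import math
      import random

      GAMMA = 0.5          # assumed fraction of the vocab marked "green"
      VOCAB_SIZE = 50257   # assumed GPT-2-style vocabulary size

      def green_list(prev_token_id: int) -> set[int]:
          seed = int(hashlib.sha256(str(prev_token_id).encode()).hexdigest(), 16)
          rng = random.Random(seed)
          return set(rng.sample(range(VOCAB_SIZE), int(GAMMA * VOCAB_SIZE)))

      def z_score(token_ids: list[int]) -> float:
          hits = sum(
              cur in green_list(prev)
              for prev, cur in zip(token_ids, token_ids[1:])
          )
          n = len(token_ids) - 1
          return (hits - GAMMA * n) / math.sqrt(GAMMA * (1 - GAMMA) * n)

      # A z-score well above ~4 suggests the text was sampled with the bias on.
      ```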

      Also, the thing about this is that basically no one uses self-hosted LLMs compared to OpenAI (or really any API-hosted) LLMs.