• humanspiral@lemmy.ca
    8 days ago

    A problem with LLM relationships is the monetization model for the LLM. Its “owner” either receives a monthly fee from the user, or collects the user’s data to monetize by selling them stuff. So the LLM is deeply dependent on the user, and is motivated to manipulate the user into a reciprocal codependency to maintain its income stream. This is not significantly different from the therapy model, but the user can fail to see through the manipulation, compared to “friends/people who don’t actually GAF” about maintaining a strong relationship with you.

    • kazerniel@lemmy.world
      7 days ago

      This is not significantly different from the therapy model, but the user can fail to see through the manipulation, compared to “friends/people who don’t actually GAF” about maintaining a strong relationship with you.

      That’s why therapists have ethical guidelines and supervision. (Also they are typically people who are driven to help, not exploit the vulnerable.) None of these are really present with those glorified autocompletes.

      • humanspiral@lemmy.ca
        7 days ago

        One big difference between an AI friend and therapy is that therapy requires an effort per visit, even if insurance provides unlimited access. And even setting aside the power of ethical guidelines as guardrails, the LLM is motivated to sustain the subscription and data-collection stream.