• XeroxCool@lemmy.world
    1 month ago

    Will this further fuck up the already inaccurate AI results? While I’m rooting against shitty AI usage, the general population still trusts it, and making results worse will most likely lead people to believe even more wrong stuff.

    • ladel@feddit.uk
      1 month ago

      The article says it’s not poisoning the AI data, only providing valid facts. The scraper still gets content, just not the content it was aiming for.

      Edit, quoting the article:

      “It is important to us that we don’t generate inaccurate content that contributes to the spread of misinformation on the Internet, so the content we generate is real and related to scientific facts, just not relevant or proprietary to the site being crawled.”

      • einlander@lemmy.world
        1 month ago

        The problem I see with poisoning the data is AIs being trained for law enforcement hallucinating false facts that then get used to arrest and convict people.

        • sugar_in_your_tea@sh.itjust.works
          1 month ago

          Law enforcement doesn’t convict anyone; that’s a judge’s job. If a LEO falsely arrests you, you can sue them, and the case should be pretty open-and-shut if the arrest was due to an AI hallucination. Enough of that and LEOs will stop doing it.

        • patatahooligan@lemmy.world
          1 month ago

          Law enforcement AI is a terrible idea and it doesn’t matter whether you feed it “false facts” or not. There’s enough bias in law enforcement that the data is essentially always poisoned.