Wikipedia has a new initiative called WikiProject AI Cleanup. It is a task force of volunteers currently combing through Wikipedia articles, editing or removing false information that appears to have been posted by people using generative AI.

Ilyas Lebleu, a founding member of the cleanup crew, told 404 Media that the crisis began when Wikipedia editors and users began seeing passages that were unmistakably written by a chatbot of some kind.

  • e$tGyr#J2pqM8v@feddit.nl · 6 days ago

    Sabotage Wikipedia, DDoS the Internet Archive. Makes you wonder if, in the future, we’re going to forget our past. Will actual history be obscured in a sea of alternative histories, unrecognizably presented as the same thing? Maybe we need to keep some books lying around in archives just to be sure.

    • endofline@lemmy.ca · 6 days ago

      We still have Anna’s Archive, Sci-Hub, LibGen, and good old-fashioned libraries (including national ones). National libraries won’t disappear in the coming years; they may rot due to defunding, but they will still exist.

    • TachyonTele@lemm.ee · 6 days ago

      The digital dark age will be a real thing, absolutely.

      Interesting idea about a sea of alternative histories; that’s a plausible threat.
      Someone else here called it the “AI text apocalypse”. I like that term.

    • NateNate60@lemmy.world · 6 days ago

      “[The] main reasons that motivate editors to add AI-generated content: self-promotion, deliberate hoaxing, and being misinformed into thinking that the generated content is accurate and constructive,” Lebleu said.

  • randon31415@lemmy.world · 7 days ago

    If anyone can survive the AI text apocalypse, it’s Wikipedia. They have been fending off and regulating article-writing bots since someone coded up a US-town article writer from the 2000 census (not the 2010 or 2020 census, the 2000 census; that bot was writing Wikipedia articles back in 2003).

  • kibiz0r@midwest.social · 7 days ago

    Unleashing generative AI on the world was basically the information equivalent of jumping headfirst into Kessler Syndrome.

    • khannie@lemmy.world · 7 days ago

      For the uninitiated like me:

      The Kessler syndrome (also called the Kessler effect,[1][2] collisional cascading, or ablation cascade), proposed by NASA scientists Donald J. Kessler and Burton G. Cour-Palais in 1978, is a scenario in which the density of objects in low Earth orbit (LEO) due to space pollution is numerous enough that collisions between objects could cause a cascade in which each collision generates space debris that increases the likelihood of further collisions.

      Wikipedia link.

  • narc0tic_bird@lemm.ee · 7 days ago

    Best case is that the model used to generate this content was originally trained on data from Wikipedia, so it “just” generates a worse, hallucinated “variant” of the original information. Goes to show how stupid this idea is.

    Imagine this in a loop: AI trained on Wikipedia that then alters content on Wikipedia, which in turn gets picked up by the next model trained. It would just get worse and worse, similar to how re-encoding the same video over and over again yields progressively worse results.

    • 8uurg@lemmy.world · 7 days ago

      A very similar situation to the one analysed in this recently published paper. The quality of what is generated degrades significantly.

      Although they mostly investigate replacing the data with AI-generated data at each step, so I doubt the effect will be as pronounced in practice. Human writing will still be included, and even human curation of AI-generated text can skew the distribution of the training data (as the process by these editors would inevitably do, since reasonable-looking text can slip through the cracks).

      • Blaster M@lemmy.world · 7 days ago

        AI model makers are very well aware of this, and there is a move from ingesting everything to curating datasets more aggressively. Data prep is something many upstarts have no idea is critical, but everyone is learning about it, sometimes the hard way.

    • huginn@feddit.it · 7 days ago

      See also: model collapse

      (Which is more or less just regression towards the mean with more steps)
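The regression-toward-the-mean mechanic is easy to see in a toy simulation (purely illustrative, not how any real model is trained): “train” by fitting a Gaussian to data, “generate” by sampling synthetic data from the fit, then train the next generation on that output.

```python
import random
import statistics

def fit(samples):
    # "Train" a model: estimate a Gaussian from the data.
    return statistics.mean(samples), statistics.pstdev(samples)

def generate(mu, sigma, n, rng):
    # "Generate" synthetic training data from the fitted model.
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(0)
data = generate(0.0, 1.0, 50, rng)  # generation 0: "human" data
sigmas = []
for _ in range(100):  # each generation trains on the previous one's output
    mu, sigma = fit(data)
    sigmas.append(sigma)
    data = generate(mu, sigma, 50, rng)

# Tail information is lost to sampling error at every step, so the
# estimated spread tends to drift over generations instead of staying
# anchored at the true value of 1.0.
print(sigmas[0], sigmas[-1])
```

With finite samples per generation, the fitted spread performs a drifting random walk rather than recovering the original distribution, which is the collapse the paper above measures at much larger scale.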

    • Wrench@lemmy.world · 7 days ago

      Yes, this is what many of us worry the internet in general will become: AI content generated from AI trained on AI garbage.

      AI bots can trivially outpace humans.

      • kboy101222@sh.itjust.works · 7 days ago

        I was just discussing with a friend of mine how we’re rapidly approaching the dead internet. At some point, many websites will likely just be chatbots talking to other chatbots, which then get used to train further chatbots. Human-made content is already becoming harder and harder to find on algorithm-heavy websites like Reddit and Facebook’s suite of sites. The bots can easily outpace any algorithmic changes those sites might make to deter them; my FB-using family members all constantly block those weird Jesus accounts, and they still show up constantly.

      • FeelzGoodMan420@eviltoast.org · 7 days ago

        I wouldn’t know. I use Pi-hole to block all ads on my TV OS. I’m curious though, which service/app is giving you ads on pause? Do you mean like on a Roku TV where the screensaver is ads? Many TVs let you disable that (e.g. LG webOS); otherwise Pi-hole is your friend :-)

        • sugar_in_your_tea@sh.itjust.works · 7 days ago

          My TV is old enough that it doesn’t have it, I’m just talking about the general trend toward making that a thing. I’m not going to buy a TV that forces ads on me, and the fact that I have to actively look for that on my next TV is appalling.

          • FeelzGoodMan420@eviltoast.org · 7 days ago

            I have bad news for you. Literally every TV has ads now. Every. Single. One. That’s why I keep harping on Pihole. It blocks them.

            • sugar_in_your_tea@sh.itjust.works · 7 days ago

              Not the commercial grade ones, like “hospitality” TVs. They’re more expensive, but they’re also intended to be a bit more reliable as well.

              I’m worried they’ll adapt the ads to not be blockable w/ Pihole.

    • Bahnd Rollard@lemmy.world · 7 days ago

      They used to be contained, every village has their idiot. Now that the internet is the global village, all the formerly isolated idiots have a place to chat.

      • sunzu2@thebrainbin.org · 7 days ago

        Amazing how these idiots are this effective…

        While us common folk can’t organize or agree on anything

        • Geobloke@lemm.ee · 7 days ago

          Most of us do something idiotic once and, when the opportunity to do it again arises, pull back and think, “this was embarrassing last time, maybe I’ll re-evaluate.”

          But a dedicated idiot is a different beast: full of confidence, and having had whatever organ produces shame surgically removed, enabling them to commit ever greater acts of idiocy. But then the internet was invented and these people met. Some even had babies. And now there is an arms race to see how many idiots can squeeze through the same tiny door. They have recognised their time to shine and seized it with their clammy yet also sticky hands.

          Truly, it’s inspiring in its own special way

  • RubberDuck@lemmy.world · 7 days ago

    Require anyone who wants to add content to pay a small amount to the Wikimedia Foundation to activate their account, and refund it once they’ve done a certain amount of moderation.

    • aubertlone@lemmy.world · 7 days ago

      Yeah, I mean, I’ve had minor edits reverted because I didn’t source the fact properly.

      And that was like 10 years ago. I’m surprised these edits are getting through in the first place.

      • Shdwdrgn@mander.xyz · 7 days ago

        Seems like that would be an easy problem to solve: require all edits to be peer-reviewed by someone with a minimum level of credibility before they go live. I can understand that when Wikipedia was new, allowing anyone to post edits or new content helped them get going. But now? Why do they still allow any random person to post edits without a minimal amount of verification? Sure, it self-corrects given enough time, but meanwhile what happens to all the people looking for factual information and finding trash?

        • sugar_in_your_tea@sh.itjust.works · 7 days ago

          Or at least give it a certain amount of time before it goes live. So if nobody comes around to approve it in 24 hours, it goes live.

          Usually bad edits are corrected within hours, if not minutes, so that should catch the lion’s share w/o bogging down the approval queue too much.
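That timed-approval rule is simple enough to sketch (the names and the 24-hour window here are hypothetical, not Wikipedia’s actual pending-changes system): an edit goes live if a reviewer approves it, or automatically once the review window expires with no decision.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REVIEW_WINDOW = timedelta(hours=24)

@dataclass
class PendingEdit:
    content: str
    submitted_at: datetime
    approved: bool = False

def is_live(edit: PendingEdit, now: datetime) -> bool:
    # Live if a reviewer approved it, or if the review window
    # lapsed with no decision (auto-publish).
    return edit.approved or (now - edit.submitted_at) >= REVIEW_WINDOW

now = datetime(2024, 10, 10, 12, 0, tzinfo=timezone.utc)
fresh = PendingEdit("fix typo", now - timedelta(hours=1))
stale = PendingEdit("add section", now - timedelta(hours=25))
vetted = PendingEdit("update stats", now - timedelta(hours=2), approved=True)

print(is_live(fresh, now))   # False: still awaiting review
print(is_live(stale, now))   # True: window expired, auto-published
print(is_live(vetted, now))  # True: approved by a reviewer
```

The design choice is that the queue fails open: an unstaffed review queue delays edits by at most 24 hours instead of blocking them forever.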

        • RubberDuck@lemmy.world · 7 days ago

          Crowdsourcing is the strength that built the vast resource, and also the weakness on display here. So there will probably be a need for some form of barrier. Hence my suggestion.