In June, the U.S. National Archives and Records Administration (NARA) gave employees a presentation and tech demo called “AI-mazing Tech-venture,” in which Google’s Gemini AI was pitched as a tool archives staff could use to “enhance productivity.” During a demo, the AI was asked questions about the John F. Kennedy assassination, according to a copy of the presentation obtained by 404 Media through a public records request.

In December, NARA plans to launch a public-facing AI-powered chatbot called “Archie AI,” 404 Media has learned. “The National Archives has big plans for AI,” a NARA spokesperson told 404 Media. “It’s going to be essential to how we conduct our work, how we scale our services for Americans who want to be able to access our records from anywhere, anytime, and how we ensure that we are ready to care for the records being created today and in the future.”

Chat logs from the presentation show that National Archives employees are concerned about the prospect of AI tools being used in archiving, a practice that is inherently concerned with accurately recording history.

One worker who attended the presentation told 404 Media, “I suspect they’re going to introduce it to the workplace. I’m just a person who works there and hates AI bullshit.”

  • RubberDuck@lemmy.world · 1 month ago

    I can see a function for AI here: scan texts, then tag and keyword them at scale. Probably this was done already, but maybe a new-gen tool can do it better and faster. I’m just wondering if they actually have people who will look at this and develop best practices, or is this just “here you go, now be more productive, we’re firing 20 percent of staff.” I know what they say they will do… but let’s see how this turns out.

    • wizardbeard@lemmy.dbzer0.com · 1 month ago

      Scanning texts is OCR and has never needed modern LLMs integrated to achieve amazing results.
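
      To be concrete, a bare-bones sketch of that kind of OCR pass (assuming the Tesseract binary plus the pytesseract and Pillow packages, and that the pages sit as PNG scans in a local folder; this is illustrative, not anyone’s actual pipeline) is only a few lines of Python:

      ```python
      # Bare-bones OCR sketch: pull plain text out of scanned page images.
      # Assumes Tesseract is installed and pytesseract/Pillow are available.
      from pathlib import Path

      import pytesseract
      from PIL import Image


      def ocr_directory(scan_dir: str) -> dict[str, str]:
          """Run Tesseract over every PNG in a directory and return page text."""
          results = {}
          for image_path in sorted(Path(scan_dir).glob("*.png")):
              page = Image.open(image_path)
              results[image_path.name] = pytesseract.image_to_string(page)
          return results


      if __name__ == "__main__":
          for name, text in ocr_directory("scans").items():
              print(name, text[:80])
      ```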

      Automated tagging gets closer, but there is a metric shit ton that can be done in that regard using incredibly simple tools that don’t use an egregious amount of energy or hallucinate.
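
      Likewise, on the tagging side, something as simple as keeping the top TF-IDF terms per document already gives you keyword tags at scale with no LLM anywhere. A rough sketch using scikit-learn (the sample documents below are made up purely for illustration):

      ```python
      # Simple automated tagging sketch: keep the top TF-IDF terms per document.
      # No LLM involved; scikit-learn's TfidfVectorizer does the heavy lifting.
      from sklearn.feature_extraction.text import TfidfVectorizer


      def top_keywords(documents: list[str], per_doc: int = 5) -> list[list[str]]:
          """Return the highest-weighted TF-IDF terms for each document."""
          vectorizer = TfidfVectorizer(stop_words="english")
          matrix = vectorizer.fit_transform(documents)
          terms = vectorizer.get_feature_names_out()
          tags = []
          for weights in matrix.toarray():
              best = weights.argsort()[::-1][:per_doc]
              tags.append([terms[i] for i in best if weights[i] > 0])
          return tags


      # Made-up sample documents, purely for illustration.
      docs = [
          "Memorandum regarding the Warren Commission investigation.",
          "Correspondence on records transfer and archival storage.",
      ]
      print(top_keywords(docs))
      ```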

      There is no way in hell that they aren’t already doing these things. The best use cases for LLMs at NARA are edge cases of things mostly covered by existing tech.

      And you and I both know this is going to give Google exclusive access to National Archive data. New training data that isn’t tainted by potentially being LLM output is an insanely valuable commodity now that the hype is dying down and algorithmic advances are slowing.

    • pandapoo@sh.itjust.works · 1 month ago

      This is why I hate everything being called AI, because nothing is AI. It’s all advanced machine learning algorithms, which each serve different purposes. That’s why I’ll say LLM, facial recognition, deepfake, etc.

      While I have no doubt that there are a lot of machine learning tools and algorithms that could greatly assist humans in archival work, Google Gemini and ChatGPT aren’t the ones that come to mind.

      • GetOffMyLan@programming.dev · 1 month ago

        Machine learning is just a specific field in AI. It’s all AI. Anything that attempts to mimic intelligence is.

        All the things you mentioned are neural networks, which are among the oldest AI techniques.

        • pandapoo@sh.itjust.works · 1 month ago

          AI used to mean sentience, or something close enough to truly mimic it, until marketing departments decided machine learning was good enough.

          I’m sorry, but a computer using matrices to determine hot dog or not hot dog, because its model has a million hot dog photos in it, is not AI.

          LLMs don’t reason. There is no intelligence, artificial or otherwise.

          It’s doing a lot of calculations in cool new ways, sometimes. But that’s what computers do, and no matter how many copilot buttons Microsoft sells, there’s no AI coming out of those laptops.

          • GetOffMyLan@programming.dev · 1 month ago

            August 31, 1955: The term “artificial intelligence” is coined in a proposal for a “2 month, 10 man study of artificial intelligence” submitted by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories).

            http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html

            This is just literally not true.

            Point 3 is what LLMs are.

            You are thinking of artificial general intelligence from sci-fi.

            “An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans…”

            This is what artificial intelligence actually means: solving problems that traditionally require intelligence.

            Pathfinding algorithms in games are AI, and have always been referred to as such. We studied them in my AI module at uni.
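
            For what it’s worth, the pathfinding we studied was nothing exotic. A toy breadth-first search over a grid (a sketch from memory, not any particular game engine’s code) is exactly the kind of thing an intro AI course files under search:

            ```python
            # Toy grid pathfinding via breadth-first search: the kind of thing
            # an introductory AI course files under "search algorithms".
            from collections import deque


            def bfs_path(grid, start, goal):
                """Return a list of (row, col) steps from start to goal, or None.

                grid: list of strings where '#' is a wall and '.' is open floor.
                """
                rows, cols = len(grid), len(grid[0])
                queue = deque([start])
                came_from = {start: None}
                while queue:
                    current = queue.popleft()
                    if current == goal:
                        path = []
                        while current is not None:
                            path.append(current)
                            current = came_from[current]
                        return path[::-1]
                    r, c = current
                    for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                        nr, nc = step
                        if 0 <= nr < rows and 0 <= nc < cols \
                                and grid[nr][nc] != "#" and step not in came_from:
                            came_from[step] = current
                            queue.append(step)
                return None


            grid = ["....",
                    ".##.",
                    "...."]
            print(bfs_path(grid, (0, 0), (2, 3)))
            ```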