Artificial intelligence is worse than humans in every way at summarising documents and might actually create additional work for people, a government trial of the technology has found.

Amazon conducted the test earlier this year for Australia’s corporate regulator, the Australian Securities and Investments Commission (ASIC), using submissions made to an inquiry. The outcome of the trial was revealed in an answer to a question on notice at the Senate select committee on adopting artificial intelligence.

The trial assessed several generative AI models before selecting one to ingest five submissions from a parliamentary inquiry into audit and consultancy firms. The most promising model, Meta’s open-source Llama2-70B, was prompted to summarise the submissions with a focus on ASIC mentions, recommendations and references to more regulation, and to include page references and context.
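
As a rough sketch of how that tasking might be assembled (ASIC has not published the actual prompt, so the wording and the `build_summary_prompt` helper below are invented purely for illustration):

```python
# Hypothetical reconstruction of the tasking described above. ASIC has not
# published the actual prompt; the wording here is invented for the sketch.

def build_summary_prompt(submission_text: str) -> str:
    """Assemble a summarisation prompt covering the focus areas the trial
    reportedly asked for: ASIC mentions, recommendations, references to
    more regulation, plus page references and context."""
    instructions = (
        "Summarise the following inquiry submission. Focus on: "
        "(1) mentions of ASIC, (2) recommendations made, and "
        "(3) references to more regulation. "
        "Include page references and surrounding context for each point."
    )
    return f"{instructions}\n\n---\n{submission_text}"

prompt = build_summary_prompt("p. 3: ASIC should expand its audit oversight ...")
```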

Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts. Then, a group of reviewers blindly assessed the summaries produced by both humans and AI for coherency, length, ASIC references, regulation references and for identifying recommendations. They were unaware that this exercise involved AI at all.

These reviewers overwhelmingly found that the human summaries beat their AI competitors on every criterion and every submission, scoring 81% on an internal rubric compared with the machine’s 47%.

    • fine_sandy_bottom@discuss.tchncs.de

      This is a really valid point, especially because it’s not only faster but dramatically cheaper.

      The thing is, summaries which are pretty terrible might be costly. If decision makers are relying on these summaries and they’re inaccurate, then the consequences might be immeasurable.

      Suppose you’re considering 2 cars, one is very cheap but on one random day per month it just won’t start, the other is 5x the price but will work every day. If you really need the car to get to work, then the one that randomly doesn’t start might be worse than no car at all.

      • PumpkinSkink@lemmy.world

        Are we sure it’s cheaper though? I mean it legitimately might not be. I have some friends who work in tech and they use an AI model for, amongst other things, summarizing information in their internal documentation. They’ve told me what their company is paying for the license to use this thing, and it’s eye-watering. Also, last time I checked, the company they got that license from does not turn a profit… so the tool appears to be priced below cost at the moment.

        It might really be the case that it isn’t cheaper than just paying someone a normal salary to do that work, and it probably isn’t cheaper than just jamming the work being done by the AI now back onto preexisting employees (which is what they did before ~2 years ago anyway).

        The other thing that makes me feel this might not be unreasonable is that everyone on the team likes the tool, except their manager, who has floated the idea of cutting it twice now (that I know of).
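
        As a toy back-of-envelope version of the “is it actually cheaper?” question (every figure below is a made-up assumption, not a number from this thread or the article):

```python
# Toy back-of-envelope comparison. Every number below is an invented
# assumption for illustration, not a figure from the thread or article.
LICENSE_COST_PER_YEAR = 60_000   # assumed enterprise AI licence fee
SALARY_PER_YEAR = 80_000         # assumed fully loaded analyst salary
PCT_OF_JOB_AUTOMATED = 25        # assumed share of the role the tool covers
PCT_REVIEW_OVERHEAD = 10         # assumed extra human time checking output

labour_saved = SALARY_PER_YEAR * PCT_OF_JOB_AUTOMATED // 100   # 20_000
review_cost = SALARY_PER_YEAR * PCT_REVIEW_OVERHEAD // 100     # 8_000
net_saving = labour_saved - review_cost - LICENSE_COST_PER_YEAR

print(net_saving)  # negative under these assumptions: the licence costs more than it saves
```

        Under these (invented) numbers the tool is a net loss; nudge the assumptions and the sign flips, which is exactly why the question is hard to settle from the outside.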

      • Pennomi@lemmy.world

        It might be all I care about. Humans might always be better, but AI only has to be good enough at something to be valuable.

        For example, summarizing an article might be incredibly low stakes (I’m feeling a bit curious today), or incredibly high stakes (I’m preparing a legal defense), depending on the context. An AI is sufficient for one use but not the other.

        • Grandwolf319@sh.itjust.works

          I mean, what you’re essentially implying is: what if we could do a lot of the things we do today, but faster and at lower quality.

          Imo we have too many things today and very few are worth their salt, so this is the opposite of the right direction.

          • Pennomi@lemmy.world

            That’s not what I’m implying. What I’m saying is that wasting time and effort on quality is pointless when the threshold for success is low.

            For example, I could use aerospace quality parts (perfectly machined to micron-level tolerances) to build a toaster. However, while this would not increase the performance meaningfully, the cost would be orders of magnitude greater. Instead I can use shitty off-the-shelf parts because it doesn’t really make a difference.

            Maybe in other words, engineering tolerances apply to LLMs too. They’re crude devices, but it’s totally fine if you have a crude problem.

            • Grandwolf319@sh.itjust.works

              That’s not what I’m implying. What I’m saying is that wasting time and effort on quality is pointless when the threshold for success is low.

              Yes, and my response to that is: for some people, maybe. Others don’t want a low threshold; they want a few good articles instead of a flood of low-quality ones.

              Maybe in other words, engineering tolerances apply to LLMs too. They’re crude devices, but it’s totally fine if you have a crude problem.

              Exactly, I’m saying there is no objectively crude problem. You might be okay with simple summaries, but I want every single piece of information I consume to meet a very high bar.

              • Pennomi@lemmy.world

                Sure, go for it. But good luck paying an army of copywriters to summarize every article you read.

              • AA5B@lemmy.world

                What if you’re reading Lemmy and you don’t really feel like reading the article? Is the headline likely to tell you all you need to know, or is the AI summary likely to find more info, and without the clickbait?

        • scarabic@lemmy.world

          Sometimes I am preparing a high stakes communication for work and struggling for brevity. I will ask AI for help reducing my word count and I find it is helpful as an impartial editor. I take its 25% reduction, sigh, accept most of what it sacrificed, fix a word or two, and am done. It’s helpful.

        • greenskye@lemm.ee

          And you can absolutely trust that tons of executives will definitely not understand this distinction and will use AI even in areas where it’s actively harmful.

          • Mrkawfee@lemmy.world

            They’ll use it until it blows up in their faces and then they will all backtrack. Executives are like startled cattle.

            • scarabic@lemmy.world

              Let’s not act like executives are the only morons in this world. Plenty of rank and file are leaning on AI as well.

  • SkyNTP@lemmy.ml

    LLMs == AGI was and continues to be a massive lie perpetuated by tech companies and investors that people still have not woken up to.

    • ContrarianTrail@lemm.ee

      Who is claiming that LLMs are generally intelligent? Is it just “they” or can you actually name a company?

      • exanime@lemmy.world

        You mean the stuff currently peddled everywhere as “Artificial intelligence”?

        Yeah, nobody is saying they are intelligent

        • TheGrandNagus@lemmy.world

          In game NPC actions have been called “AI” for decades. Computers playing chess has been called AI for decades. Lots of stuff has been.

          Nobody thought they were genuinely sentient or sapient.

          The fact that people lumped LLMs, text-to-image generators, machine learning algorithms, image recognition algorithms, etc into a category and called it “AI” doesn’t mean they think it is self aware or intelligent in the way a human would be.

          • exanime@lemmy.world

            The person I replied to said nobody was claiming LLMs were intelligent. I just posted that the people behind the push for this overhyped bubble are indeed making that claim

            Whether people believe it is something else. But also, many people do believe it

            • Grimy@lemmy.world

                He said generally intelligent, in the context of the first reply using the term AGI. There is a difference between artificial intelligence and artificial general intelligence.

              • exanime@lemmy.world

                  I see… At first read I thought the “generally” was implying “somewhat”. I missed the meaning in AGI.

        • ContrarianTrail@lemm.ee

          AI and AGI are not the same thing.

          A chess-playing robot is intelligent, but it’s so-called “narrow intelligence”: it’s really good at one thing, but that doesn’t translate to other things. Humans are generally intelligent because we can perform a wide range of cognitive tasks. There’s nothing wrong with calling an LLM an AI, because that’s what it is. I’m not aware of a single AI company claiming to possess an AGI system.

      • kautau@lemmy.world

        I think the idea is that every company is dumping money into LLMs and no other form of alternative AI development, to the point that all AI research is LLM-based; therefore, to investors and those involved, it’s effectively the only avenue to AGI, though that’s likely not true.

    • jaybone@lemmy.world

      The fact that we even had to start using the term AGI, when in common parlance AI always meant the same thing up until recently, shows how the goalposts are being moved.

      • ContrarianTrail@lemm.ee

        The thing with ‘common parlance’ is that it’s used by people without a deep understanding of the subject. Among AI researchers, there’s never been confusion about this. We have different terms for different things for a reason. The term AGI has been around since the early 2000s.

        It’s like complaining about the terms jig, spoon, spinner, and fly, and saying that back in the day, we just called them fishing lures. They are fishing lures, but these terms describe different types. Similarly, AGI is a form of AI, but it refers to a specific kind.

      • AFK BRB Chocolate@lemmy.world

        What people mean by AI has been changing for as long as the term has been used. When I was studying CS in the 80s, people said the holy grail was giving a computer printed English text and having it read it aloud. It wasn’t much later that OCR and text to speech software was commonplace.

        Generally, when people say AI, they mean a computer doing something that normally takes a human, and that bar goes up all the time.

        • AA5B@lemmy.world

          It might also be a question of how we define “intelligence”. We really don’t have a clear definition and it’s a moving target as we find out more

          • “reading aloud is something only a person can do. It requires intelligence”. Here’s a computer doing it. “Oh, that’s not really intelligence, is it”

      • Jojo, Lady of the West@lemmy.blahaj.zone

        To a degree, but, like, video game AI has been called that for decades, and I don’t think anyone ever thought it was AGI. It’s a more specific term, and it saw use before the big LLM craze started.

    • pingveno@lemmy.world

      I’m doing a series of conversations/interviews with my parents’ generation to keep a voice record of their stories. As part of that, I’m doing transcripts that start with the transcript feature of Google’s Recorder. It can do some nifty things like assign speakers to individual voices. I have to clean up the transcripts some, but it’s far less laborious than dealing with a 15-20 minute conversation. I can fix up a transcript in maybe 5 minutes.

    • Architeuthis@awful.systems

      but it can make a human way more efficient, and make 1 human able to do the work of 3-5 humans.

      Not if you have to proof-read everything to spot the entirely convincing-looking but completely inaccurate parts, is the problem the article cites.

        • WiseThat@lemmy.ca

          If the error is hidden well, yes. Close-reading a text and cross referencing everything it says takes MUCH longer than writing a piece you know is accurate to begin with

      • soul@lemmy.world

        For summarization, having the data correct is crucial because manual typing itself is not a large chore. AI tends to shine more when you’re producing a lot of manual labor such as a 10-page document for something. At that point, the balance tips the other way where proofing and correcting is much easier and less time-consuming than the production itself. That’s where AI comes in for the gains in workflows. It has other fantastic uses as well, like being another voice for brainstorming ideas. If done well, you’re not taking the AI’s idea so much as just using it to spur more creative thinking on your end.

  • Chef_Boyardee@lemm.ee

    To all of you AI haters out there, stay away from the Two Minute Papers YouTube channel. You’ll get very sad at the actual state of AI.

    • Gumus@lemmy.world

      Also beware the AI Explained channel, where the creator is full-time investigating and evaluating cutting edge development in AI. You might even glimpse what’s coming.

  • Glitch@lemmy.dbzer0.com

    Nice to have though, would likely skip or half-ass a lot of stuff if I didn’t have a tool like AI to do the boring parts. When I can get started on a task really quickly, I don’t care what the quality is, I’ll iterate until it meets my standards.

  • Melvin_Ferd@lemmy.world

    Here is the summary by AI

    The article suggests AI is worse than humans at summarizing documents, based on one outdated trial. But really, Crikey is just feeling threatened. AI is evolving fast, and its ability to handle vast amounts of data without the human biases Crikey often exhibits is undeniable. While they nitpick AI’s limitations, they ignore how much better it will get—probably even better than their reporters. Maybe they’re just jealous that AI could do in seconds what takes humans hours!

  • AA5B@lemmy.world

    Artificial intelligence is worse than humans in every way at summarizing documents

    In every way? How about speed? The goal is to save human time so if AI is faster and the summary is good enough, then it is a success. I guarantee it is faster. Much faster.

    • loonsun@sh.itjust.works

      If you make enough mistakes, speed is a detriment, not a benefit. Increasing speed lets you produce more summaries, but if you still need to correct and edit them, all you’ve done is add a step: a human still has to read the document closely enough to summarize it themselves, and then edit the AI summary. So the bottleneck of a human reading the document and working on a summary is still there. It would only make things slightly easier if the corrections needed are small and obvious.

    • Hacksaw@lemmy.ca

      47% is a fail. 81% is an A-… Sure the AI can fail faster than a human can succeed, but I can fail to run a marathon faster than an athlete can succeed.

      I guess by the standards we use to judge AI I’m a marathon runner!

      • Matthew@midwest.social

        I’d heard that Canada gives out As down into the 80% range but I thought I was being fed a line

        • Hacksaw@lemmy.ca

          Yeah:
          0–49% is an F
          50–59 is a D
          60–69 is a C
          70–79 is a B
          80–89 is an A
          90–100 is an A+

          It means that 10-20% of exams and assignments can be used to really challenge students without unfairly affecting grades of those who meet curriculum expectations.
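
          Those bands can be captured in a few lines (a sketch assuming exactly the cutoffs above; by these bands the trial’s 81% human score lands at an A and the 47% AI score at an F):

```python
def letter_grade(pct: float) -> str:
    """Map a percentage to a letter using the Canadian-style bands
    described above (0-49 F, 50-59 D, 60-69 C, 70-79 B, 80-89 A, 90-100 A+)."""
    if pct >= 90:
        return "A+"
    if pct >= 80:
        return "A"
    if pct >= 70:
        return "B"
    if pct >= 60:
        return "C"
    if pct >= 50:
        return "D"
    return "F"

print(letter_grade(81), letter_grade(47))  # A F -- the trial's human vs AI scores
```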

      • AA5B@lemmy.world

        If I want to get a better sense of lemmy than headlines, that 47% success at summarizing all the posts is good enough and much faster than I can even skim

        If I want to code a new program, that 47% is probably pretty solid at structure and boilerplate so good enough. It can save me a lot of time

        If I’m writing my thesis, that 47% is abject failure

        • Hacksaw@lemmy.ca

          If you miss key information the summary is useless.

          If the structure of the code is bad then using that boilerplate will harm your ability to maintain the code FOREVER.

          There are use cases for it, but it has to be used by someone who understands the task and knows the outcome they’re looking for. It can’t replace any measure of skill just yet, but it behaves as if it can which is hazardous.

  • jawa21@lemmy.sdf.org

    This reminds me. What happened to that tldr bot? I did appreciate the summaries, even if they weren’t perfect.

  • dreaddynaughty@lemmynsfw.com

    This is an old study; they tested university-level adults against the standard Llama2-70B.

    Kinda obsolete now: that model has completely fallen out of use in favour of the newer and far better 3 and 3.1 versions. It also wasn’t fine-tuned for summarization, and while base L2-70B was OK, it wasn’t great at anything without fine-tuning.

    This clickbait title also reads like self-gratification; the abysmal reading comprehension on the Internet runs directly counter to it. The average human found on the Internet doesn’t approach the level of literary capability that those ten human testers showed in the study.

  • stoy@lemmy.zip

    “Just one more training on a social network”

    Can’t wait for the bubble to burst.

    • finitebanjo@lemmy.world

      We shouldn’t wait, it is already basically illegal to sample the works of others so we should just pull the plug now.

      • stoy@lemmy.zip

        The issue with legally pulling the plug is that it won’t stop AI baddies, only good AI companies who respect the law.

        The knowledge and tools are still out there.

        But when the bubble bursts it will tank AI globally.

        • finitebanjo@lemmy.world

          good AI companies who respect the law

          When those come around maybe we can rethink our stance, but for now we should stop the AI baddies.

          • stoy@lemmy.zip

            Which will only be possible with good old-fashioned bubble bursting, as I said.

            • finitebanjo@lemmy.world

              Nah we can start enforcing the laws as they exist. OpenAI is using works of others commercially without permission.

              We don’t have to wait.

              • stoy@lemmy.zip

                As I noted, that only works with a limited set of AI companies.

                They need to be in the jurisdiction of whatever government decides to enforce the laws; if not, there is very little that can be done.

                Then, besides needing to be in the right jurisdiction, the punishment needs to be large enough that you can’t just budget it away.

                Then any country doing this will know that they are deliberately getting rid of an important sector, while other countries will continue running their sectors.

  • masquenox@lemmy.world

    Artificial intelligence is worse than humans in every way

    As if capitalists have ever cared about that…

  • DarkCloud@lemmy.world

    “AI”, or Large Language Models, are designed by definition to give averaged answers. So they’re not just averaging over the text you give them, they’re averaging it with all the general text in the training data, to create a probabilistically average result based on all of it.

    There’s no way around this, because it’s simply how such systems work. It’s their lifeblood to produce a “best guess” across large amounts of training data …which is done by averaging out all that language. A large amount of language… Hence the name.
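
    As a toy softmax sketch of that “best guess” behaviour (the token scores below are invented; real models derive them from billions of learned weights):

```python
import math
import random

# Toy softmax sketch: a language model scores each candidate next token,
# converts the scores to probabilities, and samples from that distribution.
# The scores below are invented; real models compute them from learned weights.
logits = {"regulation": 2.0, "ASIC": 1.0, "banana": -3.0}

total = sum(math.exp(score) for score in logits.values())
probs = {tok: math.exp(score) / total for tok, score in logits.items()}

# Likely continuations dominate the distribution; unlikely ones almost never
# get picked, which is what gives the output its "averaged" feel.
token = random.choices(list(probs), weights=list(probs.values()))[0]
```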

  • kromem@lemmy.world

    Meanwhile, here’s an excerpt of a response from Claude Opus on me tasking it to evaluate intertextuality between the Gospel of Matthew and Thomas from the perspective of entropy reduction with redactional efforts due to human difficulty at randomness (this doesn’t exist in scholarship outside of a single Reddit comment I made years ago in /r/AcademicBiblical lacking specific details) on page 300 of a chat about completely different topics:

    Yeah, sure, humans would be so much better at this level of analysis within around 30 seconds. (It’s also worth noting that Claude 3 Opus doesn’t have the full context of the Gospel of Thomas accessible to it, so it needs to try to reason through entropic differences primarily based on records relating to intertextual overlaps that have been widely discussed in consensus literature and are thus accessible).