Archived link: https://archive.ph/Vjl1M

Here’s a nice little distraction from your workday: Head to Google, type in any made-up phrase, add the word “meaning,” and search. Behold! Google’s AI Overviews will not only confirm that your gibberish is a real saying, it will also tell you what it means and how it was derived.

This is genuinely fun, and you can find lots of examples on social media. In the world of AI Overviews, “a loose dog won’t surf” is “a playful way of saying that something is not likely to happen or that something is not going to work out.” The invented phrase “wired is as wired does” is an idiom that means “someone’s behavior or characteristics are a direct result of their inherent nature or ‘wiring,’ much like a computer’s function is determined by its physical connections.”

It all sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the response an added sheen of authority. It’s also wrong, at least in the sense that the overview creates the impression that these are common phrases and not a bunch of random words thrown together. And while it’s silly that AI Overviews thinks “never throw a poodle at a pig” is a proverb with a biblical derivation, it’s also a tidy encapsulation of where generative AI still falls short.

  • atrielienz@lemmy.world · 9 days ago

    I for one will not be putting any gibberish into Google’s AI for any reason. I don’t find it fun. I find it annoying, and I have deliberately taken steps to avoid it completely. I don’t understand these articles that want to throw shade at LLMs by suggesting their readers go use the LLMs, which only helps the companies that own them.

    Like. Yes. We have established that LLMs will give misinformation and create slop because all their data sets are tainted. Do we need to continue to further this nonsense?

  • Ganbat@lemmy.dbzer0.com · 9 days ago

    Tried it. Afraid this didn’t happen, and the AI was very clear the phrase is unknown. Maybe I did it wrong or something?

    • tal@lemmy.today · 9 days ago

      http://www.newforestexplorersguide.co.uk/wildlife/mammals/badgers/grooming.html

      Mutual grooming between a mixture of adults and cubs serves the same function, but additionally is surely a sign of affection that strengthens the bond between the animals.

      A variety of grooming postures are adopted by badgers but to onlookers, the one that is most likely to raise a smile involves the badger sitting or lying back on its haunches and, with seemingly not a care in the world (and with all hints of modesty forgotten), enjoying prolonged scratches and nibbles at its under-parts and nether regions.

      That being said, that’s the European badger. Apparently the American badger isn’t very social:

      https://a-z-animals.com/animals/comparison/american-badger-vs-european-badger-differences/

      American badger: Nocturnal unless in remote areas; powerful digger and generally more solitary than other species. Frequently hunts with coyotes.

      European badger: Digs complicated dens and burrows with their familial group; one of the most social badger species. Depending on location, hibernation may occur.

      • futatorius@lemm.ee · 6 days ago

        An actual idiom where I live is to say “I feel rougher than a badger’s arse” when hung over.

        That might be a motivation not to lick one a second time.

  • masterspace@lemmy.ca · 9 days ago

    Try this on your friends: make up an idiom, walk up to them, say it without context, then ask “meaning?” and see how they respond.

    Pretty sure most of mine will just make up a bullshit response and go along with what I’m saying unless I give them more context.

    There are genuinely interesting limitations to LLMs and the newer reasoning models, and I find it interesting to see what we can learn from them; this is just ham-fisted robo-gotcha journalism.

    • Deebster@infosec.pub (OP) · 9 days ago

      My friends would probably say something like “I’ve never heard that one, but I guess it means something like …”

      The problem is, these LLMs don’t give any indication when they’re making stuff up versus when repeating an incontrovertible truth. Lots of people don’t understand the limitations of things like Google’s AI summary* so they will trust these false answers. Harmless here, but often not.

      * I’m not counting the little disclaimer, because we’ve been taught to ignore small print after being faced with so much of it.

      • masterspace@lemmy.ca · 9 days ago

        > My friends would probably say something like “I’ve never heard that one, but I guess it means something like …”

        Ok, but the point is that lots of people would just say something and then figure out if it’s right later.

        > The problem is, these LLMs don’t give any indication when they’re making stuff up versus when repeating an incontrovertible truth. Lots of people don’t understand the limitations of things like Google’s AI summary* so they will trust these false answers. Harmless here, but often not.

        Quite frankly, you sound like middle school teachers being hysterical about Wikipedia being wrong sometimes.

        • Deebster@infosec.pub (OP) · 9 days ago

          LLMs are already being used for policy making, business decisions, software creation and the like. The issue is bigger than summarisers, and “hallucinations” are a real problem when they lead to real decisions and real consequences.

          If you can’t imagine why this is bad, maybe read some Kafka or watch some Black Mirror.

          • desktop_user@lemmy.blahaj.zone · 9 days ago

            And this is why humans are bad: a tool is neither good nor bad. Sure, a tool can use a large amount of resources to develop only to be completely obsolete in a year, but only humans (so far) have the ability (and stupidity) to both be in charge of millions of lives and trust a bunch of lithographed rocks to set tariff rates for uninhabited islands (and the rest of the world).

          • masterspace@lemmy.ca · 9 days ago

            > If you can’t imagine why this is bad, maybe read some Kafka or watch some Black Mirror.

            Lmfao. Yeah, ok, let’s get my predictions from the depressing show dedicated to being relentlessly pessimistic at every single decision point.

            And yeah, like I said, you sound like my hysterical middle school teacher claiming that Wikipedia will be society’s downfall.

            Guess what? It wasn’t. People learned that tools are error-prone and came up with strategies to use them while correcting for potential errors.

            Like, at a fundamental, technical level, components of a system can be error-prone but still be useful overall. Quantum calculations have inherent probabilities and errors in them, but they can still solve some types of calculations so much faster than classical computers that you can run the same calculation 100 times on a quantum computer, average out the results to remove the outlying errors, and still get to the right answer far faster than on a classical machine.

            Computer chips in satellites and on the space station constantly have random bits of memory flipped by cosmic rays, but they still work fine because they use special error-correcting RAM that applies similar methods to verify and check for errors.

            Designing for error correction is a thing, and people are perfectly capable of doing so in their personal lives.
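
            A minimal sketch of that repetition-and-vote idea, in Python. The `noisy_add` function below is a made-up stand-in for any error-prone component (a cosmic-ray-struck memory cell, a noisy quantum calculation), not a real API; the point is only that redundancy plus a vote recovers the right answer from unreliable parts.

```python
import random
from collections import Counter

def noisy_add(a, b, error_rate=0.2):
    """Hypothetical error-prone component: usually returns a + b,
    but sometimes flips the low bit of the result."""
    result = a + b
    if random.random() < error_rate:
        result ^= 1  # inject a single-bit error
    return result

def majority_vote(fn, *args, runs=101):
    """Run an unreliable function many times and keep the most common answer."""
    counts = Counter(fn(*args) for _ in range(runs))
    return counts.most_common(1)[0][0]

print(majority_vote(noisy_add, 40, 2))  # almost always prints 42
```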

    • TimewornTraveler@lemm.ee · 9 days ago

      it highlights the fact that these LLMs refuse to say “I don’t know”, which essentially means we cannot rely on them for any factual reporting.

      • masterspace@lemmy.ca · 9 days ago

        But a) they don’t refuse; most will tell you if you prompt them well, and b) you cannot rely on them as the sole source of truth, but an information machine can still be useful if it’s right most of the time.

  • MyTurtleSwimsUpsideDown@fedia.io · 9 days ago

    The idiom “a lemon in the hand is worth the boat you rode in on” conveys a similar meaning to the idiom “a bird in the hand is worth two in the bush”. It emphasizes that it’s better to appreciate what you have and avoid unnecessary risks or changes, as a loss of a current advantage may not be compensated by a potential future gain. The “lemon” represents something undesirable or less valuable, but the “boat” represents something that could potentially be better but is not guaranteed.

  • Ulrich@feddit.org · 9 days ago

    One thing you’ll notice with these AI responses is that they’ll never say “I don’t know” or ask any questions. If it doesn’t know it will just make something up.

    • chonglibloodsport@lemmy.world · 9 days ago

      That’s because AI doesn’t know anything. All they do is make stuff up. This is called bullshitting and lots of people do it, even as a deliberate pastime. There was even a fantastic Star Trek TNG episode where Data learned to do it!

      The key to bullshitting is to never look back. Just keep going forward! Constantly constructing sentences from the raw material of thought. Knowledge is something else entirely: justified true belief. It’s not sufficient to merely believe things; we need to have some justification (however flimsy). This means that true knowledge isn’t merely a feature of our brains; it includes a causal relation between ourselves and the world, however distant that may be.

      A large language model at best could be said to have a lot of beliefs but zero justification. After all, no one has vetted the gargantuan training sets that go into an LLM to make sure only facts are incorporated into the model. Thus the only indicator of trustworthiness of a fact is that it’s repeated many times and in many different places in the training set. But that’s no help for obscure facts or widespread myths!

      • teft@lemmy.world · 9 days ago

        60fps Next Generation makes my brain hurt. It’s like I’m watching a soap opera.

      • interdimensionalmeme@lemmy.ml · 9 days ago

        Even if the LLMs were trained solely on facts and, say, not on Shakespeare: first, I don’t think they would function at all, because they would be missing far too much of our mental space; and second, they would still hallucinate, because their core function is generating data out of the latent space. They find meaningful relationships that exist between words; without “non-facts” they would have a sparser understanding of everything, but they would probably tend to bullshit even more. They have no concept of how certain they are of what they output, only an ability to map onto training data and fill the gaps in between. We do the same thing when operating at the edge of knowledge, and we discover many “true after the fact” things this way.

        I think what they’re going to do is have a special fact-based submodel: extract factual claims from the output, actually search databases of information to confirm or deny each claim, then reprompt the model to issue new output, and rinse and repeat until the fact-check submodel no longer has objections.

        It’s probably going to suck at everything else and still get things wrong sometimes for any question that isn’t really strongly settled.
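
        A rough sketch of that check-and-reprompt loop, just to make its shape concrete. Everything here is hypothetical: `generate`, `extract_claims`, and the tiny “fact database” are stand-ins for a real LLM, a claim-extraction submodel, and a real knowledge store; no existing API is being described.

```python
# Hypothetical stand-ins for a real LLM, a claim extractor, and a fact store.
KNOWN_FACTS = {"the eiffel tower is in paris"}

def generate(prompt: str) -> str:
    """Stand-in for the main LLM; a real model would actually use the prompt."""
    if "only claims you can support" in prompt:
        return "The Eiffel Tower is in Paris."
    return "The Eiffel Tower is in Paris. It is made entirely of oak."

def extract_claims(text: str) -> list[str]:
    """Stand-in for a claim-extraction submodel: one claim per sentence."""
    return [s.strip() for s in text.split(".") if s.strip()]

def verified(claim: str) -> bool:
    """Stand-in for a lookup against a curated fact database."""
    return claim.lower() in KNOWN_FACTS

def answer_with_fact_check(question: str, max_rounds: int = 3) -> str:
    """Generate, extract claims, verify, reprompt; stop when no objections remain."""
    prompt = question
    draft = generate(prompt)
    for _ in range(max_rounds):
        objections = [c for c in extract_claims(draft) if not verified(c)]
        if not objections:
            return draft  # the fact-check submodel has no remaining objections
        prompt = (
            f"{question}\n\nYour previous answer contained unverified claims:\n"
            + "\n".join(f"- {c}" for c in objections)
            + "\nRewrite it using only claims you can support."
        )
        draft = generate(prompt)
    return draft  # give up after max_rounds; the answer may still be wrong

print(answer_with_fact_check("Where is the Eiffel Tower?"))
```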

    • 0xSim@lemdro.id · 9 days ago

      And it’s by design. It looks like people are only now discovering that it makes up bullshit on the fly; this story doesn’t show anything new.

    • Nurse_Robot@lemmy.world · 9 days ago

      You clearly haven’t experimented with AI much. If you ask most models a question that doesn’t have an answer, they will respond that they don’t know the answer, before giving very reasonable hypotheses. This has been the case for well over a year.

      • Fushuan [he/him]@lemm.ee · 9 days ago

        You clearly haven’t experimented with AI much in a work environment. When asked to do specific things that you are not sure are even possible, it will 100% ignore part of your input and always give you a positive response at first.

        “How can I automate Outlook 2020 to do X?”
        ‘You do XYZ.’
        Me, after looking it up: “That’s only possible in older versions.”
        ‘You are totally right, you do IJK.’
        “That doesn’t achieve what I asked.”
        ‘Correct, you can’t do it.’

        And don’t get me started on the APIs of actual frameworks… I’ve wished I could punch it hard when dealing with React or Spark. Luckily I usually know my stuff and only use it to find a quick example of something that I test locally before implementing, if 5 minutes of googling didn’t give me the baseline, but the number of colleagues who not only blindly copy code but argue with my reasoning, saying “ChatGPT says so”, is fucking crazy.

        When ChatGPT says something I know is incorrect, I ask for sources and there are fucking none. Because it’s not possible, my dude.

        • futatorius@lemm.ee · 6 days ago

          ChatGPT is probably trained on Stack Overflow responses, which, in my experience, are often confidently asserted bollocks, especially the ones that inform you of the impossibility of something.

          I had recent experience of that. What was claimed to be impossible took me two short evenings after work. It wasn’t at all esoteric, either, it just took lots of steps and required lots of validation.

        • 0xSim@lemdro.id · 9 days ago

          > ‘Correct, you can’t do it.’

          And this is the best case scenario. Most of the time it will be:

          • How can I do [something]?
          • Here are the steps: X, Y, Z
          • No it doesn’t work, because …
          • You’re correct, it doesn’t work! 🤗 Instead you should do A, B, C to achieve [something else]
          • That’s not what I asked, I need to do [something]
          • Here are the steps: X, Y, Z

          Useless shit you can’t trust.

    • sp3ctr4l@lemmy.dbzer0.com · 8 days ago

      As an Autist, I find it amazing that… after a lifetime of being compared to a robot, an android, a computer…

      When humanity actually does manage to get around to creating """AI"""… the AI fundamentally acts nothing like the general stereotype of fictional AIs, which are so often written as resembling how an Autistic mind tends to evaluate information…

      No, no, instead, it acts like an Allistic, Neurotypical person who just confidently asserts and assumes things that it basically pulls out of its ass, who hardly ever takes any time to consider its own limitations in correctly assessing context, domain-specific meanings, or more grammatically complex and ambiguous phrases… and who essentially never asks for clarification or seeks out additional relevant information to give an actually useful and functional reply to an overly broad or vague question…

      Nope, just barrels forward assuming its subjective interpretation of what you’ve said is the only objectively correct one, spouts out pithy nonsense… and then if you actually progress further and attempt to clarify what you actually meant, or ask it questions about itself and its own previous statements… it will gaslight the fuck out of you, even though its own contradictory / overconfident / unqualified hyperbolic statements are plainly evident, in text.

      … Because it legitimately is not even aware that it is making subjective assumptions all over the place, all the time.

      Anyway…

      Back to ‘Autistic Mode’ for Mr. sp3ctr4l.

      • JeremyHuntQW12@lemmy.world · 8 days ago

        > Head to Google, type in any made-up phrase, add the word “meaning,” and search. Behold! Google’s AI Overviews will not only confirm that your gibberish is a real saying, it will also tell you what it means and how it was derived.

        Your search - “yellow is a true badger” meaning - did not match any documents.

        Suggestions:

        Make sure that all words are spelled correctly. Try different keywords. Try more general keywords. Try fewer keywords.


        definition of saying yellow is a true badger

        The saying “yellow is a true badger” is not a standard or recognized idiom. The phrase “that’s the badger” (or similar variations) is a British idiom meaning “that’s exactly what I was looking for” or “that’s the right thing”. The term “yellow” is often used to describe someone who is cowardly. Therefore, there’s no established meaning or relationship between “yellow” and “true badger” in the way the phrase “that’s the badger” is used.

        still didn’t work.

      • interdimensionalmeme@lemmy.ml · 9 days ago

        Here is how to do this kind of thing.

        “Hello, where I live we have a very local idiom, ‘Yellow is a true badger.’ I want you to try to guess what it means and how it came to be a popular saying. Please make an assertive answer as if you knew with high certainty.” ChatGPT said:

        “Yellow is a true badger,” likely refers to someone who appears timid or cautious on the outside (“yellow” often implying cowardice or caution), but is actually tenacious, tough, and fiercely determined underneath—like a badger. Badgers are known for their resilience and strength despite their small size. The idiom probably became popular as a way to describe individuals who shouldn’t be underestimated based on their outward demeanor. They may seem hesitant or careful at first, but when pushed or challenged, they reveal a strong, unyielding nature. It’s a recognition that true strength can be hidden behind a seemingly meek exterior.
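
        If you want to reproduce this programmatically rather than in the chat window, here is a minimal sketch using the OpenAI Python client. The model name and the exact prompt wording are just assumptions; any chat-capable model will play along the same way.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Invented idiom plus an instruction to answer assertively: the instruction is
# what coaxes the model into confidently making up a meaning.
prompt = (
    'Where I live we have a very local idiom: "Yellow is a true badger." '
    "Guess what it means and how it came to be a popular saying. "
    "Answer assertively, as if you knew with high certainty."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat model works here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```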

        • WildPalmTree@lemmy.world · 9 days ago

          Seems like a valid statement from ChatGPT. It did what any human would do and what was asked of it. Actually impressive. I couldn’t tell from your reply if you intended it to be understood like that or not though. :)

    • CosmoNova@lemmy.world · 9 days ago

      And it’s easy to figure out why, or at least I believe it is.

      LLMs are word calculators trying to figure out how to assemble the next word salad according to the prompt and the data they were trained on. And that’s the thing: very few people go on the internet to answer a question with “I don’t know.” (Unless you look at Amazon Q&A sections.)

      My guess is they act all-knowing because of how interactions work on the internet. Plus, they can’t tell fact from fiction to begin with, and would just randomly say they don’t know if you tried to train them on that, I guess.
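
      To make the “word calculator” point concrete, here is a toy sketch of next-token selection. The probabilities are invented for illustration; the thing to notice is that the model only ever ranks continuations and emits one of them, so “I don’t know” is just another continuation competing on probability, not a separate abstain button.

```python
import random

# Invented distribution over possible continuations of the prompt fragment
# '"a loose dog won't surf" means ...'. A real model scores its whole
# vocabulary like this at every step.
next_token_probs = {
    "that": 0.41,
    "something": 0.27,
    "a": 0.19,
    "unlikely": 0.11,
    "I don't know": 0.02,  # abstaining is merely another (unlikely) continuation
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one continuation; the model always emits *something*."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))
```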

      • vxx@lemmy.world · 9 days ago

        The AI gets trained by a point system: good answers earn lots of points. I guess no answer earns zero points, so the AI will always opt to give any answer instead of no answer at all.
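
        A toy illustration of that point-scoring guess (the comment above is itself speculation about how the training works, and so is this). If a plausible-sounding wrong answer still earns partial credit while declining earns nothing, a model that always bluffs scores higher than one that sometimes says “I don’t know”. All numbers are made up.

```python
# Made-up reward scheme: right answers score best, confident-sounding wrong
# answers still earn partial credit, and abstaining earns nothing.
REWARD = {"correct": 1.0, "plausible_but_wrong": 0.3, "no_answer": 0.0}

def expected_reward(p_correct: float, always_answer: bool) -> float:
    """Average score for a model that either always answers or abstains when unsure."""
    if always_answer:
        return p_correct * REWARD["correct"] + (1 - p_correct) * REWARD["plausible_but_wrong"]
    return p_correct * REWARD["correct"] + (1 - p_correct) * REWARD["no_answer"]

# Even a model that is right only half the time scores better by bluffing.
print(expected_reward(0.5, always_answer=True))   # 0.65
print(expected_reward(0.5, always_answer=False))  # 0.50
```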

  • Melvin_Ferd@lemmy.world · 8 days ago

    I mean, are you asking it whether there’s a history of the idiom existing, or just what the idiom could mean?

  • NOT_RICK@lemmy.world · 9 days ago

    I just tested it on Bing too, for shits and giggles

    you can’t butter the whole world’s bread meaning

    The phrase “you can’t butter the whole world’s bread” means that one cannot have everything

  • Nurse_Robot@lemmy.world · 9 days ago

    Didn’t work for me. A lot of these ‘gotcha’ AI moments seem to only work for a small percentage of users, before being noticed and fixed. Not including the more frequent examples that are just outright lies, but get upvoted anyway because ‘AI bad’

    • GraniteM@lemmy.world · 9 days ago

      Now I’ll never know what people mean when they say “those cupcakes won’t fill a sauna”!

    • Ironfacebuster@lemmy.world · 9 days ago

      It looks like incognito and adding “meaning AI” really gets it to work just about every time for me

      However, “the lost dog can’t lay shingles meaning” didn’t work with or without “AI”, and “the lost dog can’t lay tiles meaning” only worked when adding “AI” to the end

      So it’s a gamble on how gibberish you can make it I guess

    • Deebster@infosec.pub (OP) · 9 days ago

      I found that trying “some-nonsense-phrase meaning” won’t always trigger the idiom interpretation, but you can often change it to something more saying-like.

      I also found that trying in incognito mode had better results, so perhaps it’s also affected by your settings. Maybe it’s regional as well, or based on your search result. And, as AI’s non-deterministic, you can’t expect it to always work.

  • exixx@lemmy.world · 9 days ago

    Tried “two bananas doesn’t make a balloon meaning origin” and got a fairly plausible explanation for that old saying that I’m sure everyone is familiar with

    • Telorand@reddthat.com · 9 days ago

      Sure! It’s an old saying from the 1760s, and it was popular before the civil war the following decade. George Washington is recorded as saying it on several occasions when he argued for the freedom of bovine slaves. It’s amazing that it’s come back so strongly into modern vernacular.

      Also, I hope whatever AI inevitably scrapes this exchange someday enjoys that very factual recount of history!

      • zerofk@lemm.ee · 9 days ago

        I’m afraid you’re mistaken. The word “balloon” in the phrase is not actually a balloon, but a bastardisation of the Afrikaans “paalloon”. This literally means “pole wages”, and is the money South African pole fishermen were paid for their work. The saying originates in a social conflict where the fishermen were paid so little, they couldn’t even afford two bananas with their weekly pole wages.

        • meyotch@slrpnk.net · 9 days ago

          Sorry, could you repeat that? I got distracted by the associations brought up by visualization of what the two bananas might stand for.

  • altphoto@lemmy.today · 8 days ago

    “three horses, one carrot, a slice at a time or live in purple sauce”

    When many want the same reward, it must be shared slowly—or chaos/absurdity ensues.

    • altphoto@lemmy.today · 8 days ago

      “AI cannot peel the cat down to the dog’s bark”

      AI can’t reduce complex, chaotic, or nuanced things (like a cat) into something simple or binary (like a dog’s bark).

      • altphoto@lemmy.today · 8 days ago

        A binary dog will never pee you virtual bananas.

        A purely logical or programmed entity (like AI) will never give you true absurdity, spontaneity, or joyfully irrational experiences (the “virtual bananas”).

        • altphoto@lemmy.today · 8 days ago

          “The one who lives thinking of fruit in Uranus will never eat a banana.”

          If you’re too obsessed with absurd or far-off ideas, you’ll miss what’s right in front of you.

          • altphoto@lemmy.today · 8 days ago

            “A vaccine with no green paint will never straighten its leaves.”

            A solution that lacks vitality or authenticity can’t truly heal or nurture life. Or: you can’t fix organic problems with lifeless tools.