• protonslive@lemm.ee
    link
    fedilink
    English
    arrow-up
    0
    ·
    5 days ago

    I find this very offensive. Wait until my ChatGPT hears about this! It will have a witty comeback for you, just you watch!

    • interdimensionalmeme@lemmy.ml
      9 days ago

      Yes, it’s an addiction; we’ve got to stop all these poor souls being lulled into a false sense of understanding and just believing anything the AI tells them. It is constantly telling lies about us, their betters.

      Just look what happened when I asked it about the venerable and well-respected public intellectual Jordan B. Peterson. It went into a defamatory diatribe against his character.

      And they just gobble that up, those poor, uncritical and irresponsible farm hands and water carriers! We can’t have that!

      Example

      Open-Minded Closed-Mindedness: Jordan B. Peterson’s Humility Behind the Moat—A Cautionary Tale

      Jordan B. Peterson presents himself as a champion of free speech, intellectual rigor, and open inquiry. His rise as a public intellectual is, in part, due to his ability to engage in complex debates, challenge ideological extremes, and articulate a balance between chaos and order. However, beneath the surface of his engagement lies a pattern: an open-mindedness that appears flexible but ultimately functions as a defense mechanism—a “moat” guarding an impenetrable ideological fortress.

      Peterson’s approach is both an asset and a cautionary tale, revealing the risks of appearing open-minded while remaining fundamentally resistant to true intellectual evolution.

      The Illusion of Open-Mindedness: The Moat and the Fortress

      In medieval castles, a moat was a watery trench meant to create the illusion of vulnerability while serving as a strong defensive barrier. Peterson, like many public intellectuals, operates in a similar way: he engages with critiques, acknowledges nuances, and even concedes minor points—but rarely, if ever, allows his core positions to be meaningfully challenged.

      His approach can be broken down into two key areas:

      The Moat (The Appearance of Openness)
      
          Engages with high-profile critics and thinkers (e.g., Sam Harris, Slavoj Žižek).
      
          Acknowledges complexity and the difficulty of absolute truth.
      
          Concedes minor details, appearing intellectually humble.
      
          Uses Socratic questioning to entertain alternative viewpoints.
      
      The Fortress (The Core That Remains Unmoved)
      
          Selectively engages with opponents, often choosing weaker arguments rather than the strongest critiques.
      
          Frames ideological adversaries (e.g., postmodernists, Marxists) in ways that make them easier to dismiss.
      
          Uses complexity as a way to avoid definitive refutation (“It’s more complicated than that”).
      
          Rarely revises fundamental positions, even when new evidence is presented.
      

      While this structure makes Peterson highly effective in debate, it also highlights a deeper issue: is he truly open to changing his views, or is he simply performing open-mindedness while ensuring his core remains untouched?

      Examples of Strategic Open-Mindedness

      1. Debating Sam Harris on Truth and Religion

      In his discussions with Sam Harris, Peterson appeared to engage with the idea of multiple forms of truth—scientific truth versus pragmatic or narrative truth. He entertained Harris’s challenges, adjusted some definitions, and admitted certain complexities.

      However, despite the lengthy back-and-forth, Peterson never fundamentally reconsidered his position on the necessity of religious structures for meaning. Instead, the debate functioned more as a prolonged intellectual sparring match, where the core disagreements remained intact despite the appearance of deep engagement.

      2. The Slavoj Žižek Debate on Marxism

      Peterson’s debate with Žižek was highly anticipated, particularly because Peterson had spent years criticizing Marxism and postmodernism. However, during the debate, it became clear that Peterson’s understanding of Marxist theory was relatively superficial—his arguments largely focused on The Communist Manifesto rather than engaging with the broader Marxist intellectual tradition.

      Rather than adapting his critique in the face of Žižek’s counterpoints, Peterson largely held his ground, shifting the conversation toward general concerns about ideology rather than directly addressing Žižek’s challenges. This was a classic example of engaging from the moat—appearing open to debate while avoiding direct confrontation with deeper, more challenging ideas.

      3. Gender, Biology, and Selective Science

      Peterson frequently cites evolutionary psychology and biological determinism to argue for traditional gender roles and hierarchical structures. While many of his claims are rooted in scientific literature, critics have pointed out that he tends to selectively interpret data in ways that reinforce his worldview.

      For example, he often discusses personality differences between men and women in highly gender-equal societies, citing studies that suggest biological factors play a role. However, he is far more skeptical of sociological explanations for gender disparities, often dismissing them outright. This asymmetry suggests a closed-mindedness when confronted with explanations that challenge his core beliefs.

      The Cautionary Tale: When Intellectual Rigidity Masquerades as Openness

      Peterson’s method—his strategic balance of open- and closed-mindedness—is not unique to him. Many public intellectuals use similar techniques, whether consciously or unconsciously. However, his case is particularly instructive because it highlights the risks of appearing too open-minded while remaining fundamentally immovable.

      The Risks of “Humility Behind the Moat”

      Creates the Illusion of Growth Without Real Change
      
          By acknowledging complexity but refusing to revise core positions, one can maintain the illusion of intellectual evolution while actually reinforcing prior beliefs.
      
      Reinforces Ideological Silos
      
          Peterson’s audience largely consists of those who already align with his worldview. His debates often serve to reaffirm his base rather than genuinely engage with alternative perspectives.
      
      Undermines Genuine Inquiry
      
          If public intellectuals prioritize rhetorical victories over truth-seeking, the broader discourse suffers. Intellectual engagement becomes performative rather than transformative.
      
      Encourages Polarization
      
          By appearing open while remaining rigid, thinkers like Peterson contribute to an intellectual landscape where ideological battle lines are drawn more firmly, rather than softened by genuine engagement.
      

      Conclusion: The Responsibility of Public Intellectuals

      Jordan B. Peterson is an undeniably influential thinker, and his emphasis on responsibility, order, and meaning resonates with many. However, his method of open-minded closed-mindedness serves as a cautionary tale. It demonstrates the power of intellectual posturing—how one can appear receptive while maintaining deep ideological resistance.

      For true intellectual growth, one must be willing not only to entertain opposing views but to risk being changed by them. Without that willingness, even the most articulate and thoughtful engagement remains, at its core, a well-defended fortress.

      So like I said: pure, evil AI slop. It’s evil and addictive and must be banned. Lock up illegal GPU abusers, keep a GPU owners registry, and keep track of those who would use them to abuse the shining lights of our society and try to snuff them out like a bad level of Luigi’s Mansion.

    • Flying Squid@lemmy.world
      10 days ago

      Unlike those others, Microsoft could do something about this considering they are literally part of the problem.

      And yet I doubt Copilot will be going anywhere.

  • Dil@is.hardlywork.ing
    9 days ago

    I felt it happen in real time, every time. I still use it for questions, but I know I’m about to not be able to think critically for the rest of the day; it’s a last resort if I can’t find any info online or any response from Discords/forums.

    It’s still useful for coding, IMO. I still have to think critically; it just fills some tedious stuff in.

    • Dil@is.hardlywork.ing
      9 days ago

      It was hella useful for research in college, and it made me think more because it kept giving me useful sources and telling me the context and where to find it. I still did the work, and it actually took longer because I wouldn’t commit to topics and kept adding more information. Just don’t have it spit out your essay—it sucks at that. Have it spit out topics and info on those topics with sources, then use that to build your work.

      • Dil@is.hardlywork.ing
        9 days ago

        Google used to be good, but this is far superior. I used Bing’s ChatGPT when I was in school; I don’t know what’s good now. (It only gave a paragraph max and included sources for each sentence.)

          • Dil@is.hardlywork.ing
            9 days ago

            It worked well for school stuff. I always added “prioritize factual sources with .edu” or something like that. Specify that it’s for a research paper, and tell it to look for stuff the way you would.

            • RisingSwell@lemmy.dbzer0.com
              9 days ago

              The only time I told it to be factual was when looking at 4K laptops. It gave me 5 laptops, 4 of them marked as 4K; 0 of the 5 were actually 4K.

              That was last year though, so maybe it’s improved by now.

              • Dil@is.hardlywork.ing
                link
                fedilink
                English
                arrow-up
                0
                ·
                9 days ago

                I wouldn’t use it on current info like that, only on scraped data. For history classes it’ll be useful; for sales right now, definitely not.

                • RisingSwell@lemmy.dbzer0.com
                  9 days ago

                  I’ve also tried using it for old games, but at the time it said Wailord was the heaviest Pokémon (the blimp whale in fact does not weigh more than the skyscraper).

  • LovableSidekick@lemmy.world
    9 days ago

    Their reasoning seems valid - common sense says the less you do something the more your skill atrophies - but this study doesn’t seem to have measured people’s critical thinking skills. It measured how the subjects felt about their skills. People who feel like they’re good at a job might not feel as adequate when their job changes to evaluating someone else’s work. The study said the subjects felt that they used their analytical skills less when they had confidence in the AI. The same thing happens when you get a human assistant - as your confidence in their work grows you scrutinize it less. But that doesn’t mean you yourself become less skillful. The title saying use of AI “kills” critical thinking skill isn’t justified, and is very clickbaity IMO.

  • sumguyonline@lemmy.world
    9 days ago

    Just try using AI for a complicated mechanical repair, for instance draining the radiator fluid in your specific model of car. Chances are Google’s AI model will throw in steps that are either wrong or unnecessary. If you turn off your brain while using AI, you’re likely to make mistakes that will go unnoticed until the thing you did becomes business-critical. AI should be a tool like a straight edge: it has its purpose, and it’s up to you, the operator, to make sure you got the edges squared (so to speak).

    • Petter1@lemm.ee
      9 days ago

      I think this is only an issue in the beginning. People will sooner or later realise that they can’t blindly trust an LLM’s output, and will learn how to craft prompts that verify it (or, better said, that prove not enough relevant data was analysed and the output is hallucination).

  • Jeffool @lemmy.world
    10 days ago

    When it was new to me I tried ChatGPT out of curiosity, like with any new tech, and I just kept getting really annoyed at the expansive bullshit it gave to the simplest of input. “Give me a list of 3 X” led to fluff-filled paragraphs for each. The bastard child of a bad encyclopedia and the annoying kid in school.

    I realized I was understanding it wrong: it was supposed to be understood not as a useful tool, but as something close to interacting with a human, pointless prose and all. That just made me more annoyed. It still blows my mind that people say they use it when writing.

    • SPOOSER@lemmy.today
      10 days ago

      How else can the “elite” separate themselves from the common folk? The elite love writing 90% fluff and require high word counts in academia instead of actually making concise, clear, articulate articles that are easy to understand. You have to have a certain word count to qualify as “good writing” in any elite group. Look at law, political science, history, scientific journals, etc. I had professors who would tell me they could easily find the information they needed in the articles, and that one day we would be able to as well. That’s why ChatGPT spits out a shit ton of fluff.

  • peoplebeproblems@midwest.social
    10 days ago

    You mean an AI that literally generates text by applying a mathematical function to input text doesn’t do reasoning for me? (/s)

    I’m pretty certain every programmer alive knew this was coming as soon as we saw people trying to use it years ago.

    It’s funny because I never get what I want out of AI. I’ve been thinking this whole time “am I just too dumb to ask the AI to do what I need?” Now I’m beginning to think “am I not dumb enough to find AI tools useful?”

  • kratoz29@lemm.ee
    9 days ago

    Is that it?

    One of the things I like more about AI is that it explains in detail each command it outputs for you. Granted, I am aware it can hallucinate, so if I have the slightest doubt about it I usually look on the web too (I use it a lot for basic Linux stuff and Docker).

    Would some people not give a fuck about what it says and just copy & paste unknowingly? Sure, but that happened in my teenage days too, when all the info was shared across many blogs and wikis…

    As usual, it is not the AI tool that could fuck up our critical thinking, but we ourselves.

    • Petter1@lemm.ee
      9 days ago

      I see it exactly the same way; I bet you can find similar articles about calculators, PCs, the internet, smartphones, smartwatches, etc.

      Society will handle it sooner or later

        • pulsewidth@lemmy.world
          9 days ago

          A hallucination is a false perception of sensory experiences (sights, sounds, etc).

          LLMs don’t have any senses, they have input, algorithms and output. They also have desired output and undesired output.

          So, no, ‘hallucination’ fits far worse than failure or error or bad output. However, assigning the term ‘hallucination’ does serve the billionaires in marketing their LLMs as actually sentient.

  • Pacattack57@lemmy.world
    10 days ago

    Pretty shit “study”. If workers use AI for a task, obviously the results will be less diverse. That doesn’t mean their critical thinking skills deteriorated. It means they used a tool that produces a certain outcome. This doesn’t test their critical thinking at all.

    “Another noteworthy finding of the study: users who had access to generative AI tools tended to produce “a less diverse set of outcomes for the same task” compared to those without. That passes the sniff test. If you’re using an AI tool to complete a task, you’re going to be limited to what that tool can generate based on its training data. These tools aren’t infinite idea machines, they can only work with what they have, so it checks out that their outputs would be more homogenous. Researchers wrote that this lack of diverse outcomes could be interpreted as a “deterioration of critical thinking” for workers.”

    • 4am@lemm.ee
      10 days ago

      That doesn’t mean their critical thinking skills deteriorated. It means they used a tool that produces a certain outcome.

      Dunning, meet Kruger

      • Womble@lemmy.world
        10 days ago

        That snark doesn’t help anyone.

        Imagine the AI were 100% perfect and gave the correct answer every time: people using it would have a significantly reduced diversity of results, as they would always be using the same tool to get the same correct answer.

        That people using an AI get a smaller diversity of results is neither good nor bad; it’s just the way things are, the same way people using the same pack of pens use a smaller variety of colours than those using whatever pens they have.

        • 4am@lemm.ee
          10 days ago

          First off, the AI isn’t correct 100% of the time, and it never will be.

          Secondly, you too are stating, in so many words, that people stop thinking critically about its output. They accept it.

          That is a lack of critical thinking on the part of the AI users, as well as yourself and the original poster.

          Like, I don’t understand the argument you all are making here - am I going fucking crazy? “Bro it’s not that they don’t think critically, it’s just that they accept whatever they’re given” is the fucking definition of a lack of critical thinking.

  • SplashJackson@lemmy.ca
    10 days ago

    Weren’t these assholes just gung-ho about forcing their shitty “AI” chatbots on us like ten minutes ago? Microsoft can go fuck itself right in the gates.

    • msage@programming.dev
      9 days ago

      Training those AIs was expensive. It swallowed very large sums of VC cash, and they will make it back.

      Remember, their money is way more important than your life.

  • Mouette@jlai.lu
    9 days ago

    The definition of critical thinking is not relying on only one source. Next up: rain will make you wet, stay tuned.

  • ctkatz@lemmy.ml
    10 days ago

    Never used it in any practical function. I tested it to see if it was realistic, and I found it extremely wanting; as in, it sounded nothing like the prompts I gave it.

    The absolutely galling and frightening part is that the tech companies think this is the next big innovation they should be pursuing, and have given up on innovating anywhere else. It was obvious to me when I saw that they are all pushing AI shit on me with everything from keyboards to search results. I only use voice commands to do simple things, and they work just about half the time; AI is built on the back of that, which is why I really do not ever use voice commands for anything anymore.