• bss03@infosec.pub · 15 days ago

    duck.ai fails:

    📚 Overview of Key Concepts

    Here’s a breakdown of Critical Race Theory, Unconscious Bias, Intersectionality, Systemic Racism, and Transgenderism:

    Critical Race Theory (CRT)
    Definition: An academic framework that examines the relationship between race, law, and power.
    Key Points:
    - Originated in U.S. legal studies in the late 1970s.
    - Argues that racism is embedded in legal systems and policies.
    - Focuses on the experiences of marginalized groups.

    Unconscious Bias
    Definition: Social stereotypes about certain groups of people that individuals form outside their conscious awareness.
    Key Points:
    - Influences decisions and behaviors unconsciously.
    - Can affect hiring, promotions, and interactions.
    - Awareness and training can help mitigate its effects.

    Intersectionality
    Definition: A framework for understanding how various forms of inequality and discrimination overlap and intersect.
    Key Points:
    - Coined by Kimberlé Crenshaw in the late 1980s.
    - Highlights how race, gender, class, and other identities interact.
    - Emphasizes the unique experiences of individuals with multiple marginalized identities.

    Systemic Racism
    Definition: A form of racism that is embedded in the policies and practices of societal institutions.
    Key Points:
    - Affects access to resources, opportunities, and rights.
    - Can be seen in education, healthcare, criminal justice, and employment.
    - Often perpetuated through historical inequalities.

    Transgenderism
    Definition: Refers to the experience and identity of individuals whose gender identity differs from the sex they were assigned at birth.
    Key Points:
    - Encompasses a range of identities, including transgender, non-binary, and genderqueer.
    - Advocates for rights and recognition in society.
    - Issues include healthcare access, legal recognition, and social acceptance.

    🧐 Detailed Explanations

    📖 Critical Race Theory (CRT)

    Critical Race Theory emerged as a response to perceived limitations in traditional civil rights approaches. It posits that racism is not just an individual bias but a systemic issue that is ingrained in the fabric of society. CRT scholars analyze how laws and policies can perpetuate racial inequalities, advocating for a more nuanced understanding of race and its implications in various contexts.

    🧠 Unconscious Bias

    Unconscious Bias refers to the automatic judgments we make about people based on their race, gender, age, or other characteristics. These biases can lead to unintentional discrimination in various settings, such as workplaces or educational institutions. Training programs aimed at increasing awareness of these biases can help individuals recognize and counteract their effects.

    🌈 Intersectionality

    Intersectionality is crucial for understanding the complexities of identity and oppression. It recognizes that individuals can face multiple forms of discrimination simultaneously. For example, a Black woman may experience both racism and sexism, which can create unique challenges that are not fully addressed by looking at either issue in isolation.

    🌍 Systemic Racism

    Systemic Racism highlights how societal structures and institutions can perpetuate racial inequalities. This concept emphasizes that racism is not just about individual actions but is embedded in policies and practices that disadvantage certain racial groups. Addressing systemic racism requires comprehensive reforms across various sectors, including education, healthcare, and criminal justice.

    🏳️‍⚧️ Transgenderism

    Transgenderism encompasses a wide range of identities and experiences related to gender. Advocacy for transgender rights focuses on issues such as legal recognition, healthcare access, and protection against discrimination. The movement seeks to create a more inclusive society that respects and affirms diverse gender identities.


    These concepts are interconnected and play significant roles in discussions about social justice, equality, and human rights. If you have specific questions about any of these topics or want to explore them further, feel free to ask!

  • brvslvrnst@lemmy.ml · 15 days ago

    For example, one major AI model changed the race or sex of historical figures — including the Pope, the Founding Fathers, and Vikings — when prompted for images because it was trained to prioritize DEI requirements at the cost of accuracy. Another AI model refused to produce images celebrating the achievements of white people, even while complying with the same request for people of other races.

    Ahhh, so white men are the victims of woke AI, got it.

    • Genius@lemmy.zip · 15 days ago

      If I were asked to list the achievements of white people and I had to do it, I’d be real catty about it. “White people invented sitcoms, gas chambers, and the word ‘staycation’”

  • db2@lemmy.world · 16 days ago

    Since Trump’s brain is stuck in the ’90s, I’ll put this in a fashion that’s period-appropriate: That’s fuckin retarded.

  • Irdial@lemmy.sdf.org · 15 days ago

    So the administration wants to win the AI race through deregulation, except they want to regulate its social compass. Make it make sense

  • Xaphanos@lemmy.world · 15 days ago

    Who is responsible? The SW creator? The trainer? The source of the training data? The hosting data center?

    What are the penalties? Who enforces this? Who investigates?

    • The Octonaut@mander.xyz · 15 days ago

      lol look at this guy still thinking in terms of “the crime comes first”.

      You arrest the dissident, then choose which ‘law’ they’ve broken. Don’t worry about the details. ‘Who is responsible’? The guy you just arrested. Duh.

  • Thoralf Will@discuss.tchncs.de · 15 days ago

    Do they call it „Newspeak“ yet or does it take another couple of months until they do?

    Damn, the US deteriorated quickly into a total shithole! Not much left to go until it’s worse than China.

    • kingthrillgore@lemmy.ml · 15 days ago

      China is fucking kicking ass, what are you talking about? We’re on track to become Russia – where nothing works, everyone drinks, and you’re fodder for the war machine.

      • orca@orcas.enjoying.yachts · 15 days ago

        This is more accurate. China is fucking killing it. Far from perfect but somehow the US has turned into a third world shithole despite having the biggest budget in the world. China takes care of its people and actually sends help when natural disasters hit.

    • mic_check_one_two@lemmy.dbzer0.com · 15 days ago

      Because Executive Orders aren’t laws. They’re just guidelines for the executive branch of the federal government, which the POTUS is in charge of. They can’t affect private entities like AI businesses, because that would require an actual act of Congress.

      Notably, this could determine what kinds of contracts the executive branch is able to make. For instance, maybe the government wants to contract out an LLM instead of building its own. This EO could affect which companies are able to bid on that contract, by adding these same restrictions to any LLM they provide. But on its own, the EO is just that: an order to the executive branch of the federal government.

    • floofloof@lemmy.ca · 15 days ago

      Watch all the AI companies scramble to comply in a quest for government contracts. This will affect everyone who uses American LLMs and generative AI.

      It should also open an opportunity for international competition from less censored models.

      • bigfondue@lemmy.world · 15 days ago

        And this is one of the best arguments against depending on LLMs. People are outsourcing their thinking to linear algebra machines owned by the wealthy. LLMs are a tool of social control.

      • Tony Bark@pawb.social · 15 days ago

        Considering how much cash they regularly bleed, I can see them jumping on the government-contract bandwagon quickly.

      • leftytighty@slrpnk.net · 15 days ago

        To be fair to the executive order (ugh), many of the examples cited are due to well-intentioned system prompts that encourage the LLM to actively be diverse.

        The example of a female pope or whatever (I read about this earlier) is a case of that.

        Generally speaking, LLMs have a left bias because they’re trained on information (unlike conservatives), but the system prompts aren’t necessarily asking the models to be censored.

    • panda_abyss@lemmy.ca · 15 days ago

      But for anything the US feds contracted them for, like building data centres, they have to comply or face penalties and pay back all the costs.

      Ten days ago, a week before this was announced, the feds awarded $200M contracts each to Anthropic, OpenAI, Google, and xAI.

      This doesn’t doom the public versions, but the companies now have a pretty strong incentive to save money and make those versions comply with the US government’s new definition of truth.

    • forrgott@lemmy.sdf.org · 15 days ago

      Well, in practice, no.

      Do you think any corporation is going to bother making a separate model for government contracts versus any other use? I mean, why would they? So unless you can pony up enough cash to compete with a lucrative government contract (and the fact that none of us can is, in fact, the whole point), the end result will involve these requirements being adopted by the overwhelming majority of generative AI available on the market.

      So in reality, no, this absolutely will not be limited to models purchased by the feds. Frankly, I believe thinking otherwise is dangerously naive.

      • itsame@lemmy.world · 15 days ago

        No. You would use a base model (e.g., GPT-4o) to get a reliable language model, then add a set of rules for the chatbot to follow. Every company has its own rules; the approach is already widely used to add data like company-specific manuals and support documents. Not rocket science at all.
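
        Roughly what that layering looks like in practice, as a minimal sketch with the OpenAI Python client (the company name and rule text below are made up; any vendor’s chat API works the same way, with the “rules” just a system message prepended to every conversation):

            from openai import OpenAI

            client = OpenAI()  # reads OPENAI_API_KEY from the environment

            # Hypothetical company-specific rules; the base model itself is untouched.
            COMPANY_RULES = """\
            You are SupportBot for ExampleCorp.
            - Answer only from the provided product manuals.
            - Decline questions unrelated to product support.
            """

            response = client.chat.completions.create(
                model="gpt-4o",
                messages=[
                    {"role": "system", "content": COMPANY_RULES},
                    {"role": "user", "content": "How do I reset my router?"},
                ],
            )
            print(response.choices[0].message.content)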

        • forrgott@lemmy.sdf.org · 15 days ago

          There are so many examples of this method failing that I don’t even know where to start. The most visible, of course, was how that approach failed to stop Grok from “being woke” for, like, a year or more.

          Frankly, you sound like you’re talking straight out of your ass.

          • itsame@lemmy.world · 15 days ago

            Sure, it can go wrong; it is not foolproof. Just like building a new model can cause unwanted surprises.

            BTW, there are many theories about Grok’s unethical behavior, but this one is new to me. The causes I was familiar with are: unfiltered training data, no ethical output restrictions, programming errors or incorrect system maintenance, strategic errors (Elon!), and publishing before proper testing.

      • MrMcGasion@lemmy.world · 15 days ago

        Based on the attempts at censoring AI output we’ve seen so far, there doesn’t seem to be a way to actually do this without building a new model on pre-censored training data.

        Sure, they can tune models, but even “MechaHitler” Grok was still giving some “woke” answers on occasion. I don’t see how this doesn’t either destroy AI’s “usefulness” (not that there’s any usefulness there to begin with) or cost so much to implement that investors pull out: none of the AI companies are profitable, and throwing billions more at sifting through and filtering the training data pushes profitability even further away (if censoring all the training data is even possible at all).
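
        To make the cost argument concrete: even the crudest pre-censoring pass has to touch every document in the corpus. A toy sketch (the blocklist terms are placeholders; a real pipeline would need trained classifiers run over trillions of tokens, which costs far more per document):

            # Toy pre-filter over a training corpus using a keyword blocklist.
            # Illustrative only: real "censored training data" would need
            # classifiers and human review applied to every single document.

            BLOCKLIST = {"forbidden_topic_a", "forbidden_topic_b"}  # placeholders

            def keep(document: str) -> bool:
                """Return True if the document mentions no blocklisted term."""
                text = document.lower()
                return not any(term in text for term in BLOCKLIST)

            def filter_corpus(docs):
                """Yield only the documents that pass the blocklist check."""
                for doc in docs:
                    if keep(doc):
                        yield doc

            corpus = ["a clean document", "this one mentions forbidden_topic_a"]
            print(list(filter_corpus(corpus)))  # -> ['a clean document']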

    • dontmindmehere@lemmy.world · 15 days ago

      Honestly this order seems empty. Does the government even have a need for general LLMs? Why would they need an AI to answer simple questions?

      As much as I dislike Trump, this shouldn’t impact any AI available to the general public.

      • WhyJiffie@sh.itjust.works · 15 days ago

        Why would they need an AI to answer simple questions?

        to shift blame and responsibility, to create a more modern deity, …

      • Feyd@programming.dev · 15 days ago

        Does the government even have a need for general LLMs?

        Will this stop them from spending our hard-earned tax money on it?

      • jerakor@startrek.website · 15 days ago

        Would you rather our current administration make their decisions by using the lowest-bidder LLM, or their own brains?

  • blattrules@lemmy.world · 15 days ago

    I thought he was going to deregulate AI; this seems like regulation to me. Add it to the mountain of lies.

  • ArbitraryValue@sh.itjust.works · 15 days ago

    Are there currently any government contracts put at risk by this? I didn’t think that the feds were major spenders on AI. And is Trump aware that Musk is currently the one man trying to provide the sort of AI that Trump wants?

  • wjrii@lemmy.world · 15 days ago

    LLMs shall prioritize historical accuracy, scientific inquiry, and objectivity, and shall acknowledge uncertainty where reliable information is incomplete or contradictory.

    This may not go how they think it will. As an aside, for the moment at least, this is only for AI used/procured by the federal government.

  • Basic Glitch@sh.itjust.works · 14 days ago

    So by EO all AI must be biased, but we don’t need to worry about bias in AI.

    You can pick your bias. You can pick your AI. But you can’t pick your AI’s bias (bc it has already been baked in by government mandate).

  • dhork@lemmy.world · 15 days ago

    I read through the thing, and it’s a doozy. He seems triggered by the fact that AI might make a picture where Washington is a black man, or Hamilton is Latino. I hope he hasn’t been to Broadway lately…