• 8 Posts
  • 55 Comments
Joined 6 months ago
Cake day: January 4th, 2025


  • other_cat@lemmy.zip to Lemmy Shitpost@lemmy.world · Living a lie
    English · 1 month ago

    Always wondered why some of the CSRs at the call center would get chewed out more than others despite being polite and respectful, and I think you just made me understand: it was because they sounded false. They had a very obvious “customer service” voice. The ones who didn’t tended to just sound like normal folk having a good day. (That’s the secret sauce: they usually weren’t having a good day, but they sounded like they were in on some joke with you, that joke being ‘ahh, talking to people, am I right?’)


  • I also firmly believe it’s to crush the idea that the majority of people don’t like what’s happening. Feels like standard censorship to me. If you look around and it feels like everyone is against you, you don’t have much incentive to speak up. Nobody wants to stand alone. Reddit is such a huge platform for discussion; they want to use that power to shape what is ‘normal’.


  • I don’t think so. A smaller pool does mean smaller odds that someone will take what you are offering and do anything with it, but it’s still possible to effect change, especially if you are asking people to affect things actually within their control.

    But, with that said, your impact will likely be stronger if you communicate with the people near you locally instead of online, since you and those (physically) around you are affected by the same localized forces.

    The internet is a good place to collaborate on ideas and methodologies; your local community is a good place to try to implement those things.


  • I thought all the energy drain was from training, not from prompts? So I looked it up. Like most things, it’s complicated.

    My takeaway is that training an LLM is the biggest energy sink, and after that it’s maintaining the data centers they live in, but when it comes to generative AI itself, prompts aren’t completely innocent either.

    So, you’re right, energy is being wasted on silly prompts, particularly compared with other, non-generative types of AI. But the biggest culprit is still the training and maintenance of the LLMs in the first place. (A rough back-of-envelope sketch of how those two compare is at the end of this comment.)

    I don’t know; I personally feel like I have a finite amount of rage. I’d rather write an angry post on a blog about the topic than yell at some rando on a forum.
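
    Here’s a minimal back-of-envelope sketch of that training-vs-prompts comparison, assuming made-up ballpark figures (none of these numbers are measurements or come from the article; treat every one as a placeholder you’d swap for whatever source you trust):

    ```python
    # Back-of-envelope: one-time training energy vs. cumulative prompt energy.
    # Every number below is an illustrative assumption, not a measurement.

    TRAINING_ENERGY_KWH = 1_300_000   # assumed one-time training run (~1.3 GWh)
    ENERGY_PER_PROMPT_KWH = 0.003     # assumed ~3 Wh per served prompt
    PROMPTS_PER_DAY = 100_000_000     # assumed daily prompt volume

    daily_prompt_kwh = ENERGY_PER_PROMPT_KWH * PROMPTS_PER_DAY
    days_to_match_training = TRAINING_ENERGY_KWH / daily_prompt_kwh

    print(f"Daily prompt energy: {daily_prompt_kwh:,.0f} kWh")
    print(f"Days of prompts to equal one training run: {days_to_match_training:.1f}")
    ```

    With numbers in that ballpark, the one-time training run dominates at first, but cumulative prompt energy catches up within days at scale, which is the “not completely innocent” part.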



  • I will give it this: it’s actually been pretty helpful for learning a new language. What I’ll do is grab an example of working code that’s close to what I want and say “this, but do X.” Then, when the output doesn’t work, I study the differences between the ChatGPT output and the example code to learn why it doesn’t work.

    It’s a weird learning tool but it works for me.