The new global study, conducted in partnership with The Upwork Research Institute, interviewed 2,500 C-suite executives, full-time employees, and freelancers worldwide. The results show that optimistic expectations about AI’s impact are not aligning with the reality many employees face. The study identifies a disconnect between the high expectations of managers and the actual experiences of employees using AI.

Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that 77% of employees using AI say it has added to their workload and created challenges in achieving the expected productivity gains. Not only is AI increasing the workloads of full-time employees, it’s also hampering productivity and contributing to employee burnout.

  • Nobody@lemmy.world · 3 months ago

    You mean the multi-billion dollar, souped-up autocorrect might not actually be able to replace the human workforce? I am shocked, shocked I say!

    Do you think Sam Altman might have… gasp lied to his investors about its capabilities?

    • SlopppyEngineer@lemmy.world · 3 months ago

      Nooooo. I mean, we have about 80 years of history in AI research, and the field is full of overhyped promises that this particular tech is the holy grail of AI, only to end in disappointment each time, but this time will be different! /s

      • Nobody@lemmy.world · 3 months ago

        Yeah, OpenAI, ChatGPT, and Sam Altman have no relevance to AI LLMs. No idea what I was thinking.

        • Hackworth@lemmy.world · 3 months ago

          I prefer Claude, usually, but the article also does not mention LLMs. I use generative audio, image generation, and video generation at work as often if not more than text generators.

          • Nobody@lemmy.world · 3 months ago

            Good point, but LLMs are both ubiquitous and the public face of “AI.” I think it’s fair to assign them a decent share of the blame for overpromising and underdelivering.

      • FaceDeer@fedia.io · 3 months ago

        Aha, so this must all be Elon’s fault! And Microsoft!

        There are lots of whipping boys these days that one can leap to criticize and get free upvotes.

  • FartsWithAnAccent@fedia.io · 3 months ago

    They tried implementing AI in a few of our systems and the results were always fucking useless. Maybe what we call “AI” can be helpful in some ways, but I’d bet the vast majority of it is bullshit.

    • The Menemen!@lemmy.world · 3 months ago

      It is great for pattern recognition (we use it to recognize damages in pipes) and probably pattern reproduction (never used it for that). Haven’t really seen much other real life value.

    • speeding_slug@feddit.nl · 3 months ago

      And that’s without even considering the consequences of deploying systems that may farm your company data in order to train their models “to better serve you”. Like, what the hell, guys?

    • DragonTypeWyvern@midwest.social · 3 months ago

      The one thing “AI” has improved in my life has been a banking app search function being slightly better.

      Oh, and a porn game did okay with it as an art generator, but the creator was still strangely lazy about it. You’re telling me you can make infinite free pictures of big tittied goth girls and you only included a few?

      • MindTraveller@lemmy.ca · 3 months ago

        Generating multiple pictures of the same character is actually pretty hard. For example, let’s say you’re making a visual novel with a bunch of anime girls. You spin up your generative AI, and it gives you a great picture of a girl with a good design in a neutral pose. We’ll call her Alice. Well, now you need a happy Alice, a sad Alice, a horny Alice, an Alice with her face covered with cum, a nude Alice, and a hyper breast expansion Alice. Getting the AI to recreate Alice, who does not exist in the training data, is going to be very difficult even once.

        And all of this is multiplied ten times over if you want granular changes to a character. Let’s say you’re making a fat fetish game and Alice is supposed to gain weight as the player feeds her. Now you need everything I described, at 10 different weights. You’re going to need to be extremely specific with the AI and it’s probably going to produce dozens of incorrect pictures for every time it gets it right. Getting it right might just plain be impossible if the AI doesn’t understand the assignment well enough.

        • okwhateverdude@lemmy.world · 3 months ago

          This is a solvable problem. Just make a LoRA of the Alice character. For modifications to the character, you might also need to make more LoRAs, but again totally doable. Then at runtime, you are just shuffling LoRAs when you need to generate.
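
          A minimal sketch of that runtime LoRA shuffling, assuming Stable Diffusion XL via the diffusers library; the character LoRA filenames, folder, and prompts are hypothetical placeholders:

          ```python
          # Sketch: swap per-variant character LoRAs at generation time with diffusers.
          # The LoRA files (alice_base / alice_happy) are hypothetical and would be
          # trained beforehand on renders of the same character.
          import torch
          from diffusers import StableDiffusionXLPipeline

          pipe = StableDiffusionXLPipeline.from_pretrained(
              "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
          ).to("cuda")

          variants = {
              "neutral": "alice_base.safetensors",
              "happy": "alice_happy.safetensors",
          }

          for mood, weight_file in variants.items():
              pipe.load_lora_weights("./loras", weight_name=weight_file)  # load this variant
              image = pipe(f"alice, {mood} expression, visual novel sprite").images[0]
              image.save(f"alice_{mood}.png")
              pipe.unload_lora_weights()  # clear it before loading the next one
          ```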

          You’re correct that it will struggle to give you exactly what you want, because you need to have some “machine sympathy.” If you think in smaller steps and get the machine to do those smaller, more doable steps, you can eventually accomplish the overall goal. It is the difference between asking a model to write a story and asking it to first generate characters, a scenario, and a plot, then using that as context to write just a small part of the story. The first story will be bland and incoherent after a while. The second, through better context control, will weave you a pretty consistent story.

          These models are not magic (even though it feels like it). That they follow instructions at all is amazing, but they simply will not get the nuance of the overall picture and be able to accomplish it un-aided. If you think of them as natural language processors capable of simple, mechanical tasks and drive them mechanistically, you’ll get much better results.

        • This is fine🔥🐶☕🔥@lemmy.world · 3 months ago

          Generating multiple pictures of the same character is actually pretty hard.

          Not from what I have seen on Civitai. You can train a model on a specific character or person. The same goes for facial expressions.

          Of course you need to generate hundreds of images to get only a few that you might consider acceptable.

      • FartsWithAnAccent@fedia.io · 3 months ago

        Looking like they were doing something with AI, no joke.

        One example was “Freddy”, an AI for a ticketing system called Freshdesk: It would try to suggest other tickets it thought were related or helpful but they were, not one fucking time, related or helpful.

        • Dave.@aussie.zone · 3 months ago

          As an Australian I find the name Freddy quite apt then.

          There is an old saying in Aus that runs along the lines of, “even Blind Freddy could see that…”, indicating that the solution is so obvious that even a blind person could see it.

          Having your Freddy be Blind Freddy makes its useless answers completely expected. Maybe that was the devs’ internal name for it and it escaped to marketing haha.

          • FartsWithAnAccent@fedia.io · 3 months ago

            I actually ended up becoming blind to Freddy because of how profoundly useless it was: I permanently blocked the webpage elements that showed it from my browser lol. I think Fresh has since given up.

            Don’t get me wrong, the rest of the service is actually pretty great and I’d recommend Fresh to anyone in search of a decent ticketing system. Freddy sucks though.

        • Hackworth@lemmy.world · 3 months ago

          Ahh, those things - I’ve seen half a dozen platforms implement some version of that, and they’re always garbage. It’s such a weird choice, too, since we already have semi-useful recommendation systems that run on traditional algorithms.

        • MentallyExhausted@reddthat.com · 3 months ago

          That’s pretty funny, since manually searching some keywords can usually provide helpful data. It should be pretty straightforward to automate even without an LLM.
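
          A minimal sketch of that kind of non-LLM automation, using plain TF-IDF keyword similarity over existing tickets (scikit-learn); the ticket texts are made up:

          ```python
          # Suggest related tickets by plain TF-IDF keyword similarity, no LLM involved.
          from sklearn.feature_extraction.text import TfidfVectorizer
          from sklearn.metrics.pairwise import cosine_similarity

          # Hypothetical existing tickets / KB articles.
          tickets = [
              "VPN drops every 30 minutes on Windows 11",
              "Cannot reset password for shared mailbox",
              "Printer on 3rd floor shows offline",
          ]
          new_ticket = "user unable to reset password on shared mailbox"

          vec = TfidfVectorizer(stop_words="english")
          matrix = vec.fit_transform(tickets + [new_ticket])

          # Similarity of the new ticket against every existing one, best match first.
          scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
          for score, text in sorted(zip(scores, tickets), reverse=True):
              print(f"{score:.2f}  {text}")
          ```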

          • FartsWithAnAccent@fedia.io · 3 months ago

            Yep, we already wrote out all the documentation for everything too so it’s doubly useless lol. It sucked at pulling relevant KB articles too even though there are fields for everything. A written script for it would have been trivial to make if they wanted to make something helpful, but they really just wanted to get on that AI hype train regardless of usefulness.

  • Hackworth@lemmy.world · 3 months ago

    I have the opposite problem. Gen A.I. has tripled my productivity, but the C-suite here is barely catching up to 2005.

          • Hackworth@lemmy.world · 3 months ago

            “Soup to nuts” just means I am responsible for the entirety of the process, from pre-production to post-production. Sometimes that’s like a dozen roles. Sometimes it’s me.

        • Flying Squid@lemmy.world · 3 months ago

          Cool, enjoy your entire industry going under thanks to cheap and free software and executives telling their middle managers to just shoot and cut it on their phone.

          Sincerely,

          A former video editor.

          • Hackworth@lemmy.world · 3 months ago

            If something can be effectively automated, why would I want to continue to invest energy into doing it manually? That’s literal busy work.

                • Flying Squid@lemmy.world · 3 months ago

                  Video editing is not busy work. You’re excusing executives telling middle managers to put out inferior videos to save money.

                  You seem to think what I used to do was just cutting and pasting and had nothing to do with things like understanding film making techniques, the psychology of choosing and arranging certain shots, and making do with what you have when you don’t have enough to work with.

                  But they don’t care about that anymore because it costs money. Good luck getting an AI to do that as well as a human any time soon. They don’t care because they save money this way.

    • themurphy@lemmy.ml · 3 months ago

      Same, I’ve automated a lot of my tasks with AI. No way 77% is “hampered” by it.

        • themurphy@lemmy.ml · 3 months ago

          I’m not working in tech either. Everyone relying on a computer can use this.

          Also, medicine and radiology are two areas that will benefit from this - especially the patients.

        • Hackworth@lemmy.world · 3 months ago

          Voiceover recording, noise reduction, rotoscoping, motion tracking, matte painting, transcription - and there’s a clear path forward to automate rough cuts and integrate all that with digital asset management. I used to do all of those things manually/practically.

          • WalnutLum@lemmy.ml · 3 months ago

            All the models I’ve used that do TTS/RVC and rotoscoping have definitely not produced professional results.

            • Hackworth@lemmy.world · 3 months ago

              What are you using? Cause if you’re a professional, and this is your experience, I’d think you’d want to ask me what I’m using.

              • WalnutLum@lemmy.ml · 3 months ago

                Coqui for TTS, RVC UI for matching the TTS to the actor’s intonation, and DWPose -> controlnet applied to SDXL for rotoscoping
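
                For reference, a minimal sketch of the Coqui TTS step in that pipeline as I understand its Python API; the model name is one of Coqui’s published XTTS models, and the reference clip and filenames are placeholders:

                ```python
                # Sketch: synthesize one line with Coqui TTS (XTTS v2), cloning the voice from
                # a short reference clip; the output then goes through RVC for intonation.
                from TTS.api import TTS

                tts = TTS(model_name="tts_models/multilingual/multi-dataset/xtts_v2")
                tts.tts_to_file(
                    text="Line of dialogue to synthesize.",
                    speaker_wav="actor_reference.wav",  # placeholder reference recording
                    language="en",
                    file_path="line_001_raw.wav",
                )
                ```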

                • Hackworth@lemmy.world · 3 months ago

                  Full open source, nice! I respect the effort that went into that implementation. I pretty much exclusively use 11 Labs for TTS/RVC, turn up the style, turn down the stability, generate a few, and pick the best. I do find that longer generations tend to lose the thread, so it’s better to batch smaller script segments.
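
                  A rough sketch of that segment-batching approach against what I believe is the ElevenLabs REST endpoint; the key, voice ID, and settings values are placeholders, and the field names should be treated as assumptions rather than gospel:

                  ```python
                  # Sketch: generate short script segments one at a time so long reads
                  # don't "lose the thread"; endpoint/fields assumed from ElevenLabs' docs.
                  import requests

                  API_KEY = "YOUR_KEY"        # placeholder
                  VOICE_ID = "YOUR_VOICE_ID"  # placeholder
                  URL = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"

                  segments = [
                      "First short chunk of the script.",
                      "Second short chunk, generated separately.",
                  ]

                  for i, text in enumerate(segments):
                      resp = requests.post(
                          URL,
                          headers={"xi-api-key": API_KEY},
                          json={
                              "text": text,
                              # high style, low stability, as described above
                              "voice_settings": {"stability": 0.3, "style": 0.8},
                          },
                      )
                      resp.raise_for_status()
                      with open(f"segment_{i:02d}.mp3", "wb") as f:
                          f.write(resp.content)
                  ```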

                  Unless I misunderstand ya, your controlnet setup is for what would be rigging and animation rather than roto. I do agree that while I enjoy the outputs of pretty much all the automated animators, they’re not ready for prime time yet. Although I’m about to dive into KREA’s new key framing feature and see if that’s any better for that use case.

          • aesthelete@lemmy.world · 3 months ago

            imagine the downvotes coming from the same people that 20 years ago told me digital video would never match the artistry of film.

            They’re right IMO. Practical effects still look and age better than (IMO very obvious) digital effects. Oh and digital deaging IMO looks like crap.

            But, this will always remain an opinion battle anyway, because quantifying “artistry” is in and of itself a fool’s errand.

            • Hackworth@lemmy.world · 3 months ago

              Digital video, not digital effects - I mean the guys I went to film school with that refused to touch digital videography.

      • Hackworth@lemmy.world · 3 months ago

        I dunno, mishandling of AI can be worse than avoiding it entirely. There’s a middle manager here that runs everything her direct-report copywriter sends through ChatGPT, then sends the response back as a revision. She doesn’t add any context to the prompt, say who the audience is, or use the custom GPT that I made and shared. That copywriter is definitely hampered, but it’s not by AI, really, just run-of-the-mill manager PEBKAC.

      • FaceDeer@fedia.io · 3 months ago

        A lot of people are keen to hear that AI is bad, though, so the clicks go through on articles like this anyway.

  • barsquid@lemmy.world · 3 months ago

    Wow shockingly employing a virtual dumbass who is confidently wrong all the time doesn’t help people finish their tasks.

    • Etterra@lemmy.world · 3 months ago

      It’s like employing a perpetually high idiot, but more productive while also being less useful. Instead of slow medicine you get fast garbage!

    • demizerone@lemmy.world · 3 months ago

      My dumbass friend, who is overconfident about how smart he is, is switching to Linux because of open-source AI. I can’t wait to see what he learns.

  • Sk1ll_Issue@feddit.nl · 3 months ago

    The study identifies a disconnect between the high expectations of managers and the actual experiences of employees

    Did we really need a study for that?

  • Sanctus@lemmy.world · 3 months ago

    AI is better when I use it for item generation. It kicks ass at generating loot drops for encounters. All I really have to do is adjust item names if it’s not a mundane weapon. I do occasionally change an item completely because its effects can get bland, but I don’t do much more than that.

    • ClamDrinker@lemmy.world · 3 months ago

      That’s because you’re using AI for the correct thing. As others have pointed out, if AI usage is enforced (like in the article), chances are they’re not using AI correctly. It’s not a miracle cure for everything and should just be used when it’s useful. It’s great for brainstorming. Game development (especially on the indie side of things) really benefits from being able to produce more with less. Or are you using it for DnD?

          • Ragnarok314159@sopuli.xyz · 3 months ago

            Right now the choice is Dark Souls bosses who are mean, scripted stories (although BG3 is good), or people online who have sex with my mother.

            LLM chat bots just open up new possibilities.

        • rottingleaf@lemmy.world · 3 months ago

          You mean…I might finally be able to play that game?!? Hurray!

          Once every couple of weeks I go somewhere to play it or similar games. I can’t follow, feel awkward, get sensory overload and a headache, get terribly tired, and come home depressed over a wasted day.

          That is, once in 3-5 games I feel that maybe it wasn’t that bad.

          • Ragnarok314159@sopuli.xyz · 3 months ago

            The groups I learned of were really weird about letting anyone else show up. Was told I had to form my own group and write my own adventures.

            Thank you, fellow nerds.

            • rottingleaf@lemmy.world · 3 months ago

              It’s the other way around for me. I wanted to play in a Star Wars KotOR setting; one time one guy showed up (but only over voice call), another time my buddy agreed to play.

              Then I wrote something in one DM’s setting; only that DM showed up, said the quest was actually cool with good ideas yadda-yadda, mentioned it in another game, and later reused some of its moments in his own ones.

              But when I come to other DMs’ games, I seem to be welcome.

              Was told I had to form my own group and write my own adventures.

              I think either they didn’t like you, or your way of playing busted something in the quest their DM wrote, or something like that.

              • Ragnarok314159@sopuli.xyz · 3 months ago

                It was probably the latter. Because if they didn’t like me that is much worse for a multitude of reasons.

      • Sanctus@lemmy.world · 3 months ago

        I use it for tabletops lol I haven’t thrown any game dev ideas in there but that might be because I already have a backlog of projects cause I’m that guy.

  • superkret@feddit.org · 3 months ago

    The other 23% were replaced by AI (actually, their workload was added to that of the 77%)

  • iAvicenna@lemmy.world · 3 months ago

    Because on top of your duties, you now have to check whatever the AI is doing in place of the employee it replaced.

  • dezmd@lemmy.world · 3 months ago

    The Upwork Research Institute

    Not exactly a paragon of rigorous scientific study.

  • tvbusy@lemmy.dbzer0.com · 3 months ago

    This study failed to take into consideration the need to feed information to AI. Companies now prioritize feeding information to AI over actually making it usable for humans. Who cares about analyzing the data? Just give it to AI to figure out. Now data cannot be analyzed by humans? Just ask AI. It can’t figure out? Give it more so it can figure it out. Rinse, repeat. This is a race to the bottom where information is useless to humans.

  • JohnnyH842@lemmy.world · 3 months ago

    Admittedly I only skimmed the article, but I think one of the major problems with a study like this is how broad “AI” really is. MS Copilot is just Bing search in a different form unless you have it hooked up to your organization’s data stores, collaboration platforms, productivity applications, etc., and on its own it is not really helpful at all. Lots of companies I speak with are in a Copilot pilot phase that doesn’t show much value, because giving it access to the organization’s data is a big security challenge. On the other hand, a chatbot inside a specific product, trained on that product and with access to the data it needs, can return valuable answers to prompts (and even assist in writing them), which can be pretty powerful.
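
    A rough sketch of that grounded-chatbot pattern: retrieve a few relevant internal documents first, then hand them to the model as context. The two-entry document store, the keyword retrieval step, and the model name below are illustrative stand-ins, not any particular vendor’s product:

    ```python
    # Sketch: ground the model's answer in retrieved product documentation.
    from openai import OpenAI

    docs = {
        "billing": "Invoices are generated on the 1st; proration applies on upgrades.",
        "sso": "SAML SSO is configured under Admin > Security > Identity Provider.",
    }

    def retrieve(question: str) -> str:
        # Stand-in retrieval step; a real deployment would search the org's own
        # data stores (embeddings, keyword index, etc.).
        return "\n".join(text for key, text in docs.items() if key in question.lower())

    question = "How do I set up SSO for my team?"
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using only this product documentation:\n" + retrieve(question)},
            {"role": "user", "content": question},
        ],
    )
    print(reply.choices[0].message.content)
    ```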

  • alienanimals@lemmy.world · 3 months ago

    The billionaire owner class continues to treat everyone like shit. They blame AI and the idiots eat it up.

    • kent_eh@lemmy.ca · 3 months ago

      Except it didn’t make more jobs, it just made more work for the remaining employees who weren’t laid off (because the boss thought the AI could let them have a smaller payroll)

    • hswolf@lemmy.world · 3 months ago

      It also helps you get a starting point when you don’t know how to ask a search engine the right question.

      But people misinterpret its usefulness and think it can handle complex and context-heavy problems, which most of the time results in hallucinated crap.

    • captainlezbian@lemmy.world · 3 months ago

      And are those use cases common and publicized? Because I see it being advertised as “improves productivity.” For a novel tool with myriad uses, I expect those trying to sell it to me to give me some vignettes, not just tell my boss it’ll improve my productivity. And if I were in management, I’d want to know how it’ll do that beyond just saying “it’ll assist in easy and menial tasks.” Will it be easier than doing them? Many tools can improve efficiency on a task at a similar time and energy investment to the return. Are those tasks really so common? Will other tools be worse?

    • jjjalljs@ttrpg.network · 3 months ago

      I mean if it’s easy you can probably script it with some other tool.

      “I have a list of IDs and need to make them links to our internal tool’s pages” is easy and doesn’t need AI. That’s something a product guy was struggling with, and I solved it in like 30 seconds with a Google Sheet and concatenation.
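
      For illustration, the same ID-to-link transformation as a throwaway script; the base URL is a made-up stand-in for the internal tool (in a sheet it’s just one concatenation formula per row):

      ```python
      # Turn a list of IDs into links to an internal tool's pages.
      ids = ["1042", "1043", "1057"]  # placeholder IDs
      links = [f"https://internal.example.com/items/{item_id}" for item_id in ids]
      print("\n".join(links))
      ```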

      • silasmariner@programming.dev · 3 months ago

        Yeah but the idea of AI in that kind of workflow is so that the product guy can actually do it themselves without asking you and in less than 30 mins

        • jjjalljs@ttrpg.network · 3 months ago

          Yeah but that’s like using an entire gasoline powered car to play a CD.

          Competent product guy should be able to learn some simpler tools like Google sheets.

          • silasmariner@programming.dev · 3 months ago

            No arguments from me that it’s better if people are just better at their job, and I like to think I’m good at mine too, but let’s be real - a lot of people are out of their depth and I can imagine it can help there. OTOH is it worth the investment in time (from people who could themselves presumably be doing astonishing things) and carbon energy? Probably not. I appreciate that the tech exists and it needs to, but shoehorning it in everywhere is clearly bollocks. I just don’t know yet how people will find it useful and I guess not everyone gets that spending an hour learning to do something that takes 10s when you know how is often better than spending 5 mins making someone or something else do it for you… And TBF to them, they might be right if they only ever do the thing twice.

            • Balder@lemmy.world · 3 months ago

              I think the actual problem here is that if the product people can’t learn such a simple thing by themselves, they also won’t be able to correctly prompt the LLM to their use case.

              That said, I do think LLMs can boost productivity a lot. I’m learning a new framework, and since there are so many details to learn about it, it’s fast to ask ChatGPT what’s the proper way to do X in this framework, etc. Although that only works because I already studied the framework’s foundational concepts first.

              • silasmariner@programming.dev · 3 months ago

                I think the actual problem is that they won’t know when they’ve got something that compiles but is wrong… I dunno though. I’ve never seen someone doing this and I can only speculate tbh. I only ever asked ChatGPT a couple of times, as a joke to myself when I got stuck, and it spouted completely useless nonsense both times… Although on one occasion the wrong code it produced looked like it had the pattern of a good idiom behind it and I stole that.

        • Melvin_Ferd@lemmy.world · 3 months ago

          Replace “the Joker” with “the media”, and replace “distract you from the bank heist” with “convince you to hate AI”, then yes.

          • Flying Squid@lemmy.world · 3 months ago

            Do convince us why we should like something which is a massive ecological disaster in terms of fresh water and energy usage.

            Feel free to do it while denying climate change is a problem if you wish.

            • Melvin_Ferd@lemmy.world · 3 months ago

              I wrote this and fed it through ChatGPT to help make it more readable. To me that’s pretty awesome. If I wanted, I could have it written like an Elton John song. If that doesn’t convince you it’s fun and worth it, then maybe the argument below will, or not. Either way, I like it.


              I don’t think I’ll convince you, but there are a lot of arguments to make here.

              I heard a large AI model is equivalent to the emissions from five cars over its lifetime. And yes, the water usage is significant—something like 15 billion gallons a year just for a Microsoft data center. But that’s not just for AI; data centers are something we use even if we never touch AI. So, absent of AI, it’s not like we’re up in arms about the waste and usage from other technologies. AI is being singled out—it’s the star of the show right now.

              But here’s why I think we should embrace it: the potential. I’m an optimist and I love technology. AI bridges gaps in so many areas, making things that were previously difficult much easier for many people. It can be an equalizer in various fields.

              The potential with AI is fascinating to me. It could bring significant improvements in many sectors. Think about analyzing and optimizing power grids, making medical advances, improving economic forecasting, and creating jobs. It can reduce mundane tasks through personalized AI, like helping doctors take notes and process paperwork, freeing them up to see more patients.

              Sure, it consumes energy and has costs, but its potential is huge. It’s here and advancing. If we keep letting the media convince us to hate it, this technology will end up hoarded by elites and possibly even made illegal for the rest of us. Imagine having a pocket advisor for anything—mechanical issues, legal questions, gardening problems, medical concerns. We’re not there yet, but remember, the first cell phones were the size of a brick. The potential is enormous, and considering all the things we waste energy and resources on, this one should be weighed against its benefits.

              • Flying Squid@lemmy.world · 3 months ago

                Not being able to use your own words to explain something to me, and instead having the thing that is an ecological disaster and that lies all the time explain it for you, really only reinforces my point that there’s no reason to like this technology.

                • Melvin_Ferd@lemmy.world · 3 months ago

                  It is my own words. I wrote out the whole thing, but I was never good with grammar and fully admit that what I write is often confusing or ambiguous. I can leverage ChatGPT the same way I would leverage spell check in Word. I don’t see any problems there.

                  But if you don’t mind, I’m interested in the points discussed.

              • Melvin_Ferd@lemmy.world · 3 months ago

                For the curious, the message rewritten as lyrics for an Elton John song:

                (Verse 1) I don’t think I’ll convince you, but I’ve got a tale to tell, They say AI’s like five cars, burning fuel and raising hell. And the water that it guzzles, like rivers running dry, Fifteen billion gallons, under Microsoft’s sky.

                (Pre-Chorus) But it’s not just AI, oh, it’s every data node, Even if you never touch it, it’s a heavy load. We point fingers at AI, like it’s the star tonight, But let me tell you why I think it shines so bright.

                (Chorus) Oh, the potential, can’t you see, It’s the future calling, setting us free. Bridging gaps and making life easier, An equalizer, for you and me.

                (Verse 2) I’m an optimist, a techie at heart, AI could change the world, give us a brand new start. From power grids to medicine, it’s a helping hand, Economic dreams and jobs across the land.

                (Pre-Chorus) Yes, it drinks up energy, but what’s the price to pay? For the chance to see the mundane fade away. Imagine doctors with more time to heal, While AI handles notes, it’s a real deal.

                (Chorus) Oh, the potential, can’t you see, It’s the future calling, setting us free. Bridging gaps and making life easier, An equalizer, for you and me.

                (Bridge) If we let the media twist our minds, We’ll lose this gift to the elite, left behind. But picture this, a pocket guide for all, From car troubles to legal calls.

                (Chorus) Oh, the potential, can’t you see, It’s the future calling, setting us free. Bridging gaps and making life easier, An equalizer, for you and me.

                (Outro) First cell phones were the size of a brick, Now they’re magic in our hands, technology so quick. AI’s got the power, to change the way we live, So let’s embrace it now, there’s so much it can give.

                (Chorus) Oh, the potential, can’t you see, It’s the future calling, setting us free. Bridging gaps and making life easier, An equalizer, for you and me.

                (Outro) Oh, it’s the future, it’s the dream, AI’s the bright light, in the grand scheme.

                • rekorse@lemmy.world · 3 months ago

                  This is the stupidest shit I’ve seen yet.

                  We don’t care about other data centers as much because we get a service in return that people want.

                  Most people didn’t ask for or want AI, didn’t agree to its costs, and now have to deal with it potentially taking their jobs.

                  But go ahead and keep posting idiotic and selfish posts about how you like it so much and it’s so fun and cool, look at my shitty song lyrics that make no fucking sense!

                  I’d say touch grass, but the lyrics make me want to say touch instrument instead.

            • Womble@lemmy.world · 3 months ago

              AI is a rounding error in terms of energy use. Creating ChatGPT-4 plus a whole year of worldwide usage comes out to less than 1% of the energy Americans burn driving in one day.

              • Flying Squid@lemmy.world · 3 months ago

                I think I’ll go with Yale over ‘person on the Internet who ignored the water part.’

                https://e360.yale.edu/features/artificial-intelligence-climate-energy-emissions

                From that article:

                Estimates of the number of cloud data centers worldwide range from around 9,000 to nearly 11,000. More are under construction. The International Energy Agency (IEA) projects that data centers’ electricity consumption in 2026 will be double that of 2022 — 1,000 terawatts, roughly equivalent to Japan’s current total consumption.

                • Womble@lemmy.world · 3 months ago

                  Forgive me for not trusting an article that says that AI will use a petawatt within the next two years. Either the person who wrote it doesn’t understand the difference between energy and power, or they are very sloppy.

                  ChatGPT took 50 GWh to train (source).

                  Americans burn 355 million gallons of gasoline a day (source), and at 33.5 kWh/gal (source) that comes out to roughly 12,000 GWh per day burnt as gasoline.
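
                  A quick back-of-envelope check of those figures, taking the quoted numbers at face value:

                  ```python
                  # Back-of-envelope check of the figures above.
                  train_gwh = 50                    # claimed ChatGPT training energy, GWh
                  gallons_per_day = 355e6           # US gasoline burned per day
                  kwh_per_gallon = 33.5             # energy content per gallon of gasoline
                  gasoline_gwh_per_day = gallons_per_day * kwh_per_gallon / 1e6
                  print(gasoline_gwh_per_day)                    # 11892.5 GWh per day
                  print(100 * train_gwh / gasoline_gwh_per_day)  # ~0.42% of one day's driving
                  ```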

                  Water usage is more nuanced; depending on where the data centres are, it can either be a significant problem or no problem at all. The water doesn’t vanish, it just goes back into the air, but that can be problematic if it’s a significant draw on local freshwater sources. E.g. using river water just before it flows into the sea: zero issue; using a ground aquifer in a desert: big problem.

            • Melvin_Ferd@lemmy.world · 3 months ago

              No, but most media moved quickly to frame every article to convince people why they should hate it. Pack mentality, like when a popular kid starts spreading rumours about the new kid in class. People quickly adopt the common shared belief, and most of those beliefs are now media-driven.

              AI is pretty cool new tech. Most people would have ranged from lukewarm to interested in it if it were not for corporate media telling us all why we need to hate it.

              I saw an article the other day about “people shitting on the beach” which was really an attack on immigrants. Media is now about forming opinions for us and we all accept it more than ever.

              • rekorse@lemmy.world · 3 months ago

                A majority of people have no use for, nor want, AI. Just because you and a subgroup of people like it doesn’t mean everyone else are idiots being misled by the media.

                Why exactly do you think the media wants people to hate AI anyway? Wouldn’t big corporations gain from automating news writing?

      • FarFarAway@startrek.website · 3 months ago

        The summary for the post kinda misses the mark on what the majority of the article is pushing.

        Yes, the first part describes employees struggling with AI, but the majority of the article makes the case for hiring more freelancers and updating “outdated work models and systems…to unlock the full expected productivity value of AI.”

        It essentially says that AI isn’t the problem, since freelancers can use it perfectly. So full-time employees need to be “rethinking how to best do their work and accomplish their goals in light of AI advancements.”