• Emmie@lemm.ee
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    2 months ago

    I have just read the feature list of iOS 18.1’s so-called Apple Intelligence.
    TL;DR: it mostly types and sends messages for you, like one-click replies to email. Or… shifting text tone 🙄

    So that confirms my fears that in the future bots will communicate with each other instead of us. Which is madness. I want to talk to a real human, not a bot that translates what the human wanted to say with roughly 75% accuracy, devoid of any authenticity.

    If I see someone’s unfiltered written word, I can infer their emotions, their feelings, what kind of state they are in, etc. Cold bot-to-bot speech would truly fuck up society in unpredictable ways, undermining the foundations of communication.

    Especially when you notice that most communication, even familial, already happens this way nowadays.

    • 2001zhaozhao@sh.itjust.works · 2 months ago

      Future email writing: type the first three words, then spam-click the autocomplete on your LLM-based keyboard. Only stop when the output no longer makes sense.

    • dan@upvote.au · 2 months ago

      So kids will learn to just ‘hey siri tell my mom I am sorry and I will improve myself’.

      What makes you think that kids aren’t already doing things like this? Not with Siri, but it doesn’t take much effort to get ChatGPT to write something for you.

      Also I saw a South Park episode about this. https://en.wikipedia.org/wiki/Deep_Learning_(South_Park)

      • Emmie@lemm.ee · 2 months ago

        It isn’t built into the phone’s very operating system, though, where you just tap “generate response” in iMessage. It is always about laziness. First, privacy went away via the path of least effort, even though there have always been tons of privacy-respecting alternatives; they just require ten seconds of extra effort.

    • Omega_Jimes@lemmy.ca · 2 months ago

      My hope for the future rests on a study indicating that after five or so generations of training on data tainted with AI-generated output, LLM models collapse.

      Hopefully, after enough LLMs have been fed LLM data, we will arrive in an LLM-free future.

      (This is unlikely to come true, but let me hope.)
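The collapse dynamic described above can be caricatured with a stdlib-only toy: fit a Gaussian to data, sample from the fit, refit on those samples, and repeat. Every name and number below is invented for illustration; real model collapse concerns neural networks, not Gaussians, but the compounding-estimation-error mechanism is the same:

```python
import random
import statistics


def collapse_sim(generations=30, n_samples=500, seed=42):
    """Toy model-collapse demo: each 'generation' is fitted only to
    samples produced by the previous generation's fit, so sampling
    error compounds and the learned distribution drifts away from
    the original 'human' data."""
    random.seed(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the human data distribution
    history = [sigma]
    for _ in range(generations):
        synthetic = [random.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(synthetic)   # refit on synthetic data only
        sigma = statistics.stdev(synthetic)
        history.append(sigma)
    return history


history = collapse_sim()
print(f"fitted std after {len(history) - 1} generations: {history[-1]:.3f}")
```

With most seeds the fitted standard deviation wanders away from the true value of 1.0 as generations accumulate, which is the drift the study's authors scaled up to LLMs.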

      • Ilovethebomb@lemm.ee · 2 months ago

        Another possibility is LLMs will only be trained on historic data, meaning they will eventually start to sound very old-fashioned, making them easier to spot.

          • Riskable@programming.dev · 2 months ago

            When one of two things happens:

            • A new hype starts to replace it (can happen fast though!)
            • The hype starts to specialize into subcategories of the hype (e.g. AI images, AI videos, AI text generation)

            When “AI” hype dies down we are likely to see “AI” removed from various topics because enough people know and understand the hyped parent topic. It’ll just be “image generation”, “video generation”, “generated text”, etc.

    • rottingleaf@lemmy.world · 2 months ago

      Customers worry about what they can do with it, while investors and spectators and vendors worry about buzzwords. Customers determine demand.

      Sadly, what some of those customers want is to somehow improve their own business without thinking, and then they too care about buzzwords; that’s how the hype arises.

    • Lucidlethargy@sh.itjust.works · 2 months ago

      There are different types of people in the market. The informed ones hate AI, and the uninformed love it. The informed ones tend to be the cornerstones of businesses, and the uninformed ones tend to be in charge.

      So we have… All this. All this nonsense. All because of stupid managers.

  • rustyfish@lemmy.world · 2 months ago

    I barely trust organics. Some CEO being rock hard about his newest repertoire of buzzword doesn’t help.

  • Sarmyth@lemmy.world · 2 months ago

    I’ve learned to hate companies that replaced their support staff with AI. I don’t mind if it supplements easy stuff, that should take like 15 seconds, but when I have to jump through a bunch of hoops to get to the one lone bastard stuck running the support desk on their own, I start to wonder why I give them any money at all.

  • yemmly@lemmy.world · 2 months ago

    This is because the AI of today is a shit sandwich that we’re being told is peanut butter and jelly.

    For those who like to party: All the current “AI” technologies use statistics to approximate semantics. They can’t just be semantic, because we don’t know how meaning works or what gives rise to it. So the public is put off because they have an intuitive sense of the ruse.

    As long as the mechanics of meaning remain a mystery, “AI” will be parlor tricks.

    • yemmly@lemmy.world · 2 months ago

      And I don’t mean to denigrate data science. It is important and powerful. And real machine intelligence may one day emerge from it (or data science may one day point the way). But data science just isn’t AI.

  • Honytawk@lemmy.zip · 2 months ago

    AI has some pretty good uses.

    But in the majority of junk on the market it is nothing but marketing bloatware.

  • Todd Bonzalez@lemm.ee · 2 months ago

    Give me a bunch of open AI models and a big GPU to play with and I’ll have a great time. It’s a wild world out there.

    Shove a bunch of AI nonsense in my face when I didn’t ask for it and I’m throwing your product out a window.

  • Captain Aggravated@sh.itjust.works · 2 months ago

    “AI” is certainly a turn-off for me, I would ask a salesman “do you have one that doesn’t have that?” and I will now enumerate why:

    1. LLMs are wrongness machines. They do have an almost miraculous ability to string words together to form coherent sentences but when they have no basis at all in truth it’s nothing but an extremely elaborate and expensive party trick. I don’t want actual services like web searches replaced with elaborate party tricks.

    2. In a lot of cases it’s being used as a buzzword to mean basically anything computer-controlled or networked. Last time I looked, they were using the word “smart” for that. A clothes dryer that can sense the humidity of the exhaust air to know when the clothes are dry isn’t any more “AI” than my 90s microwave that can sense the puff of steam from a bag of popcorn. This is the kind of outright dishonest marketing I’d like to see fail so spectacularly that people in the advertising business go missing over it.

    3. I already avoided “smart” appliances and will avoid “AI” appliances for the same reasons: The “smart” functionality doesn’t actually run locally, it has to connect to a server out on the internet to work, which means that while that server is still up and offering support to my device, I have a hole in my firewall. And then they’ll stop support ten minutes after the warranty expires and the device will no longer work. For many of these devices there’s no reason the “smart” functionality couldn’t run locally on some embedded ARM chip or talk to some application running on a PC that I own inside my firewall, other than “then we don’t get your data.”

    4. AI is apparently consuming more electricity than air conditioning. In fact, I’m not convinced that power consumption isn’t the selling point they’re pushing at board meetings. “It’ll keep our friends in the pollution industry in business.”

  • BallsandBayonets@lemmings.world · 2 months ago

    Maybe I’d be more interested in AI if there was any I with the A. At the moment, there’s no more intelligence to these things than there is in a parrot with brain damage, or a human child. Language Models can mimic speech but are unable to formulate any original thoughts. Until they can, they aren’t AI and I won’t be the slightest bit interested beyond trying to break them into being slightly dirty (and therefore slightly funny).

  • TCB13@lemmy.world · 2 months ago

    Let’s see if this finally kills the AI hype. Big tech is pushing AI because it is the ultimate spyware, nothing more.

    • DudeDudenson@lemmings.world · 2 months ago

      I doubt the general consumer thinks that. I’m sure most of them are turned away by the unreliability and how ham-fisted most implementations are.

      • blarth@thelemmy.club · 2 months ago

        I refuse to use Facebook anymore, but my wife and others do. Apparently the search box is now a Meta AI box, and it pisses them off every time. They want the original search back.

        • nossaquesapao@lemmy.eco.br · 2 months ago

          That’s another thing companies don’t seem to understand. A lot of them aren’t creating new products and services that use AI; they’re removing the existing ones, which people use daily and enjoy, and forcing some AI alternative on them. Of course people are going to be pissed off!

          • Krauerking@lemy.lol · 2 months ago

            We aren’t allowed new things. That might change their perfectly balanced money making machine.

            And making search worse so it can pretend to be an ex is not what I or anyone is looking for in the search box.

    • barsquid@lemmy.world · 2 months ago

      Yes, the cost is sending all of your data off to be harvested, but what price can you put on having a virtual dumbass that is frequently wrong?

  • Wirlocke@lemmy.blahaj.zone · 2 months ago

    I wonder if we’ll collectively start seeing through these tech-investor pump-and-dump patterns faster, given how many have happened in such a short amount of time already.

    Crypto, Internet of Things, Self Driving Cars, NFTs, now AI.

    It feels like the futurism sheen has started to waver. When everything’s a major revolution inserted into every product, then isn’t, it gets exhausting.

    • Cornelius_Wangenheim@lemmy.world · 2 months ago

      It’s more of a macroeconomic issue. There’s too much investor money chasing too few good investments. Until our laws stop favoring the investor class, we’re going to keep getting more and more of these bubbles, regardless of what they are.

      • Krauerking@lemy.lol · 2 months ago

        Yeah, it’s just investment profit-chasing from larger and larger bank accounts.

        I’m waiting for one of these bubble pops to do lasting damage, but with the amount of protection specifically for them and for money that can’t be allowed to be “lost”, it’s everyone else who has to eat dirt.

    • TimeSquirrel@kbin.melroy.org · 2 months ago

      Internet of Things

      This is very much not a hype and is very widely used. It’s not just smart bulbs and toasters. It’s burglar/fire alarms, HVAC monitoring, commercial building automation, access control, traffic infrastructure (cameras, signal lights), ATMs, emergency alerting (like how a 911 center dispatches a fire station, there are systems that can be connected to a jurisdiction’s network as a secondary path to traditional radio tones) and anything else not a computer or cell phone connected to the Internet. Now even some cars are part of the IoT realm. You are completely surrounded by IoT without even realizing it.

      • Wirlocke@lemmy.blahaj.zone · 2 months ago

        Huh, didn’t know that! I mainly mentioned it for the fact that it was crammed into products that didn’t need it, like fridges and toasters where it’s usually seen as superfluous, much like AI.

        • DancingBear@midwest.social · 2 months ago

          I would beg to differ. I thoroughly enjoy downloading various toasting regimens. Everyone knows that a piece of white bread toasts differently than a slice of whole wheat. Now add sourdough home slice into the mix. It can get overwhelming quite quickly.

          Don’t even get me started on English muffins.

          With the toaster app I can keep all of my toasting regimens in one place, without having to wonder whether it’s going to toast my Pop-Tart as though it were a Hot Pocket.

          • barsoap@lemm.ee · 2 months ago

            I mean, give the thing a USB interface so I can use an app to set timing presets instead of whatever UX nightmare it’d otherwise be, and I’m in. Nowadays it’s probably cheaper to throw in a MOSFET and a tiny chip than to use a bimetallic strip: far fewer and less fickle parts, and when you already have the capability to be programmable, why not use it? Connecting it to an actual network? Get out of here.

    • explodicle@sh.itjust.works · 2 months ago

      TimeSquirrel made a good point about Internet of Things, but Crypto and Self Driving Cars are still booming too.

      IMHO it’s a marketing problem. They’re major evolutions taking root over decades. I think AI will gradually become as useful as lasers.

    • kinsnik@lemmy.world · 2 months ago

      I think that the dot com bubble is the closest, honestly. There can be some kind of useful products (mostly dealing with how we interact with a system, not actually trying to use AI to magically solve a problem; it is shit at that), but the hype is way too large

  • Grandwolf319@sh.itjust.works · 2 months ago

    I mean, pretty obvious if they advertise the technology instead of the capabilities it could provide.

    Still waiting for that first good use case for LLMs.

      • Flying Squid@lemmy.world · 2 months ago

        That’s because businesses are using AI to weed out resumes.

        Basically you beat the system by using the system. That’s my plan too next time I look for work.

        • markon@lemmy.world · 2 months ago

          That was mostly true before; now it’s 99.99%. The charades are so silly because, obviously, as a worker all I care about is how much I get paid. That’s it.

          All the company will care about is that the work gets done to their standards or above, at the absolute lowest price possible.

          So my interests are diametrically opposed to theirs: my interest is to work as little as possible for as much money as possible, and their goal is to get as much work out of me as possible for as little money as possible. We could just be honest about it and stop the stupid games. I don’t give a shit about my employer any more than they give a shit about me. If I care about the work, that just means I’m that much more pissed that they’re relying on my goodwill towards the people who use their products and/or services.

    • NABDad@lemmy.world · 2 months ago

      I think an LLM could be decent at the task of being a fairly dumb personal assistant. An LLM interface to a robot that could go get the mail or get you a cup of coffee would be nice in an “unnecessary luxury” sort of way. Of course, that would eliminate the “unpaid intern adding experience to a resume” jobs. I’m not sure if that’s good or bad. I’m also not sure why anyone would want it, since unpaid interns are cheaper and probably more satisfying to abuse.

      I can imagine an LLM being useful to simulate social interaction for people who would otherwise be completely alone. For example: elderly, childless people who have already had all their friends die or assholes that no human can stand being around.

      • Grandwolf319@sh.itjust.works · 2 months ago

        Is that really an LLM, though? ’Cause using ML as part of a future AGI is not new; it was actually very promising and the cutting edge before ChatGPT.

        So like using ML for vision recognition to know a video of a dog contains a dog. Or just speech to text. I don’t think that’s what people mean these days when they say LLM. Those are more for storing data and giving you data in forms of accurate guesses when prompted.

        ML has a huge future, regardless of LLMs.

          • nic2555@lemmy.world · 2 months ago

            Yes, but not all machine learning (ML) is an LLM. Machine learning refers to the general use of neural networks, while large language models (LLMs) refer more to the ability of an application, or a bot, to understand natural language, deduce context from it, and act accordingly.

            ML in general has many more uses than just powering LLMs.

        • markon@lemmy.world · 2 months ago

          Just look at AlphaProof. Lol, we’re all about to be outclassed. I’m sure everyone will still deride the bots. They could be actual ASI and, especially here in the US, we’d say “I don’t see any intelligence.” I wish our society, and all of us as individuals, would reflect on our limitations and our tiny, tiny insignificance on the grand scale. Our egos may kill us.

          • markon@lemmy.world · 2 months ago

            COVID tried, and a lot of people paid the price for being low-information and not so bright. Sadly, plenty of people who did the right things still got fucked by the stupidity of others!

      • markon@lemmy.world · 2 months ago

        I feel like everyone who isn’t heavily interacting with or developing these doesn’t realize how much better they are than human assistants. Shit, for one, it doesn’t cost me $20 an hour, take a shit, get sick, or talk back and not do its fucking job. I do fucking think we need to say a lot of shit, though, so we’ll know it ain’t an LLM, because I don’t know of an LLM that I can make output like this. I just wish most people were a little less stuck in their Western opulence. It would really help us not get blindsided.

    • Empricorn@feddit.nl · 2 months ago

      Haven’t you been watching the Olympics and seen Google’s ad for Gemini?

      Premise: your daughter wants to write a letter to an athlete she admires. Instead of helping her as a parent, Gemini can magic-up a draft for her!

      • psivchaz@reddthat.com · 2 months ago

        On the plus side for them, they can probably use Gemini to write their apology blog about how they missed the mark with that ad.

    • psivchaz@reddthat.com · 2 months ago

      It is legitimately useful for getting started with using a new programming library or tool. Documentation is not always easy to understand or easy to search, so having an LLM generate a baseline (even if it’s got mistakes) or answer a few questions can save a lot of time.

      • Grandwolf319@sh.itjust.works · 2 months ago

        So I used to think that, but I gave it a try as I’m a software dev. I personally didn’t find it that useful, as in I wouldn’t pay for it.

        Usually when I want to get started, I just look up a basic guide and just copy their entire example to get started. You could do that with chatGPT too but what if it gave you wrong answers?

        I also asked it more specific questions about how to do X in tool Y. Something I couldn’t quickly google. Well it didn’t give me a correct answer. Mostly because that question was rather niche.

        So my conclusion was that it may help people who don’t know how to google, or who are learning a very well-known tool/language with lots of good docs, but for those who already know how to use the industry tools, it is basically an expensive hint machine.

        In all fairness, I’ll probably use it here and there, but I wouldn’t pay for it. Also, note my example was chatGPT specific. I’ve heard some companies might use it to make their docs more searchable which imo might be the first good use case (once it happens lol).

        • BassTurd@lemmy.world · 2 months ago

          I just recently got Copilot in VS Code through work. I typed a comment that said, “create a new model in sqlalchemy named assets with the columns a, b, c, d”. It couldn’t know the proper data types to use, but it output everything perfectly, including using my custom-defined annotations; the only catch was that it used the same annotation for every column, which I then had to update. As a test, that was great, and Copilot also picked up a SQL query I had written in a comment to reference as I was making my models, and generated that entire model for me as well.

          It didn’t do anything that I didn’t know how to do, but it saved on some typing effort. I use it mostly for its auto complete functionality and letting it suggest comments for me.

          • Grandwolf319@sh.itjust.works · 2 months ago

            That’s awesome, and I would probably would find those tools useful.

            Code generators have existed for a long time, but they are usually free. These tools actually cost a lot of money; it costs way more to generate code this way than the traditional way.

            So idk if it would be worth it once the venture capitalist money dries up.

            • bamboo@lemm.ee · 2 months ago

              What are these code generators that have existed for a long time?

                • bamboo@lemm.ee · 2 months ago

                  Neither of those seems similar to GitHub Copilot, other than that they can reduce keystrokes for some common tasks. Their actual applicability seems narrow. Frequently I use GitHub Copilot for “implement this function based on this doc comment I wrote” or “write docs for this class/function”. It’s the natural-language component that makes the LLM approach useful.

            • BassTurd@lemmy.world · 2 months ago

              That’s fair. I don’t know if I will ever pay my own money for it, but if my company will, I’ll use it where it fits.

        • markon@lemmy.world · 2 months ago

          Huge time saver. I’ve had GPT doing a lot of work for me and it makes stuff like managing my Arch install smooth and easy. I don’t use OpenAI stuff much though. Gemini has gotten way better, Claude 3.5 Sonnet is beastly at code stuff. I guess if you’re writing extremely complex production stuff it’s not going to be able to do that, but try asking most people even what an unsigned integer is. Most people will be like “what?”

          • Grandwolf319@sh.itjust.works · 2 months ago

            but try asking most people even what an unsigned integer is. Most people will be like “what?”

            Why is that relevant? Are you saying that AI makes coding more accessible? I mean that’s great, but it’s like a calculator. Sure it helps people who need simple calculations in the short term, but it might actually discourage software literacy.

            I wish AI could just be a niche tool, instead it’s like a simple calculator being sold as a smartphone.

        • Dran@lemmy.world · 2 months ago

          I’m actually working on a vector DB RAG system for my own documentation. Even in its rudimentary stages, it’s been very helpful for finding functions in my own code that I don’t remember exactly what project I implemented it in, but have a vague idea what it did.

          E.g

          Have I ever written a bash function that orders non-symver GitHub branches?

          Yes! In your ‘webwork automation’ project, starting on line 234, you wrote a function that sorts Git branches based on WebWork’s versioning conventions.
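The retrieval half of such a system is simple enough to sketch with the standard library. Here a bag-of-words `Counter` stands in for a real embedding model, and a plain list stands in for the vector DB; the indexed snippets and file names are invented for illustration:

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy stand-in for an embedding model: bag-of-words counts.
    A real setup would call a sentence-embedding model instead."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Indexed snippets: (location, short summary of what the code does)
index = [
    ("webwork_automation.sh:234", "sort git branches by webwork versioning conventions"),
    ("deploy.sh:12", "rsync build artifacts to the staging server"),
    ("backup.sh:88", "rotate postgres dumps older than seven days"),
]


def search(query: str):
    """Return the indexed snippet most similar to the query."""
    q = embed(query)
    return max(index, key=lambda item: cosine(q, embed(item[1])))


best = search("bash function that orders non-semver git branches")
print(best[0])
```

Swapping `embed` for a real sentence-embedding model and the list for an actual vector store is essentially what turns this toy into the setup described above.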

    • pumpkinseedoil@sh.itjust.works · 2 months ago

      LLM have greatly increased my coding speed: instead of writing everything myself I let AI write it and then only have to fix all the bugs

      • Grandwolf319@sh.itjust.works · 2 months ago

        I’m glad. Depends on the dev. I love writing code but debugging is annoying so I would prefer to take longer writing if it means less bugs.

        Please note I’m also pro code generators (like emmet).

    • EvilBit@lemmy.world · 2 months ago

      I actually think the idea of interpreting intent and connecting to actual actions is where this whole LLM thing will turn a small corner, at least. Apple has something like the right idea: “What was the restaurant Paul recommended last week?” “Make an album of all the photos I shot in Belize.” Etc.

      But 98% of GenAI hype is bullshit so far.

      • Grandwolf319@sh.itjust.works · 2 months ago

        How would it do that? Would LLMs not just take input as voice or text and then guess an output as text?

        Wouldn’t the text output that is suppose to be commands for action, need to be correct and not a guess?

        It’s the whole guessing part that makes LLMs not useful, so imo they should only be used to improve stuff we already need to guess.

        • EvilBit@lemmy.world · 2 months ago

          One of the ways to mitigate the core issue of an LLM, which is confabulation/inaccuracy, is to have a layer of either confirmation or simply forgiveness intrinsic to the task. Use the favor test. If you asked a friend to do you a favor and perform these actions, they’d give you results that you can either/both look over yourself to confirm they’re correct enough, or you’re willing to simply live with minor errors. If that works for you, go for it. But if you’re doing something that absolutely 100% must be correct, you are entirely dependent on independently reviewing the results.

          But one thing Apple is doing is training LLMs with action semantics, so you don’t have to think of its output as strictly textual. When you’re dealing with computers, the term “language” is much looser than you or I tend to understand it. You can have a “grammar” that is inclusive of the entirety of the English language but also includes commands and parameters, for example. So it will kinda speak English, but augmented with the ability to access data and perform actions within iOS as well.
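The confirmation layer described above can be sketched as a whitelist sitting between the model and the OS: the model emits structured commands instead of free prose, and the app refuses anything it doesn’t recognize. Everything here (the action names, the JSON shape) is hypothetical, not Apple’s actual scheme:

```python
import json

# Hypothetical action schema: each allowed action maps to the set
# of parameters the app is willing to accept for it.
ACTIONS = {
    "find_message": {"sender", "topic"},
    "make_album": {"query"},
}


def dispatch(llm_output: str):
    """Parse a (hypothetical) model reply and refuse anything that
    isn't a known action with known parameters. This is the layer
    that catches confabulated commands before they run."""
    cmd = json.loads(llm_output)
    name, args = cmd["action"], cmd["args"]
    if name not in ACTIONS or set(args) - ACTIONS[name]:
        raise ValueError(f"model requested unknown action: {cmd}")
    return name, args


reply = '{"action": "make_album", "args": {"query": "photos shot in Belize"}}'
print(dispatch(reply))
```

The point is that the "grammar" of the model's output is constrained to things the app already knows how to do, so a confabulation becomes a rejected command rather than a wrong action.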

    • beveradb@lemm.ee · 2 months ago

      I’ve built a couple of useful products which leverage LLMs at one stage or another, but I don’t shout about it, because I don’t see LLMs as particularly exciting or relevant to consumers. To me they’re just another tool in my toolbox, whose efficacy I weigh when trying to solve a particular problem. I do think they are genuinely valuable for natural-language problems.

      For example, my most recent product includes the capability to automatically create karaoke music videos. The problem that long kept it from market was transcription quality: consistently getting correct and complete lyrics for any song. Now, by using state-of-the-art transcription (which returns 90% accurate results), plus an open-weight LLM with a fine-tuned prompt to correct the mistakes in that transcription, I’ve finally been able to produce high-quality results pretty consistently. Before LLMs that would’ve been much harder!
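The two-stage pipeline this comment describes, ASR first and LLM cleanup second, can be sketched with stand-ins. The stub transcript and the toy corrections table below are invented; in the real product the first stage would be a speech-to-text model and the second an open-weight LLM with a fine-tuned prompt:

```python
def transcribe(audio_path: str) -> list[str]:
    """Stage 1 stand-in: a speech-to-text pass that returns
    roughly-90%-accurate lyric lines."""
    return ["we will we will rock u", "sing it out cloud"]

# Stage 2 stand-in: a lookup table plays the role of the LLM
# correction pass that fixes known mishearings.
CORRECTIONS = {"u": "you", "cloud": "loud"}


def correct_lyrics(lines: list[str]) -> list[str]:
    """Fix mishearings line by line, leaving other words intact."""
    return [
        " ".join(CORRECTIONS.get(word, word) for word in line.split())
        for line in lines
    ]


print(correct_lyrics(transcribe("song.mp3")))
```

The design point is that neither stage has to be perfect on its own; the second stage only needs to repair the residual errors of the first, which is a much easier problem than transcribing from scratch.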

  • cordlesslamp@lemmy.today · 2 months ago

    To be honest, I lost all interest in the new AMD CPUs because they fucking named the thing “AI” (with zero real-world application).

    I’m in the market for a new PC next month and I’m gonna get the 7800X3D for my VR gaming needs.