• PenisDuckCuck9001@lemmynsfw.com · 12 days ago

    I just want computer parts to stop being so expensive. Remember when gaming was cheap? Pepperidge Farm remembers. You used to be able to build a relatively high-end PC for less than the average dogshit Walmart laptop.

    • filister@lemmy.world · 12 days ago

      To be honest, right now is a relatively good time to build a PC, except for the GPU, which is heavily overpriced. If you’re content with last-gen AMD, I think even that can be brought down to somewhat acceptable levels.

  • RegalPotoo@lemmy.world · 12 days ago

    Personally I can’t wait for a few good bankruptcies so I can pick up a couple of high end data centre GPUs for cents on the dollar

    • bruhduh@lemmy.world · 12 days ago

      Search for the Nvidia P40 24GB on eBay: around $200 each and surprisingly good for self-hosted LLMs. If you plan to build an array of GPUs, look for the P100 16GB instead. Same price, but unlike the P40 the P100 supports NVLink, and its 16GB is HBM2 memory on a 4096-bit bus, so it’s still competitive in the LLM field. The P40’s strong point is the amount of memory for the money, but it’s rather slow compared to the P100.
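      A back-of-the-envelope sketch of why the memory type matters for self-hosted LLMs. Token generation is usually memory-bandwidth-bound, so a ceiling can be estimated from card specs; the bandwidth and model-size numbers below are approximate assumptions, not benchmarks:

```python
# Rough upper bound on single-stream LLM token generation for the two cards.
# Generating one token has to stream every active weight from VRAM, so
# throughput is capped at roughly memory_bandwidth / model_size_in_bytes.

CARDS_GBPS = {
    "P40 (GDDR5)": 346,   # ~346 GB/s, approximate published spec
    "P100 (HBM2)": 732,   # ~732 GB/s, approximate published spec
}

def max_tokens_per_s(bandwidth_gbps: float, model_gb: float) -> float:
    """Bandwidth-bound ceiling: tokens/s ~= GB/s divided by GB read per token."""
    return bandwidth_gbps / model_gb

MODEL_GB = 13.0  # e.g. a 13B-parameter model quantized to ~1 byte/parameter

for name, bw in CARDS_GBPS.items():
    print(f"{name}: ~{max_tokens_per_s(bw, MODEL_GB):.0f} tokens/s ceiling")
```

      Real throughput lands well below these ceilings, but the roughly 2x bandwidth gap between the cards carries through.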

      • Scipitie@lemmy.dbzer0.com · 12 days ago

        Lowest price on eBay for me is 290 euros :/ The P100s are 200 each, though.

        Do you happen to know if I could mix a 3700 with a P100?

        And thanks for the tips!

      • RegalPotoo@lemmy.world · 12 days ago

        Thanks for the tips! I’m looking for something multi-purpose for LLM/stable diffusion messing about + transcoder for jellyfin - I’m guessing that there isn’t really a sweet spot for those 3. I don’t really have room or power budget for 2 cards, so I guess a P40 is probably the best bet?

        • Justin@lemmy.jlh.name · 12 days ago

          The Intel A310 is the best $/perf transcoding card, but if the P40 supports NVENC, it might work for both transcoding and Stable Diffusion.

        • bruhduh@lemmy.world · 12 days ago

          Try the Ryzen 8700G’s integrated GPU for transcoding, since it supports AV1, and the P-series GPUs for LLM/Stable Diffusion; that would be a good mix, I think. Or if you don’t have the budget for a new build, buy an Intel A380 GPU for transcoding. You can attach it like a mining GPU through a PCIe riser; Linus Tech Tips tested that GPU for transcoding, as I remember.

        • utopiah@lemmy.world · 12 days ago

          Interesting. I did try a bit of remote rendering in Blender (just to learn how to use it via the CLI), so that makes me wonder who is indeed scraping the bottom of the barrel of “old” hardware and what they are using it for. Maybe somebody is renting old GPUs for render farms, maybe other tasks; any pointers to such a trend?

      • RegalPotoo@lemmy.world · 11 days ago

        Digging into it a bit more, it seems like I might be better off getting a 12GB 3060: similar price point, but much newer silicon.

  • LemmyBe@lemmy.world · 12 days ago

    Whether we like it or not, AI is here to stay, and in 20-30 years it’ll be as embedded in our lives as computers and smartphones are now.

    • shalafi@lemmy.world · 12 days ago

      Is there a “young man yells at clouds meme” here?

      “Yes, you’re very clever calling out the hype train. Oooh, what a smart boy you are!” Until the dust settles…

      Lemmy sounds like my grandma in 1998: “Pshaw. This ‘internet’ is just a fad.”

        • BakerBagel@midwest.social · 12 days ago

          Yeah, the early Internet didn’t require 5 tons of coal to be burned just to give you a made-up answer to your query. This bubble is Pets.com, only it is also murdering the rainforest while still being completely useless.

          • Womble@lemmy.world · 12 days ago

            Estimates for ChatGPT usage are on the order of 20-50 Wh per query, which is about the same as playing a demanding game on a gaming PC for a few minutes. Local models use significantly less.
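            The gaming-PC comparison is easy to check with arithmetic; the 500 W draw is an assumed round number for a demanding whole-system load:

```python
# How many minutes a gaming PC must run to consume 20-50 Wh,
# assuming a constant ~500 W whole-system draw (hypothetical figure).

PC_WATTS = 500

def minutes_for_wh(wh: float, watts: float = PC_WATTS) -> float:
    """Minutes of constant draw needed to use `wh` watt-hours."""
    return wh / watts * 60

print(f"20 Wh ~ {minutes_for_wh(20):.1f} min of gaming")  # 2.4 min
print(f"50 Wh ~ {minutes_for_wh(50):.1f} min of gaming")  # 6.0 min
```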

    • FlorianSimon@sh.itjust.works · 11 days ago

      Idiots in this thread keep forgetting there’s a climate crisis and that we won’t be able to live the lives we live now forever 🤷‍♀️

    • utopiah@lemmy.world · 12 days ago

      Right, it did have an AI winter a few decades ago. It’s indeed here to stay; that doesn’t mean any of the current companies marketing it right now will, though.

      AI as a research field will stay, everything else maybe not.

  • TropicalDingdong@lemmy.world · 12 days ago

    It’s like the least popular opinion I have here on Lemmy, but I assure you, this is the beginning.

    Yes, we’ll see a dotcom-style bust. But it’s not like the world today wasn’t literally invented in that era. Do you remember where image generation was 3 years ago? It was a complete joke compared to a year ago, and today, fuck, no one here would know.

    When code generation goes through that same cycle, you can put out an idea in plain language, and get back code that just “does” it.

    I have no idea what that means for the future of my humanity.

    • rottingleaf@lemmy.world · 12 days ago

      you can put out an idea in plain language, and get back code that just “does” it

      No you can’t. Simplifying it grossly:

      They can’t do the most low-level, dumbest-detail, hair-splitting, “there’s no spoon”, “this is just correct no matter how much you blabber in the opposite direction, this is just wrong no matter how much you blabber to support it” kind of solutions.

      And that happens to be the main requirement that makes a task worth a software developer’s time.

      We need software developers to write computer programs because “a general idea”, even in a formalized language, is not sufficient; you need to address the details of actual reality. That is the bottleneck.

      This technology widens the passage in places that were not the bottleneck in the first place.

      • TropicalDingdong@lemmy.world · 12 days ago

        I think you live in a nonsense world. I literally use it every day, and yes, sometimes it’s shit and it’s bad at anything that requires even a modicum of creativity. But 90% of shit doesn’t require a modicum of creativity. And my point isn’t about where we’re at; it’s about how far the same tech progressed on another domain-adjacent task in three years.

        Lemmy has a “dismiss AI” fetish and does so at its own peril.

          • TropicalDingdong@lemmy.world · 12 days ago

            Dismiss at your own peril is my mantra on this. I work primarily in machine vision, and the things that people were writing off as impossible or “unique to humans” in the 90s and 2000s ended up falling rapidly, and that generation of opinion pieces is now safely stored in the round bin.

            The same was true of agents for games like Go, chess, and Dota. And now the same is being demonstrated for language.

            And maybe that paper built in the right caveats about “human intelligence”. But that isn’t to say human intelligence can’t be surpassed by something distinctly inhuman.

            The real issue is that previously there wasn’t a use case with enough viability to warrant the explosion of interest we’ve seen like with transformers.

            But transformers are, like, legit wild. They’re bigger than U-Nets. They’re way bigger than LSTMs.

            So dismiss at your own peril.

            • barsoap@lemm.ee · 12 days ago

              But that isn’t to say human intelligence can’t be surpassed by something distinctly inhuman.

              Tell me you haven’t read the paper without telling me you haven’t read the paper. The paper is about T2 vs. T3 systems, humans are just an example.

              • TropicalDingdong@lemmy.world · 12 days ago

                Yeah, I skimmed a bit. I’m on like 4 hours of in-flight sleep after like 24 hours of airports and flying. If you really want me to address the points of the paper, I can, but I can also tell it doesn’t diminish my primary point: dismiss at your own peril.

                • barsoap@lemm.ee · 11 days ago

                  dismiss at your own peril.

                  Oooo I’m scared. Just as much as I was scared of missing out on crypto or the last 10000 hype trains VCs rode into bankruptcy. I’m both too old and too much of an engineer for that BS especially when the answer to a technical argument, a fucking information-theoretical one on top of that, is “Dude, but consider FOMO”.

                  That said, I still wish you all the best in your scientific career in applied statistics. Stuff can be interesting and useful aside from AI BS. If OTOH you’re in that career path because AI BS and not a love for the maths… let’s just say that vacation doesn’t help against burnout. Switch tracks, instead, don’t do what you want but what you can.

                  Or do dive into AGI. But then actually read the paper, and understand why current approaches are nowhere near sufficient. We’re not talking about changes in architecture; we’re talking about architectures that change as a function of training and inference, that learn how to learn. Say goodbye to the VC cesspit, get tenure aka a day job, and maybe in 50 years there’s going to be another sigmoid and you’ll have written one of the papers leading up to it, because you actually addressed the fucking core problem.

          • rottingleaf@lemmy.world · 11 days ago

            I’ve written something vague elsewhere in this thread which seemed a good enough argument. But I didn’t expect that someone would link a literal scientific publication pointing in the same direction. Thank you; sometimes arguing on the Web is not a waste of time.

            EDIT: Have finished reading it. Started thinking it was the same argument, in the middle got confused, in the end realized that yes, it’s the same argument, but explained well by a smarter person. A very cool article, and fully understandable for a random Lemming at that.

        • Jesus_666@lemmy.world · 12 days ago

          And I wouldn’t know where to start using it. My problems are often of the “integrate two badly documented company-internal APIs” variety. LLMs can’t do shit about that; they weren’t trained for it.

          They’re nice for basic rote work but that’s often not what you deal with in a mature codebase.

          • TropicalDingdong@lemmy.world · 12 days ago

            Again, dismiss at your own peril.

            Because “integrate two badly documented APIs” is precisely the kind of task that even the current batch of LLMs actually crushes.

            And I’m not worried about being replaced by the current crop. I’m worried about future frameworks on technology like greyskull running 30, or 300, or 3000 uniquely trained LLMs and other transformers at once.

            • EatATaco@lemm.ee · 11 days ago

              I’m with you. I’m a senior software engineer, and Copilot/ChatGPT have all but completely replaced me googling stuff, and replaced 90% of the time I used to spend writing code for the simple tasks I want to automate. I’m regularly shocked at how often Copilot will accurately autocomplete whole methods for me. I’ve even had it generate a whole child class near perfectly, although this is likely primarily due to my being very consistent with my naming.

              At the very least it’s an extremely valuable tool that every programmer should get comfortable with. And the tech is just in its baby form. I’m glad I’m learning how to use it now instead of pooh-poohing it.

              • TropicalDingdong@lemmy.world · 11 days ago

                Ikr? It really seems like the dismissiveness is coming from people either not experienced with it, or just politically angry at its existence.

        • rottingleaf@lemmy.world · 12 days ago

          Are you a software developer? Or a hardware engineer? EDIT: Or anyone credible in evaluating my nonsense world against yours?

            • hark@lemmy.world · 12 days ago

              That explains your optimism. Code generation is at a stage where it slaps together Stack Overflow answers and code ripped off from GitHub for you. While that is quite effective to get at least a crappy programmer to cobble together something that barely works, it is a far cry from having just anyone put out an idea in plain language and getting back code that just does it. A programmer is still needed in the loop.

              I’m sure I don’t have to explain to you that AI development over the decades has often reached plateaus where the approach needed to change significantly for progress to be made, and it could certainly be the case that LLMs (at least as they are developed now) aren’t enough to accomplish what you describe.

              • rottingleaf@lemmy.world · 12 days ago

                It’s not about stages. It’s about the Achilles and the tortoise problem.

                There’s extrapolation inside the same level of abstraction as the data given and there’s extrapolation of new levels of abstraction.

                But frankly far smarter people than me are working on all that. Maybe they’ll deliver.

            • rottingleaf@lemmy.world · 12 days ago

              So close, but not there.

              OK, you’ll know that I’m right when you somewhat expand your expertise to neighboring areas. Should happen naturally.

      • Grandwolf319@sh.itjust.works · 11 days ago

        this is just wrong no matter how much you blabber to support it" kind of solutions.

        When you put it like that, I might be a perfect fit in today’s loudest-voice-wins landscape.

        • rottingleaf@lemmy.world · 11 days ago

          I regularly think and post conspiracy-theory thoughts about why “AI” gets such hype. In line with them, a certain kind of people seem to think that reality doesn’t matter, because those who control the present control the past and the future. That is, they think that controlling the discourse can replace controlling reality. The issue with that is that whether a bomb is set, whether a boat is seaworthy, whether a bridge will fall is not defined by discourse.

      • tetris11@lemmy.ml · 12 days ago

        They’re pretty good, and the faults they have are improving steadily. I don’t think we’re hitting a ceiling yet, and I shudder to think where they’ll be in 5 years.

    • Grandwolf319@sh.itjust.works · 11 days ago

      I agree with you but not for the reason you think.

      I think the golden age of ML is right around the corner, but it won’t be AGI.

      It will be image recognition and video upscaling: you know, the boring stuff that is not game-changing but possibly useful.

      • zbyte64@awful.systems · 11 days ago

        I feel the same about the code generation stuff. What I really want is a tool that suggests better variable names.

  • floofloof@lemmy.ca · 12 days ago

    Shed a tear, if you wish, for Nvidia founder and Chief Executive Jensen Huang, whose fortune (on paper) fell by almost $10 billion that day.

    Thanks, but I think I’ll pass.

    • rottingleaf@lemmy.world · 12 days ago

      He knows what this hype is, so I don’t think he’d be upset. Still filthy rich when the bubble bursts, and that won’t be soon.

    • brbposting@sh.itjust.works · 12 days ago

      I’m sure he won’t mind. Worrying about that doesn’t sound like working.

      I work from the moment I wake up to the moment I go to bed. I work seven days a week. When I’m not working, I’m thinking about working, and when I’m working, I’m working. I sit through movies, but I don’t remember them because I’m thinking about work.

      - Huang on his 14-hour workdays

      It is one way to live.

  • billbennett@piefed.social · 12 days ago

    I’ve spent time with an AI laptop the past couple of weeks, and ‘overinflated’ seems a generous description of where end-user AI is today.

  • helenslunch@feddit.nl · 12 days ago

    The stock market is not based on income. It’s based entirely on speculation.

    Since then, shares of the maker of the high-grade computer chips that AI laboratories use to power the development of their chatbots and other products have come down by more than 22%.

    June 18th: $136
    August 4th: $100
    August 18th: $130 again
    Now: $103 (still above 8/4)

    It’s almost like hype generates volatility. I don’t think any of this is indicative of a “leaking” bubble. Just tech journalists conjuring up clicks.

    Also bubbles don’t “leak”.

    • iopq@lemmy.world · 12 days ago

      The broader market did the same thing

      https://finance.yahoo.com/quote/SPY/

      $560 to $510 to $560 to $540

      So why did $NVDA have larger swings? It has to do with a concept called beta. High-beta stocks go up faster when the market is up and go down more when the market is down. Basically, high-variance, risky investments.

      Why did the market have these swings? Because of uncertainty about future interest rates. Interest rates not only matter for business loans but also set the risk-free rate investors compare against.

      When investors invest in the stock market, they want to get back the risk-free rate (how much they’d get from treasuries) plus the risk premium (how much stocks outperform bonds long term).

      If the risks of the stock market are the same but the payoff of treasuries changes, then you need a higher return from stocks. To get a higher return, you can only accept a lower price.

      This is why stocks are down; NVDA is still making plenty of money in AI.
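      The risk-free-rate point above maps onto the standard CAPM formula; the numbers here are purely illustrative assumptions, not market data:

```python
# CAPM: required return = risk_free + beta * (equity risk premium).
# A high-beta stock amplifies market swings, and a higher risk-free
# rate raises the return investors demand -- which, for the same
# expected cash flows, means accepting only a lower price today.

def required_return(risk_free: float, beta: float, premium: float) -> float:
    """Return an investor demands from a stock under CAPM."""
    return risk_free + beta * premium

BETA = 1.7       # assumed high beta for a volatile tech stock
PREMIUM = 0.05   # assumed long-run equity risk premium

print(f"at 2% treasuries: {required_return(0.02, BETA, PREMIUM):.1%}")  # 10.5%
print(f"at 5% treasuries: {required_return(0.05, BETA, PREMIUM):.1%}")  # 13.5%
```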

      • sugar_in_your_tea@sh.itjust.works · 11 days ago

        There’s more to it as well, such as:

        • investors coming back from vacation and selling off losses and whatnot
        • investors expecting reduced spending between summer and holidays; we’re past the “back to school” retail bump and into a slower retail economy
        • upcoming election, with polls shifting between Trump and Harris

        September is pretty consistently more volatile than other months, and has net negative returns long-term. So it’s not just the Fed discussing rate cuts (that news was reported over the last couple months, so it should be factored in), but just normal sideways trading in September.

        • iopq@lemmy.world · 11 days ago

          We already knew about back-to-school sales; they happen every year and they are priced in. If there were a real stock-market dump every September, everyone would short ahead of it, which would move the drop to August, and their covering would make September a positive month again.

          • sugar_in_your_tea@sh.itjust.works · 11 days ago

            It’s not every year, but it is more than half the time. Source:

            History suggests September is the worst month of the year in terms of stock-market performance. The S&P 500 SPX has generated an average monthly decline of 1.2% and finished higher only 44.3% of the time dating back to 1928, according to Dow Jones Market Data.

    • SturgiesYrFase@lemmy.ml · 12 days ago

      Also bubbles don’t “leak”.

      I mean, sometimes they kinda do? They either pop or slowly deflate, I’d say slow deflation could be argued to be caused by a leak.

        • sugar_in_your_tea@sh.itjust.works · 11 days ago

          You can do it easily with a balloon (add some tape then poke a hole). An economic bubble can work that way as well, basically demand slowly evaporates and the relevant companies steadily drop in value as they pivot to something else. I expect the housing bubble to work this way because new construction will eventually catch up, but building new buildings takes time.

          The question is, how much money (tape) are the big tech companies willing to throw at it? There’s a lot of ways AI could be modified into niche markets even if mass adoption doesn’t materialize.

            • sugar_in_your_tea@sh.itjust.works · 11 days ago

              You do realize an economic bubble is a metaphor, right? My point is that a bubble can either deflate rapidly (severe market correction, or a “burst”), or it can deflate slowly (a bear market in a certain sector). I’m guessing the industry will do what it can to have AI be the latter instead of the former.

              • helenslunch@feddit.nl · 11 days ago

                Yes, I do. It’s a metaphor that you don’t seem to understand.

                My point is that a bubble can either deflate rapidly (severe market correction, or a “burst”), or it can deflate slowly (a bear market in a certain sector).

                No, it cannot. It is only the former. The entire point of the metaphor is that its a rapid deflation. A bubble does not slowly leak, it pops.

                • sugar_in_your_tea@sh.itjust.works · 11 days ago

                  One good example of a bubble that usually deflates slowly is the housing market. The housing market goes through cycles, and those bubbles very rarely pop. It popped in 2008 because banks were simultaneously caught with their hands in the candy jar by lying about risk levels of loans, so when foreclosures started, it caused a domino effect. In most cases, the fed just raises rates and housing prices naturally fall as demand falls, but in 2008, part of the problem was that banks kept selling bad loans despite high mortgage rates and high housing prices, all because they knew they could sell those loans off to another bank and make some quick profit (like a game of hot potato).

                  In the case of AI, I don’t think it’ll be the fed raising rates to cool the market (that market isn’t impacted as much by rates), but the industry investing more to try to revive it. So Nvidia is unlikely to totally crash because it’ll be propped up by Microsoft, Amazon, and Google, and Microsoft, Apple, and Google will keep pitching different use cases to slow the losses as businesses pull away from AI. That’s quite similar to how the fed cuts rates to spur economic investment (i.e. borrowing) to soften the impact of a bubble bursting, just driven from mega tech companies instead of a government.

                  At least that’s my take.

      • stephen01king@lemmy.zip · 12 days ago

        Are we talking about bubbles or balloons? Maybe we should switch to the word “balloon”, since these economic “bubbles” can also deflate slowly.

        • SturgiesYrFase@lemmy.ml · 11 days ago

          Good point, not sure that economists are human enough to take sense into account, but I think we should try and make it a thing.

  • DogPeePoo@lemm.ee · 12 days ago

    Wall Street has already milked “the pump”; now they short it and put out articles like this.

  • umbraroze@lemmy.world · 10 days ago

    Have any regular users actually looked at the prices of the “AI services” and what they actually cost?

    I’m a writer. I’ve looked at a few of the AI services aimed at writers. These companies literally think they can get away with “Just Another Streaming Service” pricing, in an era where people are getting really really sceptical about subscribing to yet another streaming service and cancelling the ones they don’t care about that much. As a broke ass writer, I was glad that, with NaNoWriMo discount, I could buy Scrivener for €20 instead of regular price of €40. [note: regular price of Scrivener is apparently €70 now, and this is pretty aggravating.] So why are NaNoWriMo pushing ProWritingAid, a service that runs €10-€12 per month? This is definitely out of the reach of broke ass writers.

    Someone should tell the AI companies that regular people don’t want to subscribe to random subscription services any more.

    • ameancow@lemmy.world · 10 days ago

      As someone dabbling with writing, I bit the bullet and started looking into the tools to see if they’re actually useful, and I was impressed with the promised features: grammar help, sentence structure, and making sure I don’t leave loose ends in the story. These are genuinely useful tools if you’re not using the generative capability to let it write mediocre bullshit for you.

      But I noticed right away that I couldn’t justify a subscription of $20-$30 a month on top of the thousand other services we have to pay monthly for, including the writing software itself.

      I have lived fine and written great things in the past without AI, I can survive just fine without it now. If these companies want to actually sell a product that people want, they need to scale back the expectations, the costs and the bloated, useless bullshit attached to it all.

      At some point soon, the costs of running these massive LLMs versus the number of people actually willing to pay a premium for them are going to exceed reasonable expectations, and we will see the companies that host the LLMs start to scale everything back as they try to find some new product to hype and generate investment on.

    • Lenny@lemmy.world · 10 days ago

      I work for an AI company that’s dying out. We’re trying to charge companies $30k a year and upwards for basically ChatGPT plus a few shoddily built integrations. You can build the same things we’re doing with Zapier for around $35 a month. Management is baffled as to why we’re not closing any of our deals, and it’s SO obvious to me: we’re too fucking expensive and there’s nothing unique about our service.
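      The gap described above, in plain numbers (using only the figures from this comment; the Zapier price is the commenter’s, not a quoted plan):

```python
# Annualize both prices and compare. All inputs come from the comment above.

our_price_per_year = 30_000   # "$30k a year and upwards"
zapier_per_year = 35 * 12     # "$35 a month" -> $420/year

markup = our_price_per_year / zapier_per_year
print(f"Zapier equivalent: ${zapier_per_year}/year, markup ~{markup:.0f}x")  # ~71x
```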