• TropicalDingdong@lemmy.world
    3 months ago

It’s like the least popular opinion I have here on Lemmy, but I assure you, this is the beginning.

Yes, we’ll see a dotcom-style bust. But it’s not like the world of today wasn’t literally invented in that era. Do you remember where image generation was 3 years ago? It was a complete joke compared to a year ago, and today, fuck, no one here would know.

When code generation goes through that same cycle, you’ll be able to put out an idea in plain language and get back code that just “does” it.

    I have no idea what that means for the future of my humanity.

    • rottingleaf@lemmy.world
      3 months ago

      you can put out an idea in plain language, and get back code that just “does” it

      No you can’t. Simplifying it grossly:

      They can’t do the most low-level, dumbest detail, splitting hairs, “there’s no spoon”, “this is just correct no matter how much you blabber in the opposite direction, this is just wrong no matter how much you blabber to support it” kind of solutions.

      And that happens to be the main requirement that makes a task worth a software developer’s time.

      We need software developers to write computer programs because “a general idea”, even in a formalized language, is not sufficient; you need to address the details of actual reality. That is the bottleneck.

      That technology widens the passage in places that were not the bottleneck in the first place.

      • tetris11@lemmy.ml
        3 months ago

        They’re pretty good, and the faults they have are improving steadily. I don’t think we’re hitting a ceiling yet, and I shudder to think where they’ll be in 5 years.

      • Grandwolf319@sh.itjust.works
        3 months ago

        “this is just wrong no matter how much you blabber to support it” kind of solutions.

        When you put it like that, I might be a perfect fit for today’s loudest-voice-wins landscape.

        • rottingleaf@lemmy.world
          3 months ago

          I regularly think and post conspiracy-theory thoughts about why “AI” gets so much hype. And in line with them, a certain kind of people seem to think that reality doesn’t matter, because those who control the present control the past and the future. That is, they think that controlling the discourse can replace controlling reality. The issue with that is that whether a bomb is set, whether a boat is seaworthy, whether a bridge will fall is not defined by discourse.

      • TropicalDingdong@lemmy.world
        3 months ago

        I think you live in a nonsense world. I literally use it every day, and yes, sometimes it’s shit and it’s bad at anything that requires even a modicum of creativity. But 90% of shit doesn’t require a modicum of creativity. And my point isn’t about where we’re at; it’s about how far the same tech progressed on a domain-adjacent task in three years.

        Lemmy has a “dismiss AI” fetish and does so at its own peril.

          • TropicalDingdong@lemmy.world
            3 months ago

            Dismiss at your own peril is my mantra on this. I work primarily in machine vision, and the things that people were writing off as impossible or “unique to humans” in the 90s and 2000s ended up falling rapidly, and that generation of opinion pieces is now safely stored in the round bin.

            The same was true of agents for games like Go, chess, and Dota. And now the same is being demonstrated for language.

            And maybe that paper built in the right caveats about “human intelligence”. But that isn’t to say human intelligence can’t be surpassed by something distinctly inhuman.

            The real issue is that previously there wasn’t a use case with enough viability to warrant the explosion of interest we’ve seen like with transformers.

            But transformers are, like, legit wild. They’re bigger than U-Nets. They’re way bigger than LSTMs.

            So dismiss at your own peril.

            • barsoap@lemm.ee
              3 months ago

              But that isn’t to say human intelligence can’t be surpassed by something distinctly inhuman.

              Tell me you haven’t read the paper without telling me you haven’t read the paper. The paper is about T2 vs. T3 systems, humans are just an example.

              • TropicalDingdong@lemmy.world
                3 months ago

                Yeah, I skimmed a bit. I’m on like 4 hours of in-flight sleep after like 24 hours of airports and flying. If you really want me to address the points of the paper, I can, but I can also tell it doesn’t diminish my primary point: dismiss at your own peril.

                • barsoap@lemm.ee
                  3 months ago

                  dismiss at your own peril.

                  Oooo, I’m scared. Just as much as I was scared of missing out on crypto or the last 10000 hype trains VCs rode into bankruptcy. I’m both too old and too much of an engineer for that BS, especially when the answer to a technical argument, a fucking information-theoretical one on top of that, is “Dude, but consider FOMO”.

                  That said, I still wish you all the best in your scientific career in applied statistics. Stuff can be interesting and useful aside from AI BS. If, OTOH, you’re on that career path because of AI BS and not a love for the maths… let’s just say that vacation doesn’t help against burnout. Switch tracks instead; don’t do what you want but what you can.

                  Or do dive into AGI. But then actually read the paper, and understand why current approaches are nowhere near sufficient. We’re not talking about changes in architecture; we’re talking about architectures that change as a function of training and inference, that learn how to learn. Say goodbye to the VC cesspit, get tenure aka a day job, and maybe in 50 years there’s going to be another sigmoid and you’ll have written one of the papers leading up to it, because you actually addressed the fucking core problem.

                  • TropicalDingdong@lemmy.world
                    3 months ago

                    I mean, I’ve been doing this for 20 years and have led teams from 2-3 in size to 40. I’ve been the lead on systems that had to undergo legal review at the state level, where the output literally determines policy for almost every home in a state. So you can be as dismissive or enthusiastic as you like. I truly couldn’t give a shit about lay opinion, because I’m out here doing this, building it, and I see it every day.

                    For anyone with ears to listen: dismiss this current round at your own peril.

          • rottingleaf@lemmy.world
            3 months ago

            I’ve written something vague in another place in this thread which seemed like a good enough argument. But I didn’t expect someone to link a literal scientific publication pointing in the same direction. Thank you, sometimes arguing on the Web is not a waste of time.

            EDIT: Have finished reading it. Started out thinking it was the same argument, got confused in the middle, and in the end realized that yes, it’s the same argument, but explained well by a smarter person. A very cool article, and fully understandable for a random Lemming at that.

        • rottingleaf@lemmy.world
          3 months ago

          Are you a software developer? Or a hardware engineer? EDIT: Or anyone credible in evaluating my nonsense world against yours?

            • rottingleaf@lemmy.world
              3 months ago

              So close, but not there.

              OK, you’ll know that I’m right when you somewhat expand your expertise to neighboring areas. Should happen naturally.

            • hark@lemmy.world
              3 months ago

              That explains your optimism. Code generation is at a stage where it slaps together Stack Overflow answers and code ripped off from GitHub for you. While that is quite effective to get at least a crappy programmer to cobble together something that barely works, it is a far cry from having just anyone put out an idea in plain language and getting back code that just does it. A programmer is still needed in the loop.

              I’m sure I don’t have to explain to you that AI development over the decades has often hit plateaus where the approach needed to change significantly for progress to continue, and it could certainly be the case that LLMs (at least as they are developed now) aren’t enough to accomplish what you describe.

              • rottingleaf@lemmy.world
                3 months ago

                It’s not about stages. It’s about the Achilles-and-the-tortoise problem.

                There’s extrapolation within the same level of abstraction as the given data, and there’s extrapolation to new levels of abstraction.

                But frankly far smarter people than me are working on all that. Maybe they’ll deliver.

        • Jesus_666@lemmy.world
          3 months ago

          And I wouldn’t know where to start using it. My problems are often of the “integrate two badly documented company-internal APIs” variety. LLMs can’t do shit about that; they weren’t trained for it.

          They’re nice for basic rote work but that’s often not what you deal with in a mature codebase.

          • TropicalDingdong@lemmy.world
            3 months ago

            Again, dismiss at your own peril.

            Because “integrate two badly documented APIs” is precisely the kind of task that even the current batch of LLMs actually crushes.

            And I’m not worried about being replaced by the current crop. I’m worried about future frameworks built on technology like greyskull, running 30, or 300, or 3000 uniquely trained LLMs and other transformers at once.

            • EatATaco@lemm.ee
              3 months ago

              I’m with you. I’m a senior software engineer, and Copilot/ChatGPT have all but completely replaced my googling, and replaced 90% of the time I used to spend writing code for simple tasks I want to automate. I’m regularly shocked at how often Copilot will accurately auto-complete whole methods for me. I’ve even had it generate a whole child class near-perfectly, although that’s likely primarily due to my being very consistent with naming.

              At the very least it’s an extremely valuable tool that every programmer should get comfortable with. And the tech is just in its infancy. I’m glad I’m learning how to use it now instead of pooh-poohing it.
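              As a toy sketch of the kind of completion described above (all class and method names here are invented for illustration, not anyone’s real code), a consistently named base-and-subclass pattern is exactly the sort of thing an assistant can often extend on its own:

```python
import json


class BaseExporter:
    """Writes a list of records to some output format."""

    def __init__(self, records):
        self.records = records

    def export(self) -> str:
        raise NotImplementedError


class JsonExporter(BaseExporter):
    def export(self) -> str:
        return json.dumps(self.records)


# With the two classes above already in the file, typing just
# "class CsvExporter(BaseExporter):" is often enough context for an
# assistant to propose a body mirroring JsonExporter, e.g.:
class CsvExporter(BaseExporter):
    def export(self) -> str:
        return "\n".join(",".join(str(v) for v in row) for row in self.records)
```

              The point of the anecdote holds in the sketch: the more regular the naming convention, the more of the subclass is predictable from pattern alone.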

              • TropicalDingdong@lemmy.world
                3 months ago

                Ikr? It really seems like the dismissiveness comes from people who either aren’t experienced with it or are just politically angry at its existence.

    • Grandwolf319@sh.itjust.works
      3 months ago

      I agree with you but not for the reason you think.

      I think the golden age of ML is right around the corner, but it won’t be AGI.

      It’ll be image recognition and video upscaling, you know, the boring stuff that isn’t game-changing but possibly useful.

      • zbyte64@awful.systems
        3 months ago

        I feel the same about the code generation stuff. What I really want is a tool that suggests better variable names.
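        For what it’s worth, the scaffolding for that wished-for tool is easy to prototype (the function name and the length heuristic below are my own invention, and a real tool would hand these candidates to an LLM for actual name suggestions): walk the module’s AST and flag terse variable names.

```python
import ast


def rename_candidates(source: str) -> list[str]:
    """Flag 1-2 character variable names (excluding common loop
    counters) as candidates for a better-name suggestion."""
    candidates = set()
    for node in ast.walk(ast.parse(source)):
        # Only consider names being assigned to, not merely read.
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            if len(node.id) <= 2 and node.id not in {"i", "j", "_"}:
                candidates.add(node.id)
    return sorted(candidates)


print(rename_candidates("x = 1\ncount = 2\ndf = load_frame()\ni = 0"))
# -> ['df', 'x']
```

        The hard part, of course, is the suggestion step itself, which is exactly where a language model would slot in.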