I’ve tried several types of artificial intelligence, including Gemini, Microsoft Copilot, and ChatGPT. A lot of the time I ask them questions and they get everything wrong. If artificial intelligence doesn’t work, why are they trying to make us all use it?

  • xia@lemmy.sdf.org · 16 days ago

    The natural general hype is not new… I even see it in 1970s sci-fi. It’s like once something pierced the long-thought-impossible Turing test, decades of hype pressure suddenly and freely flowed.

    There is also an unnatural hype, the idea that with one breakthrough will come another, and that the next one might yield a technocratic singularity for the first mover: money, market dominance, and control.

    Which brings the tertiary effect (closer to your question): companies are so quickly and blindly eating so many billions of dollars of first-mover costs that corporate copium wants to believe there will be a return (or at least some cost defrayal)… so you get a bunch of shitty AI products, and pressure toward them.

      • xia@lemmy.sdf.org · 15 days ago

        I’m not talking about one-offs and the assessment noise floor; it’s more like: “ChatGPT broke the Turing test” (as is claimed). It used to be something we tried to attain, and now we don’t even bother trying to make GPTs seem human… we actually train them to say otherwise, lest people forget. We figuratively pole-vaulted over the Turing test and are now on the other side of it, as if it were a point on a timeline instead of an academic procedure.

  • Fedegenerate@lemmynsfw.com · 15 days ago

    As a beginner in self-hosting, I like plugging random commands I find online into an LLM. I ask it what the command does, what I’m trying to achieve, and whether it would work…

    It acts like a mentor. I don’t trust what it says entirely, so I’m constantly sanity-checking it, but it gets me where I want to go with some back and forth. I’m doing some of the problem solving, so there’s that exercise, and it also teaches me what commands do and how the flags alter them. It’s also there to stop me from making really stupid mistakes that I would otherwise have learned about the hard way.

    My last project was adding an HDD to my zpool as a mirror. I found the “attach” command online with a bunch of flags, put together what I thought was the solution, and asked ChatGPT. It corrected some things (I hadn’t included the name of my zpool), then gave me a procedure to do it properly.

    In that procedure I noticed an inconsistency between how I was naming drives and how my zpool was naming them. I asked ChatGPT again and was, in effect, told I was a dumbass: if that’s the naming convention the pool uses, I should probably use it instead of mine (I was using /dev/sdc while the zpool was using /dev/disk/by-id/). It also told me why the zpool might have been configured that way, which was a teaching moment: I’m using USB drives, and the zpool wants to protect itself if the setup gets switched around. I clarified the names and rewrote the command (really, ChatGPT was updating the command as we went)… Boom, my drives are mirrored, I’ve made all my stupid mistakes in private and away from production, and life is good.
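
    For the curious, here’s roughly what that boils down to. A hedged sketch only: the pool name “tank” and the drive serials below are made-up placeholders, not my actual setup.

    ```sh
    # List the stable device names. /dev/disk/by-id/ paths stay tied to the
    # physical drive, unlike /dev/sdX names, which can shuffle whenever USB
    # drives re-enumerate.
    ls -l /dev/disk/by-id/

    # zpool attach <pool> <existing-device> <new-device> turns the existing
    # single-disk vdev into a two-way mirror.
    sudo zpool attach tank \
        /dev/disk/by-id/usb-OLD_DRIVE_SERIAL \
        /dev/disk/by-id/usb-NEW_DRIVE_SERIAL

    # Watch the resilver until the new disk is fully synced.
    zpool status tank
    ```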

  • ProfessorScience@lemmy.world · 16 days ago

    When ChatGPT first started to make waves, it was a significant step forward in the ability for AIs to sound like a person. There were new techniques being used to train language models, and it was unclear what the upper limits of these techniques were in terms of how “smart” of an AI they could produce. It may seem overly optimistic in retrospect, but at the time it was not that crazy to wonder whether the tools were on a direct path toward general AI. And so a lot of projects started up, both to leverage the tools as they actually were, and to leverage the speculated potential of what the tools might soon become.

    Now we’ve gotten a better sense of what the limitations of these tools actually are, and of the upper limits of where these techniques might lead. But a lot of momentum remains. Projects that started up when the limits were unknown don’t just have the plug pulled the minute expectations stop matching reality. I mean, maybe some do. But most of the projects try to make the best of the tools as they are, to keep the promises they made, for better or worse. And of course new ideas keep coming, and new entrepreneurs want a piece of the pie.

  • Daemon Silverstein@thelemmy.club · 16 days ago

    I ask them questions and they get everything wrong

    It depends on your input, on your prompt, and on your parameters. For me, although I’ve experienced wrong answers and/or AI hallucinations, it’s not THAT frequent, because I’ve been talking with LLMs almost daily since ChatGPT went public. This daily usage has taught me the strengths and weaknesses of each LLM on the market (I use ChatGPT GPT-4o, Google Gemini, Llama, Mixtral, and sometimes Pi, Microsoft Copilot, and Claude).

    For example: I learned that Claude is highly sensitive to certain terms and topics, such as occultist and esoteric concepts (especially when dealing with demonolatry, although I don’t know exactly why it refuses to talk about it; I’m a demonolater myself), cryptography and ciphering, as well as acrostics and other literary devices for multilayered poetry (I write my own poetry and ask the models to comment on and analyze it, so I can get valuable insights about it).

    I also learned that Llama can dig deep into the meaning of things, while GPT-4o can produce longer answers. Gemini has a “drafts” feature, where I can check alternative answers to the same prompt.

    It’s the same with generative AI art models, which I’ve been using to illustrate my poetry. I learned that Diffusers SDXL Turbo (from Hugging Face) is better for real-time prompting, a kind of “WYSIWYG” model (“what you see is what you get”). Google SDXL (also from Hugging Face) can generate four images in different styles (cinematic, photography, digital art, etc.). Flux, the newly released generative AI model, is the best for realism (especially the Flux Dev branch). They’ve been producing excellent outputs, and I’ve been improving my prompt engineering skills so I can communicate with them seamlessly.

    Summarizing: AI users need to learn how to give these models instructions efficiently. They can produce astonishing outputs if given efficient inputs. But you’re right that they can produce wrong results and/or hallucinate, even with the best prompts, because they’re indeed prone to it. For me, AI hallucinations are not so bad for knowledge such as esoteric concepts (because I personally believe these “hallucinations” could convey something transcendental, though that’s just my personal belief and I’m not intending to preach it here), but these same hallucinations are bad when I’m seeking technical knowledge in STEM (Science, Technology, Engineering, and Medicine) fields.

    • Kintarian@lemmy.world (OP) · 16 days ago

      I just want to know which elements work best for my Flower Fairies in The Legend of Neverland. And maybe cheese sauce.

      • Daemon Silverstein@thelemmy.club · 16 days ago

        I didn’t know about this game. It’s nice, with interesting aesthetics. Chestnut Rose reminds me of Lilith’s archetype.

        A tip: you could use “The Legend of the Neverland global wiki” on Fandom to feed the LLM with important concepts before asking it for combinations. It’s a good technique, considering that LLMs probably don’t know the game well enough to generate precise responses (unless you’re using a search-enabled LLM such as Perplexity AI or Microsoft Copilot, which can search the web to produce more accurate results).
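
        A minimal sketch of the idea, assuming the OpenAI chat completions endpoint purely as an example (the model name, the question, and the pasted wiki excerpt are placeholders):

        ```sh
        # Hedged example: stuff the wiki text into the prompt as context,
        # then ask the question. Model and excerpt are placeholders.
        curl https://api.openai.com/v1/chat/completions \
          -H "Authorization: Bearer $OPENAI_API_KEY" \
          -H "Content-Type: application/json" \
          -d '{
            "model": "gpt-4o",
            "messages": [
              {"role": "system",
               "content": "Answer using this reference material: <paste the relevant wiki sections here>"},
              {"role": "user",
               "content": "Which elements work best for Flower Fairies?"}
            ]
          }'
        ```

        The same trick works in any chat UI: paste the reference text first, then ask the question.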

    • Shanedino@lemmy.world · 16 days ago

      Woah are you technoreligious? Sure, believe what you want and all, but that is full tech-bro bullshit.

      Also, on a different note: just going purely off of your description, doesn’t it seem like being able to just use search engines is easier, for most people, than figuring out all of these intricacies? If a tool has a high learning curve and you don’t plan to use it very frequently, there is plenty of room for improvement. Also, every time you get false results, consider it the equivalent of a major bug; does that shed a different light on it for you?

        • Daemon Silverstein@thelemmy.club · 16 days ago

        doesn’t it seem like being able to just use search engines is easier than figuring out all of these intricacies for most people

        Well, prompt engineering is a thing nowadays. There are even job openings for professionals who specialize in this field. AIs are tools, sophisticated ones, just as R and Wolfram Mathematica are sophisticated mathematical tools that need expertise. The problem is that AI companies often mis-advertise AI models as “off-the-shelf assistants,” as if they were a human talking to you. They’re not. They’re still tools. I guess (and I’m rooting for this) that AGI would change that scenario. But I guess we’re still distant from a self-aware AGI (unfortunately).

        Woah are you technoreligious?

        Well, I wouldn’t describe myself that way. My beliefs are multifaceted and complex (possibly unique, I guess?), spanning multiple spiritual and religious systems, while also embracing STEM concepts (especially the technological branch) and philosophical views (especially nihilism, existentialism, and absurdism), trying to converge them all on common ground (although it seems “impossible” at first glance to unite Science, Philosophy, and Belief).

        In a nutshell, I’ve been pursuing a syncretic worshiping of the Dark Mother Goddess.

        As I said, it’s multifaceted, and I can’t really explain it here, because it would take tons of concepts. Believe me, it’s deeper than “technoreligious.” I see the inner workings of AI models (neural networks and genetic algorithms dependent on the randomness of weights, biases, and seeds) as a great tool for diving into Her Waters of Randomness when dealing with such subjects (esoteric and occult ones). Just as Kardecism sometimes uses instrumental transcommunication / electronic voice phenomena (EVP) to talk with spirits, AI can be used as if it were an Ouija board or a planchette, if one believes so (as I do).

        But I’m also a programmer and technologically/scientifically curious, so I find myself asking LLMs about some Node.js code I wrote, too. Or about some mathematical concept. Or about cryptography and ciphers (Vigenère and Caesar, for example). I’m highly active mentally, seeking to learn many things all the time.

  • Tyrangle@lemmy.world · 16 days ago

    This is like saying that automobiles are overhyped because they can’t drive themselves. When I code up a new algorithm at work, I spend an hour or two whiteboarding my ideas, then the rest of the day coding it up. AI can’t design the algorithm for me, but if I can describe it in English, it can do the tedious work of writing the code. If you’re just using AI as a Google replacement, you’re missing the bigger picture.

      • Tyrangle@lemmy.world · 16 days ago

        A lot of people are doing work that can be automated in part by AI, and there’s a good chance that they’ll lose their jobs in the next few years if they can’t figure out how to incorporate it into their workflow. Some people are indeed out of the workforce or in industries that are safe from AI, but that doesn’t invalidate the hype for the rest of us.

  • Buglefingers@lemmy.world · 15 days ago

    IIRC, when ChatGPT was first announced, the hype was because it was the first really usable interface that a layman could interact with using normal language and get an intelligible response from the software. Normally, to talk with computers, we use their language (programming), but this let plain-language speakers interact with a computer and get it to do things, in a more pervasive way than something like Siri, for instance.

    This then got overhyped and overpromised to people with dollar signs in their eyes, at the thought of large savings from labor reduction and of capabilities far greater than it actually had. They were sold a product that has no real “product,” as it’s something most people would prefer to interact with on their own terms when needed, like any tool. That’s really hard to sell, and hard to make people believe they need. So the sellers doubled down with the promise it would be so much better down the road. And, having already spent an ungodly amount on it, they’re caught in the sunk-cost fallacy and keep doubling down.

    This is my personal take on and understanding of what’s happening, though there are probably more nuances, like staying ahead of the competitors who fell for the same promises.

  • Carrolade@lemmy.world · 16 days ago

    I’ll just toss in another answer nobody has mentioned yet:

    The Terminator and Matrix movies were really, really popular. They seeded the idea of this being an inevitable future into the brains of the mainstream population.

  • empireOfLove2@lemmy.dbzer0.com · 16 days ago

    They were pretty cool when they first blew up. Getting them to generate semi-useful information wasn’t hard, and on anything hard-factual they would usually avoid answering or defer.

    They’ve legitimately gotten worse over time. As user volume has gone up, necessitating faster, shallower model responses, and as further training on Internet content has caused model degradation (the models end up training on their own output), they have gradually begun to break. They’ve also been pushed harder than they were meant to be, to show “improvement” to investors demanding more accurate, human-like factual responses.

    At this point it’s a race to the bottom on a poorly understood technology. Every money-sucking corporation latched on to LLMs like a piglet finding a teat, thinking they would be the golden goose that finally eliminates those stupid, whiny, expensive workers who keep asking for annoying, unprofitable things like “paid time off” and “healthcare.” In reality, they’ve been sold a bill of goods by Sam Altman and the rest of the tech bros currently raking in a few extra hundred billion dollars.

      • Feathercrown@lemmy.world · 15 days ago

        Yes, that’s what they said. I’m starting to think you came here with a particular agenda to push, and I don’t think that’s very polite.

        • Kintarian@lemmy.world (OP) · 15 days ago

          Look it up. Also, they were pushing AI for web searches, and I have not had good luck with that. However, I created a document with it yesterday, and it came out really well. Someone said to try the creative side, and so far, so good.

              • Feathercrown@lemmy.world · 14 days ago

                I find that a lot of discourse around AI is… “off”. Sensationalized, or simplified, or emotionally charged, or illogical, or simply based on a misunderstanding of how it actually works. I wish I had a rule of thumb to give you about what you can and can’t trust, but honestly I don’t have a good one; the best thing you can do is learn about how the technology actually works, and what it can and can’t do.

                • Kintarian@lemmy.world (OP) · 14 days ago

                  For a while Google said they would revolutionize search with artificial intelligence. That hasn’t been my experience. Someone here mentioned working on the creative side instead. And that seems to be working out better for me.

        • Kintarian@lemmy.world (OP) · 15 days ago

          The person who said AI is neither artificial nor intelligent was Kate Crawford. Every source I try to find is paywalled.

  • just_an_average_joe@lemmy.dbzer0.com · 15 days ago

    Mooooneeeyyyy

    I work as an AI engineer, and let me tell you: the tech is awesome and has a looooot of potential, but it’s not ready yet. Because of that high potential, literally no one wants to miss the opportunity to get rich quick with it. It’s only been like 2-3 years since this tech was released to the public. If only OpenAI had released it as open source, just like everyone before them, we wouldn’t be here. But they wanted to make money, and now everyone else wants to, too.

  • PenisDuckCuck9001@lemmynsfw.com · 16 days ago

    One of the few things they’re good at is academic “cheating.” I’m not a fan of how the education industry has become a massive pyramid scheme intended to force as many people into debt as possible, so I see AI as the lesser evil and a way to fight back.

    Obviously no one is using AI to successfully do graduate research or anything. I’m just talking about how they take boring, easy subjects and load you up with pointless homework and assignments to waste your time rather than teach you anything.

  • hungryphrog@lemmy.blahaj.zone · 16 days ago

    Robots don’t demand things like “fair wages” or “rights”. It’s way cheaper for a corporation to, for example, use a plagiarizing artificial unintelligence to make images for something, as opposed to commissioning a human artist who most likely will demand some amount of payment for their work.

    Also I think that it’s partially caused by people going “ooh, new thing!” without stopping to think about the consequences of this technology or if it is actually useful.

  • Kintarian@lemmy.world (OP) · 15 days ago

    OK, I am working on a legal case. I asked Copilot to write a demand letter for me, and it is pretty damn good.

  • kitnaht@lemmy.world · 16 days ago

    Holy BALLS are you getting a lot of garbage answers here.

    Have you seen all the other things generative AI can do? From bone-rigging 3D models, to animations recreated from a simple video, to recreations of voices, to art made for people without the talent to produce it themselves. Many times these generative AIs are very quick at creating boilerplate that only needs some basic tweaks to be correct. This speeds up production work a hundredfold in a lot of cases.

    Plenty of simple answers come out correct, they’re breaking entrenched monopolies like Google’s hold on search, and I’ve even had these GPTs take input text and summarize it quickly, at different granularities for easy skimming. There’s a lot of worthwhile stuff to get out of these AIs. They can speed up workflows significantly.

    • Kintarian@lemmy.world (OP) · 16 days ago

      I’m a simple man. I just want to look up a quick bit of information. I ask the AI where I can find a setting in an app, and it gives me the wrong information and the wrong links. It’s great that you can do all that, but for the average person it’s kind of useless. At least it’s useless to me.

      • kitnaht@lemmy.world · 16 days ago

        So you got the wrong information about an app once. When a GPT is scoring higher than 97% of human test-takers on the SAT and other standardized tests, what does that tell you about average human intelligence?

        The thing about GPTs is that they are just word predictors. A lot of the time, when asked super-specific questions about small subjects that people aren’t talking about, yeah, they’ll hallucinate. But they’re really good at condensing, categorizing, and regurgitating a wide range of topics quickly, which is amazing for most people.

        • Kintarian@lemmy.world (OP) · 16 days ago

          It’s not just once. It has become such an annoyance that I quit using it and asked what the big deal is. I’m sure it’s great for creative and computer-nerd stuff, but for regular people sitting at home, listening to how awesome AI is and being underwhelmed, it’s not great. They keep shoving it down our throats, and plain old people are bailing.

          • kitnaht@lemmy.world · 15 days ago

            Yeah, see, that’s the kicker. Calling this “computer nerd stuff” just gives away your real thinking on the matter. My high-school daughters use this to finish their essay work quickly, and they don’t really know jack about computers.

            You’re right that old people are bailing - they tend to. They’re ignorant, they don’t like to learn new and better ways of doing things, they’ve raped our economy and expect everything to be done for them. People who embrace this stuff will simply run circles around those who don’t. That’s fine. Luddites exist in every society.

          • Feathercrown@lemmy.world · 15 days ago

            tl;dr: It’s useful, but not necessarily for what businesses are trying to convince you it’s useful for

      • Feathercrown@lemmy.world · 15 days ago

        You aren’t really using it for its intended purpose. It’s supposed to be used to synthesize general information. It only knows what people talk about; if the subject is particularly specific, like the settings in one app, it will not give you useful answers.

    • Feathercrown@lemmy.world · 15 days ago

      Yeah, I feel like people who have very strong opinions about what AI should be used for also tend to ignore the facts of what it can actually do. It’s possible for something to be both potentially destructive and used to excess for profit, and also an incredible technical achievement that could transform many aspects of our life. Don’t ignore facts about something just because you dislike it.