In just a hundred years, we managed to develop a handheld, portable device that contains all of the information in the entire world, which has expanded our technological capabilities. It has also created an incredibly frustrating problem in our daily lives: we have exponentially more information to deal with than we ever had before. Back in the '70s or '80s, if you wanted to know something, you went to the library or watched a documentary that’s on a DVD. Now? You have to navigate the terrifyingly bad search engines out there today, like Bing and Google, which are getting more and more unreliable. You have to take notes, which is frustratingly complicated, and unless you are some extraordinarily gifted college student who is a master at note-taking, your notes will probably be imperfect… Then you have to search through all of the information available to you to finally make some sort of decision. And even if you know what you want to say or communicate, you still have to put it into the right format…

This is exhausting. Dealing with this much information, synthesizing it, reformatting it, figuring out how to work with so much of it every single day of your life: it is insanity! A large language model can actually help you, especially if you train it or use roles to shape it into whatever you want it to be doing. For example, if I want to put together some well-organized notes on Tableau, that awful piece of garbage technology, I can simply use my local Llama 3.1 model, which I have personally trained on a ton of my own notes and information, and get whatever I need out of it to explain a very complex topic to someone else easily. The web is never accessed; everything is stored locally on my own device. Miraculous, and it lifts a huge load off my shoulders, because now I don’t have to sift through my notes, spend a day and a half figuring out how to word things, and then put it all into a PowerPoint that’s only going to be used for five minutes…
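
For what it’s worth, the setup is only a few lines. Here’s a minimal sketch using the Ollama Python client, assuming Ollama is installed and a Llama 3.1 model has already been pulled locally; the system prompt and the `tableau_notes.txt` file are just example stand-ins for my own prompts and notes:

```python
# Minimal sketch of a fully local query via the Ollama Python client.
# Assumes: `pip install ollama`, the Ollama daemon running, and the
# model pulled with `ollama pull llama3.1`. No web access happens at
# query time; the model and the notes both live on this machine.
import ollama

# "tableau_notes.txt" is a hypothetical stand-in for a local notes dump.
with open("tableau_notes.txt", encoding="utf-8") as f:
    notes = f.read()

response = ollama.chat(
    model="llama3.1",
    messages=[
        # The system "role" is what shapes the model into the job you want.
        {
            "role": "system",
            "content": (
                "You are a patient tutor. Turn the notes you are given "
                "into a clear, well-organized explanation."
            ),
        },
        {
            "role": "user",
            "content": (
                "Using these notes, explain Tableau's core concepts "
                "simply enough to present to someone else:\n\n" + notes
            ),
        },
    ],
)
print(response["message"]["content"])
```

Swap in whichever local model you actually have; the point is that the role prompt plus your own notes do the shaping, not the web.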

This is what I think the technology should be used for. Not “artificial intelligence”: no one is ever replaced in this process. And as you can tell, I am using the technology to make myself better, to augment my capabilities.

  • JackGreenEarth@lemm.ee

    LLMs are a form of artificial intelligence. And yes, they are useful and good. So are many other forms of AI. It’s only really bad, same as other technologies, when it’s proprietary, censored, and centrally controlled by one company.

    • treefrog@lemm.ee

      Technology generally isn’t good or bad. A surgeon’s scalpel, for example, can heal or hurt depending on the skill and intentions of the user.

      What AI is, is powerful. That makes it good in the hands of some, and very, very bad in the hands of others. This isn’t just about censorship or capitalism. One of the first of the newer AIs was able to generate thousands of novel lethal compounds overnight by changing a single parameter. And that’s just the medical/warfare/terrorism side of things.

      Social trust is likely to erode as deepfake technology gets into the hands of literally everyone. State actors will go after other states, as we’re already seeing in elections. Rogue actors will be able to spread misinformation or use the technology to con people. And corporate actors are already using it to manipulate people for profit, with harmful outcomes already apparent.

      I’m not an AI hater, btw. The advances we’ll see in medical technology, microbiology, and understanding large systems like weather and societies are and will be amazing. But there are no guardrails at the moment, and there likely won’t be any in the years ahead. It will be a wild ride, and AI is just one more thing that’s going to drastically change our world in the coming two decades. Some of it will be good. And a lot of it will be bad.

  • conciselyverbose@sh.itjust.works

    Evaluating sources and consolidating the information they contain into concise, organized structures is how your brain learns. The notes aren’t the goal of note taking. They’re simply the process you use to internalize the information.

    • Buttflapper@lemmy.world (OP)

      Yes, but the point I’m getting at is that at a certain point it’s impossible to retain more information, and it’s actually ideal not to retain it at all but to have it accessible at any moment without straining yourself. Look at the youth today and how much they are struggling in school. Grades are down more than ever, and the education industry is crippled right now. How in the world are they supposed to keep up with all the information out there, given that they’re at a disadvantage compared to when we were their age? It’s almost impossible to fathom how they could. So large language models can help with synthesizing information and closing the intelligence gap. Would you really want someone with a much worse education, who is marginally less intelligent, making critical, incredibly important decisions? Ideally, no. But what if you could augment them with additional information to make their decision-making even better? That’s my exact point.

      • conciselyverbose@sh.itjust.works

        They’re struggling because they’re not learning, and not learning how to learn.

        LLM outputs aren’t reliable. Using one for your research is doing the exact opposite of the steps that are required to make good decisions.

        The prerequisite to making a good decision is learning the information relevant to the decision; then you use that information to determine your options and the likely outcomes of those paths. Internalizing the problem space is fundamental to the process. You need to actually understand the space you’re making a decision about in order to make a good decision. The effort is the point.

  • BananaTrifleViolin@lemmy.world

    AI is a marketing term at the moment, and it’s all one big speculative financial bubble. Just look at Nvidia and how its share price is so divorced from reality.

    LLMs can be useful tools and have value in themselves. The problem is the hype and the misuse of the term AI to promise the earth, plus the big tech companies rushing to push tools that are not yet fit for purpose.

    Any tool that “hallucinates”, i.e. is error-strewn and lies, is fit for nothing. It’s just a curio, and these general-purpose tools are giving AI and LLMs a bad reputation. But well-designed, well-trained LLMs targeted at specific tasks are useful.

  • Dave.@aussie.zone

    Train your LLM better.

    You didn’t go to the library in the '80s and watch a DVD of a documentary to get the information you wanted.

    So this is the concern I have with letting LLMs do all the heavy lifting. You’ve put in a nice summary of how we should be using LLMs, and then here’s a glaring anachronism. Now that I’ve spotted that, should I give any credence to whatever else you’ve said?