Please remove this if it’s not allowed

I see a lot of people here who get mad at AI-generated code, and I am wondering why. I wrote a couple of Bash scripts with the help of ChatGPT and, if anything, I think it’s great.

Now, I obviously didn’t tell it to write the entire script by itself; that would be a horrible idea. Instead, I asked it questions along the way and tested its output before putting it in my scripts.

I am fairly competent at writing programs. I know how and when to use arrays, loops, functions, conditionals, etc. I just don’t know Bash’s syntax. I could have used any other language I knew, but I chose Bash because it made the most sense: it ships with most Linux distros out of the box, so nobody has to install another interpreter or compiler. I don’t like Bash because of its, dare I say, weird syntax, but it fit my purpose best. Also, I had never written anything of this complexity in Bash before, just a bunch of commands on separate lines so that I wouldn’t have to type them one after another. This script, however, required many rather advanced features. I wasn’t motivated to learn Bash; I just wanted to put my idea into action.

I did start with an internet search, but the guides I found were lacking. I couldn’t find how to pass values into a function and return a result from it easily, or how to remove a trailing slash from a directory path, loop over an array, catch errors from a previous command, or separate the letters and numbers in a string.
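
To illustrate, here is roughly the kind of snippet I mean for each of those questions (a minimal sketch, not my actual script; names like strip_trailing_slash and the sample paths are made up for this post):

```bash
#!/usr/bin/env bash
set -euo pipefail  # exit early when a command fails unexpectedly

# Pass values into a function via positional parameters ($1, $2, ...);
# "return" a string by echoing it and capturing the output with $(...)
strip_trailing_slash() {
    local path="$1"
    echo "${path%/}"  # parameter expansion drops one trailing slash
}

dir=$(strip_trailing_slash "/home/user/stuff/")

# Loop over an array
files=("a.txt" "b.txt" "c.txt")
for f in "${files[@]}"; do
    echo "processing $f"
done

# Catch an error from the previous command
if ! cp "$dir/a.txt" /tmp/; then
    echo "copy failed" >&2
fi

# Separate the letters and the numbers in a string like "abc123"
s="abc123"
letters="${s//[0-9]/}"   # abc
numbers="${s//[^0-9]/}"  # 123
echo "$letters $numbers"
```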

That is where ChatGPT helped greatly. I would ask it to write these pieces of code whenever I needed them, then test the result with various inputs to see whether it worked as expected. If not, I would tell it which case failed, and it would revise the code before I put it in my scripts.

Thanks to ChatGPT, someone with zero knowledge of Bash can quickly write fairly advanced Bash. I don’t think I could have written what I wrote anywhere near as fast the old-fashioned way; I would have gotten there eventually, but it would have taken far too long. Instead I could just write all of it quickly and move on. If I ever want to learn Bash properly and am motivated to, I will certainly take the time to do it right.

What do you think? What negative experiences have you had with AI chatbots that made you hate them?

  • count_dongulus@lemmy.world

    It doesn’t pass judgment. It just knows what “looks” correct. You need a trained person to discern that. It’s like describing symptoms to WebMD. If you had a junior doctor using WebMD, how comfortable would you be with their assessment?

  • AreaKode@lemmy.world

    I’ve found it to be extremely helpful in coding. Instead of trying to read huge documentation pages, I can just have a chatbot read them and tell me the answer. My coworker has been wanting to learn PowerShell, and using a chatbot, his understanding of the language has greatly improved. A chatbot can not only give you the answer, it can break down how it reached that conclusion. It can be a very useful learning tool.

    • yeehaw@lemmy.ca

      I’ve been using it for CLI syntax and code for a while now. It’s not always right, but even when it isn’t, it gets you almost all the way there. I will continue to use it 😁

        • yeehaw@lemmy.ca

          ChatGPT, all versions. I don’t know, I use it a lot and I just know it’s been wrong. PowerShell comes to mind. And Juniper SRX syntax. And Alcatel.

      • m-p{3}@lemmy.ca

        It’s really useful for quickly finding the right parameters to convert something in a specific way with ffmpeg.
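
        For instance, a typical conversion might look like this (the exact codec settings here are just an illustration, not something from the thread):

        ```bash
        # Re-encode a video to H.264/AAC in an MP4, scaled to 720p
        # while keeping the aspect ratio (width rounded to an even number)
        ffmpeg -i input.mkv -c:v libx264 -crf 23 -preset medium \
               -vf "scale=-2:720" -c:a aac -b:a 128k output.mp4
        ```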

        • yeehaw@lemmy.ca

          Hell yeah it is. So much faster than reading the man pages and stuff

    • Eldritch@lemmy.world

      It’s great at regurgitating pre-written text. For generating new or usable code it’s largely useless. It doesn’t have an actual understanding of what it says. It can recombine information and elements it’s seen before, but it can’t generate anything truly unique.

      • JohnnyCanuck@lemmy.ca

        That isn’t what the comment you replied to was talking about, which is why you’re getting downvoted even though some of what you said is right.

        • Eldritch@lemmy.world

          The first sentence addressed what they talked about: it’s great as an assistant for cutting through documentation to get at what you need. In fact, here’s a recent video from Perry Fractic doing just that with microtext for the C64.

          Anything else, like having it generate the code itself, makes it more of a liability than an asset, since it doesn’t really understand what it’s doing.

          Perhaps I should have separated the two thoughts initially? Either way I’ve said my piece.

          • JohnnyCanuck@lemmy.ca

            The top-level comment is talking about using it for learning. Saying that AI just regurgitates text doesn’t address that at all. In fact, it sounds like you were putting down the commenter for using it for learning.

            The bulk of your comment was about how poorly it writes code, which isn’t what that comment was talking about. At all. So yes, I agree, you should have separated your two thoughts, and probably posted the second one in a different thread within this post, perhaps at the top level as a reply to the OP.

  • SergeantSushi@lemmy.world

    I agree AI is a godsend for non-coders and amateur programmers who need a quick and dirty script. As a professional, I find the quality of the code is oftentimes 💩, and I can write it myself in less time than it takes to describe it to an AI.

    • NeoNachtwaechter@lemmy.world

      AI is a godsend for non-coders and amateur programmers who need a quick and dirty script.

      Why?

      I mean, it is such a cruel thing to say.

      50% of these poor non-coders and amateur programmers would end up with a non-functioning script. I find it so unfair!

      You have not even tried to decide who deserves and gets the working solution and who gets the garbage script. You are soo evil…

    • MagicShel@programming.dev

      I think the process of explaining what you want to an AI can often be helpful, especially given the number of times I’ve explained things to junior developers who said they understood completely, but whose code then made clear they didn’t.

      Explaining to an AI is a pretty good test of how well the stories and comments are written.

    • rustydomino@lemmy.world

      I think you’ve hit the nail on the head. I am not a coder, but using ChatGPT I was able to take someone else’s simple program and modify it for my own needs within just a few hours of work. It’s definitely not perfect, and you still need to put in some work to get your program to run exactly the way you want it to, but ChatGPT is a good place for beginners to start, as long as they understand that it’s not a magic tool.

  • boatswain@infosec.pub

    As a cybersecurity guy, it’s things like this study, which said:

    Overall, we find that participants who had access to an AI assistant based on OpenAI’s codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant.

    • eerongal@ttrpg.network

      FWIW, at this point that study would be horribly outdated. It was done in 2022, which means it probably took place in early 2022 or 2021. The models used for coding have come a long way since then; the study would essentially have to be redone on current models to see if that’s still the case.

      People’s perceptions have probably not changed, but whether the code is actually insecure would need to be reassessed.

      • boatswain@infosec.pub

        Sure, but to me that means the latest information is that AI assistants help produce insecure code. If someone wants to perform a study with more recent models to show that’s no longer the case, I’ll revisit my opinion. Until then, I’m assuming that the study holds true. We can’t do security based on “it’s probably fine now.”

      • sugar_in_your_tea@sh.itjust.works

        I think it’s more appalling because they should have assumed this was early tech and therefore less trustworthy. If anything, I’d expect more people to believe their code is secure today using AI than back in 2021/2022 because the tech is that much more mature.

        I’m guessing an LLM will make a lot of noob mistakes, especially in languages like C(++), where a lot of care needs to be taken for memory safety. LLMs don’t understand code; they just look at a lot of samples of existing code, and a lot of the code available on the internet is terrible from a security and performance perspective. If you’re writing it yourself, hopefully you’ve been through enough code reviews to catch the more common mistakes.

  • dependencyinjection@discuss.tchncs.de

    My workplace of 5 employees and 2 owners has embraced it as an additional tool.

    We have Copilot inside Visual Studio Professional and it’s a great time saver. We have a lot of boilerplate code it can learn from, and why would I want to waste valuable time writing the same things over and over? If every list page follows the same pattern, it’s boring; we are paid to solve problems, not just to retype the same things.

    We even have an AI-powered tool, made by one of the owners, in which we can type commands and it will scaffold all our boilerplate. Or it can watch the project: if I update a model, it will generate the mutations and queries in C#, set up the GraphQL layer, and then implement some views in React/TypeScript.

  • Eczpurt@lemmy.world

    Sounds like it’s just another tool in a coding arsenal! As long as you take care to verify things like you did, I can’t see why it’d be a bad idea. It’s when you blindly trust it that things go wrong.

  • tal@lemmy.today

    I don’t think that the current approaches being used by generative AIs are sufficient to reliably produce correct code; I think that they’re more amenable to human-consumable output (and even there, I’m much more enthusiastic about their use for images than for text, as things stand). A human only needs approximately-correct material to cue their brain; CPUs are more particular.

    We’ll probably get there, in the same sense that we can ultimately produce human-level AI for anything, but I think that it’ll entail higher-level reasoning about a problem, which present generative text approaches don’t do.

    I did start with an internet search…I couldn’t find how to pass values into a function and return a result from it easily,

    So, now, this I have a hard time with.

    When I search for “pass value function bash”, this is the first page I get, which clearly shows an example:

    https://stackoverflow.com/questions/6212219/passing-parameters-to-a-bash-function
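
    The gist of the answers there is just positional parameters (a from-memory sketch, not a verbatim quote of that page):

    ```bash
    greet() {
        echo "Hello, $1!"  # arguments arrive inside the function as $1, $2, ...
    }
    greet "world"          # prints: Hello, world!
    ```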

    This isn’t a case where I’d consider generative AI useful; it’s something for which there is already existing material readily available via a search.

    The other issue with using generative AI for coding is that, for taking pre-existing code for common tasks and using it in multiple programs, we already have an approach: use libraries. That way the code gets maintained, and doesn’t need to be reimplemented by humans over and over.

    Say someone says “I need linked-list code”. Okay, I mean, that’s a pretty common, plain Jane thing to need.

    But if you use a library, and there’s a bug in that code, and it gets fixed, then the bugfix propagates when you update to a newer library. If you generate a linked-list implementation, even if you wind up with working linked-list code at the end, then that isn’t gonna happen.

  • small44@lemmy.world

    Many lazy programmers may just copy-paste without thinking too much about the quality of the generated code. The other group who oppose it are those who think it will kill the programmer job.

    • cm0002@lemmy.world

      Many lazy programmers may just copy-paste without thinking too much about the quality of the generated code

      Tbf, they’ve been doing that LONG before AI came along

      • leftzero@lemmynsfw.com

        Sure, but if you’re copying from Stack Overflow or Reddit and ignoring the dozens of comments telling you why the code you’re copying is wrong for your use case, that’s on you.

        An LLM on the other hand will confidently tell you that its garbage is perfect and will do exactly what you asked for, and leave you to figure out why it doesn’t by yourself, without any context.

        An inexperienced programmer who’s willing to learn won’t fall for the first case and will actually learn from the comments and alternative answers, but will be completely lost if the hallucinating LLM is all they’ve got.

  • HakFoo@lemmy.sdf.org

    My objections:

    1. It doesn’t adequately indicate “confidence”. It could return “foo” or “!foo” just as easily, and if that’s one term in a nested structure, you could spend hours chasing it.
    2. So many hallucinations: inventing methods and fields from nowhere, even in an IDE where they’re tagged and searchable.

    Instead of writing the code now, you end up having to review and debug it, which is more work IMO.

    • CarbonatedPastaSauce@lemmy.world

      I stopped using it after the third time it just wholesale made up PowerShell cmdlets that don’t exist.

      Until it has fidelity it’s just a toy.

  • NuXCOM_90Percent@lemmy.zip

    Lemmy is an outlier where anything “AI” immediately triggers the luddites to scream and rant (and occasionally send threats over PMs…) that it is bad because it is “AI” and so forth. So… massive grain of salt.

    Speaking as (for simplicity’s sake) a software engineer who wears both a coder and a manager hat?

    “AI” is incredibly useful for charlie work. Back in the day you would hire an intern or entry level staff to write your unit tests and documentation and utility functions. But, for well over a decade now, documentation and even many unit tests can be auto-generated by scripts for vim or plugins for an IDE. They aren’t necessarily great but… the stuff that Fred in Accounting’s son wrote was pretty dogshit too.

    What LLMs+RAG do is step that up a few notches. You still aren’t going to have them write the critical path code. But you can farm off a LOT more charlie work to the point where you just need to do the equivalent of review an MR that came from a plugin rather than a kid who thinks we don’t know he reeks of weed.

    And… that is good and bad. Good in that it means smaller companies/teams are capable of much bigger projects. And bad because it means a lot fewer entry level jobs to teach people how to code.

    So that is the manager/mentor perspective. Let’s dig a bit deeper on your example:

    I don’t like Bash because of its, dare I say, weird syntax, but it fit my purpose best. Also, I had never written anything of this complexity in Bash before, just a bunch of commands on separate lines so that I wouldn’t have to type them one after another. This script, however, required many rather advanced features. I wasn’t motivated to learn Bash; I just wanted to put my idea into action.

    I did start with an internet search, but the guides I found were lacking. I couldn’t find how to pass values into a function and return a result from it easily, or how to remove a trailing slash from a directory path, loop over an array, catch errors from a previous command, or separate the letters and numbers in a string.

    Honestly? That sounds to me like foundational issues. You already articulated what you need, but you wanted an all-in-one guide rather than googling “bash function input example” or “bash function return example” or “strip trailing slash from directory path linux” and so forth. Also, I’m pretty sure I regularly land on a guide that covers every one of those questions except the string processing every time I forget the syntax of a for loop in Bash and have to google it.

    And THAT is the problem with relying on these tools. I know plenty of people who fundamentally can’t write documentation because their IDE has always generated (completely worthless) doxygen for them. And it sounds like you don’t know how to self-educate on how to solve a problem.

    Which is why, generally speaking:

    I still prefer to offload the charlie work to newbies because it helps them learn (and it lets me justify their paycheck). Usually I tell them I want to “walk you through our SDLC, it is kind of annoying” so I can watch over their shoulder and make sure they CAN do this by hand. Then… whatever. I don’t care if they pass everything through whatever our IT/cybersecurity departments deem legit.

    Which… personally? I generally still prefer “dumb” scripts to generate the boilerplate for myself. And when I do ask ChatGPT or a “local” setup, I ask general questions. I don’t paste our codebase in. I say “Hey ChatGPT, give me an example of setting the number of replicas of a pod based on specific metrics collected with Prometheus”, and I adapt that. Partially to make sure I understand what we are adding to our codebase, and mostly because I still don’t trust those companies with my codebase and prompts. Which… will probably mean moving away from VSCode within the next year (yay, Copilot) but… yeah.

  • BougieBirdie@lemmy.blahaj.zone

    A lot of the criticism comes from AI results being wrong a lot of the time while sounding convincingly correct. In software, things that appear to be correct but are subtly wrong lead to errors that can be difficult to decipher.

    Imagine that your AI was trained on StackOverflow results. It learns from the questions as well as the answers, but the questions will often include snippets of code that just don’t work.

    The workflow of using AI resembles something like the relationship between a junior and senior developer. The junior/AI generates code from a spec/prompt, and then the senior/prompter inspects the code for errors. If we remove the junior from the equation to replace with AI, then entry level developer jobs are slashed, and at the same time people aren’t getting the experience required to get to the senior level.

    Generally speaking, programmers like to program (many do it just for fun), and many dislike review. AI removes the programming from the equation in favour of review.

    Another argument is that if I generate code that I then have to review and debug to figure out what might be wrong with it, it might just be quicker and easier to write it correctly the first time.

    Business often doesn’t understand these subtleties. There’s a ton of money being shovelled into AI right now. Not only for developing new models, but for marketing AI as a solution to business problems. A greedy executive that’s only looking at the bottom line and doesn’t understand the solution might be eager to implement AI in order to cut jobs. Everyone suffers when jobs are eliminated this way, and the product rarely improves.

    • clif@lemmy.world

      Generally speaking, programmers like to program (many do it just for fun), and many dislike review. AI removes the programming from the equation in favour of review.

      This really resonated with me and is an excellent point. I’m going to have to remember that one.

      • vinnymac@lemmy.world

        A developer who is afraid of peer review is not a developer at all imo, but more or less an artist who fears exposing how the sausage was made.

        I’m not saying a junior who is nervous is not a dev, I’m talking about someone who has been at this for some time, and still can’t handle feedback productively.

        • mbtrhcs@feddit.org

          They’re saying developers dislike having to review other people’s code that’s unfamiliar to them, not having their own code reviewed.

              • vinnymac@lemmy.world

                I did, and I stand by what I said.

                Review is both taken and given. Peer review does not occur in a single direction, it is a conversation with multiple parties. I can understand if someone misunderstood what I meant though.

                • mbtrhcs@feddit.org

                  Your reply refers to a “junior who is nervous” and “how the sausage is made”, which makes no sense in the context of someone who just has to review code.

  • madsen@lemmy.world

    but I chose Bash because it made the most sense: it ships with most Linux distros out of the box, so nobody has to install another interpreter or compiler.

    Last time I checked (because I was writing Bash scripts based on the same assumption), Python was actually present on more Linux systems out of the box than Bash.
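
    If in doubt, it’s easy to check which interpreters a given box actually has (a quick sketch; the command list is arbitrary):

    ```bash
    # Print the path of each interpreter that is on the PATH
    for cmd in bash sh python3 python perl; do
        if command -v "$cmd" >/dev/null 2>&1; then
            printf '%-8s %s\n' "$cmd" "$(command -v "$cmd")"
        else
            printf '%-8s not found\n' "$cmd"
        fi
    done
    ```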

  • Encrypt-Keeper@lemmy.world

    If you’re a seasoned developer who’s using it to boilerplate / template something and you’re confident you can go in after it and fix anything wrong with it, it’s fine.

    The problem is that it’s often used by beginners, or by people who aren’t experienced in whatever language they’re writing, to the point that they won’t even understand what’s wrong with the output.

    If you’re trying to learn to code, or to code in a new language, would you want to learn from somebody who has only half a clue what he’s doing and will confidently tell you things that are objectively wrong? That’s much worse than just learning to do it properly yourself.

    • kromem@lemmy.world

      I’m a seasoned dev and I was at a launch event when an edge case failure reared its head.

      In less than half an hour after pulling out my laptop to fix it myself, I’d used Cursor + Claude 3.5 Sonnet to:

      1. Automatically add logging statements to help identify where the issue was occurring
      2. Tell it the issue once identified, and have it write a fix
      3. Have it remove the logging statements, and push the update

      I never typed a single line of code and never left the chat box.

      My job is increasingly becoming Henry Ford drawing the ‘X’ and not sitting on the assembly line, and I’m all for it.

      And this would only have been possible in just the last few months.

      We’re already well past the scaffolding stage. That’s old news.

      Developing has never been easier or more plain old fun, and it’s getting better literally by the week.

      Edit: I agree about junior devs not blindly trusting them though. They don’t yet know where to draw the X.

      • kent_eh@lemmy.ca

        Edit: I agree about junior devs not blindly trusting them though. They don’t yet know where to draw the X.

        The problem (one of the problems) is that people do lean too heavily on the AI tools when they’re inexperienced and never learn for themselves “where to draw the X”.

        If I’m hiring a dev for my team, I want them to be able to think for themselves, and not be completely reliant on some LLM or other crutch.

  • PixelProf@lemmy.ca

    Lots of good comments here. I think there are many reasons, but AI in general is being quite hated on. It’s sad to me: pre-GPT, I literally researched how AI can be used to help people be more creative and support human workflows, but our pipelines around the AI are lacking right now. As for the hate, here are a few perspectives:

    • Training data is questionable/debatable ethics,
    • Amateur programmers don’t build up the same “code muscle memory”,
    • It’s being treated as a sole author (generate all of this code for me) instead of like a ping-pong pair programmer,
    • The time saved writing code isn’t being used to review and test the code more carefully than it was before,
    • The AI is being used for problem solving, where it’s not ideal, as opposed to code-from-spec where it’s much better,
    • Non-Local AI is scraping your (often confidential) data,
    • Environmental impact of the use of massive remote LLMs,
    • Can be used (according to execs, anyways) to replace entry level developers,
    • Devs can have too much faith in the output because they have weak code review skills compared to their code writing skills,
    • New programmers can bypass their learning and get an unrealistic perspective of their understanding; this one is most egregious to me as a CS professor, where students and new programmers often think the final answer is what’s important and don’t see the skills they strengthen along the way to the answer.

    I like coding with local LLMs and asking occasional questions of larger ones, but on larger code bases the code from these small, local models is often pretty nonsensical. It improves with the right approach: provide it documented functions and examples of a strong, consistent code style, write your test cases in advance so you can verify the outputs, and use it as an extension of IDE capabilities (like generating repetitive lines) rather than as a replacement for your problem solving.
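
    For the “test cases in advance” point, even a throwaway harness is enough; a sketch, where slugify stands in for whatever hypothetical function you’d ask the model to write:

    ```bash
    # Tiny ad-hoc harness: pin down expected outputs before asking
    # the model for an implementation, then check them afterwards
    assert_eq() {
        local expected="$1" actual="$2" label="$3"
        if [[ "$expected" == "$actual" ]]; then
            echo "PASS: $label"
        else
            echo "FAIL: $label (expected '$expected', got '$actual')" >&2
        fi
    }

    # Hypothetical function under test (e.g. model-generated):
    # lowercase the input (bash 4+) and replace spaces with hyphens
    slugify() { local s="${1,,}"; echo "${s// /-}"; }

    assert_eq "hello-world" "$(slugify "Hello World")" "basic slug"
    assert_eq "a-b-c"       "$(slugify "a b c")"       "multiple words"
    ```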

    I think there are a lot of reasons to hate on it, but I think that’s because the ways to use it effectively are still being figured out.

    Some of my academic colleagues still hate IDEs because, to them, tab completion, fast compilers, inline documentation, and automated code linting mean you don’t really need to know anything or follow any good practices, your editor will do it all for you, so you should just use vim or Notepad. It’ll take time to adopt and adapt.

    • Em Adespoton@lemmy.ca

      Spot-on.

      I spend a lot of time training people how to properly review code, and the only real way to get good at it is by writing and reviewing a lot of code.

      With an LLM, it trains on a lot of code, but it does no review per se… unlike other ML systems, there are no negative and positive feedback systems in place to improve quality.

      Unfortunately, AI is now equated with LLM and diffusion models instead of machine learning in general.