Please remove this if it's not allowed.

I see a lot of people in here who get mad at AI-generated code, and I'm wondering why. I wrote a couple of Bash scripts with the help of ChatGPT and, if anything, I think it's great.

Now, I obviously didn't tell it to write the entire script by itself. That would be a horrible idea. Instead, I asked it questions along the way and tested its output before putting it in my scripts.

I am fairly competent at writing programs. I know how and when to use arrays, loops, functions, conditionals, etc. I just don't know anything about Bash's syntax. I could have used any other language I know, but I chose Bash because it made the most sense: it ships with most Linux distros out of the box, so nobody has to install another interpreter or compiler. I don't like Bash because of its, dare I say, weird syntax, but it made the most sense for my purpose, so I chose it. I had also never written anything of this complexity in Bash before, just a bunch of commands on separate lines so I wouldn't have to type them one after another. This script, though, required quite a few more advanced features. I wasn't motivated to learn Bash; I just wanted to put my idea into action.

I did start with an internet search, but the guides I found were lacking. I couldn't find how to pass values into a function and easily return a result from it, how to remove a trailing slash from a directory path, how to loop over an array, how to catch errors from the previous command, how to separate the letters and numbers in a string, and so on.
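
For reference, these are the kinds of snippets I mean. This is only a rough sketch of the idioms in question, not my actual script:

```bash
#!/usr/bin/env bash

# Pass values into a function and "return" a result via stdout:
greet() {
  local name=$1
  echo "hello, $name"
}
msg=$(greet "world")

# Remove a trailing slash from a directory path:
dir="/some/path/"
dir=${dir%/}

# Loop over an array:
files=("a.txt" "b.txt" "c.txt")
for f in "${files[@]}"; do
  echo "$f"
done

# Catch an error from the previous command:
if ! cp "$f" /backup/; then
  echo "copy failed" >&2
fi

# Separate the letters and the digits in a string like "abc123":
s="abc123"
letters=${s//[^a-zA-Z]/}
digits=${s//[^0-9]/}
```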

That is where ChatGPT helped greatly. I would ask it to write these pieces of code whenever I ran into them, then test its code with various inputs to see if it worked as expected. If not, I would ask again, telling it which case failed, and it would revise the code before I put it in my scripts.

Thanks to ChatGPT, someone with zero knowledge of Bash can quickly and easily write Bash that is fairly advanced. I don't think I could have written what I wrote anywhere near as quickly the old-fashioned way; I would have gotten there eventually, but it would have taken far too long. With ChatGPT I could just write it all quickly and move on. If I ever want to learn Bash properly and am motivated to, I will certainly take the time to learn it the right way.

What do you think? What negative experience do you have with AI chatbots that made you hate them?

  • bruhduh@lemmy.world · 6 days ago

    That is the general reason. I use LLMs to help myself with everything, including coding, even though I know why it's bad.

    • ikidd@lemmy.world · 6 days ago

      I'm fairly sure Linus would disapprove of my "rip everything off of Stack Overflow and ship it" programming style.

    • dezmd@lemmy.world · 6 days ago

      This is a good quote, but it lives within a context of professional code development.

      Everyone in the modern era starts coding by copying functions without understanding what they do, and people go entire careers in all sorts of jobs and industries without understanding things, copying what came before because it 'worked', without really understanding the underlying mechanisms.

      What's important is having a willingness to learn and putting in the effort to learn. AI code snippets are super useful for learning, even when the model hallucinates, if you test the code and make backups first. This all requires responsible IT practices to do safely in a production environment, and that's where corporate management eyeing labor cost reduction loses the plot, thinking AI is a wholesale replacement for a competent human as the tech currently stands.

  • helenslunch@feddit.nl · 6 days ago

    If you're not an experienced developer, it can become a crutch rather than a way to actually learn how to write code.

    The real reason? People are just fed up with AI in general (which has no real-world use for most people) being crammed down their throats, and with their personal code (and other data) being used to train models for megacorps.

    • sirblastalot@ttrpg.network · 6 days ago

      There are probably legitimate uses out there for gen AI, but all the money people have such a hard-on for the unethical uses that now it’s impossible for me to hear about AI without an automatic “ugggghhhhh” reaction.

  • sugar_in_your_tea@sh.itjust.works · 6 days ago

    Two reasons:

    1. my company doesn’t allow it - my boss is worried about our IP getting leaked
    2. I find them more work than they’re worth - I’m a senior dev, and it would take longer for me to write the prompt than just write the code

    I just don't know anything about Bash's syntax

    That probably won’t be the last time you write Bash, so do you really want to go through AI every time you need to write a Bash script? Bash syntax is pretty simple, especially if you understand the basic concept that everything is a command (i.e. syntax is <command> [arguments...]; like if <condition> where <condition> can be [ <special syntax> ] or [[ <test syntax> ]]), which explains some of the weird corners of the syntax.
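
    To illustrate what I mean by "everything is a command" (a generic sketch, nothing specific to your script):

    ```bash
    # "if" just runs a command and branches on its exit status:
    if grep -q "error" logfile.txt; then
      echo "found an error"
    fi

    # "[" and "[[" are themselves command-like constructs that take arguments,
    # which is why the spaces around them are mandatory:
    if [ -f /etc/passwd ]; then echo "plain test"; fi
    if [[ $name == foo* ]]; then echo "extended test"; fi
    ```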

    AI sucks for anything that needs to be maintained. If it’s a one-off, sure, use AI. But if you’re writing a script others on your team will use, it’s worth taking the time to actually understand what it’s doing (instead of just briefly reading through the output). You never know if it’ll fail on another machine if it has a different set of dependencies or something.

    What negative experience do you have with AI chatbots that made you hate them?

    I just find dealing with them to take more time than just doing the work myself. I’ve done a lot of Bash in my career (>10 years), so I can generally get 90% of the way there by just brain-dumping what I want to do and maybe looking up 1-2 commands. As such, I think it’s worth it for any dev to take the time to learn their tools properly so the next time will be that much faster. If you rely on AI too much, it’ll become a crutch and you’ll be functionally useless w/o it.

    I did an interview with a candidate who asked if they could use AI, and we allowed it. They ended up making (and missing) the same mistake twice in the same interview because they didn’t seem to actually understand what the AI output. I’ve messed around with code chatbots, and my experience is that I generally have to spend quite a bit of time to get what I want, and then I still need to modify and debug it. Why would I do that when I can spend the same amount of time and just write the code myself? I’d understand the code better if I did it myself, which would make debugging way easier.

    Anyway, I just don’t find it actually helpful. It can feel helpful because it gets you from 0 to a bunch of code really quickly, but that code will probably need quite a bit of modification anyway. I’d rather just DIY and not faff about with AI.

    • ikidd@lemmy.world · 6 days ago

      Your boss should be more worried about license poisoning when you incorporate code that's been copied from copyleft projects and presented as "generated".

      • sugar_in_your_tea@sh.itjust.works · 6 days ago

        Perhaps, but our userbase is so small that it's very unlikely anyone would notice. We are essentially B2B with something like a few hundred active users. We do vet our dependencies religiously, but in all actuality, we could probably get away with pulling in some copyleft code.

  • obbeel@lemmy.eco.br · 6 days ago

    I have worked with somewhat large codebases before using LLMs. You can ask the LLM to point out a specific problem and give it the context. I honestly don't see myself as capable without an LLM. And it is a good teacher; I learn a lot from using LLMs. No free advertisement for any of the suppliers here, but they are just useful.

    You get access to information you can't find anywhere else on the Web. There is a large, structural backlash against it, but it is useful.

    (Edit) Also, I would like to add that the people who say questions won't be asked anymore have seemingly never tried getting answers online in a discussion forum - people are viciously ill-tempered when answering.

    With an LLM, you can just bother it endlessly and learn more about the world while you do it.

  • Soup@lemmy.cafe · 7 days ago

    Because, despite how easy it is to dupe people into thinking your methods are altruistic, AI exists to save money by eradicating jobs.

    AI is the enemy. No matter how you frame it.

  • Numuruzero@lemmy.dbzer0.com · 7 days ago

    I have a coworker who is essentially building a custom program in Sheets using Apps Script, and has been using ChatGPT/Gemini the whole way.

    While this person has a basic grasp of the fundamentals, there’s a lot of missing information that gets filled in by the bots. Ultimately after enough fiddling, it will spit out usable code that works how it’s supposed to, but honestly it ends up taking significantly longer to guide the bot into making just the right solution for a given problem. Not to mention the code is just a mess - even though it works there’s no real consistency since it’s built across prompts.

    I'm confident that in this case, and likely in plenty of other cases like it, the total time spent learning how to ask the bot the right questions would be better spent just reading the documentation for whatever language is being used. At that point it might be worth using it to spit out simple code that can be easily debugged.

    Ultimately, it just feels like you’re offloading complexity from one layer to the next, and in so doing quickly acquiring tech debt.

    • sugar_in_your_tea@sh.itjust.works · 6 days ago

      Exactly my experience as well. Using AI will take about the same amount of time as just doing it myself, but at least I’ll understand the code at the end if I do it myself. Even if AI was a little faster to get working code, writing it yourself will pay off in debugging later.

      And honestly, I enjoy writing code more than chatting with a bot. So if the time spent is going to be similar, I’m going to lean toward DIY every time.

  • bitwolf@lemmy.one · 7 days ago

    We built a Durable task workflow engine to manage infrastructure and we asked a new hire to add a small feature to it.

    I checked on them later and they expressed they were stuck on an aspect of the change.

    I could tell the code was from ChatGPT. I asked, "You wrote this with ChatGPT, didn't you?" and they asked how I could tell.

    I explained that ChatGPT doesn’t have the full context and will send you on tangents like it has here.

    I gave them the docs for the engine and for the integration point and said, "Try using only these, and ask me questions if you're stuck for more than 40 minutes."

    They went on to become a very strong contributor and no longer use ChatGPT or Copilot.

    I've tried it myself and it gives me wrong answers 90% of the time. It could be useful, though: if ChatGPT would find and link the docs it thinks are relevant, I would love it, but it never does, even when asked.

    • socialmedia@lemmy.world · 7 days ago

      Phind is better about linking sources. I’ve found that generated code sometimes points me in the right direction, but other times it leads me down a rabbit hole of obsolete syntax or other problems.

      Ironically, if you are already familiar with the code, you can easily tell where the LLM went wrong and adapt its generated code.

      But I don't use it much because it's almost more trouble than it's worth.

  • john89@lemmy.ca · 7 days ago

    Personally, I’ve found AI is wrong about 80% of the time for questions I ask it.

    It’s essentially just a search engine with cleverbot. If the problem you’re dealing with is esoteric and therefore not easily searchable, AI won’t fare any better.

    I think AI would be a lot more useful if it gave a percentage indicating how confident it is in its answers, too. It's pretty useless to have it constantly give wrong information as though it were correct.

  • OmegaLemmy@discuss.online · 7 days ago

    I use AI, but whenever I do I have to modify its output, whether that's because it gives me errors, is slow, doesn't fit my current implementation, or starts off on the wrong foot.

  • Smokeydope@lemmy.world · 7 days ago

    It's not just AI code but AI stuff in general.

    It boils down to Lemmy having a disproportionate number of leftist liberal-arts-college student types. That's just the reality of this platform.

    Those types tend to see AI as a threat to their independent creative work, as well as feeling slighted that their data may have been used to train a model.

    It's understandable why lots of people denounce AI out of fear, spite, or ignorance. It's hard to remain fair and open to new technology when it's threatening your livelihood and its early foundations may have scraped your data non-consensually for training.

    So you'll see an AI-hate circlejerk post every couple of days from angry people who want to poison models and cheer for the idea that it's just trendy nonsense. Don't debate them. Don't argue. Just let them vent and move on with your day.

  • cley_faye@lemmy.world · 7 days ago

    • issues with model training sources
    • businesses sending their whole codebase to a third party (Copilot, etc.) instead of using local models
    • the time gained is not that substantial in most cases, as the actual "writing code" part is not the part that takes the most time; thinking about it and checking it is
    • "chatting" in natural language to describe something that has a precise spec is less efficient than just writing the code for most tasks, as long as you're half-competent. We've known that since customer/developer meetings have existed.
    • the dev has to actually be competent enough to review the changes/output. In a way, "peer reviewing" becomes mandatory; it's long, can be tedious, and generated code really needs to be double-checked at every corner (talking from experience here; even a generated one-liner can have issues, as in the sketch just after this list)
    • some businesses think that LLM output is "good enough" and fire or move away the people who can actually do that review, leading to more issues down the line
    • actual debugging of non-trivial problems ends up sending me in a lot of directions; getting useful output is unreliable at best
    • new things will sometimes confuse the LLM, making it a waste of time at best and a producer of even worse code at worst
    • using a code chatbot for common, menial tasks is mostly irrelevant, as those tasks have already been done and sort of "optimized out" into libraries and reusable code. At best you could pull some of that into your own codebase, making it worse to maintain in the long term
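
    As a made-up example of the one-liner problem (not from any real generated output, just the classic quoting trap):

    ```bash
    # Looks plausible, but unquoted expansions split on spaces and expand globs,
    # so a file named "my notes.txt" breaks it:
    for f in $(ls *.txt); do mv $f backup/$f; done

    # The boring, correct version:
    mkdir -p backup
    for f in *.txt; do mv -- "$f" backup/; done
    ```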

    Those are the downsides I can think of off the top of my head, from having used AI coding assistance (mostly local solutions, for privacy reasons). There are upsides too:

    • sometimes it does produce useful output, in which I only have to edit a few parts to make it work
    • local autocomplete is sometimes almost as useful as the regular contextual autocomplete
    • the chatbot turning short code into a longer "natural language" explanation can sometimes act as a rubber duck to aid debugging

    Note the “sometimes”. I don’t have actual numbers because tracking that would be like, hell, but the times it does something actually impressive are rare enough that I still bother my coworker with it when it happens. For most of the downside, it’s not even a matter of the tool becoming better, it’s the usefulness to begin with that’s uncertain. It does, however, come at a large cost (money, privacy in some cases, time, and apparently ecological too) that is not at all outweighed by the rare “gains”.

    • confuser@lemmy.zip · 7 days ago

      A lot of your issues are efficiency-related, which I think can realistically be solved given some time for AI development cycles to take hold. If they were better all around, to whatever standard you think is sufficiently useful, would you then consider them useful? The other side of it is that if it can reach that level of competence in coding, it most likely can become just as competent in a variety of other domains too.

  • Encrypt-Keeper@lemmy.world · 8 days ago

    If you’re a seasoned developer who’s using it to boilerplate / template something and you’re confident you can go in after it and fix anything wrong with it, it’s fine.

    The problem is it’s used often by beginners or people who aren’t experienced in whatever language they’re writing, to the point that they won’t even understand what’s wrong with it.

    If you're trying to learn to code, or to code in a new language, would you try to learn from somebody who has only half a clue what he's doing and will confidently tell you things that are objectively wrong? That's much worse than just learning to do it properly yourself.

    • kromem@lemmy.world · 7 days ago

      I’m a seasoned dev and I was at a launch event when an edge case failure reared its head.

      In less than half an hour after pulling out my laptop to fix it myself, I'd used Cursor + Claude 3.5 Sonnet to:

      1. Automatically add logging statements to help identify where the issue was occurring
      2. Tell it the issue once identified and have it write a fix
      3. Have it remove the logging statements, after which I pushed the update

      I never typed a single line of code and never left the chat box.

      My job is increasingly becoming Henry Ford drawing the ‘X’ and not sitting on the assembly line, and I’m all for it.

      And this would only have been possible in just the last few months.

      We’re already well past the scaffolding stage. That’s old news.

      Developing has never been easier or more plain old fun, and it’s getting better literally by the week.

      Edit: I agree about junior devs not blindly trusting them though. They don’t yet know where to draw the X.

      • kent_eh@lemmy.ca · 7 days ago

        Edit: I agree about junior devs not blindly trusting them though. They don’t yet know where to draw the X.

        The problem (one of the problems) is that people do lean too heavily on the AI tools when they’re inexperienced and never learn for themselves “where to draw the X”.

        If I’m hiring a dev for my team, I want them to be able to think for themselves, and not be completely reliant on some LLM or other crutch.

  • PixelProf@lemmy.ca · 8 days ago

    Lots of good comments here. I think there are many reasons, but AI in general is being quite hated on. It's sad to me - pre-GPT, I literally researched how AI can be used to help people be more creative and support human workflows, but our pipelines around the AI are lacking right now. As for the hate, here are a few perspectives:

    • Training data is questionable/debatable ethics,
    • Amateur programmers don’t build up the same “code muscle memory”,
    • It’s being treated as a sole author (generate all of this code for me) instead of like a ping-pong pair programmer,
    • The time saved writing code isn’t being used to review and test the code more carefully than it was before,
    • The AI is being used for problem solving, where it’s not ideal, as opposed to code-from-spec where it’s much better,
    • Non-Local AI is scraping your (often confidential) data,
    • Environmental impact of the use of massive remote LLMs,
    • Can be used (according to execs, anyways) to replace entry level developers,
    • Devs can have too much faith in the output because they have weak code review skills compared to their code writing skills,
    • New programmers can bypass their learning and get an unrealistic perspective of their understanding; this one is most egregious to me as a CS professor, where students and new programmers often think the final answer is what’s important and don’t see the skills they strengthen along the way to the answer.

    I like coding with local LLMs and asking occasional questions to larger ones, but the code on larger code bases (with these small, local models) is often pretty nonsensical, though it improves with the right approach. Provide it documented functions and examples of a strong, consistent code style, write your test cases in advance so you can verify the outputs, and use it as an extension of IDE capabilities (like generating repetitive lines) rather than a replacement for your problem solving.
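
    As a minimal sketch of the "tests first, then generate" idea (the function name and file below are made up for illustration):

    ```bash
    #!/usr/bin/env bash
    # Hypothetical harness written *before* asking the model for strip_trailing_slash();
    # the generated function only stays in lib.sh once these checks pass.
    source ./lib.sh   # assumed location of the generated function

    fail=0
    check() {
      local input=$1 expected=$2 got
      got=$(strip_trailing_slash "$input")
      if [[ $got != "$expected" ]]; then
        echo "FAIL: '$input' -> '$got' (expected '$expected')" >&2
        fail=1
      fi
    }

    check "/tmp/dir/"   "/tmp/dir"
    check "/tmp/dir"    "/tmp/dir"
    check "relative/p/" "relative/p"
    exit "$fail"
    ```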

    I think there are a lot of reasons to hate on it, but it's also because the ways to use it effectively are still being figured out.

    Some of my academic colleagues still hate IDEs because, to them, tab completion, fast compilers, in-line documentation, and automated code linting mean you don't really need to know anything or follow any good practices since your editor will do it all for you, so you should just use vim or notepad. It'll take time to adopt and adapt.

    • Em Adespoton@lemmy.ca · 8 days ago

      Spot-on.

      I spend a lot of time training people how to properly review code, and the only real way to get good at it is by writing and reviewing a lot of code.

      With an LLM, it trains on a lot of code, but it does no review per se... unlike other ML systems, there are no negative and positive feedback loops in place to improve quality.

      Unfortunately, AI is now equated with LLM and diffusion models instead of machine learning in general.