• Eheran@lemmy.world · 3 months ago

    So you predicted that security flaws in software are not going to vanish with AI?

      • melroy@kbin.melroy.org · 3 months ago

        My point exactly: now you have genAI code written by an AI that doesn't know what it is doing, instructed by a developer who doesn't understand the programming language, and reviewed by a co-worker who doesn't know what is going on. It's madness, I tell you!

    • melroy@kbin.melroy.org · 3 months ago

      I predicted that introducing AI to software engineers (especially juniors) would result in overall worse code, since apparently people don't feel responsible for genAI code, while I believe the responsibility still lies fully with the humans trying to deliver that code. On top of that, most devs don't do thorough code reviews in general (often due to lack of time or … skill issues). Now we have AI generating code that is accepted far too easily, on top of reviewers who blindly approve it… and no unit tests or integration tests… and then we end up with the current situation. No wonder this happened. If you work in software engineering, you know exactly what I'm talking about, especially at larger companies.
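
      To make the "no unit tests" point concrete, here is a minimal, hypothetical sketch in Python. The parse_port helper and its behaviour are invented purely for illustration (they are not from the thread); the point is only that a few lines of tests catch the happy-path-only mistakes that slip through a blind review.

      ```python
      import unittest


      def parse_port(value: str) -> int:
          """Hypothetical helper: parse a TCP port from user input.
          (Illustrative only -- not code from the thread.)"""
          port = int(value)            # raises ValueError on non-numeric input
          if not 1 <= port <= 65535:   # reject out-of-range ports
              raise ValueError(f"port out of range: {port}")
          return port


      class ParsePortTest(unittest.TestCase):
          def test_valid_port(self):
              self.assertEqual(parse_port("8080"), 8080)

          def test_rejects_out_of_range(self):
              # A review that only skims the happy path misses this case.
              with self.assertRaises(ValueError):
                  parse_port("99999")

          def test_rejects_garbage_input(self):
              with self.assertRaises(ValueError):
                  parse_port("8080; rm -rf /")


      if __name__ == "__main__":
          unittest.main()
      ```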