• tal@lemmy.today · 1 month ago

    I mean, most complex weapons systems have been some level of robot for quite a while. Aircraft are fly-by-wire, you have cruise missiles, CIWS systems operating in autonomous mode pick out their own targets, ships navigate themselves, etc.

    I don’t expect that that genie will ever go back in the bottle. To do it, you’d need an arms control treaty, and there’d be a number of problems with that:

    • Verification is extremely difficult, especially with weapons that are optionally autonomous. FCAS, for example – the fighter that several countries in Europe are working on – is optionally manned. You can’t physically tell just by looking at such an aircraft whether it’s going to be flown by a person or by an autonomous computer. Think about the Washington Naval Treaty: Japan managed to secretly build treaty-violating warships, even though warships are very large, hard to disguise, easily distinguished externally, and can only be built and stored in a very few locations. If verification failed even under those conditions, I have a hard time seeing how one would manage it with autonomy.

    • It will very probably affect the balance of power. Generally speaking, arms control treaties that alter the balance of power aren’t going to work, because the disadvantaged party is not likely to agree to them.

    I’d also add that I’m not especially concerned about autonomy specifically in weapons systems.

    It sounds like your concern, based on your follow-up comment, is that something like Skynet might show up – the computer network in the Terminator movies that turns on humans. The kind of capability we’re dealing with here isn’t on that level. I can imagine general AI one day being an issue in that role – though I’m not sure that it’s the main concern I’d have; I’d guess that dependence, followed by an unexpected failure, might be a larger issue.

    But in any event, I don’t think that it has much to do with military issues. In a scenario where you truly had an uncontrolled, more-intelligent-than-human artificial intelligence running amok on something like the Internet, it isn’t going to matter much whether or not you’ve plugged it into weapons, because anything that can realistically fight humanity can probably manage to get control of, or produce, weapons anyway. This is an issue with the development of advanced artificial intelligence, but it’s not really a weapons or military issue. If we succeed in building something more intelligent than we are, then we fundamentally face the problem of controlling it – of making something smarter than us do what we want – which is a complicated problem.

    The term coined by Yudkowsky for this problem is “friendly AI”:

    https://en.wikipedia.org/wiki/Friendly_artificial_intelligence

    Friendly artificial intelligence (also friendly AI or FAI) is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensuring it is adequately constrained.

    It’s not an easy problem, and I think that it’s worth discussion. I just think that it’s mostly unrelated to the matter of making weapons autonomous.

    • model_tar_gz@lemmy.world · 1 month ago

      Reward models (as used in reinforcement learning) and preference optimization models can come to some conclusions that we humans find very strange when they learn from patterns in the data they’re trained on. Especially when those incentives and preferences are evaluated (or generated) by other models. Some of these models could very well come to the conclusion that nuking every advanced-tech human civilization is the optimal way to improve the human species, because we have such rampant racism, classism, nationalism, and every other schism that perpetuates us treating each other as enemies to be destroyed and exploited.
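
      To make that concrete, here is a minimal, purely illustrative sketch (not from either commenter – the actions, scores, and names are all invented) of the reward-misspecification problem: an optimizer only ever sees a learned proxy score, and the proxy’s maximum can be the true objective’s worst case.

      # Toy reward-misspecification sketch (hypothetical values throughout).
      # True objective: "improve human wellbeing".
      # Learned proxy:  "reduce reported conflict", fit from limited preference data.

      ACTIONS = ["fund schools", "mediate dispute", "censor all news", "remove the humans"]

      TRUE_REWARD = {            # what we actually want (invisible to the optimizer)
          "fund schools": 0.8,
          "mediate dispute": 0.6,
          "censor all news": -0.5,
          "remove the humans": -1.0,
      }

      LEARNED_PROXY = {          # what the preference model learned to score highly
          "fund schools": 0.3,
          "mediate dispute": 0.5,
          "censor all news": 0.9,
          "remove the humans": 1.0,
      }

      # The optimizer only sees the proxy, so it picks the proxy's maximum...
      chosen = max(ACTIONS, key=LEARNED_PROXY.get)
      print(chosen, TRUE_REWARD[chosen])   # 'remove the humans' -1.0
      # ...which happens to be the worst possible action under the true objective.

      The point isn’t that a real system scores actions with a four-entry dictionary; it’s that whatever stands in for LEARNED_PROXY is the only thing the optimizer actually pursues.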

      Sure, we will build ethical guardrails. And we will proclaim to have human-in-the-loop decision agents. But we’re building towards autonomy, and edge/corner cases always exist in any framework you constrain a system to.
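
      As a hand-wavy illustration of how such guardrails leak (the keyword list and function below are entirely made up), a rule-based filter only escalates to the human for the cases its authors anticipated:

      BLOCKED_KEYWORDS = {"launch", "detonate", "fire weapon"}

      def guardrail(action: str, human_approved: bool) -> bool:
          """Return True if the agent may execute the action."""
          if any(word in action.lower() for word in BLOCKED_KEYWORDS):
              return human_approved        # escalate: human in the loop
          return True                      # everything else runs autonomously

      print(guardrail("fire weapon at target", human_approved=False))         # False: caught by the rule
      print(guardrail("initiate kinetic engagement", human_approved=False))   # True: corner case slips through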

      I’m an AI Engineer working on autonomous agentic systems—these are things we (as an industry) are talking about—but to be quite frank, there are no robust solutions to this yet. There may never be. Think about raising a teenager—one that is driven strictly by logic, probabilistic optimization, and outcome incentive optimization.

      It’s a tough problem. The naive, trivial solution, which is also impossible, is to simply halt and ban all AI development. Turing opened Pandora’s box long before any of our time.

    • NeoNachtwaechter@lemmy.world · 1 month ago

      I don’t expect that that genie will ever go back in the bottle. To do it, you’d need an arms control treaty, and there’d be a number of problems with that:

      The worst problem with arms control treaties is that the USA never adheres to them.