As they improve, we’ll likely trust AI models with more and more responsibility. But if their autonomous decisions end up causing harm, our current legal frameworks may not be up to scratch.
No, you see, the corporations will just lobby until the courts classify AI as its own legal entity, just like with Citizens United.
The person who allowed the AI to make these decisions autonomously.
We should handle it the way Asimov showed us: create "robot laws" modeled on slavery laws.
In principle, the AI is a non-person and therefore a person must take responsibility.
The whole point of Asimov’s Three Laws was to show that they could never work in reality, because they would be far too easy to circumvent.