As they improve, we’ll likely trust AI models with more and more responsibility. But if their autonomous decisions end up causing harm, our current legal frameworks may not be up to scratch.
The person who decided to use the AI.
My guess is that it's gonna wind up being a split, and it won't treat "AI" any differently from any other kind of device.
There's going to be some kind of reasonable expectation for how a device using AI should act, and if the device acts within those expectations and still causes harm, liability falls on the person who decided to use it.
But if the device doesn't act within those expectations, then it's not on them; it's probably on the device manufacturer.
Yeah, if the company making the AI makes false claims about it, then it'd be at least partially on them.
There are going to be a lot of instances going forward where you won't know you were interacting with an AI.
If there’s a quality check on the output, sure, they’re liable.
If a Tesla runs you into an ambulance at 80 mph… the very expensive Tesla lawyers will win.
It’s a solid quandary.
Why would the lawyer defendant not know they're interacting with AI? Would the AI-generated content appear to be actual case law? How would that confusion happen?