I simply want a law, state, federal, or whatever, that requires orgs to be legally on the hook for whatever is said by an AI that they represent as speaking on their behalf.
There was a case in Canada that so found, dunno if there’s any US precedent yet.
We had a case here where Air Canada had deployed a bot to answer customer service questions and, of course, it gave some customers false information. Air Canada argued in court that they were not responsible for that guidance. Thankfully, the court disagreed. arstechnica.com/tech-policy/...
Air Canada appears to have quietly killed its costly chatbot support.
This is the technology of the future, with limitless potential and worth every penny, but also sometimes it just be out here sayin shit
As somebody who recently had to write an internal white paper about AI testing, this was one of the risks I explicitly called out as something that needed to be tested for, on the assumption that any promises made by an AI would end up being legally binding.
In the US, the FTC has been unequivocal and forceful in pointing out that you are, in fact, on the hook. There is no “but it was an AI” exception.
Why would a business not be on the hook for that? Businesses are already on the hook for things an employee does, and for their inanimate communications like ads and price listings. To my knowledge, there's no AI exception.
Just watch all that shit stop *immediately*.
Didn't Air Canada get burned with that in the recent past?
There has been a suit about it in the US. Dunno where it currently stands, though. It's extremely likely that the "AI" would leave the company just as liable as when a customer service rep gets something horrendously wrong.
How about one better: someone sues such an org on the assumption that it is the case, wins, and establishes case law that's extremely difficult to overturn.