A bill under consideration in New York would provide a private right of action, allowing people to file lawsuits against chatbot owners who violate the law.
It’s problematic imho bc the “advice” is often incomplete, missing context, or just wrong. So you end up having to verify it yourself anyway. And if you don’t, you could be acting on harmful advice.
Which, to be fair, is not that different from a lawyer. They’re not perfect either.
The difference is that a lawyer can be held liable for malpractice. When a chatbot gives harmful advice, who is responsible?
(Obviously, whoever is running it, but so far that hasn’t been established in court.)