Such a good and sane outcome. Not only am I happy for the affected customers, I’m also going to file this away in my bookmarks for later use.
One of the main concerns I bring up every time one of the managers wants to throw “AI” at something because it’s trendy is “who’s responsible when it just makes something up?”.
I know this is a Canadian ruling instead of a US one, but at least I can point to it and say “probably us”.
Yup! I immediately sent this link to anyone who's had to deal with the "throw a chatbot at it" management response.
Good. I'm sure the chatbot will be back up and running soon, but anything that reminds companies there are risks to replacing humans with "AI-enhanced" chatbots is good. Unfortunately, I'm sure the lesson companies will take away from this is to include a disclaimer that the chatbot isn't always correct, which kind of defeats the whole point of using a chatbot to me. Why would I want to use something to solve a problem when you've just told me it could give me inaccurate information?
Expedia does it right: stop the stupid questions at the gate, and immediately connect people with real needs to real people.
Amazon is similar, but the real people there are useless as fuck in my country. They're foreign part-timers who barely speak the language of my country… They can't do anything specific.
Yeah, I bet now we’ll be seeing some real people in chats while they scramble to cover their asses.
🤖 I’m a bot that provides automatic summaries for articles:
On the day Jake Moffatt's grandmother died, Moffatt immediately visited Air Canada's website to book a flight from Vancouver to Toronto.
In reality, Air Canada’s policy explicitly stated that the airline will not provide refunds for bereavement travel after the flight is booked.
Experts told the Vancouver Sun that Moffatt’s case appeared to be the first time a Canadian company tried to argue that it wasn’t liable for information provided by its chatbot.
Last March, Air Canada’s chief information officer Mel Crocker told the Globe and Mail that the airline had launched the chatbot as an AI “experiment.”
“So in the case of a snowstorm, if you have not been issued your new boarding pass yet and you just want to confirm if you have a seat available on another flight, that’s the sort of thing we can easily handle with AI,” Crocker told the Globe and Mail.
It was worth it, Crocker said, because “the airline believes investing in automation and machine learning technology will lower its expenses” and “fundamentally” create “a better customer experience.”
Saved 81% of original text.
Ironically, the bot summary missed the crucial point that Air Canada's chatbot gave inaccurate information.
There are two disturbing tendencies being demonstrated here:
- Using useless AI to engage and wear down complaining customers. The AI can't offer meaningful solutions to many customer complaints, but companies use it to annoy customers into giving up so they can save the cost of real customer support.
- Either blaming the AI or insisting it's right when it makes a mistake. AI is by nature biased and unpredictable, but that doesn't stop companies from saying 'the computer says so'.
These companies need a few high-profile, hefty penalties as motivation to avoid such dirty tricks.
3. Asserting that their IT system is a "separate legal entity" and that they are not responsible for the accuracy of the system. They are eating legal locoweed.
Air Canada essentially argued that "the chatbot is a separate legal entity that is responsible for its own actions."
Another step back for the AI Liberation Front… can't file patents, can't own copyrights, can't be a legal entity, can't incorporate… what's next, denying AI sentience? This dehumanization of and discrimination against AIs needs to stop. 🤡