Air Canada's bot mishap pre-dates ChatGPT
So if it's not a case of LLM hallucination, what did happen?
The Air Canada chatbot mishap happened on 11 November 2022. That’s nearly three weeks before the release of ChatGPT. This is not an LLM bot gone haywire and making stuff up. In all likelihood, this was a traditional, NLU-based bot, where conversational AI is used for question recognition and where humans write the actual answers.
What probably happened
The handwritten chatbot answer and the handwritten text on the website simply said different things.
What can we learn from this?
Content lifecycle management is vital for successful conversational AI (and generative AI!) solutions. Too much corporate content lives in siloed, unmanaged, unstructured repositories without change management. That makes it hard, sometimes downright impossible, to create consistent and reliable cross-channel content. If I don’t know what my colleagues on the web content team are doing, how am I going to keep my bot content in line?
The key to successful omnichannel content management is single sourcing: write once, configure for many. Many channels, many audiences, many configurations. To do this, content should be approached as data: structured, granular, metadata-rich components that are channel-independent and can be combined and configured in many ways.
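To make "content as data" concrete, here is a minimal sketch of a single-sourced component. Everything in it is hypothetical (the `ContentComponent` class, the channel names, the sample policy text); the point is simply that one source body carries its own metadata and is configured per channel, rather than being copy-pasted into each channel by hand.

```python
from dataclasses import dataclass, field

@dataclass
class ContentComponent:
    """A channel-independent, single-sourced piece of content."""
    component_id: str
    body: str                               # the one source of truth
    metadata: dict = field(default_factory=dict)

    def render(self, channel: str) -> str:
        """Configure the same source body for a given channel."""
        if channel == "chatbot":
            return self.body                # short and conversational as-is
        if channel == "web":
            return f"<p>{self.body}</p>"    # wrapped for the website template
        raise ValueError(f"Unknown channel: {channel}")

# Hypothetical example component: written once, rendered for two channels.
refund_policy = ContentComponent(
    component_id="refund-bereavement",
    body="Bereavement fares must be requested before travel.",
    metadata={"audience": "all", "release": "2022-11"},
)

print(refund_policy.render("chatbot"))
print(refund_policy.render("web"))
```

Because both the bot and the website render from the same `body`, a policy change is made once and flows to every channel, which is exactly the discrepancy the Air Canada case failed to prevent.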
Implementing single sourcing requires a mature content organisation with solid processes, architecture and solutions that support this way of working. Plus tonnes of common sense.
Proper tooling should include dashboards that show me not only the content components themselves, but also their relations: to other components, to the channels where they are used, to the audiences they have been configured for, to the release packages they were part of, and so on.
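A sketch of what such a relations view could be built on, under entirely hypothetical names and data: a small index mapping each component to its channels, audiences, release and related components, plus a traversal that answers the dashboard question "if this component changes, which channels must be rechecked?"

```python
# Hypothetical relations index for a content dashboard.
# Each component records the channels it is published to, the audiences
# it is configured for, its release package, and related components.
relations = {
    "refund-bereavement": {
        "channels": {"web", "chatbot"},
        "audiences": {"all"},
        "release": "2022-11",
        "related": {"refund-standard"},
    },
    "refund-standard": {
        "channels": {"web"},
        "audiences": {"all"},
        "release": "2022-10",
        "related": set(),
    },
}

def impacted_channels(component_id: str) -> set:
    """Every channel to recheck when this component changes,
    including channels reached through related components."""
    seen, visited = set(), set()
    stack = [component_id]
    while stack:
        cid = stack.pop()
        if cid in visited:
            continue
        visited.add(cid)
        entry = relations[cid]
        seen |= entry["channels"]
        stack.extend(entry["related"])
    return seen

print(sorted(impacted_channels("refund-bereavement")))  # ['chatbot', 'web']
```

The same index answers the reverse question too: editing the web policy page would flag the chatbot answer that shares the component, which is precisely the cross-channel consistency check that was missing.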
And surprise, surprise: this benefits not only traditional chatbots, but LLM-based bots too.