Microsoft: "you're using Bing wrong! You're not supposed to talk with it!"
You built a conversational interface and then expected people to use it just for search?
OK, the title is a bit clickbaity, but yes, in their blog, Microsoft explains that they didn't expect people to use Bing Chat as "a tool for more general discovery of the world, and for social entertainment".
Back to the basics: conversation 101
Really, Microsoft? You built a conversational interface and then expected people to use it just for search? Did it ever occur to you that people talk to Sydney because you designed it that way? Even ChatGPT could have told you that if you design a funny persona, give users an interface with text bubbles and a button labelled "Chat", they might do slightly more than search for cake recipes.

Regardless, it's always good to reconsider the true nature of conversation, so let's do some Conversation 101. Here are some quotes from N.J. Enfield's book How We Talk (a must-read for everyone in this field) and some thoughts to ponder on a Sunday afternoon.
The nature of conversation
"Conversation is where language lives and breathes. Conversation is the medium in which language is most often used. When children learn their native language, they learn it in conversation. When a language is passed down through generations, it is passed down by means of conversation."
Take-away: task-based problem solving is just a subset of all the possible functions of conversation. We use conversation for so much more, and so much more fundamentally, than doing a search or a Q&A.
Conversation as advanced collaboration
"An individual's ability to learn and process language is an unbeatable skill in the animal world, but it is the teamwork of dialogue that reveals the true genius of language. Even the simplest conversation is a collaborative and precision-timed achievement by the people involved. […] When two people talk, they each become an interlocking piece in a single structure, driven by […] the conversation machine."
Take-away: conversation is a collaborative effort. As soon as humans feel they're part of a conversation, there are expectations of the other party. Even if it's a machine. And even though Sydney/Venom superficially manages kind of OK in simple task-based dialogue, it can't handle the subtleties of real conversation, where people might be indirect, upfront, confrontational, or manipulative. It mirrors, rather than cooperates and guides. It's not an equal conversational partner.
Conversation is about intentions and moral commitment
"Language would not be what it is without our species' highly cooperative and morally grounded ways of thinking. For the conversation machine to operate, humans apply high-level interpersonal cognition: we infer others' intentions beyond the explicit meanings of their words, we monitor others' personal and moral commitment to interaction and if necessary hold them to account for that commitment."
Take-away: so what is the intention of a Sydney/Venom? Can it have its own stake in a conversation in the first place? Sure enough, an LLM can echo words and conventions, but without the capability to actually understand or ground these words in reality, what good are they?
Likewise, how inclined would people be to help Sydney/Venom stay on track and make the conversation successful? Some of the Sydney/Venom insults happened spontaneously, but of course, many of them were elicited after long and careful prompting. Apparently, we like to test the boundaries of this new conversational interface. I certainly did :-)
And this is of course not something we do to humans in everyday conversation. At least, not this extreme, I'd say. Because we're aware that we're dealing with another party that has its own stake in the conversation, and is working with us to be that single, intertwined conversation machine. Will we ever be able to form this same system with a machine?
Bing vs ChatGPT
It's interesting to note that ChatGPT, contrary to Sydney/Venom, seems to uphold a clear intention: "I'm a machine, I'm here to help, and I'm not going down the road of being provoked" (that is, until you jailbreak it). And somehow, for me at least, that made it much easier to stick to my part of the script, of being a fair conversational partner.

In that sense, ChatGPT does a much better job at keeping its part of the conversational deal. Yes, it will make errors, but it won't pretend it's something it's not. And as such, it perhaps does not take an equal role in the conversational machine, but at least its intentions are clear: it is a machine, and it will behave like one. Which makes it much easier for me to define my role in the conversation.