It’s a mistake to ascribe sentience and self-sovereignty to the artificial intelligence chatbots that are parroting bizarre things back to humans, as captured in the recent dialogue between a New York Times reporter and Bing’s new AI chatbot.
A chatbot’s erratic behavior reflects how its algorithms were trained; it does not reveal some autonomous being inhabiting the space behind our screens. AI chatbots were trained on us: our writing, our online interactions, our absurdity. They don’t know what they’re saying. They simply recombine the things we’ve said into something coherent but often untrue, or disconnected from context and nuance.
As the amount of AI-produced content grows exponentially, an even higher premium will be placed on humans who can apply contextual knowledge, draw on ethical frameworks to steer through complex situations, and use good judgment to separate disinformation from truth. We should be more afraid of losing our judgment to AI than of losing our jobs.