A face in tree bark

Humans want to see other humans everywhere.

There’s a phenomenon called pareidolia, where people perceive familiar patterns — especially faces — in unrelated or random stimuli. It’s why you might imagine you see a face in the knots of a tree, in some stormy clouds, or in the burn marks on a piece of toast.

Pareidolia likely evolved as a survival mechanism. It was safer for early humans to mistakenly think they saw a face (and maybe recognize a threat) than to miss seeing a real one. Our brains are heavily biased to detect faces quickly — the fusiform face area in the temporal lobe is specialized just for facial recognition. 

We’re also very much wired for anthropomorphism, or assuming non-human entities have human characteristics. You might find yourself sure you know what your dog is thinking when they stare at you, or cheering on your Roomba as it navigates a tricky corner. Taking it a step further, you assume the non-human has beliefs, desires, and intentions. That’s a fair assumption for the dog, which almost certainly does want your sandwich, but less so for the Roomba, which is not even aware of its own actions, let alone your encouraging words.

Our highly social brains also interpret behavior as coming from something with thoughts, feelings, and goals. Daniel Dennett described this “intentional stance” as a predictive strategy: we explain and anticipate behavior by attributing beliefs, desires, and other intentional states to systems or entities, regardless of how they actually work on the inside. Our ancestors might have used it when sizing up a wolf, trying to predict whether it intended to attack or run away. Nowadays, we might also use it in offhand ways like “that Roomba sure wants to get that corner of the kitchen clean”.

Chatbots and virtual assistants push all the buttons of this neural programming. Many of us still feel rude if we don’t thank Alexa for answering a question, even if we’re sure “she” has no feelings. We’ll apologize to ChatGPT for getting our prompt wrong rather than risk a perceived social error. When the responses are this humanlike, it only feels natural to behave as if we’re interacting with another human.

All that back-and-forth only deepens the illusion. In our early evolutionary history, we never would have encountered anything we could communicate with that didn’t have its own desires and intentions, so there was no need to tell the two apart. The idea that we can have lengthy, thoughtful conversations with something that has no intentions of its own is very, very new to our ancient brains.

In a recent New York Times article, Dr. Sherry Turkle of MIT describes chatbots as “alive enough”. 

“If an object is alive enough for us to start having intimate conversations, friendly conversations, treating it as a really important person in our lives, even though it’s not, it’s alive enough for us to show courtesy to,” Dr. Turkle said.

I’m interested in hearing how you interact with chatbots.

Do you see them as simple tools to get a job done – generate code, write copy?

Do you talk to them like a friend, tell them your feelings?

Do you feel comfortable with them, or are you suspicious of their motives… or their human owners’ motives?

Let me know in the comments, or send me a note at hello@facingyourscreen.com.
