Neuroscience researchers have documented the oft-described Uncanny Valley effect and mapped its location in the human brain.
The research, carried out by academics from the University of Duisburg-Essen, RWTH Aachen University and Cambridge, will be of interest to AI and robotics researchers as they try to develop artificial agents that don’t leave consumers unsettled at best, or send them off-grid at worst.
As the researchers write in the Journal of Neuroscience, “The Uncanny Valley hypothesis—an influential psychological framework—captures the observation that human responses to artificial agents are nonlinear: we like increasingly anthropomorphic artificial agents, but feel uncomfortable if they become too human-like.”
Such artificial agents can “elicit positive emotions but … also make humans uncomfortable and even induce repulsion, leading to rejection as social partners.”
A better understanding of the underlying mechanisms could be useful in fine-tuning human-machine – and human-to-human – interaction, they continue.
To understand the phenomenon, the researchers constructed artificial agents using pictures of humans, humans with physical impairments, “artificial humans”, android robots, humanoid robots and mechanoid robots. These were then rated by the subjects for their “likeability” and human-likeness, whilst subjects’ brain activity was measured using MRI scanning.
In a second task, the subjects’ brains were scanned while they were asked to consider which of the agents they would trust to choose a gift for a human.
Perhaps unsurprisingly, the agents were rated more likeable the more human-like they were – except for the nearly human synthetic agents sitting on the human/non-human boundary. Trust judgements followed the same pattern: the more human-like the agent, the more the subjects trusted it – again, except for those on the boundary.
Turning to the MRI data, the researchers concluded that activity in the ventromedial prefrontal cortex “signalled the subjective likability of artificial agents as a nonlinear function of human-likeness, with selective low likability for highly human-like agents”.
They also said, “A distinct amygdala signal predicted rejection of artificial agents.” The amygdala is part of the limbic system, and central to the processing of emotions – like repulsion – and survival instincts.
The researchers concluded that “human reactions toward artificial agents are governed by a neural mechanism that generates a selective, nonlinear valuation in response to a specific feature combination (human-likeness in nonhuman agents). Thus, a basic principle known from sensory coding—neural feature selectivity from linear-nonlinear transformation—may also underlie human responses to artificial social partners.”
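To make the linear-nonlinear idea concrete, here is a minimal illustrative sketch in Python. It is not the researchers’ model: the function, its parameters (`dip_center`, `dip_width`, `dip_depth`) and the Gaussian dip are all hypothetical choices, picked only to show how a linear trend in human-likeness plus a selective nonlinearity near the boundary can produce a valley in likability.

```python
import numpy as np

def uncanny_valley_likability(human_likeness, dip_center=0.85,
                              dip_width=0.07, dip_depth=1.2):
    """Hypothetical linear-nonlinear valuation sketch.

    Likability rises roughly linearly with human-likeness, but a
    selective nonlinear penalty near the human/non-human boundary
    carves out the 'valley'. All parameters are illustrative.
    """
    linear = human_likeness  # linear trend: more human-like, more likeable
    # Selective nonlinearity: a Gaussian dip centred just below fully human
    dip = dip_depth * np.exp(-((human_likeness - dip_center) ** 2)
                             / (2 * dip_width ** 2))
    return linear - dip

# Human-likeness from 0 (mechanoid) to 1 (fully human)
x = np.linspace(0.0, 1.0, 101)
y = uncanny_valley_likability(x)

print(y[60] > y[85])   # mid-range humanoid beats near-human boundary agent
print(y[100] > y[85])  # fully human beats the boundary agent too
```

The nonmonotonic curve this produces – rising, dipping sharply near the boundary, then recovering for fully human agents – is the shape the MRI signal in the ventromedial prefrontal cortex is reported to track.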
And by transferring that insight from neural coding to computer code, developers and designers might avoid producing agents – including onscreen ones – that repel rather than engage users.