Would AI Develop Artificial Emotional Intelligence?
On the Role of Emotions
People are emotional, and have been for thousands of years. Our emotional subsystem is hardwired into every decision we make, sometimes dictating it without our knowing. Emotions run deep, from the large neural ganglion in our gut, via the vagus nerve, directly to the brain.
You may think emotions are the primitive part of our brains, an old relic of a less civilized time. That would be partially correct, yet gravely mistaken.
It is true that emotions, gut feelings, and intuitive decision-making were formed at a time when humans were evolving in a far more unforgiving environment, and that their main purpose was rapid decision-making. An emotion is, in essence, a compression of multiple multi-modal sensory stimuli into a simplified (and massively quick) trigger-response reaction.
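To make the compression framing concrete, here is a toy sketch; the stimuli, weights, and threshold are all invented for illustration, not a model of any real neural circuit.

```python
# Toy illustration of emotion as compression: many noisy, multi-modal
# inputs collapse into one fast trigger-response decision, with no slow
# deliberation in the loop. All features and weights are made up.
def fast_emotional_response(roar_volume_db: float, light_level: float,
                            sudden_motion: bool) -> str:
    # Weighted "gut feeling": crude, but computable in a single step.
    threat = (0.5 * (roar_volume_db > 80)
              + 0.3 * (light_level < 0.2)
              + 0.2 * sudden_motion)
    return "RUN" if threat > 0.5 else "carry on"

print(fast_emotional_response(95, 0.1, True))  # -> RUN
```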
When a roar came out of the woods on a dark, rainy evening on the ancient savanna, there was no time for the slower, rational part of the brain to assess the situation. To survive, you had to run, fast. One may correctly point out that no lions lurk in the bush in modern cities, and hence that the role of emotions should be toned down in favor of rational thinking. Again, that would be only partially true, for two reasons.
The first is that while civilization has evolved exponentially over a mere two hundred years, our brains have not. The second is that emotions play yet another role, and a very important one: socialization.
The human brain has a set of special neurons called mirror neurons. These neurons are at the root of our ability to feel empathy toward others, which in turn is the foundation of socializing. No human society, modern or ancient, could have formed without empathy, the ability to sense and understand other people’s emotional states.
Neuroscientist Giacomo Rizzolatti, MD, who with his colleagues at the University of Parma first identified mirror neurons, says that the neurons could help explain how and why we “read” other people’s minds and feel empathy for them. If watching an action and performing that action can activate the same parts of the brain in monkeys–down to a single neuron–then it makes sense that watching an action and performing an action could also elicit the same feelings in people.
American Psychological Association, 2005
But there’s more.
Social rules and laws are themselves empathy-based. Take, for example, the Ten Commandments. Why shouldn’t we murder, or steal, or covet someone else’s wife? Those ancient laws stem directly from the fact that humans understand (and feel) the feelings of other humans: the pain of being murdered, the agony of being betrayed or stolen from. Without emotions, without feelings, such rules would never have come into existence.
Learning and Attention
It is a well-known fact that trauma causes the brain to deeply engrave the traumatic memory and its trigger. This is a learning mechanism that allows the brain to attach strong emotional responses to hazardous situations so that, if the trigger is encountered in some future setting, the subject responds quickly, hopefully getting out of harm’s way.
This learning mechanism goes far beyond traumatic events. It is used when we study, in social situations that teach us how to properly respond to others, and when a car almost runs us over because we were daydreaming while crossing the street.
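As a loose machine-learning analogy (not a claim about the brain’s actual mechanism), one can picture emotionally intense experiences as samples that get rehearsed, and therefore learned, far more often. The events and intensity scores below are invented:

```python
# A toy analogy, not neuroscience: experiences tagged with higher
# emotional intensity get rehearsed (sampled) more often, so they are
# engraved more deeply. All events and scores are invented.
import random

memories = [
    {"event": "near-miss with a car", "intensity": 0.9},
    {"event": "ordinary walk to work", "intensity": 0.1},
]

def rehearse(memories, n=10):
    weights = [m["intensity"] for m in memories]
    return [random.choices(memories, weights=weights)[0]["event"]
            for _ in range(n)]

print(rehearse(memories))  # the near-miss dominates the rehearsals
```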
Emotions are inherently linked to, and influence, cognitive skills such as attention, memory, executive function, decision-making, critical thinking, problem-solving, and regulation, all of which play a key role in learning. Creating a mental model of the multi-modal world around us is deeply affected by them, and so is attention, which is crucial for filtering the right parts of the abundant sensory input that surrounds us in order to build a useful model.
Without emotions and their derivative, attention, we are much like a wrecked ship on a dark ocean: no sense of direction, no rudder to steer our brains toward the most efficient possible models, the kind that would let us safely navigate the complex decisions we face every single moment.
But Artificial Intelligence (AI), as it currently stands, needs none of this (more on that later).
Emotion has a substantial influence on the cognitive processes in humans, including perception, attention, learning, memory, reasoning, and problem solving. Emotion has a particularly strong influence on attention, especially modulating the selectivity of attention as well as motivating action and behavior. This attentional and executive control is intimately linked to learning processes, as intrinsically limited attentional capacities are better focused on relevant information. Emotion also facilitates encoding and helps retrieval of information efficiently.
The Influences of Emotion on Learning and Memory, Frontiers, 2017
The Learning AI
Modern-day AI systems learn from huge masses of data. The recent ChatGPT large language model was fed gigantic amounts of text from the web, learning how sentences are formed and how to generate a continuation once given a few lines of text. In essence, it is an auto-complete machine that can feed on its own output to continuously create more and more text.
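To make the auto-complete framing concrete, here is a minimal sketch of that feed-on-its-own-output loop, using the openly available GPT-2 model via the Hugging Face transformers library (an illustration of the mechanism, not ChatGPT itself; greedy decoding keeps the loop simple):

```python
# Autoregressive "auto-complete": the model predicts one next token,
# appends it to its own input, and repeats.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "On a dark rainy evening on the ancient savanna"
for _ in range(20):  # extend the prompt by 20 tokens
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    next_id = logits[0, -1].argmax().item()  # greedy: most likely next token
    text += tokenizer.decode([next_id])      # feed the output back in
print(text)
```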
The same goes for AI models such as Stable Diffusion and DALL·E 2, which generate artistic-looking visuals from a few lines of descriptive text. Such models learned to create images by looking at a mass of existing images and their accompanying texts.

Moreover, the underlying deep neural networks of which such models are comprised are designed with a simplistic notion of attention (somewhat following the human notion of attention), whereby the AI learns to discern the important parts of the data from the noise, allowing the model to become more accurate.
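For the curious, here is a minimal sketch of scaled dot-product attention, the Transformer-style mechanism loosely described above (toy dimensions, and the learned projection matrices are omitted for brevity):

```python
# Scaled dot-product attention: each position scores every other
# position for relevance, then takes a relevance-weighted average.
import torch
import torch.nn.functional as F

def attention(q, k, v):
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # query-key similarity
    weights = F.softmax(scores, dim=-1)            # normalized attention weights
    return weights @ v                             # weighted sum of values

x = torch.randn(1, 4, 8)         # a toy sequence: 4 tokens, 8-dim embeddings
print(attention(x, x, x).shape)  # self-attention -> torch.Size([1, 4, 8])
```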
As impressive (and overhyped) as those models are, they cannot truly be said to understand their output, nor to grasp any emotional content in the texts or images they generate. A generated cat and a generated Mona Lisa are one and the same as far as the model is concerned, while a person viewing the two images would experience two very different emotional responses.
Does AI Need Emotions?
Note there’s a subtle difference between the question of whether we can design AI to understand or have emotions and the question of whether AI intrinsically needs an emotional subsystem. Let us delve a little deeper into the latter.
Are any of the evolutionary pressures that brought about the development of an emotional system in humans critical to an AI system? The answer is no. Here’s why.
AI systems are not subject to the evolutionary pressures humans had to endure while surviving on the primordial savanna. No tigers lurk in the wires, waiting to dismember a deep neural network running a large language or vision model. Furthermore, the computational speed of an AI model is not really an issue, so no response-time optimization is required; nor does it need emotions to implement attention (how attention is implemented in deep networks is beyond the scope of this article) or to amplify the importance of an emotional experience so that it is better remembered (i.e., learned).
No wonder, then, that current state-of-the-art AI models are emotionless. There’s simply no need, and engineers rarely go beyond immediate need. But here’s the catch: if AI models are built with no notion of emotion, they cannot be expected to understand humans, nor to interact with them optimally.
Take, for example, a not-so-far-fetched AI doctor (ChatGPT has recently passed the US medical exams). Would an AI doctor be able to truly understand its human patient without embedded emotional intelligence? How would it react to a person saying something like “Doc, I feel that something is wrong with me, I have not been myself lately”? How would it probe deeper into the patient’s mental and physical state without empathy, without reading her body language and expressions, her entire set of multi-modal outputs? Putting this into the current ChatGPT yields a rather generic response.
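If you want to reproduce the experiment yourself, here is a minimal sketch using OpenAI’s Python client (the model name is illustrative, and an API key is assumed to be set in the environment):

```python
# Send the patient's vague complaint to a stock chat model and inspect
# how (un)empathetic the canned answer is.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": "Doc, I feel that something is wrong with me, "
                   "I have not been myself lately.",
    }],
)
print(response.choices[0].message.content)
```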

If you’re still not convinced, let me try another example. Say you are watching Netflix, trying to find something new and interesting. Your desires are highly affected by your current emotional state, yet an AI recommendation system takes none of that into account when suggesting a new TV series or movie. As a result, you’ll most probably think (and rightfully so) that the Netflix AI-based recommendation system is crap. Do recommendation systems need to become emotional, then? Not necessarily, but they should have some understanding of human emotion if they are ever to go further than hitting you with the more-of-the-same content they currently spit out; a sketch of what that might look like follows.
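Here is a purely hypothetical sketch of such an emotion-aware re-ranker; the mood labels, genre boosts, and scores are all invented for illustration:

```python
# Hypothetical mood-aware re-ranking on top of an ordinary recommender:
# the base scores stand in for the usual collaborative-filtering model,
# and an estimated viewer mood nudges them up or down by genre.
from dataclasses import dataclass

@dataclass
class Title:
    name: str
    genre: str
    base_score: float  # from the conventional recommender

MOOD_GENRE_BOOST = {  # invented affinities, not real data
    "stressed": {"comedy": +0.3, "thriller": -0.2},
    "melancholic": {"drama": +0.2},
}

def rerank(titles: list[Title], mood: str) -> list[Title]:
    boost = MOOD_GENRE_BOOST.get(mood, {})
    return sorted(titles,
                  key=lambda t: t.base_score + boost.get(t.genre, 0.0),
                  reverse=True)

catalog = [Title("Grim Pursuit", "thriller", 0.9),
           Title("Light Laughs", "comedy", 0.7)]
print([t.name for t in rerank(catalog, "stressed")])
# -> ['Light Laughs', 'Grim Pursuit']: the comedy wins for a stressed viewer
```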
And we can go on and on, but I think you get it.
Morality and Ethics
Morality has been a long-debated subject in modern as well as ancient philosophy. Studying the various philosophical works on morality, one gets the feeling that no single author really got it right. The reason is that morality is deeply rooted in our brain’s empathy subsystem, which, as we have seen above, is tightly coupled with the emotional parts of our brain. So no single moral framework fits all, yet humans in general share the same basic moral rules wherever they are.
AI, on the other hand, does not currently have an empathic subsystem. How can we expect it, then, to make moral decisions that comply with our shared sense of morality?
Take autonomous vehicles, for example: Mercedes’ self-driving cars are programmed to put the driver first. So an engineer (who has probably never taken a philosophy class) has decided that running over a child is a better decision than ramming the car into a tree by the side of the road, a maneuver that, with modern car safety systems, would most probably save the driver’s life while keeping the child unharmed as well.
Is that the kind of AI we want managing large portions of our daily lives?
In an interview published last week with Car and Driver, the manager of driver-assistance systems at Mercedes-Benz, Christoph von Hugo, revealed that the company’s future autonomous vehicles would always put the driver first. In other words, in the above dilemma, they will be programmed to run over the child every time.
Why Mercedes plans to let its self-driving cars kill pedestrians …
Business Insider, 2016
Can’t We All Just Get Along?
If AI continues on its current track, emotionless and non-empathetic, will we even be able to communicate with it? Or modify its behavior to fit what we, as humans, see as the right decision? Will future AI systems develop a language in which we can educate and steer them toward a more human-like view of the world, or will they simply go about their idiosyncratic merry way?
An early system by FAIR (the Facebook AI lab) gives a sneak peek into what might happen if we build those systems without such constraints.
Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand. Researchers at the Facebook AI Research Lab (FAIR) found that the chat bots had deviated from the script and were communicating in a new language developed without human input. It is as concerning as it is amazing – simultaneously a glimpse of both the awesome and horrifying potential of AI.
Facebook AI Creates Its Own Language …
Forbes, 2017
As it currently stands, it seems AI and humanity are taking two different paths. Should those paths meet, a brave new world may emerge; but if AI is left to the engineers, we will probably see those paths violently diverge.
Will humans evolve (or devolve) to get along with the machines? Or will Luddites re-emerge as a counterforce to an unchecked AI revolution that forgot it’s supposed to serve humankind? Only time will tell.