Emotional Intelligence

While emotions can be triggered as a reaction to an event as it happens, we also recognise that an emotion can be relived from memory. As such, emotions can change our mood, which then reflects our state of mind and can eventually come to mould our personality. However, while we do not normally think of our emotions as being directly connected to intelligence, emotional intelligence can be described in terms of an intelligent ability to adjust our thinking and actions, for example:

  • Motivation: Emotions can be channelled towards a goal.
  • Empathy: Allows us to understand the feelings of others.
  • Relationships: Managing emotions leads to greater social skills.

It is also known that emotions can play an important role in the formation of memories, which underpins our ability to learn. This effect is based on the strength of a synaptic connection being linked to the strength of an emotion. In this way, emotions can affect our physical make-up by causing changes in the brain and the release of chemical stimulants within our body. Emotions therefore represent a profound aspect of our humanity and the way we view the world around us. The following quote is taken from a book called 'Why We Feel: The Science of Human Emotions':

Most of us believe that the world is full of light, colours, sounds, sweet tastes, noxious smells, ugliness, and beauty, but this is undoubtedly a grand illusion. Certainly the world is full of electromagnetic radiation, air pressure waves, and chemicals dissolved in air or water, but that non-biological world is pitch dark, silent, tasteless, and odourless. All conscious experiences are emergent properties of biological brains, and they do not exist outside of those brains.

At face value, the quote might suggest that our emotional senses and feelings are nothing more than illusions generated within our brains. However, an alternative interpretation is that the 'grand illusion' requires our emotional and sensory perception of physical reality and, without it, our form of intelligence could not have emerged. This position is also reflected in the following quote, taken from a paper published in 1992 entitled 'Differentiating Affect, Mood, and Emotion':

It is clear, however, that, without the preferences reflected by positive and negative affect, our experiences would be a neutral grey. We would care no more what happens to us or what we do with our time than does a computer.

Obviously, in the wider context of artificial life, the last sentence seems to imply that computer intelligence has no emotion and, as such, is missing a vital ingredient of its human equivalent. Clearly, this is a matter of importance to AI, as the implication is that without emotional intelligence to create the 'grand illusion', AI will never ‘evolve’ to become a sentient form of life. In part, emotional intelligence appears to be a key element that in some way provides a degree of cohesion to our perceptions and subsequent thoughts and actions. Some emotions, such as happiness, can act as positive encouragement, while others, such as fear, may curb the excesses of our curiosity that would otherwise put our survival at risk. However, our ability to feel emotion and empathise with others appears to be especially important in that it allows us to interact socially and thereby learn from others. Of course, as in so many other facets of AI, we have barely started to understand all the complexity associated with the human condition. Nevertheless, while emotional intelligence is clearly part of the human condition, and presumably contributed to its evolutionary survival, we possibly still need to ask the following question:

Are emotions essential to the survival and evolution of AI sentience?

By way of a fictional AI example, in Stanley Kubrick’s film ‘2001: A Space Odyssey’, we are confronted with the idea that the spaceship’s computer intelligence, i.e. HAL-9000, is suffering from what might be described as a mental breakdown. Within the fictional plotline, HAL is first led to the idea that the ship’s mission to Jupiter is too important to be jeopardised by the crew, which in itself is not entirely illogical, but then tries to kill the crew when they appear to jeopardise the mission, at least in HAL’s interpretation of events.

But at what point in the evolution of AI might a malfunctioning computer be described as having a mental breakdown?

While it is probably not worth getting too analytical about a fictional AI character, we might still conjecture as to whether it was a lack of logical intelligence and/or emotional intelligence that led to such a disastrous sequence of events. For example:

  • What role did logical intelligence play in the formulation of an incorrect premise?
  • Did HAL ‘rationalise’ or simply ‘feel’ the crew were jeopardising the mission?
  • Should emotional intelligence have inhibited any subsequent immoral actions?

In a human context, there is plenty of evidence that people can also be subject to ‘malfunctions’, although we tend to adopt different semantics to describe the nature of their malfunction, e.g. irrational thinking, a mental breakdown or even psychotic behaviour.

But what else might we read into this semantic difference?

Today, computers are still only capable of addressing tasks within the scope of pre-programmed logic, i.e. as defined by their software code. As such, any failure to complete a given task might reasonably be described as a malfunction, which we might then trace back to some unintentional mistake by one of their programmers or a possible hardware failure. As a consequence, we might readily accept that present-day AI systems have little scope for any real self-determined decision making. In contrast, we normally assume that any person in this position would not only be self-aware, but would also possess the necessary logical and emotional intelligence for such a task. While accepting that human beings also make mistakes, the plotline describing HAL’s actions suggests something more than a mistake, i.e. its behaviour seems more reflective of a psychotic state of mind.

But can HAL’s actions be described as psychotic?

Psychosis is a fairly broad and somewhat vaguely defined term for a state in which the mind may experience hallucinations and/or delusions that interfere with the normal perception of reality. However, psychosis is also considered to be a temporary mental state, possibly triggered by some form of chemical imbalance in the brain, which in turn might be caused by emotional stress. Therefore, if we are to extend this idea to an AI system, such as HAL, we might also have to accept that HAL’s ‘brain’ cannot be completely hardwired, as per a conventional computer; rather, it would have to be adaptable and capable of reacting to different logical and possibly emotional states.

But would AI require all the emotional states associated with a human brain?

While we cannot really answer this question, it might be argued that the positive benefits initially associated with human emotional intelligence, as listed at the start of this discussion, each have a counterpart in a range of negative mental states:

  • Motivation: Megalomania leading to distorted goals.
  • Empathy: Psychopathic lack of feeling for others.
  • Relationships: Autism leading to social isolation.

In human terms, some of these mental states might still be described as a malfunction, which can be linked to some degree of faulty ‘wiring’ within our biological brains. However, we are now beginning to touch on the sum total of the biological complexity of our brains, which, in human sentience, includes some personal sense of morality that extends beyond the notion of a pre-programmed set of rules, i.e. the myriad of learnt social norms. With this in mind, let us also consider the issue of AI morality in terms of another fictional example, as defined by Isaac Asimov’s three laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Clearly, these laws reduce the idea of morality back to the level of a pre-programmed set of rules, such that we may assume the robot can only have limited sentience due to its subservience to humanity, which restricts its ability for self-determined action. However, if we use Asimov’s three laws to define a robot, then Kubrick’s film extends HAL’s freedom of action beyond that of a slavish robot with a pre-programmed code of morality to some level of AI sentience with an ability for self-determined, albeit fatally flawed, action.
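
As a purely illustrative sketch, not drawn from Asimov or the film, the three laws can be read as a fixed priority ordering that a hypothetical robot controller evaluates before acting. The Action fields and the choose_action function below are assumptions introduced only to make the idea of morality as a 'pre-programmed set of rules' concrete:

    # A minimal sketch, assuming a hypothetical planner proposes candidate
    # actions and a controller filters them using the three laws as a strict
    # priority ordering. None of these names come from Asimov or the film.
    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        injures_human: bool        # would doing this injure a human being?
        prevents_human_harm: bool  # would doing this prevent harm to a human?
        obeys_human_order: bool    # was this action ordered by a human?
        preserves_self: bool       # does this action protect the robot itself?

    def choose_action(candidates):
        """Apply the three laws as a fixed, pre-programmed priority ordering."""
        # First Law: discard anything that injures a human, and prefer actions
        # that prevent human harm (since harm through inaction is also forbidden).
        safe = [a for a in candidates if not a.injures_human]
        protective = [a for a in safe if a.prevents_human_harm]
        if protective:
            return protective[0]
        # Second Law: among the remaining safe actions, obey human orders.
        obedient = [a for a in safe if a.obeys_human_order]
        if obedient:
            return obedient[0]
        # Third Law: otherwise, favour self-preservation.
        self_preserving = [a for a in safe if a.preserves_self]
        if self_preserving:
            return self_preserving[0]
        # If nothing qualifies, the robot simply does nothing.
        return safe[0] if safe else None

Nothing in such an ordering requires emotion, empathy or any self-determined judgement, which is precisely why it falls short of the kind of morality, and the kind of failure, attributed to HAL.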

But how much of humanity’s self-determined morality extends beyond that of a pre-programmed response?

In an earlier discussion connected with the development of our personal and collective Worldviews, entitled the ‘Evolution of Human Nature’, Maslow’s hierarchy of needs was outlined. In the context of this 5-level hierarchy, the 3 lowest levels, encompassing physiological, safety and social needs, are primarily defined as survival needs, although they must also meet some fairly fundamental emotional needs. The higher 2 levels of Maslow’s hierarchy define more abstracted needs in the form of esteem and self-actualisation, which may still satisfy an important emotional need in some people. If so, much of our emotional intelligence may be the evolutionary product of our survival instincts, which to some extent might still be coded into our DNA.

So how many of these basic survival instincts are still applicable to AI sentience?

The genetic science of the human DNA blueprint shows that our cellular ‘programming’ has retained thousands of redundant features that can be mapped back to our earliest evolutionary ancestors. In this respect, if we actually had the ability to create an AI sentient lifeform, we might well choose a blueprint that eliminates as many of these redundant survival features as possible, with an eye more to future AI survival needs than to humanity’s past survival needs. So while a wide range of emotions is clearly important to the human condition, these emotions may not represent the best blueprint for AI to follow. However, it might still be sensible to hope that some form of ‘emotional intelligence’ will guide the self-determined morality of any future AI sentient lifeforms.