2020: Is this the Year of Empathetic Artificial Intelligence?


If AI can be effectively taught to be empathetic, astronauts could receive mental health support on long missions, a possible future of space travel.

Michelle Kim, General Editor

     At a conference at Dartmouth College in 1956, the term “artificial intelligence” was first coined. Now, more than sixty years later, artificial intelligence (AI) is on the cusp of becoming emotionally intelligent, with the capability to empathize with astronauts on Mars. Artificial intelligence has had a tumultuous history over the past century; nonetheless, despite the controversy over the extent to which AI can mimic humans, this emerging technology has the potential to better our lives and shape the future.

     One such emerging technology is the artificially intelligent robot designed to travel to space as a companion to astronauts. In December of 2019, the European aerospace company Airbus and IBM collaborated to launch CIMON 2, powered by IBM’s Watson, from Cape Canaveral Air Force Station in Florida. CIMON 2 is not just an astronaut assistant but also an “empathetic conversational partner,” according to IBM representatives. It seems, then, that scientists and engineers are moving toward machines that can not only mimic human emotions but also respond and react to human needs.

The CIMON 2 (Crew Interactive Mobile Companion) robot is a companion for astronauts on the International Space Station.

More recently, in January of 2020, NASA’s Jet Propulsion Laboratory (JPL) and the Australian tech firm Akin announced that they are developing an AI that could provide emotional support for astronauts on deep-space missions. The collaboration is specifically geared toward space travel because astronauts are at risk for mental health problems: they spend months or even years confined in a vessel where access to mental health services is nonexistent. Current robots, according to Tom Soderstrom, CTO at NASA’s JPL, are limited by their lack of emotional intelligence.

The partnership builds on JPL’s Open Source Rover project to develop Akin’s emotionally intelligent AI. The resulting creation, called Henry the Helper, is currently in use at JPL facilities, where it converses with employees and site visitors. The Akin AI differs from current AI in that it can both interact with humans and recognize human emotions. Akin CEO Liesl Yearsley clarified that the Akin AI is not meant for simple purposes such as setting reminders, but rather for providing emotional support services.

Henry the Helper uses deep learning to identify patterns in human speech and facial expressions as they relate to emotional intent. This information then prompts the AI to respond to emotional cues in a fitting, empathetic manner. JPL and Akin plan to release two more prototypes in 2020: Eva the Explorer and Anna the Assistant. The creators hope Eva will be a more autonomous version of Henry; they plan to outfit her with more sensors so she can identify subtle speech and facial expressions during higher-level conversations. Anna, on the other hand, will be closer to traditional AI: an autonomous lab assistant that anticipates the needs of JPL employees.
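The overall loop described above, detecting an emotional cue and then choosing a fitting reply, can be sketched in very simplified form. To be clear, Akin’s actual system relies on deep learning over speech and facial-expression data; the keyword matching, cue labels, and canned responses below are purely hypothetical placeholders meant to illustrate the cue-to-response flow, not the real model.

```python
# Toy sketch of an emotion-aware response loop. Real systems like
# Henry the Helper use deep learning; this version uses simple keyword
# matching, and every label and reply here is a hypothetical example.

CUE_KEYWORDS = {
    "stressed": ["overwhelmed", "deadline", "too much"],
    "lonely": ["alone", "miss", "isolated"],
    "content": ["great", "happy", "excited"],
}

RESPONSES = {
    "stressed": "That sounds like a lot. Want to talk through it?",
    "lonely": "I'm here with you. Tell me more about how you're feeling.",
    "content": "That's wonderful to hear!",
    "neutral": "I'm listening. Go on.",
}

def classify_cue(utterance: str) -> str:
    """Return the first emotion label whose keywords appear in the text."""
    text = utterance.lower()
    for label, keywords in CUE_KEYWORDS.items():
        if any(word in text for word in keywords):
            return label
    return "neutral"

def respond(utterance: str) -> str:
    """Map the detected emotional cue to a fitting reply."""
    return RESPONSES[classify_cue(utterance)]

print(respond("I feel so alone out here."))
```

A deep-learning system replaces the hand-written keyword table with learned features, but the shape of the loop, classify the cue and then select a response, is the same.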

However, for future versions of Henry the Helper to be truly usable in space, an isolated environment, the robots’ systems will have to rely on edge computing: rather than depending on large, distant centers of data and computation, the robot reduces its energy footprint by relying on local storage and caching. Another obstacle, according to psychologist Lisa Feldman Barrett of Northeastern University, is that most tech firms train AI to recognize human emotions by inferring them from physical movements. This method is flawed because the AI does not truly understand psychological meaning. Nevertheless, testing emotionally intelligent AI in space presents a perfect opportunity to improve the technology, since the AI can provide emotional support to a few astronauts in a closed environment.
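The edge-computing idea above, answering locally whenever possible instead of making an expensive round-trip to a distant data center, can be sketched with a simple onboard cache. The function names and the simulated “ground station” lookup below are hypothetical stand-ins for illustration, not any real spacecraft API.

```python
import functools
import time

def query_ground_station(question: str) -> str:
    """Hypothetical stand-in for a query that would normally require a
    round-trip to a ground-based data center (minutes of light delay
    from deep space). Edge computing aims to avoid this call."""
    time.sleep(0.01)  # simulate network latency
    return f"answer to: {question}"

@functools.lru_cache(maxsize=256)
def answer_locally(question: str) -> str:
    """Serve repeated questions from the onboard cache; only novel
    questions fall through to the expensive remote lookup."""
    return query_ground_station(question)

answer_locally("How do I recalibrate the sensor?")  # remote lookup
answer_locally("How do I recalibrate the sensor?")  # served from cache
print(answer_locally.cache_info())
```

The second identical question never leaves the vessel: Python’s standard `functools.lru_cache` stores the first answer locally, which is the same keep-it-local principle, at toy scale, that edge computing applies to a robot’s storage and computation.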


Artificial intelligence is a subject of high interest and relevance at the Bergen County Academies, where many students participate in related research projects and take classes on the subject. When students were asked what they thought of empathetic AI, both generally and in the context of space travel, they largely agreed that it was a great idea as a space travel companion, but saw potential risks in its broader future.

Miray Samuel, an AMST junior, stated: “I think that by producing emotionally intelligent AI you risk essentially replacing what makes people human. It blurs the line between humans and technology, which is risky. But I do see that AI is beneficial for new therapy methods.” This sentiment was also shared and elaborated upon by Dr. Kenny, BCA’s psychology teacher.

Dr. Kenny shared that remote psychotherapy, which allows an individual to receive therapy through video chats, has proven a “wild success.” The assumption that a therapist needs to meet face to face with his or her patient is not necessarily true; space travelers may just “simply need an ear,” someone willing to listen and respond accordingly. Dr. Kenny stressed that in the short term, if human needs could be temporarily met through video chats, this would be a “big leap forward” for therapy. In the long term, however, this technology could be potentially “dangerous and distressing.”

Laurence Lu, an AAST junior, offered a slightly different opinion regarding the implications of emotionally intelligent AI: “It may be uncomfortable having non-human things supply seemingly human-exclusive services, but if an emotionally intelligent AI is able to react to emotions in a natural or conscious way, there is no fundamental reason to have an issue with this.” Laurence does not associate emotionally intelligent AI with ethical issues; he offers a more practical view.

With regard to the role of AI in the future, Laurence stated: “AI will always be a means of automating jobs that would otherwise take humans immense time to calibrate and compute on their own. In the future, we may eventually be able to simulate social interaction entirely by artificial intelligence… replacing the core need for actual human interaction. Fears that AI might one day run the world seem unfounded. For the moment, AI conferences maintain somewhat explicit principles upon which AI should be developed.”

However, Dr. Kenny offers a different perspective on the role of AI in our future. She notes that creative depictions such as 2001: A Space Odyssey convey a negative view of technology: the notion that technology will “take over” humans, who effectively “lose control” of what they have created. Dr. Kenny also raises a possible limitation on AI’s capacity to be perfectly emotionally intelligent: the importance of touch as an integral part of empathy. Touch, especially in the context of maternal love, is essential for babies to grow, and it is awfully difficult for a machine to replicate, underscoring the significance of human-to-human interaction.

Ultimately, it is clear that AI, when developed and used correctly, can serve as a technological boon from which humanity stands to gain rather than lose. Still, we, as a society, should be wary and venture into this new era with caution.