Image generated with DALL·E 3
The connection between artificial intelligence and human intimacy can often be a controversial topic, and perhaps a subject most would expect to see solely in science fiction movies.
However, taking a step back and looking at the times we’re living in, where driverless cars, advanced 3D printing, and even weight-loss medications such as Ozempic are a reality, we come to understand that these “far-fetched” concepts are often quite workable.
AI is by far the most talked-about and controversial topic in the tech space. It carries the potential to transform a wide range of industries enormously, but there are also safety concerns, as well as the risk of making millions of people redundant in the workforce.
One thing that isn’t often talked about is the ethically and psychologically fraught use of AI in personal, intimate roles. Can a machine, built by someone else and often shaped by their biases, truly understand human emotions, mimic them, and provide comfort? Let’s take a closer look at these questions.
The development of Emotion AI has made it easier for artificial intelligence to analyze human emotions. This subset of the AI field, also referred to as affective computing, examines and reacts to human emotions through a combination of natural language processing, sentiment analysis, facial movement AI, voice emotion AI, gait analysis, and physiological signaling.
Not all of this is brand new — some of the underlying tech and approaches have been in use for years now. Sentiment analysis, for example, is a popular method used by marketing companies, who use it to study customer behavior and provide recommendations, as well as in the finance sector, where it is used to try and predict changes in the stock market.
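At its simplest, sentiment analysis assigns a polarity score to text based on the words it contains. As a toy illustration of the lexicon-based approach (the word list and weights below are invented for the example; production systems use large curated lexicons or trained models):

```python
# Minimal lexicon-based sentiment scorer.
# The lexicon and weights are invented for illustration only.
LEXICON = {
    "great": 1.0, "love": 1.0, "happy": 0.8, "good": 0.5,
    "terrible": -1.0, "hate": -1.0, "sad": -0.8, "bad": -0.5,
}

def sentiment_score(text: str) -> float:
    """Average the weights of known words; 0.0 if no words match."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("I love this product, it is great"))  # positive score
print(sentiment_score("Terrible service, I hate waiting"))  # negative score
```

A marketing team might run a scorer like this over thousands of product reviews and track the average score over time; the real systems differ mainly in scale and in handling negation, sarcasm, and context.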
Emotion AI takes this tool and focuses it solely on one subject: a human being. The end goal is widespread adoption in therapeutic settings, with the potential to screen for depression and other mental health disorders at scale, while remaining far more accessible than traditional therapy.
Because of these possibilities, it is predicted that the Emotion AI market could grow at a CAGR of 22.7% and reach $13.8 billion in size by 2032.
The advent of AI significant others is a recent trend that has received a mixed response. It’s a more intimate, personal vector of the already burgeoning AI ethics debate, and experts in the field of psychology are worried.
It’s easy to see why: although it may offer temporary benefits, this use case also holds the potential to further alienate people from their communities and society at large.
More forward-thinking experts have also posed the question of how the widespread adoption of “AI partners” could negatively impact everything from the modes of social interaction in newer generations, all the way to birth rates.
Human interactions are already decreasing in frequency. Many people work from home five days a week, and organizations such as banks (though they’re far from the only ones) rely on chatbots to deliver customer service. A person can now take out a property loan or mortgage without ever speaking to anyone in person, and that’s the present day, not a prediction.
Amplified to include people’s personal and emotional space, this trend could have a significant impact on mental health and on a person’s grasp of real-world social environments going forward.
However, it would not be fair and impartial to only discuss the potential negatives of any emerging technology. The ability of AI systems to understand and even interact with people on a more emotional level is not inherently negative.
AI companions can have a positive impact on people who may feel lonely or isolated, providing regular social interactions, and creating a safe, private environment where a person can share their thoughts and feelings.
AI partners are available at any time of day and provide emotional support without judgment, something people may fear when discussing sensitive subjects with another human.
The ability to customize these companions can make them easy to converse with in a friendly, understanding setting. Potentially, AI partners could also serve as a learning tool of sorts, showing lonely and isolated people what level of empathy, understanding, and behavior an intimate relationship requires.
For people who may suffer from anxiety or may not have the time to maintain regular human relationships, AI companions could be a perfect stopgap measure for regular social interaction, potentially making the road back to a healthy social life easier.
The same technological foundations that allow for AI partners can also be applied to the field of mental health. Recently, we’ve seen the rise of AI therapists, mental health coaches, and counselors.
The unfortunate fact is that while we’ve witnessed great strides regarding mental health, they are largely limited to discourse: getting therapy isn’t nearly as stigmatized as it was even a decade ago, the topic is part of an ongoing societal conversation, and it has even become a public policy priority.
However, things aren’t going nearly as well on the ground, with therapy remaining inaccessible to many people due to cost or time constraints.
By leveraging affective computing, AI can analyze emotional states and patient responses, identifying signs of depression or providing relevant advice.
With the abilities of LLMs to converse just like humans only increasing, and the ability to evolve and learn from past conversations and analyze vast quantities of data quickly, AI can provide around-the-clock support, advice, and care that is fully customized to the individual patient.
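At the simplest level, “learning from past conversations” can mean keeping a running history per patient and using it to tailor later replies. The sketch below is entirely hypothetical (the class, keywords, and canned replies are invented for illustration and do not reflect any real product):

```python
# Toy support-chat sketch: keeps per-user history so later replies can
# reference earlier sessions. All names and replies are invented.
from collections import defaultdict

KEYWORD_REPLIES = {
    "anxious": "It sounds like anxiety is weighing on you. What triggered it today?",
    "sad": "I'm sorry you're feeling down. Do you want to talk about it?",
}
DEFAULT_REPLY = "Tell me more about how you're feeling."

class SupportChat:
    def __init__(self):
        self.history = defaultdict(list)  # user_id -> list of past messages

    def respond(self, user_id: str, message: str) -> str:
        self.history[user_id].append(message)
        reply = DEFAULT_REPLY
        for keyword, canned in KEYWORD_REPLIES.items():
            if keyword in message.lower():
                reply = canned
                break
        # "Memory": note when a theme recurs across the stored history.
        mentions = sum("anxious" in m.lower() for m in self.history[user_id])
        if mentions > 1:
            reply += " You've mentioned feeling anxious before; has anything changed?"
        return reply

chat = SupportChat()
chat.respond("alice", "I felt anxious at work today")
print(chat.respond("alice", "Still anxious this evening"))
```

A real LLM-based system replaces the keyword table with a language model and the recurrence check with retrieval over stored conversations, but the structure, per-user state feeding into each new response, is the same idea.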
AI Therapy In Practice
As a recent phenomenon, AI therapy remains under intense scrutiny so that its efficacy can be ascertained. Human therapists still have a higher rate of success, and the technology still isn’t at the point where it can provide sufficiently accurate diagnoses.
AI therapy can still be beneficial for individuals who are struggling with mild mental health issues like anxiety or stress and require easily accessible support. However, as soon as we move into tougher mental health challenges, such as PTSD, bipolar disorder, BPD, schizophrenia, or even moderate to severe cases of depression, the use of AI is not recommended.
Put simply, we’re still far away technologically from AI prescribing medication, and even when we reach that point, it will remain a heavily polarizing ethical issue. Privacy is also a concern: after several highly publicized data leaks, how AI models that handle sensitive patient information will maintain HIPAA compliance remains an open question.
Experts remain optimistic about the possibilities of AI therapy, but it is extremely unlikely to ever replace human therapists. Instead, the technology can supplement and support mental health care, giving patients an outlet to discuss their issues 24/7.
Emotion AI and the algorithms that power AI therapy software are advancing at a rapid rate, with the capabilities of the technology improving year-on-year. This has resulted in the growing popularity of using AI in an intimate capacity, but that is not to say the technology is without limitations.
AI companions can provide people with regular social interaction and provide a level of entertainment. This can be ideal for people who lead a busy lifestyle or perhaps suffer from social anxiety. However, it can also result in people disconnecting from the real world and missing out on daily human interaction.
Meanwhile, AI therapists can play a key role in supplementing human-to-human therapy, but they are limited in how accurately they can diagnose mental health issues and can miss non-verbal signals. For now, AI therapists are best suited to supporting mild mental health issues, giving individuals a way to talk about their problems 24/7.
Nahla Davies is a software developer and tech writer. Before devoting her work full time to technical writing, she managed—among other intriguing things—to serve as a lead programmer at an Inc. 5,000 experiential branding organization whose clients include Samsung, Time Warner, Netflix, and Sony.