THE COLLEGE HILL INDEPENDENT


And How Does That Make You Feel?

On AI and Mental Health Care

by Joseph Frankel

Illustration by Lynnette Munoz

published March 18, 2016


The word “therapy” might evoke some archaic characters and images: the patient lying on a leather couch, the older male therapist brandishing a pipe, a beard, and bookshelves full of decades of psychoanalytic theory. One person speaks. The other mostly listens. Though the pipe, the couch, and the theory have (mostly) fallen by the wayside, the presence of two people has been a constant in psychotherapy and mental health care.

But things change. And while the rise of cell phones and Skype has bred moments of controversy in mental health care, the introduction of technology into psychological and psychiatric care seems a logical, if not necessary, step. The use of telecommunications in therapy has become a question of how, not why. Last fall, as reported by the MIT Technology Review, Google Life Sciences hired Thomas Insel, the former head of the National Institute of Mental Health, to help bring data analytics to bear on psychiatric diagnosis. But the development of MultiSense, a computational tool from the University of Southern California and the Defense Advanced Research Projects Agency (DARPA), has introduced another question: what place does artificial intelligence have in mental health care?

 

+++

 

To read a person’s expressions and gestures is the closest we get to mind reading. We expect faces to convey important information. A raised eyebrow, a smile, and a nose wrinkled in disgust are just a few of the many signs we take in without a thought. Someone looks away or smiles half-heartedly, and through some mix of cultural conditioning and intuition, we understand something they’re not saying. 

USC’s Institute for Creative Technologies aims to build a computerized tool that can recognize these signs as well as, and maybe one day better than, humans can. These signs can be abstracted into rules and “taught” to computers through machine learning. Developed at USC, the MultiSense system evaluates users’ physical movements and vocal tics for indicators of mental disorders. The product looks like something from The Terminator: a camera screen maps red lines and green meshes over parts of the face, while meters and gauges that look like the dashboard of a car measure “Gaze Attention” and “Body Movement.” In a demo video, an interviewee leans backwards and bright green block letters reading “LEAN BACKWARDS” flash in response.
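To get a rough sense of what “teaching” a computer such rules can look like, here is a minimal, purely illustrative Python sketch: it trains a toy logistic-regression classifier on two invented nonverbal features (the fraction of an interview spent looking down, and average smile intensity). The feature names, numbers, and model are hypothetical and are not USC’s MultiSense pipeline; they only show the general shape of turning tracked behaviors into a screening score.

# Purely illustrative sketch, not the MultiSense system: the features,
# numbers, and model choice are invented for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-interview summaries a behavior tracker might produce:
# [fraction of frames with downward gaze, average smile intensity]
n = 200
screened = np.column_stack([rng.normal(0.55, 0.10, n), rng.normal(0.20, 0.10, n)])
control = np.column_stack([rng.normal(0.35, 0.10, n), rng.normal(0.50, 0.10, n)])

X = np.vstack([screened, control])
y = np.array([1] * n + [0] * n)  # 1 = group reporting symptoms in this toy setup

# A "decision support" score rather than a diagnosis: the model returns a
# probability a clinician could weigh alongside everything else they know.
model = LogisticRegression().fit(X, y)
print(model.predict_proba([[0.60, 0.15]])[:, 1])

Even in this toy form, the limit the article goes on to describe is visible: the score is only as trustworthy as the population the training data came from.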

For Louis-Philippe Morency, a principal researcher on the project, one of the primary goals for MultiSense was to recognize patterns of nonverbal behavior that are common among people experiencing depression and PTSD. The group published a study in 2013 in which participants with depression, anxiety, and PTSD smiled with significantly less intensity than participants who were not experiencing these conditions.

The MultiSense system was integrated into another USC project: SimSensei. The program presents the interviewee with a virtual counselor named Ellie. An avatar of a woman with an even speaking voice and a beige cardigan, Ellie sits in a virtual armchair and goes through the steps of a clinical interview. All the while, she monitors the patient’s reactions and modifies her responses based on the physical cues they give her. In a demo video clip published on the Atlantic’s website, Ellie asks a young man to remember the last time he felt “really happy.”

“I don’t know,” he says, after a brief pause. He looks away from the camera and several patches on the screen begin to flicker as he turns his head to the side. The screen—this part isn’t visible to the interviewee—reads: “Low Gaze Attention is detected…”

Colors change and markers go off as the man becomes distressed.

“I noticed you were hesitant on that one,” Ellie says. “Would you say you’re generally happy?”

“I’m generally happy,” the words stumble out arrhythmically, “but there are…things, just keeping me down.”

The demo ends there. 

 

+++

 

Appearances aside, Ellie is not a therapist. Both in a lecture at Carnegie Mellon and in an interview with the Atlantic, Morency stresses that SimSensei and MultiSense are “decision support tools” and not “decision making tools.” Instead, the system is meant as a “scalable” aid to help therapists and clinicians decide what to do. This nuance is lost in some headlines, which range from “Would you Want a Computerized Psychologist?” to “Troubled Soldiers: Meet your New Simulated Therapist.” In an email interview with the Independent, Morency wrote that he sees a future in which SimSensei can provide objective measures of mental illness, comparable to a blood test. But it seems those objective measures would have to be developed before SimSensei could be applied clinically.

“In the short term,” he sees it as an aid to, not a substitute for, a clinician. He then laughs and adds that he “would not want a computer to make that decision,” referring to the judgment of whether or not a person is depressed.

 

+++

 

What Ellie does is monitor interviewees for changes in nonverbal behaviors and speech patterns that Morency’s group believes to be more common in people with mental disorders. These descriptors (e.g., looking downward instead of straight ahead) were identified by looking at which behaviors were more common in participants who had been diagnosed with depression and PTSD. But according to the paper, these participants were recruited through a Craigslist ad, with no mention of demographic information. And though this research is in its early stages, it points to a challenge in finding automatic descriptors for conditions that might look different depending on age, gender, or culture. What physical signs, if any, can fit all kinds of people? When it comes to how MultiSense handles the effects of culture, the USC group has taken steps in that direction.

In a 2014 study, Morency’s group examined the effect of gender on nonverbal expressions in people with PTSD and depression; they found that men who are depressed frown more than men who are not, while women who are depressed frown less than women who are not. The same study acknowledges the importance of the cultural knowledge human clinicians use to judge behavior in context: “different cultures showcase different baselines of ‘normal’ behavior, and genders follow different social norms of ‘accepted behaviors.’” As Morency said in a 2015 podcast on Spreaker, “a lot more research is needed.”

Assuming that Ellie is a relevant aid for the population of interviewees in which she’s been tested (participants seem to have been recruited through Craigslist with little accounting for demographic diversity), what could that mean for everyone else? Recent studies argue both that symptoms of mental health disorders are expressed differently across cultures and that clinician bias can affect diagnosis. When presented with patients of a different ethnic background than their own, human providers may be more likely to misread symptoms. As a result, Black men in the U.S. are more often misdiagnosed when it comes to depression, according to Jennifer Shepard Payne’s 2012 study in the Journal of the Society for Social Work and Research. Another study, in Culture, Medicine, and Psychiatry (Adeponle et al., 2015), outlines the imprecise diagnosis of depression among refugee populations in Canada.

It’s heartening to think what good a tool like SimSensei could do if refined with data from populations of different ages, genders, ethnicities, and races. It might even have the potential to correct for human bias in diagnosing different groups of people. But it’s troubling to think of what might happen if measures developed from data limited to one population were taken as objective. 

The problems in precisely mapping behavior onto psychological states might seem to raise questions of accuracy, not ethics. But they’re very much tied to questions of ethics. What if an employer could set up a camera that scanned the faces of job candidates? What about college admissions interviews? The military? The line between diagnosis and discrimination could easily blur. It’s not hard to imagine the implications of a technology like this spinning out of control if it’s given the authority Morency explicitly states it should not have.

But it’s important to keep in mind that the application of digital technology to mental health care is still in its infancy. Especially for a layperson, it can be tempting to let speculation run wild. But if this system works, it could help a lot of people. The mental health care system currently faces many problems: shortages of providers throughout the U.S., limited affordability, and social barriers to accessing care are just a few items on a long list. If a system like MultiSense achieves its goals, it could help point many people in distress toward the care they need.

Ideas about mental health care have changed a lot in the past century—mostly for the better. Hopefully that’s a trend that will continue, though it’s hard to say where technology will lead. But whatever happens, it’s hard to imagine humans will go completely out of style. 

 

JOSEPH FRANKEL B’16 has bred moments of controversy in mental health care.