Could smartphones become the new therapists?

Dr. Skorburg holds a doctorate in philosophy from the University of Oregon. His research is in applied ethics, moral psychology, and the philosophy of cognitive science. Photo courtesy of the University of Oregon

According to Statista, around 2.5 billion people worldwide were projected to use smartphones in 2019, a number expected to keep rising in the coming years. We’ve heard plenty about how our smartphones cause us more harm than good, even stories of phones catching fire in their users’ hands.
But what if our smartphones could detect a crisis just from how we use them? What if our phones could help detect and prevent suicide? Dr. Gus Skorburg, a postdoctoral researcher at Duke University, spoke with students and faculty at WCU about this possibility and some of the ethical problems it entails.

Hosted by the Department of Philosophy and Religion, Skorburg spoke Wednesday, April 10, to a group of around 30 students and a few faculty members at a Degree Plus event.

According to the National Alliance on Mental Illness, around 18.5 percent of adults in the United States experience mental illness in a given year.

According to Skorburg, very few of them ever receive mental health treatment, especially those in low-income areas. On college campuses, depression is the leading mental health issue, and when the American College Health Association surveyed 90,000 students at 108 schools, a third of respondents reported being too depressed to function.

The Association for University and College Counseling Center Directors conducted a survey of counseling directors at 380 institutions, including Western Carolina University. The survey found that the number of students on the waitlist for counseling almost doubled, from 35 to 62, between 2010 and 2012. It also found that some students can wait up to four weeks before seeing a counselor.

Recently, a team of researchers at Northwestern University developed a way for smartphones to help combat these long wait times and aid those struggling with mental illness. The app they developed can detect signs of depression through the phone’s sensors.

“Many college students carry around smartphones and these smartphones have all kinds of sensors on them that can send relevant information about mental health,” said Skorburg.

These sensors can track how often someone uses their phone, how much they move during the day and more. After conducting the study, the researchers found that people suffering from depression tended to move around less, use their phones for longer periods of time and have fewer conversations, said Skorburg.

According to Unilad, the app, called Purple Robot, had an 87 percent accuracy rate at detecting depression in smartphone users.

Skorburg then proposed that this data could be used to help those struggling with mental illnesses in real time, especially on nights and weekends when services are not readily available.

In addition to depression, these sensors can also detect signs of bipolar disorder, Alzheimer’s disease, early Parkinson’s disease and schizophrenia.

Skorburg explained that the idea behind this technology is to help people in rural areas access care they otherwise would not be able to get.

“These tools, if used well, can help us reach populations that we couldn’t reach before,” he said.

Skorburg explained that artificial intelligence can discover patterns that humans are unable to pick up on and that it can predict suicide attempts with an 85 percent accuracy rate.

One social media platform that has built such a system is Facebook. Facebook’s algorithm scans posts, conversations in Messenger, comments and more to flag users who may be in imminent danger. The artificial intelligence then sends an alert to a Facebook staffer, who decides whether or not to call the police. In an article published by NPR, Facebook’s global head of safety, Antigone Davis, explained that in 2018 Facebook received over 3,500 reports, which averages out to around 10 wellness checks a day.

In addition to Facebook, the Crisis Text Line is a text-based service that can be contacted at any time for help with suicide prevention, with the promise that someone will respond within five minutes.

“They basically have an AI that analyzes all the text messages that come in,” explained Skorburg.

This artificial intelligence then decides which texts are more serious than others and responds in order of importance.

Interestingly, the artificial intelligence has identified more than 10,000 words that are more predictive of suicide risk than the word “suicide” itself. For example, the word “military” is twice as predictive of suicide risk as the word “suicide,” said Skorburg.

Though it appears that artificial intelligence can predict suicide attempts, problems arise when discussing the ethics of such practices.

Skorburg went on to discuss how these social media-based methods of suicide prevention lack many of the safeguards that exist in the medical arena. For example, clinicians are expected not to share the information patients provide with anyone else. Social media and text services, however, are not held to the same standard of privacy.

Recently, Crisis Text Line teamed up with Facebook so that users could message the help line through the Facebook Messenger app. Yet Skorburg claims this pairing only allows more access to one’s data.

“While they might be texting a service they think is anonymous, that data is being relinked with all their data across Facebook,” said Skorburg.

The question of whether it is ethical for these companies to use artificial intelligence with access to so much of our data is one that grows more pressing as time goes on.

For senior WCU psychology student Stephanie Powell, ethics are most important.

“Ethics in psychology are the most important thing. It is more important to be ethical than to be significant,” said Powell. Powell was interested in the topic of artificial intelligence and mental health, but the discussion of ethics was especially intriguing for her as a psychology student.

“It’s a weird ethical line because once it’s better rooted out, I think helping people is ethical, but stealing their information isn’t. So it’s a weird line to tread, whether what’s more important is protecting your anonymity or protecting your life,” said Powell.

When a person visits the doctor, explained Skorburg, they sign a consent form, but text services don’t have to do that. Instead, in the case of the Crisis Text Line, terms and conditions are sent after the first text, and most people do not read them.

“That’s not informed consent. At best that’s implied consent,” said Skorburg.

Ultimately, smartphones and artificial intelligence can be beneficial in determining who is at risk of harming themselves and in providing much-needed help to those who otherwise may not have access to treatment. But the question remains whether collecting all this data is helpful or harmful.

Skorburg claims that Facebook does not collect follow-up data, and there is no record of how someone is doing after Facebook conducts a wellness check.

“I’d like to see more evidence of does it actually work,” said Skorburg.

For apps such as Purple Robot and services like the Crisis Text Line, which has processed over 100 million messages, it seems only time will tell the benefits and pitfalls of such artificial intelligence.