Mariano Sigman's Psychosis Algorithm Will Make Therapy a Science or a Sentencing
If you listen closely enough, you can hear fate.
Psychiatrists, psychologists, and therapists are paid by the hour to turn their clients’ ramblings into actionable advice or diagnoses. They are licensed to do so only after years of training, because intuition has its limits and they have very few tools at their disposal. There is no stethoscope that makes anxiety audible; no EKG that measures loneliness. That might sound like a poetic problem, but to Mariano Sigman it’s a practical issue. The Argentinian neuroscientist believes he can create a diagnostic tool capable of helping shrinks parse the big data coming at them in the form of words, words, and more words.
“It’s been an industrial revolution in the process of understanding another’s mind using language,” Sigman tells Inverse.
Sigman figured that he could track a patient’s mental stability by analyzing their conversational stability. If the patient’s mind is a jumble of paranoid thoughts or illogical connections, their speech will reflect that state. He took a subjective, intuitive practice and attempted to make it objective and analytical. He wrote an algorithm.
To test his work, Sigman and his team interviewed 34 youths, all of them at clinically high risk for psychosis, then ran the interview transcripts through the speech analysis program. The program predicted that five of the 34 patients would develop psychosis. Two and a half years later, each of those five subjects was displaying symptoms of psychosis. The researchers had managed to predict, with 100-percent accuracy, the onset of mental illness. But one question lingers: If you were one of those five, would you want to be told?
The word schizophrenia literally means "split mind," but that etymology oversimplifies the disorder. The possible symptoms are multitudinous, but mental health professionals focus on two in particular: auditory hallucinations and disorganized speech.
The difference between schizophrenics and non-schizophrenics, Sigman explains, “is not so much that they create these inner voices, but that we — non-schizophrenics — have the ability to recognize that we are the owners of these inner voices.” For schizophrenics, thinking is closer to dreaming, a process in which thoughts are externalized then perceived as realities.
Nearly two and a half millennia ago, when Socrates was describing his "daimonion," a sort of divine voice that spoke to him, and thoughts were often conflated with divine revelation, schizophrenic thinking may have been fairly normal. But this is the age of introspection, which the scholar Julian Jaynes, whom Sigman is fond of citing, credits with giving rise to modern consciousness. "Introspection," Sigman says, is essentially "the ability to understand that we are the creators of our own thoughts." Schizophrenics can fail to understand that. And a lack of introspection yields the next major symptom, one with which Socrates would never have been diagnosed: disorganized thought. This is why truly disorganized expressions of thought, known clinically as "word salads," are red flags for psychiatrists. This is also why Sigman wanted to build a program capable of dissecting speech.
Sigman has, in effect, constructed a city of thought. This city is made up of conceptual clusters, or neighborhoods of ideas, like Introspection, Mental Disorders, and History. Within each neighborhood you’ll find all the words that fall under the main concept. Within Introspection, for instance, you’ll find Guilt, Melancholia, and Self-Reflection, among many other reflective ideas. “Of course, the clusters are not perfect,” Sigman says. “They’re intermingled. But overall, there are neighborhoods.”
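For readers who want to see what a "city of thought" might look like under the hood, here is a minimal sketch in Python. It builds a small semantic space with latent semantic analysis, a standard technique for this kind of work though not necessarily the one Sigman's team used, and then groups words into "neighborhoods" with k-means clustering. The toy corpus, the number of dimensions, and the cluster count are all illustrative assumptions, not details from Sigman's pipeline.

```python
# Minimal sketch of a "city of thought": build a low-dimensional semantic space
# with latent semantic analysis (TF-IDF + truncated SVD), then group words into
# "neighborhoods" with k-means. The corpus, dimensionality, and cluster count
# are toy illustrative choices, not details from Sigman's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

documents = [
    "guilt and melancholia invite self reflection and introspection",
    "self reflection and guilt shape how we narrate our own thoughts",
    "hallucinations and delusions are symptoms of mental disorders",
    "psychiatrists treat mental disorders such as schizophrenia",
    "ancient history records the voices that spoke to socrates",
    "historians study ancient greece and its philosophers",
]

# Term-document matrix, reduced to a handful of latent "semantic" dimensions.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(documents)
svd = TruncatedSVD(n_components=3, random_state=0).fit(tfidf)

# Each word's coordinates are its loadings on those latent dimensions.
words = vectorizer.get_feature_names_out()
word_vectors = svd.components_.T  # shape: (n_words, 3)

# Cluster the words into conceptual "neighborhoods."
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(word_vectors)
for cluster_id in range(3):
    neighborhood = [w for w, c in zip(words, labels) if c == cluster_id]
    print(f"neighborhood {cluster_id}: {neighborhood}")
```

Run on a real corpus rather than six toy sentences, the same recipe yields the intermingled but recognizable neighborhoods Sigman describes.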
When you and I are speaking, we’re navigating this city. We may remain in one neighborhood; we may explore a whole borough. If the discussion gets particularly wild, we might find ourselves in the suburbs. Sigman’s algorithm tracks our conversational jaunt. “You can think of speech as a trajectory within this space,” he says. “Something which was not quantitative now becomes something which is very concrete. It’s just movement in space.” He can measure the rate at which you travel, because he knows both how far each conversational leap takes you and how quickly you make it. He calls this measure of conversational velocity “semantic coherence.” And he can tell if a subject is speeding or getting lost.
Feed the program a transcript and it spits out a number. “Semantic coherence means that I am staying, for a reasonable amount of time, on the same topic,” Sigman explains. “I’m not just hopscotching from one neighborhood to the next.” We can achieve semantic incoherence if we want, and, sometimes, hopscotching is even desirable. “If you are doing poetry, and you feel inspired and want to relate unrelated things, you’ll get low semantic coherence. That’s fine.” However, if you’re attempting to remain coherent — participating in a study, for instance, in which you’re asked to describe “something very concrete” in an interview format — and you fail, Sigman explains, “this may be a sign that something is going wrong.” This is where his program’s predictions come in. However, though he’s proud of the positive early results, Sigman isn’t resting on his laurels.
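Continuing the sketch above, here is roughly what "spitting out a number" could look like: represent each sentence of a transcript as a vector in the semantic space, then average the cosine similarity between consecutive sentences. The scoring function and the toy vectors below are illustrative stand-ins for real sentence embeddings, not output from Sigman's model.

```python
# Minimal sketch of a semantic coherence score: the mean cosine similarity
# between each sentence vector and the one that follows it. Staying in one
# "neighborhood" keeps the score near 1; hopscotching drives it toward 0.
import numpy as np

def semantic_coherence(sentence_vectors: np.ndarray) -> float:
    """Mean cosine similarity between consecutive sentence vectors."""
    norms = np.linalg.norm(sentence_vectors, axis=1, keepdims=True)
    unit = sentence_vectors / np.clip(norms, 1e-12, None)      # unit-length vectors
    step_similarities = np.sum(unit[:-1] * unit[1:], axis=1)   # cosine of each consecutive pair
    return float(step_similarities.mean())

# A coherent speaker: consecutive sentences point in similar directions.
coherent = np.array([[0.90, 0.10, 0.00],
                     [0.80, 0.20, 0.10],
                     [0.85, 0.15, 0.05]])

# A "hopscotching" speaker: each sentence jumps to a different neighborhood.
scattered = np.array([[0.90, 0.10, 0.00],
                      [0.00, 0.90, 0.10],
                      [0.10, 0.00, 0.90]])

print(f"coherent transcript:  {semantic_coherence(coherent):.2f}")   # close to 1
print(f"scattered transcript: {semantic_coherence(scattered):.2f}")  # much lower
```

A speaker who lingers in one neighborhood scores close to 1; a speaker asked to describe something concrete who nonetheless hopscotches scores much lower, and that drop is the kind of warning sign the article describes.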
“The 100 percent part is certainly not going to be replicable,” he says. “We did it in a relatively small group. Thirty-four is not a huge group. It’s a medium size. It’s robust, but it’s still in the scale of a small, experimental group.”
He knows that a mental health app sophisticated enough to bypass human evaluation is very far away. In the meantime, the algorithm could simply track semantic coherence in app form rather than offer diagnoses.
“None of these things will be diagnostic, at least not for a long time,” Sigman says. “It’s not like your phone is going to tell you, ‘Joe, take a pill, you are doing wrong: Your semantic coherence is too low.’”
Sigman does, however, anticipate that his work will dramatically improve diagnostic success rates, which hover at a measly 30 percent. The only clear downside — other than the possibility of false positives sneaking through the analysis process — is that telling someone their mental health is at risk might be harmful to their mental health. And if semantic coherence scores leaked, the potential ramifications could be serious: Employers or insurers could use the information; so could law enforcement.
Sigman says he’s thought a lot about whether he himself would want to be told of an impending calamity. For him, there are three factors.
1) Would knowing help him prepare? If the answer were no, Sigman says he wouldn’t want to see his results. But with respect to psychosis, knowing ahead of time definitely could be useful. “There is a significant difference between knowing this in advance and knowing it too late,” he says.
2) If he were told, would treatment mitigate the misfortune? If a doctor tells you that you’re at risk of having a heart attack, then you can eat less salt and start going for daily ambles, Sigman says. If a psychiatrist tells you that you have a high risk of developing psychosis, the treatment is not so straightforward.
3) If you are given such a prognosis — and if it is only a prognosis, meaning that the outcome is merely probable — what is the cost of such knowledge? Might it become a self-fulfilling prophecy? If you’re told by an authority that you’re on the brink of insanity, for instance, you might well use this information to drive yourself insane.
Whatever his own criteria, Sigman knows that the ramifications of prognostic medical technologies are something that we, as a society, must address. “All this creates a complicated concert of consequences, which needs to be well-guided by the society — as is the case with other technology.” Luckily for us, the true mental health app remains a long way off. Just try to stay on the map, in familiar neighborhoods, moving about at a reasonable rate, until it arrives.