Fits and Starts: Predicting the (Very) Near Future of Technology and Behavioral Healthcare

Advancing digital technologies seem to be all the rage in the popular media as well as in medicine. Yet it was not until 2016 that the term “machine learning” first appeared in the (arguably) top two American medical journals, The New England Journal of Medicine and the Journal of the American Medical Association; now big data applications (and concomitant speculations) are ubiquitous. It has been a curious evolution, looking back to the development of ELIZA, one of the first natural language-processing computer programs, in the 1960s. It mimicked a Rogerian-style therapist, and some patient-users reportedly could not distinguish the algorithm from a real therapist, a result often described as passing the Turing Test. Though even lay technophobes today would remark on how basic the program was, at the time it was considered revolutionary. The progress to today’s artificial intelligence (AI) has been a technologically and morally winding journey, and today’s curiously lifelike machines were, in a sense, born from four simple words: “Hello, I am Eliza.”

In my first years out of graduate school as a young clinical psychologist, I fretted that my nascent differential-diagnostic skills might not be sensitive or sophisticated enough to tease out the many “medical” conditions whose symptoms mimic a psychiatric etiology (“…is it depression or an endocrine disorder?”). So I put my undergraduate computer science skills (such as they were) to the task of developing software to help me out. While effective for my purposes, the program was far from state-of-the-art even two decades ago. Given the foundation laid by the groundbreaking technologies of the 1960s that produced ELIZA, and the capabilities of my very rudimentary “programs” of the late 1980s, one might have expected the technologies of today to be much further advanced.

Technology and behavioral health have often (and ironically) seemed at odds. This is particularly true in the annals of psychiatric history; think of the technologies of trephining or the Nobel Prize-winning frontal lobotomy. Today we have kinder and gentler technological approaches, such as transcranial magnetic stimulation and apps, with much less risk of untoward side effects. Progress in medical technology is nonlinear, and no one specialty consistently excels; nor should such efforts be seen as a race or competition. Technological advances need not outpace society’s moral readiness to accept them, and “AI therapies” need not completely replace human interaction. There are regular arguments today about whether a “program” is as good as a human expert. Personally, I find this a bit ironic, given that “Turing-qualified psychotherapists” existed over fifty years ago, and that radiological exams can now be “read” by algorithms that are at least on par with, and often exceed, their human counterparts. To paraphrase William Gibson: the future is here, but it may not be evenly distributed. This is certainly the case in medicine.

Moving beyond diagnostics, current practice involves psychotherapies based on approaches quite different from reflective, empathic, Rogerian responses, such as the currently popular cognitive-behavioral therapy (CBT). Evidence-based practices are predicated on a robust empirical foundation of peer-reviewed studies from which to learn. Once that exists, both humans and machines can get to work: learn, apply, practice, and iterate.

While there are few panaceas in medicine or technology, the same is true of machine-learning applications. A wholesale shift of focus to the growing number of mobile apps and sensors in healthcare is not without concern; not all apps are created equal. In a caveat-emptor spirit, Saeb and colleagues note that the standard approach to evaluating a predictive algorithm’s accuracy is cross-validation, but that not all cross-validation schemes are statistically sound. They found that “…record-wise, cross-validation often massively overestimates the prediction accuracy of the algorithms… (and) …that this erroneous method is used by almost half of the retrieved studies… (to) predict clinical outcomes.” We thus need to give greater scrutiny to these proliferating apps and not simply assume face validity is sufficient. Indeed, the United States Food and Drug Administration has recently instituted a vetting program for prescription digital therapeutics, or “digiceuticals.” Its present focus is on chronic disease states such as Alzheimer’s disease, dementia, asthma, congestive heart failure, type II diabetes, and obesity, along with the psychiatric conditions of clinical depression, anxiety, substance abuse, and attention deficit hyperactivity disorder. CBT is the typical engine in such programs, even those that are not solely psychological.
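The record-wise pitfall Saeb and colleagues describe can be illustrated with a small simulation (entirely hypothetical data; scikit-learn is assumed). When multiple records come from the same patient, record-wise cross-validation lets the model “recognize” the patient rather than learn anything that generalizes, inflating the accuracy estimate; grouping all of a patient’s records into the same fold gives the honest number:

```python
import numpy as np
from sklearn.model_selection import cross_val_score, KFold, GroupKFold
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Simulate 20 subjects x 30 records each. Each subject has a stable
# personal "signature" in the features, and the label depends only on
# the subject -- there is nothing generalizable across subjects.
n_subjects, n_records = 20, 30
subjects = np.repeat(np.arange(n_subjects), n_records)
y = subjects % 2                                   # arbitrary per-subject outcome
signatures = rng.normal(size=(n_subjects, 5))
X = signatures[subjects] + rng.normal(scale=0.5, size=(len(subjects), 5))

clf = KNeighborsClassifier(n_neighbors=5)

# Record-wise CV: records from the same subject appear in both the
# training and test folds, so the model can match a test record to its
# subject's other records -> an inflated accuracy estimate.
record_wise = cross_val_score(
    clf, X, y, cv=KFold(5, shuffle=True, random_state=0)).mean()

# Subject-wise CV: all of a subject's records stay together in one fold,
# so the model must generalize to unseen people -> a much lower, honest
# estimate (here, near chance, since the labels are arbitrary).
subject_wise = cross_val_score(
    clf, X, y, cv=GroupKFold(5), groups=subjects).mean()

print(f"record-wise accuracy:  {record_wise:.2f}")
print(f"subject-wise accuracy: {subject_wise:.2f}")
```

Running this, the record-wise score is dramatically higher than the subject-wise score even though the model has learned nothing clinically useful, which is precisely why studies validated only record-wise deserve skepticism.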

There is a newer breed of technologies, though not so new as to be without proper vetting and testing, that have been evaluated in legitimate peer-reviewed journals, are supported by randomized controlled trials (RCTs), and are validated, accessible, and designed to augment rather than replace face-to-face interventions. Present-day technology is sophisticated enough to handle the nuances of care such tools require, and the economics of the modern shift to patient-centric care have created the opportunity for them to thrive.

When it comes to mental illness and substance abuse, technologies supported by RCTs and delivered via an app or digital therapeutic become even more valuable. They can mitigate the stigma that still inhibits some from seeking care, and they can reach those in settings (e.g., rural areas) where access to care is limited: where insurance coverage is inadequate, trained providers are scarce, coordination among primary care providers is poor, or transportation is simply lacking. Such tools also expand the treatment options of both primary care physicians and emergency department providers.

I have renewed optimism that the empirically developed and vetted tools now emerging have caught up in psychiatry and behavioral healthcare, and that they bring the welcome benefits of affordability, privacy, access, and effectiveness.

# # #

This piece was originally published on LinkedIn. If you’d like to learn more or connect, please do at http://DrChrisStout.com. You can follow me here, on LinkedIn, or find my Tweets as well. And goodies and tools are available via http://ALifeInFull.org.