Improving Results for Digital Therapeutics with Supportive Personal Contact

Jeb S. Brown, Center for Clinical Informatics

Edward R. Jones, PhD, Learn to Live, Inc.

Introduction

The past decade has witnessed the rise of digital therapeutics. This multi-billion-dollar healthcare segment provides consumer self-help tools, with behavioral health as a dominant focus. A number of companies offer online self-guided therapy programs, including SilverCloud, Ginger, and SpringHealth. Learn to Live is one such company, with self-paced digital modules for stress, depression, anxiety, and other conditions. In addition, coaching and other supportive personal contact are available while users complete their digital lessons.

Previous analyses of the effectiveness of these programs, based on user-completed outcome questionnaires, showed significant clinical change. This study addresses whether those clinical results were maintained or changed for a subsequent cohort of users. Brown et al. (2020) reported results for users of the Learn to Live program between January 1, 2019 and March 15, 2020 (n = 4,242), including a comparison with clients receiving outpatient psychotherapy (ACORN collaboration sample, n = 120,671). This sample will be referred to as the baseline cohort.

That study showed that digital users who completed seven lessons reported greater improvement than clients completing seven sessions of outpatient therapy. However, the dropout rate for digital users was higher, and reported change in early lessons was smaller than that found for psychotherapy clients. Results from this period serve as a baseline against which to evaluate results for a subsequent period in which the company sought to improve outcomes. The improvement of results from digital therapeutics is a new area of study, since these tools have been prominent only during the past decade.

Learn to Live supports an active program of continuing evaluation and initiatives designed to improve results. For example, Brown & Jones (2021) proposed an algorithm to identify Learn to Live users at risk for a poor outcome by the second lesson, so that at-risk users can be encouraged to accept additional personal support. Data are currently being collected to evaluate the effects of this intervention.

Digital therapeutics companies may report positive outcomes, but these are naturalistic studies without the rigor of randomized, controlled designs. Follow-up validation studies can therefore help assess the extent to which real-world findings are robust rather than fortuitous. The current study evaluates results for users of the Learn to Live platform between July 1, 2020 and December 31, 2021. This second group of users will be referred to as the quality improvement (QI) follow-up cohort. The sample of 1,128 users includes people who used the digital platform without any other support, those who chose to receive supportive coaching, and those who elected to receive automated text messages encouraging mindfulness to promote emotional health.

Method

The magnitude of improvement is reported as an effect size. For purposes of this article, effect size is calculated as pre-post change divided by the standard deviation of the outcome measure at intake. The use of the effect size statistic, also known as Cohen's d, is important for benchmarking results since it provides a common metric independent of the questionnaire used (Cohen, 1988).
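
For illustration, this calculation can be expressed in a few lines of Python. This is a minimal sketch; the function and variable names are ours and do not correspond to any particular implementation.

import numpy as np

def intake_anchored_effect_size(intake_scores, last_scores):
    """Pre-post change divided by the sample standard deviation at intake."""
    intake = np.asarray(intake_scores, dtype=float)
    last = np.asarray(last_scores, dtype=float)
    mean_change = np.mean(intake - last)  # positive values indicate symptom reduction
    sd_intake = np.std(intake, ddof=1)    # standard deviation of the intake scores
    return mean_change / sd_intake

For example, if scores drop from a mean of 20 at intake to 14 at the last lesson, with an intake standard deviation of 7.5, the effect size is (20 - 14) / 7.5 = 0.8.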

Most journals today require the use of effect size when reporting results. Many decades of research on psychotherapy outcomes yield an average effect size for psychotherapy of approximately 0.8. For this reason, we classify effect sizes of 0.8 or greater as indicative of "highly effective" treatment. However, it bears noting that there is no evidence that effect sizes have increased over decades of research on so-called evidence-based treatments. In contrast, evidence mounts that the therapist's ability to form a positive relationship with the client is far more important than the method of therapy (Wampold & Imel, 2015; Minami et al., 2012; Brown et al., 2015a). This line of evidence points strongly to the importance of contact with a helping professional in delivering various therapies. The results from this study further support that conclusion.

The methodology for benchmarking outcomes has been refined over the past decade by participants in the ACORN collaboration, notably Minami and Brown (Minami et al., 2007; Minami et al., 2008a; Minami et al., 2008b). The ACORN benchmarking methodology has been well documented and validated across thousands of clinicians working in a variety of settings and treating a wide range of problems and disorders, with services funded through multiple types of payers, including private insurance, public funding, self-pay, and employee assistance programs. The methodology employs multivariate predictive models to account for differences in case mix, with results reported as effect sizes (Brown et al., 2015b). Brown et al. (2020) provide an in-depth discussion of how effect size was calculated for the Learn to Live samples.
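
The cited papers document the full models; the sketch below illustrates only the general logic of case-mix-adjusted benchmarking, not the ACORN implementation itself. All file and column names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: intake_score, change (intake score minus last score),
# age, and diagnosis. The reference sample provides the benchmark.
benchmark = pd.read_csv("psychotherapy_sample.csv")
cohort = pd.read_csv("digital_cohort.csv")

# Fit a case-mix model on the reference sample: expected change given
# intake severity and client characteristics.
model = smf.ols("change ~ intake_score + age + C(diagnosis)", data=benchmark).fit()

# Mean observed change in the cohort beyond what its case mix predicts,
# scaled by the intake standard deviation to yield an adjusted effect size.
excess_change = cohort["change"] - model.predict(cohort)
adjusted_effect_size = excess_change.mean() / cohort["intake_score"].std(ddof=1)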

Treatment outcomes can be evaluated for those completing treatment (seven lessons for Learn to Live modules) or on an "intent to treat" basis using the last lesson completed. The intent-to-treat method yields smaller effect sizes, given that many users do not complete all lessons, but it more accurately reflects the experience of most users. For this reason, the intent-to-treat method is employed here, including reporting of effect sizes for users ending treatment at each lesson.
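
A minimal sketch of this intent-to-treat calculation, assuming a hypothetical table with one row per user and the score at the last completed lesson:

import pandas as pd

# Hypothetical columns: intake_score, last_score (score at the last completed
# lesson), and last_lesson (1-7). Every user is included, however far they got.
df = pd.read_csv("users.csv")

sd_intake = df["intake_score"].std(ddof=1)
change = df["intake_score"] - df["last_score"]

# Overall intent-to-treat effect size across all users.
itt_effect_size = change.mean() / sd_intake

# Effect size for users ending treatment at each lesson.
effect_size_by_last_lesson = change.groupby(df["last_lesson"]).mean() / sd_intake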

The results for the Learn to Live sample are broken out by four conditions, based on whether users received any type of personal support in addition to using the digital platform: a) coaching, b) coaching and mindfulness texts, c) mindfulness texts, and d) no personal support. Within the coaching condition, support from a non-licensed coach was provided via phone, text, or email, depending on the user's wishes. In the analysis of the baseline cohort, the effects of coaching appeared to be independent of the contact method (Brown & Jones, 2020). Coaching results are therefore reported in aggregate rather than by contact method.

Results

The main comparison is between the baseline and quality improvement cohorts. However, results for both groups can also be compared with those for the ACORN psychotherapy sample, which serves as the benchmark for this analysis.

Table 1 presents the intent-to-treat results for the baseline and quality improvement cohorts. There is a dramatic increase in effect size for the quality improvement cohort (0.83 versus 0.68). The effect size for the QI cohort significantly exceeds the effect size for the psychotherapy comparison sample (p<.001) and represents a 22% increase in self-reported improvement. Also of note, the average number of completed lessons increased (from 3.9 to 4.25), a difference that was significant (p<.01). Analysis of variance confirmed that the increase in effect size was significant even after controlling for lesson count (p<.05). Part of the increased effect size therefore appears to be the result of achieving more improvement per lesson.
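
An analysis of this kind can be approximated with a simple linear model that includes both cohort and lesson count. The sketch below uses hypothetical column names and is not the analysis code used for this study.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: change (intake score minus last score), cohort
# ("baseline" or "qi"), and lessons (number of lessons completed).
df = pd.read_csv("combined_cohorts.csv")

# Does the QI cohort show greater change after holding lesson count constant?
# A significant cohort coefficient parallels the p<.05 result reported above.
model = smf.ols("change ~ C(cohort) + lessons", data=df).fit()
print(model.summary())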



Table 2 summarizes the types of services received by the digital therapeutics users. It shows an increase in the percentage of users receiving some form of personal support while using the digital therapeutics. The percentage of users receiving no personal support dropped from 51% for the baseline cohort to 35% for the quality improvement cohort. The mindfulness text messages, received alone or in combination with coaching, increased as a form of support in the quality improvement cohort.

Graphs 1 and 2 display the effect sizes for each type of support received for the baseline and quality improvement cohorts. These show a substantial increase in effect sizes for all conditions, including no support. All changes in effect size across support conditions are significant (p<.01). The increase in effect size for mindfulness texts remained significant (p<.01), as did the increase for coaching combined with mindfulness texts (p<.05), even after controlling for the increase in lesson count.

Discussion

The results confirm the earlier finding that personal support tends to increase effect size. During the follow-up period for the quality improvement cohort, the percentage of users choosing options for personal support increased significantly, from 49% to 65%. Platform users receiving personal support experienced more improvement per lesson. The combination of personal coaching and mindfulness texts appears to have an additive effect relative to coaching alone.

These are real-world findings, and the lack of random assignment to a control group makes any interpretation a matter of speculation. The clinicians and technical staff at Learn to Live are constantly enhancing the platform to make it more engaging. They also craft mindfulness text messages with the intent of making people feel they are receiving personal therapeutic input, and these messages are modified over time. While the results are encouraging, it cannot be concluded that these quality improvement efforts caused them.

Results can change not only because of quality improvement efforts, but also because of the changing composition of the people using the platform. Analyses were conducted by membership group to see whether new business customers (e.g., health plans, large employers) were driving the changes in outcome. Group membership did not appear to be a source of the changes. Data from the baseline cohort were also reanalyzed to determine whether an upward trend in results had been present all along; this was found not to be the case.

The current findings support the value of the digital therapeutic platform even when used without any personal support. Yet the evidence also suggests that results increase to the highly effective range for those who accept support. It is noteworthy that people receiving minimal support through mindfulness text messages improved to the highly effective range. This raises the question of how much personal support is needed to achieve better outcomes. Text messages are significantly less costly than coaching, so the answer has important implications for providing the most cost-effective services.

As with any novel treatment, it is important to evaluate risks and contraindications. The current dataset does include a percentage of users who report no improvement, but this group is relatively small, and the reasons are difficult to determine from these data. The authors are currently in the planning phase of integrating the product into routine outpatient psychotherapy at a very large multi-site clinic serving a heterogeneous population, with a wide variety of payers and very good case-mix data, including client age, diagnosis, and payer type. Results for clients using Learn to Live will be matched with their utilization and effect size from psychotherapy. This study will hopefully provide useful information on both indications and contraindications for offering the Learn to Live program to clients with known psychiatric diagnoses.


References

Brown, G. S. J., Simon, A., & Minami, T. (2015a). Are you any good…as a clinician? [Web article]. Retrieved from http://www.societyforpsychotherapy.org/are-you-anygood-as-a-clinician

 

Brown, G. S. (J.), Simon, A., Cameron, J., & Minami, T. (2015b). A collaborative outcome resource network (ACORN): Tools for increasing the value of psychotherapy. Psychotherapy, 52(4), 412–421. https://doi.org/10.1037/pst0000033

 

Brown, J. S., Jones, E., & Cazauvieilh, C. (2020, May). Effectiveness for online cognitive behavioral therapy versus outpatient treatment: A session by session analysis. [Web article]. Retrieved from http://www.societyforpsychotherapy.org/effectiveness-for-online-cognitive-behavioral-therapy-versus-outpatient-treatment

 

Brown, J.S., & Jones, E. (2020, December). Impact of coaching on rates of utilization and clinical change for digital self-care modules based on cognitive behavioral therapy. [Web article]. Retrieved from http://www.societyforpsychotherapy.org/impact-of-coaching-on-rates-of-utilization-and-clinical-change-for-digital-self-care-modules-based-on-cognitive-behavioral-therapy

 

Brown, G. S., & Jones, E. (2021, March). Improving clinical outcomes for digital self-care. [Web article]. Retrieved from http://www.societyforpsychotherapy.org/improving-clinical-outcomes-for-digital-self-care

 

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, N.J: Lawrence Erlbaum Associates.

 

Minami, T., Wampold, B. E., Serlin, R. C., Kircher, J. C., & Brown, G. S. (2007). Benchmarks for psychotherapy efficacy in adult major depression. Journal of Consulting and Clinical Psychology, 75, 232-243. doi:10.1037/0022-006X.75.2.232

 

Minami, T., Serlin, R. C., Wampold, B. E., Kircher, J. C., & Brown, G. S. (2008a). Using clinical trials to benchmark effects produced in clinical practice. Quality and Quantity, 42, 513-525. doi:10.1007/s11135-006-9057-z

 

Minami, T., Wampold, B. E., Serlin, R. C., Hamilton, E. G., Brown, G. S., & Kircher, J. C. (2008b). Benchmarking the effectiveness of psychotherapy treatment for adult depression in a managed care environment: A preliminary study. Journal of Consulting and Clinical Psychology, 76, 116- 124. doi:10.1037/0022-006X.76.1.116

 

Minami, T., Brown, G. S., McCulloch, J., & Bolstrom, B. J. (2012). Benchmarking clinicians: Furthering the benchmarking method in its application to clinical practice. Quality and Quantity, 46, 1699-1708. doi:10.1007/s11135-011-9548-4

 

 


Wampold, B. E., & Imel, Z. E. (2015). The great psychotherapy debate: The evidence for what makes psychotherapy work (2nd ed.). New York, NY: Routledge.

 
