Integration of Digital Mental Health Interventions ...
Video Transcription
Okay. Hi, everyone. I am excited to be here today to share my experience implementing digital interventions in clinical settings through the Stanford Digital Mental Health Clinic. I'll start with a brief introduction. I'm Sarah Johansson. I work in the Department of Psychiatry and Behavioral Sciences at Stanford, where I direct the Digital Mental Health Clinic, which we'll be talking about as a case study today. This clinic offers a hybrid of teletherapy and digital interventions for mental health care. I have no related financial disclosures or conflicts of interest. I'm speaking independently today and not on behalf of Stanford. The graphs, figures, and representations that are not my own are attributed to the corresponding reference on the slide. So why are we talking about digital interventions for mental health today? Most of us here are familiar with the fact that mental illness is both a common and growing problem: globally, one in every eight people, or 970 million people, were living with a mental disorder in 2019, with anxiety and depressive disorders being the most common. And we know that number has only increased since the pandemic. Seventy-five percent of people in low-income countries cannot get treatment. And we have an access problem in developed countries as well, including here in the U.S. In 2020, 53 million adults in the U.S., or approximately one in five people, had mental illness, and yet only 46% received treatment. This speaks to the enormous supply-demand problem in mental health care, where our demand for care far exceeds the supply of trained mental health professionals. We really can't hire our way out of this problem. We have to find new and innovative ways to address our need for mental health services.
And that has really inspired many people to begin exploring digital interventions, which tend to be more scalable and more easily accessed than traditional one-on-one care. I want to start with a bit of definition setting here, because there are a lot of different terms used and it can get a little confusing. The term digital intervention refers to digitally delivered tools that are designed to address mental health symptoms. You'll see this definition here: software accessed by mobile devices and personal computers that is designed to prevent, treat, or manage mental health symptoms. Now, many digital interventions fit into a category termed wellness products, meaning that they don't claim to treat a disease. They may address mental health symptoms, but they're not claiming to be a disease treatment. These products are not subject to regulations, and they do not need to be supported by testing and evidence. So these would be your non-clinically-validated digital mental health interventions, and the vast majority of apps fit into this category. There are some apps that have gone through trials to demonstrate clinical evidence, but they still fit into this category because they only claim to address mental health symptoms, not to treat a disease. Once an app claims that it intends to treat a disease, disorder, or condition, it falls into a different category of digital interventions called digital therapeutics. Digital therapeutics are designed to deliver disease treatment directly to patients, and these products are held to much more rigorous regulatory requirements and must demonstrate clinical evidence.
So the definition here is software intended to treat or alleviate a disease, disorder, condition, or injury by generating and delivering a medical intervention, in this case a mental health intervention, that has a demonstrable positive impact on a patient's health. These products being held to more rigorous requirements means that some of them go through FDA clearance, and that results in a subcategory called prescription digital therapeutics. These require FDA clearance and a clinician to prescribe them. So that's just some level setting on definitions. Now, how are these interventions regulated? The majority of mental health apps are not considered to meet the FDA's definition of software as a medical device and thus are not subject to FDA regulations. According to the FDA, software applications that run on a digital device may be subject to device regulation if they are intended for use in the diagnosis of disease, or in the cure, mitigation, treatment, or prevention of disease, or to affect the structure or any function of the body of man. So the intended use is what really matters, and it can be shown by labeling claims, advertising materials, and oral or written statements by the companies; it's the intended use that the FDA sets this definition around. There are a handful of apps out there for mental health conditions that have received FDA clearance, but the vast majority are not considered prescription digital therapeutics and are not going through the process of FDA clearance and regulation. So with that background, here are our objectives for today. We're going to explore the market for digital mental health interventions and talk about the gap between this expanding digital health market and the limited adoption of digital interventions, both in the consumer space and in clinical settings.
And we'll discuss some of the barriers to integration of digital treatments into clinical settings, and also explore some of the literature that supports human interaction as being really critical to patient engagement with these digital interventions, which helps with that integration. Then we'll introduce a case study of a blended care model with the Stanford Digital Mental Health Clinic and review some key principles of digital integration that we've learned through this clinic. First, we're going to review the context that our work is happening in. This is a market map, and it represents a very small fraction of the mental health apps that are available to consumers today. There are an estimated 10,000 to 20,000 different apps for mental health available. The point here is that the mental health tech landscape is very complex, and it's a very crowded space with a lot of companies pursuing development. Part of the reason the market is so saturated is that historically there has been enormous funding driving this development. It's not just clinicians who recognize that we have this supply-demand problem in mental health; investors also see this opportunity. Funding in this sector grew to $5.5 billion globally in 2021, which represented a 139% increase year over year. Funding has since cooled down, partly due to the market downturn, but there are other factors at play as well, and we can talk a little more about those later during the questions. Companies are having a more difficult time raising funding now for mental health applications. But overall, it's still a space that has a lot of interest and, I think, growing optimism. So we know we have a lot of demand for mental health care, and we have all this funding to startups that drive solutions.
And so really, what does this mean for consumers? It means that consumers face a very bewildering array of different options, and it's very hard to know what's good and what works. What makes it harder is that in this direct-to-consumer market, companies make claims that are not always backed up by evidence. A recent study reviewed the App Store descriptions for the top-ranking consumer-focused mental health apps. Out of the 73 apps reviewed in the top-ranking category, 64% made a positive claim related to effectiveness in diagnosis or improving symptoms. And 44% used general scientific language, like "evidence-based treatment," to support their claims, but most did not reference any high-quality scientific evidence. Only two cited low-quality primary evidence, and only one included a citation to published literature. So you can imagine that if you were to go to the App Store and search for mental health, as I've shown in this little GIF here, it would be really hard to distinguish between apps that are making these pseudoscientific claims and apps that have actually done the research to demonstrate their effectiveness and clinical validity. This is an important point: many apps are based on evidence-based treatments, but that does not mean that the apps themselves are evidence-based. Shown here are examples of training manuals for different therapeutic modalities, and these therapies are commonly used in the design of digital interventions. Companies essentially take these published manuals, digitize aspects of them, and then claim that the app is what they call based in evidence, which is very different from evidence-based. So cognitive behavioral therapy is an evidence-based treatment for depression and anxiety.
So an app that uses CBT principles may say that it's evidence-based, but unless that app has gone through its own research studies to demonstrate its effectiveness, the app itself would not be considered evidence-based. So it's not a bad thing, of course, to build your product from these evidence-based treatments. Actually, it's a good thing. I think that companies look to established methods that are known to be effective, but it's important to be careful with language and make sure that we really think about what it means to say evidence-based. And again, the vast majority of apps have not been put through research studies to demonstrate their effectiveness, and there are some companies who have researched their products, but they are the exception to the rule. And this is an opportunity here to highlight some of those apps that have actually done RCTs. This was published just recently in 2024, and it is a meta-analysis reviewing the data on 176 RCTs of digital apps aiming to treat depression and anxiety symptoms. I think this paper is a great read, so I really recommend it. And overall, this study showed that apps have a small but significant effect on symptoms of depression and anxiety. Effect sizes for both depression and anxiety were increased when the app incorporated CBT features as opposed to, say, mindfulness alone. Effect sizes for depression increased when the app incorporated chatbot technology, and for anxiety, effect sizes increased when the app incorporated mood monitoring. So, it's possible that these features of chatbot technology and mood monitoring allowed for greater personalization and enhanced engagement, and that may have contributed to that effect size increase, but we should really interpret these results with some caution since the number of studies in these subgroups that incorporated the chatbot technology or the mood monitoring was small. 
So, overall, I think this meta-analysis offers some promising data that digital apps can be effective in addressing depression and anxiety symptoms. Okay, with that background, I'd like to dive into some of the barriers to using digital apps to address mental health conditions. We know that clinicians face a lot of barriers that impact their use of digital interventions. As we've discussed, there are so many apps out there and not all have sound clinical evidence, and much like consumers, providers are equally bewildered by this landscape. Many clinicians don't know how to select apps among all the available products, and they have concerns about the safety, privacy, and efficacy of these products. They also face systemic barriers. There are issues with insurance reimbursement and how to actually pay for these products, and issues with how to compensate clinicians for the time spent onboarding patients to digital programs. There's also often a lack of time in the clinical workflow; it becomes very difficult to just add an app on top of an existing clinical workflow. As we'll discuss later, it's really important that the clinical workflow is adjusted to allow for the app to be incorporated. And then there's a lack of workplace training: in order to incorporate these apps into clinical care, there needs to be some level of staff training in how to select the apps, how to talk about them with patients, and how to use them in follow-up appointments. All of this contributes to the lack of engagement with digital tools by clinicians in a lot of clinical settings. And we know that patients also face barriers to using digital treatments. So this is from the University of Washington Creative Lab.
They conducted a survey study to explore patient preferences on digital tools. They surveyed 164 adults from racial and ethnic minority communities, so Native American, African-American, Spanish-speaking, and rural-dwelling adults in the U.S. In these populations, accessing mental health care can be a particular challenge, and this access issue is the very problem that many companies reference when describing the potential benefit of their interventions. However, there are a lot of concerns around using digital interventions for care. In this study, 73 percent of the sample said that in the future they would be likely to try individual in-person psychotherapy, and 72 percent said they would be likely to try digital psychotherapy. So in this population, people were really equally open to both of those options, virtual versus in-person. But only about 25 percent of respondents preferred a self-guided digital treatment, and the options of self-guided digital psychotherapy enhanced with peer or expert support were actually the least preferred. The majority had concerns about the efficacy or effectiveness of digital psychotherapy relative to in-person psychotherapy, and about half of participants raised questions about the relative success rates of in-person versus digital psychotherapy. I think this really emphasizes the need to involve patients in the conversation, provide education about digital treatments, and allow the appropriate time and space to answer these questions about effectiveness. We need to increase exposure to these options and understand that it really takes time to build trust in alternatives. So, people download apps, and attrition is just a really major barrier.
People really quickly stop using digital apps for mental health. There's often a lot of hope when downloading one of these options, but in one study of 100,000 downloads of mental health apps, researchers found that the median retention rate after 15 days was just under 4%. This graph shows a really clear and swift drop in retention rates from the day of installation across all different types of mental health apps. Adherence does get better if a clinician refers the patient to the digital tool. In this analysis of 100,000 participants, the median duration of use was only five and a half days, but if the participant was recommended to join the study by a clinician, which you can see here in the purple colored line, the median retention time increased by 40 days. So this study suggests that clinician referral can improve adherence. And adherence gets even better if the clinician keeps checking in with the patient to support them in using the digital tool. This data is from a meta-analysis looking at dropout rates in RCTs of smartphone apps for depression. For studies in which participants had a human interaction and received real-person feedback while using the digital tool, the dropout rate was significantly lower: 11% compared to 33% in groups that did not receive any human interaction. So we see here that human support can really improve adherence to the digital tool. Let's talk a little bit about why human interaction is so important. There are a couple of theories as to why combining digital tools with human support is really critical. We know that two key determinants of technology adherence are performance expectancy, or how much the user believes that the technology will actually be helpful for them.
And then also social influence, or how strongly the user feels that other people think it's important for them to use the technology. A third key determinant of technology adherence is provider endorsement. In one study of veterans with diagnosed depression, anxiety, or PTSD, participants reported significantly higher interest in using digital tools when the tool was recommended by a mental health provider. Interest was lower when the app was recommended by a primary care provider, and even lower when no expert recommendation was received. We can think a little about why this might be. The difference is likely due to the patient's interaction with the mental health clinician, where the clinician is seen as an expert making a recommendation for a tool, which helps the patient believe that the digital program will be helpful for them (that's the performance expectancy) and that it's important for them to adhere to it (that's the social influence). The clinician can also help address issues with motivation by using motivational interviewing techniques, and, really importantly here, they can work with the patient on how to integrate the skills they're learning from the digital tool and apply them to their life. So in this case, we're really beginning to see that the app and the human are complementary and can actually enhance each other. Blended care models are a demonstration of this hybrid of the digital app and the human. They offer a hybrid approach that combines the strengths of both human-delivered and digitally delivered care. This model leverages the strength of the digital tools to help patients learn skills, build habits, and monitor symptoms, and it combines that with human-delivered care, which can help promote engagement and deepen the patient's capacity to integrate the skills that they are learning.
So this is a figure from a 2021 paper, and it outlines this idea of how the app could be integrated. It starts with a telehealth or in-person visit with shared decision-making and clinical goals to determine the app to use, then support for app setup, customizing the app for data collection or any intervention needs. The patient is then able to use the app outside of sessions and receive ongoing support around the app for any troubleshooting or engagement. In this particular figure, you see the clinician in the top left corner, and in the top right corner you see the support person for app setup and customizing the app. Later, in the bottom left, again for any troubleshooting and engagement, this would not actually have to be a clinician. People are beginning to talk more about what are called digital health navigator or digital navigator roles, where staff can be trained, almost like a coach, to help with these aspects of app engagement and usability. And then the patient sees their clinician for that skills integration and ongoing therapy. As we've previewed blended care models, I'll now introduce our new clinic that we've established here at Stanford: the Digital Mental Health Clinic. I'm also going to introduce a few other examples of digital clinics that are happening in academic centers; these folks have really inspired a lot of the work that went into the Digital Mental Health Clinic at Stanford. I'll start with Northwestern. They have their Center for Behavioral Intervention Technologies, and I want to highlight this group because I think they have done really great work in this space. They've done a lot of research on clinical implementation of digital interventions.
So this center designs web- or app-based programs and then researches how to implement the program in a real-world context with patients and digital coaches. They have done a lot of research into how human support can increase adherence through accountability to a coach who's seen as trustworthy and having expertise, so again, this blended care model. They published a supportive accountability model which formalized the approach, and a manual outlining the supportive accountability model is freely available on their website. They've also published an efficiency model which expands on the supportive accountability model to emphasize the role of the coach, not just in addressing engagement, but in helping the patient apply what they're learning in the digital program to their own circumstances, so they can address what was bringing them to care. And they did a study using a stepped care approach showing that programs that combine a digital CBT program with digital coaching sessions are less costly in terms of time and therapist resources, but no less effective compared to traditional CBT sessions for treatment of depression. This graph shows traditional CBT versus stepped care, with the QIDS depression scale as the outcome on the y-axis. You can see that from baseline to six months post, there was no significant difference in efficacy between traditional CBT and the stepped care model, in which patients were given a digital CBT program with a digital coach. In this particular study, 20% of patients stepped up from digital CBT to traditional CBT. But overall, therapist time was five hours in digital CBT versus ten hours in traditional CBT, so it was less costly, with no difference in efficacy. Another group, Beth Israel in affiliation with Harvard, has a division of digital psychiatry for research and clinical work in digital interventions.
And they have a digital clinic, which connects patients with an app that was developed by researchers in-house. It's called MindLamp. And in this model, data is collected asynchronously from the face-to-face appointment and then later used to inform care. So the MindLamp app tracks both active data, such as surveys and cognitive tests, and also passive data. And the platform that the provider uses is called LAMP. And the LAMP platform allows the provider to see the passive and active data, and then they can use that data to better inform care. So for example, if the app is tracking the quantity and the quality of sleep, as well as subjective mood reports, then the provider could use that correlation between say poor sleep and low mood to discuss the importance of sleep hygiene with the patient. And this group puts out a lot of research in the digital clinic. So at Stanford, we have the Digital Mental Health Clinic, and we use a combination of digital interventions and also teletherapy sessions. So we offer digital interventions delivered via a smartphone app, and we provide supportive therapy and digital coaching through telehealth. And we can connect patients with additional therapy resources if they need to step up from this level of care. So here's our clinical team. And I started this clinic when I was a resident, and Dr. Kim Bullock, who you heard from in the previous session, she supported me in that project when I was a resident, and she was the attending for the clinic. So this wouldn't have been possible without her. And then Dr. Eric Kuhn, he works with us in our research team. And Dr. Paul, Dr. Burkhead, Dr. Glick are part of our team of clinicians in the clinic. 
So as we consider how to bring digital interventions into the room with clinicians and patients, it's really important that we have frameworks for clinicians to learn which apps have good evidence, how to think about privacy and safety issues, and how to assess the usability of these products. One of the biggest questions I get about the digital clinic is around how we actually select the products that we use, and there are a number of factors that go into that. We ground our selection in a particular model that the American Psychiatric Association released, a framework for evaluating apps called the App Advisor. This assessment guides clinicians through evaluation of a digital program using five levels of questions; the full evaluation requires about 37 questions per app. Then in 2020, the digital psychiatry division at Beth Israel expanded on this APA model and created an online database called MindApp. They use trained app raters to answer 105 questions about each app, and they build on the APA model to address issues in five areas: accessibility (is the app accessible for users?), privacy and security (does the app uphold user safety by protecting data privacy and security?), clinical foundation (is the app supported by research?), engagement style (is the app usable?), and therapeutic goal (can the app share data in a clinically meaningful way?). The apps are not ranked on this database, but they are searchable and you can customize the results. So if you recall back to this very confusing, massive, complex, and crowded space of apps, we do have tools to help us navigate it as clinicians. This is the MindApp library on mindapps.org, where you can see and interact with the full database. You can search for apps that address specific conditions, have supporting studies, have a published privacy policy, and more, and this is freely available.
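To make the idea of a searchable, criteria-based app database concrete, here is a minimal sketch of that kind of filtering. The record fields and catalog entries are entirely hypothetical, not the actual database schema; the point is only to show how searching on condition, privacy, and evidence criteria might work.

```python
from dataclasses import dataclass

@dataclass
class AppRecord:
    """Hypothetical app record; field names are illustrative only."""
    name: str
    conditions: frozenset        # conditions the app claims to address
    has_privacy_policy: bool     # published privacy policy?
    has_supporting_studies: bool # any supporting research?

def filter_apps(apps, condition, require_privacy=True, require_evidence=True):
    """Return apps that target a condition and meet the chosen safeguards."""
    return [
        app for app in apps
        if condition in app.conditions
        and (app.has_privacy_policy or not require_privacy)
        and (app.has_supporting_studies or not require_evidence)
    ]

# Toy catalog with made-up apps.
catalog = [
    AppRecord("MoodToolA", frozenset({"depression"}), True, True),
    AppRecord("MoodToolB", frozenset({"depression", "anxiety"}), False, True),
    AppRecord("CalmToolC", frozenset({"anxiety"}), True, False),
]

print([a.name for a in filter_apps(catalog, "depression")])
# With the privacy requirement relaxed, MoodToolB qualifies too:
print([a.name for a in filter_apps(catalog, "depression", require_privacy=False)])
```

The design choice mirrors what the talk describes: the database does not rank apps, it just lets the user narrow the field by the criteria that matter to them.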
So both individuals and clinicians can use this tool to help make more informed decisions. In our clinic, we use the same model: the framework is based on the APA app evaluation framework and the MindApp database. I think these resources offer a really comprehensive overview of many of the most popular apps, and they teach us how to evaluate apps ourselves. For every program we use, we want to ensure that it meets really important standards with regard to efficacy, privacy, safety, and the user experience. And in our clinic, we do want to see apps that have actually published research demonstrating effectiveness. One of the programs that we've partnered with is Meru Health. Meru is a 12-week digital CBT program delivered via a smartphone app. Participants receive daily psychoeducation in CBT through exercises that they complete right on the app, and they also have access to peer support. When they start the program, they have a video call with a therapist on the platform. Thereafter, they can chat with their therapist through the app, and they can also elect to have monthly check-ins by video with their therapist. So Meru is itself doing a bit of a blended care model here. Stanford has had a long relationship with Meru Health and has done lots of research with them, and we really appreciate their commitment to doing research. This is one of the primary programs that we use. You can think about it as providing a lot of the psychoeducation that in CBT would normally occur during the visit. Then when patients meet with us, we're able to talk about what they're learning and how they can integrate the skills to address what they are experiencing and what was bringing them to care. We also work with Headspace for meditation and mindfulness training.
We use it as an adjunctive treatment with Meru, typically for patients who want to deepen their meditation practice. We also want to think a little bit about which patients are really best suited for this resource, so we've developed our inclusion and exclusion criteria for the clinic, and I want to chat about that now. Of course, this requires that patients have access to a smartphone and the internet. We are currently seeing patients who are diagnosed with depression or anxiety disorders, and we want to explore upfront whether patients are open to this form of treatment, with teletherapy and daily use of a digital program. In the exclusion criteria are other diagnoses, so schizophrenia or other psychotic disorders, bipolar disorder, OCD, PTSD, and substance use disorders; we're not currently focused on these groups. We are also not seeing folks with active suicidal ideation or suicidal behaviors. And then significant impairment in daily functioning: if symptoms are so severe that a patient is really not able to tend to their activities of daily living, it can be very difficult to engage with an app, and a higher level of care may be required. This is a really important part of the work that we're doing: we're learning which patients are best suited for a hybrid of therapy and a digital program. It won't be the right solution for all patients, but for some, it can really effectively meet their needs. So this is just to give you a sense of our workflow. We take direct referrals from primary care providers or other clinicians, and the referrals come to our intake for an initial screening call. Once we determine that they meet our inclusion criteria, we see them for an intake. Our follow-up appointments can be 30 minutes to an hour, depending on people's needs.
And we see people either bi-weekly or weekly, depending again on the patient's needs and their level of engagement with the program. After completing the 12-week digital program with Meru, we discuss next steps, including graduating from therapy or a referral to continue therapy, either in group therapy or in the community. I want to double-click on our model here for a moment. The intake appointment is our opportunity to assess the patient's needs: we determine their eligibility, and we also introduce this idea of the blended care model. Then we have a second appointment for onboarding. It does take quite a bit of time, so I think it really is best when it's separated into two appointments. At onboarding, we talk about selecting the digital intervention and assist with setup. We define goals and discuss roadblocks. I think it's really important to discuss roadblocks and the common things that come up that make engagement in this type of care difficult. If you discuss that upfront, it really normalizes the experience for patients, and it helps patients feel that if they miss a few days or don't check in with the program, that's okay, and that they can re-engage when they're ready; that will help them through the process. We talk about a plan for daily use, so we talk about times of day that might work well for them, and we also obtain a commitment from them. Then patients engage in daily use of the app at home. We talk about the digital tool as serving as a thread between our appointments, so it's a commitment to engaging with their mental health care every single day. And when we see them for follow-up, that's when we review their symptoms, assess their adherence, and help them with integrating the skills, revisiting their goals, and again discussing any roadblocks and barriers that are coming up for them.
And then we renew that commitment to the program. So we use measurement-based care to assess our patients' symptoms, and we typically gather these surveys every two weeks. The questionnaire is securely sent to patients prior to each appointment, and we review the data together with the patient during the visit. The data is housed securely, and we have Stanford IRB approval for QI/QA. During my residency, we ran a pilot, sort of a proof-of-concept version of the clinic, and since I joined as faculty in the last year, we've been ramping up and becoming fully operational. And we've had good results, both in terms of subjective patient feedback and our outcomes through this measurement-informed care. So this is just an example of PHQ-9 scores for one patient, as an example of some of the symptom tracking that we do in our clinic. I thought this was an interesting example because the symptoms tracked so well with the use of the digital tool. Initially, the patient's PHQ-9 scores decreased from the moderate range to the mild range, and then, about midway through the program, they experienced a worsening of symptoms. This correlated with a time when they were traveling for several weeks and weren't able to participate in our therapy sessions, and they also stopped using the digital tool during that time. Since they returned to regular attendance and daily use of the program, the scores have continued to trend down. So I thought this was an interesting example of how, one, the human support can be really important for adherence, and two, how in this particular case the symptoms were really positively addressed by using the digital tool. And so that's a little bit about the operations of the clinic.
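For context, the severity ranges mentioned in that PHQ-9 example (moderate, mild) follow the instrument's standard cutoffs on the 0-27 total score. A small sketch of those bands, not the clinic's actual tracking software:

```python
def phq9_severity(score: int) -> str:
    """Map a PHQ-9 total score (0-27) to the standard severity band:
    0-4 minimal, 5-9 mild, 10-14 moderate, 15-19 moderately severe, 20-27 severe."""
    if not 0 <= score <= 27:
        raise ValueError("PHQ-9 total scores range from 0 to 27")
    if score <= 4:
        return "minimal"
    if score <= 9:
        return "mild"
    if score <= 14:
        return "moderate"
    if score <= 19:
        return "moderately severe"
    return "severe"
```

Plotting these bands against a patient's biweekly scores is one simple way to see the kind of mild-to-moderate shifts described in the example.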
And now I'd like to talk a bit more about some of the key lessons that we've learned from our pilot study in the clinic, and that we continue to see demonstrated as we're growing the clinic now. So the first lesson learned is that expectation setting is really important. We initially tried a waitlist model, where we recruited patients to the clinic who were already on the waitlist for individual therapy. When we offered waitlisted patients the option to join our program without losing their position in line for individual therapy, we recruited very few patients from the waitlist. The majority of the patients we called told us that they wanted what they were waiting for: they wanted to see a therapist on a weekly basis, and they really weren't interested at that point. In fact, in many cases, they were frustrated that we were offering them something different from what they'd been referred for. And I think this really makes sense. They had already gone to their primary care provider, talked about their mental health care needs, and decided that individual therapy was the right course of action for them. Then they had to wait, and after being on the waitlist for a while, they really weren't open to even hearing about a different model of care. So the waitlist model really did not work in our first pilot version. We shifted to a new model, and now patients are referred directly to our program. This involved partnering with primary care clinicians to bring them on board with the idea of referring to our program, helping them learn about what we offer, and designing a referral process that would work well for them. So now the primary care clinician actually introduces the idea of the program to the patient as the recommended treatment for them.
And in this context, we find that patients are much more open to hearing about a new care model of digitally enhanced therapy, or a hybrid model of care. We've had much more success bringing patients into the clinic by this direct referral from primary care, and we really think this has to do with the expectations that are set from the moment the program is offered to the patient. We've also learned that human interaction is really important to people. We've talked today about some of the research that supports that, and the lived experience in this clinic is that people really want to talk through their problems and be listened to. They don't just want to do CBT homework or meditation sessions on their phone; it's just not enough. They really want to have that relationship with the therapist, and that's so key to getting buy-in for the program, to adherence, and to recovery. The therapeutic alliance is something that we track in our clinic, and we consider the therapeutic relationship to really be the cornerstone that we need. So I can't emphasize enough how important it is to combine the digital intervention with human contact. The research definitely supports that, and in my own practice I've seen it time and time again: although requiring some level of human support may not be as scalable as a standalone digital intervention, I think it is certainly more effective. As we're thinking about how to increase access to care, we want to make sure that we're not just increasing access to care, but increasing access to really quality care, care that we know will work. It's really not enough to just hand someone an app and expect them to be able to navigate it on their own. We've seen that most people will use it for about five days and then stop.
And so that's really not offering a sustainable solution; that's really not improving the access problem. But when those interventions are combined with a therapeutic relationship that helps with adherence to the program, helps patients get the benefit of the digital program, and supports them throughout their whole time in the program, I think that's a much more effective model. So I think this supports the use of these blended care models. In these models, you can actually increase patient flow through the program, and, for some patients, reduce the amount of therapist time, since some of that education is happening offline in the app. So it can help with that concept of access. But I think it's also important to consider the efficacy of what we're offering, and the human interaction is just so important for that. And then another consideration here is who actually pays for digital tools? This is one of the biggest issues, I think, with getting digital interventions incorporated into clinical settings. I will say that most apps plan to sell their programs to payers, so to insurance companies or to employer groups. In that case, the customer is really not the patient; the customer is the payer who's buying the product. What that means is that unless a patient has the program as a benefit through their insurance or through their employer, the out-of-pocket cost for patients can be high. And as I've seen, there's really a limited desire to pay out-of-pocket for these types of interventions, especially when payment is asked for upfront. This can be a really big roadblock when patients are asked to provide a credit card before their insurance is processed; they don't know what their expected co-pay will be, and patients have struggled with that.
And you can kind of see how it might be confusing. If patients receive the recommendation for the app from their care provider, they're probably going to expect a payment model similar to what they experience in a healthcare setting, where, through their insurance, they typically receive a quote of exactly what their co-pay is going to be for a visit. So they know ahead of time what they're going to be asked to pay, they come to the visit, they receive the service, and then they get the bill later. When that's flipped in a digital context and people are asked to pay up front by giving a credit card, even if it's not charged right away, I think that's a very difficult flip for patients. I've actually had patients who were referred to programs where they're asked to provide a card up front, and that's a roadblock and they won't engage. So I think payment for digital tools is a really important question, and this is a really evolving landscape, as we may have some new codes coming out that may offer reimbursement, but it'll be a learning curve to see what that means for providers and companies alike. And lastly, I want to acknowledge that acceptance of new care models really evolves over time. Historically, the doctor came into the home; we then shifted to a model where care is received in clinics and people go to see their doctor in a different setting. And now we're exploring a new horizon where care can happen in virtual spaces. It does take time for people to adjust to these new care models. There are certainly different risks and drawbacks to providing care in a virtual space, and I think there are also a lot of benefits, but we always really need this human touch.
So the digital program can enhance rather than replace the work with patients. To close, I really want to thank Dr. Bullock, without whom this project would not be possible. She has been really integral to the design and operation of the clinic, and she has been a wonderful mentor, so I'm very grateful for her support. And then the rest of our clinic team as well: Dr. Kuhn in research, and Drs. Paul, Burkett, and Glick on our clinical team. I'll also acknowledge Meru Health and Headspace, both of whom donated subscriptions to our clinic, and the institutional partners in our department and Stanford's Tech Hub, and everyone who's been supportive of this project and really allowed it to grow. So I appreciate the opportunity to talk with all of you today. I think we have a few minutes here for questions, so I'll be happy to take any. I'm going to briefly show my references here, and then we'll move on to questions. Let me see if I can find the Q&A. It looks like we have a couple of questions in the Q&A here. The first question is about the names of available CBT programs, their cost, and coverage by insurance programs. The actual cost and coverage is going to vary depending a lot on the patient's insurance and their specific plan. That can make it difficult when you're considering referring a patient to a program that is actually covered by insurance. I have had the experience of reaching out to different programs to ask them about insurance coverage. Some programs are also willing to provide patients with a quote for what their cost with insurance would be, and they can also provide information on what the out-of-pocket cost would be. So it can be worth exploring that if you find an app that you want to use but aren't sure of the insurance coverage. I would say that it can just be really variable; it depends a lot on the individual program and on the patient's insurance.
So one of the reasons why we use Meru Health, and why that was one of the programs we started with, in addition to their track record of research with different Stanford and VA clinicians, is that they were offered as a benefit for Stanford employees. As we think about how to integrate care into clinical settings, cost and payment is one of the really big questions that can guide decisions around which programs get used. So the fact that this was already available as a benefit to many of our Stanford patients is part of the reason why we tried it first in our pilot model. Now we are in discussions with some additional programs in the digital therapeutic space. Hearkening back to our discussion about definitions at the beginning, we're in open discussions with some of the digital therapeutics companies. These are the companies that have gone through more rigorous testing and are under more scrutiny in terms of regulation to get FDA clearance. That's another avenue: once a program gets FDA clearance, there may be opportunity for reimbursement through new insurance codes, and it remains to be seen what that reimbursement will look like. Other models you can consider are seeing which insurance types the company takes and then talking with them about options to refer patients who have that insurance, so who already have it as a benefit, versus what their out-of-pocket costs would be. And out-of-pocket costs really range quite a bit; it depends a lot on what the service is. It can be something nominal, like what you would pay as a co-pay for a medication. You can think about it like that and even talk about it with patients as being like a co-pay for a medication. But there can also be really significant costs depending on the product, particularly if it's not just software but also involves hardware.
There's another question about the best app for CBT for insomnia. I can speak to my own personal favorite here; I have no conflict of interest and don't receive payment from this company in any way, but I do like Sleepio, by Big Health. They have done a lot of wonderful, rigorous testing and studies, and I think Sleepio is a wonderfully designed product. They have also gone through the testing to obtain clearance, and they are considered a digital therapeutic. So I would look into Sleepio as a potential option. And then, I think we maybe have just one minute for this last question: is our clinic doing digital psychotherapy alone, or integrated with medication management in the same appointments? That's a choice; we've done it both ways. Initially, the clinic was just offering psychotherapy combined with the app, and later we shifted into a model where, if the patient needs medication management, we can also offer that. But it does mean allotting more time for the appointment. So you probably can't do a 30-minute check-in, more of a digital navigation visit, if you're also doing med management; in that case you really should be meeting with the patient for a longer visit, up to an hour, so more of a standard model of care where the app is serving as an enhancement to the work that you're doing. But it does work in our model to also do med management. I think we've hit our hour mark and there are no more questions in the Q&A, so I'm going to stop there. Thank you very much to everyone who attended.
Video Summary
In a detailed presentation, Sarah Johansson discusses her work at the Stanford Digital Mental Health Clinic, which integrates teletherapy and digital mental health interventions to tackle the rising demand for mental health care. With mental illness affecting millions globally and inadequate treatment access, especially in low-income countries, digital interventions offer scalable solutions. Digital interventions, broadly categorized as wellness products or digital therapeutics, aim to manage, treat, or prevent mental health symptoms. While many apps claim effectiveness using scientific language, few are clinically validated or go through rigorous regulatory processes like FDA clearance.

Johansson underscores that despite the vast number of mental health apps, estimated between 10,000 and 20,000, only a small fraction are supported by empirical evidence, confusing both consumers and clinicians. A 2024 meta-analysis indicated apps primarily had modest effects on depression and anxiety symptoms, suggesting potential but also limitations.

Barriers to digital intervention adoption include safety, privacy, and efficacy concerns, insurance reimbursement issues, and lack of clinician and patient engagement. Evidence suggests human interaction significantly enhances patient adherence and outcomes when using digital mental health tools.

The Stanford Digital Mental Health Clinic employs a blended care model, combining digital tools with clinician support. Early lessons indicate that setting clear expectations and integrating human support into digital therapy models are crucial for patient engagement and effective treatment. Overall, while digital tools offer scalable mental health care solutions, their success largely depends on human interaction and clear, supported implementation.
Keywords
Digital Mental Health
Teletherapy
Mental Health Interventions
Mental Illness
Digital Therapeutics
Clinically Validated Apps
Blended Care Model
Patient Engagement
Mental Health Apps
Human Interaction