Exploring the Impact of AI in Child/Adolescent Psychiatry
Video Transcription
Well, let's dive in. So hello, everyone. I am Dr. Monica Roots. I am a child psychiatrist and an assistant professor in the Department of Psychiatry at the University of Wisconsin-Madison, and I am the co-founder of Bend Health, which is a national pediatric mental health solution for kids, teens, and young adults. Prior to this, I was the chief medical officer at Sanvello, which is an adult mental health solution, and at Teladoc before that. So I've been in the digital health space for quite some time, and I'm excited to talk about the impact of AI on child and adolescent psychiatry and mental health. We definitely want to hear your questions, so there is a Q&A box, which can be used to enter any questions you have as I am speaking, and we have time at the end for me to address those. I do not have any potential or actual conflicts of interest in this program.

Today, we're really going to dive into understanding the concept of artificial intelligence and how it's relevant for child and adolescent psychiatry and mental health treatment today. We're then going to talk about its applications, not just in assessment and diagnosis, but in things like interventions and treatment planning. We'll talk a little bit about the challenges and how they present differently in child and adolescent psychiatry than in adult mental health care. We'll also talk about some of the limitations as well as the opportunities. So let's dive in.

First of all, as a child psychiatrist, I know we didn't talk much about AI when I was in school. So let's start with a little bit of a definition. Artificial intelligence allows computer systems to perform tasks that typically require human intelligence. What does that mean? It means that it can learn from experience, recognize patterns, and make decisions based on them. It holds extreme promise in helping us transform patient care, diagnostics, and operational efficiency.

As we think about AI and advancements in child and adolescent mental health in general, a lot has changed in this arena. First and foremost is awareness and recognition. We really hadn't talked much about pediatric mental health for quite some time, and that awareness is increasing; we see it in the amount of demand occurring in child and adolescent psychiatry and the mental health field. In addition, we've seen the introduction of digital mental health solutions, telehealth services, and an increased focus on preventative and early intervention programs, which are really helpful in trying to stem the increasing need that we're seeing. We also see an increase in multidisciplinary approaches, with collaboration between psychiatrists, psychologists, social workers, educators, and other healthcare professionals. This idea of bringing a holistic approach to pediatric mental health is a new one. We have also seen a lot of advancement in combining therapy and medications in care. The field continues to integrate those two as a best practice, especially in conditions like depression, but we see advancements in other areas like substance use, psychosis, and behavioral concerns as well. And we continue to see tremendous advancement in research, into not just medication interventions but other interventions as well.
But I think, as we all know, there continue to be significant challenges in access and equity. During social distancing and distance learning, we saw some of the equity issues we have in access to education alone, let alone mental health care for kids and teens, and this continues to be a factor. The impact of environment, things like socioeconomic status, family dynamics, and exposure to trauma, is increasingly recognized as a factor in child and adolescent mental health, alongside the many stressors in kids' daily lives. At the same time, we see new and engaging therapeutic approaches like mindfulness-based interventions, neurofeedback, and others.

But once again, let's get into AI. What is it? We see these computers, right? There are a lot of screens, and it looks a little scary, but artificial intelligence is defined as the ability of machines or software to acquire and apply knowledge and skills. It is the analog of human intelligence: the ability to learn and apply knowledge to new concepts. It started with John McCarthy, who first introduced the concept back in 1956. He thought of thinking machines that could learn concepts and apply principles to new problems, just like a human could. He actually talked about this at a seminar that originated at Dartmouth.

There are really two forms of AI in healthcare: assisted and autonomous. Assisted is a form of intelligence that augments tasks to support humans; think about the different ways we can surface knowledge to a healthcare provider to assist them in delivering care. Autonomous is a different form of AI, where the system is able to make decisions on its own. We see this a lot with generative AI and its ability to make decisions on its own to surface information.

There's also machine learning. This is a form of AI in which the system adapts to enhance its function and accuracy; in other words, machine learning works to improve the accuracy of the predictions it makes. It is trained on information, the inputs, or what they call features, and it learns to predict outputs based on those inputs. The better the inputs, the better it's able to identify a good pattern. Yet, and we'll talk about this in a bit, it is highly dependent on the information it is trained on, which can be an issue.

As we think about these two concepts, artificial intelligence is a computer's ability to act on its own according to its environment and display that knowledge, while machine learning is the ability to observe, analyze, and predict what the output could be based on previous patterns. That is really the distinction. In my mind, artificial intelligence is looking at information and making decisions based off of it, whereas machine learning is looking at past patterns to predict new or future patterns. There's a lot here that I think is important as we consider how to apply this knowledge in healthcare.

To see an example of how machine learning works, it's interesting to look at an infographic such as this one about the planets. If you think about machine learning, what is it doing? Well, it has inputs like: Mercury is the closest planet to the sun; Venus has an amazing name, but it's really hot there; Earth is the third planet from the sun.
Mars, despite being red, is actually cold, the opposite of what you might think. The model takes all of that information and makes a prediction about Saturn, another planet within the solar system that is giant and has several rings. It's trying to use all the information it has to predict what Saturn would be like. That's really what machine learning is like, but once again, it's based on the information that's put in. What if we trained it on "Venus does not have a nice name and it's actually cold"? Well, that would really impact the prediction it makes.

We've talked a little bit about what these things are, but in terms of how we apply it to healthcare today, we're doing a lot of different things with AI. It has a lot of potential, and these AI-powered algorithms, which have been fed a lot of medical data, can help us rapidly analyze complex patterns, which helps in early and accurate detection of diseases. It helps us interpret medical images like X-rays, MRIs, and CT scans, and it can help clinicians identify abnormalities and prompt them with insights. If you think about what AI is doing, it also allows us to really personalize medicine: to take someone's data, look at other patterns, and start to think about their genetic makeup, their lifestyle, their medical history. These are many of the things our clinicians already do, but it's such a large amount of data that AI holds real promise in helping us synthesize all of that information and optimize therapeutic interventions. It also potentially allows us to minimize side effects from medications and improve patient outcomes.

In addition, AI is helping streamline healthcare operations. There's so much we can do there to reduce the burden, not just on practitioners and clinicians, but on patients themselves. Administrative tasks like appointment scheduling, billing, and record keeping all have the potential to be automated through AI, allowing healthcare professionals to focus on what they want to do, which is patient care. AI also contributes to medical research and drug discovery by rapidly analyzing large data sets to identify potential drug candidates, accelerating the overall research process. This has real potential to bring novel treatments to market more efficiently than before, but there are a lot of challenges that we will get into. Regulation is a big part of AI as we think about its application in healthcare, which we'll also get into.

As we think about AI in child and adolescent psychiatry and mental health, which is really why we're here today, it's about applying computer systems that can simulate human intelligence to analyze, interpret, and support mental health-related tasks. I think it holds the promise of helping us do a better job. It involves the use of algorithms, machine learning, and data analysis to assist with the diagnosis, treatment, and management of mental health conditions in kids. Just as we saw with healthcare in general, it can help us do a lot in child psychiatry. It can help us with early detection and intervention: it can analyze data sets to identify early signs of mental health issues, identify timely interventions we need to consider, and prevent more severe conditions down the road.
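To make the earlier planet example concrete, here is a minimal sketch of supervised machine learning, assuming Python with scikit-learn; the planet "facts" and labels are simplified illustrations invented for this example, not real astronomy data.

```python
# A minimal sketch of the machine-learning idea described above: the model
# never "knows" astronomy; it only generalizes from the labeled examples
# (features -> outputs) it was trained on.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training inputs ("features"): [order from the sun, has rings]
X_train = [
    [1, 0],  # Mercury
    [2, 0],  # Venus
    [3, 0],  # Earth
    [4, 0],  # Mars
    [5, 1],  # Jupiter
]
# Labels the model learns to predict: 1 = hot, 0 = cold (toy labels)
y_train = [1, 1, 0, 0, 0]

model = DecisionTreeClassifier().fit(X_train, y_train)

# Predict for Saturn (6th planet, has rings) from the learned pattern
print(model.predict([[6, 1]]))  # -> [0], "cold", based on the pattern

# If the training rows were wrong (e.g., Venus labeled "cold"), the
# prediction would change: the output is only as good as the inputs.
```

The last comment is the whole point of the talk's planet example: swap in bad training rows and the prediction for Saturn changes with them.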
AI can also help us with detection that prompts support and the development of effective coping strategies. It also allows us to personalize treatment plans. AI algorithms can analyze a person's data, including their genetic information, and really tailor the treatment plan to the information you feed it. This approach allows us to actually individualize treatment. Instead of just using the algorithms we have typically used in the past, like trying a couple of SSRIs and then moving on to different treatment patterns for depression, this lets us ask: what are the genetic influences? What are the socioeconomic influences that are going to impact treatment outcomes?

It also helps us with diagnostics. It helps us interpret diagnostic assessments, looking at brain imaging and speech, to identify patterns and provide more accurate and objective diagnostic information. This gives us the opportunity for precision medicine in child and adolescent psychiatry and allows us to pick effective treatment options.

It also allows us to enhance access to mental health resources. As we know, there are not enough child and adolescent psychiatrists or therapists working today to address the need. AI lets us leverage technologies like AI-powered chatbots and virtual assistants to provide immediate and accessible support, navigation, information about coping skills, and psychoeducation, and it helps us bridge the gap. I don't think it's necessarily a replacement. What it allows us to do is offer people resources while they may be awaiting a practitioner or a clinician, as well as enhance the care that is offered.

There's also continuous monitoring and risk assessment, including continuous monitoring of children's and teens' behavioral and emotional patterns based on real-time data, as well as automated risk assessments that help us detect changes in progress or risk concerns that need timely and immediate intervention. And of course, I think AI really has the opportunity in child and adolescent psychiatry to support clinicians. There is a lot of data out there now to help us with treatment, but that means there's a lot of data to interpret and review. AI can give us data-driven insights to tailor our treatment plans, look at progress, look at risk factors, and tailor therapeutic interventions, and that supports our decision-making.

Now, what kinds of artificial intelligence are being used in mental health care today? You'll hear about many of these, but in summary: there is natural language processing. What is that? It's a form of AI that focuses on the interaction between computers and human language. In mental health treatment, natural language processing is utilized in chatbots, virtual therapists, and sentiment analysis to understand and respond to the nuances of human communication. We can apply it with chatbots to provide immediate support; we can assess language patterns to find distress; we can offer resources or coping strategies; and virtual therapists can leverage natural language processing to engage in conversation-based therapeutic interactions.
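As a rough illustration of natural language processing in this setting, here is a minimal sentiment-analysis sketch, assuming Python with the Hugging Face transformers package; the patient messages are invented, and a real triage tool would require clinical validation far beyond anything shown here.

```python
# A minimal sketch: score the sentiment of short patient check-in messages,
# the kind of signal a chatbot or monitoring tool might surface to a
# clinician. Illustrative only, not a clinical instrument.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model

messages = [
    "I slept okay and school was fine today.",
    "I don't want to talk to anyone anymore.",
]
for msg, result in zip(messages, classifier(messages)):
    # Each result carries a label (POSITIVE/NEGATIVE) and a confidence score
    print(f"{msg!r} -> {result['label']} ({result['score']:.2f})")
```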
Natural language processing also helps us understand whether there are risk concerns to be found in any text messaging or other interactions that are occurring.

There are also machine learning algorithms. This, again, is machine learning: the development of algorithms that allow computers to learn from the data given to them and improve performance over time. In child and adolescent psychiatry and mental health, this is being applied to predict the risk of mental health issues, personalize treatment plans based on individual characteristics, and predict potential crises by monitoring changes in behavior.

We also have something called computer vision. Computer vision enables computers to interpret and make decisions based on visual data. In mental health, computer vision is used in the analysis of facial expressions, body language, and other visual cues to assess emotional states. Facial recognition technology is being employed to analyze emotions and detect indicators of conditions like depression or anxiety, helping clinicians assess the emotional well-being of their patients.

We also have predictive analytics, which uses statistical algorithms and machine learning techniques to identify the likelihood of a future outcome based on historical information. This can help mental health professionals make informed decisions about treatment plans and resource allocation, and identify individuals who are at higher risk along with proactive interventions we can offer them. An example of this would be looking at data around an individual's ability to engage in therapy. Someone with worsening psychosis may disengage from therapy; being able to look at that data, predict that a relapse may be coming, and proactively reach out can have a significant impact.

There's also voice analysis. This is a technology that uses AI to assess various aspects of speech patterns, including pitch, tone, and speed. In mental health, we can use this to detect emotional distress, mood changes, and other potential mental health conditions. It can be applied in virtual therapy to monitor how engaged patients are and how they are improving, as well as to gain additional insights into their emotional well-being.

We of course have mobile apps and wearables, which give us the ability to monitor how someone's mental health is doing, especially in between sessions, and provide real-time data, including physiological and behavioral indicators. Things like sleep and movement: these tools can help us track physical activity, sleep patterns, and stress levels, so that we can use all of this data to modify our treatment protocols and get better results.

We also have sentiment analysis. I think this is a really interesting one, where we look at expressed written or spoken language. This has significant promise in helping us understand, for example, when journaling is occurring within care, what the sentiment is and whether there are any concerning features. We can use it to assess social media posts for anything that may indicate risk, or to look at how mental health professionals are interacting with their patients: what is their rapport, what is the sentiment when they message each other between sessions, and what emotional trends emerge. And of course, there's the promise of virtual reality.
As we know, virtual reality uses computer-generated environments to simulate real-world experiences. We've seen this a lot from the VA, where we're able to simulate potentially traumatic historical situations and process those emotions, or address things like panic concerns and phobias, where we're able to do exposure and response prevention. This has great promise in anxiety disorders and exposure work where the situation is hard to simulate in daily life, and it can make a true impact on progress.

We've talked a lot about the promise, but of course there are a lot of challenges, which we'll get into. There are ethical considerations. For many of these things, the concerns that usually arise first are privacy concerns: privacy around sensitive data, especially data about minors. We have sensitive information about young kids and adolescents, and we have to be very careful about how we secure this data and make sure it is used appropriately. In addition, we need to really think about what informed consent for its use looks like, and what the safety and transparency are when the data we capture is applied to new situations, as with the machine learning we talked about.

There are also concerns about algorithmic fairness and bias. As we discussed, machine learning and AI are only as good as the inputs they get. But we know that in healthcare in general, much of the data we have captured through research and other means carries a risk of bias. Therefore, this is an additional ethical challenge in the application of AI. There are also legal concerns, like safety and effectiveness: what is the liability of utilizing this information, along with other data protection issues?

There's also the concern of limited generalizability. What do I mean by that? AI models have the limitation, once again, of being trained on the information they receive. Without diverse populations in the input data, there's a risk of the model not being generalizable, or even being more biased. Therefore, ensuring inclusivity in the information the model is trained on and the information it uses is really important.

There's also the concept of human-AI collaboration, which really means balancing the role of AI as a tool to support clinicians rather than replace them. I think that's a really important ethical consideration, and it is going to be difficult to identify a situation where a human can truly be replaced in child and adolescent mental health, given the importance of rapport. However, AI can be a tool, used in a balanced way, to support clinicians in applying these techniques. But what is that balance? That is an ethical challenge that needs to be deliberated.

There are also, of course, regulatory and legal issues around integrating these into traditional practices. We know that there's some resistance to including AI in everyday practice, and that's because of a couple of things. One, I think there's limited information about what these AI techniques are doing: how do you apply them, and what is the training?
As I said at the top of the hour, I was not trained on this in medical school, so how do we continue to train, like we are today, on what these models are and what the benefit is, as well as look at long-term efficacy and safety?

But let's talk a little bit about the specific applications, because that's where I think it gets really interesting. If we think about AI-driven assessment tools, early detection and diagnosis is a really big opportunity, using techniques like SVMs and CNNs, support vector machines and convolutional neural networks, as well as deep neural networks. Now, I just said a lot. What are those things? A support vector machine is a machine-learning technique for classifying the information that's coming in. It is just another form of machine learning, a way to look at the data and then apply it, and these different AI techniques have different layers of inputs and outputs.

But I think what's interesting is starting to look at the application of these. For example, there was a study in 2022 in which the SCID, which we always use, and the SCL-90 were captured to look at the diagnostic features of multiple mental health conditions. The researchers were able to take key factors from those questions and decrease the number to 28 key questions that really correlated with mental health diagnoses, based on the training data. Using those 28 questions, with a patient, a child, or a caregiver answering them, the system was able to make an accurate diagnosis 89% of the time, simply using that AI decision-support system to identify the diagnosis, with no human intervention required. I think that shows a first step toward what AI can allow us to do: distill information down to predict patterns, offer faster access through AI-driven assessment tools to better diagnose conditions, and then look at additional applications if further tailoring is necessary.

There's also the application of AI in personalized treatment planning and intervention. The limitations in current care, as we know, are that the diagnostics we use today are very limited. We use a lot of self-directed inventories. There is research on the use of imaging, but it is not used in the majority of current care. There are measurement problems. There's also an issue with randomized controlled trials: there aren't enough of them, and there are always limitations in who is actually included in those studies. And then, of course, in treatment planning we have had very little feedback on personalized treatments, because we really haven't had that portal of connection between the patient and the clinician for continuous feedback.

So with that, I think there's opportunity through AI. Lutz, in 2020, used the concept of a treatment navigator to help us understand the best personalized treatment to apply. What they did was use tools for pre-treatment recommendations based on behavioral tracking, and then, through that tracking, they were able to adapt the personalized treatment recommendation throughout the therapy.
What they found was that it was extremely effective in predicting patient dropout and recommending treatment strategies, but the biggest piece was actually the clinician's ability to use that information and apply it. That is very similar to what Lambert found in his meta-analysis: outcome-monitoring-assisted psychotherapy had superior outcomes to treatment as usual. Once again, this is the idea that utilizing data, analyzed by AI to surface personalized treatment recommendations to practitioners, was extremely effective in, one, identifying patients who were going to drop out, and two, actually achieving better outcomes by the end of treatment. The issue, once again, across many of the studies in that meta-analysis was that the provider's use of the measures was the best predictor of success, meaning the provider really had to understand the information and apply it, and the provider's confidence in the data was another mediating factor. Some of these studies still have more work to be done, and the effect sizes range from small to moderate.

We also find applications through AI-enabled virtual therapists and chatbots, also known as conversational agents or relational agents. What they're able to do is provide continuous support, monitoring, and sometimes intervention. Things you may see out there are chatbots that ask how you're doing: Can you rate your mood? Have you slept today? That allows information inputted by a patient, a caregiver, or a remote monitoring device to be surfaced to the clinician and, sometimes, to drive an intervention: if it is very clear there is a concern, like the patient not sleeping enough and their mood going down, the chatbot might suggest that improving their sleep may improve their mood. That's a very simple example.

As we know, the FDA released its temporary guidelines in April of 2020, allowing more digital therapeutics for conditions like major depressive disorder and generalized anxiety disorder, and this led to many more digital therapeutics entering the market. The first chatbot, though, has actually been around for quite some time: it started with a chatbot named ELIZA back in 1966. Since 2016, about 39% of health chatbots have focused on mental health issues, and the proliferation of these chatbots has been significant. In fact, given that change in clearance, about 41 mental health chatbots were created in 2019 alone, and that number has only increased since then. These chatbots cover everything from depression to anxiety, autism, suicide risk, substance use, and acrophobia.

One thing that's very distinctive is that satisfaction is extremely high: about 68% of those who use them find them extremely helpful, and one common piece of feedback is that users don't find them judgmental. But one of the things we've seen with these chatbots, and with many digital therapeutics, is that very few studies truly look at outcomes, so the effectiveness of these chatbots is somewhat mixed based on the meta-analyses that have been done. Still, I find this to be a true opportunity.
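To show how limited the earliest conversational agents were, here is a minimal ELIZA-style sketch in Python; the patterns and canned responses are invented for illustration and bear no resemblance to a clinically validated chatbot.

```python
# A minimal ELIZA-style sketch: early chatbots like the 1966 ELIZA matched
# keyword patterns and reflected them back, with no real understanding.
# Modern mental health chatbots are far more sophisticated, but they can
# still "hit a limit" when no rule or training example fits.
import re

RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI can't sleep\b", re.I), "Tell me more about your sleep lately."),
    (re.compile(r"\bmy (mom|dad|family)\b", re.I), "How is your relationship with your {0}?"),
]

def reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    # No rule matched -- this is exactly where scripted agents feel inauthentic
    return "I see. Can you say more about that?"

print(reply("I feel worried about school"))  # -> Why do you feel worried about school?
print(reply("Whatever."))                    # -> I see. Can you say more about that?
```

The fallback branch is the design point: a rule-based agent has no graceful way to handle inputs outside its script, which is the "hitting a limit" complaint discussed later in the Q&A.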
There's also the ability to help us scale our own work as clinicians through automatic speech recognition and natural language processing. This is the ability for the sessions we deliver in our clinics and through telehealth to be turned into clinical documentation automatically. We know that clinical documentation is an extreme burnout risk for practitioners, and having AI-driven automatic speech recognition transpose sessions into clinical documentation is a true opportunity. The automatic speech recognition transcribes the conversation, and then natural language processing extracts and summarizes the relevant information and presents it to the clinician. It is not a substitute for documentation, and of course a clinician should review it for accuracy, but it can also surface additional quality measures: it can listen to the conversation and pull out key factors like safety concerns or training opportunities to improve the quality of the care being delivered.

Let's dig into those ethical considerations we touched on. As we know, there are, unfortunately, disadvantages to the use of AI. As we discussed, privacy and confidentiality are among the main risks typically identified with the use of digital mental health technologies. One of the biggest pieces, as we know, is that there are breaches of confidential data across many industries, and healthcare data is extremely sensitive. The data privacy risks really need to be examined, especially when data is shared with third parties or additional vendors for processing.

There's also some patient mistrust. Because of the data breaches we hear about across many industries, there is definitely patient mistrust around data privacy risks and who has access to that data. We see that mistrust decreasing in younger generations, but it continues to be a very strong concern among caregivers and older generations, who may influence access to treatment for child and adolescent mental health.

In addition, we really need to think about accessibility and equal access: for those who don't have broadband, computers, or smartphones, being able to generate this data, and truly having access to the benefits of AI, creates ethical considerations around accessibility and equal access. There are also cross-cultural and cross-country differences in attitudes and resources, which is recognized as an ethical consideration in increasing the health literacy of individuals trying to access care, helping them understand what AI is and its impact on their data, so they can make a truly informed decision. And then, of course, there is the clinical validation that is necessary, as well as the ethical and legal guidance needed in applying these AI techniques to child and adolescent mental health and obtaining consent.

The one piece that I would like to really dig into, though, is the risk of bias. Data bias occurs when the data used to train the machine learning model are underrepresentative or incomplete, which leads to biased outputs.
This can happen when data were collected from biased sources or when the data were incomplete; missing information can introduce errors as well, leading to further bias. There are many different kinds of bias. There's algorithmic bias, which occurs when the algorithms used in machine learning models have inherent biases that are reflected in their outputs. This can happen when the algorithms are built on biased assumptions, or when they use biased criteria to make decisions. There's also user bias, which occurs when the people using AI systems introduce their own biases or prejudices into the system, consciously or unconsciously. This can happen when users provide biased training data or when they interact with the system in ways that reflect their own biases or hesitations.

An example of bias in healthcare is when AI is used to predict patient mortality rates. Obermeyer found that such a system was biased against African-American patients: the study found that the system was more likely to assign high risk scores to African-American patients even when other factors, such as age and health status, were the same. This is a true reflection of the biased data it was trained on, which resulted from barriers such as barriers to accessing care and historically receiving subpar care, producing a continued bias through the application of these algorithms.

In addition, there was another study that assessed potential disparities of a classification model toward disadvantaged groups. It defined 18 disadvantaged and advantaged groups based on gender, ethnicity, socioeconomic status, age, obesity, mental health, and location, and in seven of 90 cases there was a statistically significant difference in favor of the advantaged group. The main reason for that disparity was a difference in the type of medical visit. For example, "blood" is a strong logical cue for classifying a sentence as important for the plan section of a summary, but the word also appears less often in conversations with Asian patients. So you can see how even word usage, unfortunately, can lead to bias.

So the goal is to really look at fairness in AI. Fairness in AI sounds pretty complicated, and it is pretty multifaceted if I think about it. Achieving fairness in AI is challenging, but necessary, and there are many different factors we need to think about. There is group fairness, which refers to ensuring that different groups are treated equally or proportionally by AI systems; it is further subdivided into types like demographic parity, which is important for making sure that the AI we utilize is balanced and doesn't embed unfairness or disparate treatment. In addition, we need to think about individual fairness: the concept that similar individuals are treated similarly by AI systems, regardless of their group membership. This can be achieved through methods such as similarity-based or distance-based measures, which aim to ensure that individuals who are similar in their characteristics are treated similarly and fairly by the AI system. There's also counterfactual fairness, which is a little more recent, and which ensures that AI systems are fair even in hypothetical situations.
Counterfactual fairness aims to ensure that an AI system would have made the same decision for an individual regardless of their group membership, even if their attributes had been different. And then there are other types of fairness, including procedural fairness, which involves ensuring that the process used to make decisions is fair and transparent, and causal fairness, which involves ensuring the system does not perpetuate historical biases and inequities. These are all frameworks for the ethical considerations in applying AI, and for how we can continue to improve the algorithms that much of machine-learning AI is built on. I think this also really points out that there is not a one-size-fits-all solution, so fair consideration of all of these factors is necessary.

So it really comes back to balancing the role of AI with human intervention and thinking about a patient-centered approach. Patient-centered communication is going to be important, and training healthcare professionals on effectively communicating AI-driven decisions will help make sure that we are not introducing bias ourselves and are truly presenting results in a patient-centered way. That will be crucial in helping patients understand and be comfortable with the use of AI in healthcare, and in fostering trust and transparency, so that we can continue to apply it in a balanced way. This will also allow greater efficiency and more time for our healthcare professionals to focus on the human side of care, fostering trust and using these efficiencies to get back to the patient-doctor rapport and relationship.

Let's look at the specific applications of AI in child and adolescent mental health. One of the primary areas of interest has been autism spectrum disorder, but there are many other conditions, which we'll get into as well. Several studies have used ECG chest straps to look at autonomic nervous system responses in patients with ASD during various tasks. Billeci, in 2018, found that, using this data, toddlers with ASD had higher low-frequency ECG power at baseline compared to typically developing children, but conversely, when a task was presented, toddlers with ASD shifted from lower-frequency to higher-frequency power. What they were able to find was that this differential in autonomic regulation between baseline and task presentation could distinguish toddlers with a probable ASD diagnosis from those without. As we know, toddlerhood is usually when some of these symptoms start to present, and early identification and detection are necessary, so this affords the opportunity to use a device to look at autonomic nervous system responses and use AI to detect those patterns.

Another interesting one was Di Palma, in 2017, which used vitals from that ECG chest belt while children played games over the course of six months. In those games, they used concepts of joint attention and imitation exercises.
Looking at various aspects of the ECG, what they found was an increase in heart rate events during socio-cognitive tasks, and, over time, an increased percentage of physiological events associated with lower RSA, which was one of the features they were examining. This allowed them to look at the response to the intervention, the game with its joint attention tasks and imitation exercises, and see which children with ASD were responding and which were not, opening the opportunity to personalize treatment recommendations.

This allowed researchers to look not just at ECGs but at EEGs as well. EEGs were used to quantify engagement and cognitive involvement, looking at variability in engagement during socio-emotional interactions. Billeci's team, in 2016, examined EEG patterns during interactions in the study and found that the AI system could detect patterns of a child's engagement in care, especially for those with ASD, identifying who was responding and who was not, as well as who needed a modification in their treatment plan. There are many other applications as well, but using EEG and ECG to detect these patterns has been a really significant focus in the literature.

Interestingly, using this approach to look at physiological signs of emotional response during various tasks has been a key feature in examining the response patterns of those with ASD. Using not just that but also sensors and facial recognition, researchers were able to identify autistic kids' responses to different emotional patterns and help them understand those emotions better.

Then there was an interesting study by Goodwin, in 2019, that analyzed patterns of physiological emotion data for children with ASD during naturalistic observations. Looking at those patterns, they could identify those who would engage in aggression one to three minutes before it occurred; 84% of the time, they could identify when an aggressive event was going to happen. This obviously has significant application in child and adolescent mental health, with the potential to aid behavioral interventions that redirect potentially aggressive behaviors, both in treatment and at home, with caregivers able to monitor behavior.

There's also a study that used a wearable ankle sensor to diagnose ASD in young kids. As you can see here, the sensor was just a simple device worn on the ankle under a leg warmer, with accelerometers and gyroscopes to detect patterns of movement. The researchers studied very young children to see if they could identify who had ASD and who did not. What they found was that those at high risk had lower motion complexity compared to those not diagnosed with ASD, and there was a very strong correlation between motion complexity and ASD outcome relative to the cognitive abilities the children were showing. Once again, wearables, ankle devices, ECGs, and EEGs have been a strong factor in the diagnosis of ASD.
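As a rough illustration of the kind of ECG-derived feature these studies rely on, here is a minimal sketch of estimating low-frequency heart-rate-variability power, assuming Python with NumPy and SciPy; the RR-interval series is synthetic, and real analyses involve careful resampling and artifact correction first.

```python
# A minimal sketch of an HRV feature: low-frequency (LF) power of the
# inter-beat (RR) interval series, estimated with a Welch power spectral
# density. Synthetic data only; not a validated clinical pipeline.
import numpy as np
from scipy.signal import welch

fs = 4.0                           # evenly resampled RR series, 4 Hz
t = np.arange(0, 300, 1 / fs)      # 5 minutes of data
# Synthetic RR intervals (seconds) with a slow 0.1 Hz oscillation plus noise
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * t) + 0.01 * np.random.randn(t.size)

freqs, psd = welch(rr - rr.mean(), fs=fs, nperseg=256)

# Standard HRV bands: LF = 0.04-0.15 Hz, HF = 0.15-0.40 Hz
lf_band = (freqs >= 0.04) & (freqs < 0.15)
hf_band = (freqs >= 0.15) & (freqs < 0.40)
lf = np.trapz(psd[lf_band], freqs[lf_band])   # integrate PSD over each band
hf = np.trapz(psd[hf_band], freqs[hf_band])
print(f"LF power: {lf:.6f}  HF power: {hf:.6f}  LF/HF ratio: {lf / hf:.2f}")
```

Features like these band powers are what a downstream classifier would actually consume; the studies described above differ mainly in which tasks elicit the signal and which model interprets it.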
You can see that these devices allow for better prediction of patterns and also help us with diagnosis. There's also an interesting study by Pfeiffer, in 2019, that looked at the use of headphones to decrease sympathetic activation in those with ASD. As we know, sensitivity to sensory input can be a significant issue, especially with sound. What they found was that over-ear versus in-ear noise-attenuating headphones led to a significant difference in physiological response, with over-ear noise-attenuating headphones being more successful than in-ear ones. Identifying that, with AI examining those response patterns, is a significant finding: it lets us use headphones in care to decrease stress levels and anxiety, especially when there's a risk of hyperacusis.

ADHD is another really strong area for the use of AI and these pattern-recognition approaches. Lukoff utilized a smartwatch application to track movement and actually give feedback on that movement to the patient with ADHD. What they found was that analyzing those movements and showing the data to the patient, almost as a biofeedback mechanism, as you can see here in the upper right, allowed the individual to modify their movement and improve their symptoms. The AI was able to identify and predict those patterns of movement and give feedback, and that actually modified the outcome: the patient could modify their own behavior.

Another interesting ADHD study again used a smartwatch to analyze movements. Data were collected for two hours a day, for three consecutive days, in a naturalistic setting, and showed that children with ADHD tend to have more variable and frequent movements than controls. This allows a smartwatch to help us identify who may have ADHD, differentiating those more complex ADHD-related movements from ordinary movement, just from the input of a smartwatch.

In addition, there's another great study that used accelerometers on wrists and ankles to look at movement patterns in kids with ADHD, both medicated and unmedicated. What they were able to demonstrate is that unmedicated patients with ADHD showed large differences in medium-intensity movements compared to typically developing patients, whereas those who were medicated showed different patterns in their low-intensity movements. Through AI, they were able to differentiate typically developing children, children with ADHD on medication, and children with ADHD not on medication.

Then, of course, there's treatment monitoring: utilizing accelerometers to monitor movement patterns and AI to identify changes in movement when a medication is introduced. Can we actually monitor whether the medication is improving their movement patterns, so we can help with treatment modifications?

There's also the use of actigraphs to look at sleep in ADHD. What they found through analysis with a neural net was that sleep duration and circadian strength measurements differed between children with ADHD and those with bipolar disorder.
They were actually able to look at circadian rhythms through this and identify different patterns for those who had ADHD, ADHD with a mood concern, bipolar disorder, and controls. This allows us to look at overlapping potential diagnoses, which we see a lot, and differentiate who may have which disorder based on the patterns on their actigraph.

Really exciting things are coming with AI. As clinicians, we always want to do best by our patients, and a lot of what we look at on a daily basis is self-report. What we're seeing through the use of these remote devices, and the use of AI to recognize these fine-grained patterns, allows us to have better detection, better treatment monitoring, and better interventions. Then, of course, there's significant progress in making our lives better through things like natural language processing and automatic documentation, so we can decrease the administrative load that gets in the way of conducting care and focusing on patient rapport, which is really what we want to focus on. Of course, there is still a lot of work to come, including large randomized controlled trials looking at truly differentiated outcomes, and navigating the ethical considerations we talked about.

As a recap: AI is vast. There are many different elements to it. It is not just one thing, and it is applied in different ways in different areas of child and adolescent psychiatry and mental health treatment. It holds great promise in assessment and diagnosis, treatment planning, and intervention, but there are ethical considerations and fairness initiatives necessary to really get the value we need from it. In addition, as we said, we need more training sets that are unbiased, increased education and training of healthcare professionals as well as our patients, and continued thinking about how we can use this in patient-centered care. Thank you so much for listening. I really appreciate it. Let me look at any questions you have.

The first question is: What AI programs do you recommend for us to use to do these things? Doesn't it take a lot of our time to feed the information about a particular patient to the AI? Great question. A lot of this is starting to look at what programs are currently out there. I don't endorse any particular one, but, for example, there are companies today that offer natural language processing and automated documentation. You don't need to feed that data; it is simply captured by listening to your session and documenting it, so that you can review it and make any modifications that are necessary. In addition, there are other programs out there that already use these smart sensors and devices and distill the information down for you through AI so you can use it in your care. A lot of the challenge, and I acknowledge this, is that it needs to become part of your workflow. Looking at your electronic health record, at what is currently included or what can be added on, is the best way to find technologies that can distill the information for you, so that it doesn't become a massive impediment but becomes something you can apply.
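As a rough sketch of what such ambient-documentation tools do under the hood, here is the transcribe-then-summarize pipeline described earlier, assuming Python with the open-source openai-whisper and transformers packages; "session_audio.wav" is a placeholder file name, and real products add consent, security, chunking of long transcripts, and clinical-quality checks that this sketch omits entirely.

```python
# A minimal sketch of the ambient-documentation idea: automatic speech
# recognition transcribes the session audio, then NLP summarizes it into
# a draft note for the clinician to review. Illustrative only.
import whisper
from transformers import pipeline

asr_model = whisper.load_model("base")
transcript = asr_model.transcribe("session_audio.wav")["text"]

# Default summarization model; long transcripts would need to be chunked
# to fit the model's input limit before summarizing.
summarizer = pipeline("summarization")
draft_note = summarizer(transcript, max_length=150, min_length=40)[0]["summary_text"]

# The clinician remains responsible for reviewing and correcting the note
print("DRAFT NOTE (review before signing):")
print(draft_note)
```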
The next question: The ways you're mentioning using computer vision imply the use of video sessions only. How do you think virtual sessions compare with in-person sessions for child psychiatry, now and in the future? In terms of telehealth, as we've seen, there was significant uptake during the pandemic and social distancing, but virtual care has continued to be something patients want, need, and use. In terms of visual recognition, there are two applications. One is, yes, in video-based sessions we can look at facial recognition: what is the rapport that is occurring, and are there any treatment plan recommendations we need to make? But there are also many apps that currently exist. If you look at advocacy sites like Autism Speaks and others, they're cataloging these apps to identify the best ones. Some of them use visual recognition to see what the emotional responses are in patients and to track them. Once again, getting that data back to you is really important, but for us as clinicians, video or virtual sessions are probably the easiest way to get that facial recognition and those cues into your daily practice today.

The next question was about HIPAA challenges and any use of chatbots in psychiatric practice. In terms of chatbots today, as we've seen in some of the EHRs, they've started to implement this through generative AI, and utilizing chatbots requires consent. With HIPAA, I think the biggest challenge in child and adolescent mental health is the differing ages of consent, especially with adolescents and privacy from caregivers, and then, of course, making sure the data is secure. For any chatbot you look at or recommend, you want to make sure it uses a HIPAA-compliant cloud server so the data is actually secured. Once again, it is important to have the conversation with the caregivers and patients you're taking care of that there are always risks to using tech solutions.

So many great questions, thank you. Let's see, there's another question here: You mentioned the high user satisfaction associated with AI-enabled virtual therapy chatbots, helpful and encouraging without being judgmental. What are the biggest patient and family complaints about AI-enabled virtual therapy chatbots at this time? I can speak from personal knowledge and also from what exists currently in the literature. There are two things. The first piece is that users do find them highly encouraging. Most of these chatbots are based on either a CBT-based format, a mindfulness-based format, or really an advocacy-based format. What do I mean by that? They ask a lot about: What's your mood? How's your day been? How are your thoughts? How are your emotions? That's CBT-based. Or: how are you checking in with yourself? So it's a lot of tactical, coping-skill-related intervention and monitoring. The piece we've heard a lot in terms of critiques is that individuals tend to hit a limit, meaning that the chatbot is only trained on so many types of responses to different scenarios. There are scenarios it has not been trained on, and it will come to a place where it does not know how to respond, which can feel very inauthentic.
In addition, some of the responses may not feel as personalized, because they are based on a training set of what to do in a given scenario. So in terms of AI chatbots, there are definitely advantages for supportive care, but there are definitely limitations in personalization, nuance, and variation.

Another question we got: Regarding chatbots for depression and anxiety, given the high user satisfaction but mixed efficacy, is there evidence for how the chatbots compare to placebo? A placebo-controlled trial of this is extremely hard to do, and I have not seen one. If you think about what it would mean, it would be a chatbot against, maybe, looking at WebMD or finding one's own information. I have not yet seen a placebo-controlled trial that really establishes what is effective or not. My view is that, as with everything we look at in mental health, there is a placebo component: an individual engaging in any kind of care can find it helpful. But you are right that we definitely need more randomized, placebo-controlled trials so we can really assess effectiveness.

We're running out of time. Thank you so much. If there are any questions I didn't get to, I would love to respond to them; feel free to reach out to me personally. I really, really appreciate your time and listening today, and I hope you enjoy the rest of the sessions. Thank you so much.
Video Summary
Dr. Monica Roots, a child psychiatrist and co-founder of Bend Health, discussed the impact of AI on child and adolescent psychiatry. She highlighted AI's role in early detection, diagnosis, intervention, and personalized treatment planning in mental health care. AI models, including machine learning and natural language processing, help analyze data, predict mental health risks, personalize treatment plans, and streamline healthcare operations.

Dr. Roots emphasized AI's potential to enhance access to mental health resources, given the shortage of professionals, and its applications in autism and ADHD treatment through wearable technology. Ethical concerns like privacy, bias, and generalizability challenges, alongside regulatory issues, were addressed. She underscored the need for AI to support clinicians rather than replace them, stressing the importance of balancing AI with human oversight.

Challenges such as algorithmic bias, patient mistrust, and data privacy require careful consideration to ensure the fairness and transparency of AI systems. Ongoing advancements and large-scale trials, alongside ethical considerations, are essential to harness AI's full potential in pediatric mental healthcare. Dr. Roots encouraged the integration of AI into current practice and future research toward enhancing treatment outcomes.
Keywords
AI in psychiatry
child mental health
early detection
personalized treatment
wearable technology
ethical concerns
algorithmic bias
data privacy
treatment outcomes