“E3 of AI”: Equity, Errors, and Ethics of Artificial Intelligence in Vulnerable Populations with Substance Use Disorder
Video Transcription
All right, aloha everyone. Thank you so much for coming. It is such a pleasure to see you all so early on a Saturday morning. It is my pleasure to introduce a really wonderful session with our very esteemed presenters, presenting on Equity, Errors, and Ethics of Artificial Intelligence in Vulnerable Populations with Substance Use Disorder. My name is Jacques Ambrose. I'm the chair of the session. I'm the senior medical director and chief clinical integration officer at Columbia. We have Dr. Colin Burke, who is the director of the Youth Experiencing Homelessness Program at Massachusetts General Hospital, Harvard Medical School; Dr. Silvia Franco Corso, assistant professor at Columbia Vagelos College of Physicians and Surgeons; and Dr. Jegede, assistant professor at the Yale School of Medicine.

So these are our conflicts of interest. As an overview, we'll break the session down into four distinct pieces that integrate together, and our hope is that you will walk away with a little bit of curiosity. Hopefully this is more of a primer to the space of AI and how it is interlacing with our clinical practice, as well as the field of psychiatry. So we'll talk a little bit about the AI framework, focusing a little more on the specific vulnerable communities that we're concerned about. We'll highlight the homeless and pediatric populations and some of the specific health inequities that we're currently seeing in substance use disorder. We'll talk about substance use and AI in individuals with disability. And then lastly, we'll have a panelist discussion and Q&A.

So these are our objectives for the session. Principally, as a business and strategy consultant, which I've been doing for a little over a decade, I'm going to focus on the applied and practical aspects of AI and technological implementation. This is far from the entirety of the picture, or of the current research in this space, but my hope, as I said earlier, is that this is the beginning of our understanding and learning together about AI, and that it sparks a little bit of your curiosity. I'm very grateful to my mentors, Dan Guetta and Tony Deer at Columbia, and Percy Liang and Andrew Ng at Stanford, all of whom provided a massive foundation for this presentation.

So we'll start off by getting the vibe of the room. Some of you may have used AI before. Some of you have never heard of artificial intelligence before and want to learn more. So just as a fun little interactive thing, you can use your phone to scan this QR code, or you can go on PollEv.com, where my username is AJA2222, and just enter two or three words. What are some of the feelings associated with AI that you've had? What are some of the applications or apps you've seen before? Any concerns or curiosity that you have about this space? Any question you may have about AI? And I realize, as we're doing this, that there are also people online participating, so it might fill up to capacity pretty quickly. So feel free to enter any words that are applicable to you. And I'll just show an example of a recent word cloud that we've used. These are some of the... Are you having a problem accessing it? Yeah, OK. Don't worry about it. It's to highlight that there's such a divergent experience when it comes to AI. Confused is one of the most common words.
This is a word cloud from one of the Grand Rounds I did on this topic, and the majority sentiment about this talk's topic was confusion; people were just so confused. Most people have heard of some of the things that you are seeing here. ChatGPT, for example, is a very common word. People are very concerned about privacy and how their data are currently being used. And I think for a lot of clinicians there is the sentiment of the unknown, and certainly being scared and distressed about what the technology is currently doing: how is it going to be implemented in clinical practice for ourselves as healthcare providers, for our health system, and certainly for our patients? And how do we maintain our role as patient advocates, to make sure that the patient experience and the clinical care are maintained and best practices are still implemented?

So I wanted to give a very, very brief AI history. It started not as long ago as one might think; it started in the '40s, actually. The initial conception of AI was as an artificial neural network. This came out of seminal work at the University of Illinois at Chicago by Dr. McCulloch and Dr. Pitts, neither of whom were physicians; they were a neuroscientist and a mathematician, looking at a mathematical model of what a neural network could be. Subsequently, in the 1950s, at a workshop at Dartmouth, the Dartmouth Summer Research Project, the focus was on AI and really on what it is that we want the machine to be able to do. And this is a direct quote: they wanted every "aspect of learning or any other feature of intelligence" to "be so precisely described that a machine can be made to simulate it." On the left is just a cartoon depiction of a neuron and its connections, and on the right is what a typical neural network looks like.

So this is one of the earliest versions of a chatbot. This is from the 1960s, by Weizenbaum. It's called ELIZA, and it was written in SLIP, which stands for Symmetric List Processor. As you can see, it has that same vibe and feel as ChatGPT: you enter something and it spits something back. And this chatbot is specifically geared towards psychotherapy. It's doing Rogerian psychotherapy, reflecting back a lot of what the user inputs. It served as a really important baseline and platform for future iterations of the chatbots that you currently may be seeing.

And this is what it looks like on the back end. When we were talking about very basic neural networks, this is a flow diagram of the keyword detection mechanism that it used in the 1960s. So for example, you see on the chart, if the keyword is "my XYZ," the response would automatically transform to say "your XYZ." And this is very much related to the foundation of Rogerian therapy, in trying to reflect back a lot of what the user inputs. A more complicated sentence, like "you are very helpful," would get broken down into core keywords using the SLIP mechanism, so "you ... helpful" would reverse back and translate into "what makes you think I am helpful." And then it goes through a ranking mechanism that decides how the script responds to the different scenarios a user might input. So this is very, very rudimentary, and you can already see how it builds on some of the earlier foundations of just a simple mathematical model.
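To make that keyword-and-transform mechanism concrete, here is a minimal Python sketch in the spirit of ELIZA's rules. This is an illustration only: the patterns, templates, and reflection table are invented for the example and are not Weizenbaum's actual SLIP code.

```python
import re

# Pronoun reflections applied to the captured fragment ("my" -> "your", etc.).
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# (keyword pattern, response template) pairs, checked in rank order.
RULES = [
    (re.compile(r"\bmy (.+)", re.I), "Why do you say your {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\byou are (.+)", re.I), "What makes you think I am {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(user_input: str) -> str:
    """Fire the first matching keyword rule and reflect the fragment back."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."  # default when no keyword fires

print(respond("You are very helpful"))
# -> What makes you think I am very helpful?
```

Notice that nothing here "understands" the sentence: the Rogerian effect comes entirely from keyword matching, pronoun reversal, and a canned template, exactly the kind of predefined-rule behavior described above.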
And then over the decades, we started having a little more development in the sophistication of AI algorithms. Throughout the '60s to '80s, we had DENDRAL, for example, which worked out molecular structures by mining data from mass spectrometry. Then MYCIN, a program developed at Stanford that recommended antibiotics based on blood infections and lab results. And then XCON, which you see on the far right, converted customer orders into parts specifications. That was really the first time, in the 1980s, that AI was used in a commercial space, and it reportedly saved the company $40 million a year.

Some of the challenge for early AI was in not having the complexity that we're currently experiencing with the current iterations of ChatGPT or the other chatbots that you see. The search space has grown exponentially, often outpacing the limitations of the earlier hardware. I put a picture of a floppy disk here because genuinely many of my patients do not know what a floppy disk is. How many people here have seen a floppy disk before? Raise your hand. Thank you, thank you, I feel very validated. But that's just to illustrate that back in the early '90s and 2000s, we were encoding information in megabytes. Now we are encoding information in terabytes, which means literally millions of these floppy disks. So the complexity of the calculations, the computation, the software, and the hardware has really evolved over the last decades, and literally within the last five years or so, computational power has caught up to some of the demands we need in order to develop sophisticated GPT models. This is why, starting in 2022 onward, you really see this explosion of AI.

So one of the questions I typically get asked a lot is: what is ChatGPT? How do you classify that? And has anyone here used ChatGPT before? Raise your hand if you've used it. So roughly about half of the room has used ChatGPT before. There are generally two main categories that you can think of for AI: strong versus weak, general versus narrow. Strong AI is what we typically think of when we think of full-on, generalized, very sophisticated, highly autonomous intelligence. This is the Terminator version of AI. Weak AI is narrow AI, and what that means is it can only operate within a certain set of predefined functions. So we're talking about expert systems, decision trees, random forests, neural networks, very much similar to a lot of the things we heard about that were developed in the '50s and '60s. ChatGPT stands for Chat Generative Pre-trained Transformer, and the reason it is the GPT revolution is that this large language model was developed and trained on already pre-existing data from a lot of different sources on the internet and from uploaded books, in different components and proportions.

So how many people think ChatGPT is a strong intelligence? Raise your hand. Ooh, we have about 10%. How many people think ChatGPT is a weak intelligence? Okay, there are about 10 people who don't know what it is altogether. That makes a lot of sense. So ChatGPT is a cloud-based, Python-run platform, a generative pre-trained transformer. It has helped countless students finish their homework the day before, and helped countless professors, myself included, grade papers, so it's just a dance.
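To illustrate what "operating within a set of predefined functions" means for weak or narrow AI, here is a toy rule-based expert system in the MYCIN mold. The rules below are invented purely for illustration; they are not actual MYCIN knowledge-base entries or clinical guidance.

```python
# A toy "expert system": hand-coded if-then rules over a dictionary of facts.
# Unlike a learned model, it cannot answer anything outside its rule set.

RULES = [
    # (condition over facts, conclusion) pairs - invented, illustrative only
    (lambda f: f.get("gram_stain") == "negative" and f.get("morphology") == "rod",
     "organism may be E. coli"),
    (lambda f: f.get("gram_stain") == "positive" and f.get("growth") == "clusters",
     "organism may be Staphylococcus"),
]

def infer(facts: dict) -> list[str]:
    """Fire every rule whose condition holds; no learning, no generalization."""
    return [conclusion for condition, conclusion in RULES if condition(facts)]

print(infer({"gram_stain": "negative", "morphology": "rod"}))
# -> ['organism may be E. coli']
```

Every behavior of a system like this is traceable to a rule someone wrote by hand, which is both its strength (interpretability) and its limitation (it cannot generalize beyond its predefined functions).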
So if you ask ChatGPT itself, it will say that it is actually a weak AI, because it does not have the flexibility to do work outside of its pre-trained capacity and parameters. And for ChatGPT specifically, depending on the version that you have, you're pretty much restricted to the text space or image space. It doesn't have the capacity to generate audio, for example, as a simple feature.

As we look at some of the different categorizations of AI itself, I'll mention the three main layers. Within AI as an umbrella, you have machine learning as a subcategory, and within machine learning, you have deep learning as a subcategory of that. This is just to familiarize you with the terminology typically associated with AI. You can think broadly of machine learning as a way to compile a lot of data, break it down into rules (truly, it's based on categorization, classification, and rules), and translate that into a prediction. An example of this: if you've used Zillow before, it is trained on a ton of historical data from past sales of homes of different sizes, different bedrooms, different bathrooms. In New York, that may be a one-bathroom, one-bed for $17 billion. So when it sees a similar one-bedroom, one-bathroom, it's going to give you a pretty similar estimate, and that's based on geographical data.

Deep learning is a little more complicated, and this is where we're really seeing the magic happen. I specifically put a hidden layer in there because, truly, right now, a lot of what's happening in deep learning is just unknown, even to the engineers themselves. What you have is input data: whatever the user inputs into the initial layer gets transformed through countless different layers (this is the transformer component of the GPT), and then you have a singular output, and sometimes multiple output layers for images, for audio, for videos. The concerning thing, and I think a lot of where the inequities that we're super nervous about come in, is that we don't even know why certain outputs get generated from a given input.

And just a quick characterization: when I'm talking about neural networks, the way you can differentiate deep learning from machine learning is that deep learning, with its really specialized neural networks, relies very heavily on multiple layers of interconnected nodes, the little neurons I showed earlier. This can represent extremely complex hierarchical structure in the data, and we're talking about hundreds or thousands of different layers. Machine learning is a little more simplified. It's a wide range of decision rules and models, but they're pretty specific, and it's not based on neural networks. Some of the other ways you can think about the difference are feature extraction (what you can extract from the model itself), model complexity, interpretability, as well as size. For deep learning, you need an immense amount of computational power; this is why data centers are being built to accommodate the computational needs, either through the cloud or through the data center, in order for the models to be trained.

So I am really speeding through this because I just want to give everyone the basic architecture of what AI is. When you hear of AI, I want you to think: is this simple machine learning, or is this deep learning? Is this something that is giving me concrete information, or is it something that's generalizable and a little bit unpredictable?
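One way to see that distinction in code: a classic machine learning model maps a few explicit features straight to a prediction, while a deep model routes the same input through hidden layers of interconnected nodes. Here is a minimal sketch of the Zillow-style example using scikit-learn; the housing numbers are made up purely for illustration, and a real deep model would use far more layers and data than this toy.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Toy housing data: [bedrooms, bathrooms, square feet] -> sale price.
# All values are invented for illustration.
X = np.array([[1, 1, 600], [2, 1, 850], [3, 2, 1400], [4, 3, 2200]])
y = np.array([500_000, 700_000, 1_100_000, 1_800_000])

# "Machine learning": a simple, interpretable model over explicit features.
linear = LinearRegression().fit(X, y)

# "Deep learning" in miniature: the same features passed through two
# hidden layers of interconnected nodes before reaching the output.
mlp = MLPRegressor(hidden_layer_sizes=(16, 16), solver="lbfgs",
                   max_iter=5000, random_state=0).fit(X, y)

query = [[1, 1, 650]]  # another one-bed, one-bath
print(linear.predict(query), mlp.predict(query))
```

With the linear model you can read off exactly how each feature moves the prediction; with the hidden-layer model you cannot, which is the interpretability gap the talk keeps returning to.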
So you can have some common vocabulary when people, patients, healthcare providers, or fintech bros come to you and talk about really exciting startups. It at least gives you some understanding so you can ask more discerning questions.

So, has anyone here used any version of AI in a health context? We have, okay, about 10% of the group. So I want to show some data available globally. This is from PwC, a consulting firm, and the question asks: would you be willing to talk to a computer or a robot to answer a health question, diagnose your conditions, recommend treatments? Not surprisingly, it's the younger generation; you can see in the column there that it's 18 to 24. EMEA stands for Europe, Middle East, and Africa, so this is global data. What do you think it is for the middle age group, the 40s and 50s? Higher or lower? Lower. Anyone think higher? There are some very optimistic people. I love it, I love it. It is a little bit lower, but this is what I wanted to show: it's not like 10 or 20%, it's half. Half of people in their 40s and 50s are already very, very willing to engage with this technology. What about 55 and up, the 60s and 70s and beyond? Higher or lower? Lower. What do you think: 10%, 20%, 30%, 40%, 50%? 10? 30? So guesses are between 10 and 30, okay? It is 45%. And this was the really shocking data for me: even in the later generations, the more seasoned folks, and this is global data, they're also pretty willing to consider AI as a technology for engaging with healthcare.

And I often couple it with these more specific questions asking: what are some of the biggest advantages of using AI in healthcare? This is for the younger generation, and I'll move to the 55-plus. But just to differentiate, it is not night and day. It's pretty close together. We're talking about a technology that has the potential to really close the access gap between the generations. The thing I'll highlight for folks here is that about one in five people think AI will make fewer mistakes than healthcare professionals. That was actually very surprising to me, because oftentimes the main concern for a lot of people is the fidelity of the algorithms and how they are being used. But as you can see, the majority of folks think it will improve access and be able to give more accurate diagnoses. And the part that often surprises a lot of healthcare providers: people are actually really, really interested in the ability of AI, as a technology, to zoom in on specific populations. This is a component of precision medicine: zooming in on specific populations that oftentimes have been ignored or not paid much attention in traditional medicine. We're talking about marginalized communities, actually. There's a lot of interest in AI looking at specific populations that have not historically been represented in the research data, and in how we can tailor specific treatments that are better suited to that target population.

So just as a primer, I want to underscore this graph, not necessarily to frighten people, but to highlight that AI as a technology is not in some distant future. We're not even talking about 10 or 20 years. We're talking about next year. We're talking about right now.
And the hope is that as you're engaging in healthcare in whatever context, as a learner, as a consumer, as a practitioner, you'll be a little more mindful and curious about how this technology, just as many other technological developments, like the EHR, have influenced the field of psychiatry. I will hold my opinions on Epic, but everyone has experienced EHRs now. And I think the point is that the technology is very, very much here. So along that same lens of how we zoom in on some of the inequities that may be present in AI as a technology, I've asked my colleagues, as currently practicing addiction psychiatrists, to highlight for the specific populations they work with: What are some of the things they worry about? What are some of the things they really want the technology to do? And what are some of the things that we, as healthcare providers, should be mindful of as we enter a space where our patients are regularly using it? So with all that aside, I'll ask Dr. Burke to come up and talk about his phenomenal work in substance use and homeless youths. Please give Dr. Burke a round of applause. Thank you.

Okay, thanks so much. Can everyone hear me okay? We're good, okay. I will echo: thank you for being here on Saturday morning. It's phenomenal; truly, I admire everyone for being up and here on a Saturday morning. So I am going to take us from this beautiful technological utopia, the theory, which is magnificent, and bring us right into the reality on the ground in clinic. I apologize, it's Saturday morning; by definition, we're not in clinic, and I hope not to cause too many flashbacks or bring people back to what we came here to escape. But my hope is that we can inform some of the work and really think about how these technologies might apply to young people experiencing homelessness, or similar marginalized, traumatized populations, specifically in the realm of substance use disorder treatment.

So I'm a child and adolescent and addiction psychiatrist in the Mass General system, and I'm the director of Mass General's Youth Experiencing Homelessness Program, which is a combined clinical, research, and teaching program. It's a partnership between Mass General Hospital and Bridge Over Troubled Waters, which is a fantastic, well-established, longstanding community agency in the Boston area that provides a whole range of services to young people experiencing homelessness. We're really excited to represent some of our work here and think about the potential applications of AI, and about some of the things that give us pause for this specific population and for similar populations of marginalized and traumatized young people.

So I'm going to start, again, just to get the feeling of being in person and to try to marry some of this really beautiful theory to what happens on the ground, with a case. I'll go through it quickly. Again, apologies in advance if this causes you anxiety, as it does for me. I'm going to give a case example of a young woman of color, Ella. This is a composite case, so it's anonymized, but I did not have to search far for details to add in; this is a pretty standard case for us in our clinic. So: a 21-year-old young woman presenting to Bridge Over Troubled Waters for a psychiatric evaluation. She had recently been living with her boyfriend.
She had to leave pretty suddenly due to intimate partner violence and is staying in an emergency shelter at Bridge Over Troubled Waters. And luckily, one of their fantastic staff people said: hey, have you ever met with a psychiatrist? Would you like to meet with one? We're noticing some symptoms, and maybe some substance use; we think it would be useful. She's seen multiple psychiatrists in the past and has been diagnosed, in her words, with everything: bipolar disorder, major depressive disorder, ODD, ADHD, PTSD, you name it, right? Most of these have been in one-off settings, sporadic engagement with care, inpatient or outpatient, but no long-term attachments to treatment figures.

Born in Haiti, she moved with her family at age two to the United States. The father left the family relatively quickly. Her mother had a series of boyfriends, and unfortunately Ella experienced sexual abuse recurrently from boyfriends in the home, and she was ultimately removed from the home by the child welfare system, DCF in Massachusetts, due to maltreatment, neglect, and substance use in the home. She describes her childhood generally as just chaotic. And I put these details in there because this will actually come back when we talk about AI and some of our concerns, right? Folks here are clinicians or researchers or interested parties; many of these things may ping in the back of your mind: oh, how does this interface with technology? What is the meaning of this? And I'll come back and talk about that, okay? I promise I'm not just giving a case for no reason.

So Ella lived in various foster families and group homes. She said she just always ran away, tried to make connections with family at different times, never really had a stable attachment figure growing up. She has a chronic history of affective instability, reactivity, fights with family and peers. And regarding her recurrent psychiatric hospitalizations, she just says: they started sectioning me (which is involuntary hospitalization) every time things got heated. It kind of gets to the point where the family, the system, doesn't know what else to do other than activate the systems that exist. History of elevated, irritable moods and racing thoughts lasting up to three days at a time, somewhat limited sleep, no grandiosity; baseline elevated impulsivity and erratic behaviors, but no episodic increases during these periods of time, and no increase in goal-directed behaviors. This one bullet point could be a three-hour lecture on the challenges of making a good diagnosis in these settings, and I'll talk a little bit about that.

Medications: she's been on an SSRI, second-generation antipsychotics, other mood-stabilizing medications. She can't really recall which ones they were, for how long, or what the effect was, although "maybe some of them made me a little bit worse." Chronic history of suicidal ideation during times of distress (this will come back as well) and non-suicidal cutting in the past. History of suicidal behaviors of unclear lethality: walking too close to the train, drinking in risky situations, crossing traffic without looking. No current active suicidal ideation. She smokes cannabis daily, usually vapes, sometimes smokes the flower or plant, or uses waxes, dabs, resins. You could take a college-level course in the different formulations of cannabis just from talking to many of the young people we see.
Binge drinking one or two times per week, often to blackout, and this often precipitates intimate partner violence, affective instability, and things starting to blow up very quickly. No opioid, stimulant, benzodiazepine, or nicotine use; I'll come back and talk about that as well. Not opposed to therapy, but it takes a long time: you get on a wait list, and by the time you come up on the wait list, maybe you're living with a different family, maybe it's just not the right time, maybe you've got a job interview or things are blowing up. She hasn't been able to make a stable attachment. And despite all these obstacles, she graduated high school, she's had multiple jobs in retail, she's actually quite intelligent and high functioning when she's able to access those skills, but she tends to be interpersonally reactive, gets into arguments with coworkers and customers, ends up losing jobs and then getting rehired and promoted, and the cycle continues. Interested in potentially starting a family once things feel stable enough. So again, any of these bullet points could truly turn into an interesting hour-long conversation. But I highlight these points because this is a standard psychiatric intake in our clinic, and I think there are so many facets in which we can think about the potential utility of AI as an augmenter to our treatment, and about some of the concerns and pitfalls. We'll talk about that.

Zooming out slightly, I want to give just a little bit about the population of young people experiencing homelessness. Since I have the opportunity, I always try to raise awareness and have people thinking about this population. So first, homelessness is shockingly common among young people. In a large-scale, nationally representative study from 2018, of people ages 13 to 17, 4% experience homelessness in a given year; of those 18 to 25, 10% experience homelessness in a given year. That's shocking. As a child and adolescent psychiatrist, that's more common than many of the common disorders that we spend a lot of our time thinking about, right? OCD, ADHD, bipolar disorder. This single feature is more common than those, not to put any rank order on the importance of these conditions, but it's just overlooked, I think.

The risk is not equally distributed: youth of color are significantly overrepresented among young people experiencing homelessness. You can see the numbers here; these numbers are for young people ages 13 to 25, and when you get to the 18 to 25 group, the numbers go up for each of these groups. And there's a real intersectional lens as well. LGBTQ+ youth are over twice as likely to experience homelessness as their non-LGBTQ peers. Youth identifying as LGBTQ+ and black or multiracial have among the highest rates of homelessness of any population of young people. Unmarried parenting youth are over three times as likely to experience homelessness as non-parenting young people. So you just think about this intersection of marginalized identities; it really paints a picture, I think, of the type of young people that we're working with.

Common reasons for homelessness: I always talk about this because it's a really important lens for us to have on this population. Family conflict is a major one.
So: conflict around sexual orientation, gender, school problems, pregnancy, substance use, whether among the young people themselves or among parents, guardians, or caretakers in the home. Maltreatment is, unfortunately, ubiquitous: physical abuse, sexual abuse, neglect. It is exceedingly uncommon that I would do an intake where one of these things doesn't feature significantly in the early developmental history. Systems involvement: there's a really well-documented pathway, unfortunately, from the child welfare system into homelessness, and from the criminal justice system as well. So homelessness in and of itself is an adverse developmental event.

I think one other important top-line takeaway is that young people who experience homelessness have a greater than 10 times higher early mortality rate compared to their peers who do not experience homelessness. Shocking and tragic. And the two leading causes are suicide and overdose. So that puts psychiatry squarely front and center in dealing with and helping to manage the challenges faced by this population. Risk and retraumatization: it's not only the experience of trauma coming into homelessness, but then, thereafter, these recurrent cycles. One in five young people experience sex or labor trafficking while homeless, and there's good data that much of this happens within the first 48 hours. Just really, really high rates of victimization and retraumatization. And there's limited data on longer-term outcomes: educational and occupational attainment, housing stability in adulthood, family and social life, satisfaction, well-being, thriving. It's a very different population from what we think of as the typical adult population chronically experiencing homelessness. The experience of homelessness in young people can be quite transient, there's real heterogeneity in terms of pathways in and pathways out, and a lot of this is just not well characterized.

In terms of diagnostic makeup: this is a study that we did in our group of 140 young people just walking in the door at Bridge Over Troubled Waters. We did the MINI, which is a structured psychiatric assessment, and a range of other psychometric scales, and we did a cluster analysis looking at how these diagnoses interact and overlap with each other and confer risk or resilience. We found four clusters in our group. On the far right, no diagnoses, which maybe means "I stared at the iPad for a half hour and pressed random things or just pressed next." The second group, moving to the left, is marked by high cannabis use disorder; that blue line there is cannabis use disorder. Marching one further to the left, a group marked by high major depressive disorder and relatively low rates of other things. And then the furthest left, this is what I call the "hot brain" group; think about that case, Ella, that I presented. Very high rates of major depressive disorder, very high rates of cannabis use disorder, and high rates of almost everything else; a very mixed bag of co-occurring psychiatric conditions. I'll point out here also, and I didn't put the single diagnoses here, that when we look at the common psychiatric and substance use conditions in this population, it really mirrors the relative frequency of these conditions in non-homeless young populations.
Cannabis use disorder, alcohol use disorder, mood disorders, ADHD, PTSD, anxiety disorders: this is not a categorically distinct group of young people. It's just that these common disorders are much more common in this population, and the ways in which they overlap, and their severity, are much greater. And that's a really important point because, again, that's more in the realm of me advocating and raising awareness for this group of young people, but I think it also has implications when we talk about AI.

So now let's apply some of this thinking to the conversation about AI tools and what the opportunities are here. As I was thinking this through, I thought of four spaces where AI has the potential to really augment our work and help improve our care for these groups of young people. The first is access. As Dr. Ambrose mentioned, access is the number one thing, right? Access to subspecialty substance use disorder care is low across the board, and even lower for marginalized and traumatized groups of young people. There are evidence-based, CBT-derived, manualized treatments available for substance use disorder in young populations; it's just really hard to get access to them. It's really hard to get high-fidelity, good-quality, skills-based treatment. Medication-assisted treatment, we can talk more about that; that's interesting, and there may be applications in terms of decision trees and algorithms. And then, especially in this subset of young people, there's really good data showing the impact of positive mentorship, pro-social attachments and supports, and recovery-based community spaces, and we can think about how AI could potentially help make those linkages, triage, and provide resources and supports.

Second, disparity in rigorous psychiatric and substance use disorder diagnosis. My trainees hear so much about this, because it's something I think is really, really important. I think there is a significant disparity in good psychiatric diagnosis and formulation, especially among marginalized and traumatized populations. You can see it in the case I presented: Ella shows up 20 minutes late because the train broke down, she's got to leave for a job interview, your intake is suddenly 30 minutes, you're putting out fires left and right. Each subsequent time, there's something else coming up that makes it really challenging to sit down and do a really good, thorough diagnostic assessment. And this is why we see... I remember distinctly, as a second-year resident in the emergency department in the middle of the night, thinking: it can't be possible that all of these people have schizoaffective disorder. The rates just can't be that high. But, with good intentions, there's a high level of diagnostic imprecision, of provisional diagnoses, of prescribing off-label, which in many cases is really well-intentioned, to allay some of these really severe symptoms. And I very strongly feel that we need to be making really high-quality, really thoughtful diagnoses and formulations. It's more than just academic, right? This helps us understand mechanistic linkages and specific targeted interventions.

Another challenge that I think AI might be able to assist with: there's limited time in the office, and there are conflicting demands, right?
And especially with someone like Ella, with all of the disrupted attachment, with all of the institutional transference, with all of the feelings that she might come in with about figures of authority, or "the white man sitting across from me in this office who I've never met before," it takes quite a bit of work to do really good, attuned engagement and attachment-based treatment, even in psychopharmacological treatment, right? And the developing brain: this is a very common refrain in these types of talks, but the limbic and mesolimbic structures, the reward-driven pathways, develop much earlier than the top-down cortical, judgment and inhibitory pathways. So even in the best cases, when we're trying to teach skills related to substance use disorders, you should expect that you're going to be talking about these things over and over and over again, continuing to reinforce; that's just baked into this treatment.

So then how might AI fit into some of these areas? Access, I think, is one of the great promises: access to really rigorous, high-fidelity, high-quality therapy and psychosocial treatments, and medication treatments as well. I could imagine a role for out-of-the-office, interactive, diagnostic, chat-based tools that might help us. Hey, when you get some time, when things aren't blowing up, sit down, take some time, go through these questions, and have some breakouts; it seems like you're getting a little overwhelmed, let's break out for a two-minute breathing skill, or something like that. Allowing people to access these things out of the office, so that we get higher-quality data that we can rely on and really build our clinical and research work around. Oh, and disclosure as well: young people may be more likely to disclose things, especially related to substance use and risky sexual behaviors, to an anonymized collection method rather than to an in-person treater. And then, with that limited time in the office, I think one of the great potential areas is between-session reinforcers and supports. How are you doing? What are you going through right now? Let's break out and practice this skill. Even just chat-based, text-based supports. I've seen presentations on many of these models, and I do think it's a potentially promising area, and it may then feed back to the team: this is what this young person's been dealing with, this is what the psychiatrist might help with, this is what a case manager might help with, this is what a therapist might help with, and here are some emerging trends that we're noticing.

Now the concerns, and I'm sure there may be many others. The first that always comes to my mind is safety, right? It's 2 a.m., someone's accessing a text-based AI interactive app, and something around suicidality or imminent risk comes up. Who owns that information? Who does it get fed back to, and at what time? What are the methods in place to ensure safety? And there are whole industries, I think, grappling with this, with some relatively interesting and thoughtful responses that we may talk about if it comes up.
Next, the quality of teaching and support. I really do think, and I don't know what the word is, maybe this is human-centric thinking, but a human therapist, or even a psychiatrist or psychopharmacologist, provides a lot of intangibles that I'm not sure AI can capture and provide. Humor, surprise. The ELIZA that Dr. Ambrose showed earlier, and a lot of the early AI text-based supports that I'm seeing, are: "I'm so sorry you're depressed. It's so hard when your parents aren't listening." It's like, my goodness, that's me on Friday at four o'clock when I'm trying my best to hang in there, you know? But at our best, it's surprise, it's humor, it's an unexpected response, it's a moment of attachment. And especially in this type of work with the types of young people I've talked about, a lot of the therapeutic process is around: there's a person here who's going to sit with me and weather this with me and understand what I'm going through, and it's not always going to be easy. All of those really important intangible, psychodynamic things. And I think of Harlow's monkeys: the cloth-and-wire AI therapist, how far will that take you? Surely better than nothing, but I do think there are important differences.

And then, longer term, the concern that leads me to is the development of a two-tiered system. Young people who are marginalized and traumatized have the least access. Are those the people who are going to end up most often with an AI-based treatment method? And is it going to be like we see in many other areas, where people with more resources and more access can pay more for that in-person type of treatment? And might that actually be flipped from what is needed therapeutically, or from who would have access to what types of treatment if we were to design it top-down? So I'll stop there. I really appreciate your time. Thank you.

Thank you, Jack and Colin, and thank you, everyone, for coming. So I'm going to present a little bit of a gloomier picture, and I'm sorry about that; blame it on Jack and Colin. My name is Oluwole Jegede, and I'm going to be presenting on health inequities in AI, which is probably not as upbeat as my earlier speakers. Today, I'll focus on shifting racial trends of substance-related mortality, I'll contrast health inequality and health inequity, and I'll talk about the pre-existing social context of AI, where we all work and where patients are seen, and also the war on drugs. I'll talk about novel technologies and inequities, and then AI's vulnerability to perpetuating racial inequities. And finally, we'll talk about the application of an equity-based lens to AI development and deployment.

It's helpful to start from what we're dealing with right now. Nine million people aged 12 and older presented with opioid misuse in the past year, according to NSDUH 2020; this is the latest data. About six million people had a diagnosable opioid use disorder in the past year. About 700,000 people used illegally made fentanyl, and about a million people misused fentanyl last year. As of the 12 months ending January 2020, we had about 72,000 deaths due to drug overdoses in general, and you'll hear me talk about opioid use here as a proxy for drug use generally.
The reason is that most drug overdose deaths, 75 to 80%, relate to opioids. In the 12-month period ending June 2023, we had 110,000 deaths. And as if this were not bad enough, put it in the context of race. This is a study by Friedman and Helena Hansen that came out in JAMA a couple of weeks ago. They used Case and Deaton's framework of deaths of despair. What you find, among the contributors to death in 2010, 2019, and 2022, is an unmistakable increase amongst American Indian and Alaska Native and black Americans, as you can see in these figures.

This is a now very common, very popular graph showing the three waves of opioid overdose deaths in the country. The first wave started in 1999, the second wave in 2010, and the third wave in 2013. But we now have a fourth wave, and I'll talk a little bit about that in relation to my work. What I want to point out here is that in 1999, the percentages of overdose deaths amongst black, Native American, and white Americans were similar. But what happened about a decade later was an unmistakable separation between white and Native Americans on the one hand and black Americans on the other, and this was in the context of prescription opioid overdose deaths. In 2010, heroin started to replace prescription opioids, and what happened then is that the illegal drug became what was in the streets, what everybody was now taking, because of the crackdown on prescription opioids. And then black overdose deaths started to exceed the 1999 numbers, to the extent that in 2016 there was a 42% increase in black overdose deaths in a single year. What we have now, in 2023, is that Native Americans die of drug overdoses at 1.8 times the rate of white Americans, and black Americans at 1.4 times. In my state of Connecticut, where I work and live, fentanyl is also at the top of that ranking. People who had been stably using, say, crack cocaine or cocaine for 20 or 30 years and did not die, did not have any overdose; what we're seeing now is that when fentanyl is mixed in, these are people who had no idea what they were using, and now people are dying more. This is Connecticut data.

Now, I say all this to give a little background to what I really want to talk about, which is health inequity and health inequality. We need to make a distinction between these terms, because then we have a framework when we talk about policy: health inequality is different from health inequity. Health inequality is any particular type of difference in health in which disadvantaged social groups, whether poor, racially or ethnically minoritized people, females, or other groups who have persistently experienced social disadvantage or discrimination, are systematically likely to experience worse health or greater health risks than more advantaged social groups. That's inequality. Health equity, on the other hand, is the absence of disparities in health, or in the social determinants of health, between groups with different levels of underlying social advantage or disadvantage. Health equity focuses on the elimination of health disparities, and the operationalization of health equity implies attention to ethical principles, social justice, and human rights.

I showed my eight-year-old this picture. Well, I didn't show him; he came over while I was working on this, saw it, and asked me one question: why, dad, why do we have this fence?
And I looked at him and said, well, you're probably going to be a psychiatrist. Equality is different from equity. From a practical standpoint, equality says let's do the same across the board. That may be good, but it leaves out the fact that we need to individualize programs; we need to individualize interventions. Now, when you go on to look at what inequality really is: any difference at all in health between a minoritized population and a non-minoritized group is inequality. However, when this difference is due to the ecology of our healthcare system, environmental factors, discrimination, bias, stereotyping, or uncertainty, it's more accurate to refer to it as inequity.

And why is this important? I'll give an example. One of the major legislative items last year that was really great was the elimination of the X waiver: you no longer needed an X waiver to prescribe buprenorphine. But that was an equality-based decision, because it's not enough to say you don't need an X waiver anymore when every doctor works on the best side of town and no one is prescribing for people who are poor, who really need it, out in the country. So that's an example of how equity is different from equality.

So I talk here about the pre-existing structural context. AI works within an already established system of healthcare. Substance use disorder treatment reflects our own social intentions and beliefs. There's the war on drugs, with its preference for abstinence-based, moral-failure models of addiction treatment, which imposes a deviance model on people who use drugs, perpetuating stigmatization and marginalization. In other words, we are working within a system that is already divided, a system that has already marginalized its people. So it's like a garbage-in, garbage-out situation: AI works with source data that already exists within this structural context. Look at the war on drugs. The war on drugs is actually a war on people. It started in 1880 with the Angell Treaty, which dealt with the rules around opium importation from China. You can see that the war on drugs, the war on people, is the very race-based context on which the drug treatment system is built.

The deployment of AI is increasing rapidly in every medical field. In the field of addiction, AI has been deployed to facilitate early detection of substance use disorders, the discovery of potential new treatments, customized approaches, digital applications, and other health technology options. AI has also been used for real-time monitoring of clinically relevant processes, cravings and withdrawals, peer support, and, in addiction research, data acquisition, processing, and management. But I want to sound a note of caution here: the implementation of novel technologies within that pre-existing, historically unequal system can worsen outcomes. There's a very interesting theory called fundamental cause theory, popularized by Link and Phelan in 1995: health inequities persist despite advances in health technology. Despite our advances in health innovations, what we find is that those in higher socioeconomic groups tend to do better whenever we have advances in technology, and we have to be careful here.
We have to be cautious of the fact that, even though Colin was telling us about the great things AI can do, and this is true, AI can also, in fact, perpetuate disparities. Take, for example, stigmatizing language. We know that stigmatizing language is very (whoa, five minutes) very prevalent in our healthcare, in our EMRs, and when you think about large language models that depend on this source information, it's more or less garbage in, garbage out. So what we see is that stigmatizing language continues to yield racial disadvantages for black people and becomes a source of real racial disparities. I need to fly; I still have a few things to talk about.

So LLMs, large language models, are trained on existing data. They utilize complex EHRs as their source, and human biases are carried over into the outcomes reported by AI. I want to give an example here of perturbation approaches in trained AI models. What these researchers did, in a mortality prediction task, was add the word "Caucasian" to the model's input. The output did not change when you add the word "Caucasian," but when you add the word "Black," the prediction decreases by about 16%.

And I wanted to talk about novel technologies. Buprenorphine, psychedelics, medical marijuana, semaglutide: these are all, in one way, shape, or form, novel technologies. Buprenorphine has been here for over 20 years, and we still see a real disparity in buprenorphine prescriptions: you're more likely to get buprenorphine if you're white or if you have private pay or self-pay. What about psychedelics? 80% of psychedelic research has been in white people, so it's much more difficult to make statements that generalize to the larger population. Medical marijuana: we're moving medical marijuana now from Schedule I to Schedule III, and in New York City, for every 10% increase in the proportion of black residents, a census tract was 5% less likely to have a certifying provider. What about semaglutide? Of the people who are FDA-eligible to be prescribed semaglutide, the GLP-1s, 57% of black and 55% of Hispanic eligible individuals are either uninsured, on Medicaid, or have low family income.

Why is AI vulnerable? Healthcare systems are set up on an unequal premise, to produce unequal treatments. There's a white normativity in our data sets; in fact, there's a wide swath of the public who are not involved in those data sets at all. Look at people from Africa and other places in the world who are not included in our data. And AI and digital approaches may not capture cultural nuances that are necessary for effective treatment.

Over the next three slides, I'll talk about what I think we can do. AI algorithms must recognize social determinants. We have to develop strategies to identify and eliminate stigmatizing language in the data used to train future AI models. We also have to recognize the potential negative consequences of connecting social determinants of health to health outcomes. We have to partner and collaborate with people who use drugs, community leaders, clinicians, policymakers, and researchers to improve AI algorithms. We need to promote health equity during all phases of AI development, ensure transparency and community engagement, and establish accountability for equity and fairness in health outcomes. And finally, I think we must acknowledge the vulnerability of AI to perpetuating inequities.
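One concrete way to act on that last point is the perturbation approach described a moment ago: hold a clinical note fixed, swap in a single demographic word, and check whether the model's prediction moves. Here is a minimal sketch of such an audit; `ToyModel`, its `predict_mortality()` method, the note text, and the numbers are invented stand-ins, not the actual study's code or data.

```python
# Sketch of a perturbation audit for demographic bias in a text-based
# clinical prediction model. Everything here is illustrative only.

class ToyModel:
    """Stand-in for a real trained mortality model."""
    def predict_mortality(self, note: str) -> float:
        # A real audit would call the actual model's inference here.
        return 0.30

NOTE_TEMPLATE = "62-year-old {descriptor}patient admitted with sepsis."

def perturbation_audit(model, descriptors=("", "Caucasian ", "Black ")):
    """Score notes that are identical except for one demographic word."""
    scores = {}
    for descriptor in descriptors:
        note = NOTE_TEMPLATE.format(descriptor=descriptor)
        scores[descriptor.strip() or "(none)"] = model.predict_mortality(note)
    return scores

# An unbiased model should return (near-)identical scores across variants;
# in the study described above, adding "Black" shifted the prediction by
# roughly 16% while "Caucasian" did not move it.
print(perturbation_audit(ToyModel()))
```

The design point is that the input differs by exactly one token, so any gap in the outputs is attributable to the demographic word alone.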
We have to engage culture in AI development (that's going to be very challenging), center anti-racism in clinical care and addiction research, promote equity in AI care guidelines, structure systems to encourage equity-based payment systems, and make programmatic efforts to improve racial equity in the development of the addiction workforce. In the words of Jesse Ehrenfeld: whatever the future of health holds, patients need to know there's a human being on the other side helping guide the course of their care. Above all else, healthcare AI must be designed, developed, and deployed in a manner that is ethical, equitable, responsible, and transparent. Thank you for your attention. Thank you.

Hi everyone. I'll try to summarize my presentation so we have room for questions and discussion. Can you hear me okay? Okay, I'm going to talk about AI platforms for individuals with disabilities and substance use disorders. I'm going to start with a page that I received while working in the psychiatric emergency department: a 67-year-old woman with a frontal lobe resection due to a tumor, a glioblastoma multiforme. She has expressive aphasia. The husband gets frustrated talking to her. She fully understands everything I say but cannot express what she wants to say back. So basically, she's a woman with Broca's aphasia: she can understand everything, but she cannot communicate by words or by typing. She was extremely frustrated, distressed, and sad, and so was the husband. And the emergency physician wanted me to talk with her, and I did. Six years of medical school in my home country, Colombia. Four years of residency. Three years of fellowship. I was not prepared for this moment. I did not know what to do.

This is an extreme case, but the truth is that disability impacts all of us. Disability involves physical, cognitive, sensory, and mental health conditions that impact our daily activities. But it's not only related to physical limitations: disability is also shaped by the environment and by the opportunities that people with impairments have to participate in society. There are six standard questions about disability in national surveys: questions about difficulty hearing, difficulty seeing, cognitive difficulties, and difficulty walking, dressing or bathing, or doing errands. And in the largest telephone health survey, in 2019, covering 400,000 people, one in four non-institutionalized adults had some type of disability. That's a lot. The most prevalent types of disability were mobility and cognition, followed by difficulty with independent living, hearing, vision, or self-care.

Having a disability creates a lot of challenges in a person's daily life, right? From transportation, where even getting to a bathroom at a restaurant is a challenge. Social stigma and discrimination that lead to isolation. Limited access to health care, whether from physical barriers, accommodations, or providers trained to address their unique needs. Limited opportunities for education and employment, which lead to financial hardship. Individuals with disabilities also have higher rates of chronic health conditions and higher rates of mental health conditions. And, not a surprise, they also have higher rates of substance use disorders.
Just one example: the 2019 National Survey on Drug Use and Health found that individuals with an impairment had almost double the odds of having a substance use disorder compared to those without an impairment. And when we think about treating the addiction, we sometimes think about just addressing the tip of the iceberg while neglecting all the elements that can cause and perpetuate the addiction. Add to that the challenges of being someone with a disability. So when we think about treating addiction in this population, we have to think about the social drivers of health.

And how can AI make life easier for someone with a disability? Some examples are accessibility tools, like voice recognition software that lets someone with a disability talk to and manage a computer. Sensory augmentation or substitution: for example, AI cochlear implants, or glasses that have image recognition or voice recognition. There's the option for prosthetic limbs to have AI; some prosthetic limbs use AI to adjust their grip depending on whether the person is lifting an object, running, or walking. Communication aids, for example, devices that recognize gestures: if the person has a disability and cannot do sign language and cannot talk, these machines can identify gestures and translate them into voice or actions. Or employment opportunities, for example, AI-driven job matching platforms or personal assistants.

This already exists. This kid has cerebral palsy. He's nonverbal, and he cannot type. He uses a machine called a DynaVox where he can choose, for example, "I want," and there are fruits, or there are actions: "I want apple," "I want dim light." The DynaVox says this out loud, and then a machine like an Alexa or Echo hears what the DynaVox says and turns it into an action. "I want the dim light," and the Alexa recognizes it and dims the light. That can give people with disabilities a lot of independence. There are also brain implants that can recognize neuronal activity and translate it into voice or actions; with brain implants, people are sometimes able to move a computer cursor. This is already out there; there are companies and universities that have already implanted these types of brain implants. Or live-caption glasses for people with hearing impairment. Or brain implants so people who are paralyzed can walk.

So these are some examples of how AI can facilitate their lives in general. But what about in addiction? Some examples are therapeutic tools like virtual reality, which can provide an immersive experience, a simulation: for example, being in a bar, where the person can practice coping skills, whatever they learned in CBT. Virtual support networks and chatbots, which are not a replacement for human providers but can help with around-the-clock support for individuals in recovery. Virtual assistants: when you go to an appointment, you have to repeat all your medical history, allergies, et cetera, and even more so in a psychiatric evaluation; it's a lot. Imagine having a virtual assistant that can provide the provider with that information, that can summarize, relate concerns, and transcribe what's happening. Remote monitoring, so wearable devices. One example is SmokeBeat, a smartwatch app that, I don't know how, is able to recognize the hand-to-mouth movement.
I wonder if some people have to wear two smartwatches, one for each hand. And all of this data can be used to develop newer technologies.

We have to assume that there will be flaws. I don't know about you, but when I call a company and I'm dealing with a robot, I find myself dialing zero, zero, zero, zero while yelling at the phone, "talk to a representative," and legend says that the louder you yell, the faster you get to a representative. Those technologies have been around for a while, so imagine newer technologies targeting these very complex needs. There are going to be a lot of challenges.

One example is under-representation in training data. Machine learning systems rely heavily on training data, and if specific populations, like those with disabilities, are not included, there will be bias. For example, voice recognition systems that were not trained on people with speech difficulties will fail for them. Not a disability, but I clearly have an accent, and I don't get along with Alexa because she cannot understand what I say 60% of the time. Then there is discrimination in algorithmic decision-making: for example, job matching tools may treat extracurricular activities or previous work experience as indicators of success, and individuals with disabilities may not have had the opportunity to build that kind of record. There is over-reliance on AI technologies: if they malfunction, that's a problem. And there is cost and accessibility, which is huge. I myself am hearing impaired. I wear hearing aids, and, I don't know if people here wear hearing aids, the warranty lasts three years, and they are basically designed to fail every three years. So every three years I have to buy $6,000 hearing aids. Over the span of 50 years, that's about $100,000. And this is a rather simple device, so imagine more complex devices.

So we have to make sure resources are allocated to researching and developing these technologies, and that there is interdisciplinary collaboration between researchers and also policymakers. And it is really important to involve people with disabilities in the design of these technologies from the beginning. It should be co-design, not waiting until the piloting stage. Having people with various disabilities involved will facilitate the development of these technologies, following the disability mantra: nothing about us without us.
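To make the under-representation point concrete: one simple way to surface this kind of bias is to evaluate a model per subgroup instead of reporting one overall score. The sketch below is hypothetical (the group labels, data, and helper names are illustrative, not from any specific system); it computes a speech recognizer's word error rate separately for each speaker group.

```python
# Hedged sketch: report word error rate (WER) per speaker group rather than
# one overall average, so gaps for under-represented groups become visible.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[-1][-1] / max(len(r), 1)

def wer_by_group(samples):
    """samples: iterable of (group, reference, hypothesis) triples."""
    scores = {}
    for group, ref, hyp in samples:
        scores.setdefault(group, []).append(wer(ref, hyp))
    return {g: sum(v) / len(v) for g, v in scores.items()}

# A hypothetical result like {"typical speech": 0.08, "dysarthric speech": 0.41}
# is exactly the failure mode described: the model was not trained on everyone.
```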
Wow, that was like a star-studded panel. We have a couple of questions that were submitted beforehand, but being mindful of the time, if there are questions in the house, I want to make sure we address the live questions first. So if folks have any questions, feel free to use the mic at the front, since the session is currently being recorded. Any burning questions? Please, come on up.

Is there data, particularly for the homeless patient population, on how many might have a smartphone, given the AI tools you described? Do you know if there's data for that?

Yeah, so I haven't seen great data, although there are feasibility studies of ecological momentary assessment (EMA) using smartphones that have been very successful. You know, smartphones are quite accessible. The question that we often run up against is more whether people have cell phone data plans versus Wi-Fi only. But access is actually pretty high: I think our folks estimated, when we were scoping an EMA study, that something like 80-plus percent of unhoused young people have access. It's a great question, though. Please.

Hi. What would you recommend in terms of the inputs of information and trying to bypass past inequities built into the models of care? For the trust factor, whether between academia and industry, how would you foresee that partnership working? A lot of people are still scared of some of the aspects you've talked about, particularly building new models for patient care for vulnerable populations who may suffer from SUD or may be racially discriminated against.

I think by far the most important thing we need to do is involve the community in AI design from the get-go, and bring that cultural humility into the pre-existing systems: the EMRs, training clinicians, and the use of non-stigmatizing language. All of that would help.

And I'll just make a plug: if you're interested in that question, there is another session happening at noon in the Mental Health Innovation Zone that will talk directly about AI, some of the current technological implementations in clinical practice, and how we address health inequities. It's my session; I'm plugging it. Please.

Thank you all so much for being here, and thank you for your talks. A couple of you mentioned AI for diagnostics and the potential for it to improve diagnostic accuracy. I was wondering how we get from where we are to there. What's the role of the human? Should we have computers listen to everything? How do you see it improving diagnostic accuracy?

I'll take the first half, and any of our panelists should feel free to jump in. One of the hats I wear is as a consultant for industry, looking at the ways technologies are implemented in clinical practice. For AI, as you mentioned, the gap shows up when you look at the spaces where AI is actually being implemented today. Most academic medical centers are very reticent to engage with new technology. There are a few places here and there toying with a couple of pilots, but the predominant place where AI has really entered the market is the private and startup world, and I can certainly talk offline about many of the companies entering it. One of our frequent concerns is bidirectional: who is using it, because their data are being used to refine the tool, and which patient populations do we typically see represented in the use cases? And I'll give you a wild guess: both groups tend to be what has been called WEIRD, that is, Western, Educated, Industrialized, Rich, and Democratic. So it is often countries and places that are very high resource. To hearken back to the earlier question about what Colin discussed: in 2022, an estimated 68% of the world had smartphones. In the US it's 91%, and I should clarify that that figure is for college-age adults.
So even something as simple as an app on your phone: if you don't have access to a phone, to Wi-Fi, to the internet, or even the literacy to use it, the technology is useless. I don't know if folks wanted to add anything. Please.

I have a couple of questions. I heard your talk about young homeless populations, and I was curious: you didn't really talk about people with psychosis. What percentage of the population you're seeing has psychosis in its own right? And AI creates another host of issues for them. Can you talk a little bit about that?

That's so incredibly important. I think it's a reflection, even within the work that we're doing, that this is a further marginalized population: often unseen, often staying out in places where they are not easily reached even by outreach services. So that's an incredibly insightful and important point. And when they start thinking, who's collecting my data? What's this computer? That adds a layer that presents a real challenge. So thank you; that's very, very important.

My second question is about research and having the right people's data, so that we are collecting appropriate data. How much research are you seeing, that you or your colleagues are doing, with homeless populations? It has its own ethical issues, but do you know of research going on in that area?

Yes. There is a network of folks across the country doing this type of work, and there has been a really rich literature in allied fields: social work, nursing, psychology. Within psychiatry per se, we did a systematic review of just the rates of diagnoses in the literature, and especially within the last five or so years, there were relatively few studies using rigorous DSM-based diagnostic models to really characterize the population. When you looked at the rate of major depressive disorder in youth experiencing homelessness in our systematic review, I think it was 30 to 95 percent. As I always say, if you walked up to someone on the street and asked, what's the rate of depression among youth experiencing homelessness, and they said 30 to 95 percent, it wouldn't inspire a lot of confidence, right? So our push is really to bring the best of the psychiatric lens to that literature and catch up with our colleagues in the allied fields. And if you have grant monies, come talk to the panelists afterwards. Please.

Hello, I'm Ahmed Nahyan with the SAMHSA-SMSB Medical Student Fellowship. My question is from a medical student perspective: as first-years, we are heavily drilled on breach of confidentiality, patient privacy, and everything of that sort. And I'm imagining a bit: GPT-3.5 is free and GPT-4 is paid, but there are so many of these bots that collect your data. Do you see it becoming a problem, from a physician perspective, where protecting confidentiality while using GPT becomes an issue at some point? We were talking about how it's a weak AI and how the machine learns from you. Do you see it being a HIPAA issue in general?

Yeah, I also worry about that, and I think we all should. That's one of the points of my talk: imagine connecting data on a population of people to substance use disorders or social determinants of health. Imagine what you could do with that. Imagine if you have a prediction for mortality, for example, and the insurance company says, okay, I'm not going to insure you because of this and that, you know? Those are areas where we need to be cautious and mindful of how we use data. Thank you.

Yeah, and the one caveat I'll share: one of the things we learned in business school is that nothing is free. If you're using something that's free, you are the product. And when these technologies are designed, you have to start with what can go wrong, because a lot of things can go wrong. One of the examples I showed was a woman with cerebral palsy who had a brain implant. I don't know exactly how it worked, but these brain implants can read neuronal activity and translate it to a machine, and the machine voices out loud what the person is thinking. So imagine, once these technologies are developed enough, they could be used for horrible things too. That's also a risk, not just privacy. Yeah, really, I mean, come to the noon session, please.
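On the confidentiality question raised a moment ago, one often-suggested stopgap is to scrub obvious identifiers from clinical text before it ever reaches a third-party chatbot. The sketch below is deliberately naive and hypothetical; its regex patterns cover only a few of the identifier types HIPAA's Safe Harbor method enumerates, so it illustrates the direction of the concern rather than a compliant solution.

```python
# Naive, illustrative identifier scrubber (not HIPAA compliance): replace a
# few obvious identifier patterns with bracketed placeholders before text
# is sent anywhere outside the institution.
import re

PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}

def scrub(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

note = "Pt Jane Doe, MRN 4471023, seen 3/14/24, call 617-555-0182 re: cravings."
print(scrub(note))
# -> "Pt Jane Doe, [MRN], seen [DATE], call [PHONE] re: cravings."
# The name still leaks, which is exactly why naive scrubbing alone is not
# de-identification; it only sketches the shape of the safeguard.
```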
Everybody was significantly taller than me, but here goes: at my clinic, we're developing an AI model to assist coaches and therapists in providing mental health care for a bunch of different issues. I was looking at how we can best integrate cultural responsiveness into the recommendations that come out of these AI language models, to help with disparities in the recommendations provided.

It's an interesting question, a very good one, and one that I have thought about. I think where to start is really educating clinicians, the source of the data, about cultural responsiveness and cultural humility. That's where to start. And we need to, again, emphasize engaging the community where we have products, and having their feedback on how we can best administer these AI models in a culturally appropriate way. We just have to keep this in mind. It is very challenging for AI to have that cultural component; it is almost like giving a human face to something that is more or less very concrete. It's a challenge. We have to bridge the gap somehow.

Yeah, and it's also figuring out the balance: when we're inputting information about the patient, how much do we want to include about culture, race, and experiences, without reinforcing stereotypes in the recommendations that come out?

Absolutely. Removing stigmatizing language and things like that. Yes, absolutely.

I recognize there's so much interest and a lot of curiosity, which is lovely, and I want to be mindful of all the conflicting sessions that are happening as well. So we'll close the public Q&A, but feel free to come up and talk to the panelists afterwards. Thank you so much for coming. This has been so engaging and really, really exciting. And a big thank-you to our panelists again.
Video Summary
The session, hosted on a Saturday morning, tackled the topics of equity, errors, and ethics of artificial intelligence (AI) in vulnerable populations with substance use disorders (SUD). Chaired by Jacques Ambrose, the session featured esteemed professionals, including Dr. Colin Burke, Dr. Silvia Franco Corso, and Dr. Jehede, who examined AI's role in clinical practice, psychiatry, and specific marginalized communities.

Dr. Colin Burke presented cases involving youth experiencing homelessness. He emphasized the complexities of accurately diagnosing psychiatric conditions in such populations, given the high prevalence of trauma and substance use. He discussed AI's potential to improve access to subspecialty SUD care, enhance psychiatric diagnostic accuracy, and provide between-session support for individuals. However, he stressed concerns about the quality of AI-based tools, the potential development of a two-tiered care system, and maintaining safety and effective therapeutic relationships.

Dr. Jehede discussed health inequities and AI's potential to perpetuate racial disparities within a pre-existing, structurally biased healthcare system. He highlighted examples, such as stigmatizing language in electronic health records affecting AI outcomes, and called for community collaboration and equity-focused AI algorithms.

Lastly, Dr. Franco Corso talked about AI applications in assisting individuals with disabilities who also have SUD. She provided examples of AI technologies that facilitate daily activities and therapeutic interventions for disabled individuals, while cautioning against challenges like under-representation in training data and the risk of exacerbating inequities.

Overall, the session underscored the importance of integrating AI with an equity-focused approach, ensuring ethical and culturally sensitive implementations, and addressing the socio-economic barriers present in the healthcare system.
Keywords
artificial intelligence
equity
ethics
substance use disorders
vulnerable populations
psychiatric diagnosis
health inequities
racial disparities
community collaboration
AI applications
disabled individuals
socio-economic barriers