Health Equity and Digital Divide in the Post-COVID ...
Video Transcription
Hi everyone. Welcome to our talk today. We're going to be talking about an important aspect of digital psychiatry known as health equity and digital inclusion. Before we get started, I want to take some time and introduce you all to our wonderful speakers. I'm Darlene King. I'm the incoming chair of the APA Mental Health IT Committee, an assistant professor at the University of Texas Southwestern Medical Center in Dallas, Texas, and a physician informaticist. Also joining us today is Julia Tartaglia. She is a psychiatry resident and a digital psychiatry researcher specializing in mental health apps at Zucker Hillside Hospital at Northwell Health. She is also a member of the APA Mental Health IT Committee and co-chair of the Technology Committee for the APA Annual Meeting. And Marque Blaytofi is also joining us. He's a clinical fellow at NIMH in computational decision neuroscience and a psychiatry resident at Baylor Scott & White in Temple, Texas. So thank you all so much for joining us today.

We're going to break this topic into three parts. First, we're going to look at background terminology and build a framework for how to think about these topics going forward. Then we're going to have a clinical case example where we apply the knowledge from the first part. And for the third part, we're going to introduce a new topic: generative AI is currently at the forefront of conversations, and one of the underlying concerns with it is the potential biases that may exist with machine learning and this type of technology. So we're going to talk about that and about ways to mitigate that risk. Without further ado, here is Julia Tartaglia.

Good afternoon everyone. I'm going to be doing a brief overview of health equity and the digital divide. We're going to go through some statistics and frame the problem in order to support the cases that are coming later. So what is the problem? We're all aware that there are racial and ethnic disparities in mental health. Looking at some statistics, the U.S. population is becoming increasingly diverse. Statistics from the 2020 census show, on the left, the breakdown of children under 18 by racial and ethnic demographic, and then adults 18 and over. And what you can see here is that by 2044, more than half of all Americans are projected to belong to a minority group; each of these demographics is expanding.

Looking at the rates of mental disorders, a lot of studies have shown that rates of mental illness among people of color are similar to or even lower than those of their white counterparts. However, there is suspicion that this could reflect underdiagnosis due to biases and barriers in access to care. Looking at this figure from 2008 to 2012 data, you can see that black, Asian, and Hispanic people have slightly lower rates of mental illness, while American Indian and Alaska Native people and people who identify as mixed race or two or more races have higher rates of mental illness compared to white people. These statistics are changing, though. Since COVID, rates of mental illness have been increasing a lot faster among people of color than among white Americans.
This is one study that shows suicide death rates by race and ethnicity: they are increasing much faster among black, Hispanic or Latino, and American Indian and Alaska Native (AIAN) populations than among white Americans, even though the absolute rates still remain highest among white and AIAN people.

There are also ethnic and racial disparities in mental health service utilization. While the rates of mental illness might be around the same, we do see that black, Hispanic, and Asian adults were much less likely than white adults to actually access mental health services, and this was statistically significant at the 0.025 level. So why might this be? There are reported differences in the quality of care for people of color, which lead to poorer outcomes. Many studies have looked at this issue and identified several contributing factors, including language and cultural differences between patients and providers, which can lead to communication issues, decreased perception of the quality of care, and poorer patient outcomes. There's also systemic racism and provider discrimination, so biases and stereotyping that can lead to poorer outcomes. And there are socioeconomic factors, or social determinants of health, such as insurance and access to care based on geographic location and socioeconomic status, which lead to poorer access to high-quality care.

As you all have seen over the past decade or so, there has been an explosion in digital technology in the mental health space. And in thinking about health care disparities, many people have begun to wonder: is digital technology the solution to address some of these barriers? So the question remains, can we eliminate or reduce disparities through digital health? Digital health offers a lot of exciting potential to increase access for those who might not have access to traditional care due to factors such as geography, the cost of getting to a physical location, and others. There's also interest in seeing if tools can be created that increase cultural competency, so customized resources that are informed and tailored based on users' ethnic and demographic background. And the theory is that in doing so, this can lead to better outcomes by reducing disparities.

However, as we've been thinking about the potential of digital health, we also have to acknowledge that there could be a potential problem, and this term has been coined the digital divide. The question remains: are we actually creating a new problem in terms of access to these digital tools and interventions? As treatment is turning digital, there's a concern that this digital divide could perpetuate disparities in mental health outcomes. So what is this term? It refers to the gap between individuals and communities who have access to digital health technologies and those who do not.

There are several factors contributing to the digital divide. One is accessibility: who actually has access to cell phones, Wi-Fi, and all of the devices and technology needed to access this care? Another factor is health literacy: as treatment is moving toward apps and a lot of self-guided tools, who is going to have the capacity to understand and digest this information on their own? And a new component is digital literacy.
So who actually has the ability to use mobile apps and different devices, to figure out how to connect, how to get online, and how to navigate through these technologies? And finally, there are language barriers and cultural components.

Looking at accessibility, one thing I wanted to examine in the literature is whether there are any racial and ethnic differences in access to health care technology. Studies have shown that overall, in terms of smartphone access, there aren't any differences by race and ethnicity. If you look at the green bars, you see that for white, black, and Hispanic respondents to this survey, smartphone ownership hovered around 85%. You can also see in this study that other factors might affect access to phones, including being over age 65, living in a rural environment, or having a high school education or less, all of which reduce the likelihood of ownership. Looking a little deeper into the smartphone access issue, one study did find that black and Hispanic people are more of what they call smartphone dependent compared to white people. This means they are not as likely to have broadband at home and are using smartphones as their primary access. In this graph, white is the darker blue line at the bottom, the middle line is black, and the top line is Hispanic. The reason we care about this is that wireless access limitations could impact people's ability to use mental health apps. If you imagine completing modules or doing a telehealth visit using only your data, that could rack up quite a big bill. A recent study of Medicare patients found that 38% of Spanish speakers do not have access to a smartphone with a data plan for wireless internet. All of these things need to be considered when recommending apps and tools to patients.

Next, looking at the literature on digital literacy, we'll look to see whether there are racial and ethnic differences in the ability to actually use technologies. Just to define digital health literacy: it's the skills necessary to successfully navigate and use digital or electronic health information and patient resources. Looking at the data, there still isn't a ton of research out there, in particular for healthcare apps, and the data is mixed, so I'm just going to explore one study. This study examined mental health-related technology use by race, ethnicity, and age. The graphs are divided by age, so the first cluster is 54 to 64, the middle is 65 to 74, and then 75 plus. The striped bar is white, the center dotted bar is black, and the gray bars are Hispanic. This study showed that older black and Hispanic adults were less likely to use technology for their health than white adults. So there's some consideration to be made about age as it correlates with race.

Finally, what cultural factors are at play? How do language and cultural differences impact the digital divide? One of the major concerns in this area is language and translation. Most mental health apps that are available are not being translated into other languages or modified for accessibility, for example for ASL users. There was a study looking in particular at Spanish-language mental health apps. The authors reviewed over 200 mental health apps in a database, which revealed that only 14.5 percent were offered in Spanish.
And considering that many apps out there aren't even validated or tested, the subset that might actually be effective is probably a lot smaller still. So this can make it very difficult to find the right tools to recommend to patients. Another study looked at Asian language speakers and their use of the Internet and social media. It found that Asian language speakers were less likely to use the Internet for logging in to accounts compared to their English-speaking counterparts. This study shows, in the first column, that in terms of using the Internet, Chinese-speaking users were significantly less likely to use the Internet in general than other language groups. The study also found that all Asian language users were significantly less likely than English-speaking users to use the Internet for logging in. The implication is that the ability to log into an account is critical for things like accessing a patient portal or accessing mobile apps and mobile websites for mental health, and right now this is a standard tool given to patients for communication with their health care teams.

Studies show that cultural adaptations in treatment actually improve outcomes for minority groups, which suggests that cultural factors are very important in thinking about designing technology. One example is a study by Griner and Smith, which showed that culturally tailored psychotherapy demonstrated greater efficacy for ethnic and racial minorities compared to a diverse range of other controls. So one idea I want to bridge with is that cultural competence is essential for psychiatrists if we are to start bridging this digital divide and provide effective care.

That brings us to a new concept: what are digital equity and inclusion? You might see these terms being used. Digital equity refers to the fair distribution of technology and Internet access to all individuals regardless of race, ethnicity, socioeconomic status, or geographic location. And digital inclusion is what we do with this: it refers to the activities necessary, by providers and by the people creating these tools, to ensure that all individuals and communities, including the most disadvantaged, have access to and use of these technologies. This is very important in our space right now as mental health care delivery moves toward a virtual, digital format.

So how do we go about achieving cultural competency? There have been a lot of different frameworks proposed for creating equity-centered digital health solutions. Our focus is really going to involve centering on the community, incorporating feedback from your target end users, and creating solutions that are culturally sensitive and appropriate. This involves the idea of co-design, or user-centered design, which means putting together diverse teams when designing tools for populations to make sure that all perspectives are being taken into account. It also involves making tools accessible, so making sure they're discoverable, accessible, and usable by diverse populations. This might also include, at the clinical level, providing training for people to try to close the gap in some of those digital literacy skills. And finally, developing the right technology, something that is free of biases and stereotypes.
And so I'll just leave you with this final quote from the National Alliance on Mental Illness, that cultural competency is essential for providing mental health care that is effective, equitable, and respectful. This is why we feel it's so important to stay on top of this literature and make sure we're all doing our part to be culturally competent. Thank you.

Thank you, Julia. That was wonderful. All right, so that brings us to part two. We are going to jump into a clinical case example, and we're going to be using Kahoot, so let me bring it up. In the meantime, I'm curious, what brings you all to this talk? For anyone who feels comfortable sharing, what are some of the things you were hoping to gain from this talk, and what information do you think would be valuable? There it is, okay. All right, so we're going to use Kahoot to make this more interactive. If you go to kahoot.it and enter 1-1-2-0-5-7-1, you can join the Kahoot, and it will give you a friendly username, so you don't even have to think of a username to pick.

All right. So the first question is: do you recommend mental health apps to your patients? You'll have about nine seconds to reply. All right, so a lot of you are recommending apps to your patients, about 75%.

The case example we're going to go through is a simulated example of a patient. Imagine a patient, Maricela, a 24-year-old female with no prior psychiatric history, who comes in with symptoms characteristic of generalized anxiety disorder. She has a smartphone, which you see here, and a little bit of background on her: she's a Latinx female, she lives at home with her parents, and she has two siblings. She came to the United States as a teenager, and Spanish is her first language. Her smartphone is her only way of connecting to the internet, and she has a limited data plan. She shares symptoms of generalized anxiety, and her doctor talks to her about a new diagnosis of GAD. Sertraline is started, and at the end of the visit the doctor recommends a popular CBT app to provide some additional support in between visits. Informed consent was provided covering the risks and benefits of the medication as well as of using an app, and she went forward with that information.

Let's go to the next question. When you're recommending an app to a patient, what are some factors you think about in determining that this patient is someone who would benefit from an app, and are there any apps you recommend regularly? All right, we got one answer so far. We've got two. All right. Okay. Let's see the answers. There we go. So you look at the cost: is this app available, is it free or is there a charge? Does any evidence exist? How do you know if they're even using the app? Symptom awareness, digital capabilities, willingness, is the app safe? Severity of illness, access to a form of care, access to the internet, digital access. Access to appropriate technology is a major consideration, definitely. So it sounds like all of you are thinking about the right things. When you're considering an app, if you know about the APA App Evaluation Framework, it's a pyramid with a hierarchical structure, and it starts at the bottom with privacy. So you think about the safety of an app, and as you go up, evidence and clinical efficacy are also there. But as we know, there isn't any set-in-stone regulatory process for apps right now.
So it can be hard to know whether an app is truly evidence-based or not.

Now, if Maricela were 65 years old, would her age factor into your decision to offer an app? Yes. Whenever we think about patients who may be older, there is this myth that older patients may not be as good with technology. And it is a myth, because if you think about when computers were invented, a lot of people were around and grew up with computers and technology, and some people who are quite old are actually very familiar with technology and use it regularly. But when we talk about digital inclusion and access, there are certain barriers that prevent people, especially elderly patients, from knowing how to use certain technologies. It is helpful to think about what Julia talked about: digital literacy.

So let's see. What apps do you currently use on a regular basis? How do you think about using an app? What do you use on your phone? Yes, these are all excellent questions to start thinking about digital literacy. If we look at eHealth literacy and get really specific about it, there are six domains to digital health literacy. The traditional one is what we typically think of as your ability to read and write, traditional literacy. Media literacy takes it a step further: can you seek out information, do you know how to find videos and websites, access a newspaper, access news online, and use all media sources for information? The information and analytical domains are where you ask, essentially, can this person access information, and can they then use that information to actually make decisions? You can also talk about literacy in a context-specific way, and so we like to focus on digital health literacy: does somebody know how to obtain, process, and understand communications about health, and can they then use that information to make informed health decisions, like the reading and tasks that would be required to function in a health care environment?

For instance, I had a patient a few weeks ago who had a smartphone, and I saw that she was on MyChart. I told her, you can message me on MyChart, and she said, I don't know how to use that. It turned out that while she had a smartphone, she was in her 60s and only used it for taking pictures. She couldn't use the browser or the internet part because the text was too small and she didn't know how to make the font bigger. And her phone's storage was maxed out with all the pictures and videos she had on there, so even though she had logged into the portal, she couldn't download or use any apps. So there is an example of talking to the patient and getting to know their specific circumstances: even if they have a smartphone, it doesn't mean they use it in the way you might think they're using it.

There are some eHealth literacy scales available if you want to pre-screen patients. This is a really short one where patients can answer the questions very quickly.
This is a more in-depth one, called the Mobile Device Proficiency Questionnaire. It goes through eight different domains and is very specific, so it really does a deep dive if you want to dig into a patient's digital health literacy.

Another question is how you offer personalized care, help close this digital divide, and provide digitally inclusive care when you have a really heavy patient load and may have to see patients every 15 minutes. One way to incorporate this into a heavy clinical schedule is the idea of a digital navigator who could bridge the gap, someone similar to a peer navigator but focused on digital tools. They would keep up with apps, perform app evaluations, meet with a patient and assess their level of digital literacy, start from zero to get them to where they need to be, and then work with the provider, for example sharing recent data that came from the app. The digital navigator would really manage the apps and the digital therapeutics and help synthesize the data that comes from them so that you could then use that with the patient.

So what happened with Maricela? She left the clinic and downloaded the app, and the app shared CBT knowledge through a series of informational readings, high-definition videos, and voice-guided meditations. But then she found she couldn't watch the videos because they took too long to buffer on her data plan. She tried to do one of the exercises in the app, but it recommended that she de-stress by going to the movies, and she couldn't justify paying for a movie because she had more pressing financial responsibilities; after paying all of her utilities and bills, she didn't want to spend that money on a movie. She ultimately found the app a bit impersonal and difficult to understand, as it was all in English, and she would have preferred an app in Spanish. So she never opened the app again and moved on.

If we look closer at Maricela's needs: she came from a lower socioeconomic status, so recommendations would have needed to be affordable, realistic, and practical. Her language preference was Spanish, so having everything in English was a barrier to comfortable use, especially with voice-guided meditations or long walls of text that would have been hard to sift through. She might have preferred a clinician or an intervention that would take more time to get to know her and provide a more personalized solution than what was offered. And her limited data plan meant she needed something that wouldn't drain her data, so she didn't want to use the app for that reason as well; an app that could work offline or natively on the phone might have been a better choice. Overall, the app wasn't relevant to her.

So while there is a great need for technology to address the shortage of mental health providers that we have, we need to make sure we are also reducing mental health disparities as we roll out and develop this technology. A lot of apps are currently designed with the majority population in mind, and there can be a lack of attention to the unique experiences of marginalized individuals, which compromises the effectiveness of the app in these populations, and we want to avoid creating new intervention-generated disparities.
So what do you think about when you are working on app development and want to create for diverse populations and minimize the digital gap? There are a few different frameworks available. This one is the idea of empowering diverse stakeholders. You want to be able to collect real-world evidence, which would increase transparency and reduce time to market. You want to educate providers and consumers. You want to have adaptive interventions to optimize care, create for diverse populations, and then build trust. Ultimately, you want to involve the people you're designing an app for. You want to talk to them, get to know them, and involve them and hear their voices in the development of the app. You want to create something that they want and that would be useful to them.

This is another framework, the idea of DDBT design: identifying different needs, then working on the design, and all the while you're building it and testing it, but you're also building trust and a good working relationship with the community you're designing for. It is this idea of creating, trialing, and then sustaining, and through this you're able to accelerate development. Ultimately, cultural tailoring of apps must go beyond just language translation to incorporate cultural values, norms, and references. Technology offers the opportunity to increase the efficacy of interventions and improve engagement. We need to design for diverse populations by including them in the design process, and also include clinicians, as it is a dyadic relationship: if you design something for a diverse population but don't include clinicians, it may not be used if it has a clinical indication. So you want to involve all the stakeholders in your design. And voice recognition software should be inclusive and trained on data from diverse populations.

All right, and now, without further ado, here's Marque Blaytofi.

Thanks, Darlene. What I hope to do is tie what Julia and Darlene have already talked about into a topic that is very present now: artificial intelligence. The goal is not to be a whole rehashing of artificial intelligence per se, but to give a general overview of what it is, how it's used for the purposes of digital health and particularly digital psychiatry, and things to be aware of as far as bias, which is a large topic.

To start off this discussion, I want everybody to consider this case. You're an app developer creating an app to treat major depressive disorder. The app utilizes voice recognition, location, and smartphone user activity as digital biomarkers for predicting the onset and relapse of major depressive disorder. How is this done? These days, it can be done through artificial intelligence, or AI. But first, a quick definition: artificial intelligence is the study and creation of computer systems that can do tasks usually done by humans. AI is a very large topic with many different types, but let's take a general look at how apps like this are made using AI. First, you have your biomarkers, which could be voice, activity, mood scales, sleep, or cognition. Basically, these are things that a device such as a phone can capture on its own, or particular scales created for the purpose of gathering data and information.
Then you can have demographics such as age, gender, race, et cetera, and in this particular case, location. All of this data is incorporated into a model. These different aspects are what are called features, which are essentially variables that can be incorporated into a model; discerning which of this information is useful requires a mixture of domain knowledge and whatever data is available to whoever is building the model. Once the features are selected, they can be incorporated into a model, and there are different types of models.

Generally, once you've determined what features and what type of model you're going to use, you train the model. Essentially, you have a large set of data, and you put aside a certain portion that you're going to train the model on; if 100% is the full data set, you may put aside 70% for training. Training is basically letting the model get a sense of the patterns and aspects of the data that are important for whatever purpose it's being used for. Once you've trained the model, you can test it on the testing data set, which is set aside and usually blinded, to keep the model from peeking in and taking in information that it shouldn't have. This is done to determine the outcome, which is essentially what the model needs to predict. In our case of creating an app to learn about the onset and relapse of major depressive disorder, the outcome can be categorical, basically a binary label, for example one if major depressive disorder is indicated and zero if not, or it can be a score, a continuous variable assigned to a particular person, from which you can discern whether that person is more likely to have major depressive disorder or have a relapse, depending on how the model is used. So this is a very basic use case of how you take data into a model and what you hope to get out of it.

With the development of a model like this, one large issue emerges that can affect its usefulness: bias. And bias can take place in all three steps. Just to give a general definition, combining the Merriam-Webster and Oxford definitions: bias is a systematic error or prejudice that favors or discourages a particular outcome, answer, thing, person, or group over others in a way that is considered unfair or unrepresentative of the whole. Bias is as old as civilization itself, and those who believe that AI and AI models are immune to bias are mistaken. AI models are often reflective of the systems they are created in, and they can magnify pre-existing biases and unknowingly create new ones. To be useful, AI applications need to incorporate the necessary variables or features for their situations and train and test them on appropriate data that represents clinical scenarios in real-world settings, and the results should be generalizable. So, borrowing a framework from the paper by Tucci et al. (2019), I'll go over three different aspects of bias, discuss subtopics within them, and then give an example for each aspect to solidify how they can happen in real-world settings.
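To make the train/test workflow just described concrete, here is a minimal sketch in Python using scikit-learn. The synthetic feature set, the labels, and the 70/30 split are illustrative assumptions for this transcript, not the pipeline used by any app discussed in the talk.

```python
# Minimal sketch of the train/test workflow described above.
# The features and data are synthetic and purely illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical digital-biomarker features: e.g., a voice measure,
# activity level, sleep hours, and a demographic field such as age.
X = rng.normal(size=(500, 4))
# Binary outcome: 1 = depressive episode indicated, 0 = not.
y = rng.integers(0, 2, size=500)

# Put aside 70% of the data for training and hold out 30% for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Train the model on the training portion only.
model = LogisticRegression().fit(X_train, y_train)

# Evaluate on the held-out set the model never saw during training.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```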
So first, understanding bias. On the causes of bias, the old axiom for data is junk in, junk out: if your data is not appropriate or not well collected, and your models don't reflect real-world scenarios, that hinders their ability to be used in an appropriate setting. Then there is manifestation: models fed with skewed, biased, or incomplete data essentially become models cultivated on that skewed, biased framework, and if these models continue to be used, they can perpetuate that cycle, because these models have been, currently are, and will be used in real-world settings and can have real-life consequences. And fairness: fairness means that the AI systems, the models we create and the decisions they may be responsible for, are equitable and take into account the rich diversity of our real-world scenarios. It's important for the models to represent the actualities of their task.

The first real-world example I want to talk about is the South German credit dataset. This dataset has been widely used in studies, and some companies have even integrated it into their processes for assessing people's creditworthiness. The dataset is a stratified sample of 1,000 credits, 300 bad and 700 good, from 1973 to 1975, from a large regional bank in southern Germany. It was used for years despite its known inaccuracies. For one, the data, when it was released publicly, included swapped labels and incorrect variables. And, probably even more importantly, the data was very much of its time, in ways that didn't reflect changes in gender roles and immigrant populations. Essentially, a debtor's creditworthiness seemed to worsen when their credit history indicated no past credits or all credits paid duly, and to improve for people who had no checking account, along with other implausibilities such as an over-representation of foreign workers and an apparent absence of single females. While corrections to the dataset and new datasets have since emerged, many studies for a while had relied on the flawed dataset. This example highlights the risk of using datasets that lack a clear data generation process, that weren't checked to ensure the structure of the data was appropriate when shared, that didn't represent changing roles and populations, and that couldn't be adapted for different uses.

The next part of this bias framework is mitigating bias. There are three areas where ensuring that bias is looked for, discerned, and acted on will be useful. In the pre-processing aspect, to use a kitchen-prep analogy, we must scrutinize our ingredients, which is basically the data: we must correct imbalances and ensure that it's representative of the population and of the use we have for it. This leads to processing, or in-processing, approaches, which is ensuring that our cooking methods, so to speak, are reproducible, and that whatever we're getting out of the model doesn't change with increased data; if it does, we look at ways to improve the model, so that it's a dynamic process rather than a static one.
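As a concrete illustration of that "scrutinize your ingredients" step, here is a small sketch of the kind of pre-processing audit one might run before training: tabulating how well each demographic group is represented, how labels are distributed within each group, and deriving simple reweighting factors when groups are imbalanced. The column names, groups, and weighting scheme are hypothetical, not a method prescribed in the talk.

```python
# Sketch of a pre-processing audit: check group representation and
# label balance before any model is trained. All values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "C", "C"],  # e.g., self-reported ethnicity
    "label": [1, 0, 0, 1, 1, 1, 0, 1],                  # e.g., 1 = condition indicated
})

# How many samples, and what share of positive labels, does each group have?
summary = df.groupby("group")["label"].agg(n="size", positive_rate="mean")
print(summary)

# Simple reweighting: weight each row inversely to its group's size so that
# under-represented groups count more during training.
group_counts = df["group"].value_counts()
df["weight"] = df["group"].map(len(df) / (len(group_counts) * group_counts))
print(df)
```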
And then there is post-processing, which in our cooking analogy is the plating and final taste check. Once the model has made its predictions, this means reviewing and adjusting outcomes to make sure that, one, they suit the purposes we're using the model for, and then, if necessary, doing more tests on the model to make sure it's the best model we can use. There is still an ongoing need for research and innovation in this area to ensure equitable outcomes.

The second example I want to talk about is in the area of facial recognition. The Gender Shades project in 2018 compared facial recognition models from IBM, Amazon, and Microsoft, looking at different categories of people by gender and by whether they were darker-skinned or lighter-skinned, with the basic idea of assessing whether there was any bias in the accuracy of the facial recognition technologies. As we see here, for darker-skinned females, noted in red and circled by the black highlight, there was noticeably decreased accuracy compared even to darker-skinned males, lighter-skinned females, and lighter-skinned males, with error rates up to 34%. This raises a few questions. How can we ensure our algorithms are learning from diverse, representative datasets? How can we improve the quality of the datasets? And how can we continue to appraise the datasets so that they keep up with current technologies and remain accurate?

The last part of the framework is accounting for bias, and, as I mentioned, there are a couple of components. Getting the data collection process right is crucial, and, as Darlene mentioned earlier, stakeholder involvement can be important in this process, making sure that data is coded properly and reflects real-world settings. Then there is describing and modeling bias: once we have identified the type, source, and extent of the bias, what can we do about it? That decision-making process needs to occur in order to develop a worthwhile plan and do the right things to ensure the model works as it should. And then there is explaining AI decisions. A lot of the time, AI can be seen as a sort of black box where you just throw data in, outcomes are spit out, and there's little discussion about how the model reached those outcomes. Having more discussion about that, and communicating the intricacies of the process, which can become very technical very easily, in a generalized way, is also beneficial.

That leads to the last example of bias in AI I want to talk about. The COMPAS recidivism algorithm is an algorithm that many counties across the U.S. have used with the goal of providing, for a person who has been convicted of a crime, a score of how likely they are to re-offend. The issue with the COMPAS system was that black defendants who didn't re-offend over two years were about twice as likely to be misclassified as high-risk as their white counterparts, a difference of about 45% versus 23%. On the flip side, white defendants who did re-offend were almost twice as likely to be erroneously labeled low-risk compared to black re-offenders, about a 48% versus 28% difference.
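Both the Gender Shades and COMPAS findings come down to comparing error rates across demographic groups. A minimal sketch of that kind of per-group audit follows; the predictions, outcomes, and group labels are made up solely to show the calculation and are not the actual COMPAS data.

```python
# Sketch of a per-group error audit: compare false positive rates across
# groups, the kind of disparity highlighted in the COMPAS analysis.
# All predictions, outcomes, and groups below are made up.
import pandas as pd

audit = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "actual":    [0,   0,   1,   1,   0,   0,   1,   1],  # 1 = re-offended
    "predicted": [1,   0,   1,   1,   1,   1,   1,   0],  # 1 = flagged high-risk
})

def false_positive_rate(g: pd.DataFrame) -> float:
    """Share of people who did not re-offend but were flagged high-risk."""
    negatives = g[g["actual"] == 0]
    return (negatives["predicted"] == 1).mean() if len(negatives) else float("nan")

# A large gap between groups signals a disparity worth investigating.
print(audit.groupby("group").apply(false_positive_rate))
```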
When controlled for factors such as prior crimes, future recidivism, age, and gender, black defendants were 45% more likely to be assigned higher risk scores and 77% more likely to be classified in the violent recidivism category. So, as I mentioned before, the decisions AI models may be used for do have real-world consequences.

And how many of you know who this is? As of this past week, Sam Altman, the CEO of OpenAI, one of the largest and most well-known players in the AI space, testified in front of Congress and called for more regulation of the AI space. Two weeks ago, there was an AI summit at the White House with more players in the field, such as Google, Microsoft, OpenAI, and others, to clear the air on emerging themes in the field and where to go from here. A lot of where we go from here is yet to be determined, but there are three major areas of concern.

First, legal issues and regulations. We have existing laws about data accuracy and discrimination; they set standards for the quality of data and how we can use it, and they prohibit unfair treatment based on protected characteristics. But how do these rules translate to emerging technologies and new uses? Second, data modifications. Sure, we can tweak data to correct biases, but there's a larger issue with data modifications, particularly where they start and where they end. The biggest and most notable example of this is in the music and art space, for example being able to create models of different music artists and generate new songs; where does intellectual property come into play? So even small decisions are often part of a larger context where we need more clarity, to ensure that even as we correct biases, we are doing the right thing and not introducing more issues. Third, applying existing rules. Algorithmic decision making, as I hope I've illustrated, can be opaque, which makes it hard to pinpoint bias or discrimination until, oftentimes, these models are already being used in decisions with real-world consequences. How can we adapt existing rules to build stronger safeguards and regulations that not only prevent bias but also allow innovation to occur organically?

And so, if we have time, I just wanted to briefly ask the audience: is there an example of bias that you've come across in a digital application, and what questions will you ask in the future to avoid bias in digital applications? This is really just a quick free-for-all. [Audience member:] SMI Advisor is a great, great platform, but only English speaking. Hopefully it expands a little bit more to Spanish-speaking and non-Caucasian populations. Even the educational videos and such are very Eurocentric. Fantastic platform. Great. Any other examples of apps out there that you've come across that you may be concerned about in terms of generalized use?

All right. And so that concludes our talk. If there are any general questions or comments for us or the larger audience, please take time to share them. We appreciate your attention, and hopefully this discussion can at least add to the larger conversation on the important topic of digital equity and potential bias, not only in current digital applications but in emerging ones too.
Video Summary
The presentation focused on health equity and digital inclusion in digital psychiatry, led by Darlene King and featuring Julia Tartaglia and Marque Blaytofi. It was divided into three parts, beginning with an exploration of health disparities in mental health and the increasingly diverse US demographics. Julia illustrated that mental illness rates among people of color might be underdiagnosed due to biases and access challenges. Disparities in mental health service use were highlighted, with people of color less likely to access services, often due to systemic issues and socioeconomic factors. The talk emphasized the potential of digital health to bridge these gaps but warned of creating new inequalities through the "digital divide." This term underscores the gap between those with and without access to digital resources, influenced by factors like accessibility, digital literacy, and cultural differences.

The session included a clinical case about Maricela, a Latinx female with limited digital access, to illustrate these issues in real-world contexts. Darlene highlighted the APA App Evaluation Framework and the role digital navigators can play in boosting digital health literacy among patients. The presentation stressed the importance of developing culturally tailored and inclusive digital health solutions and of involving broader stakeholder communities.

Lastly, Marque elaborated on the challenges of bias in AI within digital health, presenting a framework for understanding, mitigating, and accounting for bias, from data collection and model training to fairness and algorithmic transparency. He discussed high-profile examples of biased AI applications in credit scoring, facial recognition, and criminal risk assessment to underline the potential real-world consequences of unchecked AI biases, and called for refined regulations and ethical standards.
Keywords
health equity
digital inclusion
digital psychiatry
health disparities
mental health
digital divide
cultural differences
bias in AI
digital health solutions
APA App Evaluation Framework