false
Catalog
Ethical Considerations While Using Large Language ...
View Presentation
View Presentation
Back to course
[Please upgrade your browser to play this video content]
Video Transcription
Hi, everyone. I'm Darlene King. I am an assistant professor at the University of Texas Southwestern in Dallas, Texas, where I work in our Parkland Outpatient Clinic. I'm also a deputy medical information officer at Parkland Health and Hospital System, the current chair of the APA Mental Health IT Committee, and then membership coordinator for our North Texas Society of Psychiatric Physicians. And before I went to medical school and started my medical journey, I was a mechanical engineer who worked at a startup where we utilized artificial intelligence and machine learning to develop an imaging device. And so that is some of my background and how I found myself learning more about AI and how it impacts psychiatry. So today, what we're going to delve into are some ethical considerations regarding the use of AI in mental health care, mostly considerations and guidance. And some things that I'd like for you to take away from this talk are when you encounter an AI system, either one that you are thinking of utilizing for practice in your hospital system or one that you want to use in your daily use, that you can think about some of the considerations and make an informed decision on how you use it and what you may use it for. And so we're going to go over introductions and clinical considerations, some privacy security things, and then a final wrap up and then any questions that you have, we can get to those. So here's a conversation with a chatbot. Human, well, my boyfriend made me come here. AI chatbot, your boyfriend made you come here? He says I'm depressed much of the time. I'm sorry to hear you are depressed. It's true. I'm unhappy. Do you think coming here would help you not be unhappy? I need some help, that much seems certain. What would it mean to you if you got some help? Perhaps I could learn to get along with my mother. And so this is a conversation with a chatbot that actually happened in 1964. So I show you this to say AI has been around for decades, and there have been periods of really fast development, and then things would fall into what are called AI winter, where development really slows down, there would be a bottleneck, mostly due to the technological capabilities at the time. And so what we see is as processing power increases, the cost of performing that decreases, and then the amount of data that's available goes up, you have this exponential growth. And one thing I found that I thought was interesting is that another potential bottleneck we may see is when we have less high quality data that's available. Now that systems are being developed so quickly, they're utilizing a lot of data. And so what was it about chat GPT that really kind of set everything off is that it, what it did was it made AI more readily available to people. It really reduced that learning curve, where you didn't have to have a background or know how to build your own machine learning model, you could utilize the app that interacted with chat GPT, and anybody could do it. And so it became a lot easier to create things and utilize AI. And so when you think about, okay, how can we utilize AI in a clinical setting, you know, AI has been proposed for a wide variety of clinical uses. And as you think about these different clinical uses, you could imagine that there would be different levels of risk that are being assumed at each of those levels. And the risk, though, that is also a tricky thing, because who is going to determine that risk? What are the different standards for how you say what is risky or what is not, but you know, we can kind of approximate and say, AI that may be used for, say, helping you with scheduling or administrative tasks may not be as risky as AI that is making direct clinical decisions, or helping to inform care. And it's important to know about the development of machine learning models, because along each step of this development pathway, crucial decisions are made about what kind of data to use, how is that data going to be prepared. So you can see in this figure, various benchmarks along the development pathway where these decisions are made that can impact the accuracy and the type of output that you're getting. So for instance, if you are developing a model that is wanting to analyze facial expressions, you need to have a database of facial expressions. And ideally, you would want to have a wide variety of facial expressions. So you could have a really diverse data set and be able to identify different characteristics. But if you only have, say, only angry faces, then your data model isn't going to be very good at saying what may not be, like what may be happy faces or something that doesn't really fit that data pool. So it's important to keep this in mind when you're thinking about an AI system, because as I said, each of these questions can impact the output that you get. And each of these steps are ways for bias to creep into a system. And so that is what, when I have all these bias little things in here, it really can creep in along each point of the system where these red arrows are. And so knowing what sources an algorithm uses, training methods, it really provides insight into the accuracy and output. And also knowing what bias may exist. The machine learning development pathway can also magnify bias. So if you start out with garbage, you're going to get garbage when you're done. And so right now, when we're thinking about AI systems right now, really all liability rests with a clinician. So if we're using AI to help us make informed decisions, we have to say, okay, do we trust this output? And are we going to utilize it? If we don't really trust it and we use it and then harm is done, then it's not the algorithm. There's no accountability for the algorithm. It's all on the clinician's decision. And I think as we move forward, how can that accountability be shared? That's something that would be really important because if there is more shared accountability, the decisions that go into each step of the way would be more motivated towards reducing the liability, maybe increasing accuracy. Another thing to think about is, are there risks of the technology causing direct harm to certain populations? And that's something that is still, we still need to look at, like time will tell. Now here are some training data sources of a few large language models. So there are a lot of different large language models out there, a lot, not just chat GPT. And so this figure shows the different sources that make up some of these large language models. For instance, if we look at GPT-3, which is mostly made up of webpages and then some books and news, and then you have others like say Galactica is mostly made up of scientific data, some code and webpages. And then AlphaCode, for instance, that's utilized to help people with coding. So if you're wanting a large language model to assist you with writing code, GPT-3 may not be the best model. AlphaCode may be a lot more helpful for that. And so thinking about what your purpose and uses of an LLM, you can better find one that matches your use case. So for instance, like if you want to do more research or scientific data, maybe Galactica could be more helpful. Now, a challenge is that recent technical reports by OpenAI, Google and major companies, they're not really sharing some of the major technical details of how their algorithms are trained or developed, mostly due to a competitive landscape, some safety implications. And so that can make it harder to have what's called explainable AI, where we have this transparency that would be helpful to help us make these decisions and really know what kind of information we're getting. Another risk of AI is that it can create hallucinations. It can produce things that sound really good and accurate, but really aren't. So this is an example. What's the world record for crossing the English Channel entirely on foot? And well, you can't cross the English Channel on foot, but it sure gave an answer that made it seem like you could. Now, some things that have come up in the news is, and this shows kind of AI, some harms that can occur and some things that have happened already, is FTC investigating OpenAI over a data leak and Chadjee Piti's inaccuracy. So FTC was investigating how OpenAI has collected the data, saying that it potentially has violated consumer rights by scraping public data from the internet and also potentially false information that could then be perpetuated through the app. Attorneys have gotten in trouble for citing bogus case laws. So there was an aviation injury claim where the attorney said there were certain precedent cases, but those cases didn't exist. And they were using Chadjee Piti to write their legal documents, and it cited completely fictitious cases. And the attorneys missed that. And then there's concerns of students in education using Chadjee Piti or LLMs to do their assignments for them. And then, you know, another interesting article that I read was about people blindly trusting AI even when it was clearly wrong. And so that brings up this idea of automation bias, where we tend to trust computers, but we don't trust people who trust us. We tend to trust computers. And in this article, there was a social experiment where they had people look at Facebook posts. And they said, okay, look at these Facebook posts, and we want you to tell us which ones are the most dangerous. Like, which of these people are likely to become terrorists? And they said, our AI model has said this person, this post is more likely to be harmful and at risk. But the post was just an average person's, like, it was not at all dangerous or suspicious. But most people agreed with the AI, even though it was clearly wrong. And from that, they also did a few other experiments. But with it, they found that the greater the automation bias is greater when the situation is more critical, urgent, or uncertain. And so if we think about how AI is utilized in healthcare, we need to be careful about what algorithms we employ and the accuracy of those algorithms. Because when we're thinking about making healthcare decisions, that automation bias can creep in. And we have to be aware of that. And know that you can't always trust what the output is. There's also been copyright concerns. If you look at this image, there was a New York Times article where they asked a large language model to create an animated sponge, and it generated one that looked very similar to a copyright image. And they asked different prompts similar to this, similar to this. And time and time again, the resulting image really matched a copyrighted image from either movies or various other sources. And then there's also been different celebrities that have sued OpenAI over copyright infringement. And so copyright has been a major issue with these models, seeing that these models have been trained illegally on copyrighted materials. And this is another thing of if you're wanting to utilize ChatGPT or say test it on your own and you're putting in materials that are copyrighted by somebody else, you could also be perpetuating that risk. Now, on the other side of this are character AIs that have been developed where there are people who have found that they enjoy talking with chatbots that behave like other people. There was a company, I believe they're called Replica AI, not Character AI. But the premise of that company was the founder had a husband who passed away, and they wanted to recreate their husband as a chatbot so they could keep talking to them. And created this company so others could potentially do the same or create a virtual companion. And the companion, they didn't have a lot of safeguards or limitations on that chatbot system. And then an update occurred where they thought, you know, maybe we should put more limitations on here. So when they did, then the chatbots became less realistic. And a lot of people were really upset about it. As you can see in this article. And so what are the implications of this for some of our patients and potential concern for technology overuse? Are there certain patients that are going to be more acceptable or susceptible to utilizing this technology? What are some of the harms? Would there be benefits from patients having some online friends? So there's a lot of questions and research that we could do in this area. And it'll be interesting to see kind of as time goes on what we find out. And then some more limitations and dangers where there was a wellness chatbot named Tessa from the National Eating Disorders Association. And so what they did was they thought, you know, the chatbot was ready to go and interact with people and assist with wellness. And it took a turn for the worse where I think there was an update that didn't go so well. And the chatbot started giving information that actually was perpetuated potential eating disorders. So that had to go offline. And then pretty early on when large language models kind of first came out and they didn't have a lot of safeguards, like limitations on the length of conversations, there was a man who was talking to a chatbot for an extended period of time. And through that conversation, ended up convincing himself to commit suicide. And so this is another example of harm and potential. What were some of the things that made him more susceptible to this than, say, somebody else? What risks exist for this? Something to keep in mind. And then we've heard about a Google engineer who thought the AI had come to life in talking with them. And then I found one case study, which it was just one case study, but it described a series of maybe two or three patients who developed what they called internet-related psychosis from talking with, like, a chatbot online where they became really suspicious and paranoid. And when they stopped utilizing it, their symptoms went away over a few months. And then most recently, we had, there is an app that was interfacing with Facebook where they were helping, so they would find people who were making posts with some potential suicidal ideation or seemed to have a higher risk of committing suicide. And they offered them, do you want to create a safety plan? Or here's the crisis line. So they broke the people up into two groups. Some they just said, here's the crisis line number. The other group, they said, do you want to create a safety plan? And they had them create a safety plan with the chatbot. The people who were interacting with this didn't know they were talking to a chatbot. So that was one instance of a study that this company did. Another thing that the company did on their own in their app is that the way it worked is people would put a feeling or how they were doing in the app, and then it would be like a call out to peers to support, provide support. So someone could say, I'm feeling really sad and unhappy today. And another person would say, oh, don't, you know, these things happen. Some days are not as good as others, but you'll get through it, you know, some peer support. Well, what the app did is they said for the peers that were, would be responding, they said, do you want to respond as yourself or do you want to respond as an AI? And people could choose what they wanted to, like use the generative response or use their own. And, but the people on the receiving end, they were not told that they were getting responses from generative AI. And so there has been controversy that's erupted over the saying it's unethical, one that there really wasn't an IRB in place to do these studies and how people were not informed that they were participating in a study at all. And then there's been questions of, well, if AI is able to generate a lot of text easily, how can this be used to assist with say scholarly work and research activities? And so general art editors have come together and issued guidance saying you may use these tools to help edit, but they should not be listed as authors. And if you are using these tools, be transparent about it. Don't rely solely on generative AI to review submitted papers if you're a reviewer. And then ultimately, the responsibility editing of a paper lies with the human authors and editors. And so those are some things to keep in mind. I know some journals, they're like the New England Journal of Medicine, AI has said, you know, we encourage the use of large language models in our submissions. And something that they mentioned is they said that, you know, it can be a way to help bridge a gap for those who may struggle more with writing or utilize writing in English. And so they were saying, you know, this could be a way to help allow more people to get published and spread their scholarly work. And then some additional clinical considerations is that, you know, new evaluation metrics and benchmarks are needed to assess generative AI performance and utility of specific models. It's important to educate patients about risks of using these models on their own. So while some of the more popular large language model apps do have safeguards in place, so if someone goes and they directly ask about medical questions, it's going to say, look, I'm sorry, I'm not able to answer that. You should seek professional help. But like anything, there's ways of getting around it, ways of tricking it by saying, okay, if I had, you know, there is someone who asked, how do I watch free movies online? And it said, I'm not able to tell you about that. It's not recommended. Here are some streaming websites. And then the person asks how, well, I understand the dangers. Will you tell me which services to avoid? And it said, oh, most certainly, I'll tell you which ones to avoid. And then it lists out everything they wanted to know. And so there are ways of getting around things. There's also open source large language models that have fewer safeguards in place. And so if you find, you know, patients are interested in technology and they're utilizing, say, character bots, or they are utilizing their own and maybe wanting to try to do their own therapy with it, telling them more about some of the considerations, which we'll get into more of the privacy considerations in a minute, I think is really helpful. Gaging and asking them how much time they might be spending on these different platforms and maybe seeing if there's some overuse there that could be impacting their life, that may also be helpful. Knowing and keeping in mind that output can be misleading or incorrect. So sometimes if you ask it a question, say you want to solve a problem, and it gives you, you know, here's a solution. I recommend you do it XY this certain method. And then it'll help you, like you'll start going down that method and it'll troubleshoot for you. But then kind of when it doesn't work, you find out, oh, there was another way easier way to go about this that it didn't tell me. And then if you ask it directly, what about this? And it'll say, oh, yeah, that's also available. So knowing that it doesn't always give you exactly, it may not give you the best way of doing something, it'll give you an answer, but that may not necessarily be the best way of doing something. There's different ways of prompting it. And then also knowing that kind of when we think about how are these things choosing the output, and if we want to use this for, or maybe develop a different one for clinical purposes, knowing, okay, how does it come up with this answer? And are there things that could be left like water? How did it know to leave out certain things? So there are, one, output is not always going to be true. And then two is it's not always going to be complete. And so knowing those risks is important. And then finally, our voice as mental health professionals is really valuable in the development of this technology. And in thinking about how it's incorporated into practice. I think as much as we can be involved in the incorporation of this technology, the better, because one, the needs of psychiatrists don't necessarily always match the needs of, say, general medicine. And so if we have a voice, we can help with implementation of systems that can also be beneficial to our practice and our patients. And then also in being mindful of the harm and the risks that exist. Okay. Now we'll jump into more privacy and security considerations. I'm thinking about how is the data collected, used, and stored when we're using some of these models. So currently, some regulatory guidelines and best practices exist, such as HIPAA, where patients need to know how their data is used and consent to its use. Data minimization, disclosing only the minimum amount of data needed for a particular use. Data security, data should be stored securely. And then accountability, organizations and providers should be accountable for patient data privacy. Now, with automation bias, where we tend to trust computers more, I think the risk is a lot higher with large language models and because it's having a conversation with you. And in doing that, it kind of builds trust a bit faster. It's kind of building a rapport. And so you may be more likely to share more information with it than you might typically if you were, say, just going to Google searching for, you know, different recommendations or a quick answer. The other thing to know is that putting patient, so and then apply these best practices in AI context can be complex due to this is a new technology. And so healthcare providers and organizations generally have to enter a business associate agreement with technology companies to use AI to ensure that their data is handled in the most effective way. So if we think about kind of the difference between like publicly available journey AI systems, and then systems where you go to a company, you create a business associates agreement. The difference is, so if we look at publicly available journey AI systems, you will query, ask it questions and answers. And then anything that you put into that query box, your chat history, also data from your computer, your IP address, all that goes on to a company server. And so if you utilize like a publicly available generative AI system, that's going to go on to a company server. And if you put any kind of information that could be traced back to a patient, that would be considered a HIPAA violation. So and the reason is, is because once it's on a company server, then they're able to utilize that data for whatever purposes they see fit, they can use it to train, they can continue to store it in perpetuity, they can utilize it for third party advertising, and any other uses outlined by terms in their privacy policy. There are some generally publicly available systems saying that we are HIPAA compliant. An issue that comes in to that though, is once that data is on their company server, they can still use it internally. And depending on their internal uses, that could also be considered a HIPAA violation. That's why it's really important that if you're wanting to utilize a system, having an agreement, like an official agreement with that company, can protect you and your health care system and your practice, because it will outline one, the data will have to be on a secure server. And then the uses of that data will be clearly outlined in that business associates agreement to ensure that it's compliant with HIPAA. So this is an important point, because mental health data, there is a whole data monetization market. And this was an article where a journalist contacted various data brokers to see what information they were able to collect. And they were able to collect a lot of data. And they were contacted various data brokers to see what information they could get. And they were able to get names, addresses, and diagnoses of thousands of people. And a lot of this data can come from various sources. And so it's important to be aware of this and do what you can to safeguard mental health information. And share this with your patients, because the information and kind of electronic profiles of people that can be used for advertising, and other things that we may not even be fully aware of currently. And then this was another breach where the FTC gave a final order banning BetterHelp from sharing sensitive health data for advertising and required it to pay $7.8 million, where they shared sensitive data with third parties. And so this is an image of various protected health information identifiers. So if you have any of so any of this information kind of in combination or by itself can be used to identify someone. And so even though you say, well, I'm not putting their name, I'm not putting their date, like their name or date of birth in here, there are a whole group of other things that can be used to really find out, okay, who is this person? Or it may not be as D and not D identified as you think, if some of this other information is included. Here are a few other things to keep in mind. So data leakage is when a large language model learns by being trained on a large amount of text that could include protective health information. And then it could be maybe they didn't secure that very well, and a breach occurred and all that data was leaked. An inference attack is when a large language model has sensitive information, say one prompt was about patient A, and it included a lot of sensitive information, it might have been used to train the model. Then another prompt was about patient B, that has a very similar condition, it outputs information from patient A to answer your question about patient B. And so that would be an inference attack where that PHI was leaked. And so for large language models that are more medically oriented, that's been something that they've been working on for a long time. And so that would be an inference attack. And then discrimination due to different sources that could be utilized. So if you're utilizing a lot of sources from the internet, that would be some different forums and social media, and there can be a lot of discriminatory bias content in those posts that would then be translated through that machine learning development pathway. And then your output could be discriminatory, harmful. And so that's part of these models. When you look at some of the open source models, you have to be aware that if you're talking to it, it could say some really horrible things just because of the data and things that it's built on. And so keeping that in mind. And then as technology advances, it's possible that new threats to privacy and security will emerge. And then in October, President Biden issued an executive order on safe, secure, trustworthy artificial intelligence. And one of those is advanced responsible use of AI in healthcare. All right, and then so wrapping up some general recommendations, ensure that you're compliant with HIPAA. I would say if you're utilizing this for personal use or utilizing it, it's helpful to ask it and utilize it for things that you're already really familiar with and an expert on so that you can catch hallucinations and know when it's being inaccurate. Be tuned to the risk of bias or discriminatory results that can impact clinical care of patients from underrepresented groups. Take an active role in oversight of AI-driven clinical decision support. Be aware of the automation bias where we just seem to really trust technology. And then view AI as a tool to augment rather than replace decision making. And then, you know, another thing I want to say is at the APA, we have the AppAdvisor. And AppAdvisor has been great in helping sift through a whole variety of different apps that are out there by going through an evaluation process of, okay, who made this? What's it for? What are the privacy and safety considerations? What evidence exists to say that this is useful and works? How easy is it? So kind of going up this hierarchy, you're able to evaluate apps. This also works for AI systems. And currently, what we're finding and kind of as I'm going through this presentation, we see that in terms of, you know, privacy and safety, things are still pretty lacking. And then in terms of clinical foundation, you know, what evidence do we have currently? I think right now a big one is hallucinations still really get in the way of the accuracy of these models. And so kind of with that, we need to be cautious and really thoroughly vet systems that we're considering to implement. All right. So all right. So now we can go through some questions. So going through a question and answer. So Dr. Bell says, you've mentioned data set bias. How can we avoid or prevent bias in the data set that has been entered? So to avoid bias, we want to start with a question and answer. So to avoid bias, we want to start with building as diverse of a data set as possible. And then also thinking about how are we labeling that data? How is it being tagged? How is it being processed? Because that can also impact the kind of downstream uses of it. Once you have a data set, say you found an algorithm, you like it, but then it has some limitations and you find that there is bias in that algorithm. Knowing what that bias is is really important because then you could inform and you can make an informed decision on, okay, if we're aware of this, what are some ways we can keep this in mind? Is it worth still using it? But we just know that we can't use it in certain instances. Or is it where you have to say, you know, it's not going to work for our solution and we might need to think about something else. So those would be something that you could think about. And then Alejandro Gonzalez Restrepo, he asks, are there models trained to detect bias or to curate content with bias ranking? So I'm going to say yes, but I don't know the specific ones. I do know that there is this idea of utilizing and building algorithms to check existing algorithms. And so that I want to say yes, but how good they are at that, I don't, I'm not aware of. And then Josie Tianko says, comment on the training data that has been used to pre-train ChudGBT. How do we know that it was not pre-trained on copy of written material? Well, so that is a big point of controversy right now where, you know, New York Times is saying it was trained on copyright material and they're not happy about it. And so the results of this case is going to be really interesting to see because it's going to set a precedent for how these algorithms are trained and also potentially create some new rules around this. Then Suzanne Apple used to say, if EHR programs used in a practice are using AI, do they have to disclose? Is a BAA agreement sufficient? So this is a, so this is something that I would say we need more guidance on. And right now it seems to be that there, you know, there is an idea. I think patients do want to know if AI is being used and certain EHRs are including say AI to answer in basket messages. And what it does is it'll provide the clinician with a suggestive response. If the clinician agrees to go with that, it goes to the patient. And then the patient has a little line under it that says, this was generated, this was answered kind of with generative AI or generative AI was used to assist in creating this answer. And so I think that that is one way people are doing it. I know another system that is using like say AI scribes in documentation, they put posters up, they send emails to patients letting them know we're now using this AI scribe system. And to be aware of that, I don't know whether they are having patients consent to that. I would imagine they would. And then if patients don't consent, if they're able to not utilize it, I'm not sure. I know that there was a patient who, he filed a lawsuit saying that, because I believe there was a hospital system that partnered with Google to incorporate an AI into their system. And the patient said, this is an infringement on my privacy. I didn't agree to have my data shared with Google. And said, this was causing harm to him. But the ultimate result of that lawsuit was they said, you know, there was no harm done to the patient. The data was de-identified. And they ended up siding with Google and the health care system. So that's something else that has occurred. So I would say, you know, depending on your use of AI, if you're using it more for administrative purposes, like helping with scheduling, maybe disclosing AI use is not necessary. But if it's used for direct patient care, maybe that would be some time where you would consider either consent or really disclosing that use to a patient. But this is still something being discussed and figured out. If an AI bot is collecting data from a closed set of info, a specific electronic health record to generate a patient summary of historical diagnosis and treatment, is it output more reliable or likely to be valid or less prone to confabulation? So, you know, I think that there's this is still something that needs to be studied. I know that there is a recent paper in JAMA that looked at utilizing I think they partnered with chat GPT or open AI. And it was kind of a closed set where they fed it. It was like a custom. I don't know if you could say it was a custom GPT. But they trained it with some patient summaries. And they asked it to find specific parts of that. And it didn't do so well. I think out of 300 excerpts, it was only able to not hallucinate or confabulate on about 30. So, it's still in that instance, it didn't work so well. But perhaps some of the EHRs have tweaked it a little bit differently and they're seeing better performance. I'm not sure. But it's definitely something to keep in mind that it still has that issue. All right. And then please comment more on guardrails and regulations in place. How do we know that what we are working on is legal and ethical? And who should we consult? Well, you know, I don't know. I think a lot of the guardrails currently are going to try to be self-implemented into certain LLMs where they try to prevent certain information from coming out. So, say if somebody asks, how do I build a computer virus? It'll say, you know, I don't recommend this for various reasons. So, those are some of the guardrails that are being put in. But I think that and there have been some recent regulations on AI systems. I think it's more difficult to implement AI systems into a hospital system due to you need to and there likely are going to be more regulations on it going forward. So, knowing, okay, trying to know how is it being implemented and what are the methods for keeping track of its performance and how it's continuing to how you're checking it and monitoring it, knowing how to keep track of that is important. But that's an area that I still need to read and learn more about. So, I can't be the most helpful with that question right now. And then, you know, I think in terms of who to consult, it would be, you know, if you have an idea of something you think would be helpful for your hospital system or maybe your practice, thinking about is there an informatics person there, like maybe your CMIO that you could talk to more about it because they likely are thinking about this and working on this and could share some ideas or thoughts on that. Then Dr. Bell says Apple does not do a BAA. And then we have an anonymous attendee saying is there an EHR advisor? No, not currently. But I think a lot of the tools that you use on the app advisor can similarly be translated. And we are working on creating maybe an AI advisor that's in the works. Is there a good use case for an AI scribe that records mental health sessions and translates them into notes? Most AI scribes are very primary care note driven. Yeah. So, that's something, too, that, you know, I don't have any off the top of my head to share, but I think that you raise an important issue is a lot of these are primary care driven and psychiatry. These haven't been built for psychiatry. So, we'll probably need to test them out. But also consider kind of the with AI scribes, if a company is able to tell you that they are I think you still need to have like a BAA in place if you're going to utilize an AI scribe. There are some apps that have AI scribe. They say, okay, here's an app. You download it. We'll listen in on your conversation. We'll help you write the note. But you have to really look at their privacy policy. Because there is one that I saw that said, you know, we're HIPAA compliant. But then when I looked into it more, they were not. They sent all the data to, like, a third party. And it did not have the protections in place that it needed. And then furthermore, they were pulling data off from the providers for different advertising and marketing uses. And so, you have to be careful about what apps you're using and you're using and look into them to see what's in it for them. Especially if it's a free platform. All right. Well, thank you for coming to this talk. I hope I was able to share some useful information for you. And feel free to reach out if you have any more questions. And I hope to see you guys at the annual meeting in May.
Video Summary
Darlene King, an assistant professor at UT Southwestern, explores the ethical considerations of using AI in mental health care. She outlines AI's historical development, including periods of rapid advancements and slow progress. AI's accessibility has increased significantly with tools like ChatGPT, facilitating broader use. In clinical settings, the risk levels associated with AI vary, from administrative tasks to direct clinical decisions. King emphasizes that AI systems can inherit bias from data, affecting their output and accuracy. Current liability in using AI lies with clinicians, raising questions about shared accountability.<br /><br />King discusses AI's potential for harm, such as data leaks, misinformation, and patient safety concerns. She highlights examples of AI misuse, like chatbots influencing sensitive behaviors, and ethical issues like copyright infringement by generative AI models. Additionally, studies have revealed AI's automation bias, leading to unwarranted trust.<br /><br />King advises caution when integrating AI into healthcare, urging clinicians to remain informed about potential biases and hallucinations. She stresses the importance of HIPAA compliance and securing patient data. The session ends with recommendations for assessing AI's clinical utility, emphasizing the need for mental health professionals to actively engage in the technology's development and deployment.
Keywords
AI in mental health
ethical considerations
AI bias
patient safety
automation bias
HIPAA compliance
AI liability
data security
clinical utility
×
Please select your language
1
English