Augmented Intelligence in Psychiatric Education
Video Transcription
Hi, I'm Darlene King. I'm an assistant professor at the University of Texas Southwestern Medical Center in Dallas, Texas, the behavioral health clinical informaticist at Parkland Hospital in Dallas, and the current chair of the APA Mental Health IT Committee. I'm happy to be here today to talk with you all about augmented intelligence in psychiatric education, and we'll go into some different considerations and guidance. To get started, we'll introduce the topic, talk about how students are using generative AI, how educators are using generative AI, and then what we can do about this and what action steps we can take. Finally, we'll wrap up and have Q&A. To start, you can now consider generative AI as multi-modal AI. Many of us are familiar with, say, ChatGPT, where you can type in text. You can also ask it to draw you a picture of a mini goldendoodle, and it will generate an image. But generative AI is becoming multi-modal: you can talk to it, and it will generate text from your audio. It can produce audio. It can produce video; you've heard about deepfakes, or you can upload, say, an outline of content and ask for an introductory video on depressive disorders based on that outline, and it will generate a video that adheres to that theme. You can generate images, you can generate text, and so this is a world of multi-modal AI, with these tools right at our fingertips. When these tools first started coming out, I was using them a bit, and I want to share a personal anecdote of when it really became real to me that this is starting to enter academia and our training programs. One night, around 6 p.m., I got a call from a student, and they said, Dr. King, will you write a letter of recommendation for me?
It's due tomorrow, and I'm going to send you a draft of the letter. So, first off, I'm still very early in my career, and this was the first time someone had asked me for a letter of recommendation. I remember being a trainee, coming across a program and realizing, oh no, the application deadline is next week. So I had some questions about this, and I told the trainee: this is a really tight turnaround, and I don't know if I'm going to be able to do this for you, but I'll take a look at your draft. I opened the draft, and the first thing I saw were placeholders. It said, "To whom it may concern, I am ready to wholeheartedly recommend Dr. [Applicant's Last Name]," exactly like that. I paused and thought, this was written by generative AI. This was my first encounter with a student providing me material produced with generative AI, and at that point, I figured there's no going back, so we might as well get on with things. Two lessons came out of this experience. First, if you're going to use AI, you need to do it correctly, and what could have helped the situation was if the trainee had been transparent with me about the use. Second, review all the output. So if you're going to use generative AI, use it correctly by reviewing all the output, and think about being transparent. If this trainee had told me, "Dr. King, it's a short turnaround, and I'm going to use generative AI to generate the letter," I would have been able to think about it in that time and decide: do I feel comfortable putting my name on something that was not written by me, that was generated by AI? This experience, and I think part of this is being early in my career, where every academician has to think about how they want to handle and approach letters of recommendation and their scholarly activities, made me think about my personal values with respect to AI use and academic endeavors, and set some guidelines for myself. Now I say I need at least a month before I can agree to write a letter of recommendation for someone, and I've decided I want to write letters for people I really know, so that I can write the letter myself and not have to depend on external tools. But this also got me wondering how other students, educators, and institutions were approaching AI in psychiatric education. And this wouldn't be a talk about AI in academic psychiatry if I didn't bring up chatbots passing the USMLE: we saw ChatGPT pass the USMLE, and Med-PaLM 2 passed the USMLE as well. In Academic Psychiatry, we had an article where a resident wrote, "I had taken Step 2 six months earlier, a feat that took me hundreds of hours of study, managing my stress for weeks, and a toll on my waistline. Even though I had scored higher than ChatGPT, I felt a sense of dread that it had achieved this in real time without specialized training. Could my expertise and effort be equaled or even trivialized in the future? Even so, I felt excited about the potential of AI specialized for the field of medicine and psychiatry." So there is this excitement, but there's also uncertainty and questions about what the future of AI in psychiatry is going to look like. Now let's go into student use of generative AI.
Some things we've seen in the news: there have been many reports of students using generative AI to cheat. One student turned in an AI-written essay and got caught. We have students wearing smart glasses on exams to capture the problems. And one student went so far as to build a camera that looked like a button on his shirt, connected to an earpiece, with a router in his shoe that connected the camera to the internet and communicated with a large language model. When he saw a question, the camera captured it and relayed it to the generative AI, which then spoke the answer into his earpiece. He got caught, but his technique for using generative AI to cheat was quite elaborate, and I wonder how much time he spent building it that he could have spent studying. So we're seeing a bunch of news instances of students caught cheating with these new AI tools, but what do general education data show? Has cheating become more rampant and more likely? Well, there is a Challenge Success study at Stanford that has been surveying high school students for the last 15 years, so they captured data both before ChatGPT existed and after ChatGPT came onto the scene. They found that, on average, around 60 to 70 percent of students admit to engaging in at least one cheating behavior in the past month, and this statistic has remained consistent for the past 15 years. These surveys were anonymous, and they asked about specific behaviors. They didn't ask outright, do you cheat? They asked, do you ever copy text from online and use it in an essay?
So they asked about certain behaviors to gauge whether the students were cheating or not, and they found that after the release of ChatGPT, the frequency of cheating did not increase and remained about the same. What it really comes down to, they said, is thinking about what causes cheating and improper use of AI tools in the first place, and they saw that less cheating happens when students feel a sense of belonging and connection and when they find purpose and meaning in their classes. So as we think about residency or medical student education, how can we continue to foster a sense of belonging and connection, and make sure our trainees really find purpose and meaning in the work they're doing? Then a survey by the online magazine Intelligent of a thousand U.S. college students found that 30 percent of college students have used ChatGPT on written homework, and 18 percent of those surveyed used AI on more than half of their assignments. When we think about generative AI, there are some potentially problematic uses: using it to write an entire essay that someone then passes off as their own; using it to write patient case reports when that's your assignment; using it to write clinical assessments and plans from the very beginning of medical school, which takes away your ability to formulate and use critical thinking skills because it's already doing that for you; and using it to complete assignments, for example, if you're told to read an article for journal club and you have the generative AI read it and give you a summary. Clinical uses are additionally problematic if trainees are using public-facing generative AI apps and providing them with patient information.
That is another potential problematic use. A guiding question I've found helpful is: do I have any reservations about being fully transparent with how I plan to use generative AI for this task? If you have no issue being transparent, then you're likely not using it in a problematic or unacceptable way, and it's more likely an acceptable use, such as simply editing a document. More on problematic uses: Moriel and colleagues, in a recent Academic Psychiatry article, say, "Furthermore, in using the example of asking an LLM to create a differential diagnosis or write a reflective essay, students are not strengthening nor being assessed on their ability to think critically, which is a core skill in the practice of medicine." Now, residency education is training that provides the knowledge, skills, attitudes, and other attributes necessary to practice medicine and ultimately become a psychiatrist. It's guided by the ACGME psychiatry milestones, which organize competencies into a developmental framework, and it's usually a time of intense training and vast exposure to clinical care, which I'm sure everyone looks back on and remembers fondly, in an environment that provides feedback and guidance through supervision and clinical competency reviews. During this time, a lot of learning happens, and what do we know about effective learning? There's a book called Make It Stick: The Science of Successful Learning that highlights some key findings from scientific research on learning. Some difficulties during learning help make the learning stronger and better remembered. Just reading through things over and over, highlighting, underlining, is a passive type of learning and not very effective, but learning that is somewhat difficult can make that learning stronger and better remembered.
Learning that's effortful changes the brain, making new connections and increasing intellectual ability. The harder you work at something, the more likely you are to learn it, because it was effortful. So actively doing flashcards, versus just rereading a book, is going to be more effective because it's more effortful. We also learn better when we wrestle with a problem before seeing the solution. For example, you have a patient you've never encountered before in training, and you have to think through everything you've known and seen in the past and synthesize that to come up with what to do now. At that point you don't necessarily have the solution, but you put a plan into place, then staff with your attending and get feedback on whether it's actually a good solution. Being able to wrestle with the problem before seeing the solution better ingrains what we learn and know. Students should also strive to surpass current abilities: always asking, where am I at, reassessing, and working on progressing further. That goes back to the ACGME milestones, which outline a developmental progression throughout training that residents can follow and mark their track through. And setbacks provide essential information needed to adjust strategies and achieve mastery. Think of getting asked a really difficult question on rounds and stumbling, not knowing the answer, then going back and figuring it out; after that mortifying experience, I never forget the answer to that question. Setbacks provide essential information that helps one achieve mastery and get even better.
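The flashcard and spaced-review ideas above can be made concrete. Here is a minimal sketch of a Leitner-style spaced-repetition scheduler; the `Card` structure and the box intervals are my own illustration, not something from the talk, and real flashcard apps use more refined algorithms:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Review intervals (in days) for each Leitner box: newer/harder cards repeat sooner.
BOX_INTERVALS = [1, 3, 7, 14, 30]

@dataclass
class Card:
    front: str
    back: str
    box: int = 0                        # 0 = newest/hardest box
    due: date = field(default_factory=date.today)

def review(card: Card, correct: bool, today: date) -> None:
    """Move the card between boxes based on recall, then reschedule it."""
    if correct:
        card.box = min(card.box + 1, len(BOX_INTERVALS) - 1)
    else:
        card.box = 0                    # a miss sends the card back to box 0
    card.due = today + timedelta(days=BOX_INTERVALS[card.box])

def due_cards(deck: list[Card], today: date) -> list[Card]:
    """Effortful retrieval: only quiz cards whose review date has arrived."""
    return [c for c in deck if c.due <= today]
```

The point of the sketch is the principle from Make It Stick: retrieval is spaced out so each review stays somewhat difficult, rather than rereading the same material back to back.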
These are some things we know about effective learning that I think are helpful to keep in mind as we think about how to utilize generative AI as a tool to promote effective learning, instead of a tool that just provides us with the answers. We know there has been an educational transformation underway for a while now, trying to move past lecture-based learning and focus more on team-based or problem-based learning: learning that is more active and follows more of the principles I just described. So another guiding question is: as I'm using generative AI for whichever application or purpose, am I using AI as a tool for effective learning? For students, I think these are good questions: one, do I have any hesitation about being transparent about the way I'm using AI, and two, am I using it to help me learn more effectively? If the answer is no, then thinking about how you could use it in a more productive way could be more effective for your future learning. Now, what would be ways to use AI as a tool for effective learning? We're going to go through a few examples. These are not exhaustive, just some high-level examples to get you thinking about things you may want to try or may find helpful. As I describe them, if you have any other ideas, feel free to share them in the chat, and we can turn this from lecture-based learning into more of a collaborative discussion. One example would be using generative AI to help you with brainstorming: "I'm working on a project to improve patient educational handouts for mental health and need some fresh perspectives. Could you help me brainstorm ideas for what topics to make patient educational handouts for? I'm looking for ideas that are practical and scientifically
backed." And it says: certainly, here are some practical, scientifically backed topics that could be impactful for mental health, and it gives me a list. So this is an example where you give it a prompt, ask for brainstorming ideas, and it helps you generate them. Next, generating a quiz from a reading. Say you're given an article to read, and you read it, but you want to make sure you understand it and will be prepared for a discussion about it later. You could ask AI to generate a quiz: upload the PDF and say, "I'd like you to summarize the attached article in three sentences and then generate a 10-question quiz based on information in the article." If you want multiple choice, say "create a multiple-choice quiz"; if you want short-answer, open-ended questions, you can be more vague, like this prompt was; and if you want the answers provided, ask for a detailed explanation after the entire question set. Here's what happened: it gave a short summary of the article, then asked questions about primary motivation (this article was about AI in psychiatric education) and went on asking questions based on the topics in the article. This can be a good way of doing active learning: quizzing yourself, making sure you understand what you just read, and talking out loud to confirm you understand the material. Another thing AI can help with is editing content and suggesting new ways of writing things. It can also help with creating an outline, which can be used to build a study plan and see what information you should cover and think about. For example: "Learning about depressive disorders. Please create a general study outline I can follow to make sure I have an excellent grasp of major depressive disorder. List any useful
references you think I should review that will further facilitate my learning." And it responds: absolutely, here is a general study outline to help you. It went into the introduction, diagnostic criteria, and on and on. I'm not showing the entire thing here or reading too much into the details, but this illustrates using it to create an outline. We're going to go more into the limitations of using AI to study, but these are just examples of things you can do and then ask for assistance with. Another use is study-buddy prompts, such as: "Engage me in a Socratic dialogue where you ask probing questions about my responses and challenge me to think deeper about the concepts and principles behind them." It can help generate flashcards: you can provide an entire document of, say, questions and answers and it will generate the flashcards for you, or you can say "generate flashcards on this topic" and it will come up with the flashcards it thinks would be best. You can ask it to schedule reviews at spaced intervals to maximize retention. You can do explain-it-back: "Teach me about major depressive disorder and then allow me to explain it in my own words; highlight any gaps in my understanding." So you could get into a conversation with the AI to teach different things and highlight gaps. Expanding concepts: "Help me expand on my topic by connecting it to related fields or exploring deeper layers." And case studies and hypotheticals: "Create a real-world scenario or case study that requires me to apply concepts I've learned; use scenarios that involve troubleshooting, decision making, or ethical considerations." These are different prompts you could use that may engage you and help you get into that more effective learning using AI. Now, we'll go into limitations after I talk about one other aspect, but it's important to keep in mind that not all the information you currently get
from large language models is going to be accurate. You may be having a conversation where it explains some gaps in your understanding, but that can be a bit risky given the potential for it to generate misinformation and inaccurate data; we'll talk more about that. Now, how will digital scribes and AI clinical decision support impact resident training? With AI assistance for clinical documentation, it's very likely (some places are already piloting this, and you may be using it at your institution) that it will start generating the HPI and initial plans from the clinical interaction. The clinical interaction is recorded, that audio goes into an automatic speech recognition system that generates a transcript, and the transcript is processed with various AI techniques to create an HPI and offer suggestions for a potential plan and medication treatment options. So the whole process we're currently very familiar with, seeing a patient, collecting information, working on our note, writing out our assessment and formulation, using critical thinking skills to come up with a differential, and then thinking about the proper treatment, involves really important critical thinking skills. Now that we have AI in the loop, what do we think the impact on clinical education will be? What it may be is that the information gathered during the clinical interaction becomes very valuable, because if the history was not gathered effectively, then the generated HPI is not going to be very good. So there will likely be a big emphasis on improving clinical interview skills to receive high-quality information from a patient, which can then result in higher-quality notes, and on shifting the focus from documenting our clinical thoughts to reviewing AI output and making sure it is correct. And as we get
further on in this talk, what we'll see is that it can be very challenging to review AI output; you have to have a very strong clinical foundation to make sure that what you're reading is accurate and describes the situation. So what are some current limitations to know when using generative, multi-modal AI in education and training? The biggest one is the risk of misinformation: you ask a question, and it provides a very confident answer that's entirely bogus. And there's a question here: large language models and multimodal models are built on different types of data, so what if our model had more scientific data? For instance, GPT-3 was built largely off web pages, books, and news. What if we used a model built off more scientific data, say, for psychiatry and medicine, models built off medical and psychiatric data? Would that be more accurate? Would we get higher-quality output? In the instance of Galactica, a model trained mostly on scientific data to help with research, it lasted only three days, because what they found was that the hallucinations it produced, the misinformation it made up, sounded very plausible and very scientific but was factually wrong, and in some cases also highly offensive due to underlying biases that may have existed in the training data. So what does this mean for students? Large language models perform worse as material becomes more and more specialized, so there's an increased risk of misinformation. But as we're learning and going through our clinical studies, we need to learn those more specialized, nuanced things, and I don't know if the current large language models out there are going to be able to fulfill that need in psychiatric training. Generative AI output always sounds super confident, too, and it can be difficult for
students to discern between correct and incorrect knowledge if they are using multimodal AI to learn or study, because most of the time it's going to mix accurate and false information together. As you're reading, part of it may sound really good, but embedded in it may be things that are not accurate and could be wrong, and it can be very hard to distinguish between the two. Then there's the risk of bias in large language model and multimodal AI output. Some examples of bias we see in AI: image generators that reinforce gender bias about the role of women in the workplace, for instance showing only older males in certain professions while women are never depicted; and facial recognition having higher error rates and false positives for people with darker skin. There was a paper I came across that used AI to study the empathic responses of medical students talking with a patient with addiction. It would do facial recognition on the medical student and give them tips, saying, you know, you should make better eye contact, or try to relax your forehead; tips on how to appear more empathetic in a clinical interaction. But when you look at the data their facial recognition software was trained on, it was an open-source facial recognition model that wasn't trained on a very diverse population of faces, and the limitation was that it wasn't able to provide guidance to people with darker skin. So if we're thinking about utilizing something like that as a tool in education, it would not provide as good feedback to students who didn't fit the original criteria. That's an example of bias in AI, and why you have to ask what a tool was trained on, to make sure it's not garbage in, garbage out. There was a past Amazon hiring algorithm
that favored applicants who used words like "executed" or "captured," which were more commonly found on men's resumes, and they had to turn the algorithm off because it was excluding a lot of women from being considered for jobs. LLMs have been shown to describe schizophrenia in dark and negative ways, and even the term "hallucination" can carry some stigma. There are image generators depicting mental illness with negative images, and a health care algorithm that returned lower-accuracy results for Black patients than for white patients. These are some examples of bias in AI that we need to be aware of, especially if you're using AI to help study, because some of the output it generates may not provide the full picture necessary when building up the knowledge base needed in psychiatry. So what is the potential impact of these different limitations of generative AI? Generative AI now amplifies the need for strong clinical knowledge and the ability to critically review output; problem-based learning and knowledge management are key. More research needs to be done looking at error rates in psychiatric exercises for major topics, and tools need to be developed, perhaps pairing generative techniques with other techniques, that can provide error checking or a confidence rating about output accuracy, so that if someone is using this to study, they can feel confident the conversations and information being provided are accurate. Because right now the risk is this: it's really cool and exciting to be able to do Socratic questioning or to have it point out gaps in your knowledge base, but you want to make sure those are accurate gaps in your knowledge base. So that is something to keep in mind. You want to know the limitations, and that you can't always just blindly trust the AI output.
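The digital-scribe workflow described earlier (audio to speech recognition to transcript to draft note), and the review step this section keeps emphasizing, can be sketched as a short pipeline. Everything here is hypothetical scaffolding: `transcribe` and `draft_hpi` stand in for a real ASR service and a real LLM drafting step; the point is the shape of the pipeline and the explicit human sign-off:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DraftNote:
    hpi: str
    plan: str
    reviewed: bool = False   # a clinician must review before the note is final

def scribe_pipeline(
    audio: bytes,
    transcribe: Callable[[bytes], str],    # stand-in for an ASR service
    draft_hpi: Callable[[str], DraftNote], # stand-in for an LLM drafting step
) -> DraftNote:
    """Audio -> transcript -> AI-drafted note, always flagged as unreviewed."""
    transcript = transcribe(audio)
    note = draft_hpi(transcript)
    note.reviewed = False    # AI output is never final until a human signs off
    return note

def sign_off(note: DraftNote, clinician_approved: bool) -> DraftNote:
    """The critical step the talk emphasizes: reviewing AI output."""
    if not clinician_approved:
        raise ValueError("Draft rejected; revise before signing.")
    note.reviewed = True
    return note
```

Note the design choice: the draft is born unreviewed no matter what the model produces, which mirrors the point that a strong clinical foundation is needed to catch errors before anything is signed.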
So now let's go into educator use of generative AI. Some main areas we'll look into and explore are education, research, and mentoring. So what do we do about students who are using AI? Well, AI is here. Students now have AI tools at their fingertips: if they're taking a test, they could go to the bathroom, look at their phone, ask the question, and go back. So AI is out there, widely accessible, and used. AI tools are also being implemented very quickly into hospital systems and electronic health records. For instance, we just went over digital AI scribes for documentation, but AI is also being used in many other ways, such as generating summaries of a patient chart, generating the hospital course for discharge summaries, and generating responses to patient in-basket questions. And there's an idea of, okay, what if for assignments we just run them through AI detection software? Then we'll know who's cheating and who's not. But AI detection software is not immune to errors either, and we're seeing people falsely accused of cheating when they haven't been. Students who have been in this situation have said, sometimes I spend more time making sure my work doesn't get mistaken for AI than I do on the actual assignment. That could mean rewording and shortening their sentences, videoing themselves as they write their entire essay, or making sure they have timestamps: taking a lot of extra steps so that if they are accused of using AI, they have proof they didn't. Because some of the consequences these students have faced have been quite severe and required them to go through a lengthy appeals process. And with AI detection, we see it can do okay if the entire thing is written with AI.
But it doesn't do so well for something that was maybe first drafted with AI but then edited very heavily, with new things interwoven, making it a mix of both. So we go back to another guiding question: am I using and teaching AI as a tool for effective learning? Knowing that AI tools are out there, people can choose how they want to use them, but in education, if we provide some guidance and support for what we think acceptable use looks like, then we may be able to start thinking of ways these tools can be used to improve learning and make it better. So on the educator side, how can we use AI for effective learning? Here is an example where AI can be used to help with curriculum development. You can send it some questions: What are some best practices for teaching this topic? What are some creative ways to teach this topic? It can be a starting point for getting ideas about what people may find fun and interactive. And here's an example. The prompt was: "Imagine you're a psychiatry educator about to hold a 30-minute interactive session. Create a lesson plan for a 30-minute interactive session teaching psychiatry residents about opioid use disorder diagnosis and treatment." I'm just going to show you the outline of what it created. In 16 seconds, it came up with this. We're not going to go into all the details or dive into how accurate it is, but the idea is that it can be a starting point for you to think through: I'm going to give this talk; what would the AI suggest? Maybe it has some ideas I like, and I can build on and modify them from there. So it can get you started and provide an initial skeleton of ideas.
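Prompts like the lesson-plan one above can be kept as reusable templates, so the same session structure works for different topics and audiences. A minimal sketch in Python; the template wording and field names are my own illustration, not part of the talk:

```python
from string import Template

# A reusable educator prompt; $-placeholders are filled in per session.
LESSON_PLAN_PROMPT = Template(
    "Imagine you are a psychiatry educator about to hold a "
    "$minutes-minute interactive session. Create a lesson plan for a "
    "$minutes-minute interactive session teaching $audience about "
    "$topic. Include learning objectives, an interactive activity, "
    "and a brief knowledge check."
)

def build_lesson_prompt(minutes: int, audience: str, topic: str) -> str:
    """Fill the template; the result is what you would paste into an LLM."""
    return LESSON_PLAN_PROMPT.substitute(
        minutes=minutes, audience=audience, topic=topic
    )

# Reproduce the prompt from the talk's example.
prompt = build_lesson_prompt(
    minutes=30,
    audience="psychiatry residents",
    topic="opioid use disorder diagnosis and treatment",
)
```

Keeping the prompt as a template also makes it easier for a faculty group to review and refine the wording once, rather than retyping it for every session.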
Now, one thing I've experienced is that when you ask it for ideas, sometimes it gives really similar ideas each time, which can lead to not a lot of variability. That's another thing to keep in mind: I think humans are very creative, and your brain can always come up with new, interesting things that maybe ChatGPT won't. Another thing it can do is a role-playing scenario. You could tell your resident, trainee, or medical student: we have a patient coming in with some symptoms, but they're unsure of the root cause. Your goal is to build rapport, explore their symptoms, and assess any background factors that may contribute to their anxiety. You should demonstrate empathy, active listening, and skillful questioning. Then you give them a little guidance: start with a warm, nonjudgmental tone; use open-ended questions; explore their thoughts; try to identify any patterns; and reflect on how to maintain rapport. Those could be the instructions you give the resident. The prompt for the large language model would then be: you're a 20-year-old patient; you've been feeling increasingly anxious at night and you're not sure why, but you suspect work stress might be part of it. You tend to be guarded at first but will open up if you feel understood. You recently went through a breakup and have been struggling with feeling isolated. You don't know if it's relevant, but you've noticed you're sleeping poorly and often feel tense. And it will say, yes, I can play the role of the client; start the session. Then the trainee or resident starts asking the patient questions: How can I be of help? When did you first start noticing this anxiety? So this is an example of a role-playing scenario. You can also ask the LLM to play out the scenario itself, like give you the solution.
What would a good interview be, and what would that look like? It can give you some suggestions. Now, the reverse of this is that you could take this output, go to the resident, medical student, trainee, or class, and say: let's critique this and see what errors we can find, or whether there are better ways of approaching this patient. So, for instance, quiz students to find errors in AI clinical output. Since we will be reviewing more and more clinical output from AI, we need to get good at finding those errors. This is an example where I said: please describe the APA app evaluation model. Sometimes it gets this right, but this time it said APA was the American Psychological Association, and I corrected it: it should be the American Psychiatric Association. It did do a good job with the rest of the core components, though. So this is an example where you ask the question, it provides output, and that can be an exercise in and of itself: find the errors, and how did it measure up? Now, AI use in research. Since we're talking about academic psychiatry, the journal Academic Psychiatry encourages early-career researchers to create scholarly pieces from start to finish without LLM assistance, to learn the requisite skills necessary to write, review, and edit manuscripts. And in journal editor guidance on the use of AI in scholarly publishing, they say large language models cannot be listed as authors on papers; the final responsibility for writing, reviewing, and editing a paper rests with the human authors. Academic Psychiatry's position on the use of AI was recently published in an article on AI in medical education focused on large language models. They encourage authors to refrain from using large language models, including for generating topics or writing any part of a manuscript. They permit non-substantive usage such as grammar correction or editing.
But any use of AI has to be disclosed at the time of submission, either in the methods or elsewhere, and it should be clearly stated in the cover letter to the editors and in the acknowledgment section. Authors should not use AI-generated images. And finally, given legitimate concerns regarding copyright, privacy, security, and confidentiality, peer reviewers should not enter any manuscript into an LLM. So: do I have any reservations about being fully transparent with how I plan to use generative AI for this task? In research and scholarly activities, asking this question can be helpful, because if the answer is no, then you're likely using it in ways that would be seen as acceptable. All right. Now, we have all this information. What now? What can be done? Different AI policies at institutions of higher education have been created, and they range from prohibiting use all the way to encouraging use. Some places say: we strictly forbid the use of online AI applications that haven't been approved. Others leave it up to individual professors to outline in their syllabus the proper use of AI. And other universities have gone so far as entering into enterprise-level licenses with a company like OpenAI to provide an enterprise-wide interface for, say, ChatGPT that students and faculty can use, one that is secure and protects intellectual property. They even allow its use for clinical information because of the agreement they've set up with the company on data management to ensure that privacy, security, and intellectual property are maintained. So we see there's a wide variety of policies out there. And then in psychiatric education, updating the ACGME milestones can be helpful, to include expected proficiencies in the ethical and professional use of AI and digital technologies.
That means understanding the limitations of AI in clinical care, reviewing and editing AI-suggested output, effectively communicating with patients about AI use, and providing a framework for understanding AI use: consenting patients for its use, inquiring about their own personal use, and correcting any misinformation they may have picked up from their interactions with AI or chatbots. Lifelong learning of technologies and scientific literature in the digital world should also be encouraged. Here is an example of informatics competencies based on the ACGME framework: we have the competencies, and then opportunities to incorporate informatics along the way. For instance, under patient care and procedural skills, how to look up information in smartphone reference apps, or exposure therapy through virtual and augmented reality devices; under medical knowledge, understanding how information systems and processes can optimize or impair workflows. So these are some ideas on how informatics can be included across the different ACGME competencies. Developing curriculum is also necessary to incorporate education on AI into psychiatric education and training programs. For instance, teaching AI from conceptual foundations, a roadmap for psychiatry training programs: this is an example of a curriculum that spans six weeks, starting with an intro to AI and deep learning, along with different learning objectives. It mixes lectures with small-group work and a discussion forum, and it incorporates AI ethics as well. For applied ethics and future challenges in AI for psychiatry, learners revisit proposals from week four, where they had to propose a new use for AI models, and ask: are there any future ethical risks that could be associated with this case? And then another thing that could be helpful is training more psychiatrists in clinical informatics.
You know, in new models of care such as collaborative care, psychiatrists oversee large populations of patients and utilize population data, so it's important to understand the assumptions, limitations, and strengths of how population data is generated. There's a need for psychiatrists to understand new technologies in order to properly evaluate them and determine their use in clinical care. Trainees need to be able to make informed decisions to lead the adoption and promote judicious use of novel technologies. And so it's very important to understand some of the technical nuances behind these technologies. As exciting as they may be, we still want to protect patient privacy and security and be effective at our jobs. And then, wrapping up, some final points. We need to educate learners and educators on the proper use of AI in education, to foster continued learning and prevent harm. The idea is that AI is out there; it's here. We have access to multimodal generative AI at the tips of our fingers, on our phones, highly accessible. So how do we work with that? Instead of prohibiting it, saying no, or blocking it off, how do we work with it and encourage effective, productive use while continuing to support learning and prevent harm as much as we can? Programs need to create policies and provide guidance to students and faculty on the acceptable uses of AI, because with guidance and policy, students know where to look when they have questions, and it helps to have things in writing so they know what the expectations are. And then there is a need to train more psychiatrists in clinical informatics, to help guide our specialty into the next generation in a way that will enhance care and maintain clinical integrity. All right. Well, that is all the material I have. Does anybody have any questions based on what I shared today that you'd like to know more about? Now, one thing I will do is...
Let me see. I think the thing with using generative AI to help with studying and learning is that we need ways to minimize errors and incorrect information being shared. If that can be better mitigated, I think there's a lot of potential in this space for effective learning. And experiences could be more personalized: imagine an exam that adapts itself as someone takes the test, so it better matches and assesses their knowledge level. Things like that, where we could use generative AI techniques in combination with other computing techniques that provide more safeguards to make sure the information received is accurate. And then something I didn't get into: there are custom GPTs people have made, and there are so many of them. I would look at them in a similar way to how you might look at apps: you have to see who made it, whether you trust that person, and whether you want them getting some of your information if you use these custom GPTs. But you could design a custom GPT that would, say, help teach a resident cognitive behavioral therapy. It could say, okay, I'm going to be the patient, you be the therapist, and then they switch roles. And you can upload, say, cognitive behavioral therapy manuals to the custom GPT so it knows what guidelines to adhere to and stays more on track. So that's a whole other area we didn't dive into: creating custom GPTs that could further facilitate learning, teaching, and educational materials. All right, well, it was really good to be here today. I appreciate everyone who took the time to learn about AI and psychiatric education.
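As an aside not from the talk: the adaptive-exam idea mentioned in the answer above can be sketched without any AI at all, using a simple item selector that raises or lowers difficulty based on the previous answer. This is a hypothetical, minimal staircase scheme for illustration only; real adaptive testing (e.g., IRT-based computerized adaptive testing) is far more involved.

```python
# Minimal sketch of an exam that adapts to the test-taker: a 1-up/1-down
# "staircase" over difficulty levels. Purely illustrative; the level range
# and step size are arbitrary choices, not from the presentation.

def next_difficulty(current: int, answered_correctly: bool,
                    lowest: int = 1, highest: int = 5) -> int:
    """Move up one level after a correct answer, down one after a miss,
    clamped to the available difficulty range."""
    step = 1 if answered_correctly else -1
    return max(lowest, min(highest, current + step))

# Simulate a short run: correct, correct, wrong, correct.
level, trace = 3, []
for correct in (True, True, False, True):
    level = next_difficulty(level, correct)
    trace.append(level)
# trace is [4, 5, 4, 5]: difficulty rises with success, drops after a miss
```

The point of the sketch is that the adaptive loop itself is ordinary deterministic logic; a generative model would only be one possible source of the questions served at each level.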
I hope this was informative for you, and that you can ask some of those guiding questions to help your trainees, or to help yourself with learning, as you go forward. If you have any other questions, feel free to reach out to me via email at darlene.king at UTSouthwestern.edu. And be on the lookout: around February, there will be an Academic Psychiatry special edition on AI and psychiatric education that you may find helpful. Thank you, everyone.
Video Summary
Darlene Ng, an assistant professor and behavioral health clinical informaticist, spoke about the integration of augmented intelligence in psychiatric education, focusing on the potential and challenges posed by generative AI tools like ChatGPT. She noted the increasing use of AI in generating written materials and the ethical considerations for students and educators. AI is now multi-modal, allowing users to interact via text and voice, and can produce complex outputs like videos and images. Ng shared an anecdote about encountering AI-generated content in a student's draft letter of recommendation, emphasizing the need for transparency and critical review of AI-generated material.<br /><br />Students have faced allegations of cheating using AI, but studies suggest rates haven't increased since AI's introduction. AI's role in psychiatry education has sparked concern and excitement, particularly after AI like ChatGPT passed medical exams. Concerns revolve around how AI might influence learning experiences, potentially inhibiting critical thinking if over-relied upon for generating differential diagnoses or reports. Ng highlights the need for incorporating AI competency in psychiatric training, emphasizing effective learning through active engagement rather than passive reception.<br /><br />While AI offers potential benefits in education, research, and mentoring, there are significant limitations, including inaccuracies, biases, and the risk of misinformation. Ng underscores the importance of educators guiding proper AI usage, developing institutional policies, and anticipating integration of AI in clinical settings. Additionally, training psychiatrists in clinical informatics is deemed crucial to navigate future challenges effectively. Ng advocates adapting educational frameworks to include AI proficiency, ensuring that AI enhances rather than hinders medical education and practice.
Keywords
augmented intelligence
psychiatric education
generative AI
ChatGPT
AI ethics
AI in psychiatry
AI competency
medical education
clinical informatics
AI limitations