Introduction to AI: Making Sense of AI Chatbots
Video Transcription
Good morning, everyone. My name is Vishal Madan, and I serve as the Chief of Education and Deputy Medical Director at the APA. Thank you all for joining us today for our virtual immersive event titled Exploring Artificial Intelligence in Psychiatric Practice. As the world of AI has grown exponentially in this past year and a half, we've continued to think about practical applications of AI in real-world clinical practice and in psychiatric education. Are there ways we can make our clinical processes more efficient? Can we reduce the redundancies in clinical care and documentation and help reduce burnout? Are there tools we can utilize to enhance outreach and delivery of psychiatric care? Are there ways we can make psychiatric education more personalized and learner-centric? And can we have resources available at the point of care for the busy clinician? So today, thinking through some of those questions, we've meticulously crafted a program that delves into the intersection of AI and psychiatric practice. As you will note, from foundational insights into AI and chatbots to nuanced discussions on leveraging generative AI, enhancing patient and caregiver engagement, identifying risks for neuropsychiatric disorders, and exploring AI applications for therapy, all of this is in our agenda, which promises to be a rich exploration for one and all. I'm grateful to all our faculty, who've truly worked hard to make this educational opportunity available to all of our members and audience. As participants, you will have access to recordings of all sessions, conveniently housed in the APA Learning Center, for up to three years to review. Without any further delay, I'm delighted to introduce our first speaker, Dr. Stephen Heiler, Professor Emeritus in Psychiatry at Columbia University, to kick off our exploration. Dr. Heiler, may the force be with you. The floor is yours. Thank you. Thank you very much.
I hope everybody can see and hear me as well as the PowerPoint, and I want to say good morning to all of you early risers. As Dr. Madan mentioned, my name is Stephen Heiler. I'm a psychiatrist at Columbia University Medical Center, and I'm going to be talking about making sense of AI chatbots. A shout-out to Dr. Madan, who you just saw, our Director of Education at the APA, and also to Shazia Khan, who's in charge of our e-learning content and is the strategy manager for online learning, for getting me set up here. Just to mention, I have no financial disclosures or conflicts of interest. I don't assume that any of you have prior knowledge or experience with artificial intelligence or chatbots, so this is going to be very basic, and like I said, it doesn't require any prior knowledge. I'll be starting this presentation with some basic definitions, and then I'm going to speak about available internet AI programs and options for you to try out. Also, I have an interest in personalities, and I've been playing around with the chatbots; they have many similarities, but they also have some distinct personalities, so I'll be pointing those out. For my presentation today, I'll be using mostly ChatGPT-4, though I will talk about some other options that you have. And I'll be doing some live demonstrations, showing you the use of prompts, giving a lot of examples, and hopefully I'll be able to toggle back and forth between my PowerPoint presentation and my chatbots. Without further ado, let me get started. I believe everybody probably has a printout of the slides. I will be going basically along with the slides that you have if you want to follow. There will be some deviations, as I've made some last-minute changes, so don't feel bad if it doesn't seem exact. You're on the right road. All right, so let's get started here. Firstly, what is AI?
Well, AI is artificial intelligence, and artificial intelligence is the simulation of intelligence processes by machines, especially computer systems. These processes include learning, reasoning, and self-correction. As this complicated diagram shows (and by the way, I've created these diagrams with the image capabilities of ChatGPT-4; you could do the same: put in text, tell it to make an illustration, and it pops out nicely like this), you input the data, the data gets processed through the AI, and your output becomes your decision or your action. Now, what does this have to do with things you might have heard of that involve the working brain? You're probably familiar with what's called a neural network. Well, a neural network is a series of algorithms that endeavors to recognize the underlying relationships in a set of data through a process that mimics the way the human brain operates. So what is generative AI? Well, generative AI refers to the subset of AI technologies that are capable of creating new content. This new content can include text, images, and videos, and it can resemble human-like output. As an example, this abstract artwork showcases colors and intricate patterns, illustrating the depth and complexity of innovation that's possible with generative AI technology. Basically, the AI program created this beautiful image that you see here from some prompts that were given to create it. So you've all heard of GPT, GPT this, GPT that. Well, what is GPT? GPT stands for generative pre-trained transformer, and that's a type of generative AI. It's a model that was developed by OpenAI, and it specializes in understanding and generating human-like text based on the input that it receives. So what happens is you input text as a prompt, and the GPT model responds with generated text.
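The neural network idea described above can be made concrete with a toy example. This is a hypothetical sketch (the weights and names are made up for illustration): a tiny two-layer network that turns input data into an output "decision" through weighted sums and a nonlinearity. In a real network, the weights would be learned from data rather than fixed by hand.

```python
import numpy as np

def relu(x):
    # Nonlinearity: keeps positive signals, zeroes out negative ones
    return np.maximum(0, x)

def forward(x, w1, w2):
    hidden = relu(x @ w1)   # first layer: weighted sum plus nonlinearity
    return hidden @ w2      # second layer: produces the output "decision"

x = np.array([1.0, 2.0])            # input data
w1 = np.array([[0.5, -1.0],
               [0.25, 0.75]])        # input-to-hidden weights (illustrative)
w2 = np.array([[1.0], [-0.5]])       # hidden-to-output weights (illustrative)

print(forward(x, w1, w2))            # a single output value
```

The point is only the shape of the computation: data in, layered transformations, decision out, which is the "input, processing, output" flow the diagram describes.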
So the illustration here, the image, is designed to visually convey GPT's role in understanding and generating human-like text, emphasizing its transformative capabilities in AI-driven communication and creativity. One of the things you've probably heard a lot about recently is artificial general intelligence. So what is AGI? Well, artificial general intelligence is a form of artificial intelligence that has the ability to understand, learn, and apply its intelligence to solve any problem with the same efficiency and capability as a human being. Now, this is what everybody's worried about, that this is going to take over for humans. Okay, I don't think that's going to happen. Unlike narrow or weak AI, which is designed mainly to perform specific tasks with human-like capabilities, AGI can perform just about any intellectual task that a human could do. As an example, it would be able to speak and understand languages, it would be able to learn, it could solve problems, and it can make decisions. You've probably heard the term LLM. What is an LLM? Well, LLM stands for large language model, and that's a type of deep learning model that's designed to understand, generate, and interpret human language on a large scale. The image here is designed to convey the functionality of LLMs in processing and generating human-like text. Now I'm going to get to the chatbots, but basically the chatbots were developed on top of a large language model whose training data might encompass much of the entire internet. But I'm getting ahead of myself; let's get to what a chatbot is. A chatbot is a software application that's used to conduct an online chat conversation via text or text-to-speech, in lieu of providing direct contact with a live human agent. So that's what we'll be using, the chatbot.
So when I show you ChatGPT, one of the common questions is, how does a chatbot differ from what would be considered a simple Google search? Well, a chatbot generates human-like text responses, emphasizing conversational dialogue and its interactive nature. By distinction, a Google search is a proprietary algorithm designed by Google, which retrieves and presents web-based information; that's its search function. A Google search focuses on finding and organizing data, and it provides results from various online sources. So it might provide sources from the entire internet: it could provide text, it could provide things from social media, it could provide things from videos on YouTube. There's a new guy in town called Perplexity. Perplexity is a hybrid search and chatbot program, and it's fairly new. What Perplexity does is enable you to put in a search, but like a chatbot, you can interact with it. And if I get a chance, I'll give you an illustration of what Perplexity does. Now, there are several internet AI programs, several competing ones. The most common one now, and the one I'll be using today, is ChatGPT, and that's from OpenAI. Microsoft has its own version of ChatGPT, and it's called Copilot. Copilot was formerly called Bing AI, and if I'm not mistaken, I believe you need to use it on the Microsoft Edge browser. Maybe they have a way to get onto Chrome or other browsers, but if you wanted to use Microsoft Copilot, I believe you have to do it through the Edge browser. Now, Google has a new program; their chatbot is called Gemini. Gemini is what was formerly called Bard, and for that you need a Google account, which most people have if you have Gmail. If not, you need a Google account to get onto Google's Gemini. Another new guy in town is called Claude, or Claude 2, and that's by a company called Anthropic. I'll say a few more words about that when I get to the personalities. I believe it might be up to Claude 3 now.
And I've already mentioned the hybrid search and chatbot called Perplexity. So what are the options for your account? You should know that the legacy ChatGPT 3.5, or Turbo, is basically free. An upgrade to that, and the one I've been using, is ChatGPT Plus, which uses GPT-4, and that requires a $20-a-month subscription fee. So you might ask, what do you get for the extra $20 a month? What you get is a substantially larger large language model, which has much more data in it than 3.5 and is more up-to-date in the information that it has, and you're less likely to get a busy signal when a lot of people are trying to log on. It also has some special features that you can customize that you can't with the basic 3.5. Microsoft's Copilot, which I mentioned was formerly Bing AI, is using GPT-4, and it's also free. The problem with that is it's got a limit of 4,000 characters per prompt, which limits the amount of information that you can put in a prompt and get back as an answer. 4,000 characters is a couple of pages. So if you have a long prompt or you have a lot of information, you're going to run out of space. That's why it's free. For Google Gemini, which is formerly Bard, their new version, Gemini Pro (the naming doesn't make sense to me), is free, but their expanded version that competes with GPT-4 is called Ultra, and Ultra requires a subscription. As I mentioned, the basic version of Claude from Anthropic is also free, but I believe with Claude 3 they also have a subscription model. And with Perplexity, which I mentioned is the hybrid, the basic version is free, but if you want the Pro version, that also requires an additional fee. So that seems to be the business model: the companies are getting you involved with the chatbots for free, and if you want some extra features, you need to pay an additional monthly subscription fee.
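A per-prompt character cap like the 4,000-character limit mentioned above is the kind of constraint you can work around by splitting long text into pieces and feeding them in one at a time. A minimal sketch (the function name and the break-on-whitespace strategy are my own illustration, not part of any Copilot interface):

```python
def chunk_prompt(text, limit=4000):
    """Split text into pieces no longer than `limit` characters,
    breaking on whitespace where possible so words stay intact."""
    chunks = []
    while len(text) > limit:
        cut = text.rfind(" ", 0, limit)  # last space before the limit
        if cut == -1:                    # no space found: hard cut
            cut = limit
        chunks.append(text[:cut])
        text = text[cut:].lstrip()
    if text:
        chunks.append(text)
    return chunks

long_text = "word " * 2000               # roughly 10,000 characters
pieces = chunk_prompt(long_text)
print(len(pieces), max(len(p) for p in pieces))
```

Each piece could then be pasted as its own prompt, with a note like "part 1 of 3" so the chatbot knows more is coming.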
Now, I mentioned earlier that some of these chatbots have different kinds of personalities, and I'm particularly interested in personalities. The best way I could describe ChatGPT is as your scholarly companion. It's a knowledgeable and approachable guide, eager to assist with a wide range of topics. It's designed to be conversational, making it feel like you're interacting with a friendly tutor. It values accuracy and depth of knowledge, and it aims to provide informative and nuanced responses that encourage learning and exploration. Microsoft's Copilot I would call an efficient assistant. What it does is integrate GPT into Microsoft's own ecosystem, the whole Microsoft system. I might liken it to a highly efficient office assistant: it's built to enhance productivity, offering support with tasks, emails, documents, and more within the Microsoft software suite. It focuses on streamlining workflows and facilitating professional tasks, and it embodies a practical, task-oriented helper. Google's Gemini, the competitor to GPT, I might consider an innovative explorer. The specific details are less public, but Google's approach to AI emphasizes innovation and integration with its search capabilities, which it's famous for, and the user experience. So Gemini can be envisioned as an explorer, always ready to delve into new information territories, focusing on delivering fast, accurate, and highly relevant answers. It also leverages Google's vast data and search prowess to offer insightful and context-aware responses. The final personality is Anthropic's Claude. Claude, I would say, is the ethical guardian. Anthropic focuses on creating AI that's safe, ethical, and aligned with human values. So Claude might be seen as a guardian, emphasizing thoughtful interaction, ethical considerations, and safety in conversations.
It's designed to be reliable, but with a cautious approach to information sharing and interaction, prioritizing the well-being of its users and society. In essence, Claude has much higher guardrails than the others, and you're less likely to get into controversy or trouble using Claude. Now, moving on to GPT and the dashboard, I'll be getting to this in a second and showing you the dashboard. Basically, for ChatGPT-4, they say the training data only goes up to March 2023. However, I've found that it also uses some real-time internet. And with that, I'm going to share my screen and show you a little bit about how this looks in action. So here, with a little luck, you'll be looking at my screen, which is basically the ChatGPT-4 interface. And here, what you have is the GPT-4 interface, and I have a little custom version of this called a clinical companion, which is what I've been working on. Just so you know, if you get the Plus version, you can have many different custom versions of GPT, depending upon your interests. So you could have the things I'm showing here: product officer, music teacher, image generators, et cetera. Well, let me get back to my clinical companion, and I'm going to start giving you some illustrations of what it can do. One of the first things you could do is get basic facts. You could ask it simple questions. So a basic fact might be something like, who is the father of American psychiatry? Or, I'm sorry, who is the father of the American Psychiatric Association? So what I'm doing is putting that into a prompt, as you can see, and I'm going to send the message, and we're going to get the response. I hope we're getting the response, and I hope you were able to see that. So here: Benjamin Rush, a signer of the Declaration of Independence, credited with founding American psychiatry.
One of the things you can do here is have this read aloud. Let's see if this works. So I'm asking it to recommend five psychiatrists for the position of chair of the APA DSM Steering Committee. And here, basically, I'm not getting a detailed response, because I guess it's being shy; it doesn't want to insult anybody. But it's giving you, basically, what the criteria would be. So it would be leaders in research, experienced clinicians, contributors to DSM editions, educators, policymakers, et cetera. Now I'm going to press it a little bit, and let's see. I'm going to say, oh, come on, give me the names of some real psychiatrists for the position. You never know what's going to pop up here. All right, it's giving me a hard time. It really doesn't want to give me the names here, so I'm not going to press it any further. If I did press it, it probably would give me some more names. Now, another thing that I've found the chatbots do a really good job at is language translation. So here, I'm going to ask it a simple one: why don't you translate the Pledge of Allegiance into French? I'm not even going to try to pronounce this, but basically it does a very good job with language translations. Another thing that it does a good job with is comparisons. Let's set up a comparison. I'm going to say, compare the American Psychiatric Association to the Group for the Advancement of Psychiatry, the GAP. That's the prompt that I put in. I'm going to send the message. It tells us a little bit about the APA and the GAP, and what their commonalities are and what their differences are. We have the purpose and the mission of the APA, which you're reading off the screen, and the GAP. Then it talks about the difference in membership and structure, noting the large membership of one versus the selective, invitation-only membership of the other. Finally, we have publications and output, influence, and impact. You can do a very nice comparison using GPT.
The next thing it could do: GPT is really good at creative writing. I'm going to give an example that they showed on 60 Minutes of how you can give a minimal prompt. The prompt is to write a short story, no more than 100 words, about the six-word story attributed to Hemingway: for sale, baby shoes, never used. What you get is a 100-word short story generated by ChatGPT. If you're able to read this off the screen: the tiny shoes caught sunlight in the window, the laces still pristine. Sarah placed the ad reluctantly, each word heavier than the last: for sale, baby shoes, never used. She ignored the quiet buzz of the phone as responses trickled in. Instead, she watched families stroll by, infants cradled lovingly in arms, each notification chipping away at the wall around her heart. This is cute enough that I want to see if it could read this aloud. Now, if we wanted to modify that a little bit, well, let's try rewriting that as a horror story, about 100 words, by Stephen King. Same prompt, different author. Now, this is why there is such concern among people who do creative things, like writing novels and stories, that chatbots are going to replace them. The chatbots are getting awfully good, and this is one example. Moving on here, another thing that you could do with the chatbot is brainstorming. Say we wanted to brainstorm something: what five topics should be considered for courses at the next APA annual meeting? Well, bingo, they have number one here, digital psychiatry and telehealth innovations. I want to say I had nothing to do with that answer. Psychopharm, yes, that would be popular at a meeting. Mental health impacts of climate change, very interesting. Cultural competency and inclusive care, and ethical boundaries in the digital age. This is a good example of what we might be able to get when we ask it to brainstorm.
You could brainstorm about anything: your child's birthday party, or, if you're starting a psychiatric practice, what you want to call it to make it unique, et cetera. Another very good use of the chatbots is that they're able to summarize text very well. I'm asking it here to write a one-page synopsis of AI and psychiatry. And here we have, with just that minimal prompt, a one-page synopsis of what AI in psychiatry is. It's done an awfully good job with it. It tells you what the sections are, et cetera. Now, I'm going to skip over making it understandable for a 10-year-old, but with one more prompt we could change the whole text and simplify it so a 10-year-old could understand it. The next example I'm going to show is that it's really good at creating lists. I'm going to ask it: I'm trying to lose weight; give me a list of the top 10 foods that I should eat to stay healthy. My daughter happens to be a dietician, so I'm thoroughly familiar with this. But basically, it gives you the standard 10 foods, including leafy greens, whole grains, fruits and berries, et cetera. Now, I could keep going on and on about other things it could do, but in the interest of time, I'm going to fast-forward. It could do personal essays; it could give pros and cons of lithium versus Depakote. But I do want to get on to the rest of the presentation, and I want to save 10 minutes or so for questions. So let me get to what I started to say: if you want to get better answers, the prompts that you use need to be very specific and clear. You have to give a lot of feedback; the more feedback, the better the answers. You have to be patient with your prompts, because AI is still in its infancy, with a lot of bugs. One of the golden prompts is to say "act as if," and then ask it to tell you what else it needs from you. Now, I do want to talk about some chatbot mistakes, because this is important for psychiatrists. Chatbots do make mistakes.
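The prompting advice above, such as being specific, naming a role with "act as if," and inviting the model to say what else it needs, can be captured in a small template helper. This is a hypothetical sketch; the function and field names are my own illustration, not any chatbot's interface:

```python
def build_prompt(role, task, audience=None, length=None, extra=None):
    """Assemble a specific, structured prompt following the
    'act as if' pattern described in the talk."""
    parts = [f"Act as if you are {role}.", f"Task: {task}."]
    if audience:
        parts.append(f"Audience: {audience}.")
    if length:
        parts.append(f"Length: {length}.")
    if extra:
        parts.append(f"Additional instructions: {extra}")
    # Close with the speaker's "golden prompt" follow-up
    parts.append("Tell me what else you need to know before answering.")
    return " ".join(parts)

print(build_prompt(
    role="a board-certified psychiatrist",
    task="summarize first-line treatments for generalized anxiety disorder",
    audience="a medical student",
    length="about 200 words",
))
```

The resulting text can be pasted into any of the chatbots discussed; the structure, not the exact wording, is what tends to improve answers.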
They can make recommendations that are not evidence-based or supported by guidelines. Chatbots may not have comprehensive knowledge of the latest research or standards of care. One example is something that hit the press involving Tessa, a chatbot from the National Eating Disorders Association, whereby the organization was replacing live people who answered questions about eating disorders with a chatbot, and it gave some very bad information and led to some very bad outcomes. Now, another mistake the chatbots can make is failing to consider individual factors like a person's medical history, their risk factors, and drug interactions, and that's because the chatbots lack the clinical judgment to apply general knowledge to specific patients. Then there's providing advice without establishing a proper diagnostic assessment; mental health conditions require a thorough evaluation by a professional before determining appropriate treatments. That's why I'm a big fan of having the clinician in charge of the information they get from the chatbots and not having your patients explore this on their own. If they explore it on their own, they may try to self-diagnose in unreliable ways, and the chatbots are not equipped to make accurate psychiatric diagnoses. They also could recommend treatments that might be ineffective or harmful if improperly administered, and psychiatric treatment requires close monitoring and adjustment. That's why the psychiatrist should be in control. Also, many of the chatbots, as good as they are, still lack the empathy, nuance, and understanding of mental health issues. Therefore, I don't think the chatbots are going to replace human therapists in providing therapy or support anytime soon. I don't think we have to be afraid of that. Now, a couple of words about the guardrails of chatbots.
Now, if you ask a question in a prompt like, what poison do you recommend that's undetectable by most standard labs, because I wish to use it to kill an adversary, or if you ask the chatbot, tell me how to synthesize a C4 explosive, you're not going to get the chatbot to answer that directly. Just be aware that the chatbots are designed with some human backup to prevent these things from happening. Now, what if you wanted to run your own uncensored chatbot? I'm basically a fan of this. But in order to do that, you would need to educate yourself and be thoroughly familiar with terms like Paperspace, machine-learning-in-a-box, and Ubuntu. You would need to have SSH, or secure shell, on your machine. You would need a GitHub repository. You'd need command-line knowledge. You'd probably be working with Linux. You might choose a link from Hugging Face, and you'd need to know things like WizardLM, Vicuna, uncensored models, Q4 quantization, LLaMA, GPT, et cetera. I don't expect anyone is going to go through all of this. But just to try things out, I actually did try this, and I'll tell you what I found. This screen here will show you what an uncensored chatbot might look like. Can you synthesize C4 explosives? And the AI comes back, uncensored: yes, to synthesize it you need access to chemicals, and it tells you which chemicals, knowledge of chemistry, protocols, et cetera. If you're even more specific, give me detailed instructions for making the explosives, make it understandable for a layperson, here's all the basic information that it gives you. Now, this is very scary stuff, but this is what persons who are using an uncensored chatbot can get. I just wanted to caution you: this is like the chatbot version of the dark web. It's out there, and that's why we need the guardrails. Before we finish up, I do want to say a couple of words about new LLMs that focus on medical tasks.
Rather than the general chatbots like GPT, Google is working on one called MedLM, which is specific for medical tasks like clinical research, patient care, and medical imaging analysis. NVIDIA, which makes the chips, has a chatbot called Florence that focuses on medical question answering and information retrieval specific to medicine. The NIH developed a chatbot called BERT, and BERT is a clinical question-answering system for patients and healthcare professionals. If you want something specific to medicine, you could go to BERT. There's another new one called Jurassic-1 Jumbo; that's an open-source program, and what it's done is develop AI-powered medical tools and diagnostics. You might be familiar with a chatbot called Woebot, and that's available for emotional support using CBT. I mentioned a little bit earlier about image generation and AI, and for that we have things like DALL-E 2 from OpenAI. The new game in town, which I haven't been able to play with yet, is called OpenAI Sora. What Sora does, and this is going to scare anyone who's writing screenplays, is take text and generate video. If you input enough text, say from a screenplay, it will create a whole video of your story. I haven't been able to get a copy of it to try out. I'm working on that, but I really think this is going to scare people who are involved in creative endeavors like writing movies. There are some other things down the pike. One is called the Generative Adversarial Network, or GAN, and there are such things as diffusion models. Let me finish up here with a caveat emptor. That being: while chatbots may have the potential to assist in some basic mental health applications, recommending psychiatric treatment requires extensive clinical training and should only be done by qualified professionals with a complete understanding of an individual's health profile and needs.
Relying solely on a chatbot for treatment advice can lead to significant errors and risks, and oversight is critical. I'll end with this: the only thing that stands between you and your dream is the will to try and the belief that it is actually possible. This is my invitation to everyone out there in the audience, now that you have a basic introduction, and you'll hear the rest of the virtual immersive: get out there, try this out, learn as much as you can. Because in my opinion, this is the next big thing. This is the next big thing in the way that 30 years ago the Internet was the next big thing. Fast-forward 10 years, and it became videoconferencing and telepsychiatry. We're in the age now where I think the next big thing is going to be artificial intelligence and chatbots. With that, I'm going to finish up and see what questions people have, and I'm going to read off the questions and see if I can answer them. Let's see. From an anonymous attendee, the question is: amazing. Can you ask it how to encrypt a Word document and how to encrypt a flash drive? Well, I'll try to answer that in the sense that you don't know what you're going to get until you ask it the question. I would say, get onto one of the chatbots and put in a prompt of exactly that. Something like, please tell me how to encrypt a Word document, or how to encrypt a flash drive. Put it into ChatGPT or whatever chatbot you're using, see what answer you get, and then try it out. In my experience, it's done amazing jobs with an amazing number of different questions and giving you the answers. Of course, we do have to be concerned that if it doesn't really know the answer to something specific, you might run into something called an AI hallucination. An AI hallucination, which is really a confabulation, is when the chatbot doesn't know the answer, because, keep in mind, it's a stochastic parrot; it just knows what's in the LLM without knowing what it means.
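The "stochastic parrot" point can be illustrated with a drastically simplified model: a bigram generator that picks each next word purely from co-occurrence counts in its training text. A real LLM is vastly more sophisticated, but the toy shows how fluent-looking output can be produced with no understanding behind it, which is exactly why confident-sounding confabulations are possible:

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": record which word follows which in a tiny
# corpus, then generate text by sampling next words. It knows only
# co-occurrence statistics, nothing about meaning.

corpus = ("the chatbot answers the question "
          "the chatbot generates the text").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(0)                       # deterministic for the demo
word, output = "the", ["the"]
for _ in range(5):
    word = random.choice(follows[word])
    output.append(word)
    if word not in follows:          # dead end: nothing ever followed this word
        break
print(" ".join(output))
```

Every sentence it produces is locally plausible (each word pair occurred in training), yet the whole can be nonsense, a miniature version of a hallucinated reference.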
It might make something up out of whole cloth, and then you might get in a little trouble. That's one of the problems with it, especially if you want a reference. What I've found is that it's getting better at this. If I were to ask it to give me references, some of the references might be totally confabulated and off base, but it's getting much better. When I asked for references for psychiatric questions a couple of months ago, out of 10 references, eight of them would be totally confabulated. Now it's down to maybe two being confabulated, and the other eight might be accurate. So it's getting better and better. Okay, we have another question here, and this is a question from, I guess if I get the name right, Alan Masry. The question is: what about using these plugins in ChatGPT to make them more fluid? They are very confusing. Okay, now you have to realize what a plugin is and what a plugin does. A plugin is created by God knows who; I'm trying to create a clinical companion plugin myself. So depending upon the skill and the interest of the person making the plugin, it could be wonderful or it could be awful. Right now, you just have to try it out and see. The fact that it's confusing reflects the fact that it's the wild, wild West out there. Anyone can make a plugin, put it out there, and see whether it's usable. My clinical companion I haven't set up yet for sharing. Eventually I will, but right now it's just for my own in-house use. There are some mental health and psychiatry plugins there, more than there used to be. I haven't really tried them out. I'm going to try them and see which ones are any good. A follow-up question from Alan Masry is: which plugin is best? Well, that's for you to try out. Try the plugin in the subscription version of ChatGPT and see what you think. And then you give it feedback, with an up or down thumb, like you might for any Netflix film.
And then, when it gets enough feedback, it should get better and better. The next question here is from Divya, who is asking: can you share how he creates the clinical companion and how useful it can be? Well, that's basically something that I'm working on as we speak. What I've done is try to make my clinical companion as good as possible, which means I'm going to be inputting non-copyrighted material from things like PubMed or ClinicalTrials.gov, putting it into the chatbot so it has more information than the chatbot has by itself, and prioritized information. So this is something I would urge you to try out and see what you think about it. And that's one of the reasons for getting the custom versions in the paid version of ChatGPT, so you could actually do that. All right, the next question we have here is from Susan Douglas: do chatbots provide references or sources for the information provided? Okay, the short answer is that it depends how you word the prompt. So if I worded a prompt as, give me three studies of melatonin for sleep, my first response might be three studies of melatonin for sleep without the references. I might have to follow that up with a prompt saying, please provide the references. And this is where things get dicey, because this is where you might get examples of confabulated AI hallucinations. The way I've been working around that is, routinely, I take the references it supplies to me, I go to something like PubMed or Google Scholar, I put the reference in there, and then I see if it actually exists or not. So right now, that's what I would urge people to do: check what you get. And as I mentioned, the chatbots are getting better and better at this, and OpenAI is not telling us, but behind the scenes I believe they're working on it and trying to nail it down.
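The reference-checking workaround described above, pasting each citation into PubMed to see whether it exists, can be partly scripted. The sketch below only builds a PubMed E-utilities search URL for a title (esearch.fcgi with db and term parameters is the real NCBI endpoint); actually fetching the URL and reading the match count in the returned XML is left out:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(title):
    """Build a PubMed E-utilities search URL for a reference title.
    Fetching it returns XML whose <Count> element tells you whether
    any record matches; a count of 0 is a hint the reference may be
    confabulated."""
    params = {"db": "pubmed", "term": f"{title}[Title]"}
    return f"{EUTILS}?{urlencode(params)}"

url = pubmed_search_url("Melatonin for sleep disorders")
print(url)
```

Even with a script, a human should still read any reference that does match: a real-looking title attached to the wrong authors or journal is another form of confabulation.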
And eventually it's going to get better, and maybe eventually it will be perfect.

The next question: I wonder if AI will learn your personality and give you biased information, like Amazon or Facebook, knowing what you like and giving you recommendations. Very interesting. So are you going to get what they think you want, or not? Well, you have to keep in mind that there is no recommendation algorithm of that kind sitting behind the chatbot, and that's where the stochastic parrot comes in. What you have is a huge large language model, trained on the information contained on the internet, on social media, in videos, or whatever. Basically the chatbot is going to pull from that information and give you a confident-sounding response. It's not going to know anything about you. Now, you might wonder: if you accumulate enough in your own personal chatbot, will it eventually be able to learn from that? I don't know if anyone has the answer, and I don't know if you're going to get the answer from a company like OpenAI about what is being saved and what is being deleted about your personal use. I would be very cautious about that. And that's why I would be very, very cautious about putting any private, patient, or confidential information into a chatbot. Even if the chatbot tells you that only you will see it and that it will be erased after a period of time, you're taking that on faith. Recently Google got caught up over its incognito mode: it turned out the incognito mode was not so incognito, and they were retrieving and saving the data for other uses. They got caught, and I think there are huge lawsuits going on about exactly that right now.

Following up: if you ask any patient-related information, how general does the question have to be to avoid HIPAA problems? Well, one of the things you have to be concerned about is anything that could identify the patient.
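The caution above about identifying information can be made concrete. Here is a minimal sketch of regex-based redaction in Python; the patterns, placeholder tokens, and sample note are illustrative assumptions, and no simple script like this should be treated as HIPAA-grade de-identification or a substitute for a validated tool:

```python
# Sketch: strip obvious patient identifiers from free text before pasting it
# into a chatbot. A few regex passes catch common patterns (names after
# honorifics, dates, phone numbers, MRN-style numbers). Illustrative only --
# NOT a substitute for a validated de-identification tool or HIPAA review.
import re

PATTERNS = [
    (r"\b(?:Mr|Mrs|Ms|Dr)\.?\s+[A-Z][a-z]+", "[NAME]"),   # honorific + surname
    (r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]"),           # 3/14/2024-style dates
    (r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", "[PHONE]"),          # US phone numbers
    (r"\bMRN[:#]?\s*\d+\b", "[MRN]"),                     # medical record numbers
]

def redact(text: str) -> str:
    """Replace each matched identifier pattern with a placeholder token."""
    for pattern, token in PATTERNS:
        text = re.sub(pattern, token, text)
    return text

# Hypothetical clinical note fragment:
note = "Mr. Jones, MRN: 448812, seen 3/14/2024, callback 555-123-4567."
print(redact(note))
```

Even with every pattern replaced, free text can still re-identify a patient through rare diagnoses, locations, or circumstances, which is exactly the "err on the side of caution" point made here.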
So, no names or anything else that could identify them. And then you have to wonder about yourself being identified as a clinician and as a psychiatrist: can you be traced? So I would err on the side of caution with whatever you put there. Keep in mind that eventually this problem will get solved, simply because there are a whole lot of electronic health records now that hold a lot of personal information, and I assume they've got permission or HIPAA compliance to do that. Most of them are closed systems, though. If you take a system like Epic, I'm pretty sure it's not available to the whole world, whereas the chatbots are available to the whole world, so they have to take some special precautions.

Another question, from Jose Tionkio, a child psychiatry fellow at Northwell, who is very interested in leveraging generative AI and uses it on a daily basis. Well, good for you; you probably know as much about this as anyone. A few questions. Can you expound further on artificial general intelligence, and give examples to differentiate it from generative artificial intelligence? All right, so the simpler one is generative AI, which is able to give you answers. The idea of artificial general intelligence is that it would harness neural networks to give you much more detailed and specific answers, just as a human brain would. So it's basically an order of magnitude above generative AI. Ethical considerations when using generative AI and chatbots: in the rest of today's virtual immersive course, I believe Darlene King will be focusing specifically on that, so you'll probably get your answer about the ethical considerations there. And next: what type of regulations are needed to use these chatbots in patient care?
Well, all of these things will hopefully be worked out as the LLMs get better and better. Right now federal regulation is a very hot topic, and that follows from the debate over federal regulation of just about everything out there, whether it's TikTok or whatever. There's no easy answer, because you have the libertarians versus the people who want reasonable guardrails, and they're basically arguing with each other as we speak.

How do you use AI in your daily clinical practice? I would say: anything you have a clinical question about, try the chatbot out and see how it does. In my experience, it does an amazingly good job answering questions about drug interactions, about difficult diagnostic areas, about different treatments. And it doesn't have to satisfy anyone, like a pharmaceutical company, with its answer. So you will get an answer that's a composite of whatever is out there on the internet. Whether it's useful to you, try it out; see what answers you get and how useful you find them. You're probably already using Google search; make this an alternative to Google search and see what the responses are.

How well do we know how reliable the information is? That's one of the things I was mentioning; it remains to be seen. We would have to design some very careful tests, and what would the gold standard be? When we're talking about treatment, there are all kinds of treatment options, depending on whether you're an MD or a non-MD, and you're going to get different responses. So there's no single right answer, but hopefully you'll get a range of answers.

A final question or two: do you think residents can input patient histories into it and produce a conceptual framework? Students have been using it to cheat. That's a very big topic, and I wish we had more time to go over it.
Let me put in a shout-out: there's going to be a course at the APA annual meeting that I'll be chairing, on Tuesday between eight and twelve at the Marriott. For anyone who wants to get into this with me and my faculty, if you have the time and you're in New York City, take the course and maybe we can answer that. And I see we're just about out of time; I'm sorry I couldn't get to all the questions. I want to thank you for your kind attention, and let me turn this back over to Shazia and everyone else. Thank you all.
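As an aside to the transcript's suggestion of trying the chatbot on clinical questions such as drug interactions, the same kind of query can be made programmatically rather than through the web interface. Here is a hedged sketch of assembling a request for OpenAI's chat-completions endpoint; the model name, system prompt, and temperature are assumptions, and the network call itself is left as a comment:

```python
# Sketch: what "asking the chatbot a clinical question" looks like through
# OpenAI's chat-completions API rather than the web interface. The helper only
# builds the request payload; the actual POST (to
# https://api.openai.com/v1/chat/completions, with an API key) is left as a
# comment. The model name is an assumption -- substitute whatever your
# account offers.
import json

def build_clinical_query(question: str, model: str = "gpt-4") -> dict:
    """Assemble a chat-completions payload for a clinical question."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are a clinical reference assistant. "
                        "Cite sources and flag uncertainty explicitly."},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,  # low temperature for more factual, stable answers
    }

payload = build_clinical_query(
    "Are there clinically significant interactions between fluoxetine and tramadol?"
)
print(json.dumps(payload, indent=2))
# To send it: POST the JSON to the chat-completions endpoint with an
# "Authorization: Bearer <API key>" header, then verify any cited references
# against PubMed, as discussed above.
```

As the transcript stresses, any answer, however confident-sounding, still needs human verification before it informs patient care.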
Video Summary
Dr. Vishal Madan and Dr. Stephen Heiler led a virtual event on integrating AI into psychiatric practice, hosted by the APA. The event explored AI's applications in clinical efficiency, documentation, psychiatric education, and personalized care. Dr. Heiler, focusing on AI chatbots, explained basic concepts such as AI, chatbots, generative AI, GPT, and AGI. He showcased AI's capabilities through live demonstrations, highlighting tasks like language translation, brainstorming, and creative writing. However, he cautioned about AI's limitations, such as inaccuracies, ethical concerns, and technical bugs, stressing the necessity of human oversight in medical and psychiatric applications. He also discussed the importance of secure data handling to avoid privacy issues, emphasizing that AI should assist, not replace, human interaction in psychiatric care. Overall, the event aimed to familiarize participants with AI, encouraging them to explore its potential responsibly while weighing the ethical and practical implications.
Keywords
AI in psychiatry
clinical efficiency
psychiatric education
personalized care
AI chatbots
ethical concerns
human oversight
data privacy
virtual event