Securing the Digital Future: What Psychiatrists Ne ...
Video Transcription
Hello, how's everybody doing? This is, like, I've never seen a room like this. I'm like, where's Beyoncé? I just, I didn't even get it. Actually, the last time I had a room this full, I have to take a picture, my mother's never gonna believe this. Wow. So, the last time I had a room this full, I was being protested. So, if you're here to protest me, we can talk afterwards. Thank you for being here. So, I don't think there's anyone introducing me, right? No? Great, perfect. So, I'm Jesse. I'm the president of the AMA. God help us all. Yes, applause is fine.

And it's such a pleasure to be here, to be at this meeting, to talk about a subject that I know is on the minds of a lot of people in psychiatry, in medicine, in health care, and just frankly in America, which is AI, and its potential to improve the environment. Yeah, take the chairs, for God's sakes. The working environment for physicians, but more importantly, outcomes for our patients, right? At the end of the day, what good is technology if it doesn't do something for our patients?

There's so much happening in health care right now, and certainly as president of the AMA, I am out and about six days a week in it. There's a lot that demands our attention, but everywhere I go, I am asked about AI. I am asked about digital medicine. I am asked about technology. Where is the science leading us? How will this impact the future of medicine? And it's an important question to ask ourselves as physicians, because digital technology is so rapidly changing the health care environment, both for patients and for us as physicians. This is true for every aspect of health care, but especially in how we care for people's mental health. Mobile technology, apps, telehealth, and soon AI are opening new ways to engage with our patients, monitor their progress, provide increased support, and help people achieve their mental health goals. It's a very exciting time for medicine, but it's also important to recognize that the stakes are high.

And I will tell you that as I have traveled the country in my role as president of the AMA, I hear intrigue about the potential for AI to personalize treatments, enhance diagnostic accuracy, cut down on the administrative suck, the paperwork, that just takes the joy out of the practice of medicine, and accelerate, you know, potential drug discovery in medicine and research. I hear all that enthusiasm and interest, but I also hear concern. I hear concern about AI and digital health's potential to worsen bias, to reduce privacy, to introduce liability risks for us as practitioners. And what about AI that's wrong, that gives us the wrong conclusion or answer and, you know, causes harm to patients?

So, you know, we've all heard bold claims about technology before. The EHR is gonna solve everything. Interoperability, seamless, frictionless, right? And a lot of us are still scarred from the hype around those technologies that have not delivered on their promise, and so that brings uncertainty. It brings unease to some of these conversations about AI and digital health tools, which I'm going to talk about today. Because at the end of the day, what good is any technology, even a revolutionary technology like AI, if it's not trusted by physicians or the patients that we serve? What good is it if it's just one more burden on our shoulders? So in the time that we've got, I'll talk about that.
I want to highlight what we know physicians see when they view AI, from research that we've done at the AMA, but also talk about what a roadmap for the creation of these tools, and digital health in general, looks like, one that regulators and developers can follow and that we as physicians can understand and shape. And we'll have lots of time for questions. There's a lot of you, so we'll see how that all goes. So if you have a thought that comes to mind, please keep it with you.

So these are my learning objectives. I am, you know, a dean and an educator, so I believe in trying to at least tell you what you're supposed to be hearing, which is: I want to make sure that we cover some of the trends at a global level and what they mean for health care in America. Obviously, I'm the president of the AMA, so I'm going to lean into what the AMA is doing to support physicians and our patients, particularly in a moment where we've got tremendous political divisiveness. We are in the full silly season, like elections are happening. It is chaos. But we also have things like rising professional burnout and pressure on the profession. And, of course, what is the role of the AMA in driving the advance of these tools, and how can that support goals around health equity, improving access, and whatnot.

So I am an anesthesiologist. That is a real photo in the OR with some trainees and a colleague, and this is the one-page version of my 96-page academic CV. It is available for download on demand. If you want it, just please do not, for God's sakes, do not print it, because a tree will cry somewhere if you do. But obviously, I serve in my role at the AMA. I live in beautiful Milwaukee, Wisconsin. Do we have any Wisconsin people here? No, they stayed home. They knew I was coming. But I'm faculty at the Medical College of Wisconsin, where I'm a senior associate dean, and I lead the Advancing a Healthier Wisconsin Endowment, which is the state's largest health philanthropy. I have a couple of adjunct faculty appointments, including at the Uniformed Services University, America's military medical school, and Vanderbilt, and I am deeply involved in the development of technology standards through engagement with the World Health Organization, AAMI, and others. I did 10 years in the military as a reservist, and so I've also had the opportunity to support the DoD, the Defense Health Agency, and the U.S. Surgeon General in the digital health space, devising national strategy and standards. So that's me in a nutshell.

And I always like to put this slide in my presentations. You know, it's not really fair for me to show you my actual patients, obviously, because of HIPAA. But at the end of the day, this is what it's about, right? It's about the people we take care of and the systems that we work in, and making sure that they work each and every day. So I still see patients. I was a day-a-week sort of person, balancing my academics and research, until this year, and because I've been on the road constantly and my husband and children sometimes forget that I exist, I'm down to just two days a month. But I do still see patients, and it's important for me because, again, this is what it's all about: taking care of individuals.

So sometimes I like to start a presentation like this by asking physicians in the room: think of a single word that sums up how you feel about the current state of our health care delivery system. One word. How you experience it as a physician. One word.
So what words come to mind? Non-productive, frustrating, inefficient, chaos, mistrust. Yeah, I hear powerless, frustration, loss of agency. Well, that's more than one word, but those are the words that I hear. Feelings that lead to burnout if solutions aren't found quickly. And our last survey on burnout showed that two out of three physicians experienced symptoms during the pandemic. In psychiatry, I believe it's two out of five, per the APA data that I last saw. Not surprising, given the extraordinary pressure that we all work under. But the problem is not with the individual, right? The problem is with the system that we work in. And we know that the rash of burnout reflects health care system challenges that an individual may experience. And that's why the solutions that we work on are geared towards the system level.

So what weighs on people's minds? It's this list, right? It's the administrative crap and bureaucracy that gets in the way of taking care of patients. It's the hostility online, and increasingly in person, towards us and our colleagues and our systems. It's misinformation, orchestrated disinformation campaigns, often politically driven. It's government interference in the patient-physician relationship and decision-making and autonomy. It's the surge of chronic disease. I am an anesthesiologist, so I don't often have a whole grasp of my patients' psychiatric comorbidities. I sometimes see a little bit, but they usually go to sleep. But I do see the chronic medical and surgical illnesses, right? And it's never, you know, one disease, one medication. It's a long list. And that growth of chronic disease is a challenge for all of us, because it just complicates everything that we need to do to keep our patients healthy.

We live in a time of rapid inflation, growing economic uncertainty, geopolitical tension. I'm not going to talk about anything related to what's happening in the Middle East right now, but it's a tough time to be on a college campus, right, to be a young person in America today. That contributes to the environment that we all work and operate in. The lack of trust in medical institutions. People do not trust the CDC the way they did five years ago. People do not trust the FDA and the regulatory process around medicines and medical devices the way they did five years ago. That makes our jobs harder. The good news is people still trust their physician, just not as much as they used to. And Gallup actually has the best tracking data on this; longitudinally, it goes back decades. It's eroding. We, of course, have burnout and workforce shortages, acute across all of behavioral health and psychiatry, and then these foundational questions about AI and emerging technology, and what will they do to the practice of medicine?

So, you know, when you think about the anticipated health needs of an aging population, there are a couple of trends that keep me up at night, right? By 2050, a fifth of the population will be of Medicare age, 65 and older. Add the widening income inequalities, combined with that rise of chronic disease, rising health care costs, and a shrinking proportional health care workforce. So, for physicians, the AAMC is projecting a shortage of between 37,000 and 124,000. It's a wide range. Whatever the number is, it's big. And our survey data showed that one in five physicians plan to leave medicine within the next two years. One in three plan to reduce their hours. Nearly half of practicing physicians in the U.S.
are 55 and older, meaning that they're going to hit retirement age very soon. We already have, of course, as you all are acutely aware, a severe shortage of psychiatrists and mental health professionals. The Kaiser Family Foundation did a really nice study. They found that we have barely a quarter of the mental health professionals needed to meet demand today. This is only getting worse. The AAMC reports that more than 150 million people live in a federally designated area without sufficient access to behavioral health services. Trends that are all getting worse and worse and worse.

So, we have all of these challenges, and I'm at the AMA. What are we doing about it? We have a pretty simple mission: to advance the art and science of medicine and the betterment of public health. And we group all of our activities into three different buckets. Trying to get rid of the crap that interferes with us taking care of our patients. Physicians now spend, on average, and this is across specialties, two hours in front of the EHR on paperwork for every hour that they see patients. That's not a real satisfier, and that's not why anybody that I know went to medical school. Leading the charge to confront public health crises and chronic disease. And then driving the future of medicine, innovation, and technology, and that last bucket is really going to be the focus of my talk today.

So, I'll just mention, at a federal level we group all of our primary advocacy priorities into something that we call our Recovery Plan for America's Physicians, coming out of COVID. We launched this in the waning months of the pandemic to focus our core advocacy priorities on issues that we think can help bolster the physician workforce. You can find the full thing at the QR code, but: reforming Medicare payment to have a rational system that could promote thriving practices and innovation. Tackling prior authorization to reduce burdens on practices and care delays. Stopping the scope-of-practice creep that threatens patient safety and removes physicians as leaders of care teams. Reducing burnout and addressing the stigma around accessing mental health care as a physician. And, of course, advancing digital health and telehealth. I'll mention we did get a really important victory on the telehealth front with some legislation that has extended the pandemic telehealth flexibilities through this calendar year, ensuring that patients continue to get remote care regardless of where they live. We are actively engaged in trying to get legislation through that would extend those telehealth flexibilities beyond calendar year 2024, to hopefully make those flexibilities permanent. And we worked very closely with the DEA, and we're very happy with their decision to extend the flexibility of prescribing controlled substances based on a telehealth visit through 2024. Then we've been working with the states to make sure that there isn't conflict between state regulation and licensing board authority, what they've done, and the federal government.

So, why is physician burnout such a big part of this? It's because, again, two out of three physicians across all specialties, and roughly two in five psychiatrists according to APA data, have symptoms of burnout. It continues to be a major factor in why physicians leave the profession. If we can't keep the people we have today, how on earth are we going to deal with these workforce shortages? So, we're working on this in a whole variety of different ways.
State and federal policy, changes to national programs, trying to recognize systems that are getting this right. We're working to identify and reform outdated, stigmatizing language on medical licensing board and health system credentialing applications. Getting rid of stigmatizing questions that ask about past diagnoses of behavioral health issues and replacing them with relevant questions that only ask about current impairment. We know what those questions do, right? People are afraid that they'll have to check that box, and they don't get the help that they need. And I had a colleague in residency, a very well-adjusted, high-functioning person, who once saw a therapist in high school. And so, when she graduated residency and went to get her full license in a different state, she checked that yes, once she had seen a psychiatrist. And her license got held up for nine months. People know that happens. And so, there's a lovely Medscape survey this past year: four out of ten physicians who realize they need help don't get it, because they're worried about their license, they're worried about their job. So, we're working on this. The National Association of Medical Staff Services, I know you've never heard of them, they are a thing. They set ideal credentialing standards. They have changed their model standards, and they're pushing this out to health systems, making it easier and easier for those questions to go away. And we've got ten state licensing boards that have also made reforms. And we're also working to bolster, and stand up where they don't exist, confidential physician wellness programs. So, there's a lot that we're engaged in around the mental health needs of physicians.

Okay, so that's sort of the large overview of the system that we all work in and what the AMA's priorities are. What I want to talk about now is AI, technology, and the role that technology can play in addressing some of these challenges, and what the AMA is trying to do to shape that future of the medical delivery system. So, the AMA is involved deeply in technology design in a lot of different ways. And the goal here, overall, is to make these things an asset, not a burden. And we believe that technology can be an important solution to some of the challenges that I've highlighted. But it's got to be designed and implemented correctly. And to do that requires the input and experience of you, experienced clinicians. And that's where we're focusing a lot of our attention, with the ultimate goal of making, again, technology an asset, not a burden, in clinical settings. Because we have enough already riding on our shoulders. We do not need technology that fails to live up to its promises.

And I will tell you, I once was at an AI development meeting, this is probably five or six years ago, in Cincinnati, Ohio, actually, where I was going to talk about AI policy and regulatory frameworks. And this really excited developer comes up to me, and he goes, Dr. Ehrenfeld, Dr. Ehrenfeld, I've got this great tool and an algorithm and a data set, and we can predict who is going to have colon cancer and who you should operate on. I said, that's kind of interesting. Okay. So how does it work? Oh, I have no idea. It's deep learning. It's a black box, but you know, you can trust it because it works. I said, okay. So I'm gonna tell my mom, we're taking out your colon because this algorithm thinks you have cancer, but I can't tell her why. What patient is going to accept that?
What doctor is going to trust it? And what third-party payer is going to reimburse for it? Like, complete disconnect, right, between that developer, entrepreneur, the company, and the clinical reality. We're working to solve that in a lot of different ways that I'll mention.

So, I want to step back and talk about the enormous challenge, the enormous potential, for AI. But I want to start with a story that is very personal to me. So, this is my older son, Ethan. He's almost five. He's got a birthday next week. I promised I would be home from this meeting for his birthday. And he was born at 29 weeks. He spent 49 days in the NICU in Chicago. He was just over a thousand grams. And that picture on the left, of course, you can imagine all the circumstances, you know, middle of the night, 2 a.m. stat C-section, delivery, all the things. I was not in the OR when he was born. But as the neonatologist was whisking him down the hall, at a hospital where I don't work because of circumstances, from the operating room where he was delivered to the NICU, he paused for just a minute so that my husband and I could see him. And I took that picture on the left just minutes after he was born. Anyway, he hung out in the NICU for a long time. And he is here today, doing great, because of the seamless application of a whole set of reliable technologies, technologies that all of us take for granted.

So there's a picture of him in his little incubator, and some of you may know that there was a series of neonatal deaths in incubators in the 60s because the thermostats didn't work. They overheated and killed babies. There was another set of failures around insulation related to the electrical wiring in those same incubators that electrocuted and killed several infants. So there was an incredible, very visionary surgeon named Joel Nobel, no relation to the Nobel Prize, who was shocked when a defibrillator failed to work and an infant died in his arms. And so he created the ECRI Institute. They're still around; they're outside of Philadelphia. That ushered in this era of comparative evaluation of medical device brands that did not exist in the 60s, and it led to standards that we all take for granted, standards that ensured that when Ethan was born and put in an incubator, everything worked. And this is him. That was actually our trip to the islands in January. We're on the plane. He's reading the Dog Man book series. I don't know if you know it. It's rather funny. And you can see by the word he's playing on Scrabble down here that he is an appropriately developing five-year-old. And if there are any child psychiatrists, we can talk later.

So when I think about all of these issues and AI technology, there's a really, really important lesson from another industry that we have to take to heart and we must demand. And that lesson is from the airline industry. So Boeing's had a lot of problems lately. But the original, foundational, major issue with the 737 MAX that you remember from a few years ago, where tragically two airliners crashed and hundreds of lives were lost, those crashes were because of an AI safety system. That was the problem. So it's worth just sharing the backstory. The larger engines had to be put on a different spot on the wings because otherwise they would drag. And it changed the aerodynamics of the plane, and so you get more lift. And I don't know much about planes and flying, but I know lift is how they stay in the air.
The aerodynamics are such that during normal flight, the nose has a tendency to just creep up a little bit because of the extra lift. Boeing's engineers recognized this, and they said, no problem. We'll just create this back-end AI control system that gently pushes the nose back down during normal flight. So they did that, and that was all fine in testing, until, in production, they had a sensor failure. These two planes were flying normally, but the sensor thought they were pitching up. And so this AI control system pushed them both into the ground.

Okay, what's the lesson for us? The pilots of those two doomed airliners had no awareness of the AI system. It was not in the operations manuals for the plane. There was no training provided on the system. They were completely unaware of what was going on. We can never accept this in medicine. We may not be able to explain the black box, but explainability is a different concept for technology than transparency. If I walk into an operating room and I turn on a ventilator and there's an AI system in the background that's trying to optimize ventilation, I ought to know that. Right? Because the only way I can supervise and correct the technology, unplug it and do something else, is if I know what it's doing, right? That it's doing something that has an algorithm baked into it. So there's a lot that we need to sort out in AI, and there's a lot that we need to figure out from a regulatory standpoint. But this is a foundational point: we must know at all times if there is an algorithm operating, even if we don't know exactly what it's doing. There's a little sketch of that failure mode below.

So, AI is not new. It's been around for a long time. I have an informatics background, board-certified in anesthesiology and in clinical informatics, and I found this graphic a year or two ago. It's of course wildly out of date, because this was created before large language models and everything that's happened in the last year. Why did those pilots not know? Poor decision-making, poor management, poor practices. And there's no FAA requirement that you have to tell pilots about every aspect of the software. Should there be? I don't know. What should the FDA be required to tell us in a product label about a medical device that's FDA regulated, right? Or software as a medical device? That is an open question, but a really, really important one.

So we are clearly at the peak of inflated expectations on AI and digital health. You've seen this curve. It is the Gartner hype curve, and it's going to be a bumpy ride. We are going to have all sorts of issues and challenges. There's a pretty damning report that came out recently from the Peterson Health Technology Institute. They're great people; I know their leadership pretty well. They looked at the whole diabetes mobile health app management space. There are a bunch of companies out there, and a bunch of them, I'm not going to name names, are pretty profitable, right? And the summary of this very lovely, detailed report is that these companies and their products and tools are profitable but clinically irrelevant. Meaning that they do not have any meaningful impact on a diabetic patient's journey, care, or chronic disease management. But the companies are all doing just fine. That's a sad state, and the market will correct that, hopefully, at some point. But we're going to have a lot of challenges before we ultimately get to tools that we can each rely on in our practices.
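To make that sensor-failure lesson concrete, here is a minimal sketch in Python of the two designs. The thresholds and function names are hypothetical illustrations, not Boeing's actual MCAS logic: the point is that a control loop that silently trusts a single sensor turns a bad reading into a control action, while one that cross-checks redundant sensors and announces itself fails safe and visible.

```python
# Hypothetical illustration of the single-sensor failure mode; not Boeing's
# actual MCAS code. Angle-of-attack (AoA) is the input; the output is how
# hard the automation trims the nose down.

DISAGREEMENT_LIMIT_DEG = 5.0   # assumed cross-check threshold
TRIM_ONSET_DEG = 10.0          # assumed AoA above which trim engages

def nose_down_trim_naive(aoa_deg: float) -> float:
    """Single-sensor design: a failed sensor silently becomes a control action."""
    return max(0.0, aoa_deg - TRIM_ONSET_DEG)

def nose_down_trim_guarded(aoa_left_deg: float, aoa_right_deg: float) -> float:
    """Redundant design: on disagreement, disable the automation and tell the crew."""
    if abs(aoa_left_deg - aoa_right_deg) > DISAGREEMENT_LIMIT_DEG:
        print("AOA DISAGREE: automatic trim disabled, reverting to manual control")
        return 0.0  # fail safe and visible, not silent and active
    return max(0.0, (aoa_left_deg + aoa_right_deg) / 2.0 - TRIM_ONSET_DEG)

# A failed left sensor reads 25 degrees while the plane is actually level:
print(nose_down_trim_naive(25.0))          # 15.0 -> acts on the bad reading
print(nose_down_trim_guarded(25.0, 2.0))   # 0.0  -> flags it to the pilots instead
```

The guarded version is the transparency point in code: the automation declares that it exists and stands down, instead of quietly fighting the pilots.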
By the way, as an aside: when the AMA talks about AI, when I use the term AI, I actually try to emphasize the point that it really should be augmented intelligence, not artificial intelligence, to emphasize the fact that we human beings must always be at the center of patient care. And in what specialty is that more essential than psychiatry? Whatever the future of health care looks like, our patients need to know that there's a human being on the other side of any piece of technology guiding their course who actually cares about them. That is absolutely essential. We've started to get a little bit of traction. When I would say this five years ago, people would just laugh at me. They're like, Ehrenfeld, what are you talking about? AI is artificial. And now people get it, right? The goal is not to replace us. It's to boost what we're able to do. And, you know, when I see patients in the operating room, the number of times that everything that's supposed to happen actually happens is like zero, right? We work in these very complex systems where we rely on all of us to be the final backstop, to make sure that people get the right antibiotics, or that something isn't missed, or that, you know, their form was filled out or they got the right test. And we know that, because of lots of complexity, we do not have high-performing, reliable systems in health care. They do not exist. One pathway to reliability is to use technology, but again, it's not to replace us; it's to help us remember that there's something else that we need to do, right, at the right point of care.

Okay, so of course the AMA is very interested in understanding how we get these things adopted and into practice, and in survey after survey after survey, and we do these nationally representative sample surveys, we've done this particular one three times, in 2016, 2019, and 2022, what bubbles up over and over and over are the four questions that drive adoption of digital technology, whether it's an AI tool or not.

And the first is: does it work? And you would think that this would be a simple question, right? Because if you look at, you know, the product label or the package insert for a medication, you can see the study, right? There's a Table 1 and a population and side effects and whatever. But what level of evidence should we require and demand for an app or an AI algorithm or software as a medical device? This is an open question, and nobody really has an answer for it. What the FDA says, and I was with the FDA this week, is that they would never be able to hire enough product reviewers to look at algorithms at the individual level, especially when there are changes, right, when an algorithm changes and you have to have an update and whatever. They could never scale their workforce, even if they wanted to, which they don't. So one approach that they have signaled might be of interest is a software pre-certification program, and I'll just tell you how this works. They did a pilot where they allowed nine companies to kind of do what they do for drug manufacturers, where they use this good-manufacturing-practice sort of checklist approach. So what the FDA does is, they don't actually look at the pill coming off the product line; they look at the process. You know, is the factory clean? They inspect it. Do they have their material safety data sheet up on the wall? All those things. And if you check all those things and the facility looks good, you can make any drug you want in the factory.
That's what they do with the manufacturing of drugs. They've tried to sort of transition that to software, and what they did with the software pre-certification program is they said, okay, if you are a developer or a company and you meet this checklist of good machine learning best practices, you didn't steal your data off the Internet and you have qualified developers and whatever, then you can bring any product you want into the marketplace without review of the individual algorithm or software. That leaves us totally reliant on post-market surveillance to understand the impact of these tools. Now, if we had an interconnected, interoperable health system where we could actually really understand what was happening to patients at scale, maybe that's okay, but you all know that we do not. And what I worry about is a framework where algorithms come into practice across settings, across specialties, they're not primarily reviewed, and harm insidiously creeps into the delivery system, but we don't recognize it until thousands or millions of patients have been impacted. So that question of does it work is a really tricky one, surprisingly.

The second is payment. This won't surprise you, right? You know, if you acquire technology, how do you get paid? What's the coding and all those sorts of things?

The third issue is liability. And, you know, liability for performance is becoming an increasingly big issue. We're already seeing proposals to hold you, the end user, solely liable if you rely on the output of an algorithm. I don't think that's right. You know, I think that if there's a problem with the validation or the underlying data set or the implementation, then the manufacturer or the developer probably has some shared liability, but that may not be the case. And again, there's not much case law on this. So that will be a question that also needs to be decided, not just from a regulatory standpoint, but also in the courts. What happens when something goes wrong with an AI tool?

And then finally: does it work in my practice? And I will admit I have made this mistake as an informatics developer who has built AI tools and software over the years. Early in my career, probably 15 years ago-ish, I built some software in our adult hospital in Boston, and I naively forgot, and I should know this because you learn it in medical school, that children are not small adults. Right? They're different, right? The workflows are different. And we turned the software on at our children's hospital, and it failed miserably. And it was because the workflows were different, right? And this was an anesthesia thing. So, understanding if these things actually work in our practices is really, really important. And again, if we don't address this at the outset, it has the potential to cause harm.

So, we recently released our latest research about perspectives on AI. This is, again, a nationally representative sample. And what it shows is widespread support and excitement for these tools, but also some trepidation, which I think is appropriate. Four in ten physicians are equally excited as they are terrified of AI. So, if you have a little anxiety and your GERD gets a little bit worse when you see the banners upstairs, you're not alone. People are worried about what it does to the patient-physician relationship. A strong majority of those surveyed, 70%, see the potential to support diagnosis and improve practice efficiency. Where there's concern is, again, what does it do to our interaction with patients?
What about privacy, data collection? And the question then is, well, what do people want to see to put their minds at ease about these tools? Well, they want information to explain how a decision is made. They want to know what the limitations of the algorithms are, information about the intended use, how efficiency and efficacy can be demonstrated, how bias is managed and not exacerbated, how the tool adheres to standards, and how it has been validated. And if you're interested in any of the survey data there, you can find the full report on the AMA's website.

So, we're doing a lot to try to influence the creation of these digital health tools, specifically related to AI. We're working on standards development, language, and the ethical principles underlying the development and deployment of these tools. We're trying, obviously, to provide our members and folks around the federation, because we coordinate a lot of federation calls, information about trends and what's happening. We provide some funding to support AI research, as well as AI validation and implementation into practices, and a lot of research around the design of the best solutions. And there are two specific ways that we're trying to get physicians involved in the design of new health technology that I thought I would highlight today.

The first is, in 2016, we founded a subsidiary. It is a wholly owned subsidiary of the AMA, right in the heart of Silicon Valley on Sand Hill Road, called Health2047. The AMA was founded in 1847, so we're like, oh, 200 years forward, 2047. And this is not a startup. It is a company designed to take the things that we know matter to physicians and birth startups, driven by physicians, so that we don't have that entrepreneur bringing a cancer screening algorithm into the marketplace that is disconnected from reality. So, the folks at the company have decades of experience. I am amazed at the talent that we have recruited, and in fact, one of the early markers that we had as an AMA board, because, you know, we put some real money into this thing, was, well, who could we actually get to work for us? We have, as an advisor, Norm Winarski. He's the guy who birthed Siri at SRI, the Stanford Research Institute. The person who still holds the patent on the JPEG file format. You know, design engineers, security people, really, really amazing talent. A lot of folks who, frankly, don't really need to work, but they believe in our mission. And so, to date, there are about a dozen spin-outs that have come out of this venture. So, we are continuing to be very enthusiastic about the opportunity to lift up solutions that are physician-driven, informed by our expertise, through that work on the corporate side. So, again, you can go to the website to find out more.

The other interface that I always love to highlight is the Physician Innovation Network, called PIN. This was launched in 2017. It is free, it is online, and it is a platform to connect clinicians and trainees and physicians with companies. There are something like 19,000 users on the platform, a bunch of organizational collaborators, and the whole idea is to get physicians and companies to collaborate to bring forward better, improved, scalable solutions, and to make sure that companies never make the mistake of bringing a product to market without really understanding the health care ecosystem. And we have seen some really exciting results come out of collaborations on this.
So, we have hosted conversations at some of the bigger tech conferences. They do occasional pop-up, in-real-life events where people can meet each other. But it's really an online platform. It is free. It is available to anybody worldwide. I wish we made money off of it, but we didn't figure out how to do that. So, feel free to check it out and see if it's something that is of interest to you.

There continues to be a lot of uncertainty about the regulatory approach to AI. And again, as I mentioned, I agree with the FDA that the regulatory framework that we had, the one set up in the 60s and 70s, does not work for software as a medical device or AI-based products. But figuring out what we are going to pivot to is an open question, and it's important, from my standpoint, that regardless of how this all plays out, we ensure that AI regulation at a federal level only allows safe, high-quality, clinically validated products to come to the marketplace. And if we get the regulatory framework wrong, or if it's too permissive, then we will miss the mark on that. And we also need to make sure that we don't allow AI to introduce bias into its results.

So, I mentioned this issue about liability before, and it's such a huge part of the conversation about any new digital health tool. So, I want to take another minute here. Liability resulting from an inaccurate AI model, or the misuse of a model, is an area of great interest in sort of the medical-legal crowd. So, if an AI tool recommends a prescription based on a patient's data, and you prescribe that medication, and the patient has an adverse reaction or outcome, who bears the burden of responsibility? The physician? The company that owns the algorithm? The team that built and trained the algorithm and did the deployment?

At the end of last month, April, on a Friday, as they always do, HHS and the Office for Civil Rights released a 500-plus-page rule. It took us a week to get through it. It relates to Section 1557 of the Affordable Care Act, which is the non-discrimination piece of the legislation. The draft rule included a problematic provision that creates some new liability for physicians who use AI-enabled technologies and other clinical algorithms in ways that result in discriminatory harm. That's why it was in the non-discrimination section, ACA 1557. The final rule is significantly better than the draft rule, but it's still concerning, because it places new duties on us, and it creates the risk of civil or monetary penalties on physicians if we rely on an algorithm that results in discriminatory harm. So what we are telling folks, and again, this is all brand new, is to carefully consider the new requirements as you're thinking about incorporating a tool into practice. And the lack of transparency requirements for these tools again puts the onus on us to be very diligent about selecting these tools and making sure that we've got really good policy to guide the implementation into practice.

All right, that's bad news number one. Bad news number two. Is anybody here from the Federation of State Medical Boards? Anybody here on a medical board? Okay, good. So the FSMB just announced at their annual meeting a week or so ago, and they pushed this out like yesterday or the day before, that the physician is ultimately responsible for the use of AI and should be held accountable for any harm that occurs. This is from the Federation of State Medical Boards.
Now, their guidance kind of generally aligns with the AMA policies and our AI principles, because it prioritizes the importance of transparency in the development of AI, but it's not perfect. And it goes a little bit farther than we would like in terms of who is accountable for the harm. Last fall, we did release our principles for AI development, deployment, and use; I'll talk about those in a bit. And the principles highlight this issue of liability and transparency. Our principles underscore that it's critical that we understand the AI-driven technology and have access to information about the tool before we deploy it, how it was trained, how it was validated, so that we can assess the quality, the performance, the equity, the utility of the tool to the best of our ability. Transparency and explainability regarding the design and the development and the deployment, we think that should be mandated, that should be required where possible, including potential sources of inequity in problem formulation, inputs, and implementations. We ought to be able to understand, where we're utilizing an AI tool without transparency, what the risk is. And the need for transparency could not be more acute. So we sent a letter to the state medical boards urging them to take the position that the person or the entity best positioned to know the risks of an AI system, best positioned to mitigate harm and avert problems, they ought to be the ones that are liable. And that may be you. In many cases, it's not you. It's probably somebody upstream. It may be the developer or the implementer or the system. So there's a lot that we still need to sort out on the liability front.

Okay, let me talk about a few more things before I wrap up. So I get asked about ChatGPT every day of the week now. How will large language models change medical practice? What I tell people is that even though the most advanced algorithms are getting better, they still can't diagnose and treat diseases. That's the wrong approach. That's not how we should think about them. AI is great for solving the textbook patient, a narrow clinical question. And there was much news when one of these large language models, I forget if it was Med-PaLM, Google's thing, or whatever, passed the USMLE. Does anybody think that passing the USMLE on multiple-choice questions makes you qualified to practice? Right. But that doesn't mean the tools and the AI and the algorithms can't help us. Imagine AI tools that seamlessly integrate into electronic health records, that can provide some predictive text for our notes. There's a lot of interest in virtual scribing and some of these burden-reduction opportunities that are driven by large language models. Those applications are only becoming more advanced, and in their current form, I think, they can be very useful in helping us with some of these burdensome administrative tasks.

So I wanna make sure that I mention, obviously, the perils of poorly designed health technology. And I wanna talk about what it means to design technology through a health equity lens and why it matters. Many of you probably know the story of pulse ox and its miscalibration. A really skilled team at the University of Michigan very nicely studied and described a real problem during COVID, which is that pulse oximetry, the industrial version that I use in the operating room every time I see patients, is miscalibrated and overestimates oxygen saturation in people with dark skin tones.
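A minimal worked example of why that miscalibration is so insidious, with illustrative numbers only, not the Michigan study's actual figures: a small, skin-tone-dependent overestimate is enough to push a truly hypoxemic patient above a triage cutoff.

```python
# Illustrative numbers only; the real bias varies by device and study.
TRIAGE_THRESHOLD = 92.0  # assumed SpO2 cutoff below which care is escalated

def displayed_spo2(true_spo2: float, device_bias: float) -> float:
    """What the monitor shows: the true saturation plus the device's bias."""
    return min(100.0, true_spo2 + device_bias)

# Two patients with the same true saturation of 90%, different assumed bias:
for label, bias in [("light skin, ~0-point bias", 0.0),
                    ("dark skin, ~3-point bias", 3.0)]:
    reading = displayed_spo2(90.0, bias)
    print(f"{label}: monitor reads {reading:.0f}%, "
          f"escalate={reading < TRIAGE_THRESHOLD}")
# The second patient is just as hypoxemic, but the monitor reads 93% and care
# is not escalated, and nothing at the bedside reveals that the number is wrong.
```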
During COVID, that Michigan team demonstrated that the miscalibration was leading to mistriage. Patients of color not going to the ICU, not getting supplemental oxygen, having disproportionately bad outcomes. The shocking thing about this is that it has been a well-known, well-described, identified problem for 30 years. It is not new information. We've just never cared. And if we are not very intentional about never allowing this to happen again, we will allow bias to insidiously creep into digital health tools and algorithms and so forth. So I've been fortunate to have been deeply engaged with the FDA. They're trying to look at labeling requirements. What should they do on the device side to fix this problem? But unfortunately, it's still real. And we need to do better, which is why our In Full Health initiative was created. We launched this community with 14 founding collaborating organizations, all of whom are deeply committed to strengthening, amplifying, and integrating the principles of equity into the work they're doing at the intersection of health, equity, and innovation. And so this work is really about understanding how structural racism and sexism and bias impact innovation. How do we pivot so that we can actually invest in innovations that were designed by, and designed for, historically marginalized communities, and engage the industry to make sure that we get to scale? I forget the numbers on this, but if you look at the amount of venture capital going to firms that are designing for, and driven by, historically marginalized individuals, it is a fraction of a percent compared to the rest of the market. We simply have to change that.

And there are lots of questions about tools like this. This is the AI-powered dermatologist, this tool that Google made. So they did pivot their strategy a little bit. They originally got a CE mark, a Class I medical device approval, in the EU, and then they didn't release it. And I was confused by that. And I finally got ahold of a team at Google to ask them what their strategy was. And they actually recognized that there may be some real issues. So you know what patients will do with these. They'll get the answer they want, and it may not be right. And if it's melanoma, that's a problem. And who do they call when there is a problem? Like, 1-800-GOOGLE? What's Google's responsibility when the model changes and the answer is different? Because these things continue to evolve. So there are a lot of questions about that interface between the tech and the consumer, the patient, the professional, that I think Google has decided they don't wanna own. So what they're now doing is not making this a consumer product, as they had originally described in the little video. Now it's available through a development license to academic centers and companies. And you can imagine why they made that shift. But there will be a flood of consumer-facing products like this. They're already starting to hit the market. Many will not be FDA regulated. And we will, of course, be asked lots of questions by our patients, like, well, what does this thing do? Should I trust it? Should I use it? And obviously, that is gonna be a challenge for us.

As I mentioned earlier, we released principles around AI development, deployment, and use. The purpose of these principles is to give guidance to physicians: how do they engage with AI-enabled technologies, and how can they get a better understanding of the policy that needs to adapt to support innovation and the pace of change? And this is not entirely new.
I mean, the AMA has been working on AI principles since 2016. But we released sort of a big refresh this past year, where we're trying to make sure that as we're engaging with lawmakers and policymakers, there are eight key areas that we've identified to address. One is, what should the governance around the use of these tools be? How do we have transparency, the Boeing example, and what should the required disclosures of tools in AI-enabled systems be? There are some special considerations around the use of generative AI that need to be managed. This whole question about liability. Data privacy. Cybersecurity. And payer use of AI. And I actually have this on tape, I love this; he said it publicly, it was recorded. One of the national medical directors for one of the big third-party payers said, on tape, AI should never be used to solely deny patient care. Applause, applause, applause. We have it on tape; now if we can just keep them to it and get his payer colleagues to agree. But you have seen the news stories about one third-party payer in particular reportedly using an algorithm, and I don't know if it was an AI tool or just an algorithm, to quickly deny care to thousands and thousands and thousands of patients through prior authorization processes. We can't allow that to happen. Anyway, the QR code will take you to the full set of principles. It goes through all of that information, and hopefully you'll find it of interest.

Our principles do emphasize this concept of human-centered care in an increasingly digital world. And it is just so critical that clinical decisions influenced by AI all be made with specified human intervention points during decision-making. And as the potential for harm goes up, the point where we can intervene as physicians, utilizing our brains and clinical judgment to interpret, act on, or ignore an AI recommendation, needs to come earlier. There's a little sketch of that idea below. And we obviously wanna make sure, because I started this whole conversation with workforce pressures and burnout, that the implementation of these tools and the utilization of AI doesn't exacerbate the burden on us. It's gotta be deployed in harmony with our workflows, because we've already got enough on our shoulders.

So I'm a big fan of Abraham Verghese, the author best known for his book Cutting for Stone, although he has written other wonderful books. He's out at Stanford. And he wrote something very eloquent, which I like, so I stuck it on a slide, which is that the way forward is not to think about technology versus human, but to ask how they come together, where the sum can be greater than the parts, for equitable, inclusive, human and humane care in the practice of medicine. So we need to make sure that we earn and keep the trust of our patients. There's gonna be a lot on the regulatory side that influences that. We also need to earn and keep the trust of physicians. And we need lawmakers to give us clear, consistent regulatory guidance. We need pathways to make sure that we have incentivized the development, deployment, and payment of high-quality, validated systems. We need to get that liability question right. We need to build trust. So, in summary, I do think that AI and machine learning will transform how we work. Don't think of it as human versus machine. The sum truly is greater than the parts. There is tremendous, important potential to scale the capacity of all of us.
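On that human-intervention-points principle: here is a minimal sketch of what a risk-tiered policy could look like in code. The tiers and workflow are my illustration of the idea, not the AMA's published specification; the assumption is simply that the higher the potential for harm, the earlier the human sits in the loop.

```python
# Assumed risk tiers and workflow; an illustration of the principle, not a spec.
from enum import Enum

class Risk(Enum):
    LOW = "low"            # e.g., drafting a note: clinician audits after the fact
    MODERATE = "moderate"  # e.g., a triage suggestion: clinician confirms first
    HIGH = "high"          # e.g., a treatment change: AI output is advisory only

def handle_recommendation(risk: Risk, recommendation: str,
                          clinician_approves) -> str:
    if risk is Risk.LOW:
        return f"applied: {recommendation}"      # act now, review later
    if risk is Risk.MODERATE:
        if clinician_approves(recommendation):   # human gate before any action
            return f"applied after sign-off: {recommendation}"
        return "held for clinician review"
    # HIGH risk: the human decision is the action; the AI only informs it.
    return f"advisory only, clinician decides: {recommendation}"

print(handle_recommendation(Risk.HIGH, "change antidepressant dose",
                            clinician_approves=lambda r: True))
```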
Telepresence, virtual monitoring, remote care delivery: lots of examples that we desperately need, because there's so much pressure on the delivery system. And I don't think AI will ever replace physicians, but I do think physicians who use AI will someday replace those who don't. And I will just leave you with an adorable picture of my children. And I'm happy to answer questions. But if anybody needs to take a break outside, because it's very hot in here, I do not blame you and would be happy to allow that too. Thank you. Don't be shy, jump right up. Just don't step on anybody.

I have a question. It's an election year coming up. Are there certain politicians or parties that are in line with what the AMA is saying about liability for physicians and how AI should be regulated?

Yeah, so that's a thorny issue. So look, I'll start with an amusing anecdote. The AMA is nonpartisan. I once, in a public setting in front of one of my colleagues, or lobbyists, said, well, we're bipartisan. They said, you're not bi. And I was like, I know I'm not bi, I'm gay. He goes, no, no, no, no, we're not bipartisan. We're nonpartisan. I was like, oh yeah, there is a difference there. So we have a longstanding, decades-long tradition of meeting with all of the presidential nominees. We will do that. Offline, I'll tell you how some of those conversations have gone. We think there'll be different policy opportunities depending on which party is in the White House and in power. At the end of the day, there is a lot of friction, and I will leave it at that.

Hi there, I'm a primary care physician. So coming from...

You and I are fish out of water. That's how this works.

Yeah, so I guess my question is related to the point that you made about those who are best able to prevent harm should be the ones liable. I think I sometimes see humans ourselves as a little bit of a black box at times, in that historically we've made so many decisions, whether it be like CVD risk score miscalculations, PSA screenings, or aspirin as primary prevention, for example. Mistakes that we kind of look back on, even 10 years down the line, and say, ooh, that looks a little bit different from what we thought things should be. Is there a threshold for how much of this black box we really want to understand and uncover? Does everything need, like, a biochemical basis before it is put into a larger systemic algorithm or model?

So, a couple of things. One is, we have these reliability issues with the system. So I'm a big believer, and my foundational informatics career was built on, how do we have systems that can backstop what we forget to do, to make things more reliable? So there's a whole science, a lot of implementation science, that I think is really profound and desperately needed, and that takes you into this sort of idea about continuous learning systems, where yes, you have a system, you generate the evidence, and then you can sort of do the iteration. Really, it's a great theoretical model, and there are definitely health care systems that have tried to sort of do that at scale, but it's very challenging for lots of reasons. I also think about my own practice. I give anesthesia, and I use medications that we have spent millions and millions of dollars on, and we have absolutely no idea how they work on the brain. Sevoflurane: I can tell you the chemical structure. I know how to use it very safely.
I know the therapeutic index, but we don't really understand how it works. So there always, I think, is going to be that level of uncertainty with the tools that we have. How do we use these tools to generate new evidence that can help us be more effective? How do we make sure that as evidence evolves, whether it's in psychiatry, around practice standards, or in any field of medicine, the technology can help us? There's the often-cited study, I have no idea where this stuff comes from, that it takes 20 years before evidence that's discovered hits the average patient. I mean, you would think with technology we could cut that down. Will it be instant? I don't know. But certainly there's a lot of opportunity, but still some risk involved. Great question.

Thank you for the very informative and thoughtful session. I'm a psychiatrist by background. I have a question: there's a lot of good information about how the AMA and FDA are regulating this process. Are they also regulating research in this area? Psychiatry in particular, I think, has a lot of scope to use AI.

So, yeah, we're not a big funder of research around AI. It's mostly around the sort of policy considerations and implementation. But certainly we interact with our colleagues at NIH and FDA and industry to try to think about, if you were to create a framework for where the gaps are, in terms of the kinds of research that would be most helpful, where they might go. But unfortunately, it used to be just a given that the NIH budget would go up every year. And that's now off the table, right? And they have struggled, for a variety of reasons coming back to some of that dysfunction in Congress, to garner the support that they need. So there's a lot of foundational work that we need to do. But again, it comes back to this question of: what amount of research, what amount of validation, should be required for these tools? And that is an open question that's unanswered.

So also, what I was trying to get at is, is there regulation around the research itself? Is there a policy, and where would we find that?

Yeah, so there isn't specific regulation around research that differentiates anything having to do with AI. But certainly there are all sorts of ethical considerations that are a little bit different when you're using the outputs of these tools. And I can point you to some of the resources. Our AMA Journal of Ethics did sort of a deep dive through some scholarly work in that space that might be helpful. Yeah.

Thank you. I'm Joost Meertens from the Netherlands, a psychiatrist. Truly, your elaborate talk inspired me in my work. I just started to work with AI, and I think it could be really helpful to, for instance, make the administrative burden less. I make a lot of reports on psychotherapy, and it also could help to find out which tactics, which words, which techniques are helpful within huge piles of information. But how do we address the problem with privacy, which is a big problem in the computer world nowadays? What are your ideas on that?

Yeah, it's interesting. So, you know, there's a lot of interest in virtual scribing, which you mentioned. And in fact, I know one clinician who got one of these tools and started crying, because for the first time in months, she made it home to see her children for dinner. Because suddenly, you know, there wasn't five hours of notes to have to deal with. You know, that space has changed, right? It used to be a scribe in the room with you in primary care settings.
And then it was a remote scribe in a different place. It may be down the hall, it may be in another country. Now there are AI versions of this, right? The privacy considerations, right, as a psychiatrist, of having somebody else in the room are obvious, but the considerations around, you know, where that information is going and who has access are really no different, just perhaps more sensitive. So, you know, clearly we lean on our sort of ethical principles around our obligation to maintain trust and privacy. But what are patients willing to accept when you say, oh, you know, there's a tool I'm using and there's information that is shared? I think that's an open question.

Well, in the Netherlands, it's very difficult, because there are a lot of problems going on in our politics about privacy. And there's this board which is gathering lots of information; well, there are problems with that.

No, one of the great pleasures that I have as president of the AMA is I get to interact with 75 of my counterparts from around the globe. And the Royal Dutch Medical Association, I believe I got that right, their leaders, we were just in Seoul, Korea two weeks ago, having actually a very similar conversation about the privacy challenges. Not just in the Netherlands, but in the EU in general, there are some larger issues that are just different than here. So, yeah, thanks.

First and foremost, thank you so much for this wonderful discussion. It's very eye-opening. I'm Steven, a psychiatry resident over at SUNY Upstate. And I do have a question regarding how AI could be used to improve certain systems, whether it's with documentation, how to make things more efficient, how less can be more and in what areas more needs to be done, or whether it could be used in QI projects, for example. And I know I'm wording this in a very broad-spectrum sort of way.

Okay, I'll give you a broad answer. Well, yeah, fair enough. So 40% of U.S. practices use AI today, right? But it's not the clinical stuff; it's the unsexy back-end office scheduling, supply chain management, billing. There's this whole fleet of companies, and this drives me bonkers, the prior authorization bots. It's like we're building this nuclear arms race of bots, bots that use AI tools to auto-generate the responses to denial letters that go back to the third-party payers, and I'm sure the payers have their version. It's like they're just fighting each other, when we should just get rid of the problem, which is get rid of prior authorization, but that's a story for a different day.

The more interesting stuff is on the clinical side, right? So the first autonomous AI FDA-approved device is in the diabetic retinopathy space. It's this cool little thing; it would fit on this podium. It spun out of a company from the University of Iowa that is very, very good. It uses image recognition to diagnose diabetic retinopathy. The company is so confident in the underlying technology that it carries the medical liability insurance for the decision the device makes. Okay, so you all know, diabetic patients, everybody needs an annual eye exam, and people don't get them because we do not have enough people to do the exams. There's now a box that can do it with a high-school-trained operator. You could put them in every retail pharmacy, you could put them in primary care offices, you could put them everywhere you want, and screen the entire population, right, if you actually made the decision to scale and put them out there. That changes the work of the ophthalmologists.
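As a sketch of how simple the operational logic of that kind of autonomous screening can be at the point of care, here is a minimal illustration with a hypothetical threshold and function names, not the actual FDA-cleared device's algorithm: the classifier's output maps to one of three actions the operator can carry out with no clinical training.

```python
# Hypothetical threshold and names; not the actual device's logic.
REFERRAL_THRESHOLD = 0.5  # assumed probability above which disease is flagged

def screen_fundus_image(disease_probability: float, image_quality_ok: bool) -> str:
    """Map a classifier's output to one of three operator actions."""
    if not image_quality_ok:
        return "retake image"             # the operator re-images on the spot
    if disease_probability >= REFERRAL_THRESHOLD:
        return "refer to ophthalmology"   # only flagged cases reach the specialist
    return "rescreen in 12 months"        # the normal annual interval

print(screen_fundus_image(0.8, True))   # refer to ophthalmology
print(screen_fundus_image(0.1, True))   # rescreen in 12 months
```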
They're no longer screening normal people; they're only seeing people with the disease. It's a very easy way to understand how AI can scale the capacity of the delivery system to suddenly care for the population. The problem is we now have point solutions, right? There are these little, very narrow use cases with one algorithm for one problem, and none of them are integrated. And I work in a large health system; we're not gonna buy 200 point solutions. Who could manage that? So what I think will happen in the next iteration is there'll be platforms, maybe tied to the EHR, maybe not, I don't know, that allow multiple algorithms across specialties to be deployed at scale. Now, what does that look like for primary care settings, for independent practitioners? I don't know, but that's sort of where we are today.

Oh yeah, and thank you for bringing that up, because we all learn in our training to treat the patient and not the number, and hopefully AI can be more intuitive like that in the future. Maybe.

Hi, good morning, and thank you for that wonderful presentation. My name is Grace. I'm a nurse practitioner in primary care and work in Chicago in urgent care. I was just curious to know if the AMA is teaming up or working together with the ANP or the ANC, the nurse practitioner organizations, to improve the lives of patients. We work in urgent care, I see a lot of different ethnic backgrounds, a very diverse group of patients, and many of us nurse practitioners treat the patients, but we're finding that we always have to refer them to psychiatrists, which, you are correct, are very hard to find. But I'm just curious to see if there's anything where the AMA is teaming up with our profession.

Yeah, no, we have a good ongoing conversation across health professions with the national associations for physician associates, nurse practitioners, pharmacists, and whatnot. As you might imagine, we have different viewpoints on a variety of policy issues, particularly around independent practice. But I will tell you, out of COVID, frankly out of necessity, we had to find ways to work together in ways that historically we had not. And that, I think, has actually allowed us to be more effective in finding places where we can partner, to make sure that, again, we all want technologies that work for everybody. Let's come together on some of these places where there are obvious agreements. And look, it's not like the workforce shortages in nursing are any better than they are for physicians. I mean, my job's not to represent that to you, but you know those pressures, right? And obviously there are more and more nursing schools and PA programs coming online. But again, that's not gonna solve the foundational problems that we have across the delivery system, where today, yes, we have primary care docs and nurse practitioners, but we still have 83 million people who don't have access to anybody. And we need to solve that. Thanks.

Hi, my name is Jenville. I'm primarily a business consultant from the Bay Area, working with investors and startups in the global security field.

Did you bring a lot of cash with you? Because you can't get back to Chicago.

I do have a network, actually, with over... is this being recorded? Yes? No, no, you're fine. Yeah, if you wanna come join our network, we have a very high-net-worth network I can invite you to. But I have a couple questions for you.
One is, if you knew you could not fail today and you had a copious amount of funding, how would you play a role today as the AMA leader in de-escalating nuclear power and wars and global tensions, as well as revolutionizing health, if you just knew you couldn't fail, whatever you wanted to try?

I've never had that question before. But what I will tell you is there is nothing worse, there is nothing more inhumane, than humans not having access to healthcare. And when I meet with my counterparts across the globe, from small African nations to larger industrialized countries in Asia, all the folks from Europe, it is clear that there is so much more that we should be doing together as a global community to try to create stability through access to care, which can reduce the challenges that drive the sometimes poor choices to create conflict in society. So that's probably my perspective, and certainly there are pathways to that, and we need to do more. Thank you.

And then my second question was, what do you think about this idea, which I've started to test out? It's a chat room for millions of human beings, I've only gathered a thousand so far, but from different beliefs, where you empathetically collaborate to support each other. So let's say there's a thread called healthcare, another called government, life extension, aging, mental health. I've built this already on our Telegram platform, and we have psychiatrists, military, veterans, civilians, and a variety of people coming together to help each other. So rather than one person's Twitter or X account talking to everyone else, or one person's Instagram or LinkedIn talking to everyone else, it's a collaborative situation where people are helping each other. What do you think about a bunch of psychiatrists coming into a chat like this and helping everyone else in the world help each other? Because I just feel like psychiatrists are the key to everything in this lifetime.

It sounds like the punchline to a bad joke. No, look, I think we all saw during COVID that our society is built foundationally on human connection, right? And when we have the loss of human connection through isolation, that causes all sorts of stress and challenges for the individual, but also for society. I mean, I'm watching what's happening with young people, for all of our child psychiatrists in the room, and the experience of COVID has been so damaging, right? Because of the loss of social connectedness that we're still trying, I think, to work through as a society. So figuring out how platforms support and facilitate those interactions is, I think, a really interesting question, and finding ways to do it that don't create some of the harm that we know is caused, particularly to young people, through what's happening online would be helpful. So it sounds like you are up to something very interesting.

I'll come up to you after, thank you. Sure.

Hi, so you mentioned earlier that there is a shortage of psychiatrists, and I think that shortage is particularly evident in rural areas, Georgia especially. And this is a multifaceted question. I also agree with you about AI being able to improve capacity.
So I envision, I don't know how distant this future is, but I envision a future with technologies like ChatGPT where human beings, especially in outpatient facilities, are more comfortable speaking with an artificial intelligence, not a provider, but under the supervision of a provider, and I think that would really increase capacity. But how do you see AI playing more of that kind of role, almost like a mid-level?

Yeah, so there are two things embedded in that. One is the user-interface question. It's well described, well documented, well known that in many circumstances, patients are much more willing to share sensitive information by writing it down or typing it into a computer. That's not new. So how do we leverage these interfaces, large language models, chatbots, whatever, to take advantage of that? I think that's an open question. It depends on the setting and the implementation. There are a couple of companies. There's one that's making a lot of noise right now that is in that triage space, right? It's all about, how can I have a conversation with the patient, and let's not go primary care, specialist A, specialist B to finally find the right person, but shortcut that through some triage algorithms driven by the AI, to make sure we're getting you to the right person on the first stop. So you can imagine that if you can reduce some of those unnecessary appointments, some of that unnecessary wait, that creates capacity in the system. And that's the investment thesis for at least one of the companies out there, which is trained on thousands and thousands of hours of conversations with primary care physicians as well as nurse practitioners.

So do you see this as a potential for kind of an online, interactive basis before patients come to the office?

I mean, maybe. As I say to folks all the time, AI will change everything in healthcare; I just can't tell you how. And we will see, but certainly there are folks who think that might be one of the ways it could be helpful. It's all up in the air. Yeah. Thank you.

Thank you for the talk. I had a couple of questions. One is from a psychiatric educator standpoint. When do we start treating AI integration as a competency for physicians? At what threshold do we start considering it a skill set? Because at this point, we're in that tinkering stage. There are early adopters and there are those that are not, and so there's friction between organizations, between those that might be more progressive in using it and those that aren't. But when do you foresee that happening?

Yeah, we're not quite there. There's conversation at the ACGME level about sort of minimal standards, common program requirements. They're not there yet. Competencies have been described through the health systems science work at the AMA in conjunction with the LCME at the UME level, but it's not AI-specific. It's more global, around informatics, right? We don't want students graduating who don't know how to use the electronic health record or understand these tools. So I think it'll be a work in progress. Just based on what I know about the standards community and what the regulatory bodies, the ACGME and LCME, do, I don't think it's imminent, but certainly there's deep conversation about what that ought to look like.
Where I think there's likely to be more movement is in the precision education space, where there's now a pretty well-defined model and approach of, let's not do the time-based "you need to see 100 cases of this." Let's truly use data to inform our understanding of competency in those entrustable professional activities to personalize the education. Then, yeah, oh great, if you're ready in three and a half years, not four, then off you go to practice. But there's obviously all sorts of regulatory work, because of licensure, timing requirements, and other things that need to be adjusted to support that.

Would you recommend, then, faculty having these EPAs to model for residents in the future?

Yeah, and frankly I just haven't paid that much attention to what those competencies look like in various specialties. Like, are there EPAs around the use of digital tools and AI in psychiatry versus medicine? I don't know. My guess is that some are farther along than others, but I expect that they'll be there for everyone. But like with everything in medicine, it'll be variable in terms of the timing.

Okay, and my last question is, you said that there would be a difference between physicians that would use AI and those that would not. Right. Does that have medicolegal implications?

Yes. So I've already seen a lawsuit where a patient's family is suing a physician and a health system because they did not use an AI tool. Now, the punchline is the AI tool didn't actually exist. So you can imagine how that's gonna go. But there's this expectation that patients have that we're using technology. And again, there is no case law on this right now, but there certainly will be.

Okay, last one, I promise. So if you were to model integration of AI as a standard for your practice in a hospital setting, would you have everyone use one AI model, so there would not be discrepancy between different providers in the system, or would you give the choice to the providers?

I don't think anybody has a good answer to that question. Yeah. All right, thank you so much. Yeah.

All right, well, that second-to-last question anticipated mine, but maybe you have some more information. Do you think there will ever be a time in any of our careers when a psychiatrist will not be meeting the standard of care by not using AI?

I don't know the answer to that. I look at ultrasound technology for central line placement in anesthesia. People fought it forever: I don't need to use it, I've done it for 30 years, it's new, it's expensive, we don't have them. And the ASA, our professional society, it took them a while to get to the point where they said, no, it should be a standard of care, because the safety data is clear that there's less puncture risk and so on. So that was a 20-year journey. My guess is that for AI it could take just as long, but maybe not. And again, you have to remember that you all define the standards of care in a local setting. That's why the practice of medicine is regulated at the local level, not with a federal approach. And so it may be that things are different in California than they are in Wisconsin for a while. I don't know. But again, I think it's an interesting question. It will be evolving for a while. That's great.

Yeah, hello, I'm Mario Weiss, a physician from Germany. I don't know if it's good to ask this in public, but maybe I'll try. When we implemented AI in Germany within the legal setup, the biggest opponents were my physician colleagues.
So they went after my Approbation, my medical license. So basically I had a lawsuit against me because I was developing AI, and we managed to see it through against the physicians. And then we looked at the US, and we thought, why is it not happening here? And hearing you now, it's just the opposite of what I expected. I expected a lot of physicians who were against AI. So how come you have these problems implementing technology into your US system, while at the same time having physicians like you standing here being so enthusiastic about it? What's wrong with the innovation capacity of the US?

I mean, that's a longer question. And I'm actually going to Germany tomorrow, so we'll see if my experience over there comports with yours. No, look, we live in a very complex healthcare environment that nobody in their right mind would ever build from the ground up. It's driven by policy. I am not a single-payer guy; the AMA is not for single payer. But there are different advantages in a single-payer system, right, in many industrialized parts of the world, that allow the models to pivot, which has implications around technology acquisition and deployment that are just very different here. Make sense?

I think liability is a big issue. Yeah.

I just wanted to let you know about Woebot Health. It's an app: working on yourself, made easier. For those who are interested, there are psych apps where you can do therapy.

There's a fleet of venture-capital-backed psychiatry apps, and a handful actually driven by psychiatrists. And you can imagine how I feel about the ones that are more tightly connected to the profession, as opposed to others. Look, you all have been a wonderful audience. Thank you for sitting through 90 minutes of me. And thank you for having me at the APA.
Video Summary
In a presentation, Dr. Jesse Ehrenfeld, president of the American Medical Association (AMA), discussed the potential impact of artificial intelligence (AI) on healthcare. He shared both the excitement and concerns surrounding AI's integration, highlighting its potential to improve healthcare delivery and patient outcomes while acknowledging issues like liability, bias, and privacy risks. Dr. Ehrenfeld emphasized that AI and technology should enhance, not burden, healthcare professionals and stressed the importance of human-centered care in an increasingly digital world. Current challenges include healthcare workforce shortages, especially in psychiatry, and the difficulty of integrating AI into practice efficiently.

Dr. Ehrenfeld outlined how the AMA is involved in shaping AI development, emphasizing the need for tools that are trusted and integrated seamlessly into medical practice. He also addressed concerns about potential liability issues, encouraging transparency and shared responsibility in case of adverse outcomes related to AI use. Furthermore, the AMA is actively working on various initiatives to tackle physician burnout, advocate for reforms in Medicare and telehealth, and ensure equitable access to healthcare technology.

Lastly, a case study illustrating AI's potential was provided, underscoring the need for stringent regulations and ethical considerations to prevent bias, akin to historical issues with medical devices like pulse oximeters. Dr. Ehrenfeld's address highlighted the balance needed between embracing technological advancement and maintaining the integrity and trust inherent in the patient-physician relationship.
Keywords
artificial intelligence
healthcare
AMA
Dr. Jesse Ehrenfeld
liability
bias
privacy
human-centered care
workforce shortages
physician burnout
telehealth
ethical considerations