Education: An Essential Component of Consultation-Liaison Psychiatry
Video Transcription
I don't know if the mic is working. The mic is definitely working. So there aren't many people here right now, but we're going to get started because it is 8 o'clock. Looking at my watch, which has a black face. So as you can see here, the title of this session is Education, an Essential Component of CL Psychiatry. I'm Phil Bialer. I am the chair of this session. I'm the immediate past president of the Academy of CL Psychiatry, ACLP, and the theme during my presidential year and the theme of our annual meeting was making connections, inspiring transformation through education. And I am looking to my right, whoops, I just, I'm trying to, here we go, I'm looking at Sandy Rackley, who was our program chair for that meeting. We're going to be going in order, as you see here, for our speakers, and I'll start with our first speaker, Scott Beach, who is a CL psychiatrist on the MGH Psychiatric CL Service. He's an associate professor at Harvard Medical School. I'll let him introduce his talk. Let me get out of here so you can get your talk up, Scott. Thank you. And I've held it for a while, so I'm working on the first half of it. Thanks, Phil. This is a new experience for all of us, talking in the dark, so we get bonus points for that. I think this is mine. Nope, that's not mine. Mine's working. Here we go. I'm going to talk today about residency training in CL. And this is informed primarily by a survey that the Residency Education Committee of ACLP did, surveying residency training directors nationally to ask them about training in CL. So we're going to walk through that process. I'll present some of the results from the survey, and then kind of focus on some take-homes and some next steps as well. I don't have any disclosures relevant to this talk. So for a historical perspective, the ACGME currently requires two months of full-time training in consultation-liaison psychiatry. So that's where we are now. And in the new milestones, the Milestones 2.0, there's actually an entire milestone that's dedicated to consultation work. There have been surveys looking at CL training and education going back as early as 1966. And ACLP, which used to be the Academy of Psychosomatic Medicine, did its first survey of residency education in 2010. And that led to the 2014 APM recommendations for residency training in consultation-liaison psychiatry. And a few of the highlights from those recommendations were that they advocated for a minimum 3-month FTE rotation, so at least one month longer than is required. They said that residents, if they weren't on the service full-time during the rotation, should at least be there for 30 hours a week. They said that some of the core CL time should come in the latter half of residency, so either PGY-3 or PGY-4 for part of the required CL time. And that there should be a minimum of one FTE of faculty for every 1.5 to 2 trainees. That was done in 2014. And since that time, anecdotally, there's been a lot of shift in training. And the biggest trend that I've heard residency program directors talk about is moving CL earlier in residency. And that's due to a variety of factors. But probably the biggest ones are competing demands of outpatient work. And that specifically has to do with the way the outpatient requirements are written, in terms of the ACGME requirement for a full-year longitudinal outpatient experience.
And a desire to protect elective time later in training, which has caused CL to move out of PGY-4 in programs that used to have it there. There's also been an expansion of CL services. There's been the introduction of new models, including proactive consultation models. And then, of course, during the pandemic, we started having a broader use of telepsychiatry. So we set out to basically follow up the 2010 survey. Again, this was done by members of the ACLP Residency Education Subcommittee. All right. And we basically spent the first year of the pandemic designing this survey. We wanted it to be really comprehensive. But we also wanted it to be something that program directors could fill out in under 10 minutes. And we came up with an 81-question survey, which is quite a large survey. But it was relatively quick in how it went for people completing it. We went through the process of getting IRB exemptions, set it up in REDCap, and we distributed it to U.S. residency program directors using a listserv that is available to program directors. And then we really hammered people with follow-up emails, because a lot of these program directors were not CL psychiatrists, they were not members of ACLP, and their investment in filling out the survey may have been lower. We also, for political reasons, could not get an organization like AADPRT to officially endorse the survey. So we really wanted to do things to maximize the response rate. One of the things that we ended up doing was identifying as many ACLP members as we could at institutions that had not completed the survey and sending them emails asking them to harass their local program director. So we did that for several weeks. The survey topics were basically broken down into six areas. So we wanted to learn about program demographics; we wanted to know who was responding. We really wanted to focus on the structure of both core and elective resident rotations in CL, and we talked about a few areas in that regard. We wanted to think about attending physician staffing. We wanted to ask about fellows and other trainees. Since we were doing this survey of all programs, we thought it would be a good opportunity to also ask some questions about fellows. We wanted to focus on the didactic curriculum a little bit in CL, and then, relevant to the timeframe, think about the impact of COVID and telepsychiatry. So I'm going to walk through some of the results. The first thing that's important to understand is that the overall response rate was lower than we had hoped. It was about 38.4%, so pretty low for a survey. But we did manage to get over 100 responses, which makes it the largest survey of residency training in CL that's ever been conducted. So this is a table showing who responded. And you can see that the majority of responding programs identified as university-based. Those first four categories are categories that the ACGME uses in identifying different types of residencies, and residencies typically just identify as one of those four types. So most were university-based, and then second was community-based, university-affiliated. Response rates by region are shown down at the bottom. So you can see that northeastern programs had the highest response rate, and the other regions sort of clustered together.
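A minimal illustrative sketch of how a response-rate breakdown like the one just described might be tabulated, using invented placeholder counts rather than the survey's actual data:

# Hypothetical sketch: tabulating survey responses by program type and region.
# All counts below are invented placeholders, NOT the actual ACLP survey data.
import pandas as pd

# One row per ACGME-accredited program; 'responded' marks survey completion.
programs = pd.DataFrame({
    "program_type": ["university", "community-university-affiliated", "university",
                     "community", "university", "military", "university",
                     "community-university-affiliated"],
    "region":       ["Northeast", "South", "Midwest", "West",
                     "Northeast", "South", "Midwest", "West"],
    "responded":    [True, True, False, False, True, False, True, False],
})

print(f"Overall response rate: {programs['responded'].mean():.1%}")

# Response rate by region, mirroring the bottom rows of the slide's table.
by_region = programs.groupby("region")["responded"].agg(["sum", "count", "mean"])
by_region.columns = ["responses", "programs", "response_rate"]
print(by_region)

# Breakdown of responders by program type (the top of the table).
print(programs.loc[programs["responded"], "program_type"].value_counts(normalize=True))

For the responder versus non-responder comparisons described next, a contingency-table test such as scipy.stats.chi2_contingency applied to responded-by-program-type counts would be the natural tool.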
So because we had a low response rate, we really wanted to examine the differences between responding and non-responding programs to better understand what biases that was going to set us up for, and how we might use that information to better interpret the results. And what we found was that programs that responded to the survey tended to be larger residency programs. They were significantly more likely to have a CL fellowship, and they appeared to be less likely to be community-based without an academic affiliation. There were no differences in region between responders and non-responders. So overall, what we suspect is that programs who responded to the survey actually had more of an investment in CL training, and these results likely overestimate the presence of CL training nationally, and may paint a more optimistic picture than what is actually the case if you look at all programs around the country. So keep that in mind as we talk through some specific results. So in terms of the structure of CL, one of the good things that we found in this survey is that a near majority of programs have actually increased the total amount of required time that residents spend on CL. Nearly half the programs have increased that amount of time in the past decade, and it's only decreased in a very small percentage of programs, 5.8%, with about 40% staying the same. We also were surprised to find, and we hadn't asked this in previous surveys, so we don't know how it compares, but over a third of programs now have residents rotating on at least two completely separate CL programs, generally at different hospitals. About one-fifth of programs have a required outpatient component, and then the majority of programs have separate emergency and consult rotations. So only about 10% of programs have those experiences combined for residents, and nearly a quarter of programs identify as having a chief resident in consults. This is a comparison of the 2010 and 2021 surveys in terms of where CL is placed, and I'll just pick out a few highlights here. If you look at single year, you can see that the percentage of programs having all of the core CL training in PGY-2 has increased since 2010. So it's gone from 34% to 43%. That's one of the bigger jumps that you see here. The number of programs who have all of CL in PGY-3 has decreased from 11% down to just 5% now. And then at the bottom I would also highlight that if we think back to those 2014 recommendations, that programs should have at least some of their required CL training in the second half of residency: in 2010, 61% of programs were compliant with that recommendation, and now only 44% of programs are compliant. So that is a change and a trend that we should think about. This looks at all four surveys. Some good news is that 100% of programs have a CL experience. So at least all of the programs that responded to our survey are meeting the ACGME requirement. But again, you can see that things have changed over the years in terms of the placement, with a significant increase in placement in PGY-2 and a relative decrease in placement in PGY-3 and 4. When we look at attending physician staffing, a few interesting things emerge. About a third of programs say that all of the faculty who attend on the CL service are board certified. So that's great news. But about a fifth of programs say that none of the faculty who attend on their CL service are board certified.
Many programs have non-physicians on the staff, but only a small number of those actually have those non-physicians staffing cases with residents. And then there's a broad range of FTE for faculty on the CL service. Some programs have more than 4.0 faculty full-time equivalents, and almost a quarter of programs actually have less than one full-time faculty member on service. Thinking about staffing models, a full-time staffing model would be a faculty member who's always on consults year-round. That's actually the most common staffing model. A longitudinal staffing model is a group of faculty who work certain days or half days of the week. That's about a quarter of programs. And then a block staffing model is a rotating group of faculty who work for periods of a couple of weeks to a month at a time in a rotating fashion, and about a fifth of programs use that model. When we think about fellows, of the programs we surveyed, about 37% offer a CL fellowship, with two positions being most common. Nearly all programs report that attendings staff all new consults with fellows, and an equal number report that attendings staff at least some follow-up consults with fellows. We don't have data to know if this has changed, but I would suggest that it has. I think 10 or 15 years ago, attendings were staffing fewer cases with fellows. That has changed significantly, primarily because of billing requirements and an attempt to generate more revenue rather than necessarily educational goals. In most programs, about four-fifths of programs, fellows supervise and directly staff resident cases, so they're doing direct teaching with residents. And then if we look at placement into fellowship, again, it's kind of a good news, bad news situation with a really broad split. And the most concerning thing is that in nearly a third of programs, no resident has gone on to CL fellowship in the past five years. And remember, these are programs that tend to be more invested in CL than is probably the case nationally. So this suggests that we're not putting nearly as many graduates into CL fellowship as we would like to be, though the average is two to three residents per program over the past five years, with some programs putting in more than 10 over that time period. In terms of other staff and trainees, as I mentioned, many programs now have non-physicians working on the CL service: NPs, social workers, psychologists. All programs that responded to the survey have some form of other trainees on the service, with medical students being most common, and neurology residents, family medicine residents, and internal medicine residents also being common. I won't dive too much into didactics, except to say that didactics in CL are a key part of many programs. The majority of programs do have a formal CL curriculum. And we found that a lot of programs are using more novel ways of teaching CL, including journal clubs, case conferences, and research groups. Most of the CL talks are happening in PGY-2, which makes sense if most of the residents are rotating in PGY-2. And then we asked about some specific topics, particularly newer topics, and we were pleased to find that many of those are being incorporated into the CL curriculum, including things like anti-racism, DEI efforts, and cultural psychiatry. And then finally, the impact of telepsychiatry, or the impact of COVID and telepsychiatry. So prior to the pandemic, only about 14% of programs utilized telepsychiatry on the service.
And by the time we surveyed, which was about a year and a few months into the pandemic, over 80% of programs had been using telepsychiatry, and half of those programs anticipated continuing to use telepsychiatry in some form in a post-COVID era. So that's become a new modality, and I think one that we have to think a lot about in terms of training residents on the use of telepsychiatry for consult work. I want to emphasize some key limitations of the study. Again, the main one being sampling bias. So this was a survey that was sent to all program directors, and it looks like the programs that responded probably were those who were more invested in CL training and had more resources to dedicate to CL training. So again, these results likely overestimate the current strength of CL training in residency nationally. We also lacked granularity in some topics. For example, we did not distinguish between programs with two months of CL versus three months of CL. So we can't comment on how many programs are in line with the ACLP recommendations for three months of CL training, though we know that they're all in line with the ACGME requirements for two months. And then this survey was not linked to the 2010 survey in any way, so we don't have a way to assess interval changes for individual programs. So, take-home points: from my standpoint, a couple of really important things stand out. I would say the best news in this survey is that residents are spending more time on core CL rotations. They're doing more outpatient work, but they're also spending more time on inpatient CL rotations. And I think that's really good for our field if we think about wanting to expand CL training for residents. The biggest trend, and the one that gives me the most pause, is that programs are also moving CL earlier in training. So you saw we went from two-thirds of programs having some of the core CL in the second half of residency in 2010 to now less than 50% of programs meeting that recommendation. That said, if I put on my program director hat, that makes total sense. Having CL in the PGY-2 year, or even in the PGY-1 year, is so much easier from a program structure standpoint. It fits really well in a block rotation schedule. It's right in there with other core psychiatry experiences like inpatient work and emergency room work. And it allows you to fully separate out the PGY-3 year and really focus on outpatient work and then keep the PGY-4 year elective. So on the one hand, I totally get it. On the other hand, from the perspective of somebody who attends on the consult service, I worry about the impact that it has on residency training in CL. And I worry about the impact that it has not only on the care that we provide for our patients, but also on how we're perceived by other teams in the hospital, right? So if you have a brand-new PGY-2 or a PGY-1 rotating on the consult service, they're not in a position to give expert recommendations. They have very little psychiatric training under their belts. They don't have any understanding yet of group therapy, of psychodynamic psychotherapy, and all those principles that we use every day on the consult service. And it's really hard for them to act effectively. And so they're basically acting as a first line with significant attending backing. Whereas if you have a PGY-3 or PGY-4 on that service, that can be a really autonomous experience for them.
They can get a sense of what it's like to lead a team, to serve as an ambassador for psychiatry to other services, and to really take ownership of those cases. So it's a very different experience. The next step for us in the Residency Education Subcommittee is going to be to do something similar to what was done in 2014: to take these results and hopefully create new recommendations for CL training in residency. And I think this is something that we're going to have to wrestle with a lot, because we probably can't reverse the trend. And so how do we work around it? And I think one of the key recommendations is really going to be bringing residents back to the CL service at some point in their third or fourth year to be able to have a more robust leadership experience. The final take-home for me is just that we need to do better work in terms of continuing to increase the number of graduates going into CL. Nearly 20% of programs have no CL-trained faculty, and the same percentage have no graduates pursuing CL. So we need to figure out how to do outreach to those programs. I think ACLP is starting to do this a little bit, so we're putting together a program for residencies that don't have a strong CL component or don't have CL-trained faculty, to offer them some education and some meetings and some mentorship with folks through ACLP. But I think we need to do a better job of promoting that. And then finally, we also found that residents are rotating on multiple different CL services and that they're part of larger services, and we probably have to think about the impact that that has on them. It's hard to change hospitals. It's hard to change systems. It has advantages in terms of learning in different systems, but I think it may be increasingly hard for them to cover multiple different domains and feel like they have a sense of how to master that work. So lots of work ahead, but that's kind of where we're at in terms of residency training with regards to CL. Thank you. Any questions for Scott or comments? I have just one quick comment about PGY-2s doing the CL rotation during the PGY-2 year. I mean, one argument that some CL directors will make is that they're closer to their medical training and medical school. They may not have the full psychiatric training that is also very important during consults, but the medical background and the closeness to that is also very important. Any thoughts or comments about that? You know, full disclosure, I am somebody who did CL as a PGY-1 and PGY-2, so I trained at UVA and that's when we did our CL rotation, and then I directed a program where we did it in PGY-3. I think that's absolutely true, and I will say, from a resident perspective, I loved doing CL early, and it was something that really generated for me an interest in doing CL. So I do think there are advantages, and you're right, Phil, in terms of being closer. What worries me is they are closer to medicine, but the people who are consulting them are closer to medicine as well, and so I wonder how much they're adding beyond that, and I think that's the key. You know, I think it obviously can be done.
It's been done for many years and many of us trained under that system, but it requires a different approach from the attendings, and I think for these programs that are shifting, it's going to require a lot of thought about restructuring the service around that, and about what the needs are of a PGY-2 resident who's rotating on the consult service as opposed to a PGY-3 resident, because they're pretty different developmentally, actually. Thank you. Thank you. I really like your idea of putting a PGY-3 or 4 in as a kind of junior faculty member; that's what we did during my residency. So we were lucky. We actually did six months total, but we were a very large program. Bringing a PGY-4 back on gives that buffer for the staff if the attending isn't actually CL trained, and I think that's a great way to do it, because then you have that middle ground of somebody who actually has the experience, which seems practical. And yes, there's the conversation about electives and all that stuff, and we get really sensitive to that, but there's a really strong argument that we really only need three years for actual training to be successful, so let's use that year more effectively. And so I think that's a really powerful way to then get people back interested in it. Yeah. Absolutely, and I think you've anticipated the pushback with that, which is, what happened to our fully elective year? But I think that there are probably ways to work around that as well. Thank you. We're going to move on in the interest of time. Our next speaker is Sandy Rackley. Sandra Rackley is an assistant dean of training and well-being for graduate medical education at the Mayo Clinic and a consultant in child and adolescent psychiatry, and she'll be speaking some more about the impact of COVID on training. So I'm the token child person, which means I think developmentally and in terms of trajectories, and so I'm going to be talking a bit about how the pandemic has disrupted learning trajectories for our learners and the colleagues who consult us. I don't have any relevant financial relationships, and again, hopefully by the end of this, you'll have some food for thought on the ways that your own learning trajectory, and those of your trainees and of the non-psychiatric folks who consult you, may have been disrupted during the pandemic. So starting with the before times, and this is kind of an idealized view of our U.S. medical education system, but the idea was that we worked hard to select qualified candidates, and once we had them with us, we used didactic experiences to give them the language, theory base, and empirical rationale for their decisions about diagnosis and treatment. We gave them thoughtfully sequenced and well-structured clinical practice experiences under the supervision of senior physicians so that they could learn the cognitive, affective, and psychomotor skills to provide competent clinical care. We made sure that they were engaged in formative and summative assessments, both formal and informal, to help guide decisions about the scope of their independence and their own next plan for learning. And we did all of this while attending to our clinical learning environment and the hidden curriculum to ensure that, you know, what was happening in the environment that our trainees were working in supported the goals of their learning, including honoring the humanity of everyone involved: learners, faculty, staff, and patients. Enter chaos.
And so this is Johns Hopkins data, waves of hospitalizations, and one of the kind of key take-home points here is that, as we all know, the impact on individual hospitals, systems, training programs, even individual clinicians was variable. And different things hit at different times and had different impacts. And so everyone had their own kind of individual experience of the pandemic. That said, I've kind of taken those peaks and superimposed a training trajectory across the top of it. So, you know, in reviewing COVID timelines, it's been a long three years. But if folks remember, early on we had those shutdowns. Everybody was home. You couldn't get groceries. And then as things were starting to reopen in varying waves around the country, opening and closing and opening and closing, that was when we were really dealing with PPE shortages. So even as places in the country opened, we were seeing in our healthcare systems that we were still really struggling with getting people in and keeping people in and then making sure they had the equipment they needed to do their jobs safely. Then we had a bit of, you know, kind of a honeymoon in the spring of 2021, as, you know, we were all excited about the vaccinations as they spread. And then we had the Delta surge followed by the Omicron surge. And now we're in what is supposed to be the recovery phase of the disaster, but it seems to be one of continuing staff shortages, burnout, and struggles within our systems. And so then I've superimposed here the timelines of a sort of standard trajectory, if we were to talk about a med student who went straight into residency and straight into CL fellowship. So if we talk about somebody who was a second-year medical student when the pandemic started, they've essentially had all of their typical clinical training as a third- and fourth-year med student and first-year resident during the COVID times. If we move down to somebody who was an intern when the pandemic hit, they are graduating from psychiatry residency right now, essentially having done their entire core psychiatry training during COVID, with those peaks disrupting various aspects of it. Thinking about, you know, people who were, say, PGY-2s, and we'll talk about this in a minute: they did their PGY-3 clinic during the peak of the shutdowns, when we were all just trying to figure out what happens and how we do this. And then even thinking about somebody who graduated from their CL fellowship and is now finishing their third year as a new attending or a staff member anywhere out in the world, they haven't practiced independently in a healthcare system that was not in the midst of chaos. And so I'm going to walk through each of those aspects of our healthcare system, our medical education training system, and talk a little bit about impacts. The first one is just that aspect of candidate selection. For the cohort that was starting whatever it was, whether it was med school, residency, fellowship, or a job, in the summer of 2020, selection had been pretty normal by the time the pandemic hit. But we've had these two years of people starting in 2021 and 2022 where things have been completely disrupted by the pandemic. So we had disruptions in grading and standardized testing, testing centers being shut down, you know, you couldn't get appointments at Pearson VUE for eight months, trouble scheduling board exams. We had the ECFMG and immigration processes highly disrupted by the pandemic, among other political issues.
We also, as residency program directors, were seeing significant increases in application numbers, a trend that predated the pandemic. But with the virtual world of interviews, we saw applications just increase dramatically. And so, through lots of consensus, we've seen virtual interviewing for residency and fellowship and for medical school, and we've been hiring our new faculty in this virtual world, which has impacted people's decisions. Sometimes people are signing up for four years or the start of a career at an institution that they haven't set foot in. We're trying to select candidates in a virtual world. And as the travel restrictions and social distancing have, you know, wound down, what we're seeing now is an attention to equity issues and the ways that the finances of interviewing and travel have disparate impacts. And so, moving forward, a lot of our professional organizations are pushing for continuing virtual interviewing indefinitely as a way of kind of leveling the playing field, and for any in-person recruitment at all to be no stakes, so that people are not, fundamentally, selecting candidates on the basis of whether they have the finances to travel and interview. And that leaves us thinking together about how we make that an effective way of selecting and recruiting our candidates at all levels of our system. Moving on to our didactics. We all know that initially there was a lot of just outright cancellation while everybody was getting their feet under them. And then when you think about those waves, a lot of us saw didactics, whether it was at the med school, residency, or fellowship level, or our CME courses, being canceled during things like the Delta wave and the Omicron wave because people were too busy providing clinical care. We also saw rescheduling or substitutions. So didactics that may have been carefully sequenced so that one set built on another may have been completely disrupted, and people who weren't used to teaching topics were filling in when colleagues were sick or caring for COVID patients. And then there was the shift, pretty rapidly, to virtual delivery of didactic content, which had a whole host of struggles and continues to. From just the very basic bandwidth and technology issues, a lot of freezing, to the thing that we've all seen, where somebody on the Zoom meeting starts talking and everybody's signaling them that their microphone's off. The ways that those disrupt the flow of a conversation, the flow of learning, even the pause that it takes to figure out who's going to start talking if we're having an interactive seminar. We talked about Zoom fatigue. And what happened in many settings, at least for a period of time, is that didactic experiences that were much more rich and interactive and case-based shifted back to a passive learning experience where people were silently, maybe with their cameras off, listening to talks and maybe not taking things away. There isn't a lot right now in our medical literature about impacts on learning in that setting. We're starting to see data in the K-12 world that kids really, when they learned virtually, didn't learn at all. We've got a whole generation of kids who've lost a year of academic progress. And that data also, at the K-12 level, talks about disparities. So in the school districts or schools that had more resources, those kids kept on a trajectory and in some cases actually accelerated learning, probably when parents were home and more available to be involved in teaching.
Whereas for kids who didn't have those resources available to them, it was a different story. I know in Minnesota, rural broadband is a huge issue. We had parents who were driving their kids to the McDonald's parking lot to try to log into school. There are still schools around the country where they don't have enough teachers, and teachers are teaching virtually from different schools and alternating. And so we're watching kind of this divergence of trajectories right now in learning. And we have to think about what that looks like 10 years from now, when those kids are applying to our medical schools, our residencies, and our fellowship programs, given the way our educational system builds on itself. But we can kind of maybe draw a parallel to what our students, our residents, our fellows, even ourselves, have learned over the last three years in virtual experiences. And perhaps our own learning has not been as high quality either. So practice experiences suffered from a lot of the same things. Many med students especially were just sent home during the PPE shortages. And right at the beginning, the idea was that it was unethical to expose students to an infection that we knew very little about if they didn't have to be there to provide that clinical care. We had some clusters of students who graduated early. We had waivers from several of the specialty boards, which for that first year, and to some extent the second year, said to program directors, you know, we get it. You can't give all of your residents the core rotations that they need right now. And so we're relying on you and trusting you to make decisions about competence. Around the world, there have been systems that actually added a year of training in recognition that that amount of disruption meant that graduates weren't ready. In the U.S. that didn't happen; in fact, again, they accelerated graduation for students. We also saw substitutions. And here I'm thinking about not just our psychiatry trainees, but our medical trainees who got pulled from other non-core rotations to provide COVID care, to go to the ICU, to go to the ER. And fellows who were pulled from fellowship programs back into their core specialties to provide that COVID care during various waves. The ACGME allowed, for a time, fellows to spend up to 20% of their fellowship time providing core care in their specialty. In a one-year fellowship, that's a significant portion of learning lost in a specialty. And whether it was actual changes in rotations or just alterations of current rotations, because of waves of COVID patients, because of shifts from in-person to virtual, because of census changes, the experiences that our trainees had clinically were very different from what they would have had before. And what that's led to is islands of competence and Swiss cheese holes of skills deficits that have been extremely individual. And so, as we've talked, I put some quotes up here that I've heard from people at various levels of training all along. One, from an incoming intern, was, you know, I was on a virtual ICU rotation; I hope all of my patients only code virtually. One of our interns surprised us by saying he had actually never had the experience of having to get up in the morning and shower and be at work on time, that that was a new experience for him.
A child fellow that I supervised had been in a program that didn't have great resources at the start of the pandemic, whose PGY-3 outpatient year essentially didn't exist, and so started child fellowship without much experience of, you know, seeing patients in clinic. And another child fellow with whom we were just talking about the mechanics of setting up an office for therapy, because they'd never had a therapy patient in person before. So when we think about these things, and then we think about our medical colleagues who may have missed some of those experiences, whether it's the CL rotation that they may have, you know, originally had scheduled, or just some of the more subtle psychosocial aspects of care, those may have been lost during the pandemic while they were focused on providing critical care, or just when things like continuity practices dropped hugely for 6 months, 12 months, 18 months. And so when we're thinking about the colleagues who are consulting us, they may also not have, especially, the psychosocial and interpersonal communication skills that we would expect of attendings or residents at their level. And they're needing additional help from us for those competencies. Assessment has changed. Step 2 CS was paused and then eliminated. The ABPN allowed for virtual CSEs. But also, thinking about informal assessments, the moment-to-moment feedback that you get from an attending, from a senior resident, even from your peers, during social distancing those opportunities were lost. I was just talking with a faculty member last week who was one of those CL fellows who graduated in 2020, and, you know, I was asking him what he's doing to learn right now. And he said, I've never been to a conference since my fellowship. I don't even know how to choose a conference. So those self-directed learning skills that are so much a part of how we continue to grow and develop over the course of our careers have been impacted as well. And then again, thinking about our clinical learning environment with just the waves of impact that we've had: again, social distancing affecting social support and informal learning. Staff shortages from quarantines and isolation early on and now the great resignation, shifting responsibilities that were handled by other members of the multidisciplinary team onto our faculty, our fellows, and our residents. The fact that we sit in a broader culture that has been traumatized over the last three years, and the way that chronic stress impacts civility, professionalism, resilience, all of these things. And then thinking, too, about the way our trainees' access to their own personal therapy, as both a support experience and a training experience, has been disrupted because of availability and just because of the pandemic. And finally, thinking again about the ways that these impacts have been varied and, like many situations, have tended to more negatively affect the most vulnerable members of our community, including the most vulnerable members of our learner groups and colleagues, which may again lead to these diverging trajectories. And so thinking early on about population impacts of COVID-19 itself, thinking about these learning disruptions: my fellow who hadn't had an outpatient clinic experience because his hospital couldn't stand up Zoom for nine months. So that differential impact on learning for less resourced students, less resourced schools. And then remembering that personal impacts have been very individual.
People who've had losses, have had caregiving responsibilities, our IMGs who were separated from their families and their loved ones for years, and the ways that those things have disrupted individuals' learning and growth over this time too. And so, what now, moving forward? I hope the big takeaway here is that we don't make assumptions and don't generalize. I know as a faculty member, I used to be able to get a sense of a trainee based on a few days of working with them, kind of where they fell in my bell curve of trainees. And that gave me an idea of what kinds of skills and knowledge we might want to work on first. Now, again, thinking about sort of mountains and holes in people's learning and understanding, what we're going to find is that people have islands of real strength and expertise that may go beyond what we have as faculty, while they've also got kind of surprising areas of gap. And that means really working together with our trainees, with our colleagues, with ourselves, to kind of assess where those areas of strength are and where those areas of gap are. And recognizing that our kind of standard experience, and our trust that people will get what they need from progressing through, may be shaken. And that we really may need to think more individually about learning trajectories and be intentional about planning, learning, and reflection to allow people to fill those gaps. And what that's going to mean is that we can't have undergrad, UME, GME, and CME be separate islands. So we're going to have to be thinking about learning as a transition that occurs smoothly across a continuum. You know, these ideas of handoffs during these transitions and helping people think about, again, taking charge of their own learning so that they can continue at whatever level they're at. And then we are now in a hybrid world, which means all of us who teach, who deliver education, need to educate ourselves and become better at delivering effective education in a hybrid world and understanding what that changes about the ways that we teach, assess, and facilitate learning. And that is where I'll stop. Any questions, comments for Dr. Rackley? I actually have a question I'd like to ask you. There are recent requirements that, I think, say that attendings can't supervise through telehealth. And that, I know at our institution, is creating a fair amount of challenge, because people have gotten really used to that. We have attendings who live pretty far away now, and, you know, people have sort of really developed, one, their own personal quality of life, and then also the way that they're doing things, based on sort of the ability to meet their needs. And so I'm wondering if you could talk a little bit about that.

Earlier, in prior studies, treatment initiation rates are less than 25%. And treatment sustainment is either not studied or it's like 0 to 1%, so really low rates. And with MCPAP for Moms, this rate was 43% for initiating treatment, and with PRISM it was 52%. So a slight difference, but this was not statistically significant between the groups. Treatment sustainment was 20% with MCPAP for Moms and 25% with PRISM. Again, slightly higher, but no statistically significant difference. So both programs appear to be equally effective in increasing these rates. With depression symptoms, as I mentioned, we hypothesized that MCPAP for Moms would improve by two points on the EPDS, which is the Edinburgh Postnatal Depression Scale.
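As a rough, purely illustrative sketch of the kind of between-group comparison being described here, using simulated EPDS change scores rather than the trial's data (the group sizes and variability are assumptions), the analysis might look like this:

# Hypothetical sketch: comparing mean EPDS improvement between the two arms.
# Change scores below are simulated placeholders, NOT the trial's actual data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
mcpap_change = rng.normal(loc=4.5, scale=4.0, size=120)  # pre-minus-post EPDS
prism_change = rng.normal(loc=4.5, scale=4.0, size=120)

t_stat, p_value = ttest_ind(mcpap_change, prism_change)
print(f"mean improvement: MCPAP {mcpap_change.mean():.1f}, PRISM {prism_change.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# When both arms improve by a similar amount, p stays large, consistent with
# the talk's finding of no significant between-group difference.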
The EPDS is a scale we use in pregnancy and postpartum for measuring depression symptoms. And we hypothesized that PRISM would improve it by four points. What we actually found was that in both groups, the depression symptoms improved by 4.5 points. For those of you who aren't familiar with the EPDS, it's similar to the PHQ-9; it's a 30-point scale, just so you have a sense of the scale. And so with MCPAP for Moms, actually, we saw that it was associated with depression outcome improvements far greater than we thought. So this really speaks to the fact that when we're providing this ongoing education, and we have these providers' backs, and we're doing this longitudinal case-based education over time, then, as we heard earlier, they treat more complex illness, and it also suggests that patients get better. And we also did a sub-analysis, and we actually found that the more providers called, that is, increased utilization of our program, was also associated with increased improvement in depression symptoms. So for the providers that called more often compared to those that didn't, their patients got better more quickly. So that's another analysis that we did. So then you might be wondering, well, what happened from the perspective of the quality of care that we provided? Because I just talked about treatment rates and improvement in depression symptoms, what we also looked at is whether the quality of care is improving at the practice level. Because, you know, we really wanted to understand: the depression outcomes appear to be the same in both groups, but something has to be different in PRISM, because we've spent so much time working with the practices to increase their quality of care and providing all this education and training. And we had this tool that we developed where we looked at whether they are meeting the standards of care as recommended by ACOG. And what we found was that in MCPAP for Moms, our Massachusetts access program, their quality of care score, and this is the workflow piece, we did not see a significant change pre-post; this is sort of the follow-up piece. But with PRISM, we did. And we did this by step in the care pathway. So screening: PRISM practices were screening more often. They were screening more often at three time points, whereas in the MCPAP for Moms practices, many of them were still only screening at the postpartum visit. With PRISM practices there was also a difference in that they were implementing screening for bipolar disorder more often than MCPAP practices. Assessment and treatment, we didn't see a change. I think this is partly because they didn't document it, because this was all based on medical record abstraction. And then we didn't see a change in follow-up monitoring. I do suspect that this is because it wasn't documented. But regardless, we're seeing changes up here. And overall, in the practice policies and procedures, we saw a pre-post difference. So when we're looking at PRISM, this is comparing their practices before implementation and after. So, we've been collaborating with ACOG, you know, throughout all of this development, and we've really done this in partnership with a close colleague of mine who's been, you know, a partner on all these grants.
ACOG said to us, well, it's great you have this model, but we want you to make it more scalable, because we had a whole implementation team that would go into the practices and do these trainings, and it was quite time-consuming. And so what they said was, we want to create something that OB practices can do themselves, without any implementation team, without your team doing this. So what we did was create the self-assessment instruments. These are all now available on ACOG's website. This just came out this past week. And so we took that whole process I described, without the navigator, because that would be hard to do, and we created a self-guided toolkit or guidance document. And for practices to do this themselves, we also created some videos where we're teaching the practices how to do it. This is now all available on ACOG's website, and we've actually done another randomized controlled trial. We just completed it. We haven't analyzed the data yet, but we're in the process of looking at whether this still improves depression outcomes. Are we seeing that difference when we have the self-guided version? So we'll see. I don't have those results yet, but we'll have them in a few months. So in summary, we've developed a lot of tools and products to address mood and anxiety disorders in obstetric settings. It's exciting that the access program model is out there, and the PRISM tools certainly are as well. These tools combined are showing that OB providers can provide more complex treatment with this ongoing longitudinal education and implementation assistance. However, we still have a lot of opportunities to increase access. And when we think about how we didn't see a difference between those groups, how MCPAP for Moms and PRISM were equally effective in improving depression symptoms, what we will be doing in our future studies is thinking about the reach. So in the implementation science RE-AIM model, impact is reach times effectiveness. As I've talked about, both programs were effective in improving treatment rates and depression outcomes. We didn't look at the denominator, though. But the PRISM practices also screened people more often, so if we think about this, they probably reached more people than MCPAP for Moms practices did. So I suspect that they reached more people. We did not design the study in a way to look at that, to look at, among the people who were screened, how many actually got treatment. We really looked at a subset of all the patients served by the practice. So we just didn't design the study that way. So in future studies, what we'll be doing is looking at, when we think about PRISM, yeah, we know it's effective in improving depression outcomes, but what is the reach of PRISM? How many people is it reaching in the practice as compared to MCPAP for Moms and other interventions? And that way we can truly understand the impact. Because we know that PRISM is improving the screening processes and the quality of care provided more so. So that's what we'll be doing in future studies. And so thank you to all the collaborators and all the many funding sources we have had. And that's the end of my talk. So thank you. I think we have some time for questions. I think we might have time for one question. We're going to move on. Thank you. Okay. So we're going to finish up today. I'm Phil Bialer. I'm a psychiatry attending at Memorial Sloan Kettering Cancer Center.
And I'm going to be talking about a communication skills training program for oncology practitioners that we developed there. I do not have a disclosure slide, I apologize, but I don't have any personal financial disclosures. However, I should say that the study I'm going to be describing here was funded by an NCI R25 training grant. So what I'll be talking about today: I'll be starting by giving some background about communication in cancer care. I'll describe the learning-centered, experiential communication skills training program that we call Comskil at Memorial Sloan Kettering, the evaluation process we went through to study the effectiveness of this training, and some new developments and collaborations we've developed since the time of the training. So, communication in cancer care. We do know from our experience, from the literature, from studies, that cancer patients often have many unmet communication needs. They often do not have a full understanding of the extent of their disease, of their illness. They may not have a clear understanding of whether the treatment is meant to be curative versus palliative. They may not have a clear understanding of their prognosis, or of how treatment, adjuvant treatment, may decrease the risk of recurrence, and so on. If they have more advanced cancer, they may not have clear communication with their providers about advance care planning and end-of-life goals of care. Cancer patients also often express concerns that they aren't receiving enough emotional support from their doctors and that their emotional reactions aren't being adequately responded to. The discussions that we have with patients are often very emotionally charged, and I often say to our trainees that if we don't respond to the emotion in some way, the patients are just not going to hear the next thing you say, and it's not worth going on. So with that, the mission of our program is to work in partnership with clinicians of all disciplines to improve communication with oncology patients and their families in order to enhance patient-centered care and the overall adaptation to illness. And so this is an evidence-based approach. We certainly went through the literature to identify communication skills that were important in training our fellows and residents. It's also a very experiential, reflective-learning sort of approach. Adult learning theory says that adults learn better when they're learning things that are practical for their day-to-day work, and they often learn better through experiential kinds of learning rather than just listening to a lecture like this. And also, the training is module-based. When patients are meeting with their oncologist, many things can happen during the session. There may be a discussion about the results of a scan, which may lead to a discussion about treatment planning and a change in treatment. It may lead to a further discussion about prognosis. If there's been progression of disease, it may lead to a discussion, again, about advance care planning and end-of-life goals of care. So many things can happen during this interaction, but we break these modules down into individual challenging communication situations, such as sharing serious news, breaking bad news, or discussing prognosis. So we developed these modules within what we call the Comskil conceptual model, and it is a skills-based approach. And we identified 20 communication skills organized into five categories.
This is a representation of the module structure, starting with communication goals, communication strategies, communication skills, and process tasks. So I'll quickly define each of these. A communication goal, basically, is the desired outcome of the particular discussion that we're having. So, for instance, for sharing serious news, the communication goal for that module is to convey threatening information in a way which promotes understanding, recall, support for the patient's emotional response, and a sense of ongoing support. That's the overall desired outcome of that particular module. A communication strategy is, basically, the steps we recommend using to accomplish that goal: complex a priori plans which direct communication behavior toward the successful realization of the communication goal. One example is offering ongoing support and helping to promote adaptation. Communication skills are something you say. They're standalone utterances by which the physician can further the clinical dialogue. They're discrete, they're observable, and they're measurable, such as checking patient understanding, which is very important. Often we're having very complex discussions with patients, and we may assume they understand what we're talking about. And you ask, what is your understanding of the treatment we're going to be offering? What's your understanding of your prognosis based on what I just said? And they may not really have a clear understanding. So what we were trying to do with our communication skills training program is to enhance more effective and empathic communication. And so these are the five categories of communication skills that we have: agenda-setting skills, questioning skills, checking skills, information organization skills, and empathic communication skills. And these are the 25 skills that we did identify through the literature that are used across all of our modules. For instance, something like agenda setting is simply starting the meeting by saying, we're here today to discuss the results of your MRI. It sets up the framework for what we're going to be talking about in the meeting, inviting the patient's agenda, and so on. Again, checking skills, questioning skills. We're often very good at asking open questions, but our oncologists sometimes aren't as good, and they ask more yes-or-no questions. Just simply saying, tell me what's going on, is an open question. Again, checking understanding: what's your understanding of what we just talked about, and so on. Empathic communication skills, again, come up in all of our modules. And then the last component of our model is what we call process tasks. These are sets of nonverbal behaviors which create an environment for effective communication, and they may also include behaviors to be avoided. When you're having a discussion with a patient in an inpatient setting, sitting down to be at eye level with them really enhances and promotes the patient-physician interaction. Making sure they're covered if you're doing this in an inpatient setting. Trying to ensure privacy. Handing tissues to a patient in an outpatient setting if they're crying. These are all process tasks that we can use to enhance the physician-patient interaction and communication. And then we put this all together in what we call a blueprint. And this is the blueprint, again, for breaking bad news or sharing serious news. We have the goal at the top, and then we have the strategies, and skills, and process tasks. And this looks very structured.
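To make that hierarchy concrete, here is a purely illustrative sketch of one module blueprint expressed as a nested data structure; the field names and the abbreviated contents are editorial shorthand based on the talk, not the program's official blueprint:

# Illustrative sketch of one Comskil module "blueprint" as a nested structure.
# Field names and abbreviated contents are shorthand, not the official document.
from dataclasses import dataclass, field

@dataclass
class ModuleBlueprint:
    name: str
    goal: str                                             # desired outcome of the conversation
    strategies: list[str] = field(default_factory=list)   # a priori plans toward the goal
    skills: list[str] = field(default_factory=list)       # discrete, observable utterances
    process_tasks: list[str] = field(default_factory=list)  # nonverbal behaviors

sharing_serious_news = ModuleBlueprint(
    name="Sharing serious news",
    goal=("Convey threatening information in a way that promotes understanding, "
          "recall, and a sense of ongoing support"),
    strategies=["Offer ongoing support and promote adaptation"],
    skills=["Set the agenda", "Ask open questions", "Check patient understanding",
            "Respond empathically"],
    process_tasks=["Sit at the patient's eye level", "Ensure privacy", "Offer tissues"],
)

print(sharing_serious_news.goal)

A breakdown like this mirrors the flexibility the speaker describes: individual skills and process tasks can be drawn from the lists in whatever order the encounter requires.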
And it is structured, but it can also be flexible. You're not going to use all of these skills and all of these process tasks, and you may not go specifically in order. If the patient is crying as soon as you walk in, which may happen sometimes, you're going to start with empathic communication. But it does provide some sort of structure for having complicated and complex discussions with patients in challenging situations. So when we do our trainings, our workshops, the trainees receive a book with some background information. We have a brief 20- to 30-minute didactic presentation. And then we break up into small-group role plays, working with standardized patients, actors. We had scenarios specific to the discipline we were working with and specific to the module we were working on. All the role plays were co-facilitated by Comskil faculty and also faculty from the department we were working with: medical oncologists, radiation oncology attendings, and so on. We were also able to record the interactions and provide video feedback. So the role plays allowed the fellows to practice new skills they hadn't tried before, for example asking a patient, how much information do you want? Do you want a lot of details? Do you want more of a general overview? Asking about their preference for information is one example. The other observers, the other fellows, would also give feedback to the person sitting in the learner seat. It allowed some room for self-evaluation, some self-reflection. And then when we were evaluating the program, we used the Kirkpatrick model of evaluation, starting with the most basic level, level one: course evaluation, their satisfaction with the course. Going to level two, looking at pre-post self-efficacy. We also had our fellows work with standardized patients, a standardized patient assessment with a checklist that the standardized patients filled out. Then moving up to level three, we also recorded interviews with patients in the clinic setting, real patients, not just standardized patients. And then the patients also filled out a checklist looking at patient outcomes, the highest level of evaluation. And we obtained a lot of data. So before the training, we got two video recordings with the standardized patients. We had the standardized patients fill out an assessment. We had two clinic recordings with real patients, and the patients filled out a checklist. The fellows went through the training, usually two days of training, three modules each day. And then after the training, we again got two standardized patient assessments and two clinic recordings with the checklists involved. So we got a lot of data, actually. So, the results. We had 262 oncology fellows and residents. I say residents because our radiation oncology residents came during their PGY-2 year; otherwise, we were working with medical oncology fellows, surgical oncology fellows, and some consulting fellows that participated in the training. And what we found, level one, the lowest level of evaluation, this was on a one-to-five Likert scale, and the fellows were very positive in terms of the course evaluations, very high ratings. Level two findings, pre- and post-training: again, we found there was a significant increase in self-efficacy. The fellows were saying that they felt much more able to address some of these challenging communication situations. Level 2B, working with the standardized patient assessments.
We had significant findings in terms of agenda-setting skills, checking skills, questioning skills, and information organization skills. So a good uptake in some of the skills. We didn't find a significant change in empathic communication, and I think one of the reasons for that was that many of the people who came to do oncology fellowship training at Sloan Kettering already came with a very high level of empathic communication skills. And you can see they started out very high, and there wasn't much room for improvement in terms of their empathic communication. They were already pretty good at doing that. Level 3, evaluation of clinic recordings. We actually did not find any specific significant uptake in skills in the clinic setting. And I think one of the reasons, or maybe some of the reasons, for that was that when we were doing the clinic recordings, they often were of initial evaluations. So the fellow was basically getting a basic history, and then the attending would come in, and they were the ones having the discussion about treatment planning and prognosis and so on. So we didn't necessarily capture much uptake in skills in the clinic setting from our recordings. In terms of the highest level of evaluation, patient outcomes, we did find some improvement on items like "the doctor seemed less hurried" from pre- to post-training, and "the doctor was listening to what I was saying," two findings where there was a significant improvement. And again, this might not seem like a huge difference, but in the cancer setting, just that the doctor was slowing down, was listening to the patient, which can also be very empathic, just listening and showing that you can really tolerate the emotion, I think was very helpful for the fellows and for the patients. I've been rushing through this, and I apologize for this, but we're running out of time. So since we finished that study, we've developed more trainings for mid-level practitioners, nurse practitioners, and physician assistants. We have developed a team communication training, which many of our regional sites have been requesting us to come do, where there are smaller groups of providers working together in a team. We have developed a communication skills training for working with our geriatric patients and families. We developed a communication training for LGBTQ health and mental health, and we have developed a very robust communication skills training for our nurses. So our overall conclusions: advanced cancer communication training can be integrated into an institution's regular practice. There were high levels of satisfaction with the course itself and significant pre- to post-training increases in self-efficacy. There were some significant increases in skills uptake in our standardized patient assessments, and there was some improvement in patient-reported outcomes. So I want to acknowledge the people that collaborated on this study, and thank you for your attention. Again, I apologize for rushing through, but we're actually over time, about three minutes over time. If there are any comments or maybe one question, we can try to take that; otherwise, I think we're going to have to finish up here, and I want to thank all my colleagues for their participation today.
Video Summary
The session on "Education, an Essential Component of CL Psychiatry," led by Phil Bialer, featured presentations on advancements in training CL psychiatrists and adjustments due to COVID-19. Scott Beach discussed the evolution of residency training in CL psychiatry, highlighting a survey from the ACLP Residency Education Subcommittee. Results indicated a trend toward increasing rotation time but also a shift of CL training to earlier in residency, often driven by competing outpatient requirements. Concerns were raised regarding training placement in PGY-2 compared to later years and how it affects residents' learning experiences and professional development. Sandy Rackley expanded on the pandemic's disruption across training programs, suggesting a need for flexible, individualized learning paths to address competencies affected by COVID-19 across undergraduate, graduate, and continuing education levels; her recommendations emphasized fostering continuous learning across transitions. Later presentations reported a randomized comparison of the MCPAP for Moms access program and the practice-level PRISM intervention for perinatal depression, which improved treatment rates and depression symptoms to a similar degree, and described Memorial Sloan Kettering's Comskil communication skills training program for oncology clinicians and its evaluation. The session concluded with a focus on how collaborations and revised curricula could address gaps and support the progression of education and practice in CL psychiatry.
Keywords
Education
CL Psychiatry
Phil Bialer
Training Advancements
COVID-19 Adjustments
Scott Beach
Residency Training
ACLP Survey
Rotation Time
Sandy Rackley
Flexible Learning
Continuous Learning
Curriculum Revisions