The Measurement Based Care Imperative: Knowing is ...
Video Transcription
All right, the whole team is here. It's super loud and awkward. Are we ready? I think we're ready. We ready? We're ready. Okay, I like the size of this because I think it's actually going to be good, but I have to tell you: you are in a workshop, it's the afternoon, and that doesn't mean you can just hide away, check your email, and sleep after you've had lunch. We're going to be doing some integrative, collaborative work. The purpose of meeting in person, and my name is Eric, we'll talk about this in a second, the purpose of being here in person is so that we can work together and learn from each other, and that's a very unique opportunity. If it's all remote or online or just asynchronous learning, it's not as rich, vibrant, and contextual as what we can accomplish here. So when we proposed this workshop, we were thinking critically that we really wanted to make it a useful tool for you to learn from each other, because there's probably a lot of wisdom in this room. That's what we have discovered in doing this workshop before, so I'll just say that.

My name is Eric Vanderlip. I'm here with an amazing team of folks who are very knowledgeable about this concept of measurement-based care. I wouldn't pretend to be the world's only expert on this; there are a lot of people who care deeply about the subject. But I do think it has utility, even though, if you went to our session earlier, you might have come out of it thinking it has none. I think it has a lot of utility, and that's what we want to get at. I am on the APA quality committee; Carol Alter is the chair of the committee, and Catherine is on the committee. Catherine and I both chair a work group on the committee that is putting out a document to help small group and solo practitioners with implementation of measurement-based care, because we feel, from the APA quality committee perspective, that's one of the most fruitful things we could be doing as a psychiatric workforce: implementing measurement-based care. But the uptake has been paltry. We feel we need to help with that, and that's why we're having these sessions at the annual meeting, to have genuine, earnest discussions about implementing measurement-based care in psychiatric practice. The work group membership also includes Andrew Carlo here, as well as Bashkim Kadriu and Cecilia Livesey, and several APA staff have supported us in producing the white paper that's coming out soon. We've been working it through all the APA groups and channels to get consensus on the content so it can be an official APA document, and we anticipate it will hopefully come out as a resource from the APA mid-year to later this year. Until it's out, we've been riffing off some of its content to generate these sessions. That's the work group, and that's the context for this presentation this afternoon.

So we should talk about disclosures. I'll go first since I have the microphone, and then we can go down the row if that's all right. I'm currently 0.6 FTE as an assistant professor at Oregon Health & Science University, and I do a small group cash-only private practice.
I have also co-founded an online mental health startup company that doesn't generate any revenue and is not patient-facing yet, so my main source of income is from OHSU and the private practice. That's myself. Dr. Alter? I'm Carol. Oh, and along with your disclosures, you might also briefly introduce yourself. I will do that. Excellent, that's wonderful. Thank you.

So, hi, I'm Carol Alter. I'm a professor and associate chair for clinical integration and operations at the University of Texas at Austin Dell Medical School Department of Psychiatry. As Eric said, I am the chair of the Council on Quality Care, and in fact it was this, plus Dr. Brendel's interest in this area, that led her to invite us to come and talk to you all about the work we're doing. It also supports a lot of what will be talked about over the next several days by the future of psychiatry work group. As for disclosures, I don't really have any. I'm 100% at Dell Medical School, and I am also a senior consultant for the Meadows Mental Health Policy Institute, which does policy work in many of these areas, but no, no disclosures.

I'm Andrew Carlo. I work 75% at the Meadows Mental Health Policy Institute, which is based in Dallas and Austin, Texas; I'm the VP for health system integration there. I also work 25% at Northwestern Medicine in Chicago, and I've done some consulting for Otsuka Pharmaceuticals.

And I'm Catherine Rideout, nice to see you all. Thanks for being here. I work for the Permanente Medical Group, actually up in Santa Rosa, so pretty close to home. I'm an assistant chief of our department and a quality lead for mental health, and I have no disclosures other than working for TPMG. Thanks.

And I should be clear, I don't have any financial interests in the topics we're going to be talking about. I don't own the PHQ-9 or any other measurement-based care products.

Okay, so some brief context before we get to work in the session. This comes out of the work group, like I was saying, which formed in late 2021. We're trying to provide supporting materials for the advancement of measurement-based care within the psychiatric profession, and we're focused on small group and solo practitioners because I think that's where a lot of the barriers to implementation, frankly, are. It's out in the weeds of psychiatric practice, and that represents anywhere from 30 to 50% of our workforce. Before we get started, we're going to go over a couple of brief things in case you haven't bought into the evidence base for measurement-based care; we'll review that briefly. Then I want to present a framework to consider when you're looking at which measures to implement. It can be an overwhelming jungle of potential measures out there, many under development and many already developed, but there are ways to think about which measures you might want to use, or to review the measures you're already using. We want to present some nuggets to think about, a framework to consider. And then we're going to break you up into small groups so we can work, and we'll help facilitate the discussions. At the end, I hope we can learn from you all as well: when we're done with the small groups, we'll ask you to talk about those experiences and what your findings were.
And actually, we are recording this session as part of the annual meeting on-demand workflows, and we're going to want to look at the outcomes of your small group discussions to help inform us as to what your real-world barriers to implementing measurement-based care have been.

Okay, so back in 2017, which was just after Donald Trump was elected, pre-pandemic times, we had this article. That's six years ago now. John Fortney, University of Washington, published this article calling measurement-based care at a tipping point, and I generally agree. There was more than enough data six years ago to suggest that measurement-based care consistently improved the quality of care that persons seeking psychiatric services got, as well as their outcomes. It led to faster, more efficient outcomes and better care overall. That was a no-brainer, and it seemed pretty cut and dried back in 2017. But here we are in 2023, still talking about it. Did I say too many 20s there? Too many 20s. But we're still talking about it, and uptake is still slow.

I think it's obvious, but we need to reiterate that we only recognize something like 20% of patients who are really clinically deteriorating, and the routine use of symptom-based measures can allow us to identify those who are deteriorating sooner and intervene before the deterioration fully comes to light. Clinical inertia, the failure to advance treatment when outcomes haven't been met, is probably one of the leading reasons people don't achieve their outcomes: we think they're better when they're not fully better yet. Measurement-based care, the routine use of symptom-based measures, can counter our clinical inertia, where we otherwise fail to make more aggressive treatment recommendations to patients when they deserve a chance to get better. And I'm going to keep going. There's some evidence that only 18% of us really use measurement-based care in our practice, and fewer psychologists, and 18% is exceptionally low. We use intuition and observation rather than true measures of symptoms to gauge clinical status. It's become easier to implement measurement-based care with technology these days: we have patient portals, and people can go online and complete PHQ-9s asynchronously, which gives us better capacity to gather data outside the clinical encounter and frees up time, and that's great, but we still don't fully use it. There are barriers to implementation with the technology. And value-based payment mechanisms and other efforts to improve quality are incenting the routine use of measurement-based care in practice. All of this should be helping us, but we still remain behind what we think should be the standard of care. And people who receive measurement-based care have improved and superior outcomes. I'm going to stop belaboring this point because we've already talked about it, and if you're unsure or equivocal about this, we can talk in a sidebar later on. From 2017, this is a summary of the article: these findings are robust and consistent across patient groups and provider types. Like, this stuff's real. It seems to have a real effect on patients and populations.

So there are characteristic traits of better measures and worse measures, and these are things to consider when you're selecting a measure.
Before we get into the exact framework, it's useful to know a little about the characteristics of measures that seem to be better. When we've done this workshop before, one of the rich discussion points we've had is that people like a particular measure, but it has limitations because they haven't considered, for example, that it's a clinician-reported measure rather than a patient-reported measure. So it's important to be aware of what the evidence is leaning towards in terms of better and worse measures.

Measures probably perform better in the ambulatory or outpatient space. I know that when I'm in psychiatric emergency services, which I do a couple of times a week, if there's a PHQ-9 at 11 p.m. and the person is in crisis, I don't find a lot of clinical utility in it at that point in time. So my point is that these measures are generally better employed in ambulatory settings. If a patient is rating the measure themselves, it's likely to be more accurate and have more validity than if it's a clinician- or staff-rated measure. The measure also needs to be frequent and repeatable: if it only asks about symptoms over the last six months but our treatment cycles move faster than that, it won't accurately reflect the person's status. And the measure needs to sit within the cycle of clinical decision-making: if I don't see the PHQ-9 score in a way that informs the next step in treatment, if I only look at it after the fact, after we've already made a treatment plan, then the measure isn't useful. It needs to be implemented within the clinical decision-making life cycle.

So there are some domains to consider, which come out of all this. The first is implementation, and particularly the cost of the measure. There are proprietary measures you have to pay for, and online measures you can only sign up for that charge you every time a patient completes the panel; there are other measures that are free and readily available, and that's something to consider when you're choosing which measure to implement. You want to think about ease of administration: is it computerized, or is it paper and pencil that you have to scan in? How are you going to collect the data from the person? Can it be implemented within your clinical decision-making workflow? How will you track the measure over time? That seems to be one of the most utilitarian purposes of measurement-based care, being able to see the trend in symptom burden over time. Measures that are brief are better: if it's a long, exhaustive, hour-long survey that patients have to fill out every month, they probably aren't going to fill it out, so you have to consider that when selecting a measure. And if patients can complete it on their own, a patient-reported outcome measure, it's probably better at this point in time than a clinician-reported outcome measure. A measure is also better the more reliable it is, consistent over time in reporting the actual clinical status. And then, you know, clinical validity.
There are a lot of measures out there that are just ad hoc, that have not been validated against a gold standard, that are sold on the marketplace, and that don't really seem to have a bearing on actual clinical decision-making or guide us in what's going on with the patient. They're just weird, gestalt measures that somebody has chosen to charge for. So it's important to think about who made up the measure, what gold standard it was referenced against, and whether it seems valid.

When you abstract all this, we can use the PHQ-9 as an example. The PHQ-9 is a depression symptom-based measure. It's free; it can be patient-completed and patient-reported; and it's administered on paper, online, or in many EMRs now, so that's nice. It asks about the last two weeks, so it can be repeated as often as every two weeks. It has demonstrated intra-rater and inter-rater reliability, and it's been shown to be a valid measure of depressive symptoms, validated across multiple populations and translated into a number of different languages. It's got a lot of nice features, and that's part of the reason it's so widely implemented.
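To make the PHQ-9 example concrete, here is a minimal sketch of scoring and trend tracking in Python, assuming the standard nine items scored 0 to 3 and the conventional severity bands; the function names and example data are illustrative, not tied to any particular EMR.

```python
# Minimal sketch: PHQ-9 scoring, severity banding, and trend tracking.
# Nine items scored 0-3 sum to a 0-27 total; bands follow the usual cutoffs.
from datetime import date

SEVERITY_BANDS = [
    (0, 4, "minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def phq9_total(item_scores):
    """Sum the nine PHQ-9 items (each scored 0-3) into a 0-27 total."""
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PHQ-9 requires nine items scored 0-3")
    return sum(item_scores)

def severity(total):
    """Map a PHQ-9 total to its conventional severity band."""
    for low, high, label in SEVERITY_BANDS:
        if low <= total <= high:
            return label
    raise ValueError("total out of range")

# The trend over time is where the panel says the measure earns its keep:
# a simple (date, total) series that a chart or EMR flowsheet can plot.
history = [
    (date(2023, 1, 10), phq9_total([2, 2, 1, 2, 1, 2, 1, 2, 1])),  # 14 -> moderate
    (date(2023, 1, 24), phq9_total([1, 2, 1, 1, 1, 1, 1, 1, 0])),  # 9 -> mild
]
for visit_date, total in history:
    print(visit_date, total, severity(total))
```

A series like `history` is exactly what a flowsheet or graph would plot to show the symptom trend over time that the framework emphasizes.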
Okay, so we've talked about the evidence base for measurement-based care and a brief framework for measure selection. Now I think we're at agenda item three, which is to say it's time to break up into small groups. I know it's the afternoon, and you may be thinking, oh my gosh, what did I get into? But it's time to meet your stranger. You're here at the APA meeting so that you can meet other people, connect, and network; that's the whole purpose of this meeting, or at least that's probably what you told yourself when you registered. So, I see about 21 of you. I'm thinking four groups total, and each of the four of us will be the facilitator for one of the groups. You'll be here for the next half hour. So why don't we just call out numbers, one, two, three, four? Does that work? We're going to really force it to be mixed up, so you're going to have to meet with someone you don't know, probably. So why don't we start over here? Just remember your number. [The presenters count the audience off into four groups.] Okay, perfect. So why don't we do the ones in that corner, twos in this corner, threes in that corner, fours in that corner. And please designate someone right away as the spokesperson for your group, who's going to tell the rest of the room what you talked about. As you're moving to your corners, the agenda for your work group is up here on the board. Once you get to your group, select a measure, or if you've already implemented measures, share what measure you use in your clinic setting. Talk a bit about yourself, the clinic setting, and the patient population you work with. Talk about any measures you use, or if you're going to select a measure, how you might use the framework to think through which one. And then, and I think the real fruit from this is sharing with each other: what is the challenge to doing this measure consistently in your practice? What do you find useful? How did you overcome that challenge? Or what challenges do you imagine having with that measure in your practice? And then share amongst the group your obstacles to implementing those measures.

Yeah, and we'll do 25 minutes, gang. We'll say 2:15 will be the time we break out of the small groups. Okay, so please use this time richly. You can talk amongst yourselves. And here's a list of possible measures; I can flip back to this any time if you don't know some measures you could think about. A whole gaggle of them. Okay, good luck.

If you'd like to hang around for the session, we're totally open to any questions and happy to have you. We're almost done with the small group breakout, and then people will be organically sharing the fruits of their labors. Make sure you've designated your spokesperson to come and speak at the microphone about your group's experiences.

All right, well, thank you all very much. I can say for our discussion, we had a very rich one, so I'm looking forward to hearing from each group. I was thinking we could just move around the room in a circle. In case you don't remember your group number, you probably remember where you were sitting. So the group that was sitting right here at the front right: who's the spokesperson? All right, sir, do you want to come on down? [Crosstalk among the panel about whether spokespersons should come up to the podium.] For people whose life goal is to be up on the podium, come on up. We are the panel, so we know best, so we get to be up top.

Just give me 30 seconds to put my thoughts together. Go ahead. So we were kind of an eclectic group, which is a good thing. We had folks in a community mental health center, one in private practice with a group practice that has had some transitions, and Kaiser and VA representatives with access to large databases. We talked first about past, recent, and potential future uses of measurement-based care. We took the different tools we talked about, like the PHQ-9, and dissected the advantages and disadvantages of a symptom-based measure versus one that's more dimensional, like the BAM and its different versions, which give more dimensional data. Then we talked about how that affects clinical decision-making in terms of predictive ability and giving meaningful information that changes our practice. So we talked about the pros and cons of different measures and how we might implement them, and we were starting to talk about future directions in terms of meaningfulness, and ran out of time; we were about two-thirds of the way, I think, to where we could have come up with some specific recommendations. Thank you.

And so, spokesperson for the back corner, is that you, sir? All right, thank you, come on down. Hi there, I'm Robin, I'm from the University of Washington. We had quite a varied group: a couple of child psychiatrists, one from Melbourne, someone from a ketamine clinic in Brooklyn, and a couple of medical directors running multi-state organizations. So it was an interesting discussion. I think the common themes that came up were that the PHQ-9 and GAD-7 were pretty universal, along with some more specific measures. One barrier that came up was finding measures that were free and didn't cost anything. Also, how to implement them with electronic health records.
Winning the hearts and minds of the clinicians was another challenge; in some organizations with all of this infrastructure, actually getting people to do it was hard, and getting patients to complete the measures as well. One solution that came up was what we do up in Seattle: patients have to complete their PHQ-9 to get into the telehealth appointment; it flags when they're logging in. Or at Seattle Children's Hospital, they get a tablet when they come into the clinic and have some time in the waiting room to do it. Then we see those results and share them with the patient in the appointment. And I think that was a theme that came up as well: how to share that data with the patient so they see the meaning behind all these questionnaires they fill out, rather than it being ignored during the appointment and the data just being used somewhere to report quality measures. So somehow making it therapeutically relevant to the patient might be a good way to go. Did I miss anything out there? Just about everything.

Robin, I just have one question. Have you seen a gap in data integrity? Let's say, after the pandemic, I might hypothesize that more people want to do in-person visits, and if they come in person, they might completely skip the login for their telemedicine appointment and not complete the forms. Have you found that to be the case?

I have, actually. We do give tablets out to families, you know, I'm a child psychiatrist, when they're in the waiting room. I know that's costly. In my practice, if I don't have the outcome measure, I will spend time at the beginning of the appointment to fill in the questionnaire with them. That somehow teaches them that I find it important enough to use up their time filling in this questionnaire, and that's been effective. But then that eats into your clinical time. It does, yeah, I will acknowledge that. You'll say, bad you, you didn't fill out this form, now we have to spend all this extra time we could have spent on something else? Not quite as passive-aggressively as that. But I'll say it's important, that we need this kind of objective measure alongside our conversation, that it's important to me. Yeah.

And has the gap in data integrity led to issues with how you look at the measures or how you report on outcomes? I don't know. I suspect it depends on the clinician. I'm probably in the group of people sold on the importance of this, so I make it important, and I'm pretty sure not everybody does that. You know, it'd be an interesting project. Okay. Thanks for letting me ask you those extra questions.

Hi, my name is Sunil Khushlani. I'm from New Jersey. In our group also, we had a very diverse group of people: some in private practice, some in academia, some working for insurance companies, some designing their own measurement-based tools. The most common tools people have used are the PHQ-9 and GAD-7. The limitations they felt these tools have were, one, that they don't cover all the symptom presentations. The question was, are they really sufficient? Are they really meaningful? They don't work on fast-paced units. We are not really measuring people's quality of life or disability. A question also came up about liability: if a patient fills out something and you haven't looked at it, and you discover it a day later, are you liable for it?
Another important discussion thread was figuring out the process by which these measures are collected and reported and get into your chart, and making sure that process flow is designed rather than left to chance, because it determines the burdens placed on the clinic or on the patient. If you don't really design the workflow for how you're integrating measurement collection, reporting, and graphing into your work, it can render the whole thing useless. If you're able to graph the results and show the graphs to patients, or use them to make decisions, then patients can understand the value of it. So people talked about, for instance, mood charting. Mood charting made it possible not only to graph things but also to see how the graph changed vis-a-vis med changes, sleep hours, psychotic symptoms, and daily events, so people could see the interrelationship of all these things with regard to what the curves are doing. So that was the summary of our discussion. Thank you very much. Yeah, perfect, thank you.

And then the group that was over here, please. Thank you. Hi, I'm Paul. I'm from Sensible Care, a group private practice down in Orange County. Just like a lot of the other groups, we were a pretty mixed group, with three clinicians and a couple of people involved in research and digital technology, so they were definitely very interested in hearing our clinical perspective and how they could incorporate that technology into clinical care. Similar to the other groups, the PHQ-9 was probably the most commonly utilized measure. One of our providers is a child and adolescent psychiatrist, so she uses a lot of the child-based measures as well as the adult ones. And one of our providers is actually from Canada, so his perspective was very interesting: he mentioned that they didn't really have a lot of barriers related to administering the measures; it was more with monitoring and reacting to the scores. So, like, you mentioned that the PHQ-9 is something you want to try to monitor every two weeks, but they're typically following up with their patients every four to six weeks, and he was wondering how that may affect the timeliness of it.

In my practice, we have the fortune of having a software development team, and it's been very helpful because we have implemented the PHQ-9 and GAD-7 into our intake, so every single intake includes those questionnaires. We do want to implement that at every follow-up as well, and our plan is to make it so that with the appointment reminders, patients get a link to the PHQ-9 and GAD-7 and have to complete them before the visit. Now, it was interesting that you brought up that question: do you force it and make it a requirement, or do you just leave it as an option? That's exactly what we've been debating with the software team. If you make it a requirement, you might end up with patients getting frustrated because they don't get the follow-up care they really need. On the other side, if you don't, then you sometimes don't get the data, especially when the clinic gets really busy; it's really hard to assess it on a regular basis. So those are the issues we discussed. Wait, don't go. I have a question for you. If, oh, never mind.
No, I thought I had it. Never mind. No, actually, I do have a question, and it's for the broader group. Does anybody have an answer as to whether forcing the completion of the questionnaires is useful or bad? Yes. Oh, please come to a microphone. We'd very much like to record this. I'm wondering if anybody has lived experience doing it one way versus another, and what works best.

Yeah, so we have tried that in our practice; that's what I was thinking about when it was mentioned. In general, we have problems with no-shows, and when we tried that where I worked before, it actually increased the number of no-shows. It increased no-shows by forcing people to complete the questionnaires? Yes. Okay, interesting. So it was interesting for me that a couple of people mentioned that. I work in Canada, so I don't know if it's different in the States, maybe.

Okay, and just exploring that further, do you think it's because people were put off by the experience and didn't want to complete it? What do you think was underlying the higher no-show rate when you were forcing it? So, first, where I worked, it was sent to them by email. They had to complete it before they came, and then we had an iPad there as well, so they had to complete it to be able to see the psychiatrist. The billing system is different there, right? It's covered by the government. So we gathered the data, and suddenly the number of no-shows increased, because they really didn't want to spend the time to do it; they found it useless. In the group, I mentioned that some of the physicians were maybe not discussing the results with the patient. What patients found more helpful was spending time with their psychiatrist or clinician talking about how their week was and how their symptoms are, rather than going through the questionnaire. They didn't find the questionnaire was helping.

And did the patients see the results of their outcomes? They did. When it was on paper, it was only the score, but after they had the iPad, there was a chart there. It was charted and they could see it, but that still resulted in them not wanting to complete it. Yeah, so they were seeing that the scores were changing, like, oh, you were worse, now you're better. And even then, when you sent it to them to complete, they still didn't want to show up? Yes, exactly.

What kind of care practice are you in? I'm just curious: is it community mental health, an integrated system, or academic? I work in an academic center, a hospital. And would you say most of your patients are high-functioning, expecting psychodynamic psychotherapy, or is it a mixed bag? I worked at the University of Toronto's Centre for Addiction and Mental Health. Got it, thank you so much. That's really great information, thank you. There you go, some insights from others. And do you have other insights on this topic?

I don't want to totally go off on a branch here, but I think it's interesting. Any time we create scales, we have to think of the whole population, not just the ones who are high-functioning or low-functioning; we need to think of everybody, because we'll be giving these scales to everybody. The other thing we need to be mindful of is trauma-informed care. And it's not just about collecting data.
We need to put that hat on, but we also need to put a human hat on and be mindful that we are collecting this from individuals who have struggled with mental illness and may have additional challenges. So we may have to have several workflows. Some patients may be able to fill it out when you send them the appointment reminder or give them a tablet, but for a small group of individuals we may need human interaction, where your nurse or the clinic assistant who's doing their vitals, checking blood pressure, or checking them in just says, hey, I'm going to ask you some questions so that I can get you ready for the doctor, and then you can talk to them some more, and just have that interaction. I think we have to keep our minds open and have several options, not get caught in one way of collecting the data. Thank you. Okay, thank you. You favor a flexible approach.

So one of the most useful definitions of value I have found is benefits received versus burdens endured. If you make it a burden, the ratio of benefits to burdens is going to make it not valuable for the patient, and I think ultimately you have to make them feel that this is a valuable process. It is not very different from stages of change applied to this process: some patients are in pre-contemplation or contemplation about this methodology, and you really have to have different interventions for somebody in the action stage of change versus somebody in the pre-contemplation stage. So I think it's very important to apply that thinking even to this whole process. You know, one of the most useful things about this I have learned from lean methodology, from a Japanese pioneer. He was talking about how to make redesigning work work for the people who do the work, and one of the very useful phrases is: make things easier, better, faster, cheaper, but only in that order. It takes a little time to think about, but if you make things harder for the person doing the work, they're not going to support it. If you've added five more minutes to somebody's day with every patient, they're not going to support it. So you really have to figure out how to make it easier on the person doing this work, and similarly for the patient. I love it. Okay, thank you. Great comments.

Robin's stepping to the microphone; he must have some thoughts. No, I just wanted to build on this thought of it being a barrier for some patients to accessing care. My response would be that, at the beginning, you explain the rationale, even the data, that we know you're likely to get better quicker if we add this measurement to your appointments, but then also have the ability to say, well, why can't we do it every third or every fourth appointment, so that we get the benefit of both worlds but also meet them halfway. I just wonder if that would be a solution to that problem.

Has anybody else not had that experience, where forcing the data collection didn't result in worsening no-shows? Has anybody found that it didn't have an impact on show rates? Because I think that's a major question a lot of people have: should we force it? Should we not? How do we do it? So it's very interesting. Other observations?
I wanted to ask the group about, and we talked about this briefly in our group, billing for measurement-based care, or some type of incentive payments for it. Does anybody have examples of where that's gone well or not gone well? Anything along those lines? Is anybody billing actual codes for administering scales in measurement-based care? No? Oh, it does happen, okay. Can you, wait, Mimi, come to the microphone.

I did that consistently as an experiment. I got paid for very few of them; I got paid $4 for some of them, and it costs about three bucks to actually bill for them. I think we absolutely have to do that. If we said to radiologists, oh, you know, we're not going to pay for your radiology, then we wouldn't get any radiology. So I think we need to push for that. My understanding is that primary care physicians get paid at a higher rate for doing this, and I think that is a parity consideration.

And Mimi, you guys had thought about billing? So we have thought about it. The reimbursement rates for these codes, I mean, you have to do high volume to make a big difference in reimbursement. One of the disincentives is that for patients who have higher-deductible plans, it ends up adding to their bill, so there's a disincentive: you risk pushing patients away by increasing their bill. Oh yeah, interesting. So there might be perverse or unforeseen consequences, where people have to bear more of the burden themselves if we do this consistently and then want to get reimbursed for it. Did the patients not find any utility when they had to pay for it out of pocket? I'm assuming not. Yeah. Okay, just for the recording: the anticipated reaction was that patients might be surprised or shocked by the surprise billing that would come through on these codes, a where-is-this-charge-coming-from kind of thing.

Mike Doss, VA Boston and Boston Medical Center. I think we need to frame this in terms of what system you're working in. If you're a Kaiser or a VA, a national VA type of thing, it's a whole different question from a private practice, small group practice, or community mental health center. So I think we need to think about how this is useful within the care delivery system itself, and if we don't do that, we're really missing the big picture.

Wait, before you leave, you mean, like, if you're in Kaiser, it's a capitated model, and billing that fee-for-service code for reviewing the PHQ-9 doesn't seem to make as much sense? Is that what you're saying? You know, I think you'd have to bring in a healthcare economist and have a debate. It's not a simple yes or no; it depends, it's always, what is the situation? And with the national VA, yeah, finances are a part of it, but we've got a two-year budget, so we don't have the same worries. In Massachusetts, for Medicaid, for example, we're going from fee for service to a capitated model, so the whole rationale, if you're thinking just of the financial model, is different: if you're getting better outcomes, all of a sudden it makes a whole lot more sense to use these measures than it did under fee for service. So I think we need to think about this, and we probably need more healthcare economics people chiming in on this conversation to make sense of it, because there aren't enough people in here to really see all parts of the elephant.
But I think we really need to look hard at what kinds of practices, populations, and systems we're working in. Were you in our session earlier today? Oh, we had a healthcare economist talking about measurement-based care incentives. But you're right, I think financial incentives do play a role in the implementation of measurement-based care, both the incentives for individual providers and for systems. Other comments, questions? Catherine? Andrew? Oh, yes, please, go ahead.

We have spoken a lot about measurement tools, how to get people to do them, whether you get paid for it or not. I think a lot of clinicians would probably also need help with how to interpret the variation of these scores over time. You know, in statistical process control, we talk about common cause variation and special cause variation, and whether you should even intervene in a system when you have simple up-and-down variation. I mean, you could have a score that fluctuates between four and eight for years, and every time it goes to eight you jump and do something, and it goes down to four and you feel happy about it. But it could just be normal variation that you need to ignore while talking about other things in the meantime. So I think a lot of us clinicians may also need help on how to actually interpret these results and their variation over time.

Yeah, that's a great point. With the PHQ-9, there has been a little bit of research on that, looking at different ways you can interpret changes in the scores over time and correlate those with other outcomes like quality of life, subjective distress, and other measures. And essentially, whatever metric you use, you basically get the same data: however you measure progress, if you see progress on that measure, you will probably see progress in quality of life and on other measures. But I agree, it's very important for us to have those delineated so it's clear for clinicians. Even if it doesn't matter too much which metric you use, the fact that there is a metric that helps people interpret the score can be really helpful.

Yeah, just to add on to some of those comments. At our clinic, we have TMS treatment, and we just recently started Spravato as well. What we're seeing from the payer side is that they're requiring us to include PHQ-9 scores, especially for TMS after patients have completed treatment. They want to see at least a 50% improvement in the PHQ-9. It doesn't have to be the PHQ-9 specifically, but that's pretty much the industry standard; most of them are asking for that. Some of them are actually asking for, and pausing there, what happens if you don't get it? I haven't tried that, because we want to get them through the treatment, but I presume they would probably ask for it if it wasn't included. Well, actually, that's more on the follow-up side. We haven't had too many of the re-treatments yet, but at least on the front end they're not really asking for that too much; some of them do ask for the PHQ-9 up front. And then with Spravato too, which we just started, they are asking for some type of measurement-based scoring as well. So it is something where, and we talked about this a little, it's kind of the carrot or the stick. You know, for us, that is like the stick at times.
But, you know, I think what was alluded to before in terms of the education piece is very important: you've got to educate the providers, but you also have to educate the patients, and if you don't have those two lined up, it's going to be very difficult to get buy-in from both sides. And so, I mean, even for us, one of the child and adolescent psychiatrists, Alice, was saying that they had done a survey asking what providers would think if the PHQ-9 or other measurement scores were added in, and they got 50-50 responses; some wanted it, some didn't. For us, we didn't really ask ahead of time; we just instituted it and said, hey, we're doing this. But I think it was easier because we didn't make it something providers had to do during their visits; it was already done, because the software presented it as something the patients had to complete ahead of the visit with their intake forms. So I think that helps. And on the education side for the patients, like what was mentioned: if we educate them and say, hey, if you do this, all the research shows you're going to get benefit, and we can see how your scores are tracking over time, that's a very important piece that can really help the patients feel, okay, there's a really important reason for this. Yes, there's the holistic part of being there as a psychiatrist, caring for them and listening to them, but I think we can marry those two together to provide the optimal care. I think that's going to lead to the best outcomes.

And you're not done yet. But I'm still confused: if you don't get the 50% improvement, do you have a sense of what the insurance company would then do? Yeah, so for us, we're not actually applying for those re-treatments if we don't have that 50%, because we're afraid they're not going to approve it. Yeah, they likely won't. A lot of them require that 50% reduction in scores in order to qualify. To authorize ongoing treatment. I mean, we could try, and that's a good point, we could probably push for it, but in general they're probably going to say no.

Great, and the question then is, what kind of insurance is it that you're contracting with for that? Well, the one we see the most is Anthem, but we have some others. We haven't had too many TMS treatments so far with a lot of the other insurance carriers, but Anthem is probably the biggest one, and they do require that. Are they requiring the PHQ-9 specifically? Because this is the first I've heard of this. They don't necessarily require that one per se. Can you say a little bit about how the data are communicated to the payer? Is that cumbersome, or is it a smooth process? No, I mean, we keep track of their PHQ-9 scores on a weekly basis anyway, so it's really easy to see and track over time, and we're able to see what their peak was and where their lowest score was as well. But then how do you get it to the payer? How do they know what happened? Oh, they just take our word for it. Obviously, they don't follow up on that, but if they wanted to, they could audit us, and we have the proof that this was the score.
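For concreteness, the 50% response criterion these payers reportedly apply is simple arithmetic against the pre-treatment baseline. A minimal sketch follows, with the threshold, function names, and example scores as illustrative assumptions rather than any payer's actual rule:

```python
# Minimal sketch: percent reduction from a pre-treatment baseline score,
# and a response check against a configurable threshold (50% here).

def percent_reduction(baseline, current):
    """Percent improvement from baseline; positive means the score dropped."""
    if baseline <= 0:
        raise ValueError("baseline must be positive to compute a reduction")
    return 100.0 * (baseline - current) / baseline

def meets_response_threshold(baseline, current, threshold=50.0):
    """True if the score fell by at least `threshold` percent from baseline."""
    return percent_reduction(baseline, current) >= threshold

# A baseline PHQ-9 of 22 falling to 10 is a ~54.5% reduction: meets the bar.
print(round(percent_reduction(22, 10), 1))   # 54.5
print(meets_response_threshold(22, 10))      # True
print(meets_response_threshold(22, 12))      # False (~45.5%)
```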
So I mean, we use NeuroStar, and they have this TrakStar system, their software for tracking, which really easily gathers that data from the patient; emails get sent out to them with a link, so we definitely rely on their system. We want to incorporate that more into our own clinic system in the future, but for now, they do a good job of keeping track of it. They take your word in terms of whether you've reached that 50% threshold to authorize another round? Exactly. You just say, this patient achieved this metric, yes or no, and then they authorize another round. Is that how it works? Exactly. Okay, that's interesting.

The comment over here is that if public payers like Medicaid or Medicare did this, it might entice more commercial insurers to take up a similar practice. We see that with the TMS authorizations too: in the past you had to have six different medication trials before you could get approved; now it's down to two. It's always like one insurance knocks over the domino and then the others kind of follow suit. And I know the APA guidelines say you really only need one medication failure before you can actually try TMS. I think we'll get there eventually, because, you know, for them it's all about the money. So in general, I think that's the cool thing: we see the industry change with that.

Just following up with one more question on this. Are there times when patients have not had significant or appreciable changes on the score, but clinically you feel continued treatment is warranted, and you're at odds because the score doesn't accurately reflect that? That's a great point. Yeah. Actually, I had gone to a training at NeuroStar, and they talked about how we should try to push for extended treatment beyond the 36 treatments that payers allow. The research shows that as well: with TMS, if you can get patients to 40 or 50 treatments, across the board they do much better. And even those already getting 50% improvement have an even better chance of achieving remission, and a longer remission, if they get those additional treatments. But it's something where we just have to rely on them approving it first.

Has the fear of not getting approved for more treatments disincentivized any of your clinical staff from wanting to do the measurement? No, no, nothing like that. It's just something new that we're trying to implement more with our patients. Or to fabricate the measurement results, or have the patient fabricate the measurement results? No, it's all reliant on the patients; the PHQ-9 is pretty much what we go by. Okay. Oh, that's so interesting. Thank you for sharing.

One of the questions for the group: is anybody using computerized adaptive testing instead of static measures? Okay. Yeah. So I have to say, computerized adaptive testing is kind of, I like to think of it as, the next generation of measurement-based care, where instead of everybody getting the same scale every time with the same items, the computer presents an item, and then subsequent items presented to the patient are tailored based on their responses. So essentially what you get is an algorithm that learns what the most critical question is for that patient, and it usually presents fewer items because it's essentially tailoring the measure specifically to them, which saves time.
So I tend to think it increases engagement, because the patient is getting something specific to them, and they're not asked questions they've said no to every time, like on the PHQ-9. So I don't know if anybody's used it. These don't seem to be used widely, but there are the PROMIS measures, which have a computerized adaptive testing version, and there's another one called the CAT-MH series that has computerized adaptive testing measures. One of the biggest challenges is that you typically have to pay to use them. It would be awesome if somebody here were using them; that's why I was hoping somebody was. But yeah. Kaiser? Yeah. Oh, please, please.

We use Lucet, which was previously Tridiuum. It's super long at baseline, and then at every follow-up, if they were negative the first time, they might give something like the PHQ-2, and if that's negative they skip the rest of the questions; and if they were negative on the PCL-5, they don't give that at follow-up, things like that. Kaiser paid for it; the Foundation paid for it. It's expensive. Do you think it increased engagement, from the patient side or the clinician side? No? Well, it was hard, because it was a different implementation, right? There was a really well-developed workflow around paper AOQs and inputting them into the EHR, and then Lucet, or Tridiuum, came along, and that was electronic, so we had iPads, and that went really well until COVID hit. Once everything changed to telepsychiatry, it relied on patients logging into a separate system, reading an email before the visit, and clicking on the link, which then didn't really happen. The other problem, and I think our small group actually touched on this, was that in the EHR you have to click on something and it takes you to another web page; it's not integrated into your note, and those extra steps decreased staff engagement. We do have patients who do it and like it a lot better.

And the comment being that there's an opportunity to extend clinical decision-making and have more algorithmic care based on prior responses to flesh out more data, and then, more in real time, prompt the clinician and notify them of risky results through the adaptive testing; and that the VA is exploring something like this, and the VA and Kaiser should talk. Sure, okay. Yeah, we'll talk. Yeah, I'm interested.
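A minimal sketch of the skip-logic style of adaptive administration the Kaiser participant describes (screen with the PHQ-2 and expand to the full PHQ-9 only on a positive screen) appears below. It is illustrative only, not Lucet's or any vendor's actual algorithm; the cutoff, item wording, and function names are assumptions, and true computerized adaptive testing selects items with psychometric models rather than fixed branching.

```python
# Minimal sketch: PHQ-2 screen that expands to the full PHQ-9 only when
# the screen is positive, mirroring the skip logic described above.

PHQ2_POSITIVE_CUTOFF = 3  # a commonly used PHQ-2 screening cutoff

def administer_phq(ask):
    """Run the PHQ-2 first; expand to the remaining PHQ-9 items if positive.

    `ask` is any callable that takes an item prompt and returns a 0-3 score,
    e.g. a console prompt or a web-form handler.
    """
    phq2_items = ["little interest or pleasure", "feeling down or depressed"]
    remaining_items = [
        "trouble sleeping", "low energy", "appetite changes",
        "feeling bad about yourself", "trouble concentrating",
        "moving slowly or restlessness", "thoughts of self-harm",
    ]
    scores = [ask(item) for item in phq2_items]
    phq2 = sum(scores)
    if phq2 < PHQ2_POSITIVE_CUTOFF:
        # Negative screen: skip the other seven items entirely.
        return {"phq2": phq2, "phq9": None, "skipped": True}
    scores += [ask(item) for item in remaining_items]
    return {"phq2": phq2, "phq9": sum(scores), "skipped": False}

# Example with canned responses standing in for a patient-facing form:
canned = iter([0, 1])  # negative PHQ-2 screen, so the rest is skipped
print(administer_phq(lambda item: next(canned)))
```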
I think this is a good point to take a second to briefly summarize some of the learnings from the discussion. We're ending at 3, so that will give us a few extra minutes in case we want to have some sideline discussions. I really appreciate everybody coming. I've heard a couple of things come through this afternoon. The first is that when we're able to make the patient part of the experience and feed the data back to them, it helps with engagement; it's a much more useful process when the patient is brought into the measures as well. That can be very powerful and very helpful, and having the patient and the clinician both engaged in looking at the measure and the outcomes and tracking them over time is really where measurement-based care sings. Attaching the measure to some kind of forced action by the patient can be helpful in terms of increasing data integrity and actually getting the measure done, but it has risks, including disengaging and turning off patients, that need to be considered.

And when you're thinking through the measure, considering computerized adaptive testing or fancier algorithms rather than just a block measure, something more nuanced to the patient, could maybe help with some of that engagement. I hear a lot of clinician pushback and liability concerns. If the measure is not actionable, if it's not within the clinician's decision-making framework and time frame, it can be really disincentivizing, and a lot of clinicians are already overburdened by documentation, so it can be really difficult to ask more of them when they're not bought into it in the first place. And then, oh yeah, if patients aren't seeing that the clinician is actually using the data, they're also going to be disincentivized to complete it: if you're collecting it and it's going into a dead-end street, nobody's reviewing it, and it's not meaningful or actionable, even if the patient sees it, they're going to be disincentivized to do it. And finally, easing the burden on patients, their total cognitive and time load to complete the measures, can be helpful for implementation.

Those are some takeaways. Were there any other major takeaways that you captured? I think we did a great job. Thank you all for a very rich discussion; it was a really nice experience. Thank you all. It's so nice to have you collaborate; I mean, we're here, so it's nice to be able to do that and talk to each other. Good job, you've come all this way. Thank you for your participation in the workshop. Go enjoy the rest of the sessions. Thank you.
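As a coda to the statistical process control comment raised in the discussion, here is a minimal sketch of an individuals (XmR) control chart over serial scores, using the standard 2.66 multiplier on the mean moving range. The data are invented to mirror the "fluctuates between four and eight" example, and the function name and the omission of run rules beyond the basic limits are simplifying assumptions.

```python
# Minimal sketch: XmR natural process limits over serial scores, to
# separate common-cause fluctuation from special-cause signals.

def xmr_limits(scores):
    """Center line and upper/lower natural process limits for an XmR chart."""
    mean = sum(scores) / len(scores)
    moving_ranges = [abs(a - b) for a, b in zip(scores, scores[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    upper = mean + 2.66 * mr_bar
    lower = max(mean - 2.66 * mr_bar, 0.0)  # scores cannot go below zero
    return mean, upper, lower

scores = [6, 4, 8, 5, 7, 4, 8, 6, 19]       # mostly 4-8, one late spike
center, upper, lower = xmr_limits(scores[:-1])  # limits from the stable run
for s in scores:
    flag = "special-cause" if not (lower <= s <= upper) else "common-cause"
    print(s, flag)
# The 4-8 bounce stays inside the limits (common cause: watch, don't react);
# the jump to 19 falls outside them (special cause: worth intervening on).
```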
Video Summary
The workshop, led by Eric Vanderlip and others, aimed to explore the implementation of measurement-based care in psychiatric practices. The session emphasized the importance of in-person collaboration to maximize learning and sharing of wisdom among attendees. Participants, divided into small groups, discussed various aspects such as the selection of measures like the PHQ-9 and GAD-7, barriers to implementation, and the importance of integrating these measures meaningfully into clinical practice. Common challenges identified included the cost of measures, integration with electronic health records, and engaging patients and clinicians in the process.

Several strategies were suggested to improve implementation, such as using technology to administer measures before appointments and ensuring results are discussed with patients for context and engagement. Concerns were raised about potential increases in no-shows when measure completion is mandatory, and the need to consider patient-specific workflows was highlighted. The discussion also touched on financial aspects, such as billing for measurement-based care, and the use of computerized adaptive testing to enhance patient specificity and engagement. Overall, the workshop encouraged open dialogue about the practical challenges and potential solutions in implementing measurement-based care in psychiatric settings.
Keywords
measurement-based care
psychiatric practices
PHQ-9
GAD-7
electronic health records
patient engagement
clinical practice
computerized adaptive testing
billing