Is Measurement Based Care the Future of Psychiatri ...
Video Transcription
Well, good morning everybody. I think I'm going to get us started because this is actually one of the virtual sessions and so there are people at home, we think, who are watching this. So, I'm Carol Alter. It's really great to have all of you here and all of you at home here as well. I am the chair of the, maybe if I can get this to work, let's see, great, great. I am the chair of the APA Council on Clinical Quality. What am I the chair of? On Quality Care, CQC. Boy, good morning everybody. Anyway, and so we're really excited to have you here today. We have been working on this topic of measurement-based care as a council and as the APA broadly for quite some time. And I think this is really hopefully going to be a fun session where we can begin to really think about what that means and how it impacts practice and what is good or not good about it. And so that's what we're going to do today. I am a professor and the Associate Chair for Clinical Integration and Operations in the Department of Psychiatry at the University of Texas, Austin, Dell Medical School. And we have, I'll introduce folks as we move through, but what I wanted, and I'll talk about the format that we're doing and how the folks who are watching this virtually can interact with us. So before I introduce our esteemed panel, I just wanted to kind of give you some background on why we're doing this and why I think it's important. If I can make it work. Where is, how do I make it work? Okay, great. I'm a Mac person, so these are confusing. Okay, they're back. Okay. So for me, the reason that measurement-based care is important is because I really think about value. And I think about value in terms of outcomes over cost. And when you think about, I mean quality over cost. And when you think about quality, you're really thinking about outcomes. And so when we think about outcomes, we're thinking about improvement in symptoms, functioning, well-being, and improving health in general, right? But they start, when we think about the end, it really starts at the beginning. And the beginning for us includes things like an assessment and a diagnosis, and then symptom monitoring and being able to look at our treatments and how they work. But how can we really know what outcomes we have without measuring? And I think this is really at the core of what we do. Now, we all measure things by our clinical interactions, right? But I think the reality is, is that may not be necessarily the best way to measure. And there's clearly lots of other ways that we measure things. And one would imagine that perhaps if we measured with objective validated strategies, one could question whether that measurement was going to be better than what we do with all of our clinical skills. So what measurement-based care is, is it's the use of repeated validated measures to track symptoms and outcomes in the clinical setting. And it should be used, it should be and can be used to help us drive treatment. And hopefully to measure whether or not we are getting to the desired outcomes, whether we're there for providing quality care, and which then can lead us to really think, is there value in what we're doing? So what we do know about measurement-based care is that in fact, it significantly improves outcomes of care. And I'm not going to go into a lot of detail on this. There have been, there will be several, there are several sessions over the course of the meeting, which get into more detail about this. 
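[Editor's illustration, not part of the talk: the framing Carol describes is the standard value-based care formulation, which can be written compactly as]

```latex
\text{Value} = \frac{\text{Outcomes (quality)}}{\text{Cost}}
```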
But just to summarize, there was a really kind of important landmark paper called The Tipping Point, which was published in 2017, which went through some of that detail. And it in fact, looked at a number of published research studies and included things like clinical trial data and registrational data that use measurement-based care. And what they found was that when you included, when you did any kind of repeated validated measures, that patients got better, regardless of what you did from a treatment or an intervention perspective. And that there were 75% higher remission rates versus usual care. And so the question then is, and this is something that we've been really working on, about why not use measurement-based care? Because in reality, probably 20% of people routinely use measurement-based care in their practices. Well, why not? I mean, I think in general, it's because people find it to be too difficult, that it may require tools and technology, that it may change practice workflows, which then becomes difficult, time-consuming, et cetera, and that there's not reimbursement. I think the other thing that people have raised is that it really, that using rating scales changes that physician-patient relationship. And therefore, it may be something that we don't want to use because we find that our, again, our special powers, our clinical acumen may be, in fact, be influenced in a way that doesn't make sense. And I think the other piece that's clear is that people feel that rating scales are not accurate enough. And so the Council on Quality Care, essentially we've been working on this issue. And in fact, Dr. Vanderlip, who's going to be one of our speakers today, along with Dr. Rideout, Catherine Rideout, and together they will be doing another session later this afternoon, have just completed a resource document on implementation of measurement-based care to try to address some of these issues and give, I think, some pretty concrete tools about how you can do this within your practice. But it really is a general, it is both these sort of concrete issues, but it is also, I think, an existential issue, which is why we're here today, which is to really address this. Because measurement-based care is really also at the root of changing the way we think about diagnosis and treatment. Specifically, using flexible, dynamic, and maybe digital tools can help us develop a different approach to diagnosis and effective treatment, and that really is the future of psychiatry. So while I think we all learned a certain way to sort of think about the clinical, the patient in front of us, the question is, is that the best way? You know, we've developed a DSM that really depends on field trials and categorical symptom assessments and statistics to say, this is what depression is, and this is what bipolar is, and this is what schizophrenia is. But are we always right? Without a sort of a strong biological framework for those diagnoses, it really raises questions about whether the methodology we've employed is the right methodology. And so I think that is also at the core of what measurement-based care is about and why we're having these conversations today. 
So what I'd like us to do today, I hope you all will join in this interesting exploration, would be to really look at some of those barriers and some of those interesting kind of existential issues with us, and consider how the basic approach to clinical interaction, delivery of care, and innovation might compel us to think differently about what we do. So I'm going to, let's see, oops, sorry, maybe, if I could use, this is good that I don't have a lot of slides, because I would be sunk. So, what are we going to do today? I'm introducing what we're doing today, and then we're going to cover three roundtable topics. And they include, and I'm going to introduce our speakers, but the first one will be on payment and the economics of sort of practice and decision-making, and that will be presented by Michael Schoenbaum, and I'll give you some introductions in a second. Eric Vanderlip will talk about the idea that the use of validated rating scales interferes, or may not interfere, with a psychiatrist's ability to effectively treat patients. And Dan Carlin will talk about measures specifically and whether or not they reflect the complexity of symptoms, behavior, and clinical decision-making. I want to just make a comment that Glenda Wren was supposed to be here with us, but was unable to attend, and thanks to Eric for jumping in and hopefully representing whatever Glenda was going to say. But he will be himself, so that's all great. So, the format today, just so you know, is that each of the participants will give a statement about kind of their approach to the question and problem. Then the panel will jump in with provocative remarks, and then we will really throw it to you all, because I think this is about the conversation, how you think about these topics, and how you think about sort of the pros and cons of each of the statements. And because we have a virtual audience, Jacqueline Posada, who has agreed to help us with this, who is a member of the program committee and is also on faculty with me at UT, will be fielding questions from the audience and will help us with that. So, without further ado, I'm going to introduce our panel. The first person that will speak is Michael Schoenbaum, who is a health economist and senior advisor for mental health services, epidemiology, and economics at the NIMH in the Division of Services and Intervention Research, and has been one of my great educators on this topic and has been very much involved with the work that we've been doing at the APA, and we thank him for that. He conducts analyses of public health and mental health services issues in support of the Institute's decision-making and works to strengthen NIMH's relationships with public and private stakeholders and works with many of the other agencies across the federal government, specifically focusing on this type of issue and also on improving and expanding prevention, identification, and treatment of suicide, and on the wider implementation of evidence-based behavioral health interventions in the real world. He has been involved with collaborative care and was part of the initial work that was done on that several years ago, and he received his undergraduate degree from Yale and a PhD in economics at the University of Michigan. Eric Vanderlip is a board-certified psychiatrist and family physician.
He received his undergraduate degree from Rice and medical school at the University of Oklahoma, doing a combined residency in family medicine and psychiatry at the University of Iowa, also doing a fellowship in integrated behavioral health services at the University of Washington. He has been doing a lot of really interesting work in innovation, pioneering an on-demand healthcare startup, ZoomCare, and served as its CMO, and he participates with us on the Council of Quality Care, and he is now an assistant professor at the Oregon Health Sciences University. Dan Carlin is a psychiatrist and also is board-certified in addiction medicine and clinical informatics. He's the CMO at MindMed, a company developing psychedelic medicines, but he was previously the co-founder and CEO of Health Mode, which is a company that developed digital measurement technology of medicine in clinical trials, which is his special power. So we're really excited that he can be here today. He's an assistant professor at Tufts and has been the director of psychiatry informatics and the associate training director for psychiatry there. He trained at Tufts Medical Center and attended medical school at the University of Colorado and did his undergraduate work at Columbia. So with that, I think we shall get started. And just a comment that when you all start, so I think that's Michael, if you could please make sure that you provide any disclosures you might have. Okay, I'm up? You're up. That's great. So I'm actually going to start with obligatory disclosures. As Carol said, I'm an economist and epidemiologist at the National Institute of Mental Health. I work under a mandate from the institute to help turn our science into policy and practice to increase the public health impact of our work. Still, I need to say that the comments I'm going to make today are my own and do not necessarily reflect the views of my employer. I don't have any conflicts of interest to disclose that I'm aware of, unless you think that being not a psychiatrist is a conflict. We can talk about that later if you want. One of the privileges of working where I do is that I truly feel like the only dog we have in the fight is the public interest, that is, how can we deliver effective and efficient care to the people who need it. Building on Carol's opening comments, from where I sit, measurement-based care for behavioral health is effective in the sense that patients who receive measurement-based care have better outcomes than people whose care is not measurement-based. Broadly, I think it's also efficient in the sense of superior outcomes at similar cost. This does not mean that financial and other incentives are necessarily aligned to enable and encourage you all to furnish measurement-based care as standard practice. Evidently, it is not yet standard practice. If they are not aligned in that way, why and what could we do about it? Mainly, I, and I think this panel, will focus on the perspective of individual providers and practices, but I actually want to start somewhere else. My long-held view, which I've expressed repeatedly when I've had the opportunity to talk to APA audiences, is that psychiatry as a field, and the American Psychiatric Association as its organized representative, actually has a strong institutional interest in meaningful documentation of patient outcomes, which measurement-based care enables, because many of the services that psychiatrists furnish can also be furnished by other types of clinicians who charge less. 
For that part of what you do, you're in some sense competing on quality, and if the market can't observe quality, people may choose based on price. I think that means you all have a stake in making sure that at least enough high-performing psychiatrists furnish measurement-based care to establish and periodically confirm the reputation that psychiatric care produces better outcomes. But evidently, to the extent anybody actually agrees with me and the institutions of psychiatry view themselves as having this kind of incentive, that incentive is, again, not enough so far to make this standard practice in any general way. External forces might ultimately do that, such as expansion of the Joint Commission's recently introduced accreditation standard on measurement-based care, if that accreditation standard were applied, expanded by the Joint Commission or others in some way to a wider scope of psychiatric practice. And there may be other steps that external actors, payers especially maybe, and accreditors might take to make measurement-based care the standard of practice. And then there is what I think I am expected to talk about as an economist, which is sort of micro-level economic forces. So what people often say is that we need direct reimbursement for the measurement. The assertion is that providers are just not going to use valid standard measures until they get paid to use valid standard measures. And I think actually beyond what currently exists, this is unlikely to happen, at least in the most direct sense of a la carte fee-for-service payment for administering PROMS, you know, the PHQ or the GAD or the audit or whatever. I also think that thinking in this way is somewhat of a red herring, because the variable cost of doing this as part of, say, E&M services is actually almost certainly low. And especially since it's not just inefficient, but also generally ineffective for the psychiatrist to administer standard measurement tools verbally to the patient in real time. If that's the way people are doing it, like that's not the way we want people to do it. It's, as I said, inefficient, but also it tends to yield bad answers. You take this standard thing, you administer it in this nonstandard way, and you get variable response. So I think focusing on, again, focusing on fee-for-service payment for the measurement per se as the criterion that if we don't do that, then people will not administer and use measures in measurement-based care in a general way, I think that's a dead end, actually. I do want to acknowledge two other kinds of costs that matter, and then I'll turn to my colleagues. What's typically not trivial, so I asserted the variable cost of doing this is not substantial. What's typically not trivial is the fixed cost of buying and implementing the health IT that can make measurement-based care easier and more effective versus harder and less effective. And here there's a chicken and egg problem. Current health IT for measurement-based care workflows in behavioral health are, to be blunt, often clunky in many different ways. Maybe that'll be the subject of some of our dialogue. 
Having better tools, not in the sense of the measures themselves, but in the sense of the infrastructure for administering the tools and scoring the tools and communicating the results in a way that the clinician can use to make treatment decisions, having better tools for all of that would facilitate wider use of measurement-based care, but vendors may not develop better tools until or unless there's obviously demand for them. And this is not a problem individual practitioners can solve, and us going and banging on individual practitioners and wagging our fingers at them that they should be doing more of this in the absence of some organized engagement with the vendor community to try to make better tools more readily available, not on a boutique basis but as default products. Again, I think we need to think strategically about that. In an organized way, that's a kind of APA leadership thing, I think, a role for the APA. Finally, another non-trivial thing is the transaction cost of shifting practice from we don't do this in any regular way to we do do this in any, you know, as standard care. Very broadly, U.S. healthcare is pretty good at proliferating new drugs and devices, discrete intellectual property, where there's some patent holder who has an economic stake and lots of skill at getting people to shift from the old technology to the new technology. We are much less good at adopting new care delivery models, and measurement-based care is a care delivery model, so we are not good at shifting the process of care. Carrots and sticks may influence existing providers, may encourage that. It tends to be slow and quite cumbersome, and so I guess the last thing I would offer is that I think the most effective long-term strategy, and maybe this is also a role for the APA or anyway, you know, kind of a top-level strategy for the field, is to build measurement-based care into initial psychiatric training. I mean, the short version is that it's way harder to, I mean, yeah. People tend to continue to do what they learn in their initial training. You know, that's a polite way of saying it's harder to teach old dogs new tricks, and so if we train the new dogs to do measurement-based care, I think then that will change a lot of, it'll change the form of the conversation about what else we need to have in the world to make it possible for this to be standard practice. Okay. All right. I'm going to hand it over. Thank you. Oh. Michael is so smart and eloquent, I'm going to attempt to go after that, but I went into the women's restroom this morning, and I don't think my brain is in the right space to follow that well. I'm Eric Vanderlip. Yeah, it's just been a long week, but it's a pleasure to see you all this morning. So I'm actually delighted to be here, and I think this is a really important topic, and Michael brings up some really, I think, interesting challenges with implementation of measurement-based care from a micro and macroeconomic angle that we are confronted with as a psychiatry profession, and my role in this panel is to bring up the more interpersonal physician-patient, client-patient relationships with how we think of measurement-based care and changing our dynamics in the patient-provider relationship. But I want to step back for just a second because I think we need a little bit more context for this panel so that we can have a rich discussion. 
It's depending on how long you've been here, day one of the annual meeting for you, it's rare that we get a gaggle of psychiatrists in the same room together. Many of you all are from different places, maybe even different countries, and it's unusual that we have a chance to talk about some of these very difficult discussions and concepts. And here we are, and we have a whole open forum and space to talk about some of these things, and the goal of this, as we were dreaming up this session, was actually to start a conversation about the challenges for why we don't, as a whole and as a profession, engage in more measurement-based care, because we just don't do it. So before I really talk about the challenges from the doctor-patient relationship standpoint and the implications of measurement-based care in our dynamics with patients, I just wanted to take a second and ask you all, where are you practicing? Who does measurement-based care? And I wanted to kind of see the audience's perception of how this was going. So if I can just invite some participation for a second, I know it's still early on Saturday, you're still jet-lagged or whatnot, but how many of you are like in a small or solo practice, small group or solo practice? Just show your hands. Yeah, a minority. How many of you are in a larger practice setting with like a large group or like maybe an academic medical center or something like that? Ooh, ooh, okay, more, ooh, interesting. And for the audience at home, there were like six or seven people who were in small practices and oh, I don't know, there are like, what, 300 people in this room, right? So the majority about, I would say, 50, 60 people who in fact are in larger settings. So I think we have a little bit of a bias in our sample selection here in the audience. I think the psychiatric workforce in large part practices in small group and solo practices, and it's harder for us to speak to that audience necessarily. And that is a different audience that we have a difficult time reaching. Nonetheless, for those of you who are in the different practices that you have, who actually does measurement-based care the way that Carol has kind of defined it? Show of hands. Just three of you? Can you raise your hands high, please? Okay, all right, all right. You should ask how many of them are at the VA. Yeah, all right. How many of you are at the VA? None. Zero. All right. So I run a department where we're trying to implement measurement-based care, and the uptake is very low. Okay, so this gentleman over here has volunteered. I'm going to repeat comments from the audience because we have a virtual session ongoing and I want them to be able to hear what's going on. But this gentleman has volunteered that he runs a department trying to implement measurement-based care, and the uptake has been very low. And can I just add from the virtual? It looks like there's FQHC presence, too, on the virtual meeting. So that's a big group. FQHC setting? FQHC. Thank you. Excellent. Okay. And how many of you in your gut are skeptical about the use of, let's say, a PHQ-9 score to track depressive symptoms in your patient population? Come on. I'm skeptical about it. Dan's skeptical about it. I mean, just a handful. How many of you are familiar with the PHQ-9 score? Oh, how many of you actively use the PHQ-9 score? Okay. How many of you believe the PHQ-9 score is helpful for you in your practice? Great. 
How many of you think it's bollocks and you want to move on to something else, so don't even care and think your gut is better than anything else? I don't know if that last part was... Oh, is that too far? Look, I mean, my point is that I think that there's many barriers to implementation of measurement-based care in a practice. One of those barriers is the doctor-patient relationship and the veracity of the scores, et cetera. So my role is to be a little provocative because we want to get this out in the open. We want to have an open dialogue around the challenges around measurement-based care and the implications for the practice. So I want to talk briefly about what those implications are, and we'll move on and forward with that. Thank you for humoring me for a second. So I'll say in my own... I have a private solo or small group practice. It's cash only, whatever. My point, though, is... Oh, do I have to say disclosures? Oh, I am an assistant professor. I'm an assistant professor at OHSU. I do psychiatric emergency services work. I have a small solo group practice. I have co-founded a mental health startup company that's online. Pre-revenue doesn't earn me anything, and I have no other conflicts of interest. I'm mainly salaried from OHSU. In my experience in the solo and small group practice, I've had patients completely refuse the measures. I'm just going to put that out there. I'd like to do measurement-based care, and I've handed them a PHQ-9 score, and they balk at it. They're like, what the heck is this thing? You think I'm just reduced to this? I mean, there's some clientele and some patients who hate it. They think that it's silly. Secondly, and that is a very real barrier to implementation for me, especially in my practice, some people just don't like doing this stuff. They think, as a consumer of mental health services and somebody who's struggling, they think that this score is really weird, and they don't like doing it. And that puts me at odds with them, and then they don't like me very much after I ask them to complete this score. And that's a very real challenge for me sometimes. And it has dissuaded me from doing more measurement-based care in my private and solo practice. A lot of patients get really distracted by, you know, the prompts on, like, the PHQ-9. They're like, should I answer some days or more than half the days here? They're really caught up with that. And the more neurotic the person is, the more likely they are to really get caught up with that. And then they spend an inordinate amount of time splitting the hairs on the subjective difference between these symptoms, the frequency of these symptoms. And they get really hung up on it, and it's very difficult for them to move past it. That's a barrier for me in how I think of our relationship together and the time we spend together. A lot of docs believe that when we do these symptom-based measures especially, it over-emphasizes the symptoms of the illness above functioning. And even though I know, and I'm using the PHQ-9, for example, because all of you apparently are familiar with it, and many of you use it. It's one of the most frequent measures that we're citing and one of the most frequent measures used in measurement-based care. But a lot of people, even though the PHQ-9 has a functionality score at the bottom, it's somewhat difficult. I mean, many people forego that just to get the score. It's not part of the overall score. 
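[Editor's illustration, not part of the talk: for readers unfamiliar with the instrument's structure, here is a minimal sketch of how a PHQ-9 is conventionally scored. The nine symptom items are each rated 0-3 for a 0-27 total; the final "how difficult have these problems made things" question is recorded separately and does not count toward the total, which is exactly the gap Eric is pointing at. The example responses are made up.]

```python
# Minimal PHQ-9 scoring sketch (illustrative only).
# Nine symptom items rated 0-3; the separate functioning/"difficulty" item
# is captured but not added to the total score.

SEVERITY_BANDS = [  # commonly cited cutoffs
    (20, "severe"),
    (15, "moderately severe"),
    (10, "moderate"),
    (5, "mild"),
    (0, "none-minimal"),
]

def score_phq9(items, difficulty=None):
    """items: nine integers in 0..3; difficulty: optional 0..3, kept out of the total."""
    if len(items) != 9 or any(r not in (0, 1, 2, 3) for r in items):
        raise ValueError("PHQ-9 expects nine items rated 0-3")
    total = sum(items)  # 0-27
    severity = next(label for cutoff, label in SEVERITY_BANDS if total >= cutoff)
    return {"total": total, "severity": severity, "difficulty": difficulty}

# The "total score of 24, functioning somewhat difficult" case mentioned below:
print(score_phq9([3, 3, 3, 3, 3, 3, 3, 2, 1], difficulty=1))
# {'total': 24, 'severity': 'severe', 'difficulty': 1}
```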
And, like, do we really care if it's somewhat difficult if their total PHQ-9 score is 24? You know, I mean, my point is that if we over-emphasize the symptomatology, then do we lose the bigger picture on the overall function of the person sitting in front of us? And there's a lot of reticence, I think, from clinicians that by measuring the symptomatology burden, we're losing sight of the bigger picture of the person sitting in front of us. And that's a barrier in the doctor-patient relationship to implementing measurement-based care. I think, obviously, like it's already been said, a lot of clinicians feel like because of this and other reasons, the time spent in doing measurement-based care, the measures are just so off that they don't accurately represent the true status of the patient. And their gut intuition and their nose, clinically speaking, with their experience, put experience in quotes, probably shouldn't, but with their experience is vastly better than whatever the measure is telling them. And because they'd like to perform really great care for people, they feel like their gut intuition and vast years of experience in treating thousands of people with depression is better than what a quick, subjective, symptom-based measure will tell them about how the patient's really doing. And that's a barrier. Michael, Schoenbaum offers that they're wrong, but it is a barrier nonetheless. I think it's a barrier nonetheless that we've got to talk about. We've got to talk about it. And if we don't talk about it, we're not going to ever get the skeletons in the closet out and talk about the real challenges to doing this on the doctor-patient relationship. And then to that end, I think a lot of docs feel like these measures aren't accurate or reliable over time, that they're not valid to the true symptomatology of the person. And that's a challenge. And I'm stealing some of Dan's thunder, potentially, with this. But those are some of the barriers to the doctor-patient relationship that I've identified and that I think are very real challenges in us moving forward. I'll finally say the last one is a fear as a clinician. If you start to do this more regularly, will I be recorded in my actual outcomes? And what does that mean for me? If I'm starting, if people, if my clinic manager starts to look at my performance across a panel of patients, which is the natural consequence of measuring depression outcomes with a PHQ-9, what does that mean for me? If I don't perform as well as my friends and colleagues, am I going to be put under the microscope? Will my livelihood and well-being be dependent upon outcomes which I really actually have very little chance to control? I could tell somebody that they should take Lexapro, but do they take it? I can't make them take it every day. And that is their outcome in and of itself. And I have some bearing on that, but it's very little in the overall bearing of how they're going to do. And if I have very little control over that and I'm starting to be measured based off of the outcomes of their symptomatology, I don't believe in a PHQ-9 score and I'm afraid it's over-indexing on symptoms, etc., then I'm just very reluctant at a macro level to do this because I'm afraid of the slippery slope. I don't trust modern healthcare. I don't trust value-based payment systems. I don't trust that I'm not going to somehow be dinged based off of poor performance, which is something out of my control. 
And so I think those are very real disincentives to doing measurement-based care at the doctor-patient relationship level. And I throw those out there because I'd like you to think about that and maybe reflect on it. Do you resonate with any of those? And if so, have we missed other things, other barriers for you in the doctor-patient relationship to prevent you from actually assessing your patients in a structured format and using that assessment to guide your treatment? And if you do resonate with some of those things, have you found ways to overcome them? Or let's get these skeletons out in the open so that we can at least talk about them. That's the point of this forum this morning. That's what I hope to get to. So I'll put those out there for thought. Thank you. So thank you, Eric, and thank you, Michael. And I'm gonna actually try to have all three of you, if I'm still on, talk a little bit about both what Michael said and what Eric said, and then we'll get to Dan's comments. And I really wanna kind of have some time while people still can remember what anybody said this morning to talk about. So Michael started us out with really thinking about sort of the economic barriers and the financial barriers to going ahead and doing measurement-based care. And I'm wondering, and raised both what he seemed to suggest are inconsequential barriers, like the actual time it takes for a psychiatrist to either administer the PHQ-9 or read it or discuss it with the patient. But then talked about, I think, bigger system barriers as well as some opportunities for how we, as an organization, we as a field, can get there. Eric talked a little bit, and I hope a little bit of a tongue-in-cheek way about why it's so hard to do this and why you don't wanna really disrupt your practice. But I wanna kind of have you all sort of think, first of all, what do you all think about what Michael said about why we can or can't do it? And I guess I was struck by the idea that why should we do it if it's this expensive and this difficult? Does it seem like, even after looking at the evidence, that we should even be investing any mental energy on this? And I would sort of throw that open to you all, and then I will let Dan and Michael counter Eric's points, which were quite obvious. But anyway, I don't know. What do you all think about the idea that Michael raised? How do we measure the time and cost of doing measurement? Thank you. Well, so I guess if you're gonna put me in the box, make sure it's the right box. So I don't mean that the whole process of assessing how the patient is doing and discussing it with the patient and considering whether to adjust treatment or treatment options and stuff, I'm not asserting that that is trivial. I'm actually saying that's central to the enterprise of what E&M codes are designed to pay for. What I'm saying is trivial is in the course of doing that, the clinician needs to get information about how the patient is doing and how that compares to how the patient was doing before, and doing that via the use of standard measures, especially if those standard measures are implemented via technology, which is a better way to do it than if the clinician reads the instrument to the patient and during the patient visit. I'm asserting that it isn't any more burdensome for the clinician to do that via standard measures than in whatever it is that they're doing now that they accept gets paid for as a bundled service with E&M visits. 
And so the straw man I was poking at wasn't, you shouldn't get paid for doing the work, it's that you shouldn't expect special payment additional to all that, the work you're already getting paid for, for the modular component of administering a standard measure. And I mean, how does one do that? I mean, there are to time and I mean, there are people other than me who go into work settings and measure stuff like that. Yeah, I mean, I guess Eric, your comment struck me because I think about a situation where say I was treating someone with, as if I do this, where I was treating someone with anxiety and administering a HAM-A every time I saw them and half of the questions on the HAM-A are somatic symptoms of anxiety, yet this patient has never had a somatic symptom of anxiety. That means half of the questions they're answering are gonna be zeros and they're gonna be irrelevant to this patient every single time. I mean, I'm curious from the audience, I wanna get your participation. So if you, I mean, if you did the PHQ-9 routinely and you were reimbursed from a commercial insurance, $3,000 a month to prove for their panel of patients that you were treating that you were routinely doing PHQ-9 scores, would you be more motivated to do a PHQ-9 score every time you saw a patient? And I promise when I'm done talking, we're gonna regret using the PHQ-9 as a surrogate for measurement. I mean, I see a thumbs up here, but I mean, I'm not saying that, I mean, my point is that economic incentives can, could have the potential to drive changes in practice. So what thoughts from the audience? Yes. Has it been proposed for insurance companies to give an incentive for the patients to fill these out and instead of reporting to us, they report to the payer? Yeah, so the question is, has it been proposed for insurance companies to give incentives to practices for patients completing the PHQ-9? And instead of reporting to us, that would report to the payer. And yes, that's been done. And it's, there are some codes that are available for the provision of measurement-based care. Oh, that's a different question. That answers something else, because the... So I think the answer is yes. I'm not sure that I can point to specific environments in which that is done. I mean, there are lots of systems that have pushed out patient-facing tools that are intended to nudge and encourage and enable patients to do this. I will say, you know, that kind of explicit incentive, I'm totally for that. I think payers paying patients to fill out these tools and in that sense, de-burdening the clinician from doing it is a really terrific idea and follows from the payer's interest in presumably in better outcomes. It has the additional benefit that you're providing incentives to... The incentive to the patient affects the patient's behavior, which that's the thing you wanna affect. Paying providers, payers offering providers money to pay, you know, to administer these tools. There are also previous examples of that. There was a national insurance company 20 years ago had a presidential initiative, their president, their CEO had an initiative where they would pay providers who were seeing that insurer's patients to administer a PHQ-9 and further, they offered free collaborative care services to the treating clinician, that is a care manager that the insurer would furnish and pay for and psychiatric consultation that the insurer would furnish and pay for. 
The patient didn't face any cost, the provider didn't face any cost and uptake was essentially zero and the reason is because the incentives were facing the provider and it's incredibly hard for providers to implement payer-specific workflows. But the payer incentivizing the patient to fill out the measure, that, you don't have that problem. »» So I think to maybe get us off the PHQ-9, one of the virtual questions is, would you recommend certain functional outcomes? But I think the question is, would patients have more buy-in to functional outcomes instead of symptom-focused scales? And do we have any of those and is that something we can talk about? »» I think that's actually one of the things that Dan's going to talk about in a second. So let's hold that one for a second. I don't know if you want to remind us. But I think the question is, is there, to me I guess the question is, how do you square the data that this works, that this helps with these kinds of barriers? And do you, what do people feel about the idea that you can either do it one way, which is what we've been taught, or do things where the evidence, we should be doing the things where evidence drives us? I wonder if anybody has any thoughts about that. Does that create any sort of tension for you as an individual? Yes. »» If you have a comment, you're welcome to come up to one of the microphones too so we don't have to repeat it and everybody can hear it online. »» Great. Thank you. »» Hi. »» Dr. Vanderlip, you successfully convinced me that it's a bad thing to do measurement-based care. I'm really scared now. »» He's done. »» Right. So I guess when I was talking, when we were talking about it, it reminded me last time I was at my primary care physician and he told me that I was supposed to go to the gym and do more sports to lower my cholesterol. And it feels like that. It feels like we all know that it's a good thing, but the emotional component is holding us back. Like you pointed out very nicely, like there is anxiety on the part of the providers, what should we do, is it going to affect me kind of thing. So I wonder if, I mean as psychiatrists, shouldn't we kind of address that emotional component and how do we do that in order to lower fear for providers? »» How would you do it? »» I don't know. I don't have the answer. »» We're the panel. We're supposed to have the answers. »» I mean, for my cholesterol, eventually I got scared, right? So I guess there was a... »» Wait, is the answer exposure therapy? »» Oh, yes. »» Right. »» You do it and see how it works for you? »» Also, you're weirdly comfortable with one measure in there, which is that your cholesterol is meaningfully measured. »» Right. »» I think that's the point, is that the reason that you know that your cholesterol is bad and it finally gets to you, is that you actually see the number. »» I would put "know" in big quotes there. »» Right. But I think that's what we should be talking about because that is a significant barrier, right? »» So do you think, you're not done. »» I'm not off the hook yet. »» Sorry. So do you think that knowing that your cholesterol is 220 and that you're going to have a heart attack before you're 55, does that propel you to, and now that you went, does that number help you understand that impact on your health? »» To a certain extent. But I think it was more my primary care physician who was very insistent in me doing something, right? »» So that's a challenge. We are psychiatrists.
You don't think that we can help people think about their health and what to do and counter the resistance? »» Given how much we're incentivizing future question askers right now, I'm not certain we're very good at motivating behavior. »» I mean, I think that I'm trying to speak to some unwritten or under-acknowledged truths in the reticence that the general psychiatric practitioner has around measurement-based care. And yet, I will also tell you, just showing my full deck of cards, that in other practices I've routinely used the PHQ-9 and the GAD-7 and found them very beneficial in guiding treatment and helping. And I believe they do provide better quality outcomes. So I want to counterbalance my prior comments with that. And I'll also say, though, that unless we acknowledge these challenges emotionally with doing measurement-based care and other things, we're probably not going to be able to move forward as a field. And we need to get them out in the open. That's my point. »» Okay. »» I want to be ranty for a minute on one thing that you said. I want to acknowledge on the one hand the reality that practitioners may fear being measured. Actually, practitioners, you know, I've sat in technical advisory panels at the National Quality Forum where they've debated new quality measures across the scope of medicine. And all those technical advisory panels always have representatives from the guild of whatever subspecialty the measure is relevant to. And the role of the guild in those conversations is always to say, you can't hold us accountable for that because patients don't do what we tell them and we don't have any influence over that. And so you can't hold us accountable for it. And I guess here I have maybe the conflict of being not a psychiatrist. And I mean, you know, as the Car Talk people used to say, bogus. Like we just don't want help. I understand that that is the micro motive for providers. And I will say from outside and maybe representing the public health perspective, we need to organize healthcare so that people cannot continue in general to practice in that way. Because it's just, I mean, it's not, I mean, in the short term, maybe you can continue to practice that way, evidently you can because there's excess demand for what you do. And as long as, you know, but in the long run, that I think is not what anybody wants. And I can't imagine that that is why people go into medicine either. Right? And at some fundamental level, I don't think people go in so that they can just do unaccountable things that yield a decent income stream. But it's so much easier. All right, in the back, yes. While we don't want to be putting up obstacles to patients, patients accessing care, I do think that if we, whether it's incentivizing patients or in other ways, socializing that measurement-based care is the sine qua non of seeing the doctor. If patients learn that they can't get to the appointment without answering some questions, it might change behaviors. If you go to a pain management doctor for some, any sort of issue, you can't get into the doctor's office without filling out some sort of pain rating scale every single time. So to my mind, forcing it ultimately might change behaviors. I think, I work in a telehealth practice and some of the challenges are, as you said, the clunky implementation and how poorly embedded some of these tools are. But when that changes, if we change patient behavior, I see it as very viable.
So basically, your argument, and I understand it, is that if we can increase demand for measurement-based care from the patient perspective, it would entice more providers to therefore engage in it because it would be seen as a standard of care. And if a patient were to show up to a provider practice and sit on the couch and not be given a PHQ-9 score, they might be put off by thinking this provider doesn't know what the hell they're doing. Yeah. Interesting. Okay. I just want to make a comment. Oh, can you step up to the microphone? There's no anonymity. I want to make a comment because I have been practicing since 80s, 90s, 2000, and now. So my major time was in the western side, UPMC, I'm talking about Pittsburgh. And I was both system, you know, Allegheny-Westman system and the western side. At that time, mostly my time was 80% in inpatient care, then 20% outpatient care. And eventually now, I'm not doing the inpatient care, just the, you know, private practice. Every PCP, it is they have to do PHQ-9 testing. They have to do, you know, cholesterol and diabetes and this and that. Even you start antipsychotic, you have to measure the metabolic score. Now it is not patient-controlled, you know, thing is going on. It is we have to do so that we can show that we have done something. Otherwise, suppose somebody sued you, and in the court, your chart is haphazard. You are just describing here, there, nobody reads that. If it is systematic questions and answers, that's what people will see. Because I know that I, you know, I was sued many years ago. They know, they don't want to hear, say yes or no. Doctor, I asked you yes or no, so you cannot answer that I have done all these kind of things. So I think psychiatry is changing, and we have to accept it, and AI is coming. So if we don't do anything, you know, how you are going to practice? You just don't go and go one hour just chatting with the patient without understanding what you talked. Yeah. So scales are extremely important, and I have been doing for a long time. That's my comment. Yeah. Yes. So I was in a practice setting where we had an iPad that we would get to the patient registration, which included the PHQ-9, the GAD-7, and the AUDIT-C. And we got those incorporated into the electronic health record. We would review it right as the patient was coming in. And we did get repetitive samples from the patient. There were no questions about it. So that did work. Now, what's the problem? The patient population we had that I took care of was actually a pretty severe population, ranging from psychosis to multiple people with personality disorders. And so the measures, the issue is did I trust on an aggregate level, looking at my performance with the patients at aggregate on a practice level, did it really reflect what was going on with the individual patient? It was great on a patient-individual basis because we could then take that and say, hey, last time you were here you said you were doing great, now you're saying you're not doing so well, and that was only two weeks ago. And so for the personality disorder folks that were reactive, it created that foundation. But to say that measuring my outcomes based on a PHQ-9, GAD-7 in the population we had, so do we trust the tool, number one. Number two, a patient may sit here like a MADRS of less than 10 is considered in remission, but if you tell the patient how do you feel, terrible. So do we trust the tool and how do we get tools that we trust?
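[Editor's illustration, not part of the session: the workflow this audience member describes, with scales collected at check-in and reviewed both per patient and across a panel, can be sketched roughly as follows. The data, and the PHQ-9 < 5 remission cutoff (one commonly used definition), are illustrative assumptions rather than details of the commenter's actual system.]

```python
# Sketch of the two uses of repeated scores described above:
# (1) per-patient review ("you said you were doing great two weeks ago"), and
# (2) panel-level aggregation, which is where the trust/accountability worry comes in.
from datetime import date

REMISSION_CUTOFF = 5  # one common PHQ-9 remission definition (score < 5)

# Hypothetical visit histories: patient id -> list of (visit date, PHQ-9 total)
visits = {
    "patient_a": [(date(2023, 3, 1), 18), (date(2023, 3, 15), 6)],
    "patient_b": [(date(2023, 3, 2), 12), (date(2023, 3, 16), 14)],
}

def change_since_last_visit(history):
    """Return (latest score, change from prior visit) for one patient."""
    history = sorted(history)
    latest = history[-1][1]
    prior = history[-2][1] if len(history) > 1 else None
    return latest, None if prior is None else latest - prior

def panel_remission_rate(all_visits):
    """Share of patients whose most recent score is below the remission cutoff."""
    latest = [sorted(h)[-1][1] for h in all_visits.values()]
    return sum(s < REMISSION_CUTOFF for s in latest) / len(latest)

for pid, history in visits.items():
    print(pid, change_since_last_visit(history))   # individual-level review
print("panel remission rate:", panel_remission_rate(visits))
```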
Well, I thank you for that because I'm going to actually hand it over to our third person, Dan Carlin, to talk a little bit about are the measures good enough and really the existential question of measures in general and how we move forward. So thank you for setting that up. I think you raised some really important questions. So Dan. »» Yeah. Thanks for that. So I'm going to come to this from the perspective of, let's say, reality. Not that I have any special designs on reality that you don't have, but I come to this from 10 years of doing measurement science in pharma, in device companies, in research. And so what do I mean when I say I come to it from the perspective of reality? What's the point of measurement? Not these guys' point. What's the point of measurement in general? It's a way of accessing truth about the world, right? It's a way of accessing a reproducible reality and gives us the tools to describe it. Without measurement, there is no science, right? Most of science is either measuring something or tweaking something and then measuring something. Without measurement, we don't have science. But without measurement, we clearly can have care, because here we are sitting here saying you need to measure stuff in your practice and come up with reasons why you aren't. So from the perspective of measurement science, all we're doing is trying to learn something about the world in a way that can be reproduced again and again and again, right? So we think about measurements that exist in the world that we tend to trust. We've already heard cholesterol brought up. I mean, I looked at this bottle of water. It's got a pH, a volume, and I can translate that volume to a weight in my head, right? Patients have exactly those same things. And we tend to trust that those measurements, when we do them, represent a reproducible reality. We know we're supposed to weigh our patients when they're on antipsychotics. How often do we actually do it? There's an easy reproducible measure. It takes a $20 tool that you can use for the duration of your practice. You can translate that number into your EHR, no problem. Just type it. It's usually two or three digits, right? But we don't even do that. So maybe it's not about whether we trust the measures. So I was told to come up here and be a little bit provocative, and I'll do that. I have some disclosures, financial ones. I've worked for a bunch of drug companies. I still do. I'm a shareholder in a drug company. I'm a shareholder in a bunch of device companies. I'm paid by them. But none of them will benefit from the things I say. And in fact, if I told you the specific ones, they might get hurt by the things that I say. And it certainly, I don't think, would make you trust me anymore if you knew who paid me. But I will tell you someone who used to pay me. Pfizer used to pay me. Do you know what the P in PHQ stands for? You think it's patient, right? Who paid for the PHQ to be developed in the 90s? All Pfizer, right? PHQ-9 is just one little subsection of a larger Patient Health Questionnaire. What happened in 1992? Zoloft was approved. Why on earth would Pfizer, in the mid-90s, be motivated to get you all to screen for depression? Yeah? Yet, it's ubiquitous now, right? As you just said, every primary care doctor does a PHQ every time somebody comes in. That thing exists to sell Zoloft. Now it's probably not Zoloft anymore, right, because we got better than that. Now it sells Lexapro. If the Lexapro doesn't work, then it sells something you call an SNRI.
SNRI? SSRI? Are those different? Not really, right? SSRIs aren't that selective, and SNRIs don't really get to the norepi until you raise the dose quite a bit. They're just SRIs. But we've been told they're different. So anyway, there I am being provocative. My other provocation would be, when I do research, you guys are pretty distrustful of it. When pro-measurement people repeatedly find that measurement makes things better, shouldn't we be just as distrustful? Wait, but nobody's paying them, right? No one's paying them to be pro-measurement. You're getting paid to be pro-measurement. But nobody's paying them. Most people are doing academic research. They get a grant. They're not funded by a company to do pro-measurement research. So why are they doing it? Well, because they think they work, right? So when I report my disclosures, I'm supposed to tell you who pays me and how much, and I have to report that regularly. Shouldn't the people who think that measurement work have to tell you that they're conflicted, that they're likely to find the thing that they want to find? Why don't they? Why don't we ask that of them? Well, how would you measure it? How would you measure their conflict? Maybe the number of papers they've written. Is that a bad thing? Someone has a lot of papers saying the same thing over and over again. Should we think that they're conflicted? No, we think they're an expert, and we ask them to come talk about it, right? Yet if they got paid to do that, we would think they were conflicted. Why? Because capitalism invented money to measure value. So we use money, even in our world of measurement, as a surrogate for value, and you know what surrogates are good at? They're good at turning into numbers, and you know what numbers are good for? Well, comparing one thing to another, right? So you can say, well, you get paid less than you, so you're more likely to tell me the truth. Really easy to measure, but I think arguably not all that important. I promise I'm not going to lie to you because a pharma company gave me some money. I'm almost guaranteed to incidentally misrepresent something because I really believe it. Yet that's what makes me an expert, right? So constantly in measurement, we go around and we measure the things that are easy to measure, and we call those important, whereas the things that are hard to measure and are likely the most important sit on the side. So I'm going to tell you a quick other story. This is actually from when I was at Pfizer. We had a drug, preclinical, that in, well, it became clinical, but in monkeys, it increased their blink rate. It would engage dopamine, and there was a hypothesis that if you engage some dopamine, you drive blink rate. Simple, right? Nice, easy phenomenon. It's like, wait, right? Something we can measure, something we believe is true about the world. And so we got this drug into people, went through phase one, was in phase two, and we decided to do an early sign of efficacy study to go look and see if we were getting pillar two engagement. Are we on dopamine? So we put the people in essentially the same setting that the monkeys were in. Now, with monkeys, you can do things that you can't really do with people, but fundamentally, we had people sit on drug and not on drug, and we counted their blink rates. 
We had a machine to do it too, right, fancy, and someone should have tested the machine before we paid for it, but the machine came back and we were getting wildly discordant blink rates between the two sides, which is fine. You know why your dog always looks like it's winking at you? It's because dogs are monocular blinkers. They blink separately with each eye, so they are always winking a little bit, whereas we are generally not monocular blinkers. We're binocular blinkers. We blink together, right, unless you're intentionally trying to send some sort of a signal. And so when we saw like, you know, 400 blinks per minute on one side and 10 per minute on the other side, that didn't seem right at all. So we did the same experiment, but pointed a really high speed camera at people's faces and, you know, could watch it in slow motion, so we found graduate students to watch it in slow motion. But there's a reason. Hang on. This is good. This is not a misuse of graduate students, because if we just had text to it, the next thing might not have happened, and, you know, a day or two into these poor kids watching these blinks, blinking faces, I walked into the dark room where we had stashed them, and one of them was sitting in front of a notepad, and it was like walking in on Darwin looking at finches. She had sketched like a dozen different blinks that she was seeing in the slow motion footage. And I thought, oh, no. If a blink's not a blink's not a blink, which of our phenomenology, which of the things that we call signs and symptoms are unitary? This is troubling, right? And so what that led us down the path of is the idea that, and this is why I started doing this measurement thing, that so often the things that we build our diagnoses on and the things that we're talking about measuring here are necessarily not what we say they are. We name them, and then we start to believe in them. We name a thing a blink, and now we think it's a unitary phenomenon. We name a thing anhedonia, and now we start to think it's a thing. And so when we actually drill down into measuring things in a really close way, all those models fall apart. And it's kind of a scary thing, because we build our way of interacting with the world on these models. These things have to be things. Otherwise, what means anything, right? And so that led us in two directions. And I think this is, you know, I'm going to, I don't know how much time I'm going to give you doing this. It led us in two directions. We need to look more closely. Every chance we get, we need to dive in to the phenomenology. Not just, psychiatry gets a bad rap, right? It's, you know, oh, your diseases are all symptom-based. Congestive heart failure? Come on. Like, there's, it's symptom-based, right? That's totally assessed by symptoms, and yeah, we have a little more biology to it, but those don't plug in very well. And we're in the same situation as a lot, I mean, essential hypertension? Essential tremor? Come on, these are all phenomenological illnesses, right? In fact, any time you see essential before a disease name, just consider it psychiatric until further notice. 
Pseudo, also, for some reason. Because, you know, we gave away diagnoses when we found biological underpinnings, and when you give away a diagnosis, like we gave the neurologists seizures once we had an EEG measure, they accidentally got psychogenic seizures along with them, I know they're not called that anymore, and now they have to manage the ones that are psychiatric, too. Anyway. So the reality is we can dive in deeply. We can measure these phenomena closely, and we have to do that with a willingness to break our existing models that help us understand the world. But we can't just, like, stop psychiatry in the meantime. We can't stop drug development in the meantime. People are out there suffering, right? They need you to do the best treatment you can do. They need me to try to build drugs that help them more than the drugs we've got today. We all need to maintain an orientation toward helping people recover. So we need to keep, on the one hand, the integrative measures, right? We have integration and derivation. We need to keep doing the integrative measures, because fundamentally, people's experience of themselves, of the world, tends to be pretty integrative, right? People don't just complain about not sleeping well, and then not think about how they feel the next day. That's just one integrated experience. You can attribute it to sleep, or you can attribute the non-sleep to how they feel during the day, right? That's a bidirectional feedback loop. But people experience themselves largely as integrated beings. That's why we have stories that we tell ourselves about ourselves. We experience ourselves longitudinally as integrated, and that's great. So we need to make the effort to try to measure those things repeatedly in the same way, in a way that's recordable, in a way that's comparable within a person and between people, so that we can learn more about how we're affecting them. While at the same time, and this is where things like item-level analyses come in, and I will not say the P word, how about the HAM-D? Not developed by industry, but used by industry to try to show that drugs work. Item-level analyses across patients, if you're actually able to capture them in things like an EHR or a patient iPad for PROs, start to allow us to break these things down and look at how we change different symptom clusters and how those relate to diagnoses or trans-diagnostic phenomena, right? So on the one hand that, and on the other hand, keep measuring the integrative things, because it helps us to actually understand, in a repeatable way that crosses over from one provider to the next, from one care environment to the next, from one patient to the next, how we're affecting people's lives. And we can do both of them. We can accept that today's measures and today's diagnoses aren't as close to reality as we'd like them to be, and still use them to try to help people. Thanks. Eric? No, we'll share it. We'll share it. Can you give me an example of what you imagine is an integrative measure and how that's brought into clinical practice? Well, there's one we use in drug development all the time, which is the Clinical Global Impression, right? So in the same study, I might have someone wear an Apple Watch, and we're measuring their daily activity and time spent in vigorous, moderate, and sedentary activity. And at the same time, when they go into a site, oh, that's the neat thing about research, by the way.
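A minimal sketch, in Python, of the item-level versus summative view described above, assuming hypothetical questionnaire item responses (PHQ-9-style, only a few items shown) stored per visit. The item names, scores, and interpretation are illustrative, not from the talk.

from collections import defaultdict

# Hypothetical item responses (0-3) for one patient across three visits.
visits = [
    {"sleep": 3, "energy": 3, "mood": 2, "anhedonia": 2, "concentration": 1},
    {"sleep": 1, "energy": 2, "mood": 2, "anhedonia": 2, "concentration": 1},
    {"sleep": 0, "energy": 1, "mood": 2, "anhedonia": 2, "concentration": 1},
]

# The summative score: what usually gets transcribed into the chart.
totals = [sum(v.values()) for v in visits]
print("total score by visit:", totals)

# Item-level trajectories: which symptoms actually moved?
trajectories = defaultdict(list)
for visit in visits:
    for item, score in visit.items():
        trajectories[item].append(score)
for item, scores in trajectories.items():
    print(item, scores)

# Here the total falls mostly because the sleep and energy items improve,
# while anhedonia and low mood sit still, which is invisible in the total alone.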
Unlike this thing where we have to influence you all to try to use measurement, when I'm doing a research study, I just tell people what they have to measure, and they have to measure it. So it makes it an awful lot easier to get the measurement done. So at the same time as we're measuring something incredibly detailed, we might be getting a vocal biomarker. Read this text into a microphone, and then here's a free-text prompt, respond into the microphone. We have incredibly sophisticated ways of analyzing both the motorics of speech and the words people use. Those are two separate domains. They're interrelated, but they both have a ton of data in them, and if we're good, we can turn those data into meaning. And then I also ask the PI of the study, on a scale of one to five, how are they? Right? And those are both important things to know, and we use both of them when we go back to you and say, hey, here's what we think this drug might do. Well, I guess I'm reminded of an expression I actually hate, though it's Voltaire, right? Not letting the perfect be the enemy of the good. I mean, it's apt, I just don't like it. Aesthetics. So I guess I just think that the population health problem that we're confronted with is not the margin between existing measures and better measures. It's the margin between no measures and existing measures. So I guess, well, okay, so the evidence that... By the way, this is the debate we want to be having right now. Also with you all. I mean, with you all contributing and not just us talking to each other. So, okay. Carol summarized the research literature, right? Research using scientific methods, establishing that repeated use of valid, standard measures, and there's a presumption about what valid means, while standard we probably know what that means, right, is in some scientifically credible way predictive of, or with the right design can be understood as causing, patients having better outcomes than if you don't use those measures. So unless your intent is to throw every baby out with the bathwater and say, well, those things that are called valid aren't valid, those things aren't standard, and I guess we're probably not arguing about the standard part, and then additionally to challenge the findings from the design, right? I mean, as an ad hominem critique, I don't think it's very compelling. If one is going to attack the evidence Carol was summarizing as showing us a better path relative to the current usual practice of not using standard measures, I think one actually needs to do it in the concrete. Because it also isn't just one study that did this. This is reproducible across multiple studies. So look, I think where I heard you end up was basically yes, and, right? We can and maybe should do as much as we can with existing measures, recognizing that there will always be better measures. And that's, I guess, what I wanted to highlight too, which is that there's the margin of not measuring versus measuring. And clearly, I mean, again, people can come back and tell us if they dispute the finding from what Carol summarized, that pretty consistently in real-world practice, if one measures compared to not measuring, patients do better. And then there's the, if we are, what? I need to ask what the sham condition is. Oh, the sham condition is usual care. People aren't blinded, but...
Yeah, but I mean, so we're saying that any intervention causes things to get better. We know that, right? The Hawthorne effect is universal. So of course a study condition versus nothing shows the study condition works better. But usual care isn't nothing. Usual care is what patients currently receive in standard practice, right? So you're saying standard practice plus an intervention works better than standard practice. So any intervention causes improvement. That's the Hawthorne effect. Well, okay. Except, no, that's not the Hawthorne effect. The Hawthorne effect does not predict that any intervention plus standard practice improves things. And in fact, demonstrably, empirically, it is not generally true that standard practice plus any intervention improves things. The world is actually full of interventions that you add to standard practice and they make things worse. Right. So that's a refutable statement. So I want to get back to y'all, and really to this yes-and, because I think this yes-and is important, as is the idea that we often let the perfect be the enemy of the good. I think what I heard you say, Dan, which is really important for us to think about, is that measuring, that data, actually does drive decision making and can drive improvement, and the better we measure, the more we allow ourselves to be open to data, and that can be lots of different things, right, the more we actually can get to a different place. So, you know, I wonder, what do you all think? I'm curious about how people say, well, that's the future, but I don't have a graduate student who's going to sit and measure eye blinks and re-describe them for me in this qualitative way, which we can then turn into a quantitative measure, right? But I wonder, where do you all sit with that? Is there something compelling about the idea that we can actually become the future simply by asking, you know, the PHQ-9, for instance? Sorry, I'm not sure if this will align to that or not, but I'm just curious. With a lot of these instruments that you're saying are validated, I don't know if they're now medical devices, right? If they're digital, I would assume that if they're making recommendations they become medical devices, especially with the recent changes from the FDA in September 2022 around clinical decision support tools, right? I wonder what your thoughts are on that, because I think some of the importance is that, like you said, the measurement is really important, but, you know, my iPhone will have an update tomorrow that says, hey, I've got to update you. What happens to your patient if there was a bug, and there was an update the next day, and that patient had already been diagnosed and you'd put them on a medicine? I haven't heard much about that, and I wonder what the thoughts are on the panel regarding that change. So the medical device regulation is a moving target at best and unenforced at worst. In general, I think you should be cautious of algorithmic transformation of data into actionable knowledge. Much of that is living in a very murky place. PROs, patient-reported outcomes, and ClinROs, clinician-reported outcome measures, where one is the patient filling out a form and the other is you filling out a form while talking to the patient or looking at the patient, are generally not regulated.
And so your interaction with it is based on what you believe to be true about the thing. And so if it's something that's in common use, like the PHQ or AMA or whatever, then, you know, you use it the way it's used, because you've seen it in common use, it's been used in a lot of different settings, and you believe it to be valid. Algorithmic transformation into knowledge, and any sort of active decision support tool where the instrument says, and here's what you should do, does fall under the current regulation. Yeah, I want to take the comment, but I want to just ask the audience, since we've got about 10 minutes left, who has a gnawing sense of nihilism right now with regard to measurement? Does anybody? You mean in general? I mean, this is partly where I was hoping we could arrive. I don't want us to arrive at a complete sense of nihilism, but I think the general workforce is stuck in ambivalence around the use of measurement-based tools right now, and that's why we don't do it more. I'm taking a very motivational-interviewing type of approach to this. And I think that in the end, the true answer is kind of what Michael is saying, which is that some measurement, and consistency in the process of gathering a structured assessment of somebody, is better than nothing. And I think the truth of what Dan is saying is also important, in that the measures that we have and the phenomenology that we have today don't always accurately portray the true clinical work being done and the clinical status of the individual. And we're stuck in between a little bit, and that's part of the frustration here. And yet, I think there are still opportunities to engage in more measurement-based care, and I think in general the process of doing that offers and affords a lot of dividends in clinical care. Let's take... Yeah, the guy at the back with the glasses on, you're up. Hi, I'm Owen Muir, I'm a child, adolescent, and adult psychiatrist, and, full disclosure, friends with some of the people on the stage. In my mind, the empirically right answer for the treatment is what the patient should get. We don't happen to know in advance right now what that is, but we would want them to get the best treatment for their problem, one that acknowledges and alleviates their suffering. And at the same time, we want to get it paid for, and we can't. And so I'm curious, and this is also a call to action: the new technology add-on payment, or NTAP, now has an open comment period at CMS as a result of a law passed in 2024, and we can all advocate for inpatient psychiatry to have additional payments for using technology. And when I translate what I just heard, it's: wow, there's an opportunity for our inpatient colleagues, who've been laboring under a different fee schedule than the rest of medicine, to have adjunctive med devices in the form of algorithms paid for in a new way. And so everybody should go look at the CMS website and submit an open comment, that you support it or don't, whatever it is, and that would be great, because anything new getting paid for in inpatient psychiatry sets the precedent that we can pay for new things in inpatient psychiatry. But this measurement-based care and algorithmization will lead to something, maybe, question mark, but not if it doesn't get done because we don't pay for it.
And so is this an opportunity to leverage a regulatory change around the new technology add-on payment code, question mark, which you can all advocate for as you see fit, to deploy med devices in inpatient settings, question mark? We'll do that. Up at the front. Is there data to support or to tell us whether or not using a standardized instrument is better than using a nonstandardized instrument? Is it more the attitude and the measurement focus that's important, or do the instruments actually contribute because they're tested? That is a great question. In general, I think the data suggest right now that the administration of an instrument is helpful. Yeah, I don't know the answer. There are people in the room who actually may know the answer about whether people have done trials of using nonstandardized instruments as opposed to whatever it is that patients do if they're not using an instrument at all. Most of the research that I'm aware of has tested the use of standard instruments in patient care and found it to be better than usual care, whatever that is. And I actually think that there are a bunch of reasons for that. Some of them may relate to the political economy of research and the incentives on researchers. Another is that standardized instruments can do something nonstandardized instruments couldn't do, even if it turned out that they had similar effects on improving patient outcomes, which is that they can be aggregated. And the fact that they can be aggregated, right, which means that you can roll up information across multiple patients at whatever level of aggregation you want, means that certainly in the research world, and maybe also in the practice world, there's just more momentum behind using standard instruments than nonstandard instruments. In the back. You. Okay. Hello. So I'm Bryn. I am a grad student, but I have never had to measure blinks. So that's good. And I'm curious: as we talk about the standardization of these different tools, and the integration of something like the PHQ and how we use it as this tool that we're leveraging, we also consider potentially more nonstandard measurement tools that are starting to make their way into the field, like digital biomarkers and that kind of realm of measurement of physiological or chemical responses. Where do you think that has a place in this conversation? I will caveat that I missed the first little bit of this conversation, so maybe that was already discussed, but I feel like a lot of our discussions have been about questionnaires or these more formulaic responses. And I heard Dan say you have to be careful as you take these insights and bring them into clinical practice. So I'm curious where that has a part in this conversation. From the perspective of research, the idea that we can digitally measure the behavioral outflow of someone's internal state is very much there, but it's been hanging a little bit out of reach. And that's because of what I said before, which is that when we look really closely at behavioral outflow, it doesn't usually fall into the boxes we expect it to. So there's an example outside of psychiatry, in another systemic illness that's just psychiatry with weird lab values: rheumatology. Thank you, Alan. And in that case, we're looking at actigraphy, like how much someone moves around. And one participant in this study moved way, way less when she said she was feeling better.
And it made no sense until we got some context, and it turned out that what she really loves to do is knit. And she couldn't knit when she was feeling bad. And so she sat and knitted more when she was feeling well. There's no way to capture that in walking actigraphy. When you look closely, you have to be able to look at nuance and context and all these other things that we're only now starting to learn how to do. Because for a long time, this all sort of happened in not a very unified way, and it still tends not to. Because there are individual stakeholders who are motivated to measure in a certain way for proprietary reasons. But we're getting better. I guess I'd say it a different way. Someday, maybe, we'll have clinically useful biomarkers in psychiatry. But right now we don't. So this is the best thing that we have, I guess. And, you know, Dan, all of your comments actually remind me of an analogy. I mean, yes, everything you say, I agree with. And we should learn to do all of those things. But it kind of reminds me of this: if you measure the coast of Denmark with a yardstick, you get an answer. If you measure the coast of Denmark with an electron microscope, it's almost infinitely long. But for lots of practical purposes, the yardstick answer is enough to be useful. I mean, I hope that came through. I really do hope this came through, which is that I'm in no way discouraging the use of measures. And, in fact, the better we are at collecting those measures at an item level rather than in aggregate, so collecting the measure in the EHR rather than transcribing a single summative value into the EHR, the more useful they are. So I think we can do both. So I'm going to use the Chair's prerogative. And thank you all, because we're nearing the end. Someone challenged the audience and asked, are people feeling measurement nihilistic? Was that the thing? I'm hoping that, if nothing else, you feel thoughtful about it. I think one of the things that I heard was that, with the right incentivization, measurement would help people. I know that as much as we find this idea of quality measurement and policy at CMS to be annoying at best, what's happened is that because of things like the measurement of cholesterol across populations, cholesterol across populations in this country has actually gone down, and we've seen some improvement in care for that as well as for diabetes. So these are really rough incentivizations, and it would be nice if we could get the science of measurement right, so that it really led us both to improve care and to better answer questions about our communities and how we think about them, treat them, et cetera. But I think in the short term there's a lot to be said for paying more attention in different ways, and for giving us the incentives to do that. So I know Dan had one thing. Quick. Anybody who wants to come up and talk, that's great. I know that the folks who are at home cannot actually come to the front of the room, which is a good reason to come to the meeting next year. So anyway, thank you all for being here. Appreciate it.
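A minimal sketch, in Python, of the roll-up property mentioned earlier in the discussion: because standardized instruments are scored the same way everywhere, their totals can be aggregated across patients, clinics, or time periods. The scores and clinic names are hypothetical; 10 is a commonly used PHQ-9 cutoff for moderate-or-greater symptoms.

from statistics import mean
from collections import defaultdict

# Hypothetical PHQ-9 totals tagged with the clinic that collected them.
phq9_totals = [
    ("clinic_a", 18), ("clinic_a", 12), ("clinic_a", 9),
    ("clinic_b", 21), ("clinic_b", 15),
]

by_clinic = defaultdict(list)
for clinic, total in phq9_totals:
    by_clinic[clinic].append(total)

for clinic, scores in sorted(by_clinic.items()):
    # Same instrument, same scale, so the mean and the share at or above the
    # cutoff are comparable across clinics; ad hoc instruments would not roll
    # up this way even if each worked fine for the patient in front of you.
    share_moderate_plus = sum(s >= 10 for s in scores) / len(scores)
    print(clinic, "mean:", round(mean(scores), 1),
          "share >= 10:", round(share_moderate_plus, 2))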
Video Summary
The session, led by Carol Alter, chair of the APA Council on Quality Care, explores the importance and challenges of implementing measurement-based care in psychiatric practice. Carol introduces the topic by emphasizing the significance of measuring outcomes to ensure quality and value in psychiatric care. The discussion points out that while studies show measurement-based care significantly improves patient outcomes, it is not widely adopted due to perceived difficulties, including lack of reimbursement and potential disruption of clinical workflows.

Eric Vanderlip highlights relational challenges between doctors and patients with measurement-based care, noting patient refusal of assessment tools and clinicians' skepticism of standardized measures relative to clinical judgment. Michael Schoenbaum presents an economic perspective, addressing barriers such as the costs of health IT and the transition to measurement-based practices. He stresses the long-term benefit of integrating measurement-based care into psychiatric training. Dan Carlin provides a nuanced view on the accuracy and utility of current measurement tools, advocating for a balance between using current measures and improving measurement methods for more precise care.

The panel and audience exchange ideas and concerns, suggesting incentives for both providers and patients to adopt measurement-based care. They explore how standardized versus non-standardized instruments affect practice. There is discussion of the need for feasible financial and systemic structures to facilitate a shift toward objective, repeated measurement in psychiatric care, pointing out the importance of both current and improved measurement techniques in enhancing patient outcomes.
Keywords
measurement-based care
psychiatric practice
patient outcomes
clinical workflows
reimbursement challenges
assessment tools
standardized measures
health IT costs
psychiatric training
measurement accuracy
incentives
systemic structures