“Should I have my patient complete this scale?”: S ...
Video Transcription
»» Thanks for coming. It's great to see you all here. Of course, this being the last day of the APA, it's great to see anyone here, so thank you very much. But I take it that you're as passionate as we are about measurement-based care. Who here is actually doing that at their hospitals, or practice, or elsewhere? Great. Well, we're hoping to leave time afterward for some discussion and questions, and it would be wonderful to hear your experiences as well. So I'm Bob Boland. I'm the Chief of Staff at the Menninger Clinic and a Professor of Psychiatry at the Baylor College of Medicine. And I want to introduce our other speakers today. To my right is Marcel Sanchez. He is a Professor of Psychiatry at Baylor as well, an inpatient psychiatrist there, our Director of Education and Research, and also our Clerkship Director at the Menninger Clinic and Baylor. In addition, we have Michelle Patriquin, who unfortunately cannot be here today, but let me tell you about her. She is our Director of Research, and she really is the advocate for measurement-based care, so I'm sorry she can't be here; I'll be presenting her part. She's also an expert in wearables and technology in psychiatry, and, as I said, a Professor as well at Baylor College of Medicine. And finally, because we don't want everyone to be from Baylor, we also have Vikas Gupta. Dr. Gupta is a child and adolescent psychiatrist, formerly an attending psychiatrist at the Mass General Hospital and Harvard Medical School in Boston. He's interested in psychiatry education and is (wow, what's going on over there?) a former Child PRITE Fellow. His interests include mood disorders and integrative psychiatry. So we're going to start with Dr. Sanchez, who's going to tee us up. Dr. Sanchez. »» Thank you for the introduction, Dr. Boland. I hope I can keep your attention; it sounds like something really fun is going on next door. You know, I heard a few claps, a few noises.
So let's hope that we can keep our attention here. My part of the presentation has to do with screening instruments, more specifically their role in the initial assessment of patients and whether or not they help us when it comes to diagnosis. Here are my disclosures, quickly. We are going to start with a small case vignette that illustrates how screenings might be able to help us, but also how they can sometimes create extra problems for us. So we are talking about a 24-year-old man who has an outpatient appointment scheduled with you. This actually happened to me in real life, back when I used to do outpatient work; now I only do inpatient, but I used to be an outpatient psychiatrist. This patient had an intake scheduled with me for two days later; it wasn't the same day that this happened. The clinic used to email patients a package of screening instruments that they had to fill out in advance, to fulfill a series of regulations and requirements. So this patient completes the PHQ-9 as part of this routine check-in process. And then I am there in the middle of my clinic, and the clinic administrator calls me saying that, while they were reviewing the data, they saw that the patient scored a 3 on question 9 of the PHQ-9. For those who don't remember, question 9 is the question about thoughts of death or suicidal ideation. So basically, the clinic manager wanted to know what she should be doing. And I have all my patients who are waiting, and I have to decide what I am going to do. So here are some of the options in that particular situation. Option A: stop what you are doing and call the future patient, a patient who is not established with you yet, to assess for acute suicidality.
Option B: tell the clinic administrator to call law enforcement for a welfare check based on the answer that the patient gave. Option C: just sit tight, cross your fingers, and hope the patient will show up two days later for the appointment. And option D: call risk management and tell them that this is the reason you don't like screening instruments being completed remotely. You don't need to tell me what you would do; just write down your answer, and at the end of this presentation I'm going to tell you what I did. This is definitely a Sophie's choice; there is not a single right, perfect thing to do, but I can tell you what I did. So this is the outline of our presentation. We basically start with the clinical vignette, which has happened already, so cross that part out. Then we are going to talk about screening in psychiatry: some general aspects of screening, some history, and some of the controversies behind the use of screenings in psychiatry. Third, we are going to talk about practical recommendations for screening, some practical tips regarding which screening to use and some of the pros and cons of certain screenings. And then there are a couple of slides with the conclusions. So let's get started. As you all know, psychiatry is not an easy specialty, and psychiatric diagnosis is not an easy thing to achieve. It is very time-consuming; no matter how many DSMs, criteria, and standardized guidelines we have, it is still very hard to reach a proper diagnosis in psychiatry, and the main reason for that is that psychopathology is subjective.
We deal with something that is very subjective, very individual. Still, when you consider the shortage of mental health services, how many counties don't have a single psychiatrist, and how many places are underserved even with regard to other mental health providers, and when you also consider the scenario in which we currently have to practice, with high patient volume and limited time with patients, the idea of a questionnaire, an instrument that the patient could fill out and that could make our lives easier and optimize the process of diagnosis, sounds very appealing, very attractive, right? And this idea is not exactly new. Looking back to the 1940s, Harry Stack Sullivan, probably one of the biggest names in the history of psychiatry and the creator of the interpersonal school of psychotherapy, created a questionnaire called the fitness-for-duty screening, to be used by the US military to screen potential soldiers: conscripted people who were either volunteering or being volunteered to join the army. They wanted to make sure that people with mental illness were not recruited, first because they were afraid that could create problems in the line of duty, and second because there was already a concern that if these people developed mental illnesses during the war, they would become a problem after their return, so this was for purposes of prevention. Not very different from the issues we face nowadays with the skyrocketing numbers of PTSD in the military. But anyway, nowadays this is considered a very dark piece of psychiatry. Several analyses came later, and one of them was actually done by Dr.
William Menninger. These aftermath analyses of the data, the process, and the questionnaires indicated that the questionnaires were highly biased; there was a certain moralistic aspect involved in them. But what Dr. Menninger says here in this slide is something we already know and still face with the questionnaires we currently use: the problem of false positives. You have a lot of people who score positive on these screening instruments who don't actually have the disease, and then instead of making your job easier, it makes your job a lot harder, depending on the situation in which these screenings are being used. So, over the last few decades, there has been an explosion in the number of screening instruments in psychiatry. Just for depression, we have more than 280 instruments, and they are extremely variable in their features: there are self-assessments, interviewer-based assessments, some that are paper-based, some that are electronic, and some clinics that incorporate them into the check-in process. Nowadays, I think we are starting to see more and more app-based screenings. So there is no shortage of modalities when it comes to screening. Even though, as we are going to see, we don't necessarily need to use screenings, there is a trend toward incorporating screening instruments into the standard of care, for a series of reasons that we will discuss. Specifically here in the United States, the U.S. Preventive Services Task Force issues, every now and then, new positions about the use of screening instruments. And while some of them have received the status of a recommended step, there are still a few that are considered, as of now, to have insufficient evidence.
So, you can use them, but it's not formally recommended; it's not necessarily clear that these instruments are going to help you, and it's not clear whether they could cause potentially negative effects on the patient you are screening. Let's talk a little bit about the ideal screening tool. The ideal instrument for screening our patients is brief and can be self-completed; if you have to ask the questions yourself, it defeats the purpose of doing the screening. It needs to be easy to score. Many of the screens we have use very complicated scoring algorithms, with items you have to reverse, and that is not feasible in a busy practice. The screeners need to have good validity, meaning they measure what they are supposed to measure; we are going to discuss that in one of the next slides. They need to have good psychometric properties: sensitivity, specificity, and so forth. And, most important, they preferably need to be free. We don't want to have to pay for screenings. Everybody who creates a new screening usually doesn't do it purely out of love for the profession; they do it because they hope at least to make some money, if not for themselves, then for the institution where they work. So we usually try to stick with the screenings that are in the public domain. Of course, this ideal screening tool is like a leprechaun: it doesn't exist. So we have to be very careful when it comes to picking which screenings we use in our practice, and how many, and try to come up with some way to optimize their administration. And that is because there are a few caveats. Time spent is the most important one. There are places that have a beautiful package that takes the patient 40 minutes to complete.
The patients don't do it, or they do part of it, or they do it and leave it at home if it's paper-based. And believe me, I used to work in a clinic where we were really invested in trying to make this work, and there is no easy solution. We tried to mail the screenings, but who is going to mail them? We tried to have the patients complete them right before the appointment, but the patients are either late or right on time, or they take too long and the doctor wants them to finish quickly. So time is a serious issue. There are also validity issues: more and more we see that screenings don't necessarily measure what they are supposed to measure. And we are going to talk about the psychometric properties of the instruments as well, and how sometimes we can misinterpret them. Another serious concern involving screenings is self-diagnosis. If you go online and type in any kind of mental illness or mental disorder, thousands of pages pop up where patients can complete different types of instruments for free. Some are posted by non-governmental organizations, others by private clinics, and usually not clinics run by psychiatrists. To be fair, there is usually a disclaimer saying that the screening is not diagnostic, and so on, but the patient doesn't read that disclaimer. The patient comes to you and says, "Doctor, I filled out this screening and it said that I have dissociative identity disorder, so I need you to treat me for my dissociative identity disorder." So now you're in trouble. ADHD is another one that often falls into that category. Bipolar disorder, I'm not even talking about it; we had a panel a couple of days ago where we discussed it.
So self-diagnosis is a serious issue, because with all the subjectivity involved in our diagnostic process, patients already come with this preconception, and you have to do work that forensic psychiatrists are more used to doing than those of us who are mere mortals, non-forensic psychiatrists: you have to deconstruct what the patient is trying to sell you, trying to present to you, and that is not an easy thing. Next, the potential negative impact of screenings. There tends to be a certain overreaction; there have been reports of patients who screened positive on suicide screenings, and then someone got desperate and called law enforcement, and the outcome was not the best possible. There was also a concern in the past about the negative impact that screening for depression could have on patients, in terms of activating symptoms, but nowadays we know that this is more of a myth; it is not something we see in real life. Probably the most important caveat is the lack of an action plan for a positive screening. Okay, a patient screens positive: what do I do with that information? Too much information can sometimes be worse than too little information if you don't know exactly what to do with what you got. We were talking about validity, and this slide, I know it looks a little cluttered, but it's not really that complicated. This is a matrix of symptoms. These authors did a very nice job: they gathered 126 screening questionnaires (I wonder how long that took them), looked at the questions these questionnaires included, and ran a very complicated statistical analysis to compare these items and determine which dimensions of psychopathology they seemed to be measuring.
Then, once they did that, they compared these different symptoms across the different instruments, not only across different mental disorders but also within the same mental disorder. And the results are a little alarming. The colors indicate the strength of the similarity across instruments for the same condition. You expect a screening for depression to be different from a screening for schizophrenia, because the dimensions are different. But what is alarming is that among screens designed to measure the same condition, there wasn't as much agreement as we would hope. Actually, the highest similarity was for OCD instruments, at 58%. Bipolar had the lowest, 29%. Depression, for which you saw all those instruments we have, was only 40%. This is a little concerning, because depending on which instrument you pick, your patient might screen positive or might not. And if you decide to be overzealous and use more than one instrument, you might end up with a mess of data, because the patient is going to score positive on some instruments and not on others, and then what do we do with all that? Another question regarding validity is related to the gold standard you are using. Basically, a screening instrument is validated by comparing it against a gold standard, and the gold standard, most of the time, is a standardized interview designed primarily for research purposes. These standardized interviews are largely based on DSM constructs. So when a screening scores positive, many times what it is saying is that this patient has an X, Y, or Z chance of meeting DSM criteria for that particular condition. By doing this, I am missing a lot of the sub-threshold forms of the diseases, the ones that don't meet full DSM criteria.
This is something we have a hard time explaining to residents and medical students: DSM criteria are just that, criteria. There are patients who do not meet criteria for schizophrenia or for bipolar disorder whom I still think have schizophrenia or bipolar disorder, and the screenings are going to miss that kind of nuance, that kind of detail. So, give me a show of hands here. We are talking about psychometric properties: which of these is the most important for clinicians? We are not talking about researchers, and we are not even talking about primary care providers; I'm talking about psychiatrists and mental health providers. Which of these is most important: sensitivity, specificity, positive predictive value, or face validity? I'm not going to go into the definitions we all studied for the USMLE, okay? I know this is very confusing and we are not going to waste our time on that. But who would go with A, sensitivity? Okay. B, who thinks it's specificity? Good. C, positive predictive value? D, face validity? Good, okay. So, all of them are important, but if I had to pick two, I would go with sensitivity and positive predictive value, and the choice depends on the context you are considering. If you are screening people in a primary care setting or in the community, I want high sensitivity. If I am in my office with the patient and I want to know whether that screening is helping my diagnosis or not, positive predictive value is the best psychometric property for that purpose. And what is positive predictive value, just to refresh our memory? In a few words, it's the likelihood that people who score positive on the instrument actually have the condition you are screening for.
It's almost similar to specificity, but not exactly the same, because positive predictive value depends largely on the prevalence of the condition you are screening for in that particular population. A very good example of this is an experience I had once with the M-FAST. You may know this scale; the forensic folks certainly do, but many clinicians use it. I used to work in a hospital with a high proportion of patients who were homeless and wanted to find a place to sleep. They would show up stating that they were suicidal, and they would be admitted by the resident on call until the attending could see them the next day. There was a high level of concern about malingering. One attending would request this scale, the M-FAST, which is designed to assess for malingering, and if the patient scored positive, that result would largely impact his management, in terms of discharging the patient or enforcing a diagnosis of malingering. The problem is that the M-FAST was initially validated in forensic settings. Nowadays there are studies in non-forensic settings, but initially it was validated in forensic settings, and in forensic settings the prevalence of malingering is significantly higher than in other settings. So, in a forensic setting, the positive predictive value of the M-FAST is 86%, which is a decent number, right? However, in a non-forensic setting, this falls to 57%. So 43% of the patients who score positive on the M-FAST are not actually malingering. Just something to keep in mind when we are thinking about any kind of instrument. So, let's move to the third part of our presentation, practical recommendations for screening. And the first question is: do I have to use a screening tool?
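[Editor's aside: the prevalence dependence the speaker describes follows directly from Bayes' rule, and it can be made concrete with a few lines of Python. The sensitivity and specificity values below are illustrative assumptions, not the M-FAST's published figures; only the direction of the effect matters.]

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: P(condition | positive screen)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

def npv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Negative predictive value: P(no condition | negative screen)."""
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

# Same hypothetical instrument (93% sensitive, 83% specific), two settings:
# a high-prevalence setting (e.g. forensic, ~50%) versus a low-prevalence
# general clinic (~10%), where most positives become false positives.
print(f"PPV at 50% prevalence: {ppv(0.93, 0.83, 0.50):.0%}")
print(f"PPV at 10% prevalence: {ppv(0.93, 0.83, 0.10):.0%}")
print(f"NPV at 10% prevalence: {npv(0.93, 0.83, 0.10):.0%}")
```

With these assumed numbers, PPV falls from roughly 85% to roughly 38% as prevalence drops, while NPV stays above 99%: the same asymmetry the speaker describes for the M-FAST across settings, and for the MDQ's strong negative predictive value later in the talk.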
Yeah, technically, no, you don't. If you are in private practice, cash only, and you spend an hour with your patients, you don't need to do screenings at all. However, institutionally speaking, there is more and more movement toward having to use screening instruments at some point in our assessment of the patient, and it's good to keep some practical points in mind. First, pick instruments you are familiar with and whose results you feel comfortable interpreting. Second, choose just one instrument for each condition. I have seen places that screen with four or five different instruments for substance use; some instruments capture things better than others, but just getting a hold of all those screening results becomes an extra task for you, and that is not very efficient. Don't be overambitious about how many screenings you incorporate into your practice. Pick a few, the ones you feel are truly going to make a difference. And, if at all possible, make sure patients complete screenings on the same day as your assessment. This is particularly important for patients who complete these screenings remotely, because of the situation we described at the beginning of the lecture; the concern is much smaller if the patient's appointment is on the same day they completed the screening and gave an answer that could raise potential safety concerns. Which instruments to pick? These are my favorite ones. You don't have to pick all of them. You see that they are all relatively short. Even though their positive predictive values are not that big, all of them have a pretty good negative predictive value, which for me as a psychiatrist is the most important thing.
I want to know, if the patient scored negative on that screening, what is the likelihood that the patient does not have the condition? And these numbers are good. If my patient scores negative on the MDQ, the bipolar disorder screening, there is a 90% chance that my patient does not have bipolar disorder. It's not 100%, of course, but that is data I find attractive, data that helps me. But you see, the positive predictive values are not that robust for almost all of the instruments presented here. Screening tools for psychosis: this is something we often get asked about, and there is a whole discussion about whether or not we should use standardized screening instruments for psychosis. That is related not only to the paucity of instruments that are easy to administer, but also to the nature of psychosis as a condition. The patient's judgment, which is by definition impaired, could affect the quality of the answers they give. Patients with psychosis might not have an exacerbation of their symptoms, but they could develop an extra layer of guardedness if they see this kind of question in the package they have to fill out prior to the appointment. So, these are some of the instruments available, and you can see that most of them are very long. Not all of them are in the public domain; the BASIS-24, which is actually a very nice instrument, you have to pay to use. And the questions for psychosis in these last three instruments are mixed in with questions for other dimensions of psychopathology, so it starts overlapping with other things you might be doing. But there is one particular instrument, and I'm not going to invade Dr.
Gupta's presentation, but there is one particular instrument that was validated for adolescents, this one, the Adolescent Psychotic Symptom Scale, which is not too extensive: it's basically seven questions where the patient answers "yes," "maybe," or "no, never." They get one point for "yes," 0.5 points for "maybe," and zero for "no, never." This instrument has been tested in adolescents at high risk for psychosis, and it showed some promising results. Interestingly, these values here for positive and negative predictive value are not for those particular symptoms; they are for psychosis in general. So, something for us to keep in mind. I don't use screening for psychosis, but if I were using one, this is probably the one I would pick, with all the reservations about it being validated for kids and not for adults; still, it might help us in a certain way. So, this is my last slide, because we are moving to the conclusions. Screening tools are what their name indicates: they are just screening tools, and they should not be used for diagnostic purposes. I think that is the takeaway message; if you remember it, that's mission accomplished. And second, have a game plan for patients who score positive on screening tools. As we were saying earlier, too much information might cause more problems than too little. So, if you are doing screenings, make sure you know what to do when they come back positive. Going back to my situation: in that particular case, I ended up stopping what I was doing and calling the patient. I was really worried about him, and I don't think I would have been able to sleep the next two nights if I had not done that. I called, and the patient said, "No, no, doctor, those are just passive thoughts. I haven't had them for 20 years."
Yeah, 20 years; he's 24, so that would be since he was four, but in any case he hadn't had them for a long time. He showed up for his appointment two days later, and we had a very nice discussion about his case. Thank you all very much. Thank you. »» Of course, that's a very appropriate question. This instrument was validated in 2011 or 2009, and at that time it was, you know, TV and radio. We probably should create a new instrument asking, "Do you think your podcast is talking to you?" or "Do you think your cell phone is speaking to you?" But that would probably demand a new validation, because we cannot modify an instrument and still use it. But it's a very appropriate question, thanks. Good point. »» So I'm just going to tell you a little bit about what we're doing at the Menninger Clinic around measurement-based care. Once again, this talk should be given by Michelle Patriquin, who's our Director of Research at the Menninger Clinic, but she could not make it today, so I'm happy to do it in her stead, and I'll do my best. Just in case you don't know, the Menninger Clinic is an affiliate of the Baylor College of Medicine. It's a relatively small, psychiatric-only private hospital; right now our census is probably about eighty-something, across several inpatient units and one residential unit. We also have outpatients and various other programs, and of course we're in Houston, Texas. Some of the older people here might recall that Menninger was once in Topeka, Kansas, and then it moved; that's a long story I won't go into now, because the short answer is it's before my time at Menninger. Not before my life, just before my time. All right, so this talk, and our philosophy of measurement-based care, is built on several premises: one, that mental health is a universal human right, and that everyone has a right to
available, accessible, acceptable, and good-quality care; quality care in particular is what we're trying to concentrate on. Also, that lived experiences are important, and that we should find ways to understand the patient's experience, meaning that expertise that comes from lived experience is as valid as that which comes from working with, caring for, or researching that experience. In other words, we need to hear from patients about what they're experiencing, which is our justification for using lots of patient-reported outcome measures, in other words, self-report measures. Not a big surprise, right? Most scales are that; even the clinician-guided ones like the Hamilton, even the SCID and things like that, are ultimately patient-report measures: we ask patients questions, they give us answers, and from that we make our judgments about how they're doing. So those are our premises. What I'm going to talk about today is, first, briefly, what measurement-based care is; I think Marcel did a great job describing that, so I won't spend a lot of time on it. Then, what it looks like at Menninger now, and what is next. Here's a fancy definition of how Menninger sees measurement-based care, based on the work of Connors. If you don't want to read all of that, you can tell that Michelle is younger than me because she uses lots of acronyms that I have to look up. But if the above is too long and you're not going to read it from start to finish: basically, we should use outcomes to inform and adjust mental health care. "Too long, didn't read," apparently, is what that acronym stands for. At any rate, we do feel, in short, that you can't really say you're doing evidence-based mental health practice if you're not measuring the outcomes of what you do. And yet, as we all know, very few
psychiatrists actually do that. In fact, according to one study, only about 18 percent of psychiatrists and 11 percent of psychologists reported routine administration of patient-reported outcome measures. What are they doing instead? Clinical judgment. But as we all know, clinical judgment can be unreliable; we are biased people. In fact, at least one other study shows that clinicians who relied on clinical judgment alone were able to detect only about a quarter of their patients who experienced increased symptom severity. That kind of has face validity when you think about it, right? We so want our patients to get better that we might sometimes have a blind spot to the fact that they're actually getting worse. So why isn't everyone actually doing this, then? Why aren't we all measuring our patients' outcomes? Well, practically speaking, we have to admit that there are real barriers to measurement-based care. It takes time, it takes effort, it takes personnel, and all of that means money and resources. It also takes a certain amount of knowledge and skill: we can all download scales, but to use them properly, interpret what they mean, collate the data, and use the scales meaningfully, you need some experts, or at least people with relative expertise. And the fact is, sad but true, I think there still is a kind of reluctance in our field to embrace measurement-based care. There's a general sense that our patients are very complex, and yes, they are, and trying to reduce them to these short, sometimes one-page scales with simple questions and numbers that come out of them just seems very reductionist to us, and kind of distasteful as well. That said, there are a lot of benefits too. Clinically, it is going to help us; I don't think anyone's saying it's all you
should be doing but it does give you data that you can work with and help you make your clinical judgments it's good for research right you can't report results of things and report you know what you're doing from a research point of view without actually measuring what you do it's good for quality reporting all hospitals all practices should be doing quality improvement and this gives you the data to be able to do that how can you say that what you're doing is actually helping patients if you aren't properly measuring it and of course you know the fact is it's also good for marketing we're not the only hospital we're selling one that sort of publishes our results on our website to kind of say look you come here it's particularly important to us because those of us who know us know that our length of stays are an on average you know significantly longer than most hospitals how do you justify doing that you know coming to a place and spending say a month instead of a few days or a few weeks at a hospital well because we can show that you know we have good outcomes if people do so I'm not going to go into great detail about the individual scales as Marcel said there's an explosion of scales and we're making use of a lot of them for different things the point of the next very busy slide is that we use scales at various different points so it during admissions we give weekly scales to patients we give them at discharge and then we do them at follow-up and we're lucky to because we have a very devoted research team that we're pretty good at following up on patients and getting follow-ups for at least up to about one year after their discharge to see you know what their outcomes were because obviously how they do in the hospital is important but even more important is how they do when they leave and as Marcel implied there's a scale for just about everything so you can see here that for a variety of different things we can we can measure it or at least get some useful 
information be it about depression about sleep quality suicide quality of life personality functioning trauma and various other things as well these are adult the adult scales that we use and honestly you know this it's published as you can see one of our researchers Ramirez published this a while back and though I don't think we uploaded the slides to our site or did we and see them but I'm not sure we did but you know we're happy to we're happy to read if you reach out we're happy to share what we're doing as well and we also do the same for adolescents and as Marcel said you know you need to sort of adjust sometimes and there are different scales for adolescence care as well yes we are do we know why it's I mean I can only speak for Michelle her belief is that no one scale is all that good and that we need to get different measurements to sort of kind of you know get a more accurate sense of what's going on don't ask me hard questions yes for a while they were using a lot of the cognitive portion of it as well I have to say we have pared down on that some and we're and we're kind of relying more on the executive functioning portions of it to do that yeah and you know we'll definitely have time for a question Q&A later on and so I like I said there's also adolescent scales as well and I'm just going to give you a really sort of quick over you know sense of what we're doing and how we're doing this so we use Qualtrics as our database for where all this information goes we have all the scales on iPads and yeah I like that she adds it's also there we have you know these very tough cases that they're in from Otterbox so because we're giving them to patients and the patients are great but they may drop them or throw them and so you know they're usually pretty safe and we haven't really broken any iPads to date this has been going on for many years we also have as you can see in the bottom a pen and paper option if for some reason a patient doesn't want to use the iPad 
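As an illustration of the assessment cadence mentioned above (admission, weekly during the stay, discharge, and follow-up out to about a year), a minimal tracking structure might look like the following sketch. The names and follow-up windows are hypothetical, not the Menninger system's actual schema:

```python
# Hypothetical sketch of an outcomes-assessment schedule: batteries at
# admission, weekly during the stay, at discharge, and at post-discharge
# follow-up windows. All names and windows here are illustrative.
from dataclasses import dataclass, field

FOLLOWUP_WEEKS = [4, 12, 26, 52]  # post-discharge windows (assumed)

@dataclass
class AssessmentPlan:
    scales: list                                    # e.g. ["PHQ-9", "PCL-5"]
    completed: dict = field(default_factory=dict)   # timepoint -> responses

    def timepoints(self):
        inpatient = ["admission", "weekly", "discharge"]
        followups = [f"week_{w}_post" for w in FOLLOWUP_WEEKS]
        return inpatient + followups

    def due(self):
        """Timepoints with no recorded battery yet."""
        return [t for t in self.timepoints() if t not in self.completed]
```

For example, after the admission battery is recorded, `due()` still lists the weekly, discharge, and follow-up windows, which is essentially what a research team's tracking sheet has to do.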
Our method of collecting the information is through groups. On each unit there's a weekly outcomes group that all patients come to; they're obviously not forced to, but most patients do. They all get iPads and fill out the various scales. It takes a while, which is why we give them time to do it during the group rather than handing the scales out at the end of the day and expecting patients to do them on their own time. It takes a lot of encouragement, of course. Our experience, and we can talk more about what works for you all, is that this works better when it's not seen as just a research endeavor. It's the researchers and research assistants who come to the unit and run this, but all the same, it works much better when the clinicians support it and let patients know it's really part of their care, not some extra thing we're doing for our own benefit that has nothing to do with them. You can imagine: if you're a very sick person on a unit, going through a lot, and someone comes along and asks whether you want to be part of a research study, you might not want to be; you have a lot going on, and you'd rather volunteer at some other time. But if someone presents it as an important part of your care, saying we need to track how you're doing with objective measures, then a patient is much more likely to do it. So we depend a lot on our clinicians to make the case for participation, and we get good participation: around three quarters of patients at any given time are usually participating in our studies.

It works even better when we incorporate the results into our discussions with patients during clinical care. As the doctors meet with patients each week, we go over what we've seen so far in their outcome measures, which is useful to patients because they can see the data being put to use. I'm sure you've all had the experience of a patient who you think is getting less depressed but who doesn't feel it yet; to them, they're still depressed, still mired in it. I'm not going to tell a patient, you're wrong, you're better. But it can be encouraging to say: I know you still feel depressed, but you're actually scoring better on certain measures; I understand it doesn't feel good just yet, because significant things are still going on, but we can take some hope that some of our treatments are starting to work. And we get easy, intuitive visualizations: Qualtrics produces this data, and we can present it to both clinicians and patients to show where they stand and how they're doing on the various scales.

All of this has translated into a lot of studies, presented by Dr. Patriquin and her team at various meetings. It would be a whole different talk if I went into the results of every individual study we've done on different measures and treatments, so, there's that TLDR again, rather than tell you all the studies, I'll just give you the conclusions. We find that a majority of our patients make consistent improvements in outcomes, and that it takes about four to six weeks to start seeing significant improvement. There's often a boost right at the beginning, but sustained improvement takes a while, which, once again, is our justification for keeping complex patients in the hospital longer than is traditional these days.

We've also found caveats. For instance, and I'm sure many of you have had this experience or know it intuitively, but it's good to have the data: some patients who make very rapid improvements in their suicidal ideation during hospitalization are actually the patients at highest risk of dying by suicide after discharge, particularly when we follow them out for a year; it's often within the first few months, but it can be longer. Fortunately it's a rare event, but it is statistically significant in that group, and we have published it; you can see some of the references below. It's a frightening thing, and the question remains: how do we know which patients those are? Some people improve and simply improve; some improve and are actually at great risk of suicide. I'm sad to say we're still working to tease out the risk factors that distinguish those groups, since most patients do get better in their suicidal ideation. But one trigger does seem to be the most rapid improvements, in the first couple of weeks.

And finally, something very important and near and dear to anyone working in inpatient hospitalization: good sleep is critical. Our next president of the APA is going to be talking to us about lifestyle, and what's really more important than sleep? Nothing, really. It's true here as well: good sleep is critical for positive mental health outcomes generally, and certainly for inpatients. What's the problem? Well, you can imagine, and I'll say more in a moment about where we're going as we try to take our own findings and the lived experiences of patients and turn them into real interventions, for instance around sleep. We know that patients don't sleep well in hospitals, and that's especially true in psychiatric hospitals. One writer describes sleep disruption in inpatient psychiatry as "quiet torture." We know what happens, and we cause it through our nighttime observations, because every hospital checks on patients at night to make sure they're okay and not trying to hurt themselves; hence the ubiquitous 15-minute checks. I don't know about you, but if I knew someone was going to come into my bedroom every 15 minutes, or at least open the door and look at me, how well would I sleep? Lord knows that at the rather noisy hotel I've been staying in, just hearing other doors open is bad enough, much less my own. If there's any consistent comment I get from patients when I go to units and ask about their experience, and I see you nodding, I think we all know this, it's about how awful the sleep quality is. Here's a quote from one patient, about the in-person nighttime checks: "I hate them. What's the point of observing me? They're loud; they disturb me from sleeping. The slightest little noise would wake me up. And I don't get much privacy." Our staff do try to be careful not to disturb patients; they often put a towel in the door so it doesn't click when they open it to look. But the fact is, we're all pretty aware when someone is opening doors and looking at us while we sleep.

Other things we're working on, both for sleep and beyond: we're trying to make our scales more culturally sensitive, using bias-free language and translating things properly. We're in Texas; a lot of different languages are spoken there, and we want the scales to be appropriate for all patients, not just in the language itself but in cultural sensitivity and appropriateness. As I mentioned at the beginning, Dr. Patriquin is an expert on wearables. We already use them some, and we're trying to incorporate them more, for sleep and for other measurements, to get objective measures as well. We're trying out different things, usually watch-type devices, but other form factors too. I have one of these smart rings; it told me my sleep wasn't too good last night and advised me to take it easy today. Thanks, ring. But we're trying to incorporate it all, and patients have actually enjoyed it and been quite willing to wear different devices so we can get other measurements. We all know that patients' reports of their sleep are often quite different from what's actually happening.

We're doing much more interdisciplinary work, with lots of work groups, including interdepartmental ones. We're developing more real-world trials to see how patients do across time, both in the hospital and outside. We have a lived-experience board to get more patient input, which is very important. And of course we're incorporating AI. It seems to be a rule, here and at every meeting I attend, that you can't give a talk without mentioning AI, so this is me mentioning AI. Right now a lot of it is back-end work: helping us interpret the voluminous data we're collecting on patients and find patterns we might not recognize. I imagine we'll use it more in the future for making predictions about patients and outcomes that would not be intuitive to us. And we're always trying to come up with new and updated measures, because no measure is perfect.

So, there's that thing again: if you hear nothing else from this talk, we all should be using measurement-based care. Why? Because quality mental health care is a human right. And what does quality mean? Who defines it? Ultimately it should be the patients who define quality, and even though we might see these scales as reductionist, they are one way to get patient input into how they think they're doing. I want to acknowledge the whole research department; on the far left there you can see Dr. Patriquin, our Director of Research again, who is responsible for all this work, and look, there's Marcel as well, along with many research assistants and other staff who help us. So thank you very much, and I'm going to turn it over to Dr. Gupta.

»» Good morning all. I'm really pleased to be presenting this morning with highly distinguished and esteemed colleagues. How many of you have heard of Kaplan & Sadock's Comprehensive Textbook of Psychiatry, or the Kaplan & Sadock's Synopsis of Psychiatry? And how many of you know Dr. Boland is one of the main editors of those books? Okay, great. Dr. Boland is one of the new editors, along with Dr. Marcia Verduin, of the Kaplan textbook and the Synopsis. So I'm really pleased to be presenting with Bob. And Dr. Sanchez, thank you so much for putting this together and conceptualizing this talk. My name is Vikas Gupta, and I'll be talking more about the child side of things. Dr. Sanchez and Dr. Boland have already discussed in detail the utility of rating scales and why we should be using them. There will be some overlap, but I'll try to bring out the nuances on the child side. I do not have any relevant conflicts of interest to disclose.
My learning objectives are to identify the common rating scales we use in child psychiatry, to evaluate the benefits associated with these scales, and to look at some of the potential risks and challenges that go with them. Dr. Sanchez talked a lot about the pragmatics of using these scales, so I will also touch briefly on their utility and practicality in clinical practice as it pertains to child psychiatry.

To start, why is assessment important in child psychiatry? We all know that assessment matters because it helps detect problems early and leads to early interventions, which improve outcomes. Assessment is also important for accurate diagnosis and treatment planning in child psychiatry, just as in adult psychiatry. It helps us monitor progress, understand the nature of a patient's impairments, plan treatment, and ensure safety through risk management, and it sometimes supports research and evaluation as well.

Now, what is the role of rating scales and standardized instruments in child psychiatry? As you may know, many children have trouble talking about their behavior, and rating scales can often help clinicians fill in the blanks. Rating scales also play a significant role by providing a structured, reliable, and valid method for evaluating various aspects of mental health and well-being. They allow systematic screening of psychiatric symptoms and behavioral problems, and they provide a standardized framework for clinicians to look at a wide range of symptoms, from anxiety and depression to ADHD, conduct problems, and so forth. Standardized instruments also help in diagnostic assessment by providing criteria-based measures for identifying symptoms and determining diagnostic thresholds. Finally, they provide a baseline measurement of symptoms and serve as a reference point for monitoring change over time.
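To make the "baseline as a reference point" idea concrete, here is a minimal sketch of change-from-baseline tracking. The 5-point default for "meaningful change" is purely illustrative, since real thresholds are scale-specific:

```python
# Illustrative progress monitor: record each administration of one scale
# and report change relative to the first (baseline) score. The default
# threshold of 5 points is an assumption for illustration only.

class ProgressMonitor:
    def __init__(self, scale_name, meaningful_change=5):
        self.scale_name = scale_name
        self.meaningful_change = meaningful_change
        self.scores = []   # (week, total_score) in order of administration

    def record(self, week, total):
        self.scores.append((week, total))

    def summary(self):
        """Change from baseline; None until two administrations exist."""
        if len(self.scores) < 2:
            return None
        baseline, latest = self.scores[0][1], self.scores[-1][1]
        delta = latest - baseline   # negative = fewer reported symptoms
        return {
            "scale": self.scale_name,
            "baseline": baseline,
            "latest": latest,
            "delta": delta,
            "meaningful_improvement": -delta >= self.meaningful_change,
        }
```

Recording weekly totals of 18, 15, and 12 yields a delta of -6 and flags meaningful improvement under this (assumed) threshold; the point is simply that a stored baseline turns each later score into an interpretable change.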
As I said before, these scales help with treatment planning by identifying the specific challenges and impairments noted initially. They also help with progress monitoring and with outcomes. Many leading centers administer a lot of rating scales, as Dr. Boland described, and that supports outcome evaluation: what impact did treatment have, what difference did it make, what changes were noted. Sharing those results with patients also improves compliance. Dr. Boland already talked about the research side, so I'll just note that these scales are often used in research as well. Overall, rating scales play a critical role in assessment by providing objective, reliable, and reasonably comprehensive measures for screening, diagnosis, and planning; they help with outcome evaluation and thus enhance the quality of assessments by ensuring objectivity, validity, and consistency.

These are some of the commonly used rating scales in child and adolescent psychiatry. In the interest of time I won't go through the details of each, but for those of you who are not in child psychiatry and wish to do child and adolescent work, this can serve as a reminder of which scales to consider. The Child Behavior Checklist (CBCL) is one I'd like to mention briefly: it's a widely used parent-report questionnaire that assesses a broad range of emotional and behavioral problems in children and adolescents. The advantage of broad-stroke scales like this is that they can cover multiple challenges with one assessment, versus narrow-band rating scales, which focus on one diagnosis or one clinical problem. There are several other scales; for ADHD in particular, the Conners rating scales, the Vanderbilt, the SNAP, and the ADHD Rating Scale are critically important.
And when it comes to autism, the CARS, the SRS (now in its second version), and the ADOS become important. For anxiety disorders, which are the most common conditions you will see, the SCARED is a very commonly used scale; it was developed at the University of Pittsburgh by Dr. Boris Birmaher and his colleagues. There are several other scales of that nature, such as the Beck Youth Inventories, a set of inventories that look at different domains and challenges, including depression, anxiety, anger, disruptive behavior, and self-concept.

The CASII is a standardized assessment tool developed by the American Academy of Child and Adolescent Psychiatry (AACAP) to determine the appropriate level of service intensity needed by a child or adolescent patient and his or her family. Similarly, the ECSII is another AACAP instrument used to determine the intensity of services for younger kids, between the ages of zero and five. AACAP maintains a very useful repository of these scales, the AACAP Toolbox of Forms, which includes some of the more commonly used and recommended measures. Some of these I've already mentioned, but you can take a peek at the others. Of note, there is a depression scale; there is the Child Mania Rating Scale, which is somewhat analogous to the Young Mania Rating Scale; and there are several others, including the Mood and Feelings Questionnaire and various forms of ADHD assessment. Here are some additional assessments for PTSD, the PCL-5, and for OCD, the Children's Yale-Brown Obsessive Compulsive Scale.

Let me briefly compare some of these scales, especially for ADHD, because ADHD is a very common diagnosis in younger children and adolescents. Two scales are very commonly used, the SNAP and the Vanderbilt, and there is often a discussion about which is better and which you should use.
All these scales have their specific utilities; some are easier to use, some are more flexible. In terms of a comparative analysis of SNAP versus Vanderbilt: the SNAP is considered somewhat more comprehensive, looking at a broader range of ADHD symptoms and associated behaviors than the Vanderbilt, and it allows for more clinician flexibility. It has teacher and parent versions, it has been tested in trials with established validity and reliability, and it is quite easy to use. The Vanderbilt has its own advantages: brevity and simplicity, practicality for use in primary care, flexibility in informant reporting, and a specific focus on ADHD symptoms. This next slide is very busy, and I don't think you can read it, but it lists other measures that have been used in child psychiatry.

So what are the benefits of rating scales and standardized instruments? Dr. Sanchez and Dr. Boland have already covered the benefits for clinical care, but a few bear repeating. They provide a standardized framework for assessing symptoms and behaviors across different individuals and settings, and they offer consistent criteria and guidelines for rating specific symptoms and constructs. They promote objectivity in assessment, which can be a challenge in our field, by minimizing subjective biases and interpretations. They enhance reliability by producing consistent, stable results over time, and they enhance validity. They are easy to use and can be cost- and time-efficient, especially if carefully chosen; pick scales you can practically use in your clinical office, ones that don't take too much time and aren't too cost-intensive.

Now, here are some of the challenges of using rating scales, especially in child psychiatry.
Many rating instruments may not adequately capture the diversity of experiences, behaviors, and backgrounds present in child and adolescent populations. Items and norms may be based predominantly on Western or majority-culture perspectives, leading to potential biases in assessment outcomes. There are also developmental differences in language comprehension, emotional expression, and cognitive functioning that can lead to inaccuracies in self-report measures in younger age groups. It is also somewhat difficult to capture contextual factors, including environmental stressors, family dynamics, peer relationships, and cultural influences; these can significantly impact a child's mental health, and the scales may not fully capture them.

The other piece is the subjectivity and bias that can enter these ratings. Interpretations can be subjective and influenced by factors like cultural background and personal biases. Clinicians and caregivers may also have different perceptions of a child's behavior, leading to discrepancies in ratings and inaccurate assessments. Children and adolescents might provide socially desirable responses, under-report their symptoms, or overemphasize problems, which can skew the results. And there is rater variability: scales completed by different individuals, parents, teachers, clinicians, may yield different results because of differences in observational skill, familiarity with the child, and interpretation of the items. So while rating scales have many benefits, misuse or over-reliance can lead to misdiagnosis, inappropriate interventions, and poor outcomes if we do not attend to all of these factors.
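Both of those data-quality problems, an invariant "same answer to everything" response set and wide disagreement between raters, lend themselves to simple automated screens that flag a battery for human review. This is a generic sketch with invented names and arbitrary thresholds, not any validated procedure:

```python
# Sketch of two data-quality screens for a completed rating scale:
# (1) invariant responding: nearly every item got the same answer;
# (2) cross-informant spread: raters' totals diverge widely.
# The thresholds (0.9 and 10 points) are arbitrary illustrations.

def invariant_responding(item_scores, dominance=0.9):
    """Flag if one response value dominates the item set."""
    if not item_scores:
        raise ValueError("no responses")
    top = max(item_scores.count(v) for v in set(item_scores))
    return top / len(item_scores) >= dominance

def informant_spread(totals, threshold=10):
    """totals: dict mapping informant -> total score on the same scale.
    Returns (spread, needs_review)."""
    values = list(totals.values())
    spread = max(values) - min(values)
    return spread, spread > threshold
```

A child answering 3 to all ten items trips the first screen; a parent total of 30 against a teacher total of 12 trips the second. Neither flag says who is "right"; they only mark the assessment as one to discuss rather than score at face value.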
Additionally, resource constraints can make some of these scales hard to use. Some are expensive, and some take considerable time in the clinician's office to complete, interpret, and analyze, so they can be time- and resource-intensive for a busy practice. Also, these scales do not necessarily tell you how a particular behavior impacts daily functioning.

So, the important piece: how do we overcome these challenges? Some of this I've already covered, but careful selection, administration, and interpretation of these scales is essential. Clinicians should be mindful of the limitations of the scales and the potential biases in interpretations and ratings, while also leveraging the benefits the scales provide in enhancing diagnostic accuracy, treatment planning, and outcome evaluation. Clinicians should also aim to incorporate multiple sources of information, such as clinical interviews, observations, and collateral reports, which help triangulate (or quadrilaterate) the assessment data and mitigate some of the challenges that come with rating scales. Combining multiple sources of information not only limits those challenges but also provides a more comprehensive assessment and understanding of a child's functioning. Use culturally sensitive instruments: when selecting a scale, look at the patient's demographics and background, and where possible choose scales that have been validated and normed for those populations; that can help bridge some of these gaps. And tailor child psychiatry assessments to the developmental stage.
It's very important to use a rating scale appropriate for the developmental age, because most of these scales are recommended for particular age ranges, and if you use them outside the recommended range, the results may be faulty and erratic. Provide clear instructions to family members and other informants completing the scales, including the children and adolescents themselves when they're doing self-reports. If your staff administer these scales, give them adequate training to ensure correct interpretation. Also monitor for response biases, for example socially desirable responding or acquiescence bias. Once or twice in my clinic I've noticed, while using these scales, that children would say yes, yes, yes to pretty much all the symptoms, and that has been noted in a lot of clinical studies as well. So do not take everything at face value; consider all the other variables that might be at play. Stay informed about assessment tools, because many are revised over time; for example, the SRS, the Social Responsiveness Scale for autism, has a newer version based on more data, better constructs, and better validity results. So look for the most up-to-date scales that are feasible, keeping cost factors in mind, because cost may be a limitation. And provide feedback to caregivers, teachers, and other raters: after the scales are completed, highlighting strengths and areas for improvement goes a long way toward continued use of these scales and toward helping patients and their families understand their utility.

In conclusion, rating scales play an important role in child and adolescent psychiatry assessments, and it's important to understand both the benefits and the limitations of these scales in our assessments.
It's critical that these scales not replace clinical assessment but rather be used in conjunction with it for a comprehensive evaluation. We don't want to use the scales as the primary basis for diagnosing a condition, as Dr. Sanchez pointed out, and skimp on the other important pieces of the diagnostic and assessment process. And we should look at strategies to overcome the limitations of these rating scales so as to enhance their utility, reliability, and validity. With this, I will end and open the floor for any questions. Thank you.

»» Thanks. So, we would love to hear your comments. We'd ask that you please use the mics; it would be hard for us all to hear each other otherwise.

»» Hi. This might be a question for Dr. Sanchez, I guess. I love the idea of rating scales. I'm an outpatient psychiatrist, and I feel this incredible sense of dread thinking that I'm going to have to pull out a rating scale with a patient during the valuable time of our session. What's a good way to wrap our heads around incorporating these scales into our practice? In terms of good practice, do you do it at the beginning of the session, at the end, in between, using some sort of device or a form? How can we integrate this in a way that isn't going to slow us down too much, and that patients are going to be on board with?

»» Oh, thank you. Hello? Hello, it's working. Thank you for your question. It's a great question, and it's the million-dollar question. I don't think there is an easy solution, and the answer depends a lot on the context where you're practicing. Some clinics, especially those within big healthcare systems, have the financial means to build some kind of online tool that the patient completes as part of their pre-screening or check-in process, and that is not only for mental health.
The other day, I was trying to set up an appointment with my own PCP to have my cholesterol checked, and I had to fill out the PHQ-9 online. If I did not do it, it would not let me check in; I was basically forced to do it myself. But then you have the downsides: what happens to that information, and what if the patient fills it out too far in advance and doesn't show up? That particular healthcare system has a workflow in place for exactly that situation: if the patient answers two or three on question nine of the PHQ-9, the item about self-harm, it automatically triggers a response, and there is a team whose only job is calling and screening those patients. You as the doctor would not even be contacted. But how many places have that kind of structure? It's the ideal scenario.

Another place I worked, we used to give iPads to the patients. We would beg them to arrive at least 15 minutes early, they would complete the scales on the iPad in the waiting area, and then return the iPad to the administrator. And when I think about how the screening shouldn't be done, it would be us doing it ourselves; I don't think we have time for that, and we might end up giving too much weight to the screening. I have had patients where I handed them the screening after the appointment and said, fill this out for me and bring it back at your next appointment. That works as well. So you have to be a little creative to make it work. Okay, thank you.

I'd like to add that some clinics have a more streamlined process. For example, I recall from my residency that our patients knew to come in about 15 minutes early, and there was a computer kiosk where, once they entered their name, they could see which screening scales they needed to complete. Once they finished, the results would be pulled into the patient's chart automatically, without us having to enter anything.
And when we start seeing the patients, we already have the results. So I think some clinics have moved in that direction as well. Go ahead.

Hi, thank you all for a wonderful talk. My question is, and I think you all touched on this a little bit, what is your take on using clinician-rated scales like the MADRS? I feel like with the PHQ-9, the answers can sometimes be kind of subjective, and I'm asking those same questions during the interview with the patient anyway. So I just wanted to get your take on using more of the clinician-rated scales. Thank you.

I would just start by saying, as Marcel talked about, and maybe you can say a little more: all scales have their strengths and weaknesses; that's why we try to use different ones. And I think the issue is that all scales have a certain amount of subjectivity to them, be it the MADRS, the Hamilton, the SCID, anything else. You're still relying to a point on patient report. Some are more clinician-led; some you just hand to the patients and they complete themselves. But there's always that measure of subjectivity. If you want to say more?

Yeah, no, I think you put it really well. I do think these clinician-rated scales have a role, again depending on the context you are considering. At Menninger, we had a very busy ECT practice, and the patients doing ECT continued to complete the PHQ-9 weekly, but they also completed the MADRS, which is administered by the ECT doctor. And then there are patients where you see something really odd: the PHQ-9 scores are staying the same, or even increasing, while the MADRS scores are going down. That is very valuable data; there is an objective measure saying you are improving, but you are not feeling good, you are still feeling pretty depressed.
And for a couple of particular cases where I saw that, it added a lot to our understanding of what was going on. One thing to keep in mind: the MADRS, the HAM-D, many of these clinician-rated scales are not diagnostic. The MADRS doesn't give you a diagnosis of depression; it only measures severity. So that's not a caveat exactly, just a reservation about them.

I'm Dr. Sandeep Vodha from New Delhi, India. Thank you for the excellent presentation by all three of you. My question is for Dr. Boland. Sir, what I saw was excellent, the work you're doing in measurement-based care. On the one hand, we have heard, and we have also felt, that there are lots of logistical challenges when we want to give rating scales to our patients. Being a fan of measurement-based care, I was wondering about the possibility of your department taking it further by using a clinician-monitored, generative-AI-based approach to measurement-based care, administering the required rating scales to your inpatients, so that multiple challenging issues at this juncture could be addressed in the longer run. What is your take on that, sir?

Yeah, I agree, and it is something we're working toward, but I'd love to hear your experience; it sounds like you have some. Yes, I have, sir. So I would love to collaborate with you. Wonderful, I look forward to meeting you. That'd be great; let's exchange information.

Has anyone ever looked at using contingency management, or positive reinforcement more generally, to help with getting self-report scales completed? What have you used? I've not, but I'm a big fan of contingency management and other ways of reinforcing valuable behavior. One of my criticisms of the research is that a lot of it is done with financial incentives in the studies, and I've often been curious whether that is even relevant to my practice.
Because we don't offer that sort of reinforcement. So, along those lines, I'm asking: has anyone, to your knowledge, in the community implemented this sort of incentive structure to get consistency around self-report screens?

I'll just say my bias. For things that are pure research studies, yes, we do give incentives of various sorts, often small tokens of appreciation. But I feel strongly about not doing it for the kinds of clinical measures we discussed today, because these aren't just so we can write papers; they are something we consider part of the patient's treatment, and we try to reinforce that with them so it's not seen as something external to what they're doing. So I guess we make a differentiation between the two. And our compliance, our adherence with that, has been pretty good. So I think patients get it, but that's just my opinion.

I think the idea of using contingency management is a good one. To my knowledge, I don't know of any programs that have used it, but it's certainly a very valid possibility to consider.

Yeah, it's a great idea, I think; I don't know anyone who does it. And in our particular case it would be problematic, because even though the instruments Dr. Boland was presenting look like research, they are actually part of the standard of care there. It would be very hard to give patients a reward for completing something they are supposed to be doing for their own health, for their own benefit. But I would be curious to know how it would work, especially in a research context. I just want to add to that.
I think it's a brilliant question, and I would love to see a presentation in a year or two titled something like Contingency Management for Adherence to Measurement-Based Care in the Community. I think we would love to see that. And it doesn't have to be money. It could even be a requirement that if, over the course of the next year, you complete however many of these scales for our outpatient clinic, we'll give you a gift card. I work in addiction, and there's a lot of contingency management there; in New York City we give out things like MetroCards, and sometimes other types of cards, a movie theater card, something like that. So I think what this doctor mentioned is a brilliant idea, and someone should do it. It would be good.

Fair enough. All right, we're at the end of the hour, so we're going to end on time. Thank you very much for coming. Thank you.
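As a concrete illustration of the automated PHQ-9 workflow discussed above, here is a minimal sketch in Python. The function name `score_phq9` and the data layout are hypothetical; the severity cut-points (5/10/15/20) follow the standard published PHQ-9 scoring guide, and the item-9 safety threshold of 2 mirrors the health-system anecdote in the discussion. A real implementation would live inside an EHR integration, not a standalone script.

```python
# Minimal sketch of an automated PHQ-9 scoring and safety-flag workflow.
# Each of the nine items is scored 0-3; item 9 asks about thoughts of
# self-harm. Severity cut-points follow the standard PHQ-9 guide.

SEVERITY_BANDS = [
    (0, "minimal"),
    (5, "mild"),
    (10, "moderate"),
    (15, "moderately severe"),
    (20, "severe"),
]

def score_phq9(items):
    """Return (total, severity label, safety flag) for nine 0-3 responses."""
    if len(items) != 9 or any(not 0 <= x <= 3 for x in items):
        raise ValueError("PHQ-9 requires nine responses scored 0-3")
    total = sum(items)
    severity = next(label for cutoff, label in reversed(SEVERITY_BANDS)
                    if total >= cutoff)
    # The system in the anecdote routes a score of 2 or 3 on item 9
    # to a dedicated outreach team, bypassing the treating clinician.
    safety_flag = items[8] >= 2
    return total, severity, safety_flag

print(score_phq9([2, 2, 1, 1, 1, 1, 1, 0, 2]))  # (11, 'moderate', True)
```

In a deployment like the kiosk setup described earlier, the total and severity band would flow into the chart automatically, while a raised safety flag would page the outreach team rather than wait for the next visit.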
Video Summary
The presentation at the APA conference, led by Dr. Bob Boland and his colleagues, focused on the impact and methodologies of measurement-based care in psychiatry. Dr. Boland, from the Menninger Clinic, along with Dr. Marcel Sanchez and Dr. Vikas Gupta, explored the use of screening tools and rating scales in psychiatric practice. These instruments aid initial patient assessment and diagnosis but sometimes present challenges, such as false positives. The speakers highlighted the historical context of psychiatric screening, citing its evolution since the 1940s. They discussed the ideal features of screening tools: brevity, ease of scoring, validity, and accessibility, emphasizing their importance despite their limitations. Dr. Sanchez pointed out the subjective nature of psychopathology, which necessitates a robust framework for diagnosis amid high patient volumes and time constraints.

A key segment was devoted to the practical aspects of implementing screening in clinical settings. Dr. Boland noted the barriers to widespread adoption of measurement-based care, such as resource constraints and a traditional reluctance in the field, yet emphasized its benefits for clinical practice, research, and quality reporting. The Menninger Clinic's approach involves using multiple scales at various stages of patient care to gather comprehensive data. Although expensive and effort-intensive, these methods improve the quality of patient care by incorporating objective data alongside clinical judgment.

Dr. Gupta discussed the specific applications of rating scales in child psychiatry, underlining their importance for early diagnosis and treatment planning. He acknowledged potential challenges, such as cultural biases and children's varying degrees of comprehension and expression affecting the scales' accuracy.
Overall, the session promoted an integrated approach to psychiatric assessment, combining subjective clinical insights with objective data from standardized tools.
Keywords
APA conference
Dr. Bob Boland
measurement-based care
psychiatry
screening tools
rating scales
patient assessments
Menninger Clinic
psychiatric practice
diagnosis
clinical settings
child psychiatry
integrated approach