APA Annual Meeting 2022 On-Demand Package
Advancing Health Equity Through Crisis Services - Focus on Quality and Data
Video Transcription
Thank you, everybody, for being here. Welcome to our presentation, Advancing Health Equity Through Crisis Services, a focus on data and quality. My name is Siva Sundaram. I am a third-year psychiatry resident at UCSF. We have a wonderful cast of speakers for you, including Matt Goldman from UCSF and the San Francisco Department of Public Health, Margie Balfour from Connections Health Solutions and the University of Arizona, and Deb Pinals from the Michigan Department of Health and Human Services and the Michigan Medical School and Law School. Just to give you a sense of what we're here to do today: we thought of the idea for this panel in the context of what's been going on nationally in mental health services, especially after the death of George Floyd two years ago now. There's a lot of attention to alternatives to policing and various kinds of crisis services. And in the rush to implement programs, I think we've noticed that there's not always a sense of clarity about what actually works and what should be done. So that's why we're here. To give you a sense of the agenda: right now, we're getting through a little bit of introduction. We'll hear from Matt Goldman about the current state of crisis services equity research. I'll present on some helpful concepts from implementation science that Matt and I are currently using in an evaluation of one of these novel programs in San Francisco. Margie will tell us about her work in Arizona translating values into quality metrics in crisis services. We'll have a brief interactive where we'll ask you all as audience members to think about and apply some of the concepts we're working on. And then we'll hear from Deb Pinals, who will draw on her extensive experience actually making decisions about program implementation. We have no relevant financial relationships to disclose. And I'll turn it over to Matt. Thank you, Siva. Really appreciate your chairing this. 
Wonderful to have trainees coming up in the pipeline and taking on this kind of work. So again, Matt Goldman. I'm the medical director for crisis services for the San Francisco Department of Public Health and on volunteer faculty at UCSF. And I'll be talking about the current state of crisis services research with a particular focus on equity. To cut to the punchline, there's not a lot of work here to present on. There's not a lot of work in crisis services research in general. And there is even less that actually has any relevant data or sub-analyses that relate to racial-ethnic breakdowns, to looking for inequities in how crisis services are delivered. There are many more questions than answers, as I'll mention in a moment. But there is some to explore and some good examples that I think we can learn from. So first of all, to set the frame for this whole talk and for the research piece in particular: our behavioral health systems are failing us in many regards, and they're doing so inequitably. We're all familiar with the annual increases in suicide and overdose deaths, despite many efforts to decrease those rates. We also see extensive criminal-legal and carceral involvement of people with mental illness and substance use disorders; inadequate access and capacity in our outpatient, mid-level, acute, inpatient, and crisis services; and emergency department boarding, which is a really important additional issue. And there are disparities across all of these areas as well. So it's not just that these are huge mental health service system-level problems. They are also problems that are experienced inequitably by certain populations. So for example, in Maryland, suicides during the COVID-19 pandemic halved in the White population but doubled in the Black population. Black people account for 41% of those who are receiving mental health services in LA County jails, even though they represent only 30% of the overall jail population. 
There was a study that found that longer ED wait times are experienced by the non-Hispanic Black population compared to the non-Hispanic White population. And between 2004 and 2011, a study examined disparities in accessing any mental health care, outpatient care, and any psychotropic medication care in the last year, and found that there were disparities among people who identified as Black, Latinx, and Asian compared with Whites. So major disparities across the board here, and these systems are all experiencing major strains in our current environment. And society is catching on to this. People are starting to get it. There have been headline news articles and major opinion pieces in the New York Times, recognizing things like a mental health crisis is not a crime, and really nice news pieces on when a new mobile crisis program was launched in New Mexico. In the middle here, you see that nice shiny red car. That's San Francisco, where we're representing from, where NPR did a nice story on the new street crisis response team program that Siva is going to comment on. And pictured there is Chief Simon Pang, who is with the San Francisco Fire Department and has been a really important partner in designing that program. ABC News did a focused article by a psychiatrist, Divya Shabra, who's involved with the APA, focusing in particular on how sending mental health responders instead of police could save Black lives. So there's been attention to these issues. The question is, what do we actually know about solutions to these issues? And that's what we're going to try to get into a bit today. This quote by Drs. Vincent and Dennis, published in Psych Services last year, I think helps set the frame. Understanding a problem is a prerequisite to addressing it. 
For the mental health care system to play its role in remedying the incarceration of a population that is disproportionately Black and Latinx, the extent of racial inequities in this population's mental health treatment must be fully characterized. However, the system's current functioning does not support such understanding. We lack the data. We lack the basics to be able to actually understand these disparities in the first place. And we must understand the current baseline lay of the land to then know if anything that we're doing to try to address these issues is actually working. A couple of comments on how to evaluate if crisis services impact equity. So broadly speaking, you can think about equity analyses in various different ways. One is you can stratify metrics by race and ethnicity. So basically, you take your overall outcomes, and you have racial-ethnic data, and you divide those data by racial-ethnic categories. And you can do comparisons and do statistical tests. Program administrators and researchers must ensure that disparities and equity are tracked as part of all quality and evaluation efforts. And doing so sometimes means providing concerted trainings on best practices to collect this type of sociodemographic data. You can't do the analysis if you don't have the data. And often, that is a challenge in crisis services. There's a lot going on when you're responding to somebody who's in an immediate crisis. And you might say that asking somebody's self-identified race and ethnicity is the last thing you'd be interested in talking with them about when they're floridly psychotic and not able to otherwise participate in an interview. And that might be true in some cases. Certainly, it's unrealistic to expect that 100% of the time we would get race-ethnicity data. 
But staff can be trained to ask it routinely and whenever possible, by prioritizing that these kinds of data are really important for us to understand disparities. And you're only going to understand it if you ask. So I think there is a certain training element that can get us part of the way. Another strategy is linked data sets: as much as possible, getting identified data that can then connect to other data sources. So for example, Medicaid enrollment. At the time of enrollment, people often respond to a self-identified race-ethnicity question. And those data can then be incorporated into crisis services data as well. So there are strategies to be able to actually look at these breakdowns and do these analyses. The question is, how much is it happening? So what I'm going to focus on here is a bit about the crisis continuum and disparities research in particular. Margie is going to talk a little more about the crisis continuum. And this is, actually, I think, a version 1.0 of her Mona Lisa slide. I think she has version 2.0 that she'll share in her talk. But this slide just so beautifully lays out how a crisis system can function. And this is what actually exists in many parts of Arizona. So broadly speaking, the premise is: we've got a person in crisis. They then can place a phone call to a crisis line. Ideally, in the near-future state, that's going to be 988, the three-digit number to call when somebody is in a behavioral health crisis. Also importantly, there's going to be coordination between 911 and that crisis line. And in Arizona, they found that about 80% of those calls can be resolved telephonically, meaning they don't require a further in-person assessment. For those that do require an in-person response, ideally that response would come in the form of a mobile crisis team. 
So rather than police responding, if police are involved in the first place, they could request mobile crisis, or mobile crisis could just be dispatched in the first place. And mobile teams are able to resolve about 70% of calls in the field in Arizona, meaning that somebody does not then need to go to a higher level of in-person care. And they can stay in their own environment, a least restrictive environment, the place that's going to be most comfortable to them, and get connected with the services they need. For the further subset who then need transportation to a higher level of stabilization, ideally they would go to a crisis facility, which is a custom-built environment specifically designed to help stabilize people who are in acute mental health crisis. Again, ideally, police would be able to directly drop off in an environment like that. And about 70% of those guests are discharged to the community. And then for those who exit from a crisis facility, or any of those other levels, ideally there is post-crisis wraparound, where 85% in Arizona are able to remain stable in the community. And all of this is aimed at decreasing the use of jail, ED, and inpatient-type services. So doing this in close collaboration with law enforcement means pre-arrest diversion. The more folks you can keep on the left side of this diagram, the less restrictive and less costly the care. So this really outlines the vision of how crisis services can be a solution to many of the challenges that I described early on. And so what I'm going to focus on here is: what do we understand about these different types of programs? So for call centers, there are big changes ahead. I mentioned 988. The National Suicide Prevention Lifeline is currently reachable at 1-800-273-TALK. And on July 16, nationwide, everybody's going to be able to call 988 to access that line, which is basically a network of 200-plus crisis centers. The program has existed since 2005. 
They get about 3,000 calls annually. Plus, there's also a crisis chat and text line. The transition to 988 is anticipated to expand volume and caller type. So there are going to be changes to how this line operates in many localities across the country. I'm sure some of you are familiar with the challenges of putting this in place. The types of calls are going to diversify: people who need immediate rescue; people who need what I might call complex crisis triage, which goes beyond suicide hotline support, the real bread and butter of the National Suicide Prevention Lifeline, and requires a higher level of clinical complexity to understand how to triage those calls; peer support as another component; and then some people requesting information only. And as for the evidence, crisis call centers are actually fairly well studied. The National Suicide Prevention Lifeline has been shown to decrease suicidality during calls. A third to a half of callers can be connected with mental health referrals. The chat function has also been studied and found to be helpful. And a follow-up study found that the Suicide Prevention Lifeline stopped 80% of people from killing themselves and kept 90% safe, especially among people who identify as Hispanic, female, younger, and people with a high school or lower level of education and prior experience of homelessness. So that was one study that did actually examine race and ethnicity as part of it. Also, a qualitative study of participant perspectives shared quotes like: if Tanisha Anderson Flynn looked more like you than me, she may still be alive; there's a lot of ignorance; there's structural racism and unequal allocation of resources. So in that qualitative study, racism clearly came up as a theme in the findings. But otherwise, there are not any specific outcomes reported on equity of National Suicide Prevention Lifeline response. 
As far as mobile crisis evidence goes: mobile crisis, in general, is a team of two dispatched by 911, 988, or other sources, often a combination of behavioral health clinicians, plus or minus a technician or peer. There are co-responder models where a police officer might be involved, or the CAHOOTS model, made famous in Eugene, Oregon, where an EMT or paramedic might be paired with a behavioral health clinician. And most single-site studies that have been done are quasi-experimental, meaning that they're not randomized trials. They're more observational, examining existing service data. And they find that post-crisis service utilization outcomes have been significant: decreased hospitalizations, increased community-based mental health services, reduced ED use among youth, and the ability to reduce some short-term arrests but not long-term jail bookings. They've also found some cost savings due to mobile crisis. And there's one study in particular that I want to highlight, which is a recent study of co-responder teams, meaning a clinician and a police officer. They found that there was no change in overall long-term risk of justice involvement. But incarceration was significantly reduced among Black co-responder team recipients. They also found that recipients were more likely to have any EMS contact, emergency medical services contact, at 6 and 12 months, and that these trends were likely driven by White co-responder team recipients but not observed in the Black population. So there were differences in how this team's outcomes played out across the White and Black populations that were studied. And what's actually really important here is, for this first finding, somebody might read that and say, well, so there's no change overall in justice involvement. If you didn't have the race-ethnicity breakdown, you would just say the program didn't really work. People are still getting arrested at the same rates. 
But by doing the breakdown by White and Black recipients of the service, they found that Black recipients actually did have significant reductions in their longer-term incarceration. And thinking about anti-racist policies, thinking about ways to undo some of the structural racism that exists in our systems, this highlights that knowing that Black recipients of this team had reduced rates of incarceration could then, I think, motivate a system to go ahead and continue implementing that model, or sustain that model, or expand that model, because it's a targeted intervention that is working for a priority population that otherwise is experiencing major disparities in the outcome that we're observing. So that's the richness of doing race-ethnicity breakdowns as part of a study model. I found that finding really compelling. There's also work that's been done on peer services. I'm way behind, as our trusty chair is highlighting for me. So just to mention, there's definitely been work showing that peer specialists in crisis services can be a very important component of the workforce. Involuntary treatment in crisis services is also a major issue, and very poorly understood. There have been some risk factors observed, like previous involuntary hospitalization, psychotic disorder diagnosis, and, importantly here, economic deprivation on an individual and population level, which of course tracks with racial-ethnic breakdowns. But the broad point here is, we actually understand very little about involuntary treatment, and there's a lot more to know there. And where I'll end is just a couple of thoughts on future directions. So first of all, at the top here you can see all of our different types of settings, and on the side here, different categories of studies that might be conducted. And there are many huge questions here that are still unanswered about crisis services. 
And then importantly, there are cross-cutting topics where we could ask these sub-questions of every one of the boxes in this chart, which I'm not gonna walk through one by one. But disparities and equity are, I think, the most important of all of these. Every one of these questions that we don't understand needs to be answered at the same time with a focus on how these different types of interventions are actually reducing disparities in care. There's also a need for more sophisticated approaches to measurement. Margie's gonna get a bit more into this. But basically, right now, most measurement is focused on things that I would consider descriptive measures, almost like structure measures: the number of teams and beds, the number of calls and visits that are occurring. Some get into performance measures, more like call and visit response times and call and visit duration. But fewer programs have data that's merged and can be examined for things like process measures, where they're looking more at reutilization of crisis services or linkages to routine care, or true outcomes: suicide rates, overdose rates, symptom reduction, client satisfaction, and other patient-reported outcome measures. So I think we do wanna move towards more meaningful measures, although they are more difficult to measure. And then importantly, as we do all of that, by having race-ethnicity data, we're then able to stratify and examine the impact of programs on disparities. So a couple of takeaways for you guys as clinicians, maybe system administrators, maybe researchers. Building programs on an evidence base where possible is of course desirable. That said, there's clearly a need to draw on clinical best practices from other settings to inform our protocols and procedures, given the lack of a robust evidence base at the current time. 
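To make the stratification idea concrete, here is a minimal sketch, assuming invented encounter counts (none of these numbers come from a real program) and using a simple two-proportion z-test to compare a community-resolution rate across two race-ethnicity groups. A real equity analysis would use the program's merged service data and more careful methods.

```python
# Hypothetical sketch: stratifying a crisis-service outcome by
# race-ethnicity and testing whether two groups' rates differ.
# All counts below are invented for illustration.
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF at |z|
    return z, 2 * (1 - phi)

# Encounters resolved in the community (vs. escalated), by group
outcomes = {
    "Black": {"resolved": 210, "total": 300},
    "White": {"resolved": 260, "total": 325},
}
for group, d in outcomes.items():
    print(f"{group}: {d['resolved'] / d['total']:.1%} resolved in community")

z, p = two_proportion_z(outcomes["Black"]["resolved"], outcomes["Black"]["total"],
                        outcomes["White"]["resolved"], outcomes["White"]["total"])
print(f"z = {z:.2f}, p = {p:.4f}")  # a significant gap would warrant investigation
```

The point is not the particular test but the prerequisite: without the race-ethnicity field populated, neither the stratified rates nor the comparison can be computed at all.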
There's a need to track data on quality of care, including by race and ethnicity. And I certainly encourage all of you to partner with academic colleagues to evaluate implementation of new crisis programs as they're going out into the wild, and to engage with the communities served to understand their needs and inform both quality and, I would add, equity goals. So with that, thank you, here's my email, and I'll hand it off to Siva. Thank you, Matt. So I will now build on what Matt gave you in terms of the 30,000-foot view and zoom in on one specific project that he and I have been working on together, which is to evaluate one of these new programs. Now, although Matt is affiliated with SFDPH and I'm working on this project, our views here reflect only our own perspectives. So I'll give you a little bit of an overview of this new service. We will talk a little bit about some concepts from implementation science that we're using to do this evaluation. And then we will try to understand together how researchers and clinicians, or people who are implementing programs, can cooperate to advance crisis service delivery quality. The basic theme is that research doesn't have to be separate from, or an obstacle to, delivering good clinical care. So what is a street crisis response team, or SCRT? It was envisioned as part of a package of mental health programs for people experiencing homelessness in San Francisco, a pretty high-profile package called Mental Health SF. You can see there, this is what the team looks like. It's a rig, that's what they like to call it, furnished by the San Francisco Fire Department. But it's a cross-agency collaboration, and the team includes a behavioral health clinician, a paramedic, and a peer specialist. It's 911-dispatched: when the 911 dispatcher gets a call, no matter who it comes from, they do a sort of triage process and try to send out a team to respond to the call. 
Very importantly, the program includes not just the initial encounter but, if the person is amenable, linkage to what I would think of as a kind of high-touch case management in the short term, with the goal of getting somebody linked to a particular kind of service. And this launched in multiple stages starting in December 2020. So: a high-profile program, a new funding structure, and timing in 2020, when avoiding police response was really top of mind. A lot of reasons to ask, does it work? So you're thinking about how to evaluate a program like this. One obvious place to start is to consider what the program is trying to achieve, or what the people who designed it say it's trying to achieve. They're looking to provide a rapid, trauma-informed response to calls for service about people experiencing crisis, specifically in public spaces. And the stated goals are to reduce law enforcement encounters and unnecessary emergency room use. So the question becomes, how do you evaluate objectives like those? This is a room full of psychiatrists. We have medical training. The first thing I think about when I hear the word effectiveness is a randomized controlled trial. But the randomized controlled trials I was trained to think about involve separating people into a control group and a treatment group, with a very standardized treatment and a very specific set of outcomes. And I had a hard time imagining how you can do that when you're rolling out a program to a whole city. So it's not feasible. We need a different framework to work in. And that is where the large body of work in implementation science comes in. Implementation science started as a way to handle the gap between what we know from randomized controlled trials to be supported, evidence-based practices and what people are actually doing in the real world. Like, we know X works, so why aren't more people doing it? 
As time has gone on, folks have started to understand that it actually makes more sense to study implementation, as in whether and why it works, and things like uptake, alongside effectiveness of the original intervention. One influential framework for thinking about implementation science is the Consolidated Framework for Implementation Research, or CFIR. And I won't go into too much detail, but this diagram, which is admittedly rather busy, highlights the key domains that people doing implementation science want to think about: What's going on with the intervention itself? What's going on in the different sorts of contexts? What's going on with the individuals who are being served and with the way the interventions are being delivered? And in general, the goal of a study like this is to identify factors that are barriers and facilitators to implementation of the program. So for our approach to evaluating the Street Crisis Response Team, based on the objectives that the team itself identified and also what we think is important in crisis research, we designed a mixed-methods study in a quant-to-qual framework, where the quantitative study informed the questions we were asking in the qualitative arm. We had a chance to build data collection into the program rollout. And this helped us do a little bit more measuring of outcomes, and some of the process measures that Matt mentioned, in addition to really descriptive measures. Our quantitative arm is an interrupted time series analysis of outcomes of interest. And I'll get to what that means for those of you who may be less familiar. In our qualitative arm, we did semi-structured interviews with people who received the service. And importantly, we had, and continue to have, input from various groups representing people with lived experience on our project and the methodologies we're using. So what is an interrupted time series? Matt alluded to this earlier. 
It's in a group of what we would call quasi-experimental methods, where we're capturing something about an intervention and making measurements, but we're not doing it in a randomized way. This particular method is good for studying sudden changes in context, things that affect the overall environment that a population is interacting with: new laws, new policies, or new services like the Street Crisis Response Team. And here what we do is compare trends in the same group, the same population, before and after the intervention. This is a schematic here. You have an outcome measure on the y-axis, and you can see that there's a certain underlying trend, in this case a stable trend. The intervention happens, and you suddenly see, this is what you hope to see, a significant change immediately, and then another trend change afterwards. Advantages of an interrupted time series: it takes place in the real world. You don't necessarily have to change how you do the intervention in order to measure it. And although it's difficult, if you capture the right variables at the beginning, everything else is statistical analysis. In our project, these are the post-crisis-episode outcomes that we ended up deciding to track, based partly on what we are able to capture in the existing data infrastructure and partly on what we know from other literature is important. But I think one of my takeaways from this project is that this might not be a complete list of things that matter. So there are things like service linkage and acute service reutilization, which I think are familiar to folks. One of the more innovative things we were able to do is track whether people were actually getting assessed for housing and whether they were having jail entry. 
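The interrupted time series design described above is commonly analyzed as a segmented regression: fit the outcome against time, an indicator for the post-intervention period (the level change), and a post-intervention time counter (the slope change). This is a minimal stdlib-only sketch with simulated monthly counts; the intervention month and effect sizes are made up, and a real analysis would use a statistics package and adjust for noise, seasonality, and autocorrelation.

```python
# Minimal interrupted time series (segmented regression) sketch with
# simulated data. Pure-stdlib OLS via the normal equations.

def fit_ols(X, y):
    """Solve the normal equations (X'X)b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):  # forward elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):  # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

T0 = 13  # first post-intervention month (assumed)
months = range(1, 25)
# Simulated outcome: baseline level 50 with a slight upward trend, then a
# level drop of 8 and an extra downward slope of 1 per month after T0.
y = [50 + 0.5 * t - ((8 + 1.0 * (t - T0 + 1)) if t >= T0 else 0) for t in months]
# Design: intercept, time, level change (step), slope change (ramp)
X = [[1.0, t, 1.0 if t >= T0 else 0.0, (t - T0 + 1) if t >= T0 else 0.0]
     for t in months]
intercept, trend, level, slope = fit_ols(X, y)
print(round(level, 2), round(slope, 2))  # prints: -8.0 -1.0
```

The `level` coefficient is the immediate change at the intervention and `slope` is the change in trend afterwards, the two quantities the schematic on the slide illustrates.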
This is not real data, but it's sort of imagining what our results might look like, specifically in the project we're doing. Originally, as we envisioned it, we were gonna be able to do a two-stage analysis, because the program rolled out in certain jurisdictions first and then others later, which would have been a really nice cross-comparison. Unfortunately, the location data was a little too messy to be able to do that, so it's not gonna be quite as robust in that way. And then there will be an equity analysis, stratifying the ITS by race and ethnicity. For our qualitative interviews, the first task is to choose who to interview. And originally, again, we had hoped to create buckets of folks who had had either positive or negative outcomes in the four categories we just looked at, and then interview folks strategically within those groups to see if we could find any factors that might explain why they ended up in that particular outcome group. Unfortunately, given the population that we were trying to serve and to study, it was very difficult to actually get a name and then try to locate that person. So we ended up doing more of a convenience sample, where folks who interacted with the team were recruited by various members of either the encounter team or the linkage team and asked if they wanted to participate in the study. We developed a semi-structured interview guide that covered things like their experience with the team, what their service engagement looked like before and after, their goals, and any experiences they had with discrimination. Right now we're in the process of finishing coding our transcripts and discussing the themes as a group. Some very, very preliminary results: we're still waiting on what we need for the ITS, but we do have some basic descriptive statistics. 
About 3,000 encounters and 1,500 unique individual IDs, which tells you that many individuals are having repeated encounters with the crisis team. I won't get too much into the race and gender data here except to note that 25% had no entry, but I think 75% capture is not bad for folks who are in crisis. And roughly speaking, the breakdown mirrors the patient population that I've certainly seen in my residency training in crisis settings in San Francisco. I thought this was really interesting in terms of reason for call, and this lines up with what we've seen in chart review and in meeting with folks. It's not necessarily what I would have defined as a mental health or behavioral health crisis before doing this project. And that speaks to how often 911 services are being brought in when there's a disturbance in a public space and someone says, hey, there's someone who looks unwell in front of my store and I don't know what to do. So 43% are related to that kind of thing, and a relatively small number to things like suicidal ideation or psychosis. And then in terms of where people go after the street crisis team meets with them, 30% are ending up in places where you would expect a typical EMS team to be able to take someone. So again, we wanna see what happens when we look at our overall data, to see whether this is really something that we can attribute to this specific intervention. But I think it's promising that at least something is happening. And that checks out with what we hear on the ground. In terms of our qualitative findings, just a smattering of the themes that are coming up: people have a wide range of goals and needs. And sometimes they align with what this crisis team is able to do for them, and sometimes not at all. 
One of the things that comes as a result of that, and this is more my sense, is that the crisis team ends up doing lots of things that might not technically have been envisioned at the beginning. They're doing a lot of outreach. They're handing out food and water. They're just saying hello to somebody and then planning to come back later. Again, that's not what I would have envisioned from a crisis service, but it says a lot about what the actual needs of folks are. Importantly, relationships are central. One of the most consistent takeaways is people describing how they felt treated like a human by the crisis team, how people listened to them, how people partnered with them in deciding where they wanted to go. It wasn't just a binary come-with-us. And people really identify their relationships with that kind of case management arm as contributing to their ability to have good outcomes in the weeks and months after the encounter. And then, as I alluded to earlier, in part because so much of this is human-centered and we're capturing a small section of someone's overall trajectory, our traditional metrics may not really do a great job of capturing the value that the service is having. Overall takeaways: evaluation of crisis services is crucial. There are so many reasons we need to know whether it's working, but it's also complex. Some principles from implementation science can help us think beyond RCTs to tackle those challenges. And there are ways that you can integrate research into a program rollout to do evaluation in a realistic and effective way. I'll just end with some acknowledgments. This is the core team on the eval project. You can see Matt there. A lot of folks at the San Francisco Department of Public Health were crucially involved. These are our funders. Other folks at the UCSF Center for Implementation Science, some fellow trainees at UCSF, and then the folks who helped us with their input from lived experience. 
I'll turn it over to Margie. Thank you all for being here this early and in the furthest room it could possibly be. So you all clearly wanna be here. I'm gonna switch gears and talk about a different way of using data, in terms of using data for quality and quality improvement. And I'll be talking about a lot of the work that we do at our crisis centers in Arizona. This is our crisis center in Tucson, the Crisis Response Center. And we like data. So why do we like data? Why do people like data? Why should we care about quality metrics? Oftentimes, the straightforward answer is, well, because somebody said we have to. Or somebody says we have to because that's how we're gonna get paid, you know, like all the MIPS and IPS and all the different acronyms. But really, if you're nerdy like me and love data, quality metrics can really be an incarnation of our values that shows the value of our work and shows whether we're living up to our values. And there's a play on words around the word value, because one definition of values is the common beliefs of a group of people. What do you believe in? What do you think is good and right? But then there's also the financial definition, which is the outcomes that you're achieving divided by how much they cost. And so then there's this question of, are you providing value? And when there's disagreement about value, it's usually a disagreement about which outcomes are most important. And so it's important to think about, well, who decides what the important outcomes are? And do those outcomes align with our core values? And who's at the table? Do you have inclusion and representation at the table to define the values that you're trying to achieve? And so when we ask, well, how does the care we provide reflect our values, how would you know?
And so that's where, you know, every presentation around data and quality has to have an old dead white guy in it. So this is Lord Kelvin, who famously said, to measure is to know; if you cannot measure it, you cannot improve it. So that is why we look at quality data. What we used at our crisis centers in Arizona, which has since been disseminated and is in the SAMHSA guidelines and other reports, is a quality improvement tool called a critical-to-quality tree, which is a way of articulating the values that you're trying to bring, defining the key attributes that make up those values, and then defining measures that reflect those attributes. Another term for this is translating the quote-unquote voice of the customer. We have many customers in our services. We have the people that we serve. We have their families. We have law enforcement. We have our community. And so how do you translate that voice of what the customer needs from you into your metrics? What we're trying to achieve is high quality crisis services. And these are the core values that we came up with back in 2014 when we created this. We wanted care to be timely and safe and accessible, least restrictive, which I'll talk about in a minute, effective, consumer and family centered, and done in partnership with our community partners. Now, you notice equity is not on here, so shame on us. If I had to do this over again, I would have put that on there. And then what you do is try to come up with metrics that reflect these attributes. And just a quick aside, I don't wanna go over time, but how do you choose metrics? There are various models for choosing metrics. Many of you, if you're here and into quality improvement, may have heard of the Donabedian model, where there are structure metrics, which measure what you have. Matt mentioned some of these, like how many beds do you have? What staffing do you have?
Do you have a psychiatrist there 24-7? It's like the way emergency medicine does trauma centers. There's level one, two, three, four. A lot of those are structure metrics that say, well, you have this many of this kind of doctor and this many of this kind of MRI machine and whatnot. Then there are process metrics, which measure what you do. These are measures where usually there's some correlation, where we know that if people do this process in this way, it will lead to better outcomes, but it's easier to measure the process. So for example, when people come into the ER with a heart attack, they measure the door-to-balloon time, because they know that that results in better outcomes for people who have cardiac issues. For us, door-to-doctor time was one of our biggest things, or what percent of patients are you screening for social determinants of health, things like that. Then there are outcome metrics, which are the hardest to measure but the most meaningful. And that is, does it work? Did whatever you did in your processes actually lead to improved outcomes? So those are things like readmissions or patient satisfaction, improvement on rating scales, or looking at injuries or seclusion and restraint rates or deaths or suicides, things like that. Then there are some other practical considerations, and there are all these different frameworks with all these different lists of things. I like one that was published way back, around 2000, in Psychiatric Services, that had three things, and I can remember three things. Crisis people, we don't have that much of an attention span to remember long lists of things. Is it meaningful? Are you measuring something that's clinically important? Is it feasible? Because you don't want your staff spending all day abstracting charts and putting things into spreadsheets and not actually fixing anything.
So especially with EHRs, you wanna build things where it's easy to measure, so that you can get the data quickly, and quickly enough to act upon it. And then, is it actionable? So if you get a result that says we're not doing so good on this, is it something that's within your sphere of influence to actually fix? And so this is the metric framework that we came up with for our crisis centers in Arizona. So, timeliness measures. We didn't wanna reinvent the wheel; emergency medicine has been measuring timeliness and throughput for years. So we kind of just borrowed those. Like door-to-doctor time: how long does it take, from door to door, to get your needs met? If we're gonna admit you to the hospital, how long do you wait until you get sent to the inpatient unit? And then for safety measures, we look at things like injuries and assaults, making sure that both our staff and the patients that we serve are safe. For accessibility, we look at who's coming through our services, how many people, who's bringing them, things like that. I'm gonna talk about least restrictive in a minute. And for effectiveness, we look at our readmissions. We look at how well we partner with our various system partners. So we look at our patient satisfaction. We look at, are we involving families in crisis care as much as possible? And then we also look at partnership with law enforcement. How long does it take them to get in and out? Because that's what they like. And how often are we full, where we say we won't take emergency room transfers anymore? Things like that. So I'm gonna talk about least restrictive for a minute. Because, as you heard from Matt, people of color are more likely to end up in hospitals and end up in jails. And so this is a core measure around this question of equity that deserves some more digging into.
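Since door-to-doctor time is just a difference of timestamps, here is a rough sketch of how a site might compute it from encounter records. The field names and timestamps are hypothetical, not drawn from any real crisis-center EHR:

```python
from datetime import datetime

# Hypothetical encounter records; field names are illustrative only.
encounters = [
    {"arrived": "2022-05-01 10:00", "seen_by_md": "2022-05-01 10:25"},
    {"arrived": "2022-05-01 11:10", "seen_by_md": "2022-05-01 11:28"},
    {"arrived": "2022-05-01 13:05", "seen_by_md": "2022-05-01 14:02"},
]

FMT = "%Y-%m-%d %H:%M"

def door_to_doctor_minutes(rec):
    """Minutes from arrival to first physician contact."""
    arrived = datetime.strptime(rec["arrived"], FMT)
    seen = datetime.strptime(rec["seen_by_md"], FMT)
    return (seen - arrived).total_seconds() / 60

times = sorted(door_to_doctor_minutes(r) for r in encounters)
median_minutes = times[len(times) // 2]  # middle value of the odd-length list
print(f"door-to-doctor times (min): {times}, median: {median_minutes}")
```

Reporting a median rather than a mean keeps one very long outlier visit from masking typical performance, which matters for a throughput metric like this.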
And there's a clinical rationale for looking at this issue of care in the least restrictive setting. I think most of us as clinicians want our patients to be doing well in the community versus being locked up in hospitals or jails or ERs. The Supreme Court agrees, and the Olmstead decision says that people with disabilities have a right to care in the least restrictive, most community-integrated setting. So the takeaway from that for crisis services is that we should be striving to connect people to community-based care. And we should do things that keep people from boarding in emergency rooms, or being admitted to hospitals, or being treated involuntarily, if you can engage with them and avoid that. And so that's where our metrics came from. One of our most important metrics is community disposition, which is the percentage of people we discharge to community levels of care, rather than admitting them to inpatient units or sending them to jail or emergency rooms. Another one is conversion to voluntary. We get a lot of people who come in on involuntary commitment. Every state has its own name for it, but it's an emergency detention type thing, where police say you're imminently at risk for harm to self or others, and they are able to bring you in against your will. Well, we try to convert those folks to voluntary care. We try to engage with them. Even if they do end up going to the hospital, do they go voluntarily versus still staying under that involuntary commitment? There's also a social justice rationale for this metric. Because in our country, and I think in a lot of places, the police are the default first responders for mental health and substance use related emergencies. You call 911 for chest pain, you get an ambulance; you call 911 because you're suicidal, and you get the police. And it's not surprising that that just sets up for bad situations.
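Both of these least-restrictive metrics reduce to simple proportions. A minimal sketch, with invented disposition labels and counts rather than Arizona data:

```python
# Hypothetical disposition labels for a batch of crisis-center discharges.
dispositions = [
    "community", "community", "inpatient", "community",
    "emergency_room", "community", "inpatient", "community",
]

# Community disposition: share discharged somewhere other than an
# inpatient unit, jail, or emergency room.
restrictive = {"inpatient", "jail", "emergency_room"}
community_rate = sum(d not in restrictive for d in dispositions) / len(dispositions)

# Conversion to voluntary: of people arriving on an involuntary hold,
# the share engaged into voluntary care (counts are invented).
arrived_involuntary = 40
converted_to_voluntary = 28
conversion_rate = converted_to_voluntary / arrived_involuntary

print(f"community disposition {community_rate:.0%}, "
      f"conversion to voluntary {conversion_rate:.0%}")
```

The denominators are what make these metrics comparable across sites: community disposition is over all discharges, while conversion to voluntary is only over people who arrived on an involuntary hold.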
So a quarter of officer-involved shooting deaths are linked to mental illness, and the prevalence of people with mental illness in our jails and prisons is three to four times that of the general population. And all of these things are magnified for people of color. So our crisis services should be designed to minimize police involvement. And they should make it easy. Officers are getting all kinds of wonderful training, like crisis intervention training, which you may have heard of, that 40-hour training that gives them the skills to deescalate people and bring them to treatment. But if you don't make it easy for them to do the right thing, then jail remains the path of least resistance. And when we look at how well we're doing that, we look at things like our volume trends: how many patients are we seeing and where are they coming from? And we know that one of the strongest indicators of police wanting to use our services versus taking someone to jail is getting them out and back on the street. So for our police turnaround time, five to 10 minutes is the target, and we're usually around six. Studies show that that model works with police. I like memes. So there are a lot of words on the slide, or you can just look at the Drake meme. But basically, yeah, I mentioned crisis intervention training. A lot of people think of CIT, which was developed in Memphis back in the '80s in response to the shooting of an unarmed black man with mental illness, or he had a knife, but he didn't have a gun. And they created this training program as a result of that, which has become kind of the gold standard for law enforcement across the nation. But the folks at CIT International will be the first to remind you that CIT is not just the training; it's meant to be a community response. And part of what makes that work is having somewhere to bring people and having easy access to the crisis system.
And they laid out their ideal of what they call a receiving center. I highlighted those two items in the middle, the no clinical barriers to care and the turnaround time, because those are the hardest to do well. But those are the ones most correlated with officers bringing people to treatment instead of jail. Even here in Tucson, there was a study that Vera did of the calls that police were responding to, and it found that people on mental health calls were much less likely to be arrested when they were taken to the crisis center instead. So you saw this slide; Matt did a great job of explaining version one of it. Every time I show it, someone says, well, you need to add this and add that. So I'm trying to add things to it and keep it simple. But it basically shows that measure of community disposition Matt described, and it illustrates how the whole entire system can be aligned around certain measures and measure something in a similar way. So you can see how the whole system is aligned toward that goal. And so every one of these services is measuring its community disposition rate and trying to keep people stable in the community. But then from an equity standpoint, the question is, well, how are these decisions made? The decision to not send someone to a higher level of care and instead resolve it in the community: is there bias in those decisions? That is an excellent research question, which we are starting to work on. So we just started with our quality data. We started to cut it by race and ethnicity, and it's generating questions. Looking at your data like this generates questions for you to chase further. And so one of the first things we took a look at is, for the population that we see, how does that mirror the community? And so this is our adult population at the Crisis Response Center, in the dark bars.
And then the light bars are the adult population of Pima County, which is where Tucson is. And so some questions that might arise just from taking a first-pass look at this: well, are we underserving the Hispanic population? Because it looks like we're seeing less than what the general population is. If you look at black and indigenous folks, it looks like maybe we're overserving those groups. So is that a good thing or is that a bad thing? What does that mean? Well, we compared it to the jail population, and maybe the answer changes. If we're serving more black and indigenous folks, but those folks are overrepresented in the jail, then maybe it's a good thing that they're coming to us rather than being taken to jail. But then you look at the Hispanic population, which is way overrepresented in the jail (and it's telling me I have to hurry up), and maybe we need to be doing more. This is the same data for youth, compared to the population and compared to the jail. This was really surprising to me when I first saw it, that there are huge disparities in our youth population in the jail. And it looks like we have a lot of work to do in figuring out how many of those kids have mental illness, and how many of them should be diverted to programs like ours or others. That's a ripe area for investigation. Another way to look at this is to compare outcomes: stratify your outcomes by race and ethnicity. So this is the percent of people who are brought by police, for our youth. Are black youth more likely to be brought by police? That's something to poke around and investigate further. Are they less likely to bring indigenous youth to the CRC instead of jail? And then, to Matt's point about collecting the data, we have this opted-out category, and maybe we need to do a better job of collecting that data when people are brought by police. And if you can't get it then, go back and try to get it.
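One simple way to do that first-pass comparison is a served-versus-population ratio per group. All counts and shares below are invented for illustration; they are not CRC or Pima County figures:

```python
# Invented counts of adults seen at a facility, by group.
facility_counts = {"White": 520, "Hispanic": 210, "Black": 140,
                   "Indigenous": 80, "Other": 50}
# Invented county population shares for the same groups.
county_share = {"White": 0.51, "Hispanic": 0.38, "Black": 0.04,
                "Indigenous": 0.04, "Other": 0.03}

total_seen = sum(facility_counts.values())
ratios = {}
for group, n in facility_counts.items():
    served_share = n / total_seen
    # Ratio > 1: group is overrepresented at the facility relative to the
    # county; < 1: possibly underserved. Either direction is a question
    # to chase further, not an answer by itself.
    ratios[group] = served_share / county_share[group]
    print(f"{group:10s} served {served_share:5.1%} vs county "
          f"{county_share[group]:5.1%}  ratio {ratios[group]:.2f}")
```

As the talk notes, the same ratios can then be recomputed against a jail-population benchmark instead of the county benchmark, which can flip the interpretation of "overserving" a group.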
As far as future directions: to answer some of these questions, we don't have all the data within our crisis system or crisis center, but the police do. And there is more and more drive for transparency with justice systems and police. This is actually freely available on the web. This is Tucson Police's analysis portal, where you can actually go download person-level data from the police. There's a national initiative to have more police agencies doing this. And the police recently came to us because they have their version of an electronic health record; they call it a record management system. And they're upgrading theirs. And they asked, can you help us design the screen that comes up when we go to mental health calls? What a great opportunity to be able to collect data on how these decisions are being made to divert people to treatment versus jail, and how race and ethnicity factor into that. We looked at a couple of other of these least restrictive measures stratified by race and ethnicity and, I'm glad to say, we didn't see a bunch of huge disparities. So maybe the place to be most focused on is who gets to our facility. Because once people are in the facility, our community disposition rates don't seem to be that much different by race and ethnicity. Seclusion and restraint, that's another least restrictive measure. And there's evidence to show that people of color are more likely to be restrained in emergency settings. So I thought, well, what about us? We have this huge database that has every single seclusion and restraint, and we have these reports that get run automatically. And so, because I'm a last-minute kind of person, of course I was doing this analysis while Siva was sending me emails going, where are your slides? Because APA wants them. And so I'm like, well, I can just do this analysis really quickly.
Well, it turns out that in our giant database, with all the minutiae you could ever imagine about seclusion and restraint, there was no race and ethnicity in there. And I was like, how could that be? So this was a great lesson learned: what's required is what gets measured. We have this behemoth seven-page report that we have to do for every single seclusion and restraint and send to the state. And if one minute doesn't match, I mean, they get all over that. And nowhere on the seven-page form is race and ethnicity. The good news is I emailed them, and this morning or yesterday morning they emailed me back and said they're gonna add it, so that people can start doing that. So, by doing these APA presentations, you can make meaningful change in your communities. And for our measures, we take all that data, and the measures that we calculate are based on the Joint Commission and CMS measures, the Hospital-Based Inpatient Psychiatric Services measures, HBIPS. And the feds require you to stratify this data by age, but not by race and ethnicity. So that's another case where who's at the table can really decide what gets measured, what gets looked at, and what gets improved upon. In the future, I think there's gonna be a lot more stratifying of measures. HEDIS, which is the measure set that insurance companies have to use, put out their draft measures for the following year. And they're starting to require a lot of their outcome measures to be stratified by race and ethnicity. So that's kind of the future. And then just some take-home questions to ask yourself: in your own organizations, are there other populations that might be underserved in crisis settings, where you might wanna start to cut the data and look? Do you collect the data that you need to identify these populations and stratify it?
Do we collaborate with outside stakeholders to understand what the data means? And then, what do you do if you detect disparities? And then finally, because in those last couple of slides I made the point of asking who's determining what's being measured: who determines it at your organization? Because you may not be able to affect what the state does with a snarky email, but you can certainly affect what happens in your organization, perhaps. And is there inclusive representation? A lot of organizations are starting DEI-type work groups; we are too. And one of the things is going to be who's represented on there, and how does that inform the data that we look at? So with that, I will hand it over to Siva to do his next thing. Thank you. I appreciate that you ended with those take-home questions, because actually what we would like to have you all do right now is to consider some of those while we have you in the room. So if you can, find a neighbor. We'll give it about 10 minutes. With the neighbor, describe a crisis service in your local health system and think about the population, what the goals of the program are, and who is involved. Identify two high-priority quality measures, either ones that are already tracked or ones that you would want to track. And then think together about how you might partner with researchers to extend those measures into a more rigorous implementation science study. All right. Wow, that was super fun listening to all of you. So now I get to call us back to the group, and we'll have some opportunity to share. So again, I'm Deb Pinals. I've had the privilege of working in a couple of states in a leadership role for state government, thinking about these things, as one of my colleagues said, from the 10,000-foot perspective, and thinking about these forms that get written and that we're asking people to do.
We're in the process right now in Michigan of developing our rule set for certification of our crisis stabilization units: how we're gonna measure the people who are gonna be trying to build those units and then bill for the services, and make sure that there's safety and quality. So let me just say that anything I say here is my opinion, not that of the states or governments or any entities that I represent. I have the great good fortune of consulting to the National Association of State Mental Health Program Directors, whose executive director is in the audience listening. So these are my opinions. Let me just say a couple of things. First of all, it was super interesting to hear the discussion. I got to listen to just a couple of groups, but one of the things that's very clear is we are building a bridge as we walk on it. And it's very interesting, because I met with a colleague last night who I was with when I was working at the NIMH doing research on the treatments and pathophysiology of schizophrenia. And when I think about what we do for psychopharmacology, we measure, we collect, we get evidence before we implement the medication, before the FDA will approve the medication. In the world that we're talking about, it's flipped. There's a societal need for something. CIT was established because there was a bad outcome. All of a sudden, there was a community cry. Similarly, there's a suicide crisis. The National Suicide Prevention Lifeline is a complicated number to remember. There's a national need. We identify a need. And so people like us go charging forward and build services, and then we start to collect the evidence to improve the practice. So we're actually doing things kind of in reverse of what an EBP sometimes requires.
So we're trying to build an evidence base while we're building a service, based on what our knowledge, collective experience, and political pressures tell us we need in our community. So I think that's a very interesting thing. The voice of the customer, as Margie was saying, is an interesting part of that, because we want to meet so many people's needs, and yet we're all looking at this from different angles. And that makes me think about the concept of generalizability as we look at disparities. You may do a wonderful research project in San Francisco. I have no doubt you will do an excellent research project in San Francisco, but does San Francisco relate to Detroit, where I sit? Or does San Francisco relate to the Upper Peninsula, another part of Michigan, which doesn't really relate to Detroit at all either? And when we talk about, okay, well, let's look at this: this applies to a rural community, this applies to an urban community. What does that even mean? Is a rural community in Utah the same as a rural community in the Upper Peninsula? And is an urban community the same, for example, when we look at an inner-city urban area, where negative social determinants of health are high, versus a gentrified urban community, where we don't have the negative social determinants of health? So how are we gonna really look at these disparities in these micro and macro ways, for policymakers like me to continue to make policy that sets rules that will be good for everyone? Or am I supposed to just apply utilitarian principles? We get into these debates all the time, where we're trying to set rule standards so that they meet the greatest good for the greatest number, understanding that we're gonna have to do data collection that's gonna tell us more and help us keep improving services as we go.
So I think back to when I was a resident in psychiatry, and I kept listening to this administrator, the public sector leader of our program. I was in a community mental health center kind of residency, very, very public sector oriented. And I always thought he spoke administrative speak; like, he had words, but I could never really understand what he was saying. But I really get it now, this idea of vision and mission. So our vision is that we have an equitable system that offers just services that are accessible, and therefore our policies around that have to frame intentionality. And as I heard in this discussion, everybody's building this bridge. So how do we build these policies right out of the gate, so that the forms that Margie has to complete and the data that you have to collect have some intentionality to the equitable nature of the work that we do? So I think that's something that we really have to think about. I was really excited to hear people saying that they were calling home already to say, we've got to make sure we're collecting race and ethnicity data. Race and ethnicity data is really difficult to collect. I had this great opportunity to work in New Zealand for a while as a psychiatrist, where they were required to have cultural consultants in every service to make sure that the Maori population was not left out in terms of how things were thought about, how policies were written, whether patients got passes into the community, and how we did diagnostic formulations. They were just embedded into the work. So how do we make sure that we have diverse voices in the work that we're doing, sitting around those tables, looking at the forms, reading the policies, to make sure that we're looking at it from this lens? I think that's going to be another part of this that's going to be necessary. Also, in a crisis, how do you collect that data?
I'm sorry you're suicidal, but can you tell me what your race and ethnicity is? Or am I going to be the one, the responder, judging based on what I see? We know that that's not really the right way to go about it. So what can we do about data harmonization? And I think some of you mentioned this. Are there other data sets that can be leveraged over time that will be able to tell the story of how responses are delivered, and how we can maximize the ability to deliver responses equitably? And delivering responses equitably isn't the same as delivering good responses. We want to deliver the right response equitably, not bad responses equitably. So we also have to make sure that our responses are good. One of the things that came up in the discussions that I was listening to is even the use of psychopharmacology. Are we using psychopharmacology equitably? We know there are disparities there, as I think you pointed out in one of your studies. So what do we as psychiatrists have to be thinking about, not only in the delivery of medications, injectable medications or oral medications, but even in what medications are being chosen for which populations in a crisis? So I think those are some other things that came to mind. I love the slide that Margie showed about the choice points. There is a whole literature in juvenile justice on what's called disproportionate minority contact, which looks at every decision that's made for a youth who ends up in the juvenile justice system, from arrest to detention, from detention to arraignment, to a delinquency commitment and to release. And there's a large literature that shows that the deeper you go into juvenile justice, the greater the disproportionate minority contact is.
And it's a simple ratio: compared to a white population, what's the ratio of black youth, Latinx youth, whatever your group is, getting funneled deeper and deeper in the wrong direction, if we think of juvenile justice as the wrong direction for most situations where we could divert into alternatives. So I think developing data along decision points is another way to look at that disproportionate contact. Maybe minority isn't the right word, but disproportionate contact is something that I think we could pay attention to, to understand. That doesn't help us understand all the factors that crisis services can't address, really, or maybe can, I don't know: the things that get people to need crisis services in the first place, the whole social determinants of health, which this APA conference is really focusing on. Poverty, access to education, environments that are in disrepair so that people are living under traumatized conditions. All of that might also make people less trusting of the responders and less willing to access help. There's some data emerging about black populations not wanting to call 911, for example, because of the responses that they're afraid to receive. So how do we look at the social determinants of health issues that might make the crisis services so far downstream that you're not going to be able to affect the change that you need to see in your community, other than how you link people to those services in the future? And then finally, I want to talk a little bit about evidence base. And I have to find some quotes here that I was pulling as I was listening that I thought were relevant. If I can find them. Here they are. So CIT has been around since 1988, when there was a bad outcome of a shooting of a black male. CIT is considered the gold standard. There have been tons of studies on CIT. But the whole question of CIT being evidence-based is still not really answered.
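That ratio is commonly called the relative rate index (RRI). A minimal sketch of computing it at a single decision point, using invented counts rather than real juvenile justice figures:

```python
# Invented counts of youth at one decision point (e.g., referral to
# detention); these are not real juvenile justice data.
reached_point = {"White": 400, "Black": 150, "Hispanic": 250}
deeper_end = {"White": 60, "Black": 45, "Hispanic": 50}

# RRI compares each group's rate at this decision point to the
# comparison (white) group's rate.
reference_rate = deeper_end["White"] / reached_point["White"]

rri = {}
for group in reached_point:
    rate = deeper_end[group] / reached_point[group]
    # Values well above 1.0 flag disproportionate contact at this
    # decision point and warrant a closer look.
    rri[group] = rate / reference_rate
    print(f"{group}: rate {rate:.2f}, RRI {rri[group]:.2f}")
```

Computing the RRI separately at each decision point (arrest, detention, arraignment, commitment, release) is what lets you see whether disproportionality compounds at the deeper end, as the literature described here suggests.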
One study that was published by Peterson and colleagues in 2017 in Behavioral Sciences and the Law was a review of 25 empirical research articles examining the impact of CIT training over 10 years. Overall, little can be said about the effectiveness of CIT training due to varying outcomes, a reliance on self-report data, a lack of comparison control groups, and inadequate follow-up data. And then in another study by Amy Watson, who is a leading researcher on CIT, she said, we conclude that CIT can be designated as an evidence-based practice for officer-level cognitive and attitudinal outcomes, but more research is needed to determine if it is designated as an evidence-based practice for other outcomes. Meaning, even though we think CIT is the gold standard of practice in terms of training and officer responses, whether it meets the standard of an evidence-based practice is, I think, a debatable point. And maybe it does on some issues, but not on others. So as we're rolling out these crisis services, which in a talk 10 years into the future will look extremely different, because we're just building a lot of this now, this whole community out-of-hospital responding, we have to think about: we want quality, we want the gold standard, but we also want to build an evidence base that supports that. And so we're going to need people like you to help build that research base, maybe some of you, to inform the literature and allow policymakers like me to pull from the evidence and the quality data, so that when I write out the policies and literally am saying, I am expecting these programs to deliver this kind of service, like, what is the staffing pattern that I am going to require? Not I, Deborah Pinals, but I, the state, or I, the policymaker. We're literally writing out: what is the staffing plan expectation? Well, what is that going to be based on?
It should be based on wanting the best outcomes for the greatest number of people, and on making sure there's safety and equity in that. So I'll stop there with those remarks, and I think we want to open it up to discussion, since there was so much robust conversation. I guess I'm going to be emceeing that, right? So, any thoughts or comments from your work groups? Please use the microphone, because I believe this is being recorded for future use. I'd love to hear, too, if people want to speak for their small groups about what your takeaways were, because I know there were a couple of takeaways that I heard. Not to put people on the spot.

So I'm not sure if this is really a question; it's just my mind processing this. It's really interesting. I'm glad that we're here, but it also feels like we were here 10 or 12 years ago, and then managed care came along, and in my state we lost so many of these services in the years that followed. So my mind goes to the crisis services. We lost our crisis stabilization service, an actual residential alternative that covered 17 counties. Our therapeutic day programs, which served this population as well, were cut big time. And we've lost so many alternative housing situations, because stabilization for the SMI population is not as billable as it was before managed care. That's where my brain has been going as we talk about all of this. I'm glad it's coming back, but I really hope the policymakers are thinking about funding streams other than billability, because it's very difficult to bill services in the quantity needed for a lot of these mobile outreach programs across the board.

Yeah, obviously funding is going to be key here, because none of this will happen without the right funding.
And then the funders are going to want to ensure that the programs actually work, which is why this data is so important. One of the hardest things for a funder, I will say, is that once there's a community pet program, it's really hard to say this isn't worth the money we're spending, because people will have anecdotal data that makes it look like it's doing the community good. But whether it's doing the community good enough, or the best that it could, or whether there are other models that could do better, is really hard to establish. So I know there's a lot of discussion about what these funding streams will look like and about setting standards. But it's going to be very important, I think, to ensure we're not spending money on things that aren't as good as we would like them to be. By pushing the goal of saying we need data as we build these, that every program that gets built should have a data collection component so we can examine it, we can help the funders make sure that what they're funding gives them the bang for the buck, so to speak. I don't know if there are other thoughts on that.

Managed care isn't inherently evil, either. What I showed about the Arizona system, which people are looking to as a model, pretty much one of the most robust crisis systems in the nation, if not the most, was built by managed care. Our managed care department has the words "cost containment" in its name, and yet there's been this huge investment in crisis services, because they collect the data in a way that shows that investing in crisis care is actually cost effective. So it's not so much the payment mechanism as the way it's being looked at, and your point about combining funding streams is key. One of the things that makes the Arizona system work is that they combine the Medicaid funds, the SAMHSA funds, and the state line-item funds for indigent care all together, so there are economies of scale.
So it's not, well, this is the program for the county-funded people, and this is the program for the Medicaid-funded people, and then we have duplicated back-end offices that just create more inefficiencies and costs. It makes sense to invest in crisis services because those least restrictive levels of care are less costly, but you have to structure it so that you're showing that return on investment.

And your position in and of itself, Margie, speaks to that, because having somebody whose job is to focus on that quality and quality improvement makes the point.

Hi, Stephanie Snyder. I work with organizations, usually hospitals, who recognize the clogs in the ED, the diversion to jail, the homeless problem. Can you talk a little bit about how to bring organizations together? I'm thinking of Margie's work in Tucson. You mentioned these different funding streams that can come together to create something very financially viable, but I think hospitals and everyone else are talking two different kinds of math, right? Hospitals see money in, money out. They don't really look at the cost of a behavioral health patient occupying an ED bed, and the tremendous cost of that versus how much it would cost to have that same person in a crisis stabilization center, which has to be a fraction of the cost. So my question is, how do we start to bring these organizations together to speak the same math language, to say this is financially viable for our county? And, P.S., it helps patients as well.

You know, I think those kinds of community conversations are critical, and everybody has to have a stake in the game and something to gain from the solutions. There are requirements for hospitals to go through community needs assessment processes.
They have to establish why they should be a hospital in that community and what community good they're serving, so there are ways to bring them in as part of that. Where profit margins are hurt by ED boarding, that can be another driver for hospitals to participate. Having those kinds of community conversations, I think, is a way to build that. And then, at the policy level, at the state level, working with the insurance divisions on parity issues, network adequacy, and those kinds of policy drivers are other ways to bring these conversations forward.

One of the things I learned in my quality training is that one of the first questions you ask before embarking on a project is: where's the pain? Because that's how you get people motivated to work on a problem. And I guarantee you, all ERs feel pain over behavioral health patients, and there are real costs. I used to work at Parkland Hospital, a huge level-one trauma center with something like 20,000 patients a month going through it, and they were spending $700,000 a month on sitters. That's huge. Not to mention injuries to staff, projections around lost revenue from boarding, and risk to the organization; there's a cost to that as well. Even if it's harder to put in dollars and cents, hospital administrators feel that pain. Those are all things that can be used to make a business case. As far as convening people, every community is different in terms of who's the best convener. Sometimes it's the county, sometimes there's a hospital association; for us, it's centered around our managed Medicaid agency. There's a report that I worked on, along with the guy sitting behind you and Matt, called the Roadmap to the Ideal Crisis System. It complements the SAMHSA guidelines because it has information about the service continuum and all of that.
But it also talks about setting up a governance structure and getting community collaboration, and there's a website, CrisisRoadmap.com, where you can download it. And with that, we are at the end of our time. Thank you all for joining us so early in the morning; we hope you have an excellent day. I think some of us will stick around, so feel free to come up and ask more questions. Thank you all for coming.
Video Summary
The video transcript discusses the importance of measuring the effectiveness of crisis services using process and outcome metrics. The speakers emphasize the need to focus on preventing unnecessary hospitalizations and reducing suicide rates. Specific metrics are mentioned, such as the percentage of stabilizations without admission and the number of people kept safe over 30 days. The use of data to identify gaps and improve community crisis responses is highlighted. The importance of customer satisfaction and safety in tracking metrics is also discussed, including the use of the Net Promoter Score. The speakers note the need for collaboration with external stakeholders and the inclusion of diverse voices in decision-making. Lastly, the video stresses the need to build an evidence base for crisis services and to secure funding and policy support.
Keywords
measuring effectiveness
crisis services
process metrics
outcome metrics
preventing hospitalizations
reducing suicide rates
stabilizations without admission
data-driven decision-making
customer satisfaction
Net Promoter Score
collaboration with stakeholders