Treatment Resistant Depression from Multiple Perspectives: Does It Exist?
Video Transcription
Welcome to a panel discussion. So my name is Dr. Carl Marcy. I am the Chief Psychiatrist and Managing Director of the Mental Health and Neuroscience Specialty Area at OM1. OM1 is a health tech data company, and I'll describe what we do briefly in a moment. But today, I'm excited to present Treatment-Resistant Depression from Multiple Perspectives: Does It Exist? And you can tell from the subtitle, we're being purposely provocative. Essentially the way this will run, I'm gonna do a brief overview of some research we did with a company hopefully by now you're all familiar with, Compass Pathways. And then I'm going to introduce our panelists one at a time and they're each going to give some remarks from their perspective on the topic. And then we're gonna have a little panel discussion and then we'll open up to you all to ask questions. So with that in mind, let's get started. First, the relevant disclosures. So I'll put those up for you to read, but we each have something to disclose. And getting more to the point here. So I mentioned OM1 as a health data technology company. We are in the business of taking data from multiple sources around the US. And interestingly, not focused just on medical and pharmacy claims or just on EHR, but we combine those two elements with government data, social determinants data and other data from around the country, organize it in the cloud, enrich it and then mine it for insights. And I am very privileged to look over a population of over 5 million patients in the mental health arena that are taken care of by close to 15,000 mental health specialists that represent about 2,500 clinics in the US. So we're nationally represented. And today what we're gonna do is talk about major depressive disorder and a subtype of major depressive disorder, treatment resistance. And I put this slide up because this is the cohort or the population that the data I'm gonna show you in a couple of slides is derived from. So we have close to a half a million patients who have been diagnosed with major depressive disorder. And that's two coded diagnoses more than 30 days apart. They're all adults over 18 years of age. And we have on average close to seven years of data and 97% linked claims. Importantly, every single one of these patients has an electronic health record. And that's a full chart from a mental health specialist tied to the diagnosis. And I'm gonna describe a little bit, as we go through, how we use all of this data in various ways. There are a number of highlights to the data set. I'm happy to talk to anyone afterward, but we'll get on to the study at hand. And the study at hand was really a comparison study to look at the application of different definitions of treatment-resistant depression in real-world data. And to be clear, this is a real-world data set. And I'm gonna walk through the three different definitions we used because this is really the core of the discussion. So the first definition is probably the one you all are most familiar with. We're referring to it here as the regulatory definition. This is the idea of having had two failed sequences of antidepressant treatment of adequate dose and duration within an episode of major depressive disorder. And as simple as that sounds, and as commonsensical as it sounds, it turns out to be a very hard definition to apply in the real world because the devil is in the details. What is an adequate duration? What is an adequate dose?
What is the time between med orders that is sufficient to represent whether someone's getting better or whether it's a failed treatment? So we spent a lot of time with our sponsor and experts going back and forth and trying to really mirror clinical trials data as much as possible because the goal was to be able to create essentially a virtual comparator. The next definition we refer to as the data-driven definition. And this comes from work published by Janssen a number of years ago that looked at claims-based data. And they used a machine learning approach to this data and they set what we call in artificial intelligence, and Joe will talk about this, a gold standard. Because when you're building these models, you need to have a gold standard. And the gold standard in our data-driven definition were patients who had been identified in their medical record, or in the claims data in this case, as having had vagal nerve stimulation, deep brain stimulation, or ECT. So you can tell that's a very clear standard and then they took a whole lot of data and modeled it and I'll show you some results from that. What they also did is took all the data and reduced it down to essentially what we thought would be a very easy definition to apply to a real-world dataset. And that's part of the reason why we liked this definition because it's very simple. Three antidepressants, different ones, in a given year, a 12-month period; doesn't matter whether they're the same class or not, they just had to be different; or one antidepressant and one second-generation antipsychotic. So very simple, easy definition to apply. So this is taking an AI data-driven approach but then reducing it to a very simple list, or in this case, a simple definition to apply. And then the third one is probably gonna be the hardest one to get your head around because it is a full AI model and Joe will describe this in a few minutes in some detail and we're happy to describe it more after. But this is where we are using a different gold standard and this is where the electronic health record is important. We go into the records and we identify a clinician attesting that the patient has treatment-resistant depression. What do I mean by that? The actual words are written down in various forms. We have confirmed through various approaches that it is an affirmation, so they're not saying rule out treatment-resistant depression or does not have treatment-resistant depression. It's an affirmation and then we take that cohort and we build a model based on it and then we apply it to the larger dataset. And what comes out of that is another application of the definition that we can apply. And so if you think about it here, the regulatory definition has a common-sense gold standard: failure of two so-called adequate trials of antidepressants. There's no time limit, importantly, to that. It's whenever we identify the first medication in the longitudinal medical record. As long as that med order is continuous and never broken for more than 90 days, it's considered a continuous treatment. When they switch to a different antidepressant, that's considered the second line of therapy and then they are not qualified as treatment-resistant until a third treatment occurs and that's the so-called index date. And that's how we're looking to see if we've had two continuous treatment courses; we use the dose in the medical record to make sure it's adequate and we apply that definition.
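Of the three, the data-driven rule is the most mechanical, so it is the easiest to illustrate in code. Below is a minimal sketch, in Python, of how that rule (three different antidepressants within any 12-month window, or at least one antidepressant plus a second-generation antipsychotic) might be applied to a single patient's medication history. The drug lists, field names, and helper function are hypothetical and purely illustrative; they are not OM1's actual implementation, which also handles dose adequacy, claims linkage, and the regulatory line-of-therapy logic described above.

```python
from datetime import date, timedelta
from dataclasses import dataclass

# Illustrative drug lists only; a real implementation would map full formularies/NDC codes.
ANTIDEPRESSANTS = {"sertraline", "fluoxetine", "bupropion", "venlafaxine", "mirtazapine"}
SECOND_GEN_ANTIPSYCHOTICS = {"aripiprazole", "quetiapine", "olanzapine"}

@dataclass
class MedOrder:
    drug: str    # generic drug name
    start: date  # date the order or fill begins

def meets_data_driven_trd(orders: list[MedOrder]) -> bool:
    """Data-driven definition: within any 12-month window, either three
    *different* antidepressants, or at least one antidepressant plus at
    least one second-generation antipsychotic."""
    orders = sorted(orders, key=lambda o: o.start)
    for i, anchor in enumerate(orders):
        window_end = anchor.start + timedelta(days=365)
        in_window = [o for o in orders[i:] if o.start <= window_end]
        ads = {o.drug for o in in_window if o.drug in ANTIDEPRESSANTS}
        sgas = {o.drug for o in in_window if o.drug in SECOND_GEN_ANTIPSYCHOTICS}
        if len(ads) >= 3 or (len(ads) >= 1 and len(sgas) >= 1):
            return True
    return False

# Example: three distinct antidepressants inside one year -> flagged
history = [MedOrder("sertraline", date(2021, 1, 10)),
           MedOrder("bupropion", date(2021, 5, 2)),
           MedOrder("venlafaxine", date(2021, 11, 20))]
print(meets_data_driven_trd(history))  # True
```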
So we did this on over half a million patients and I had the privilege last night flying over of sitting next to Maurizio Fava. For those of you who don't know Maurizio, he's the head of the department at Massachusetts General Hospital and is one of the, I don't know if he'd like this, grandfathers, certainly a father, of the treatment-resistant depression idea and notion, and also the leader of the Massachusetts General Hospital approach to classifying treatment resistance. And I was telling him about this talk and I said, you know, so Maurizio, we have these three definitions, we're looking at them in the real world, what do you think the overlap is across these three populations? And I'm inviting you to answer this question in your head. And he answered it without hesitation. 40%, okay. Well, here's the actual overlap we found. So we identified close to 73,000 patients who met the criteria for one of the three definitions. And if you do the math, that represents about 15% of the population, which is in the range of what we've seen in the published literature, between 5 and 50%. But if you look closely, we see that 61,000 of them only met one definition, 11,000 or 16% met two of them, and only 776 patients, about 1%, met all three. So it's a little less than 40%. It's a lot less than 40%. And that inspired us to have this panel. For those who like visuals, this is another way of looking at the same data. I direct your attention to the Venn diagram and what you can see here, so the Ds represent the data-driven definition. And you can see by far, that is the largest cohort. And we'll talk a little bit about that. But we've got in that close to 60,000 patients. And then the RD is the regulatory definition. And you can see that one straddles the fence and has about a 50% overlap with the data-driven definition. But also there's about half of it that sits outside that circle. And then patient finder is the OM1 AI model, which Joe will explain in a moment, sort of somewhere in between. So with that all in mind, I'm going to transition now and introduce our panelists one at a time. And then they're gonna give some commentary on these results from their perspective. And then we'll have a little chat. All right, so I'm gonna start with Dr. Steve Levine. He's a board-certified psychiatrist, internationally recognized for his contributions to advancements in mental health care. He currently serves as the Senior Vice President of Patient Access and Medical Affairs for Compass Pathways. Dr. Levine completed internship and residency in psychiatry at New York Presbyterian Hospital. And he then completed fellowship subspecialty training in psychosomatic medicine and psycho-oncology at Memorial Sloan Kettering in New York. He has published extensively in both peer-reviewed journals and popular media, presented to both professional and lay audiences around the world, served in leadership roles for professional societies and not-for-profit entities, and received numerous awards for leadership and service. And we're very pleased to have him. And so, Steve, do you wanna make some comments? Thank you very much. Am I doing that from here or there? You can do it from wherever you like. I'll do it from, eh, I'll do it from there. Okay. I'm a restless person, I like to stand. Good afternoon. Good, now, you know, that's better than most audiences do. I like to start off with warmly greeting each other.
You know, one of the fun things about speaking in a setting like this is that you write your own bio and then somebody else reads it and then it becomes true. So thank you for so nicely reading that. Yeah. So I am going to focus on the regulatory definition. And speaking here not as a regulator but as a manufacturer who is attempting to develop a new treatment for treatment-resistant depression through the regulatory pathway. And thus, we are subject to a regulatory definition and meeting that standard in enrolling patients. So as Dr. Marcy introduced, we intentionally applied a provocative title to this talk, does it exist? Obviously, it exists, of course. And the challenge is how do we define it and how do we recognize it? In 1964, Supreme Court Justice Potter Stewart was asked for his test for obscenity. And his response famously was, I know it when I see it. And in the real world, in clinical practice, I think that's essentially how we wind up defining treatment-resistant depression. And my colleague, Dr. Lisa Harding, who'll be speaking after me, who'll be presenting the clinician perspective, will go into that in some more detail. But I think it's relatively uncommon that a clinician in practice is really rigidly thinking of any one definition when they are clinically caring for a patient so much as applying some kind of pattern recognition, they know it when they see it. This is somebody who's difficult to treat, which is another term that's been applied more recently to this population. And in that way, making some treatment decisions from there. Now, one of the reasons why a definition is important from a regulatory perspective, important to a manufacturer, is that if you are pursuing an FDA approval, then you don't just get to know it when you see it. You don't get to proceed based upon a more vague notion of this population that you're treating because you have to demonstrate safety, efficacy, and quality for a defined indication, for a defined population. And so whatever challenges there may be, and we're gonna go into those in some detail today, the bottom line is you need some kind of definition. Now, what's emerged over the past couple of decades is this regulatory definition. And just to recap that, it's at least two treatments of adequate dose and duration within a current episode. And there is some lack of agreement as to what constitutes an adequate dose or an adequate duration depending upon the line of treatment that we're talking about. Now, if we think about standard treatments like SSRIs, I think most would agree, at least on the duration side, that it's something on the order of weeks going into months, four weeks, six weeks, eight weeks, some might say 10 or 12, but not six months or a year. Although, when we look at the typical duration of a trial of a therapy, typically it is much longer than what we might consider to be adequate prior to making a decision to cease that line of therapy and proceed to another. And so ultimately, when you look at these patients within trials, in most cases, these patients have had fairly long trials, and even though they might meet the minimum standard of that regulatory definition in just two or three months, in most cases, these episodes have lasted for a year, two years, three years. And in most cases also, this is not the patient's first episode of depression, although that is the minimum criterion. This could be within a first episode. In most cases, these patients have had four or five, six or more prior episodes of depression.
And on top of that, the difficulty that in the real world, there is not necessarily a bright line difference between an episode of depression and a longstanding remission. So that being said, one of the ways that we see this play out when we're recruiting patients into trials is, of course, there are both inclusion and exclusion criteria. With the inclusion criteria, despite the fact that this is relatively clear-cut, if you look at the rate of patients who are screened for trials versus those who are actually enrolled, not even considering the exclusion criteria, it's a fairly small percentage. And that's because when you really start to quantify the length of these trials or the maximum dose of a given intervention, in many cases, they don't meet the necessary standard in order for that to be considered an adequate trial. And so, you know, these are some of the challenges that we face in drug development and new treatment development, that even with a relatively clear-cut standard for a definition, as imperfect as it may be, it still does not actually, in many cases, align with the perception that the person, typically the clinician referring the patient into the trial, may have when they're thinking of, you know, what is a patient with treatment-resistant depression? So I've covered some of the issues from a high level here. We're going to dive into these in more depth as we proceed first through the introductory remarks, and then we move on to the Q&A session with Dr. Marcy. So thank you. Thank you, Dr. Levine. You know, Steve, I'm really glad you brought up time, right? Because I think that's a critical component here, and particularly in a regulatory environment, and you guys are doing clinical trials. You know, you've got budgets, and you want to get to market as quickly as possible. You're thinking about time a little differently than, say, a clinician in their office. And the thing I want to point out, and for those of you coming in a little bit late and didn't see the opening remarks, we're talking about three different definitions of treatment-resistant depression, and all three of them have very different timelines. So the regulatory definition, the two antidepressants of adequate dose and duration, we didn't put a time constraint on it. And I will tell you from the data that the average length of that episode was over two years. In fact, it was close to three years. That's a long time. The data-driven definition, three antidepressants, different ones, or one antidepressant plus an antipsychotic, we constrain that to within a year. And then, as you'll hear in a moment, the AI model looks at the entire record and doesn't care about time at all. So I think the time perspective is really good. Okay, so our next speaker and panelist is Dr. Lisa Harding, who I've gotten to know recently and have enjoyed not only her charisma, but her brilliance. Dr. Harding is a board-certified psychiatrist and depression expert, is specially trained in providing all treatment types for depression. She completed her residency in psychiatry as the Chief of Interventional Psychiatry at the Yale School of Medicine. She has completed over 4,000 procedures in electroconvulsive therapy, IV ketamine, Spravato, and transcranial magnetic stimulation. She is also experienced in managing complex medication regimens and skilled in various psychotherapy methods.
She is also very active in the American Psychiatric Association as a foundation ambassador and was once a Diversity Leadership Fellow, as well as an American Academy of Psychiatry and the Law Rapoport Fellow, both of which are distinguished national fellowship awards offered to only the top 1% of resident psychiatrists. So I welcome Dr. Harding. Thank you. Like Steve, I also wrote that for myself. Thanks a lot for coming. I think we were all thinking about who would turn out to hear us talk about this. So as you guys would have recognized, I'm the resident black sheep, pun intended, of this panel because I'm a clinician at heart. I have done research, but I'm a clinician at heart. When we talk about an interventional psychiatry service, what are we talking about? It's a made-up specialty, right? And so we basically take treatment-resistant depression patients and figure out where to triage them to. How many of you here went to residency to figure out insurance criteria of what to choose for your patients? All of us here, right? And so in thinking about what I wanted to present to you, I wanted to bring you back to your patients and what you think about when you see your patients. These clinical definitions that we learn and we're taught in our residency, we come back to the STAR-D clinical trial data. And we start our patients off on an oral antidepressant, and at 4-6 weeks we're augmenting that therapy. Then we add on psychotherapy or we add on an NDRI or we switch classes. For the residents in the room there's actually a paper called the key takeaways from the STAR-D clinical trial. I actually encourage everybody to keep revisiting this because we learned that a third of the patients that we treat do not respond to oral antidepressants and there's this little thing of narcissism in medicine: it is the medication that isn't working. How many of us go back to the table when a patient doesn't respond and say, huh, did I get this definition? You know, did I get this diagnosis correct for this patient? So we're going down our treatment algorithms. Just of note, the FACE article was written in 2010 and the APA ratified it in 2015 and since then newer treatments have become available and the APA still has not. I see some head shaking in the audience. Yes, the APA has not really revisited those guidelines. And so when we think of our patients with a consultation, a person comes to us in this interventional psychiatry model and they say, you know, hey Dr. Harding, I have tried and not responded to these antidepressants for this long. We have on our end of the table, I have a question, I do have a question. How many of you pull out the clinical criteria for the clinical trials and try to match your patients based on that inclusion-exclusion criteria and decide, ha, this is the treatment for the patients? Sorry Steve. And so this is how you and I practice medicine. We see a patient, we go through an initial diagnostic consultation, we get an ATRQ of those medications the patient has tried and not responded to. I encourage us to look at the NNCI data of saying patients have failed the medication; the patients didn't do anything wrong. So the patients don't respond to these treatments and we go down the algorithm. When it comes to choosing that treatment, we look at three things. We look at the patient's insurance, what will they get covered for and what not.
Because so many of us would like to recommend TMS for our patients and they have not had an adequate trial of, and non-response to, psychotherapy. And then we look at patient preference. We talk about treatments a lot but we don't talk about how patients feel about getting the treatments that we are recommending to them. There's a paternalism in our medicine that we often don't talk about. So patient preference. So we talked about the insurance, the patient preference, and then us as clinicians. After practicing for a couple of years, we each figure out I see this patient profile. I've been treating patients like this for a while. I feel like you know you may have been touched by a little bit of borderline personality disorder. You might respond to this kind of treatment better. And that's how we think about it. The World Journal of Psychiatry in 2020 published a consensus definition for treatment-resistant depression and it is the adequate trial of and non-response to two or more medications for an adequate dose and duration. You know, to answer the question earlier as in how long is the depressive episode, the average time to relapse from the STAR-D clinical trial was 3.1 months. And when we look at that, you and I, when we're seeing patients in front of us, when the patient comes back to us and their PHQ-9 is starting to increase again, some of us term that as a relapse. And so we are then looking for a higher level of care for our patients. How many times have patients come to you and said, I thought I was better, I could stop this medication. Brings me to the second part of this, our treatments. We all talk about an induction phase and then each of our medications or treatment paradigms then go into a maintenance phase. I think it's really important in all of these discussions of treatment-resistant depression to talk about the chronicity of the illness that we are trying to treat. We often don't remind ourselves that depression is chronic and, you know, we were having a discussion earlier about using analogies. I think diabetes is an okay one because it's a chronic illness that can be managed and patients sometimes quote-unquote keep doing the right thing and they still have a relapse. So it's important not only to talk about the induction phase but maintenance. And when we're in that maintenance phase, does a patient become treatment resistant? Is it in that one episode or is it in that maintenance episode? Because we still haven't cleared up any of those definitions. When we talk about maintenance, I think it lies in education. In ourselves, for ourselves, thinking about our limitations as physicians and defining it for our patients. We talk about models, we talk about definitions but we can't lose the humanness of what we're trying to do. When a patient is in front of us and we're looking for them to fit criteria to have things happen, how are you measuring relapse? What are the criteria that a person reaches a stage of unhappiness all over again? Are we arbitrarily looking at their medication trials and failures? And that is how we're looking at getting them to the next level of treatment. Or are we looking at these patients as a whole? We have some other questions we'll get into so I'll let Carl take over. Thank you. You know, I think it's great that you bring up the patient perspective, right? Which is so important in our work.
And I think, you know, if you're in the audience and you're thinking about these three very different definitions that turn out to have less than or nearly 1% overlap and they're all, in theory, treatment resistant, it does make you wonder who are these patients, right? And so I'll give you a little bit of color commentary on who they are and we can talk more as we do the panel. We know, for example, the patients in that large bubble, this is the data-driven bubble, so it's three antidepressants or one antidepressant plus an antipsychotic within a 12-month period, have slightly higher comorbid anxiety, right? And so that might get some of the wheels turning clinically. We know that the patients in the regulatory definition actually did not have the most severe levels of depression. So it's possible over these two-year periods, as you said, they're coming in and out of it. They get a little bit better. Well, let's keep going on this medication. They get a little worse. Well, maybe it'll come back. That could be a life event. We're kind of going to ride that out. I know I've certainly had those conversations with patients. And then when we get to the third definition, the AI definition of patient finder, where the gold standard is a clinician saying you have treatment-resistant depression, they have the highest PHQ-9 scores and they have the most complex utilization of services. And so you can start to think, well, maybe these are patients that have been at this for some time and clinicians are reacting to that complexity. All right. Our third panelist, in full disclosure, is not a clinician, but that doesn't mean he isn't brilliant. So this is Joseph Zbinski. Dr. Zbinski is the Managing Director of AI and Personalized Medicine at OM1, and he's a colleague of mine, where he oversees development and deployment of AI products across a range of use cases in partnership with pharmaceutical and medical device companies, providers, and health plans. Prior to joining OM1, Dr. Zbinski was a consultant in the pharmaceutical and medical products practice at McKinsey and Company. He specialized in advising life science and healthcare clients on the use of AI and advanced analytics, including identification of unmet medical needs, portfolio optimization, and AI strategy. His academic research focused on applications of Bayesian network modeling methods to predicting and stratifying environmental human health risk. Dr. Zbinski holds a master's degree in engineering management from Dartmouth College and a doctorate with a focus in health analytics from the School of Public Health at the University of North Carolina at Chapel Hill. Dr. Zbinski? Thank you, Carl, and I also wrote that bio, of course. I can zhuzh it up next time, though I didn't realize how it sounds now that I hear it read back to me. But thank you again for your attendance today and for listening to us talk about this, to us, interesting topic, and hopefully to you guys as well. As Carl mentioned, I'm very far from a clinician and certainly not an expert in mental health either, but I do spend my time on applications of AI across health fields, both in mental health but also in a number of other fields. And so I wanted to use this time to just introduce a few AI concepts that are practical, of course, we all hear about AI these days and have been hearing about it for some time, but a few practical concepts for why it can actually add some value and in particular add some value to this question of identifying treatment-resistant depression patients.
So I'll mention a couple things first that underpin why we came up with this third model. So the data-driven model and the regulatory model that my colleagues here have spoken about are in the literature, they're accepted, they're familiar. We had a hypothesis that because of some of the strengths of AI, we could use it to find patients who weren't otherwise being found by existing definitions. And that is the question I think that's important to keep in mind. We've heard repeated over and over again, I'm sure it's true for many of you clinicians in the audience, that you know it when you see it with TRD. If there exists such a patient that you know is a treatment-resistant patient, but who would not meet either of those other two definitions, then it's important we come up with some other way to find them. And that was our sort of motivation for attempting this third way using AI technology. So two things about AI that make it useful for this kind of problem. The first one is AI is very good at picking up subtle patterns in large datasets. You can brute force your way through this if you sit down and review a chart or review a patient's record, but that's not scalable. And it's also not always even observable, right? The ways in which kind of the different aspects of a patient's history combine, you know, the sequence of events that happen. These things don't pop out naturally unless you spend an enormous amount of time sort of poring over a single set of records. AI can do that quickly and well across a bunch of people. So that's one thing to keep in mind. The other strength that's important is that AI is very good at weighting and balancing different factors to kind of come up with a synthetic understanding of what's going on with a patient sitting in front of you. I like to say that AI models cannot be reduced to checklists. And what I mean by that is if you have a checklist that is sufficient to sort of capture everybody that you're looking to capture, you don't need an AI model. The AI is good at sort of saying we have some intuition about these different factors that are contributing. Very hard to nail down how they balance against each other. AI can help us do that. And so if we put those strengths together, the technology that we've used to define that third cohort that you saw, this patient finder technology, relies on both of those aspects of AI. And the process we use, whether it's, you know, here in the TRD context or any context, is always the same. First, we define this sort of gold standard cohort, right? This incontrovertible group of patients who have the condition we're looking for without any question. Then we use the AI system to say what do those patients share in common in their medical history? You know, if we look at their diagnoses, their procedures, medications, labs, history of their visits with different providers, what patterns can we observe and extract that are similar to that gold standard and that are distinct from other patients? We call that sort of composite set of signals a phenotypic profile. I think of it as like a fingerprint. We derive this sort of database fingerprint using this AI technology. And then once we have that, we can compare other patients to it. And we can say how similar are other patients to that fingerprint and therefore to the gold standard that we chose. Of course, in this context, the challenge is we don't necessarily want to rely on either of the other two definitions to be our gold standard.
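To make that workflow concrete, here is a minimal sketch of the general pattern described above: take a gold-standard cohort, learn a composite profile from structured-history features, then score other patients by how closely they resemble it. A generic scikit-learn classifier stands in for the model; the feature names and toy values are hypothetical, and this is not OM1's actual patient finder implementation.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical feature matrix: one row per patient, columns derived from
# diagnoses, procedures, medications, and visit-history patterns.
# label = 1 for the gold-standard (clinician-attested TRD) cohort,
# label = 0 for other depressed patients used as contrast.
features = pd.DataFrame({
    "n_distinct_antidepressants": [4, 1, 5, 2, 3, 1],
    "had_tms_or_ect":             [1, 0, 1, 0, 0, 0],
    "inpatient_psych_stays":      [2, 0, 1, 0, 1, 0],
    "max_phq9":                   [22, 9, 20, 12, 18, 8],
    "label":                      [1, 0, 1, 0, 1, 0],
})

X = features.drop(columns="label")
y = features["label"]

# The "phenotypic profile" here is simply the fitted model's learned weighting of features.
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Score a previously unlabeled patient: how closely does their record
# resemble the gold-standard fingerprint?
new_patient = pd.DataFrame([{"n_distinct_antidepressants": 3,
                             "had_tms_or_ect": 0,
                             "inpatient_psych_stays": 1,
                             "max_phq9": 19}])
print(model.predict_proba(new_patient)[0, 1])  # similarity-style probability of TRD
```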
Not that they're not, you know, doing their own good work, but we wanted to start here with something that, as I said, was sort of incontrovertibly positive. And that's why we chose the sort of physician or psychiatrist attestation diagnosis that we went through for that gold standard. So we ended up with, I think, about 3,200 patients for whom we had in the narrative, in the clinician's narrative, positive attestation of the presence of TRD. So this is as close as we can get in data to what we've all discussed, it sounds like you clinicians are all familiar with, which is this notion of I know when I see it, right? Then you write it down, and then we can read it. And now we have our gold standard. Knowing that, we came up with this phenotypic profile. And of course, we do all sorts of statistical testing. And folks are familiar with AUC metrics, for example. I'm looking for nodding. Thank you. Thank you so much. The long story short of those is a standard metric we use as a first pass to say how well is this model doing. 0.5 is a coin flip, 1 is a perfect model. We never get one. I get excited above 0.7 and above 0.8 is very good. That's sort of your quick orientation to AUCs. In this case, our AUC was 0.87, which was quite good. That means in a depressive population, this model can pull out those people that the psychiatrists have attested are TRD positive with very, very, very strong performance. And when we sort of beat up the model as we do, when we said, does it perform well in women? Does it perform well in men, older people, younger people? It held up quite well. And when we looked at why the model was calling out these patients, we noticed a number of interesting things. So first of all, it's always a good confirmation when models find things that you know are true, right? So for example, if we see TMS in the patient's history, the model will light up and say that's an obvious treatment-resistant patient because someone who progressed to that treatment pretty incontrovertibly, you know, was under the supervision of a clinician who made that judgment. So that makes sense. There was a lot of other subtler but important signaling information from the patient's mental health history. So things like, you know, evidence of more complex visits. For example, you know, a visit with a psychiatrist that was followed by, you know, an inpatient stay or something like that. A very clear trigger that shows up in the model. And beneath that level, there's even a substratum of physical health factors that themselves are not super predictive. We tested this. It had an AUC of about 0.67, I think, which is not terrible, but you wouldn't want to, you know, rely on it too heavily. But things that are, you know, separate from the patient's clear mental health history records, but that provide additional predictive power. The point of all of this is because of the performance, because we trust the gold standard, we believe that this patient finder cohort that we showed you a couple minutes ago is a real cohort. That, you know, this is a way of operationalizing some of this notion of we know these patients when we see them. Because we can turn that into this phenotypic profile and then look for patients similar to it. We have a way of connecting the dots back to the clinician's evaluation, which is quite powerful. We can discuss, you know, in the subsequent portion of our talk as a group, why we don't think these three circles in our Venn diagram overlap. 
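For anyone who wants the AUC intuition in code rather than words, here is a toy illustration using scikit-learn. The labels and scores are invented for the example, and are chosen so the result lands near the 0.87 reported in the talk; they are not study data.

```python
from sklearn.metrics import roc_auc_score

# Toy example only: 1 = clinician-attested TRD, 0 = other depressed patients.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
# Model scores: higher means "more similar to the TRD phenotypic profile".
y_scores = [0.91, 0.78, 0.35, 0.40, 0.22, 0.55, 0.10, 0.83]

auc = roc_auc_score(y_true, y_scores)
print(f"AUC = {auc:.2f}")  # ~0.88 here; 0.5 would be a coin flip, 1.0 a perfect ranking
```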
I will confess when we got started, I think we were expecting them to overlap much more closely. But as we sort of study this system and this pattern more and more closely, it becomes clearer and clearer that there's some notion of what TRD is, but none of these different definitions are capturing it perfectly. The reason I'm very proud of the work that we've done with AI here is because of what I said at the beginning of these remarks. I've become convinced, you know, reviewing these records myself and talking with the clinicians who know these patients best, that there are people who would be helped by treatments, but who would not qualify or would be missed if we only rely on these sort of mechanical checklist-y types of definitions for what TRD is. And I do think that, you know, unlike some of the hype you might hear around this town sometimes, this is one of those times when AI can actually be clear and practically useful. So thank you for your attention. So the irony, Joe, of saying AI is not a checklist to an audience of psychiatrists who study the DSM, which is full of checklists, is not lost on me. See, I'm not a psychiatrist. But to that point, and for the duration here, I'm going to leave this slide up, because this is really what we're talking about, the lack of overlap of these three bubbles. Going back to the data-driven definition just for a moment, remember, it was a machine learning model that was reduced to a checklist. And what I think this is a great example of is if you do that, you're going to end up with, for those of you who remember med school statistics, either a type 1 or a type 2 error. And in this case, and Joe and I looked it up, it's a type 1 error. You've over-included in that big bubble too many people who really probably shouldn't be there because you used a checklist. And the example Joe gave of TMS in the patient finder bubble, that green one there, clearly in the model it ranked very high. So if you had TMS, the model is going to say, yep, you've got TRD. But the performance of the model in the absence of TMS was just as high, which means there are other factors that can go into it to get you where you want to go. Okay. So now we're going to open it up to a bit of discussion. And, you know, we're going to start with, I think, a very simple question, which is, you know, this is a pretty crowded room. Thank you all for coming. And we're talking about, you know, whether treatment-resistant depression exists, and we kind of know it when we see it. But we're obviously not doing very well as a field in defining it. And I don't think I'm the first person to say that. So let's start with why does it matter? Steve, why don't you start? Yeah, thank you. So really important implications here, and I'm going to try and stay in my lane on this panel in presenting the regulatory perspective, again, from the perspective of a manufacturer and not as a regulator. But so language matters, right? In many contexts, language matters. And as we talk about different definitions, of course, those are disagreements in language. This is a slight tangent, but while you all are listening, hopefully a worthwhile one, which is, you know, just thinking about language. We often use the language of patients having failed treatments, which isn't intended to, but nonetheless has a connotation of blaming the patient, that they've failed. So I encourage us all to maybe flip that on its head to patients have been failed by their treatments.
Stepping off the soapbox now, you know, some of the implications of this are if we all agree that there are big unmet needs in the care of patients with major depressive disorder and, by extension, treatment-resistant depression, then that means that there's some urgency in creating new treatments that may be more effective and more acceptable for these patients. And one of the things that people often bemoan is how long it takes to develop new treatments and how expensive it is and how the expense of that development then, you know, translates to the health systems, you know, potentially to patients. And there's some urgency here, right? Because every 40 seconds somebody dies of suicide. So, you know, in the absence of having an ICD diagnosis, a code for treatment-resistant depression or a biomarker, there's some emerging biomarkers for major depressive disorder, none accepted by regulators at this point, but certainly no, you know, distinctive biomarker for treatment-resistant depression. In terms of being able to, in an efficient way, be able to recruit patients into trials, which is where the expense comes in, right? It's all the time it takes to be able to recruit appropriate patients into trials. I think the more accurate we are and the more efficient we are in identifying the right patients, then the more likely we are to be able to recruit these patients into trials and be able to start to answer the important questions required for regulators to make a decision about approval. And then this flows through the whole cycle because once a treatment is approved, then really the critical step for patient access is a decision by a payer, an insurer, to provide coverage for that treatment, reimbursement for that treatment. We obviously don't have a perfect healthcare system in this country, but 90% of people have some kind of insurance. Most people depend upon that insurance coverage to afford their care. And naturally, payers are very interested in understanding, are the patients being referred for this treatment appropriate for the treatment? Are they the ones that were studied in these trials? Are they the ones that have been shown to be the appropriate ones who may benefit from this treatment? So I've been yammering on. I'm gonna pause there and pass it on, but lots more to say there. Yeah, Lisa, Joe, do you want to comment? I'm generally loud, but I'll use the mic. Can you guys hear me? Yeah. This is good. So the reason why I think the question is provocative is because, and going back to, you know, when you see it, you know what it is. I don't want people to think that it's like a Rorschach card and tell me if you think this is depression, right? Like that's not what we're trying to do. I think the DSM got a whole lot of wheels that it never intended to get. I think classifications of subtypes of depression absolutely matter, because in the arc of training, and I'm an educator, I think, first, in the arc of training, I think we have to give people a framework from symptoms to syndromes in a really concise way of how to think of things. And the guardrails are on there in the classification simply for that. Where I draw the line is when it becomes binary and we become so boxed in that when we see a patient and we want to treat a patient, and you know that that patient really would, at this stage, meet the criteria and probably respond very well to TMS, you then get back a denial saying, oh, but this is the definition we're using.
They have to try and fail five. And then another company takes that criteria or that classification and rolls with it. So it comes to what both Steve and Carl were saying. It's like, how are we able to treat patients using this classification? Because it's useful. It's very, very useful across the affective mood disorders. But how do we scale it back where there's not so much regulation that we can't get our patients care? Yeah, I think that's a great point. And Joe, maybe you can comment a little bit because you've applied these models in other condition areas. You know, when you think about the fact that this is a real world dataset, not a clinical trial where there's all kinds of other constraints and opportunities, you know, why do you think there's 1% overlap on these three definitions? Because they're not bad definitions. That's right. Well, if I'm playing my provocative role because I don't have a reputation to protect as a psychiatrist, it's because treatment-resistant depression doesn't exist. There, I've said it now. I can actually say it, I think. I think, you know, it's very interesting to work in this field and compare it to other fields that I've worked in. On the one hand, you know, the narratives that I was describing are richer in psychiatry than any other field of medicine I've ever seen. And I'm sure no folks here are surprised by that, but the descriptives of what's going on are tremendously rich. On the other hand- And just to interrupt there, so when you say narrative, you're talking about the clinical narratives in the electronic health records that we're using to do all kinds of things. How do they compare to, for example, what you see in our dermatology notes? Oh, are there any dermatologists in the room? I'm guessing no. Let's just say less rich. And more, what do our dermatology colleagues say? Hand on the doorknob all the time, just ready to leave. Yeah, and I think they're, on average, one or two sentences. Right, very quick, very sort of cut and dry. So yes, when I talk about the narratives, I mean, you know, the clinical description that's in the patient's record, in their notes, in their chart. On the other hand, you know, as I've also come to learn, there's essentially no blood tests. There's no, you know, kind of, let's say, objective measurements that can be used to get at some of these definitional questions in ways that are more common in other fields. And I think that, you know, the other point you make about these datasets being real-world datasets, there are always problems with real-world datasets. We, you know, go to great lengths to make these datasets as accurate and clean and complete as possible, but it's not possible to make those perfect. On the other hand, they're real. You know, clinical trial datasets are functionally controlled to elicit a mechanistic relationship that's being tested, but the cost is you control away lots of things that are real about patients' experience. And when you sort of add up those factors, it doesn't surprise me that if there's something very complex about nailing down who these patients are, and we try to approach it from different directions, we end up with that very narrow diagrammatic overlap in the middle. Great.
So I think most people in this room probably realize, but it's worth noting that, you know, after really, what, you know, two or three decades of incremental innovation in the field of depression and treatment-resistant depression, you know, great strides with TMS, you know, tweaks to ECT, you know, antipsychotic augmentation, STAR-D and all that, we're on the cusp, I think, in the next few years of some new treatments that are gonna have, you know, really exciting and novel mechanisms, including COMP360, one of the psilocybins. When you think about where we could be in the next one to five years with novel medications and treatments, does that change the urgency of getting this right? Steve, maybe you start. Yeah, I think it does, right? And I think, you know, this in some ways relates back to why don't we see more concordance amongst those circles up there? Why is there not more overlap? You know, if we consider that, you know, 80%, excuse me, of antidepressants are written by primary care physicians, so it's only a small subset of these patients that even make it to see a psychiatrist. But even when these patients are seeing, you know, an expert in the diagnosis and treatment of depression, because historically we've had treatments that take a fairly long time to work, I think there tends to be a relatively low degree of urgency. Also, you know, we have lots of medications, but only a few, right? How many SSRIs are there? And they're, you know, largely differentiated by their side effect profiles, not so much in terms of their mechanism of action, but it's very common that, you know, patients will be prescribed SSRI after SSRI, you know, perhaps expecting a different result. And so I think naturally on the part of a clinician, there's, you know, probably less of a need to classify a patient as treatment resistant because there's fairly little differentiation amongst their options and probably a low probability that something else that they select may be much more likely to work or to work more quickly. Now that we are on the cusp, we have some treatments that now do offer the promise of working more quickly and some in the pipeline that may work more quickly and more durably. Hopefully this will instill in the minds of clinicians a sense of, you know, I really want to be precise in my clinical management. I want to have data as quickly as possible about whether a patient is benefiting from this treatment, and if they're not, then to move on and move them into something that's evidence-based that may be more likely to quickly move this person to remission. And I go back to what I mentioned before with the sense of urgency that somebody dies of suicide every 40 seconds. So these are lethal conditions that we're treating. Yeah, and Lisa, you've been at the forefront of treatment in treatment-resistant depression. You've used just about everything that exists today. Are you excited about what's coming down the pipeline? And does that chart up there make you nervous? I think it makes me nervous from many perspectives. You know, in 2019, esketamine got approval, and it's yet to take off. And it brings me back to, I don't think the field of psychiatry is an urgent field. I used to be, full disclosure, I used to be an ER physician before I became a psychiatrist. And it's really interesting. It's probably why I went into interventional psychiatry, if there are any psychoanalysts in the room. That's what I've been in.
And so when we think about a patient going to an ER with a gunshot wound, there's no situation in which I take a 2-0 Vicryl, stitch that up, and he's still 40% bleeding. And I'm like, hmm, hold on. And then I'll get prior authorization to then, before I open the second pack of 2-0 Vicryl, to then stitch that up, right? I get a two-star review on Yelp. We don't have that urgency in psychiatry. And I think it comes back to what Steve was talking about. We hadn't had tools before. And now we have tools, we're getting new ones, but that urgency is still not there. The statistics on depression, if it's one in five, and I ask each of you to stand in your rows and then every fifth person to stay standing, that would be the number of patients in this room right now just with depression. But we're all well-dressed. We're here at a fantastic conference. We're all trudging along and doing well. So it comes back to the definition of how we each see our patients and what is the definition of wellness. Beyond what is treatment-resistant depression, what are you calling wellness to then look at the patient and say they're resistant? Is that, are all your patients potentially at that 40% still bleeding out? And you're like, all right, hold on. Is that where we are? Yeah, it's a great point. And I think there's been a couple of references to the biomarkers or the lack thereof and the proper tools, which I think would create that sense of urgency. Joe, maybe this is a question for you. One of the things you've educated me on is this idea of using large data sets, in particular in mental health, to create a phenotype. And that's essentially what the DSM tries to do with its checklist. But this is a far more sophisticated approach than a consensus committee in a room, not unlike this, sitting around nodding their heads and saying, yep, let's go with that. Is AI the tool that psychiatry has needed? That's a very leading question. Of course I'm gonna say yes. But no, with actual thought, I would say that it is at least a useful tool for the problem that you all are describing, that your field has been wrestling with for quite some time. You mentioned that term biomarker and digital biomarkers have sort of a specific definition. But loosely, these phenotypic profiles that we work with that are drawn from data behave somewhat like biomarkers. And I think a checklist is an algorithm. There's nothing wrong with that statement. It's just that you can evaluate the performance of that checklist as an algorithm, and that's its performance, right? Like the overlap that you do or don't see is a manifestation of how good that checklist is in capturing the people we wanna capture and not capturing people who don't have the condition we're looking for. That's what we want in any algorithm, right? And I would probably state it a little bit even more strongly and say it is going to be necessary to figure out some way to operationalize this notion of intuition that clinicians have if we are going to help the people who are not captured with a checklist that can't get to everybody, but where payers and other entities are going to need something more than just saying, let's pull a panel of 10 psychiatrists for every patient. That's not possible. So we need some technology that can take intuition and turn it into something phenotypically repeatable, and that is what AI can do here. That's great.
I wanna go back to something, Steve, that you said, which I really like, and I hope I can internalize it, but this idea that patients don't fail treatments, they're failed by treatments. And one of the things we discussed as a group in preparing for this panel was, sure, there's a ton of excitement about new treatments coming. We hope that new tools like artificial intelligence can help with this definitional issue, but the reality is we have two big problems in this country related to mental health. We do not have enough providers. And it's not so easy to get novel treatments paid for. Lisa, maybe you can start us off here. How are we gonna address health disparities? And does that change in any way the way you think about treatment-resistant depression in light of the fact that three very reasonable definitions led to almost zero overlap? It's hard. We have to talk about the way we train clinicians coming up. There's an article, there's a paper written by Loveboy that said that clinicians, when they have a patient of color or any minority patient in front of them, they don't even consider higher levels of care because they either think that they don't understand them or they don't think that they can afford them. So when you start off there, you talk about lower income patients. I think you have to start off by first addressing bias in medicine. And after that is sort of addressed and you bring in minority populations, I don't know how many people were in the talk at 10:30, but that's one of the questions I asked. How do you get minorities into clinical trials? I don't know. I am the only black psychiatrist in a 40-mile radius in Connecticut. And when I graduated, I was like, all right, I'm going into the community to do the work. 60% of my practice is Medicaid and Medicare. And 60% of my practice is still white. I myself as a black person cannot figure out how to get patients through the door. And it's really interesting that we don't ever talk about that part of it. It's like, yes, we're complaining we can't get them into clinical trials, but how many of us are in the communities doing the work? I'll take 30 seconds and give you sort of like how hard it is. I did a study in 2016 or 17 in a faith-based community in Wichita, Kansas. And I said, I was going to the community to talk about mental health and survey clinicians as to how they were using the AP manual on faith-based approaches to psychiatric illness. So the first time I held it, nobody showed up; I had two people and they came because they were my friends. That's how that started. It took me two years working in that community. I did free screenings of movies and you always give away free food. And after two years of being in that community, the second time I held it, I had 40 participants. I tell you that story because we can talk and talk and talk about the problems of health disparities and we can have all the greatest treatments for free on this side of the table. But if we can't engage patients and demystify the treatments for them, it doesn't matter. Not to mention, I think everyone remembers the stress diathesis model from medical school, right? I mean, what is the impact of the chronic stress of low income and all the other burdens that I think social determinants have taught us about. So not an easy thing to solve. Joe, you know, Lisa brought up bias. Is AI biased? Did you say, is AI biased? Complicated question. Certainly AI systems can be biased, but I like to say that they don't typically add bias, right?
That's why it's so important to control and account for biases and misrepresentations in the data that they're being trained on. And also in how we ask questions. I mean, what you were just describing is it's such a necessary thing to think about, and it's so far upstream from when we're usually thinking about things, right? Which is, I think, you know, your point, but it's amazing, you know, if we zoom back out, how many more problems there are and how we're defining the questions that we're attempting to answer. I will say though, you know, AI is very much still working on this. This is not a solved problem, but there are sort of sparks of hope around correcting bias in areas where we have, you know, datasets that we can leverage that are sufficiently representative for the pattern analysis I was talking about to pick up on things that are not otherwise picked up on, because that is a nice place where machines are sort of greedy, you know, they'll pick up whatever patterns they can to achieve the objective that you set for them. And if there is under-diagnosis or under-attention paid to a certain group, the machine will look at that and say, these are sort of easy wins, you know, there's a lot of work to be done over there, so I'll point to that. And that can actually be quite useful. I've done some work and the team has in rare disease, and I won't go on a long digression about that, but a lot of times in rare disease, I think clinicians have a certain profile in their mind of what a patient looks like. And it is patients who look like that who disproportionately get diagnosed, and patients who don't look like that, don't. And then you have a spread-out group of other patients who have the condition but don't look like the typical profile. We have found AI to be very useful in solving some of those issues. So there's a lot more promise there, but certainly bias is something to worry about with AI. Steve, I'm gonna put you on the spot for a minute, because I have come to know, you know, Compass Pathways is really, you know, kind of a progressive, forward-thinking place, which you sort of have to be to have, I think, the courage and maybe audacity to try to bring the first, you know, psilocybin that is going through a process that's pharmaceutical grade and can be trusted. But we also know from the way, you know, phase two trials went, and we hope phase three goes as well, if not better, that it's a fairly intensive single dose. Right, there's a lot of therapy around it. There's a lot of prep work that goes into it. How is your company thinking about some of the disparity issues and some of the bias issues that we know exist? Can you comment on that? Yeah, that's a great question. I think there's a couple of ways to answer that. I mean, first of all, I think, you know, not specific to our program, this has been applied to the program right now for MDMA therapy for PTSD as well, as well as some of the other programs that are currently active in the emerging psychedelic therapy space. I think in some ways there's a misinterpretation of a model within a clinical trial and the translation of that into eventual real-world practice. And there's been a lot of focus on the resource intensity of psychological support accompanying the delivery of the psychedelic substance, and that occurring as preparation sessions, one or more administration sessions, integration sessions.
But when we think about real-world practice, most of these patients, I dare say almost all of them, are already having some kind of medication management. Most are already in psychotherapy of some sort. And if you think about any of our treatments, even an SSRI, we may not use the language of preparation and integration, but we're doing it, right? You're never going to prescribe somebody a treatment and not talk with them about the risks and the benefits, about the anticipated outcomes and the timeline of that and what the experience may be like of taking this drug. You wouldn't not follow up with that patient afterwards and understand the progress that they're making. So we may be using different language here, but it's not necessarily so different from what already happens in clinical practice. Ultimately, I think what's really additive and non-duplicative in these models of care is that administration day. And in most cases it's a single administration, maybe two, maybe three, and then ideally, and we'll see how this bears out with further trials, infrequently episodic. So the total amount of resources may be a little bit less than people anticipate here. But probably the other way to answer this, maybe the bigger notion, is, as I said before, we don't have a perfect healthcare system, but most people have some kind of insurance and are dependent upon the payer to pay for this treatment. So what will it take for a payer to agree to cover a new and potentially somewhat more expensive treatment than a generic SSRI and see the value in covering that? Now, I think it's a popular thing to talk about big, bad payers as a monolith whose only goal is to deny our patients care. I'll do a shameless plug here. I have a physician education show on hcplive.com called Rethinking Psychiatry. And my latest episode, which just went up, please visit it, is with Dr. Markita Wills, who is a psychiatrist and also a payer. She's the chief medical officer of Johns Hopkins Health. And we talk about just this issue. And I know Markita, and she's not somebody who's looking to deny people care. I'm sympathetic to the notion that we need to present something to payers that is compelling, that has differentiated outcomes, that actually demonstrates value. And even if that's somewhat more expensive, if it ultimately leads to much better outcomes more quickly for patients, they'll be happy to cover it. So I think we have some of that responsibility, too, to demonstrate that value. Yeah, I think that's right. And just to share with everyone, a good portion of OM1's business, with a real-world data set as rich as this, where you have electronic health records and claims records, is actually trying to help our sponsors, our clients, our partners make the economic case. And you won't be surprised, and you'll see it walking around the exhibit hall, I'm sure, that there's a tremendous amount of effort and resources going in to get treatments earlier in the course of depression, and that we've all gotten a little, I don't know if complacent is the right word, maybe developed some bad habits about, to Lisa's point, not having urgency for patients who are really suffering, right, suffering for a long time. I'm gonna wrap up in a few minutes here, so we hope folks have questions. But I have two more questions.
The first one is gonna zoom out a little bit and put Joe on the spot here, and then I'm gonna ask all three of you to answer the question we started this panel with. So Joe, you and I sit together frequently and try to imagine a future where large data sets, like the one we have at OM1, and the amazing tools that you and your team have developed could help change the world for mental health and the crisis we're in. And then along comes ChatGPT and large language models, and people, like serious people, are talking about how this is the equivalent of discovering fire. So that's a big statement. And fire can both nurture and destroy. So how should we all think about the enthusiasm around these so-called large language models, which we did not use here, right? This is gonna look old school, but next year. Tell us a little bit about the future. Sure, so certainly in this city, I have to be careful with tamping down optimism about these things. But we're still figuring this out, both inside the AI community and in its applications in all sorts of different fields. With medicine and health, we're always much more conservative with what we do with new tools, of course. And so I think we will learn a lot from other fields about what these things are actually good at and what they really don't help with as time goes on. I don't think this will be as dramatic as the invention of fire. That's a pretty high bar. But I do think one of the things we'll see with these large language models is something like what we see when we look back at smartphones. We all remember a time without smartphones, and smartphones are totally common now. But the transition between those two things is a sort of impactful but subtle thing. I think that's what will happen with these large language models. I do think as well that LLMs are the first truly democratizing application of AI that I've seen, because anyone can pull up ChatGPT on their phone or their laptop right here, interact with it, and that is interaction with one of these models. And I think that will actually help make people more comfortable with both LLMs and other AI approaches. I don't know if the way that we've done patient finder here will be outdated. We're starting to work with large language models in some of our work now. It's not a panacea. And they do lie, as you've probably experienced. But it is helping people get more comfortable and familiar with these tools, which will ultimately be very beneficial. Yeah, and as you know, I did a couple of interviews not too long ago. And when asked about this question, what is the role of large language models in mental health, I felt like they could help close some of the gaps we have in our treatment. And what I mean by that is, I see patients on average once every eight weeks. That's a lot of time in between visits, right? What is the role of an intelligent bot that could actually be trained on validated, manualized care, have empathy, and connect with patients when they need it, which is the middle of the night on a Saturday, when we're not available? And my concern about that was, well, what happens when models are trained on the internet, and someone who's suicidal has a conversation with this so-called bot, and they're actually given directions on how to commit suicide, right?
Because we know these models are trained as much on information as they are on disinformation. How do we solve that problem? So that's why we're careful before we deploy things like this in healthcare, right? And those problems are being solved right now, but they're certainly not solved. You've probably even seen instances in the popular press where, for a couple of months, Microsoft was chasing this: they would put guardrails around their new Bing assistant, and then someone would figure out how to kind of cheat them and get the thing to tell them, you know, how to build a nuclear bomb or whatever. And then it was sort of this recursive pattern. So guardrails are necessary, but I don't think we should judge the potential of the tool by, certainly not its current state, but by the problems that exist, right? I mean, I don't think it's a new phenomenon that you would see patients once every eight weeks, right? That's been a longstanding problem that hasn't been solved other ways. And so there will be ways to correct for these things and to put guardrails in place. And then the question will be, does this improve on patients' current state? And if so, I hope that folks will not be reflexively opposed to the use of these things. Okay, so if you have questions, there are mics around. We're gonna end with the question we began with, and I'll go individually. So, Dr. Levine, does treatment-resistant depression exist? Well, I think it certainly does. And, you know, I hope that, whether it's with the power of AI or new tools or the development of additional biomarkers, we are able to, over time, get more precise about what we mean when we say treatment-resistant depression. I think it's also important for us to be fair about thinking about the reasons why patients may be classified as treatment-resistant. And a big one for me is that, in some ways, it may reflect the limitations of our tools. As I was alluding to earlier, we have lots of treatments at this point for major depressive disorder, but most of them have more in common as far as their mechanism than they do that differentiates them. And so it's entirely possible that somebody who is considered to be highly treatment-resistant may just be SSRI-resistant, right? That's somebody who's not gonna benefit from that mechanism. And ultimately, there may be many paths to this final common picture, five of nine criteria, that we call major depressive disorder, and some people may require a different pathway to treatment. Yeah, and we've been talking about the monoamine system for a very long time, and maybe that's the wrong system. Great. Dr. Harding, does treatment-resistant, I mean, a good part of your practice revolves around this entity, but does treatment-resistant depression exist? So both my kids call me Lisa McScrooge. Depression has no cure. Every single patient that you treat will become resistant or relapse on exactly what you're doing right now. That's a good point. It's in many ways a chronic illness. All right. See why they call me McScrooge. Yeah. Oof. Well, speaking of Scrooge, Dr. Zabinski, does treatment-resistant depression exist? No, I think it does exist, but it's contained and captured in the minds of the specialists, and we just have to figure out a good way to get it out. Great.
I'm gonna open it up now to questions, but before we do that, please, a round of applause for our panelists. Thank you. Yes, sir. I think the custom is to give your name, where you're from, and then ask your question. Yeah, I'm Joey Cooper from UIC in Chicago, where I'm a neuropsychiatrist and do ECT. And so I really related to a lot of the ways Dr. Harding talked about this issue. You framed it in the singular: does it exist? I'm gonna ask you about the plural: do they exist, the treatment-resistant depressions? When we think about who's a good candidate for ECT, we think about subtyping within depression. We think about certain features, illness courses, some comorbid with personality disorders, some recovering to a full euthymic baseline, catatonic, melancholic, psychotic features. This isn't one thing that we're talking about, but it's been framed in this discussion as one thing, so I'd love to hear your thoughts on the many treatment-resistant depressions. It's a great point. I'll just comment, as I've said in other forums: when I was in training, we talked about subtypes of depression all the time. Exogenous, endogenous, all kinds. And then all of a sudden it felt like, no, there's just one, it's all major depressive disorder. Are we swinging back, to your point? Are there many? Maybe that's how the diagram up there is explained, and these are just multiple types of treatment resistance. I love it. Go ahead, Steve. Well, yeah, I mean, I don't think I can say it any better than you did, so I'll just say word. Look, when we finally get around to knowing what we're talking about beyond just a DSM definition of depression or treatment-resistant depression, which of course is not in the DSM, then yes, I think, just my opinion, there's a high likelihood we will find that these subtypes of depression, perhaps the terms we've used historically or ones we may come up with now and in the future, may wind up being differentiated diseases. I would just quickly add that there are certainly many other conditions outside of mental health where exactly what you're describing, the existence of multiple subtype phenotypes under an overarching umbrella that we're only just beginning to understand, is certainly true and very useful when analyzed that way. And so as a non-clinician, again, but just from the data perspective, that makes sense to me. Yeah, and I think the other point that should be made, which we alluded to, is that new treatments with new mechanisms might also inspire better subtyping, right? People who clearly respond to, say, zuranolone versus those who respond to COMP360, right? That could by itself become its own subtype. I think the gentleman in the back was next. Yeah, I'd like to share a very brief history of the use of triple therapy in the treatment of refractory depression, chronic resistant depression. This is from, sorry, I'm Jean Boudouin, I'm from Canada. So this is from my old professor, Donald Eccleston from the University of Newcastle-upon-Tyne. He actually took great pride in going to institutions in the north of England and rescuing a lot of those ladies and gents who were sitting lined against the wall, suffering from chronic mental illness. And he actually put them on a combination of three medications, namely lithium, clomipramine, and tryptophan. And this was done in a sequence. And coincidentally, there's also been another triple therapy, this time replacing clomipramine with phenelzine.
And these actually rescued quite a few patients from chronic depression. And the reason he chose to put lithium in the combination was because a lot of these patients actually had undiagnosed bipolar disorder. Now, it's very interesting when I look at the program today that on Monday, we're actually going to have a presentation on monoamine oxidase inhibitors. And I'm just wondering whether we have somewhat missed the boat in being able to use therapies that have worked in rescuing patients with chronic depression. Yeah, I think that's a great point. And I think the other point you're making there, which we alluded to a little bit, but maybe not enough, is misdiagnosis, right? I mean, we have to acknowledge that, go ahead. Yeah, and I think that goes right back to the beginning, when I was talking about the initial diagnostic consultation: very few times do we as psychiatrists, I think, revisit the patient's diagnosis when we see that they're not responding to a treatment. I will tell you, I think I've worked in just three states, but with the patients that we see with treatment-resistant depression who get side effects from medications, very rarely, in my experience over the past six years, can I convince them to be on more than two medications. And I think that's sort of the true informed consent piece here, and now it's all over TV, and my son says, "and death? Mom, you prescribe that?" And so it speaks to that: patients are looking for medications with cleaner side effect profiles and shorter duration of dosing; nobody wants to be on a medication BID and then forget to take the other dose. Then there's weight gain. There's an obesity epidemic in this country right now, especially for women. Few people ask them about sex, and few people take weight gain into consideration when we're treating them. And this is from Lisa's perspective: that is why I tend to stay away from second-generation augmentation and things like that. No, I think that's right. And I think this idea of either a single dose or a two-week course of treatment, a rapid-acting response, is really what people are looking for, particularly in a digital age, where we watch TikTok, right? We can't sit through an hour of anything. How can you wait more than two weeks for an antidepressant to work? Dr. Hassan. You set me up for my comment. I read Rewired. I interviewed you about your book and the effect of social media and the mental health crisis it's creating in so many people. And I'm not a psychiatrist, but I relate very much, Lisa and Joe, to you. Three quick points, but I really wanna hear your comments on this. One is, what about ACEs, adverse childhood experiences, or attachment disorders, and trying to look at that as an underlying piece? I work with people in cults, and that's rampant and terrible. And looking at that, it seems like you have the potential to do a different DSM, a different way of figuring out what's going on, to get payers on board. So that kind of looks obvious. But I'd love to hear your comments on the biopsychosocial nature that is depressing a lot of people when they look out at what's going on in the United States and around the world. Yeah, absolutely. And I think we alluded a little bit to this with the comments about social determinants and how chronic stress plays a role in depression. And we know it does.
And Steve was politely referring to my book, Rewired, which is about the impact of mobile smartphone technologies on the developing brain from birth to adulthood. And when we look at that, we see how, really at every age, we're all changing our brains because we're changing our habits, right? We know when we change our behaviors, the brain changes. How is that complicating chronic depression and the world we're in? Not to mention, when you talk to young people, there's maybe skepticism about that thing that I grew up with, that each generation in this country gets a little bit better. And we're looking at generations who can't do that. You guys wanna talk about stress and how that plays into this model? Yeah, I'll make a brief comment. I think if we think about, again, the DSM, one of the major innovations of the DSM was that it gave a common nosology to a probably pretty disparate landscape of how clinicians were thinking about and describing illness, right? So at least there was a common language. However, it's necessarily flawed in the sense that they're disorders, or excuse me, syndromes, they're not diseases, right? These are collections of symptoms that tend to co-occur. We draw a circle around it. We give it a name. They're Venn diagrams to some extent and overlap with symptoms in common. And then every once in a while, we erase those circles and redraw them and give them new names and move them around, right? But they're syndromes. They're not diseases. We don't consider etiology in those descriptions. And so yes, whether it's ACEs, adverse childhood experiences, whether it's social determinants of health, all the other factors that may lead to dis-ease, these categories are completely agnostic of that. And it's one of the significant limitations of the DSM. So if you haven't, you need to watch Nadine Burke Harris. She has about an 11-minute TED Talk on ACEs. My dad says, y'all are just weak. And even though that's really provocative, in a sense, I think, to Karl's point, each generation has it a little bit easier. But there is something as well, and my final comment will be, part of this is just life and nothing is easy. And I think there is a narrative happening right now, and Steve and I were talking about this earlier, that things aren't supposed to be hard ever. And I think when we talk about disease states and we talk about depression, and then we talk about wellness, we sometimes don't talk to our patients about moving through the hard part to get to wellness. Anybody here with any sort of Buddhist reading knows: no mud, no lotus. You have to get through some of the hard parts. And I think we have to remember that. I think that's a great way to end. I want to thank all of you for coming. I want to thank this panel. Nothing weak about this panel: Drs. Levine, Harding, and Zabinski. We'll be up front if you have questions. Thank you very much.
Video Summary
The panel discussion, led by Dr. Karl Marcy from OM1, explores the complex topic of treatment-resistant depression (TRD) and whether it exists, from multiple perspectives. The conversation involves a deep dive into various definitions of TRD, their applications, and their implications for diagnosis and treatment. Three different definitions are considered: the regulatory definition (failure of two antidepressant treatments within a major depressive episode), a simple data-driven definition using machine learning, and an AI model based on clinician attestation.

Dr. Steve Levine emphasizes the importance of precise definitions for regulatory and clinical trial purposes. He highlights the need for new treatments and quick diagnosis against a backdrop of high suicide rates. Dr. Lisa Harding offers a clinician's perspective, emphasizing patient care, health disparities, and the chronic nature of depression. The discussion notes the lack of urgency among healthcare providers in treating depression as an urgent condition, contrasting it with other medical fields.

Dr. Joseph Zabinski discusses the role of AI in identifying TRD, stressing that AI models can identify patterns not easily captured by traditional checklists or current definitions. He points out the need for AI-driven insights to move beyond patterns dictated by available data and to help uncover underdiagnosed patient groups.

The panelists also discuss the potential of emerging treatments to shape the future of mental health care. They recognize the challenges of implementing novel treatments and managing health disparities. Despite some skepticism about the adequacy of current definitions, given the minimal overlap among the populations they identify as TRD, the panel acknowledges that TRD exists but is potentially misunderstood in its complexity.
Keywords
treatment-resistant depression
TRD
Dr. Karl Marcy
OM1
AI model
machine learning
Dr. Steve Levine
Dr. Lisa Harding
health disparities
Dr. Joseph Zabinski
emerging treatments
mental health care