Big Data, AI and Precision Psychiatry: Advancing P ...
Video Transcription
Hi. Good afternoon. It's my distinct pleasure to be able to introduce Dr. Jordan Smoller. He is a psychiatrist, epidemiologist, and geneticist whose research focus has been understanding the genetic and environmental determinants of psychiatric disorders across the lifespan and using big data to advance precision mental health, including improved methods to reduce risk and enhance resilience. Dr. Smoller earned his undergraduate degree summa cum laude at Harvard and his medical degree from Harvard Medical School. After completing his residency training at McLean Hospital, he received his master's and doctoral degrees in epidemiology at the Harvard School of Public Health. He is the Massachusetts General Trustees Endowed Chair in Psychiatric Neuroscience and Professor of Psychiatry at Harvard Medical School, and a professor in the Department of Epidemiology at the Harvard School of Public Health. He is the Associate Chief of Research for the Massachusetts General Department of Psychiatry and Director of the Center for Precision Psychiatry, and it's in that role that I've been lucky to collaborate with him in my work at Home Base. Without further ado, here's Dr. Smoller. Thank you. Thank you, Sophia. And thanks to all of you. It's so great to see so many folks here. And I'm looking forward to trying to address these issues of big data, AI, and precision psychiatry. I'll start, of course, with my disclosures. And actually, before we get into it, let me plant a couple of questions for you to think about. If you have a patient coming to you with depression, or what you and the patient agree is a diagnosis of depression, how do you know what treatment to start with? How do you know what treatment is likely to work best for that patient? Another question: if you are with a patient who you are potentially worried about in terms of safety or self-harm, think about what your decision process is. You're sitting in your outpatient office. 
What's your decision process for deciding what to do in that moment? Do you send this person home, see you next week? Do you send them to the emergency room? How do you make that decision? How do you know if somebody's at high risk? If somebody comes to you and says, what can I do to avoid developing something like depression, what would you say to them? These are all questions that, at least in my clinical experience, we face all the time, and for which we don't always have great answers to give people. I'm going to start, as I like to do, by framing the magnitude of the problem that we're up against. You all know this really well, but psychiatric disorders are remarkably common. At some point in our lives, half of us will meet criteria for a psychiatric disorder. As you know, they tend to start early in life, so that people live with psychiatric symptoms or disorders for a large part of their lives. That is, in part, why they are a leading cause of morbidity, disability, and even mortality. People often don't recognize that, for example, people with serious mental illness have, on average, a lifespan shortened by 10 to 25 years, depending on the study. Think about other illnesses for which that's true, and that we recognize are really public health emergencies. Of course, suicide is far too frequent a cause of death, and now the second leading cause of death among young people. We know that the treatments we have can be helpful for many people. They are helpful for many people. Sometimes they're life-saving, but for too many people, they're not enough, or we don't know which medicine or psychotherapy or other modality is going to work. These data, these little bubbles, are from some of the largest effectiveness studies that you'll be aware of, things like STAR*D or CATIE or STEP-BD, and other big meta-analyses. 
For each of these disorders, we see major gaps, and almost all of our FDA-approved medications are based on biological insights that are 50 or 60 years old. I'm going to introduce this concept of precision medicine, which you may be familiar with; you've probably heard the term, and it can mean a lot of different things to a lot of different people. I use the definition stated by the Precision Medicine Initiative Working Group, which was an NIH working group: precision medicine is an approach to disease treatment and prevention that seeks to maximize effectiveness by taking into account individual variability in genes, environment, and lifestyle. I think the key thing here is individual variability. That is the sort of hook that we don't always get to use, and it is the basis of how we might do things a little bit differently. People could argue that this is what we've been doing all along. In fact, I sometimes relate that when I was about to give a talk on this topic, I was with a patient, and they asked me, where are you going to be next week? I said, I'm giving a talk. On what? Precision psychiatry. What's precision psychiatry? Well, I said, it's really an effort to account for individual differences in how we do treatment and diagnosis and so on. And the patient said, so, it's really kind of looking at the individual and taking into account their circumstances and what's going on for them? I said, yeah, that's right. And they said, what the hell have you been doing? So, in a sense, that's true. What have we been doing except this? But the reality is we aren't always doing this. And there's a new set of possibilities now, partly due to technologies and the availability of data that really weren't there until maybe the last decade or so and are only accelerating in their availability. 
So, we now have lots of resources, including the ability to look at our genomes, genomics, DNA sequencing, and other omics, which means things like the entirety of our gene expression profiles or the proteins in our various tissues; epigenomics, which you've probably heard of; biobanks that are now available with biospecimens for this kind of research; electronic health records, which we probably have mixed feelings about, but which turn out to be incredibly, potentially important, and I'll say more about that; of course, digital and mobile health technologies, which have really exploded in terms of interest and, to a large extent, availability; and then just big data methods, ways to make use of the kinds of massive amounts of data that we just didn't have before. And this is good news, I would say, because we face major challenges at practically every level of what we do. In the realm of diagnosis: how do we even make a diagnosis? What is a psychiatric disorder? How is one different from another? And how are any of them different from normal variation? What about knowing who is at risk for some particular outcome and what we could do to enhance resilience? What about knowing how we could actually prevent the onset of illness in the first place? How do we match treatments to people, getting the right treatment to the right person at the right time, as we often say? We don't have a lot of tools for that. And can we develop new treatments that are more targeted, based on our understanding of data of various kinds? So I'm going to give you a few snapshots. This is largely work that we've been doing in Boston, but with collaborators from around the country and around the world, that touches on each of these, just to give you a flavor of the directions one could go in this area. So one of them would be looking at the nature of diagnosis. And here, some of the advances that we're getting are in the realm of genetics. 
Genetics has advanced tremendously in the last couple of decades. And one thing that we know, and we actually knew this back in the 20th century, is that psychiatric disorders as we define them in the DSM, for example, are heritable, meaning genetic variation contributes to risk in the population of who becomes ill. And this is true for all of the disorders that have been studied. So what you're looking at here is a series of disorders and their heritabilities; the bluish-purple are estimates from twin studies, and the red are estimates from molecular genetic SNP studies, that is, genome-wide association studies. Heritability is not the amount that genes contribute to your particular illness as an individual, but a measure, at a population level, of how important genetic variation is. It could range from zero to one, or 100%. It's never that high, but for all of these, it's highly statistically significant, and the magnitude of the estimates is actually pretty large. If you put things like breast cancer, prostate cancer, or Parkinson's against these numbers, those diseases have lower heritabilities. And because of the advance of technology, and particularly because of the advance in the culture of the way genetics has been done in the last couple of decades, people have put together larger and larger data sets. We've seen the discovery of genetic variation associated with a whole range of illnesses. And on this graph, you can see a sort of timeline. You can see that prior to about 2009, there was really nothing that people agreed was convincingly associated with a psychiatric disorder. Maybe there were a couple of exceptions, things like the APOE4 allele in, say, Alzheimer's disease, which is maybe in that ballpark. But then just at that time, people came together, they did large-scale genome-wide studies, and then you see it take off. And we are by no means anywhere near the flat part of the curve. 
It just keeps going up and up. And there are hundreds of loci that have been associated with many, many diseases that we see frequently. And also now, because we can sequence the genome, or sequence the exome, which is the protein-coding part, we've been able to identify specific genes associated with, particularly, neurodevelopmental disorders. These are rare variants in genes that then point to those genes as being important. That leads us to pathways, and it's been very fruitful. However, we all know that the DSM as a system has its limitations. One way you can see that in a snapshot is the fact that since the DSM began in 1952, going up to the latest major revision in 2013, you've seen this increase in diagnostic labels. There's a lot of splitting going on. And so we have more and more labels, which I think is an indication of our uncertainty about the nature of these disorders. And even if you just look at three chapters, or three groups of disorders, and you look at their evolution over time from DSM-III, which arguably is the beginning of the modern era, to DSM-5, it's pretty startling. So I'm going to show you this for pervasive developmental disorders, mood disorders, and anxiety disorders. In DSM-III, I don't know if you can read the labels there, but you probably know them all, we have disorders in each of these categories. And then as DSM-III-R and DSM-IV come along, some disorders are dropping out and some are entering the system. And then when we get to DSM-5, something unusual happens, which is that in the neurodevelopmental space, we get lumping instead of splitting, and then some further reshuffling of the diagnostic labels and creation of some new labels, but also creation of new categories. So you can see this is a little bit of a moving target, and we know this kind of clinically. 
So one of the things that I and others in the Psychiatric Genomics Consortium, which is a very large-scale collaborative that's been doing a lot of the largest genetic studies, have been interested in is: to what extent do the genes actually map onto our diagnostic categories? I won't go into a ton of detail here, but what we can do, for example, is put together genomic data for many different disorders. In the case you're seeing here, in that paper, it was 11 different disorders: everything from OCD, anorexia, autism, ADHD, depression, bipolar disorder, schizophrenia, Tourette syndrome, et cetera. And we can ask: let's have the DNA tell us what the relationships are. Are there really 11 separate disorders at a genetic level? There are now statistical techniques that we can use to pull out what we would call latent genetic factors that explain the genetic relationships among very different disorders. And what we found in this case, and it's a complicated-looking diagram, but essentially what it's saying is, out of these 11 disorders, if you look at their genetic correlations, they kind of fold up or roll up into four major factors. One of them we call the compulsive disorders factor, on which anorexia, OCD, and Tourette syndrome, which are in different categories in the DSM, load very strongly. There's a factor that has schizophrenia and bipolar disorder, which are very closely genetically correlated. There's another one we call the neurodevelopmental disorders factor: autism, ADHD, and, interestingly, alcoholism loads on that as well. And then an internalizing disorders factor: depression, anxiety. We've subsequently done additional work that replicates this but adds to it, because we've now added additional disorders. And there's interesting biology in these genes that are having these cross-disorder effects. We can now understand something about what, biologically, they are doing. 
One thing is that we see those genes coming online very early in life. They begin to be overexpressed in the second trimester of fetal development, and then they stay relatively highly expressed. We've also been able to track down some of the pathways those genes are involved in. But what's interesting here, I think, is that as we put data together, we start to get resolution on what has been a purely consensus-based diagnostic system. You're looking here at what we would call a heat map of genetic correlations: to what extent are these disorders intercorrelated? One of the things that seems to be happening is that we see basic themes of psychopathology, perhaps, or to some extent just normal variation, that these diagnostic labels are capturing at a genetic level. So we're gradually getting a little bit more precision from the data in understanding what our nosology is about. Let me go on to a different area, and that is risk prediction. As I put to you in the beginning, how do you know if somebody's at risk for a disorder? We're taught certain risk factors that are influential, but we don't really know. At least I, clinically, when I'm seeing patients, don't really have a great sense of whether, for example, they are at risk of having a self-harm event or at risk of developing a disorder. And one of the things that is now available is artificial intelligence. There are many ways to think about this or use it, but I'm going to give you some examples, which are based on a principle that's pretty familiar to all of us, which is the way that artificial intelligence has been commonly used in business. For example, Amazon, Netflix, Facebook: what they're doing is gathering massive amounts of data about people's behavior, training models, and using those to predict future behavior, that is, things like what people want to see in their feed or what they are likely to buy. 
And we can apply those same methods and principles to healthcare. One of the sources of data we have at our disposal, a massive source that all of us, whether as clinicians in the audience or as anybody who's been a patient, have probably been contributing to, is the electronic health record. Every encounter with a health provider, as you know, is documented in the electronic health record, with just some exceptions. And it is constantly growing. As we're sitting here, data are being accumulated about real-world health behavior. Just for example, in our Mass General Brigham electronic health record, we have data for six and a half million patients and 3.5 billion rows of data. It's big. The data come in two basic buckets. One of them is what we would call structured data: things like diagnostic codes, prescriptions, laboratory tests. These are things that have defined values. And then there's the much bigger corpus of information that's captured in narrative notes, when we write notes or somebody has a chest x-ray and there's an interpretation, et cetera. Those data can also be transformed into usable data through natural language processing. It's essentially the kind of thing that large language models, which are getting a lot of attention now, are doing: transforming text into usable data. So one question that immediately comes up when you think about using data like this for psychiatry, especially maybe if you're a clinician, is: isn't it crap, though? How good are those data? Could they really be relied upon to make a diagnosis? A while ago, as we were doing a big study on bipolar disorder, we knew those questions were coming. And so what we did was train an algorithm to diagnose bipolar disorder based on the data that were in our health system. 
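To make the idea concrete, here is a minimal, purely illustrative sketch of rule-based EHR phenotyping. The ICD codes, medication list, and thresholds are hypothetical stand-ins, far simpler than the actual algorithm used in the study:

```python
# Hypothetical sketch of rule-based EHR phenotyping for bipolar
# disorder. Codes, medications, and thresholds are illustrative only,
# not the algorithm used in the study described in the talk.

MOOD_STABILIZERS = {"lithium", "valproate", "lamotrigine"}

def classify_bipolar(patient):
    """Label a patient 'case', 'control', or 'uncertain'.

    `patient` is a dict of structured EHR fields:
      icd_codes - list of ICD-10 codes from all encounters
      meds      - list of prescribed medication names
    """
    # Count encounters coded as bipolar disorder (ICD-10 F31.x).
    bipolar_codes = [c for c in patient["icd_codes"] if c.startswith("F31")]
    on_stabilizer = any(m.lower() in MOOD_STABILIZERS for m in patient["meds"])

    if len(bipolar_codes) >= 2 and on_stabilizer:
        return "case"        # repeated codes plus typical treatment
    if not bipolar_codes and not on_stabilizer:
        return "control"     # no evidence of the disorder at all
    return "uncertain"       # mixed evidence: leave unlabeled

print(classify_bipolar({"icd_codes": ["F31.1", "F31.9"], "meds": ["Lithium"]}))
```

An algorithm like this trades coverage for precision: "uncertain" patients are simply left out, which is acceptable when the goal is a clean case and control set for validation or genetic analysis.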
And then we also trained an algorithm to diagnose the absence of bipolar disorder. Okay, and then to validate it, we said: we're going to take the algorithm, apply it to the health system, and have it identify people it thinks have bipolar disorder and people it thinks don't have bipolar disorder. And then we're going to invite them in for a diagnostic interview, the gold-standard SCID interview, by psychiatrists and psychologists who have been formally trained, but who are blind to the diagnosis. And just to make it harder for the clinicians, we also invited people who carry a diagnosis of depression or schizophrenia, but not bipolar disorder, just so it wouldn't be obvious that there are two groups here. And then we said, let's roll the dice and see how good this algorithm is. We actually had three different definitions, which are not really worth detailing, but we interviewed about 200 people blindly. The top three rows there are our EHR-based algorithms, the automated diagnoses, and the bottom are the controls, people who the algorithm thinks don't have bipolar disorder. The numbers you see there are the positive predictive value. That means: if the algorithm thinks that you have, say, bipolar disorder, what's the probability that a trained clinician doing the SCID would come to the diagnosis of bipolar disorder? And you can see it's pretty high, more than 80%. And for the controls, it was 100%. If you know the literature on inter-rater reliability of psychiatric diagnosis, when you get two psychiatrists making a diagnosis, that's arguably at least as good, let's put it that way. We also took another step, which was to say: we have genetic data from these folks now, when we apply this algorithm linked to our biobank. And there are big genomic studies of bipolar disorder that have already been done by, say, this international consortium. 
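The positive predictive value just described is a simple ratio over the validation interviews. A sketch with made-up counts (the study's actual figures were above 80% for cases and 100% for controls):

```python
# Positive and negative predictive value from a blinded validation:
# of the people the algorithm flags, how many does the gold-standard
# (SCID) interview confirm? The counts below are invented for illustration.

def ppv(true_pos, false_pos):
    """P(SCID confirms the diagnosis | algorithm says case)."""
    return true_pos / (true_pos + false_pos)

def npv(true_neg, false_neg):
    """P(SCID finds no diagnosis | algorithm says control)."""
    return true_neg / (true_neg + false_neg)

# Suppose 100 algorithm-flagged cases, 85 confirmed on interview,
# and 50 algorithm-flagged controls, all confirmed:
print(ppv(85, 15))  # 0.85
print(npv(50, 0))   # 1.0
```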
And since we have the genetic data, we can actually look at how genetically similar these two groups of people are: people diagnosed by an electronic health record algorithm versus by the standard SCID or other diagnostic interview. And when we did that, the answer was that they're practically the same genetically. We calculated the heritability, and it was the same. And then we also calculated the genetic correlation, which is just how similar the genetic variation is in these two groups. So now we feel pretty confident that we can do this, and we can use it to do other things. One of the things we did was ask: could we develop an algorithm that could predict, in advance, whether somebody was going to develop bipolar disorder? And you can imagine some use cases for that. For example, somebody with depression: if they actually have underlying bipolar disorder instead of the depression you may have diagnosed, there are consequences to prescribing an antidepressant without mood stabilization, et cetera. We did this in the context of another consortium that we established, called the PsycheMERGE Consortium, which has healthcare systems around the country linked to genomic data from their biobanks. The motivation, in part, was that the average delay from the onset of symptoms to a correct diagnosis of bipolar disorder, according to the literature, is about six to ten years. An incredibly long odyssey to a right diagnosis. And the longer bipolar disorder is untreated, the worse the outcome: more recurrent mood episodes, suicide attempts, et cetera. So we put together data from three different health systems. We had about 3.5 million patients, and we trained a variety of machine learning and artificial intelligence models to try to predict this in advance. This just used the structured data, so it's not using any notes or anything like that. And each team trained different kinds of models. 
And then they validated them in the other health systems. Without getting into a ton of detail, all of them had very good predictive performance. And when you look at the relative risk, essentially, if the model thinks you have bipolar disorder compared to not, it's anywhere from a 15 to 19-fold increased risk, which is pretty good. We've also recently been able to do this in our own health system, but now focused on young people with bipolar disorder. We developed models for three different cohorts: our general pediatric population, kids who had a diagnosis of ADHD, where they might be at higher risk, and kids who had any other mood disorder diagnosis. Again, good performance, almost no matter how you did the model. The top 20% of predicted risk accounted for 60% to 80% of the cases that later developed over the next two years. So that's a bit of a proof of concept. We can do this. We have the data to do it. We have the analytics to do it. How far can you take this? The area where we've actually done the most, and that I think is arguably one of the most important use cases, is, of course, suicide. You know that deaths from suicide have been increasing: 35% in the last 20 years or so, and nearly 60% in young people, for whom it is now, as I said before, the second leading cause of death. There is an important fact that gives us an opportunity, and that is that most people who attempt or die by suicide are seen by a healthcare provider in the month leading up to the event. So we are seeing folks before this event, and we have a window of opportunity for prevention. Now, fewer than 30% will disclose their intent, and they may not even have it at that moment. But we also know that clinicians, like me, don't do that well in identifying risk. And in fact, this is not just an opinion: 50 years of research, in systematic review, has shown that we rely on a very limited set of predictors. 
These are probably ones you were taught, if you're a mental health clinician: prior attempt, global insomnia, anxiety. None of those is very strongly predictive, and there's been no improvement in our ability to do this. So this was an area where we thought, early on, that leveraging big data and AI, in a health system where we have this window of opportunity, could be very useful. One of the first things we did was train a model that could predict high risk of suicide attempt or death. This was in a sample of 1.7 million patients, and it did pretty well. It detected 45% of all attempts or deaths at 90% specificity, about two to three years in advance. These curves you keep seeing here, with an AUC, are a measure of the model's performance in terms of what we call discrimination. Sometimes people call it accuracy, but it's really how good the model is at separating people in one group from another. The AUC is the area under that curve. If your model sat on that dotted line, the AUC would be 0.5, meaning it's a coin flip; the model is not helping you at all. You like to see 0.7 or 0.8. You'd love to see 1.0, but that never happens. So we had these data, and then we asked, could this work in other healthcare systems? We validated it in five other healthcare systems around the country and found that it performed just as well. Now, would this be useful in clinical care? Could it be cost-effective, which is always a question that comes up? We did a very detailed economic analysis and found that our models, along with some others that have now been developed, would be cost-effective; they exceed the cost-effectiveness thresholds if implemented in primary care and paired with evidence-based strategies for prevention. And then we went on to do something more difficult, in a sense, which was to prospectively look at how well this performs, in a study where we enrolled patients. 
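The AUC described above has a concrete probabilistic reading: it is the chance that a randomly chosen case received a higher predicted risk than a randomly chosen non-case. A small sketch with synthetic scores (not data from any of these studies):

```python
# Rank-based AUC: the probability that a random case outranks a random
# control on predicted risk; ties count as half. 0.5 is a coin flip,
# 1.0 is perfect separation. All scores below are synthetic.

def auc(case_scores, control_scores):
    wins = 0.0
    for case in case_scores:
        for control in control_scores:
            if case > control:
                wins += 1.0
            elif case == control:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

cases = [0.9, 0.8, 0.6, 0.4]     # predicted risk, people who had the outcome
controls = [0.7, 0.3, 0.2, 0.1]  # predicted risk, people who did not
print(auc(cases, controls))      # 0.875: good but imperfect separation
```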
That study was with Matt Nock at Harvard University. We enrolled 2,000 patients who came to our emergency room with psychiatric complaints, and we asked the clinicians who were seeing them: what do you think is the likelihood that this person is going to attempt suicide in the next month, or the next six months? And then we ran our models and asked, how does the model do? The bottom line was, the clinicians didn't do much better than chance, and the models clearly outperformed the clinicians. There's one piece of data that I want to highlight here. I don't think there's a pointer, but at the bottom there, what you're looking at, for our EHR model, is the positive predictive value for the people the model thinks are at high risk, those in the top 10% of predicted risk. At one month, 40% of those people went on to make a suicide attempt, and at six months, 60%. These are big numbers, actionable numbers. So we have now gone on to build an app that is integrated into the electronic health record at the point of care and can deliver this information, with context, to clinicians. It can accommodate any kind of data stream you want, and it documents a safety plan with the patient and helps guide a particular care plan. And we are just about to launch, through a new Center for Suicide Research and Prevention that we just established, a randomized trial of 4,000 patients who will be coming into the emergency room, with their clinicians randomized to either treatment as usual or getting this risk information. So this will be a true test: does this matter? We will also, by the way, take the data about what happened to these folks, again accumulating big data, and about the decisions clinicians made. Did they hospitalize the patient? Did they have them go to, say, partial hospitalization? Did they send them for outpatient care, et cetera? 
We can then train a different kind of model that basically says: given this person's profile, what is the best decision to make? If you see a patient who is acutely suicidal, what do you do? Admit to the hospital, right? And the data would suggest that 70% or 80% of the time, that's what happens. Does that help? The data don't suggest that it helps. This was a study Ron Kessler and colleagues did recently, looking at all of the patients who had emergency visits or urgent care visits in the VA for suicidal ideation or suicide attempt, and asking the question: does hospitalization reduce the risk of subsequent suicide attempt or death over the next year? You may not be able to see this, but some of the important takeaways were, if you look at this table with the arrows: in the entire sample, not breaking the patients down into any particular groups, those who were hospitalized had a 12% risk of suicide attempt over the following year, and those who were not hospitalized had a 12% risk of suicide attempt over the following year. If you were being seen in the ED for suicidal ideation but hadn't made a recent attempt, again, no difference. If you had made an attempt in the last week, but not the last day, it also didn't make a difference. However, if you had come in having just made a suicide attempt within the past day, it did make a difference. So there are different subgroups of patients. Overall, hospitalization was associated with a reduced risk of suicide attempt in about 28% of patients, and an increased risk in about 24% of patients. 
What the investigators did was train a model to predict, given the clinical features of the patient, the likelihood that they will benefit from hospitalization, and then derive a rule, what we would call a precision treatment rule, which would be pretty simple: if the predicted effect of hospitalization is benefit, hospitalize them; if it's harm, don't hospitalize them, try to do something else; and otherwise, defer to clinician judgment. They then tested this rule in a separate test sample and compared the predicted outcomes to the actual outcomes. Implementing the rule, depending on the sample and the diagnoses, and I'm not showing you the whole table, would have prevented effectively 16 to maybe 20% of suicide attempts, and you would have averted a whole bunch of hospitalizations. That's another potential utility of these kinds of predictive models. How about identifying biomarkers of risk and disorder? Again, one of the things we've been able to do is put together a consortium, this thing we're calling PsycheMERGE, with my colleague Lea Davis at Vanderbilt University and a lot of institutions around the country. We now have EHR data for about 29 million patients and 2 million genomes, not whole genomes, but genome-wide genotyping, and we're doing all kinds of analyses. One thing we're looking at is something called polygenic risk scores as biomarkers. How many people here have heard of polygenic risk scores? Wow, that's great. So a polygenic risk score is a number that, for you as an individual, is kind of an index of genetic vulnerability, let's say, to a certain outcome, based on genome-wide association studies, meaning somebody tested a lot of markers in a very big sample and found markers that seemed to be associated with whatever that study was about. Let's say it was schizophrenia. 
Now, the markers on their own, even if they're statistically significant, tend to have very small effects. So the genetics of psychiatric disorders is not about a mutation that's recessive or dominant or that kind of thing. It's all of these tiny little genetic risk factors that add up to vulnerability or liability. But if you sum them for any individual, I can get your score. Forget the formula there, but basically it's saying: for every variant I'm looking at, I'm going to give you, think of it as, two points if you have two copies of that variant, one point if you have one copy, or zero points. And I'm going to add up all those variants into one score. And that makes something that's kind of useful. We know something about how these things perform. For example, in the largest studies of schizophrenia, if you take people who are in the top 10% of the polygenic risk score distribution and compare them to people in the bottom 10%, those at the top have about a 16-fold increased risk of having schizophrenia. If you take the people in the top 1% versus everybody else, they have about a five-and-a-half-fold increased risk. But importantly, the distributions of those scores, which you're seeing in that last panel, are not very separate. The mean scores are pretty close between people who have schizophrenia and people who don't, right? But it's conveying information just like risk factors do, and frankly, we don't have a lot of really well-established risk factors in psychiatry. One question we had was: this is great in these big research studies, but what about in real-world care? If you applied these scores to people in real-world health systems, would you see this kind of effect? And so we did that, and the bottom line was that across four health systems, you do see this effect. You see that the polygenic risk score is associated with, in this case, risk for schizophrenia. 
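The formula being waved away there is just a weighted sum: for each variant, the number of risk alleles you carry (0, 1, or 2) times the effect size estimated in the GWAS. A sketch with invented variant names and weights:

```python
# Polygenic risk score as a weighted sum of allele counts. The variants
# (rs numbers) and effect sizes here are invented for illustration;
# real scores use thousands to millions of variants.

def polygenic_risk_score(genotypes, weights):
    """genotypes: variant -> risk allele count (0, 1, or 2).
    weights: variant -> GWAS effect size (e.g. log odds ratio)."""
    return sum(weights[v] * genotypes.get(v, 0) for v in weights)

gwas_weights = {"rs0001": 0.05, "rs0002": -0.02, "rs0003": 0.10}
person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(polygenic_risk_score(person, gwas_weights))  # 2*0.05 + 1*(-0.02) = 0.08
```

Comparing the top and bottom of the resulting score distribution is what yields figures like the 16-fold risk difference quoted above.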
The effect size, the size of that effect, was smaller, although comparable to risk factors that we think about commonly in other areas of medicine, like smoking for heart disease. The other thing you can do, though, is, because you have in the electronic health record the entire phenome, essentially, that is, all the medical conditions that people have, you can ask not only is this score associated with schizophrenia, but is it associated with anything else? Is there genetic overlap between, say, risk for schizophrenia and some other medical condition? And so this is a graph where you're seeing the different systems of the body, and then there's a dotted line beyond which are diagnoses that are statistically associated with risk for schizophrenia. A lot of them are psychiatric, and that fits with what we said before. There's a lot of genetic overlap between psychiatric disorders, but some of them are interesting. Palpitations, obesity, dysuria, viral hepatitis, synovitis and tenosynovitis. So we start to get some new clues about the underlying biology and the relationship of these disorders to others. In this same consortium, we looked at another thing that people don't often think about, which is that in the electronic health record, if you think about it, there are labs being drawn all the time, right? Millions of data points about blood markers or other markers. Could we use that to identify something that's associated with the genetic risk of these disorders, which might tell us something about the biology or be a biomarker? What we did here is we took a genetic risk score for depression, in this case, applied it to patients across multiple healthcare systems, and looked at all the labs.
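The scan being described amounts to regressing each lab value on the polygenic score and comparing the strength of association across labs. A toy version on simulated data follows; real analyses add covariates and a multiple-testing correction, and the 0.3 effect on white blood cell count here is an invented number, not the study's estimate.

```python
# Toy "lab-wide" scan: regress each lab value on the polygenic score
# and compare association strength. All data are simulated.
import random

def slope_and_corr(x, y):
    """Least-squares slope of y on x, and the Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sxx, sxy / (sxx * syy) ** 0.5

random.seed(0)
n = 500
prs = [random.gauss(0, 1) for _ in range(n)]
labs = {
    "wbc": [7.0 + 0.3 * p + random.gauss(0, 1) for p in prs],  # tracks the PRS
    "sodium": [140 + random.gauss(0, 2) for _ in range(n)],    # pure noise
}
results = {name: slope_and_corr(prs, vals) for name, vals in labs.items()}
# "wbc" should show a clearly stronger correlation with the score than "sodium"
```

In the real analysis, the same regression is run for every lab in the record, which is what lets one result stand out across all the health systems at once.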
There was one lab, really, that was just screamingly statistically positive across all these systems to the same degree, and that was white blood cell count, which is surprising and maybe difficult to interpret, because you're not drawing white blood cell counts on patients because of their depression. And in fact, the magnitude of elevation of the white blood cells was not the kind of thing you'd see with a bacterial infection. But it's pointing to something related to inflammation, presumably, that is being conveyed along with this genetic risk for depression. And so it's a way of, again, approaching getting information using big data and applying it in a data-driven way to psychiatry. When we think about risk stratification, there are some big opportunities and challenges. So I've shown you these electronic health record models, for example. In suicide and in other places, they seem to do better than clinician prediction or stratification. But one question, of course, is, well, what am I supposed to do with that as a clinician? If you tell me this person has a statistical risk that's elevated, I also want to use my clinical judgment, and I would want clinicians to do that. But how do you put those two things together? How do you weigh them? Also, whenever you're training models on the real world, which we like, you're also training them on what happens in the real world. And what happens in the real world is not always good. It's not always equitable. So any biases or inequities that are happening could get baked into these models. They could learn from that, and that's, of course, what you don't want. So people pay a lot of attention to trying to de-bias these kinds of models. The polygenic risk scores, which I would argue are maybe the best-established biomarkers in psychiatry, are not really diagnostic or ready for clinical use.
They also have their own biases, which is that they are trained on data that is almost overwhelmingly from people of European ancestry and not global ancestries. And so their generalizability and their equity is questionable. And again, we don't know how best to combine them with clinical factors. And perhaps the biggest thing, and we've run into this as we think about implementing suicide risk prediction, is the actual implementation as a clinical decision support tool. When you're building something into the electronic health record or the health system, there are a lot of things you want to do to be sure it works: validate the optimal thresholds, make sure it's not stigmatizing, make sure that it's of net benefit to patients, and actually satisfy regulatory requirements as well. So this is an area that I think is moving quickly and we will see in clinical practice, and we're beginning to see already, but there are a lot of things to keep in mind. This other area of precision treatment, I started with a question, you know, how do you know what to prescribe somebody, let's say somebody who's coming in for an antidepressant? Most of the time it's kind of a one-size-fits-all trial-and-error approach, right? We don't really know. And in fact, if you look at antidepressant treatment studies, clinical trials, on average the mean advantage of taking the drug compared to placebo is less than two points on a Hamilton depression scale. That is not clinically meaningful. But we also know it's not the whole story. We know that there are people who benefit tremendously and then there are other people who don't benefit or can't even tolerate the medicine. And so in this study, for example, they modeled the response data and found that there are, essentially, meaningful subgroups of people, some of whom do really well and much better than on placebo, and others who don't.
One way people have begun to think about precision psychiatry is in the realm of pharmacogenetics. And if you go out to the exhibit hall, you'll see a lot of companies that are implementing pharmacogenetics, and many people here may have tried these tests with patients. And just to remind you, the way these basically work is there are two kinds of genes that they test for. One kind is pharmacokinetic genes, usually the P450 enzymes, which are involved in metabolizing drugs. And the other is so-called pharmacodynamic genes, which are in theory related to the mechanistic action of the drug. What is it working on? And there are recognized pharmacogenetic associations. So, you know, FDA labeling for some drugs, and guidelines that point to the fact that some of these, especially the P450 enzymes, might influence a person's response to treatment. Or in the case of carbamazepine, for example, HLA genotypes that can sometimes have catastrophic effects if they're ignored. But how good are these? Well, with the typical commercially available tests, what happens is the patient spits into a tube or whatever, you send it off to the company, they have a proprietary algorithm, and then they give you a report. And it's usually kind of a color-coded report or some variation of that. These are drugs that would be potentially problematic: red; yellow is use with caution; green is, sure, not anticipated to be a problem. There have been a number of meta-analyses now, many of them sponsored by the companies. And the overall odds ratios are in the 1.4 to 1.8 range, meaning the studies are set up so that you either deliver the results to clinicians or you just let people have treatment as usual. And they do show some advantage. Many of them have shown advantage on some outcome measures and not others. They have been widely critiqued for having a high risk of bias due to missing data and lack of blinding.
And so many institutions and work groups and so on have made statements about this. The most recent, actually, is an article in the American Journal of Psychiatry from the biomarker work group of the APA. So you have the International Society of Psychiatric Genetics, the American Academy of Child and Adolescent Psychiatry, the APA. They're all saying the same thing: we don't think these are ready for widespread use. And partly that's because they have limitations. So one of them is they're comparing outcome with the test to treatment as usual, but they're not answering what to me is actually a more interesting question, which is not whether this person is going to respond to this drug, but what choice I should make among the available options. Also, the effect sizes are modest, as we said, but most of them ignore the fact that there are other things that could make you look like, for example, a poor metabolizer: other medicines you're on, or your diet, or underlying medical conditions. And these are proprietary algorithms; I don't know exactly how they're using the information. They do give a very helpful traffic-light kind of report, but those can be oversimplified. And sometimes, you know, you might have a patient who says, you're prescribing me a red drug, that's not good, you don't know what you're talking about. And then they have these pharmacodynamic variants built in, which for the most part are not really well supported. So I'm a believer in pharmacogenetics as a really important opportunity, but I don't think we're there yet; there's definitely more to be done. There are other approaches people are taking.
Here's an example from the work of Amit Etkin and others, one of the first studies of a series that they've now published, in which they used EEG signals to predict antidepressant response in major depression. They were able to extract an EEG signature that seemed to predict the response to sertraline versus placebo across several study sites. And maybe more interestingly, that sertraline response signature was predictive of a poor response to TMS, or vice versa. So there is at least a little bit of guidance toward one treatment over another, for example, and this has been further evaluated. This kind of research is continuing. I'll just mention that we've also, again, used big data from our EHR and artificial intelligence to try to predict differential response to antidepressants. So we started with all of the data we had in the system for patients when they started one of four kinds of antidepressants, which are the first line, right: SSRIs, SNRIs, bupropion, or mirtazapine. We had 38 years of longitudinal data. We used natural language processing. We used deep learning models, but also simpler models. And we predicted response at four to 12 weeks after starting the antidepressant. Now, when I see a patient, it's this scenario, as I imagine it is for you. Let's say I have two patients here, Tom and Megan. The effectiveness studies and meta-analyses would say, I have, let's say, these four choices. There's about a 50-50 chance that they're gonna respond to one of them or all of them, but I don't know which one is the right one. So these models were able to correctly predict response in an independent test set, of course, 74% of the time. So their positive predictive value was 74%. For example, and these are actually data from two patients in the data set, Tom was predicted to respond to an SSRI with 94% probability; Megan, with 28% probability.
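Predictions like these can be organized as a per-patient map from drug class to predicted response probability and then ranked. In the sketch below, only the two SSRI probabilities (94% and 28%) are quoted from the talk; the other numbers are invented placeholders for illustration.

```python
# Ranking per-drug response predictions for two patients. Only the SSRI
# probabilities come from the talk; the rest are made-up placeholders.

def rank_choices(predicted_response):
    """Drug classes sorted from highest to lowest predicted response."""
    return sorted(predicted_response, key=predicted_response.get, reverse=True)

tom   = {"SSRI": 0.94, "SNRI": 0.88, "bupropion": 0.85, "mirtazapine": 0.80}
megan = {"SSRI": 0.28, "SNRI": 0.35, "bupropion": 0.33, "mirtazapine": 0.46}

print(rank_choices(tom)[0])    # SSRI
print(rank_choices(megan)[0])  # mirtazapine
```

The clinically useful output isn't the raw probability so much as the ordering: which of the available first-line options the model expects to work best for this particular patient.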
But more interestingly, we could predict essentially the counterfactual, that is, what the response to each of these is likely to be. So for Tom, actually, the SSRI was the best choice, but he was gonna do well on all of them. For Megan, she wasn't gonna do that well on any of them, but the best one to start with, according to the model, was mirtazapine. So you could imagine that even modest improvements in how we do drug selection could be very meaningful. What about prevention? If somebody says to you, what can I do to prevent depression in my child, or, well, let's say my adult child, well, maybe my child in general, what would you say? Well, one thing would be: don't have affected relatives, try not to have people in your family who have depression, try not to have early adversity or a traumatic childhood, and maybe don't use drugs. None of those are super actionable. So we have taken, in a series of studies, again a data-driven approach, and you can do this for other things as well, to ask: what do we find that is consistently associated with protection from the onset of depression? Looking at what we would call incident depression: you start with a large sample of people who don't have it, some of whom develop it and some of whom don't. And we used a number of data types here, electronic health records, genomics, and a method called Mendelian randomization, which is a way of getting at whether an association is a causal association. And two factors, this is work led by Karmel Choi, emerged as causal and modifiable for the prevention of incident depression, actually regardless of your polygenic risk of depression or history of trauma. What do you think those were? We've seen this over and over in cohorts now. Exercise, sleep, diet, okay, so some of these have come up, but the things that come up consistently: one is exercise, physical activity, and the other is social connection.
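The Mendelian randomization method mentioned above can be reduced, in its simplest single-instrument form, to a ratio of two regression coefficients, the Wald ratio. The coefficients below are made-up numbers purely to show the shape of the calculation.

```python
# Mendelian randomization, simplest (Wald ratio) form: a genetic
# variant serves as an instrument. If it shifts the exposure (say,
# physical activity) by beta_gx and the outcome (depression) by
# beta_gy, the implied causal effect of exposure on outcome is the
# ratio. Both numbers below are invented.

def wald_ratio(beta_gx, beta_gy):
    """Causal effect estimate of exposure on outcome."""
    return beta_gy / beta_gx

# Hypothetical: variant raises activity by 0.2 SD and lowers a
# depression score by 0.05 SD, implying roughly -0.25 SD of depression
# per SD of activity.
effect = wald_ratio(0.2, -0.05)
```

Because the genotype is fixed at conception, it can't be caused by depression or by confounders like socioeconomic status, which is what lets the ratio be read causally (given the usual instrument assumptions).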
And again, these are very large studies using causal inference methods, and it's actually changed my practice. I mean, I always used to say it's good to be physically active and stay connected, and to find ways to help people do that, but now I'm really convinced that these have an important effect. There is some evidence that they may have a differential benefit for some people. For example, this is a study that looked at polygenic risk for depression and social support. We had not seen an effect of polygenic risk on social support in our studies, but this one did seem to find that those with higher genetic risk had greater benefit from social support in preventing, or at least reducing, depressive symptoms. Okay, one last general area: can we actually do better in developing new treatments? When you hear about precision medicine, one of the places you may have heard about it is where it's been really successful so far, and that is cancer and rare disease. The last decade in oncology has seen a whole bunch of new treatment approaches that are designed to target mutations that may be driving tumors in patients. HER2, you've heard about these. For subsets of patients, these can be dramatically more effective than one-size-fits-all chemotherapy or other kinds of treatments. In rare disease, like cystic fibrosis, finally understanding the mechanism of the genetic cause of the disease and then targeting it with therapies that actually address that mechanism has transformed the outlook for patients with cystic fibrosis. It has dramatically increased life expectancy. And in fact, the evidence is clear that if the target for a drug you're developing has genetic support, that mechanism has genetic support, that drug is two-and-a-half times more likely to go from early development all the way through to being marketed compared to a drug that doesn't have that genetic evidence.
And it doesn't even seem to matter how big that genetic effect is. So just to give you one example from some work that came from our lab: Robbie Mealer, who has led this work, wanted to follow up on a genetic variant from the very large schizophrenia studies that was strongly associated with risk of schizophrenia, although its individual effect size was small. But it was notable because, unlike almost all of the other variants, it actually changes the protein sequence in the gene that it's in. And that gene is a gene called SLC39A8, I'm sure we're all familiar with that. And this is a missense mutation, so it's changing the amino acid sequence. And what is that gene? It turns out that gene is a manganese transporter. It's responsible for transporting manganese, you know, the mineral, across cell membranes. And that's interesting because manganese, which you can get, you know, it's in your diet, it's in your multivitamin probably, manganese is a cofactor for all kinds of enzymes that are involved in a process called glycosylation, where cells are adding sugars onto proteins and lipids. And that is a really crucial part of brain development and of helping brain cells know where to go as they're maturing in the brain. So Robbie was able to do studies, again using our electronic health records and actual specimens from people who carried this variant, whether or not they had schizophrenia, and showed that they had detectable reductions in manganese in their blood. When he did fancy profiling of the biochemistry of their cells and looked at their glycosylation, they had much simpler structures. In other words, the glycosylation had gotten interrupted. And when he knocked this human variant into mice, it caused very notable dysregulation, or dysfunction, of protein glycosylation in specific areas of the brain. And the point is that this is the kind of example where using data and learning in a sort of hypothesis-free way points us to new biology.
I don't think glycobiology is on anybody's lips as an important component of the biology of schizophrenia. Interestingly, Robbie has also shown that there are kids who have what's called a congenital disorder of glycosylation. So this gene is essentially missing in those kids. They have very severe deficits. But they have these same biochemical glycosylation deficits. And if you give those kids manganese, you can partially reverse that biochemical deficit. So that suggests, and we're doing a lot more work, or he's really driving this work, to see whether this could in fact be a genetically guided treatment development that targets a certain component of the biology of the disorder. That's just one of the biological themes that's emerged from these large-scale genomic studies. And you can see there are a number of others. All of these are promising areas that drug companies, for many of them, have been following up on. The one other approach I'll just end with is to say, well, what about what's already out there, drugs or compounds that are already being used? Maybe they could be helpful for some of the disorders that we treat. And again, here we can use the biology and big data to try to answer that question. So you remember I told you that we've done these studies looking at the genetic factors that seem to underlie a lot of psychiatric disorders. In a follow-up study, we took those genes, or genetic variants, and used them to essentially impute gene expression in the brain. I could go into the detail of how you would do that, but what it does is tell you which genes are being expressed more or less in different parts of the brain. And you can then take that and say, well, what compounds that we know about act on those genes or those pathways?
And we used that to identify 466 genes that are tapping these mechanisms, and by matching drugs to them, we found 39 repurposable drugs, meaning drugs that would be predicted to act on the mechanisms that these genomic factors are telling us about. So I will end and take questions with just the point that we have a lot of new opportunities. I think it's important not to over-hype where we are, because it's really easy to do that. What's exciting, I think, is that there's a lot of work going on in this area that wasn't possible before. Some of it, I think, will turn out to be very promising. And for us, the sort of watchword, we have a Center for Precision Psychiatry, as Sophia mentioned, is innovation to implementation. So there are lots of discoveries that are important and interesting, but for me the things that are exciting, and that I'm sort of committing my career to at this point, are things where I can see a path to leveraging the data and tools that we have now and bringing them to patients in the pretty near future, not in 50 years. And I think those are potentially transformative. I hope I've given you a flavor of the kinds of things you might be able to track down with those. But it's going to take a lot of work and a lot of interdisciplinary expertise, with clinicians and data scientists and implementation scientists and so on. So I'm going to stop there. I would love to discuss further, and thank you all for being here. So we invite people to come up to the mic to ask questions, and if you can, speak loudly, because it is being recorded, so that people can understand. Thank you so much for this wonderful presentation. Very exciting. However, my perspective comes from a different angle, since I'm a forensic psychiatrist and I work for the government. I'm used to scrutinizing standards of care and also convincing those with the power of the purse to spend money on things that actually help.
And with those AI models and predictions, how do you see the ethical concerns that they may stigmatize people who may or may not develop mental illness, who could be denied a certain level of care or be asked to carry higher premiums on health and life insurance? That's question number one. And from a forensics standpoint, how do you see psychiatrists being able to use AI models as proof that they provided the care that was needed when, let's say, the model tells you there's a 75 percent chance that if you admit somebody to the hospital, they're going to be worse off, but 25 percent maybe not, and somebody doesn't put the person in the hospital, and they end up committing suicide, and you end up with a malpractice lawsuit? Great questions. So your first question, which I think is a super important question, is how do we know this is not going to make things worse for people and create stigma? You can imagine some use cases where, even if it worked as intended, it would be potentially bad. So you're saying to somebody, you know, you have a high risk of, let's say, schizophrenia, or you have a higher risk than other people. And now, first of all, what are you supposed to do with that? And secondly, you're carrying that information, and maybe it affects your self-concept or how others treat you or the kinds of treatments that they make available. I think that's crucially important, which is why, you know, we've engaged a lot of bioethicists in many of the steps, especially in the suicide work. And I think there are some use cases that are not worth really going too far with, but some that to me have an ethical imperative on the other side as well. Suicide, to me, is one of those in a way. Obviously you could do it irresponsibly. But think about the alternative. So the alternative is, let's say we have ways, we try to do this clinically, right? We try to say, I think you're at high risk and I'm going to hospitalize you. Let's say that we have ways that actually are informative.
We have risk factors or we have predictions. We need to make sure that they're incorporated into clinical care in responsible ways. But this is a life-and-death situation, obviously, that is only getting worse. And there, to me, it seems like the cost-benefit is different than the cost-benefit of, say, predicting, certainly predicting, that a fetus is gonna develop schizophrenia, which I think is not a good use case. So I think it's something we have to wrestle with and do very responsibly. It has to be part of the implementation. It has to also involve people with lived experience, and, you know, we do a lot of focus groups with clinicians as well. So it's not a trivial question at all. And I mentioned the algorithm bias. We're now doing a lot of work, as are many people, to try to figure out where those biases could be occurring, if they occur. The second question was about the medical-legal stuff. Let's say the algorithm says this person is not at high risk, and you let them go, let's say, and something terrible happens. We did actually implement this in an emergency room in a quality improvement scenario. And we spent a lot of time developing the contextual material and educating clinicians about how you might think about this information. Making sure that they're not overvaluing it. Making sure that they treat it as one piece of information. And, you know, the guidance from the forensic folks that we spoke to, and clinical compliance and those kinds of folks, was: document that you made a decision based on the information that was available to you, that you made the best decision you could, and these were the reasons. That, as in current practice now, is about as good as you can do to protect against legal action. Of course, anybody can sue for any reason, but those are good questions.
To your point about AI bias, I'm curious whether these models depend on the EMR, meaning, is that data only relevant to large hospital systems that use Epic, or can they be generalizable to other settings, other EMRs? How important is it to have all of the information, the 10 years of being a patient in those healthcare systems? So is that type of model relevant to patients new to the healthcare system? I'm curious if you could expand a little bit on that. Yeah, that's great. Yeah, so we've looked at that to some extent as well. And one thing, for example, in our suicide models, we actually explicitly tested how much historical information you need to get the model to the level where it's good and performance might plateau. And in that case, it turned out that two years was enough, was good. You didn't need more than two years. Six months was also pretty good. So I think it's true that if you have less data on somebody, the model will not give you as good a prediction. That's why you might wanna build these models in ways that incorporate other data, clinical information at the point of care, which we're doing as well, so that it may weight certain things higher if you have them. And you can also train or calibrate the models so that the predictions are based on the time window that you have, and that can enter explicitly into the confidence of the prediction and so on. Then the question about, so what do you do? This is a big hospital system with Epic or something. Is that the only place it can work? And there are two answers, I think, so far to that. One is no, in the sense that many information technology standards now use a common data model or common data standards, which cross platforms. It doesn't incorporate everything in the electronic health record. But there are ways. It seems to be somewhat case dependent.
Like when we trained these models for bipolar disorder, they worked really well across different healthcare systems. You trained a model in one system and ported it. In other cases, we've seen it not work well. And we've done some deep dives. For example, there are different cultures in different healthcare settings, right? In one hospital system we had, which was a mostly rural hospital system without a lot of psychiatric tertiary care, a lot of the diagnoses were being made by primary care docs or surgeons, or they would propagate a diagnosis somebody else made. So the quality of the data was very different. So without getting too nerdy about it, we are in the process of developing what are called transfer learning methods, which can kind of align two different systems to make the predictions tuned, essentially, to the system that you're in. There's a lot more work to be done for that, though, yeah. I have a question, if I could. Yes. Moderator's privilege. So as you know, we're working on trying to develop something at Home Base, looking at multiple different markers, whether that's fluid markers or genetics or imaging, and using big data. But I guess, for me, the holy grail seems to be: how can we utilize a system like Mass General Brigham and what's at our fingertips to develop something that would be useful for the entire population? And I guess, is there a way that you can speak on that, and how can you develop something like that? You mean people using it in a way outside the health system or something? Correct, something that's generalizable to the public. Yeah. Well, if the models we're talking about were going to be used, you'd still have to have people interacting with the healthcare system as part of the information you have. But there are lots of other ways to develop these big data models that are applicable outside. For example, digital and smartphone sensor data, which is being collected and is also incredibly rich.
And I forget the number, what is it, 97% of people have a smartphone or something. So there's a lot of sensor data that's being captured, and then there are a lot of specialized apps that will capture these data. And those kinds of things also allow you to deliver interventions in real time. That's another thing that we're doing in another context with suicide: delivering interventions through the smartphone at times that we think people need it most. Thank you. Yes. Dr. Smaller, I'm fascinated by your presentation, and I can only envy the amount of data which you can accumulate in your brain. Maybe you have illegal contact with Elon Musk and he implanted something in you. I didn't want to say it, but yes. My question is: from your point of view, what is the future for current residents when they sit down in the office to see patients? Obviously there is a limit to how much a human brain can accumulate, on the average level, for doctors in regular offices, not a scientific organization, so how would they be able to operate with all this data? But most intriguing: did you ever try to analyze the interaction between patient and psychiatrist? Why is the same psychiatrist the best for some patients and the worst for other patients? What is going on between them? Or have we already lost the human interaction between patient and psychiatrist in 15 minutes of prescribing medication? Yeah, okay. Those are great questions. Briefly, your first question, about what we are gonna do in the future with all this information: that's why I think the goal is to develop what we would call clinical decision support tools, meaning you can't keep track of all this stuff, but you could benefit from knowing some piece of information in your patient interaction. It could be something like the person's level of risk or what treatment they might respond to. That could be delivered to you in the office.
Or it could be information that they are reporting, let's say through their smartphone, about how they're doing out in life, which you don't necessarily know right now, and which could enrich the patient-provider interaction. And that gets to your second question about the patient-provider interaction. I think that's a fascinating field. There are people, including some in this audience, who are leading a lot of research on understanding the nature of empathy and dyadic relationships and affective relationships. And what is it that's going on, what is the active ingredient, when a clinician is transformatively helpful for somebody, versus one who maybe shouldn't even be in practice? And I think there are technologies now, in fact, there have been studies, looking at the synchrony of biological signals and those kinds of things as people get into conversations or relationships, that may inform us about some of the things that we should be emphasizing in training, for example. It's not my area, per se, but I think it's a really interesting one. I'm always fascinated by the literature of Jerome Frank, who had sort of said, there are a lot of therapies and they all kind of, on average, work about the same. And the ingredients that he pointed to, which I think have been supported since then, are a caring, nonjudgmental person who is a consistent presence and who inspires, essentially, a belief that they're engaged with the person in a way that's going to be helpful. And those are elements of a lot of different types of therapies. But we could learn a lot more, I think, about what the master clinician is doing. Thank you so much. Thank you. Jyoti Pathak from Cornell. Dr. Smaller, fantastic to see you. Nice to see you. In fact, I read all his papers. I really like your last comment about innovation to implementation. And my question is not so much clinical or technical.
In my humble experience, when we are trying to implement these tools, the barriers are mostly institutional. Yeah, yeah. And I'm curious what wise advice you can share with us, because most of the time it's about revenues and reimbursement and not about the science. Yeah. Yes, that's a very good point, and I think it's totally true. We could have interventions and tools that are well supported and that we think are going to be helpful for people, but the reality is they're not going to happen unless there is effectively a business model, pardon the expression, a way of financing these things. And of course we know our healthcare system is complicated: there are third-party payers, and there are institutions that are on the hook for managing a population. So I think part of it is engaging with the systems early on and understanding the problems the system is facing, or recognizes, some of which have economic elements to them. For example, we've been doing work on how to do better with ED workflow. Psychiatric emergency rooms, as I'm sure everybody here knows, have been overfilled since COVID. People are spending long periods of time there, bed placement is difficult, lots of beds are closed, et cetera. What can we do to use data to solve real-world problems like that, which have resonance for the institution? But I think it is a challenge, because it's not always a cost-saving thing. It might be a cost-effective thing, and making that case is something we have to do. Some of that is having the right players in the room, possibly including third-party payers, and understanding the economics of it, as you probably do. It's a challenge, I agree. Hi, my question is similar to one of the previous ones, about how much data you need in order to make a prediction.
And I wonder, specifically in the bipolar predictions, did you get any sense of what the prominent features were that led to the prediction? I ask because I'm a psychiatrist and the head of data science at a small startup that's developing precision psychiatry tools, and when we've tried similar things, the features that have come out have usually not been the most helpful. Yeah, I agree. So we've looked at feature importance in all of these models, that is, ranking the influence of a given predictor within the whole model. In the bipolar models, as I said, we did two streams of work: one across multiple institutions and the other in our system for young kids. For the models across multiple institutions, the top features were an odd collection of variables. Some of them made a whole lot of sense, like prior mood symptoms, subsyndromal symptoms, or another anxiety disorder. But some of them were particular constellations of medications or tox screens or things like that. And I agree, that's not terribly helpful. It also makes you worry a little about how much of that is capturing something idiosyncratic to your system. The only thing that makes me feel better is that we tested it in external systems, because there's a big problem in this kind of big-data machine learning of overfitting: you could be learning something really specific about your health system or the doctors who happen to be practicing there, and once you try it somewhere else, all bets are off; it doesn't really work. In other cases, though, the features have been more helpful. In the case of the young people with bipolar disorder, a lot of them actually made a lot of sense, things like impulse control problems as kids, or attention problems, and so on.
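The feature-importance ranking described above — "what's the influence of a given predictor in that whole model" — is often computed by permutation: shuffle one predictor at a time and measure how much the model's accuracy drops. A minimal sketch on synthetic data, assuming a toy linear classifier and invented feature names (these are illustrative, not the actual variables from the models in the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: three predictors, only the first two drive the outcome.
# Feature names are hypothetical, for illustration only.
features = ["prior_mood_symptoms", "anxiety_disorder", "noise_variable"]
X = rng.normal(size=(2000, 3))
logits = 2.0 * X[:, 0] + 1.0 * X[:, 1]  # noise_variable has no real effect
y = (logits + rng.normal(size=2000) > 0).astype(int)

# A fixed "fitted" linear model standing in for any trained classifier.
weights = np.array([2.0, 1.0, 0.0])
predict = lambda M: (M @ weights > 0).astype(int)

baseline = (predict(X) == y).mean()

# Permutation importance: drop in accuracy when one column is shuffled.
importance = {}
for j, name in enumerate(features):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance[name] = baseline - (predict(Xp) == y).mean()

for name, imp in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{name:>22s}: {imp:+.3f}")
```

The ranking recovers the strongly causal predictor as most important and the irrelevant one as unimportant, but note the caveat raised in the talk: a feature can rank highly by being merely correlated with something causal at one site, and such rankings often fail to transport to other health systems.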
I think a big challenge for the field is getting to causal AI, essentially, because these models are optimized for prediction, meaning you don't necessarily care what the model features are as long as they're highly predictive, right? But not quite, because if a feature isn't really indexing something causal, it's probably not that stable when you go somewhere else. So I feel we need to do a lot more to home in on the factors that are not just distantly correlated with something causal in a way that adds up to a good predictor, but are actually part of what's going on. And those may or may not be adequately captured in an electronic health record. I was about to ask, do you think that data exists in an EHR, if you had to put your nickel down? Yeah, I don't know, and I think it's probably use-case dependent. I'm sorry, this will be my last one: if you could add something to the EHR to get to a more causal piece of data, what would you change? In our realm, a big part of what we have too little of is social determinants of health. Systems are now doing a lot more in this area, but as we know, a lot of the things that lead people to have psychiatric difficulties, or make them a lot worse, are not medical conditions per se but conditions of life: poverty, job insecurity, food insecurity, neighborhood crime, et cetera. There is a very large-scale study I've been involved in, called the All of Us Research Program, which is actually gathering those data along with these other types of data, and which I think is going to show us, to some extent, how important that stuff is. But, yeah. Thanks for answering all the questions. I have a question regarding that SMART system that you said can be incorporated into Epic. Yes. So I have two questions.
So I work in a healthcare system, and we mostly deal with patients who are involuntarily committed for DTS or DTO behaviors, danger to self or danger to others. So I have two questions. One, is this SMART thing available for any Epic system, or is it still in the R&D phase? And second, we get patients who have already made a suicide attempt or have been threatening one. Is the system better at predicting whether they will attempt suicide in the future? Well, first of all, SMART on FHIR, or SMART apps, is really an information technology framework that allows this kind of common data model. So it's not our model, per se. You can essentially build an app that uses this technology, which makes it interoperable, or at least largely interoperable, with other systems and other EHR vendors. We're just revising our app now, actually, but in theory, yes, it would be portable to other systems. And Epic itself has what it calls the App Orchard, which is like an app store; that's one way these apps go from place to place. Your second question, about predicting suicide in, say that again? In those people who've already been petitioned for being suicidal, is there any benefit to using such apps? Yes, got it. Well, we have built our models also incorporating prior suicide attempts, and in those models, prior attempts are, as you would expect, a strong predictor. But you're still getting additional prediction of a subsequent attempt. We just had a paper, again led by Yihan Xu, using what are called neural ODE models, which try to model the dynamic trajectory of risk. Because right now we say, oh, your risk is X percent over the next year, but it's not going to be X percent every day over the next year. So, beyond your question, probably, there are some ways to approach the dynamic nature of risk over periods of time. So, yes, recurrent attempts, for sure.
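The idea that annual risk "is not going to be X percent every day over the next year" can be made concrete with a toy ordinary differential equation: integrate a time-varying hazard and read cumulative risk off the curve at any horizon. This is only an illustration of the concept, not the neural ODE models mentioned above, which learn the risk dynamics from data; the hazard parameters here are invented:

```python
import numpy as np

# Toy hazard: risk is elevated shortly after an index event (e.g. a prior
# attempt) and decays toward a small baseline. Parameters are illustrative.
def hazard(t):
    """Instantaneous event rate (per year) at time t (in years)."""
    return 0.02 + 0.10 * np.exp(-3.0 * t)

# Euler integration of dS/dt = -hazard(t) * S with S(0) = 1,
# where S(t) is the probability of remaining event-free through time t.
dt = 1.0 / 365.0
ts = np.arange(0.0, 1.0 + dt, dt)
S = np.empty_like(ts)
S[0] = 1.0
for i in range(1, len(ts)):
    S[i] = S[i - 1] * (1.0 - hazard(ts[i - 1]) * dt)

cumulative_risk = 1.0 - S  # P(event by time t)
print(f"risk by 1 month: {cumulative_risk[30]:.3f}")
print(f"risk by 1 year : {cumulative_risk[-1]:.3f}")
```

With a front-loaded hazard like this one, a disproportionate share of the one-year risk accrues in the first month, which is exactly the kind of information a single "X percent over the next year" number hides.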
Okay, well, thank you. Thank you. This will be the last question. Thank you for the talk. I'm a forensic psychiatrist in Canada, and we have a few people with ultra-treatment-resistant schizophrenia. It was very interesting to hear about the glycobiology. Is there anywhere you know of where we can send samples to find out, you know, whether there's anything like the manganese finding? I mean, next thing I know, I'm going to be giving out manganese. Oh, no, no, don't do that. We don't know that that's going to help yet. It could be looked at as part of research. These are folks in a forensic institution, so the research part might be more challenging. But I don't think we have the data now to say that giving manganese will improve psychiatric symptoms in those folks. We're hopeful that's the case, and we need to gather more human data. So if you think there's the possibility of a research collaboration or something to look at that, that could be very interesting. Thank you. Thanks, everybody.
Video Summary
Dr. Jordan Smaller, a psychiatrist, epidemiologist, and geneticist, has shared insights on the integration of big data and AI in precision psychiatry. He highlighted that psychiatric disorders are common and often lead to significant morbidity and mortality, yet effective treatments can be elusive. Dr. Smaller emphasized the importance of precision medicine, which leverages genetic, environmental, and lifestyle variability to improve disease treatment and prevention.

He detailed various advancements, such as using electronic health records (EHRs) and AI to predict disorders like bipolar disorder and suicide risk. His research demonstrates that EHR data can accurately predict psychiatric conditions and outcomes, potentially enhancing clinical decision-making and prevention strategies.

Dr. Smaller also discussed the use of polygenic risk scores as psychiatric biomarkers. These scores, derived from genomic data, could help identify genetic predispositions to disorders like schizophrenia and depression, although their practical application is still developing.

Furthermore, Dr. Smaller underscored the potential of pharmacogenetics and machine learning in tailoring psychiatric treatments. He highlighted the possibilities presented by digital health technologies and algorithms to provide real-time, personalized interventions.

The challenges discussed include addressing algorithmic bias, integrating these technologies into clinical practice responsibly, and ensuring ethical use to prevent stigma. Overall, Dr. Smaller envisions a future where innovative data-driven approaches significantly enhance the precision and efficacy of psychiatric care.
Keywords
precision psychiatry
big data
AI
psychiatric disorders
precision medicine
electronic health records
polygenic risk scores
genetic predispositions
pharmacogenetics
machine learning
digital health technologies
algorithmic bias