Biomarkers in Psychiatry – Are We Ready For Prime ...
Video Transcription
Today I have the honor to present you with a panel of experts talking about biomarkers in psychiatry. First, my name is Nina Kraguljac. I'm at the University of Alabama at Birmingham, and I'll be talking about the potential role of biomarkers in schizophrenia. Then Dr. Kumar, who is an expert in Alzheimer's disease, will take up the concept of biomarkers in that area. He will also make a distinction between biomarkers and molecular targets, which may seem like a subtle distinction but is really important. Then Dr. Grisenda will talk about machine learning and tell us about not just the promise but also the pitfalls of machine learning as a potential mechanism for biomarker discovery. And at the end, Dr. Nemeroff will round out the biomarker talks, look at the big picture, and take us on a journey of biomarker development, where we are and where we can go. So let me just pull up my slides for a second. Okay, I'll get started. First, my disclosures. I would like to start by telling you a little bit about myself. I am a psychiatrist who specializes in first-episode psychosis; those are the patients I treat all the time. I usually work in the emergency room, because that's where most new patients present, at least in our healthcare setting. And those are really the hard discussions, right? You have your 18-year-old, your 20-year-old, your 22-year-old coming in, and you suddenly have to break it to the patient and to the family that they have a serious mental illness that we have to treat, one that is potentially going to impact the rest of their lives. These discussions are made especially hard because we don't have biomarkers that help me tell the family and the patient what to expect. Right now the most common question is, are they going to be okay? Can this be treated? And all I can say is, well, most people respond pretty well to treatment, so fingers crossed, hopefully you're one of them. As a psychiatrist, I find that extremely frustrating, because hoping for the best is not what I would like to offer my patients as a treating physician. I would like to have more precise tools, where I could say, okay, you're going to have a mild course of the illness; it's going to be fine, you just need to take your medications. Or, okay, you'll have a more severe course of the illness, so we need to be really aggressive very early on so we can get this under control. And even if patients do respond well initially, it doesn't necessarily mean that they will respond well over the course of their illness. The other thing that's really frustrating is this slide; it's the bane of my existence. This slide shows you that in first-episode psychosis, it can take up to 16 weeks for a patient to respond to antipsychotic medication. And what you see here is that it's not what people generally assume, that most people respond very early on. It used to be thought that early response in the first or second week of treatment was a good indicator of outcomes, but in this study, from the RAISE cohort, it was demonstrated that this is not the case. Treatment response over time is essentially a linear relationship: every week, only a subset of patients respond, and it can take up to week 16 to have 75% of people respond.
So because I don't have a biomarker that tells me how long I have to wait until my patient responds, or, if he or she doesn't respond, when I can switch them to another medication, I basically watch my patients come in and out. They are psychotic, they can still be pretty symptomatic, and sometimes you have to watch them for six weeks, eight weeks, ten weeks, and they're still very symptomatic. So what I wish I had is a therapeutic monitoring biomarker, a treatment response predictor. Biomarkers cover a wide range of different things. A biomarker is basically a biological signature of a disease process, or of another biological process, that allows you to make predictions about certain things. Take diagnostic biomarkers: sometimes, especially in the first episode, it's very hard to differentiate. Is this a patient who will end up having schizophrenia? Is this somebody who's only going to have a brief psychotic episode? Are they going to end up with bipolar disorder? Right now, only time will tell; with first-episode psychosis, it's not always clear what kind of disorder patients really end up with. So a diagnostic biomarker that helps me differentiate between those patients would be really helpful clinically, because then I could decide whether a mood stabilizer would be helpful in addition to an antipsychotic, or whether I should not expose the patient to a second medication. Then, of course, risk prediction biomarkers would be even better, where you could take people even before the illness starts, put them in a scanner, get some pretty pictures, and have those tell you whether a person is at higher risk for schizophrenia or will develop the illness. Right now we have a proxy, called high risk for schizophrenia, but of those at high risk only about 20% actually convert, sometimes even less. So yes, that high risk is much higher than baseline, but of 10 patients in front of me, only two will develop the full-blown illness. If I could identify those two people ahead of time, that would be fabulous and would really make an impact on outcomes. Then, when it comes to the development of new treatments, it's always important to know that your drug does in the brain what you think it does. Those are called target engagement biomarkers, where you can really see what the drug does in the brain. And then, of course, therapeutic monitoring biomarkers. For example, is there a marker that tells me that my patient will gain 40 pounds on olanzapine? Or is there a biomarker that tells me whether my patient is going to get severe EPS or severe tardive dyskinesia? It would be great to have therapeutic monitoring biomarkers that you can use not just for therapeutic effects, but also to monitor for side effects. Of course, anybody in the room who is a clinician knows that this is a science fiction story I'm telling here, and that we're nowhere near actually implementing it in the clinic right now. But I do want to tell you that we've made good progress, and I want to take you on the journey that I've been on since I started doing schizophrenia research. So, of course, the biomarkers we want are ones that are clinically relevant. The easiest biomarker would be this kind of abnormality.
We can, like, quantify the abnormality that causes the disease, right? And then we can capture it. So with neuroimaging, you have several different opportunities to capture disease-related processes. You can do structural imaging, like where you didn't, like, measure cortical thickness. Or you can do functional imaging where you see, like, how the networks in your brain are, like, abnormal. You can do metabolic imaging, like with spectroscopy, where you can, like, measure your metabolites. So all those different techniques are measuring complementary things. They're all in a different time scale, and they all measure a different aspect of the illness. But what the literature tells us, like, no matter which method you use, you find an abnormality. So that's the interesting thing about schizophrenia. It is not that you find a single abnormality in a single region of your brain, right? It's not like a stroke where in neurology you put a patient in the scanner and you see, like, it was an MCA infarct. Like, we're not there yet, right? We're not there. But what we do see is we see subtle abnormalities already here. I'm showing you a data set in medication-naive first-episode psychosis patients, most of them who presented to the emergency room acutely psychotic. And what we do is we take them, we talk to them about our research study, we scan them the same day, and then by the evening they get started on antipsychotic medications. So this study is a nightmare to conduct because, like, that means you're on call 24-7. And, you know, whenever a patient shows up in the ER, you're very grateful that your resident woke you up at 10 p.m. or 2 in the morning to let you know that there is a patient because those are very hard to recruit. But the effort is really worth it in the end because, like, what you are able to do when you image medication-naive patients is you can image the brain without confounds of antipsychotic medication exposure, for example. Like, antipsychotics, we know, affect brain structures and functions. So that is something that when you look at medicated patients, you never know. Is that a disease or is this, like, just really a medication effect? So that's one important thing. The other important thing is, of course, like, you've seen first-episode psychosis patients, right? A lot of them look like you and me. You know, they have very subtle illness, but then you also see the spectrum of, like, very chronic patients who cannot function a lot. So there is the idea that schizophrenia, at least in a subgroup of patients, is a progressive illness, right? So and then what you wonder is, like, is it the pathology I'm imaging or is it the illness chronicity that I'm imaging? And again, if I want to figure out a mechanism, like a reason for why the people are sick, the chronic side effects is not going to be what is going to be helping me to move the field forward. So again, so we scanned those medication-naive first-episode psychosis patients, and what you can see here is already in these patients you can see abnormalities in their white matter integrity. So here we measured fractional anisotropy, which is a nonspecific marker of white matter integrity, so how the brain is connected to each other structurally. And you can see that patients are already abnormal. 
However, what you also can see when you, like, compare the healthy controls on the left and the patients on the right, not every single patient, like, every dot is a patient or a healthy control, and you can see not every single dot is clearly abnormal. There's only a small subgroup of patients that are abnormal. But, you know, what we do, what most of our field is doing, we compare group A to group B, and we say, like, this is abnormal for the entire group. We also have shown that this, like, neuronal, this white matter integrity deficits are clinically relevant. So what I show you here is the relationship between white matter integrity and duration of untreated psychosis and the relationship between white matter integrity when patients are medication-naive and, like, how does it predict improvement to treatment after four months of treatment. So what we see is, like, the more intact your brain is, the better you respond to treatment, right? And the longer you go without treatment, like, the duration of untreated psychosis is the time when patients first have symptoms to the time when they finally start getting treated, which in the U.S. is about 16 months. So it shows you that, like, this white matter integrity marker really is associated with clinically relevant variables. However, again, the studies that we're doing is, like, we're recruiting a bunch of healthy controls, right, from a big, big population, and then we're also recruiting a bunch of patients. And then the way we analyze data, what we do is, like, we then figure out what is the average patient, what is the average control, right? So, again, white matter abnormalities. But when I have my patient in front of me, right, what it doesn't tell me is which dot are you, right? Are you the one that really has abnormal white matter or are you one of those dots who looked kind of normal? And, again, like, that's, like, the critical question on, like, how can we move those group findings to individual-level data findings so that I can actually make inferences based on the data for the patient that's sitting in front of me? So that, of course, gets me to a very uncomfortable question, namely the question of what is normal, right? And I don't understand why I didn't ask myself that question the first 12 years of my research career. But, like, once you ask your question, it keeps you up at night. So, for example, I give you an easy example about weight, right? 50 kilograms for me would be fine. For Dr. Nemerov over there, I think I would start a cancer workup right away, even though those are both the same numbers, right? So normal really needs to be put in context. So who did that really well is the field of pediatrics, where they have developed growth charts where you measure, like, is your baby appropriate height and weight or is it falling off the percentile curve? I think most of you in the room are familiar with those curves, and I'm sure a lot of you guys had nightmares about your kids with those curves. So what we now can do, now that we have open access data, like, basically what you can do is then you can really separate patients and control, separate women and men, because, like, clearly we all know that the development of women is different in terms of timing than of men. So it would be unreasonable to assume that those are measuring, that you can just, you know, generalize over both men and women. But also when you look at development and aging, it's a really important factor is the age, right? 
Things are not static, right? If I weigh 50 kilograms now, that's fine, but if I were 10 years old, 50 kilograms would not have been fine at all. So those are things you have to set in context, and based on that you can develop these growth curve charts. What I'm showing you here is a theoretical growth curve chart, and we're going to look at our patients the same way. When you bring your kid to the pediatrician, you basically have age and, let's say, height, and you place the child on the chart. This one is at the median, so they're basically normal weight; they're within the roughly 68% of people around the median, the majority of people at that age and height. Similarly, you can see that some children fall lower. For example, this one is two standard deviations below the norm, so that is a negative deviation. For our data, we use a cutoff of two standard deviations, because that flags roughly the extreme 5% of the population: if a child has a negative deviation in weight at that cutoff, about 95% of people of the same age and height will have higher weights. You can do the same with positive deviations. A positive deviation here means, for example, your weight is higher than in your reference cohort; this one is only between one and two standard deviations above the reference cohort, which means it's still within the normal range, but maybe something to watch. The good thing is that open access data now allows us to do growth curve charting for the brain, which is super exciting, and I'll show you why it's important from a scientific perspective, too. You can see here I'm charting a couple of different brain regions where we do growth curve charting. We basically have big data sets; this one is based on 58,000 people, and in those 58,000 people we built the same kind of growth curve charts. For every region of the brain, we extracted the numbers and then developed those charts, and you can see that the developmental patterns are different in different brain regions, and the aging patterns are also different in different brain regions. So you cannot just assume one size fits all for the brain.
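Before going further, here is a minimal sketch, not the speakers' actual pipeline, of the growth-chart logic just described: fit a normative model of a regional brain measure as a function of age and sex in a reference cohort, then express a new patient's value as a deviation z-score, flagging |z| > 2 as extreme. The cohort, the simple polynomial model, and all numbers are illustrative assumptions; published brain charts use far more flexible models (e.g., GAMLSS) fit to tens of thousands of scans.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative reference cohort (a stand-in for tens of thousands of real scans) ---
n = 5000
age = rng.uniform(8, 80, n)
sex = rng.integers(0, 2, n)                      # 0 = female, 1 = male
# Hypothetical regional volume: nonlinear age effect + sex offset + noise
volume = 4200 - 8 * (age - 25) ** 2 / 100 + 150 * sex + rng.normal(0, 180, n)

# --- Fit a simple normative model: expected volume as a function of age and sex ---
# (Real brain charts use flexible models such as GAMLSS; a polynomial is only a sketch.)
X = np.column_stack([np.ones(n), age, age ** 2, sex])
beta, *_ = np.linalg.lstsq(X, volume, rcond=None)
residual_sd = np.std(volume - X @ beta)          # constant spread, for simplicity

def deviation_z(patient_age, patient_sex, patient_volume):
    """Z-score of a patient's volume relative to the age/sex-matched norm."""
    x = np.array([1.0, patient_age, patient_age ** 2, patient_sex])
    expected = x @ beta
    return (patient_volume - expected) / residual_sd

# --- Example: a 22-year-old male patient with a small measured volume ---
z = deviation_z(22, 1, 3500)
print(f"deviation z = {z:.2f}; extreme = {abs(z) > 2}")   # |z| > 2 ~ outer 5%
```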
Some brain regions mature earlier, some later; the frontal cortex, for example, takes a long time to be myelinated, so it's really important to put this on a region-by-region basis. The first people to show proof of principle here were Andre Marquand's group, who invented normative modeling for the brain, and they showed that group differences really don't accurately reflect what happens at the individual level. What I'm showing you here, on the top, is the percent of extreme negative deviations from the normative model, so basically where the brain is smaller than expected, and the bottom shows the percent of extreme positive deviations from the normative model. By percent, we mean the percent of the patients we scanned who have this abnormality, and you can see it's very heterogeneous: in this cohort, about 5% is the maximum share of patients who have those extreme deviations in any one place. What you can also appreciate is that abnormalities are present, yes, but not in everybody, and the spatial patterns are very different, which means different brain regions, and probably different brain functions and different cognitive functions, are impacted. So, again, this is going to start helping me understand my patient at the individual level; it helps me understand that you have those negative deviations in this particular region. Now I will show you one example of how we did a first proof-of-principle study for clinical relevance. Again, we recruited antipsychotic medication-naive patients, typically from the ER, we scanned them, we put them on risperidone, which is a first-line treatment for first-episode psychosis, and after 16 weeks we determined how well they responded to treatment. Just to give you an idea, these patients were fairly sick, with a BPRS score of 47; that's a psychotic symptom measure, and it basically tells you that most of our patients ended up having to be hospitalized, at least for a little while. When we then do the standard thing, a healthy control group and a patient group, about 100 in each, and compare how big brain regions are between groups, what we see is that there are already reductions in brain volumes, here in the hippocampus or thalamus. Interestingly, you can't yet see ventricle abnormalities, which is a little surprising, because enlarged ventricles are one of the hallmark features of schizophrenia. But it can be easily explained when you look at individual-level data. When we look at the ventricles, there are about 30, 40, 50% of patients who already have a deviation in ventricle volume, but we also find that it's not always an increased ventricle volume: about 10% of our sample had a decrease in ventricle volume, which I really dug into the literature for but could not find ever reported. So this is a testament to the heterogeneity of the patients we have in front of us.
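To make the heterogeneity point concrete, here is a hypothetical sketch of how deviation maps like these get summarized: for each region, the percent of subjects whose deviation z-score is extreme, and for each individual, the regions in which they deviate. The deviation matrices below are simulated stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical deviation z-scores: rows = subjects, columns = brain regions
regions = ["hippocampus", "thalamus", "caudate", "putamen", "ventricles"]
z_patients = rng.normal(0, 1.3, size=(100, len(regions)))   # patients: wider spread
z_controls = rng.normal(0, 1.0, size=(100, len(regions)))

def extreme_deviation_summary(z, region_names, cutoff=2.0):
    """Percent of subjects with extreme negative / positive deviations, per region."""
    neg = (z < -cutoff).mean(axis=0) * 100
    pos = (z > cutoff).mean(axis=0) * 100
    return {name: (round(n, 1), round(p, 1)) for name, n, p in zip(region_names, neg, pos)}

print("patients:", extreme_deviation_summary(z_patients, regions))
print("controls:", extreme_deviation_summary(z_controls, regions))

# Per-patient view: which regions are extreme for one individual
patient_0 = dict(zip(regions, z_patients[0]))
print("patient 0 extreme regions:", [r for r, z in patient_0.items() if abs(z) > 2])
```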
Of course, you want to know if there's anything here that I, as a clinician, actually care about, because these are pretty pictures, but what do they mean for me? When we did a head-to-head comparison of our standard volume measures against those deviation measures, we saw that in three different regions, baseline negative deviations predicted subsequent treatment response, whereas the raw values did not predict response at all. So it's a first example that these deviation scores can be clinically relevant. Finally, I want to point out the brain regions that were relevant: the caudate, the putamen, and the ventral diencephalon, which contains the substantia nigra. If anybody is thinking nigrostriatal pathway right now, you're right on the money. So how do we interpret it? Usually the story is, the more normal the brain, the better people respond. Here, that's not true; it's a little more nuanced. It's the people who have decreased volumes in those dopaminergic regions who turn out to be the better responders, and that's perhaps not surprising. PET studies have shown that the higher your dopamine levels, the smaller your brain volumes in these regions, and antipsychotics counteract that; we know antipsychotics can increase brain volumes in those regions. So it makes a lot of sense, and it also shows you that it's those pathways specifically relevant for psychosis that are predictive of subsequent treatment response. That's just one example to start with, and we can see where we can go from here, but it shows that your regular volume measures don't measure up to your deviation measures. Of course, the end goal is to find clinically relevant biomarkers, but this is going to take us a while, which means job security. And finally, I want to say that this type of data takes years to collect and takes big, big efforts and lots of people to help, so I just wanted to acknowledge them. Okay. And next it's going to be my pleasure to introduce Dr. Kumar, who is going to talk about biomarkers and therapeutic targets.

I want to thank Dr. Nemeroff and Dr. K for inviting me to be part of the panel. What I'm going to do for approximately the next 20 minutes is tell you a story. It's about Alzheimer's disease, about biomarkers, and about the status of experimental therapeutics in Alzheimer's disease. This is obvious to all of us: aging, if it's not your issue, it will be. We all get there sooner or later. We were all young once, et cetera, et cetera; you get the drift. This is Dr. Alzheimer. Some of you, most of you, probably know this, but he was a psychiatrist by training, while Dr. Sigmund Freud was a neurologist by training. Dr. Alzheimer has been described by many leaders of modern psychiatry as, in a sense, a father of the field, and Dr. Kraepelin, who was the father of biological psychiatry, was Dr. Alzheimer's mentor. That school of German psychiatry around the year 1900 was extraordinarily strong. It included psychiatrists who were also anatomists and histologists and neuropathologists. In addition to Dr. Kraepelin and Dr. Alzheimer, it included Nissl, of the Nissl granule, and Dr. Golgi of the Golgi bodies. They were all part of this famous research institute in Munich at the time. This is Auguste Deter, the first known case of Alzheimer's disease.
In her case, the symptomatology began not with cognitive changes, but with paranoia and suspiciousness of her husband. He tried to manage her at home as long as possible, and when he couldn't, he admitted her to this place, euphemistically called the Castle of the Insane, the municipal asylum for the insane and epileptics in Frankfurt. This is where Dr. Alzheimer trained as a psychiatrist and where he worked for a few years as a staff psychiatrist. This is where Auguste Deter was admitted, and this is when he first examined her clinically and became very interested in the case. After Dr. Alzheimer moved to Munich to be with Dr. Kraepelin, he kept in touch with Professor Sioli, who was the superintendent of the hospital. After the patient died, the brain was sent to Munich for an autopsy, and the autopsy was performed by Dr. Alzheimer. This is Dr. Holtzman's cartoon. Look at the area that's purple, which is within the cell; next to the purple you'll see blue, sort of intertwined, almost like pieces of thread. That's the neurofibrillary tangle, which is an intracellular protein. Outside the cell you'll see something that's reddish pinkish. That's the amyloid core, what's called beta amyloid, and we'll talk more about that. It's surrounded by what we now call biomarkers: inflammatory markers, microglia, and so on. This is what Dr. Alzheimer found in Auguste Deter's brain when he did the autopsy. This was the first time in psychiatry, or neuroscience, however you want to call it, that there was a connection between abnormal proteins in the brain that one could correlate with abnormal behavior antemortem. So the plaques and tangles remain, to this day, the sine qua non of Alzheimer's disease. You need a clinical history, memory loss and other cognitive deterioration, together with what used to be post-mortem evidence; now, with neuroimaging, that's changed. Again, plaques and tangles under the microscope. Alzheimer first described Auguste Deter's case in 1906 and presented it, and it was published as a case report in 1907. But in 1911, he had a few more cases that he put together. This is his hand drawing of what a neurofibrillary tangle looked like to him, his back-of-the-envelope drawing, if you will, of what the tangle looked like under the microscope. So what's a biomarker? You've heard Dr. K describe very elegantly some biomarker aspects of psychosis. This is a very simple but clinically useful definition, the definition that the rest of medicine follows for a biomarker. A biomarker is a laboratory-based measure that can be used to, one, diagnose a disease, two, monitor a disease, and three, study the impact of an intervention on the disease. In general, all three metrics have to be met before you consider a protein or some other laboratory measure a biomarker. The most common example, the one everyone in medicine relates to, is hemoglobin A1C. It's used to diagnose diabetes, it's used to monitor diabetes, and when you go to your doctor every three months, six months, or once a year, in addition to a random glucose level, the more definitive test is the A1C. But treatment is not directed at the A1C. Treatment of diabetes is directed at changing the underlying pathology, insulin and glucose utilization, et cetera. But that's a biomarker we can all relate to. So we talked about the post-mortem findings of amyloid.
Imaging. This particular finding is arguably the most impactful finding in psychiatry and in neuroscience over the past several decades. The compound is called PiB, Pittsburgh Compound B. It was developed at the University of Pittsburgh. Chuck Mathis is the chemist who developed the compound, and Dr. William Klunk is the psychiatrist who was involved in its development and the first clinical study. This is in vivo imaging of amyloid in the brain. If you go from left to right: control subject; the orange and red is the PiB signal, since PiB binds to the amyloid protein. Mild cognitive impairment is very early, pre-dementia but with memory impairment, and in MCI 1, 2, and 3 you notice amyloid binding, PiB binding, increasing. Then you go to the Alzheimer's patient and it's substantial. This was revolutionary in terms of our ability to image a protein in the brain that's related to behavior and psychiatric disorders. My colleague Gary Small, at UCLA at the time, developed another compound called FDDNP, which binds to tau and to amyloid; however, the signal-to-noise ratio is much better with PiB. PiB has now been replaced by other compounds labeled with fluorine-18, which means more convenient imaging. But nonetheless, we get the picture: injecting a ligand that binds to amyloid and then imaging the brain, you can quantify it to some degree, but you can clearly see abnormal protein in the brain. This is Trey Sunderland, another colleague of ours who was at NIMH for many years; Trey is also a psychiatrist. These are lumbar taps, cerebrospinal fluid measures of tau and amyloid. While amyloid is higher in the brain, it's lower in the CSF. And you can see, not a perfect separation, but considerable separation between controls and Alzheimer's disease. So the status of these biomarkers today is that increased amyloid in the brain and increased tau, both total tau and phosphorylated tau, are fairly good, reliable biomarkers. What do we mean by that? They can be used to diagnose Alzheimer's disease, not perfectly, but they can, especially when used together with clinical features. They can be used to monitor the disease. There are many papers, New England Journal of Medicine, Lancet, et cetera, over the last 10 to 20 years, where individuals who have very mild cognitive impairment can be tracked over time, a year or two or three, and as cognition deteriorates, amyloid binding goes up. So two of the three metrics have been met: they can be used to diagnose a disease, and they can be used to monitor a disease. The third one is where it gets tricky. What about the impact of treatment? This is a paper that Nemeroff and I and several colleagues who are part of the American Psychiatric Association's biomarker task force put out in the American Journal of Psychiatry, and we raised the question: amyloid and tau, are they biomarkers or are they molecular targets? Molecular targets meaning molecules that are the basis of therapeutic targets. Biomarkers are supposed to tell you something about the disease; they are not necessarily the disease itself, the example being hemoglobin A1c. In Alzheimer's disease, when we published, about two years ago, and much has changed in the past two years, we'll get into that, we were at the end of the first phase of anti-amyloid treatment for Alzheimer's.
Fifty to 100 therapeutic trials had failed, completely failed, meaning there was no separation between placebo and anti-amyloid agents over time when given to patients with Alzheimer's disease, which made us think these are probably biomarkers and not therapeutic targets. Then we moved to an FDA accelerated approval process. In the mid-'80s, near the peak of the HIV-AIDS epidemic, there was a lot of angst nationally, in the scientific community, Congress, the FDA, et cetera, asking, do we really need to understand a disease before we treat it? Meaning, what if we get treatments that are effective, and if you wait for the entire process to play out, it takes too long? So the FDA developed accelerated approval pathways at the time, and these are the requirements. The disease has to be serious or life-threatening. Approval of the compound, the treatment, should be based on substantial evidence of effect on a surrogate endpoint, i.e., a biomarker. That could be a lab measure, a radiographic image, or a physical sign. It should be an endpoint that can be measured earlier, earlier than the final clinical disease outcomes. But here's the kicker: the surrogate has to be reasonably likely to predict a clinical benefit. It cannot be just any biomarker; it has to be one where there is reasonable evidence and expectation that it's connected to disease progression. This is how many compounds got approved early on by the FDA, especially in HIV-AIDS; it has now moved on to cancer, et cetera. For about 10 to 15 years, all amyloid trials failed. No one raised the issue of accelerated approval until the drug company Biogen introduced a compound called Aduhelm about two years ago. Aduhelm failed in the usual FDA trials. The FDA advisory committee advised against approving it. It went to the final FDA board, and they overruled their own advisory committee. Three committee members resigned in protest, because the FDA said, we're using the other criteria, the accelerated approval criteria, because amyloid is lower on this drug. Yes, patients don't get better, but there's some suggestion that a subgroup does. This raised a furor and considerable national objection, saying they were cheating and had changed the rules. Finally, CMS, the Centers for Medicare and Medicaid Services, refused to cover payment for clinical treatment, and that stopped the process. Then came Eli Lilly, with donanemab. Now let me tell you what I'm doing. I'm not necessarily badmouthing pharma. I'm pointing out how what was widely considered a biomarker slid very quickly into becoming a therapeutic target. People assumed that amyloid was causing the disease; therefore, when you take amyloid out of the brain, everybody gets better. This was published in the New England Journal of Medicine shortly after the other fiasco. Here's what I want to show. For 76 weeks, that is a year and a half, in a placebo-controlled design, and some of you will remember the Mini-Mental State Exam: if you get 30, you're doing great; if you have less than 10, you have advanced dementia. There was absolutely no meaningful separation between the two. The curve above is donanemab. Both groups progressed, unfortunately. Now look at the same study. Look at what happens with amyloid. The patients who got donanemab, the plaque-removing agent, had a dramatic drop in brain amyloid measured using positron emission tomography. The placebo group had nothing. So this drug does what an amyloid-lowering agent is expected to do: lower amyloid in the brain.
But patients did not improve. So that was donanemab, the drug. This is from the New England Journal article two years ago: there was a greater reduction in the amyloid plaque level in the donanemab group than in the placebo group, for which we were unable to show an association with clinical outcomes at the individual level; furthermore, secondary outcome differences between the two groups did not provide support for the clinical efficacy of the treatment. So you don't treat the biomarker, you treat the disease, and clinically, what you're looking for is cognitive improvement. This next drug is lecanemab; again, New England Journal of Medicine. Here, they used a different metric, called the CDR sum of boxes. Memory, orientation, judgment and problem solving, community affairs, home and hobbies, and personal care are the six items that this particular scale uses. Each item is scored zero to three, so if you're impaired in everything, you have a score of 18. This is published in the Archives of Neurology, same scale. So this is what you saw: a very small difference between the placebo group and the drug group, again, after about a year and a half. But look at the difference in the PET imaging of the brain. Patients who received placebo showed no change in brain amyloid; patients on lecanemab showed a dramatic reduction. So you see, again, a big disconnect between what the drug does in terms of reducing amyloid in the brain and the minimal associated clinical improvement. So this is an editorial in the well-known British medical journal, The Lancet, and they called it Tempering Hype and Hope. These are some of the points in the editorial. A 0.45 difference on the CDR-SB might not be clinically meaningful; the minimal clinically important difference for the CDR-SB was 0.98 for people with mild cognitive impairment and 1.63 for those with mild AD. Whether lecanemab is a game changer, as some have suggested, remains to be seen. The important point here is statistical difference versus clinical difference. After 18 months of treatment, if all you see is 0.45, it is very small. Now, there are others who claim that finally we're seeing light, possibly, but it's still a very thin ray of light. So this is an op-ed. I've been spending a lot of my time over the last years writing op-eds and trying to generate excitement, for want of a better term. Dr. Nemeroff and I wrote a similar one about a year ago, and we talked about the difference between a drug effect that is statistically significant and one that is clinically significant, and our concern about the hype. There's a lot of demand. Everybody wants a new drug. I do, you do, everybody wants one. The government wants one, pharma wants one, advocacy groups want one, et cetera. The question is, what is clinical improvement? What is meaningful? I talked about donanemab in the New England Journal study. Lilly has come back with the same drug. They now say they have data from a phase three trial, 1,700 patients as opposed to the phase two trial, same compound. And this is what Nature says about it: another new Alzheimer's drug, promising trial results for the treatment. So now we are in the second wave of Alzheimer's treatment. We are talking about anti-amyloid compounds that are showing some signal, but they are also very toxic and very expensive. Still, this is different from the first 10 to 15 years, when there was absolutely no signal with these compounds. So here again, this is from the Nature article. On May 3, the pharmaceutical company Eli
Lilly announced in a press release, and that's where it all begins, that its monoclonal antibody, donanemab, slowed mental decline by 35% in a 1,736-person trial. But researchers warn that until the full results are published, questions remain as to the drug's clinical usefulness, as well as whether the modest benefits outweigh harmful side effects. Always beware when it starts with a press release, because that's the first step in the hype process: you get everyone excited and agitated, saying there's a drug here that's useful. So this is Dr. Mesulam; some of you know the name. He's a professor at Northwestern and one of the world's leaders in dementia research, especially frontotemporal dementia research. But Marsel Mesulam, a neurologist at Northwestern University in Chicago, is more cautious. The results that are described are extremely significant and impressive, but clinically their significance is doubtful, he says, adding that the modest effects suggest that factors other than amyloid contribute to Alzheimer's disease progression. We are heading into a new era, he says. There's room to cheer, but it's an era that should make us all very sober, realizing that there will be no single magic bullet. These drugs have tremendous side effects. Now, this is donanemab. The results haven't been published, but these are data that are being released slowly. Amyloid-related imaging abnormalities, called ARIA: microhemorrhages, which they report in about 31% of the drug group versus 30.6% of the placebo group; edema in 24% of the drug group, of which 6% was clinically significant; and ARIA leading to seizures in the drug group. All drugs have side effects, but these are serious side effects. These are potentially life-threatening. These are potentially immobilizing. I'm not saying these drugs are terrible, but it's time, to quote the Lancet article, to temper hype and hope. We need to be very careful before these drugs are released to the market. They're very expensive, about $26,000 a year, given by intravenous infusion, donanemab once a month, lecanemab twice a month. It may be worth it if you get a compound that's very effective. The take-home message to everyone in the scientific community, and all of you are part of the scientific community: statistical significance versus clinical significance, they're not the same. Consider side effects. These are not SSRI-type side effects, not that those are simple; these are extraordinarily dangerous side effects: hemorrhage, edema, and seizures. We need to compare clinical efficacy with existing treatments. There are existing treatments for Alzheimer's: the cholinesterase inhibitors. We forgot about them. They were approved 20 years ago, they have effects comparable to the new drugs, yet they don't have these side effects, and they're relatively inexpensive. You need community-based effectiveness studies. Most of the bleeding and hemorrhages occur in individuals who are on anticoagulants, and many elderly patients are on them. Caution is needed. Separate public pressure from scientific merit. So let me leave you with this last slide. What's a biomarker, and what's the real pathophysiological process? Neurodegeneration, that's the fundamental process in Alzheimer's, as you're aware: progressive cell loss, progressive brain death. You have a host of biomarkers around that. You have tau, you have inflammatory biomarkers, you have a variety of other vascular features.
The closer a biomarker is connected to the basic pathophysiological process, the better; otherwise, it's not going to be helpful. But the question is, how close should it be? And is the biomarker diagnostically useful, or is it the actual process? Could it be both? In theory, yes, and the recent study showed it could be both, but with a cautionary note: the closer it is to the pathophysiology, the more likely you are to incur a lot of side effects. So it's a question of balancing. What you want is clinical efficacy; then what are the side effects, what's your risk, how risk-averse are we as a society? Are we willing to say, my parent is 70 years old, 80 years old, 90 years old, this could produce some improvement, but I also need to know what the side effects are? That's where we are today. And biomarkers can be very useful. Dr. Nemeroff will touch on this when he discusses biomarkers in other conditions, hypercortisolemia in depression, et cetera. But this is where we are in Alzheimer's. We have progressed, the scientific community has progressed, the research is good. Nonetheless, we are at the cutting edge and there is tension, I hope creative tension, between science and public health. Thank you very much for your attention.

Thank you. Well, thank you, Dr. Kumar, for this wonderful talk and for sharing your wisdom with us. And next, I'll hand it to Dr. Grisenda, who is going to talk about machine learning and its pitfalls.

So, Dr. Grisenda, UCLA; these are my disclosures. My aim today is to really provide a broad overview of machine learning. There's a lot of promise; it's a hot-topic word, and we're going to discuss why. There is a lot of promise that's validated there, but there are a lot of challenges, some of which have already been discussed by the other speakers today, and I'm going to expound upon those. Now, this is meant to be relatively data-type agnostic as well as biomarker-type agnostic, whether it's prognostic or treatment prediction. We'll focus a little bit on treatment prediction, but as you will see, all of the machine learning work on biomarkers has very similar issues. So, treatment selection in psychiatry to date has been very much one-size-fits-all, right? It's very personalized, but not very precise. We personalize it by our preferences as clinicians, the patient's preferences, cost. We occasionally look at the empirical evidence and try to fit it to the patient. The problem being that the empirical evidence, as Dr. K mentioned, is purposely designed to minimize individual variation, to get towards group-level treatment effects, effects in what they call the quote-unquote average patient, which doesn't really exist. So the individual walking into your clinic looking for treatment recommendations very rarely is going to resemble that randomized controlled trial patient, and attempting to apply the empirical evidence to those individual patients can be very difficult. Then, obviously, as a result, we pick one, and mostly it's trial and error. We pick our favorite, give it the standard dose, and see what happens. They may benefit, they may not, and they may have adverse reactions, which re-triggers the cycle for those who don't benefit. And yet again, we try to optimize based on a loosely personalized approach that isn't terribly precise. What precision medicine hopes to be, in order to get towards individual or at least subgroup-level treatment prediction, is a shortcut.
So we still don't quite understand all the mechanistic underpinnings of our disorders, and we don't even understand most of the mechanistic underpinnings of the treatments, necessarily. But the idea is that we don't necessarily need those: we can approach this in a data-driven fashion and predict at the individual level. So we don't need to understand the mechanisms, don't necessarily need to understand the treatments, but can potentially get towards better treatment selection and towards recovery faster. This would use biomarkers: omics markers, transcriptome, metabolome, methylation; neuroimaging; electrophysiology; and, increasingly, digital biomarkers, wearable sensors and such. And from those biomarkers, we would better predict, at a subgroup or an individual level, which treatments we should start with first, hoping for a higher likelihood of success than the current trial-and-error approach. Now, big data comes in different varieties. You can have omics data as a form of big data that is very feature-heavy: you can have tens of thousands of genes from a single RNA-seq experiment, but usually few observations unless you're very well funded. The other option is electronic medical record data, which is at the other end of this, where you have lots of observations and perhaps fewer features, though I'd actually argue EHR data these days is very much high observation, high features. Now, the traditional method for looking for biomarkers, and we're just going to look at the example of gene expression data, has been to perform the gene expression analysis, and the differences you see, usually between responders and non-responders, are very small. On average treatment effect size, I think there was a systematic review that looked at this previously; even under ideal RCT conditions, treatment effect sizes are about 0.5. So we're looking for very nuanced differences for most of these interventions. Then we'll apply a statistical model for differential expression, limma or DESeq2, whichever you prefer. What that's doing is fitting, basically, a pre-assumed model to the data and then looking at threshold changes. Everybody's familiar with this: people will pick a fold change and a p-value that they think are somewhat meaningful, and then do some sort of attempted correction for the multiple testing problem. The problem with the threshold-based biomarker identification paradigm is that you'll then take your long list of genes that are upregulated or downregulated and try to fit them onto pathways with what little information you have. When you try to fit them onto pathways, obviously, you'll be missing a lot of genes that simply weren't significant, that didn't meet your threshold requirements. So you're really taking a cross-sectional piece of data and trying to infer a dynamic state about what's going on and how this is affecting treatment. Now, statistics versus machine learning. There's always a little bit of confusion on this, and sometimes a little bit of argument, occasionally, that you don't necessarily need machine learning. There are a few papers that have shown that statistical models, when well done, are marginally comparable to machine learning models. There are a few reasons for that, which we'll discuss. But the primary issue is that, yes, you can do prediction with biomarkers with just plain statistics, but statistics typically isn't designed for the purpose of prediction. Typically, you'll start with a pre-assumed model.
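As a quick aside before the statistics-versus-machine-learning point continues, here is a minimal sketch of the threshold-based selection step just described: per-gene tests between responders and non-responders, a Benjamini-Hochberg correction, then filtering on fold change and adjusted p-value. The data are simulated, and a plain t-test stands in for the far more sophisticated models in tools like limma or DESeq2; only the thresholding logic is the point.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated log2 expression: 2,000 genes x 20 responders and 20 non-responders
n_genes = 2000
responders = rng.normal(8.0, 1.0, size=(n_genes, 20))
non_responders = rng.normal(8.0, 1.0, size=(n_genes, 20))
non_responders[:25] += 1.0                      # plant a few truly shifted genes

# Per-gene two-sample t-tests and log2 fold changes
t_stat, p = stats.ttest_ind(responders, non_responders, axis=1)
log2_fc = responders.mean(axis=1) - non_responders.mean(axis=1)

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up procedure)."""
    m = len(pvals)
    order = np.argsort(pvals)
    adjusted = np.empty(m)
    running_min = 1.0
    for k, idx in enumerate(order[::-1]):       # walk from largest p to smallest
        rank = m - k
        running_min = min(running_min, pvals[idx] * m / rank)
        adjusted[idx] = running_min
    return adjusted

p_adj = bh_adjust(p)

# Threshold-based "hit list": |log2 fold change| > 0.5 and FDR < 0.05
hits = np.where((np.abs(log2_fc) > 0.5) & (p_adj < 0.05))[0]
print(f"{len(hits)} of {n_genes} genes pass the thresholds")
```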
With a statistical model, you're going to set, up front, how the relationship between the features and the prediction looks, by whatever you choose: linear or logistic regression, some typically parametric model, which means it's going to have very strong assumptions about the data; you generally want it to be normally distributed. You're primarily looking at an additive effect between the features and the outcome. And you'll have to pick very few predictors, maybe 10 to 20, because otherwise it will overfit. Then you run your model and do hypothesis testing to look at that underlying process. So the statistical model is about how to best capture the data-generating process, and then you use hypothesis testing to make inferences about what's going on, sort of underneath the hood, in the system you're looking at. The problem is that while you can then use that fitted model for prediction, it was not built for prediction. Looking at the p-values of the predictors in that model tells you nothing about their actual predictive value; it's their association with the target outcome, which means you could use those same predictors in another model and it could predict with very low accuracy. On the other hand, machine learning uses general learning algorithms. They can function in a non-parametric fashion, which means they can capture the complexity of non-linear relationships. So for all of the relationships here, if we assume there are probably going to be minor contributions from each of those features, it captures that better. It can run multiple models, and it's really aimed at increasing predictive performance: it models to the point of increasing accuracy, or whatever your predictive performance metric is. So it's designed for prediction. That said, inference, which the statistical model was better designed for, is something that's much more difficult to extract from a machine learning model. Now, the hope for single biomarkers, as I think Dr. K nicely demonstrated, is probably sort of dead for psychiatry. If we were going to have them, we would have them. Sorry, I guess you didn't quite say that, but I'm getting to the point of it: if you could have a single feature from a single modality that predicted treatment response, or response or remission, we probably would have found it by now, even with the sort of crappy statistical models that we've had. It would have popped out. The fact is, in reality we know that each of the different genes participating in a cascade, especially in gene expression, methylation, whatever you want to look at, contributes in a nuanced fashion, not necessarily in an additive or linear fashion, which is why it's typically not captured by the statistical models. We also have to think about gene versus environment effects. If we had a single biomarker that we were chasing after, its expression would be equally impacted by things going on over time, so its expression is never stable. The idea that you could have a 20-year-old with one gene biomarker that predicted response to escitalopram, and that it would be the same in a 60-year-old, is simply not going to happen. Because of those gene-environment effects, you're always going to have things changing.
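Picking up the statistics-versus-machine-learning distinction from above, here is a hedged sketch: a conventional logistic regression and a non-parametric random forest fit to the same simulated response-prediction problem, with both judged by cross-validated predictive performance rather than by in-sample p-values. The data, feature counts, and settings are all invented for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical cohort: 300 patients, 50 candidate biomarker features,
# only a handful of which carry (partly redundant) signal about response.
X, y = make_classification(n_samples=300, n_features=50, n_informative=8,
                           n_redundant=5, class_sep=0.8, random_state=0)

models = {
    "logistic regression (parametric, additive)": LogisticRegression(max_iter=5000),
    "random forest (non-parametric)": RandomForestClassifier(n_estimators=300, random_state=0),
}

for name, model in models.items():
    # 5-fold cross-validated AUC: performance on data the model has not seen
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.2f} (+/- {auc.std():.2f})")
```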
On top of that, you're going to have the impact of exposures. For gene expression, for example, methylation could be affecting the expression, and methylation could be impacted by smoking or by childhood trauma. There are lots of things that go into these interactions. So we're most likely going to be, as Dr. K mentioned, looking more at biomarker signatures within a single modality that better capture that complexity in a more stable fashion, that can be measured over time and can be deconvoluted from all these other confounding exposures. Now, the machine learning types, just very briefly. We're usually concerned with supervised learning, which takes a data set that's been labeled, where we know the features and we know the outcome, and gives it to the learning algorithm, which learns the patterns; then you give it new data and it's able to make those predictions. Unsupervised learning starts without the answers and allows the learning algorithm to find the patterns, clustering, for example. And then deep learning, which I just want to point out because it is definitely a hot topic these days: neural networks, which become deep when you give them multiple layers of learning. They can take many, many inputs and many, many outputs. The nice thing about machine learning models, and sort of their detriment as well, is that they have many more parameters than your standard parametric models. Simple linear regression has slope and intercept, two parameters; once you get into deep learning models, you have millions of parameters. So this is great. It would seem like it would naturally catch what we're looking for in these biomarkers. It should be easy peasy, right? We should have these by now. Unfortunately, not to be all doom and gloom, but our progress towards clinical implementation of biomarkers, or what are called predictive models using biomarkers, for treatment response or diagnosis or prognosis or monitoring, has been slow. There is so much interest; as you can see here, it's a near-exponential curve of interest over the last 10 years. But the number of FDA-approved machine learning applications or models for psychiatry is zero. There are two in neurology that relate to autism diagnosis, so maybe we can latch onto those to show that we've made progress, but for the most part we do not have any. There have been a number of recent meta-analyses and systematic reviews that have delved into the root-cause analysis of this implementation gap, and most of it is bias in the analysis, which I'll talk a little bit more about. So, the explanatory factors. This is the pathway a typical machine learning model goes through; it isn't meant to be a primer on machine learning, just to give you a rough idea. You start with your offline data, clean up the data, pre-process the data, make sure all the features are properly scaled, and pick your features, automatically or in a regularized fashion if you like. You train the model, you optimize it, and you do some form of internal validation, which helps mitigate over-fitting or under-fitting, which we'll talk about. Then it should go through a process of external validation, and eventually you have a candidate model that you can use. The problem, the trend that we see, is that there is simply inadequate detail on pre-processing: people aren't reporting how they're processing their data.
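To ground the workflow just outlined, here is a minimal sketch, with invented data, of the kind of pipeline being described: scaling, feature selection, and a regularized classifier chained together so every step is re-fit inside each cross-validation fold (internal validation), followed by a final check on a held-out set standing in for an external cohort. Real external validation means a genuinely independent sample, which a random split only approximates.

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import roc_auc_score

# Hypothetical biomarker data; a held-out portion stands in for an external cohort.
X, y = make_classification(n_samples=500, n_features=200, n_informative=10, random_state=0)
X_dev, X_ext, y_dev, y_ext = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Scaling, feature selection, and a regularized classifier in one Pipeline, so that
# every step is re-fit within each cross-validation fold (no leakage from held-out folds).
model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=20)),
    ("clf", LogisticRegression(penalty="l2", C=1.0, max_iter=5000)),
])

# Internal validation: cross-validated performance on the development data
internal_auc = cross_val_score(model, X_dev, y_dev, cv=5, scoring="roc_auc")
print(f"internal (cross-validated) AUC: {internal_auc.mean():.2f}")

# "External" validation: refit on all development data, score on the untouched set
model.fit(X_dev, y_dev)
external_auc = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print(f"external (held-out) AUC: {external_auc:.2f}")
```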
Beyond that, most people are not internally validating, so most of the models are over-fit and over-optimistic, and there are very low rates of external validation to mitigate that optimism. In fact, in most of the systematic reviews, 92 to 95% of studies made no attempt at external validation. Machine learning models are excellent at finding patterns, to their detriment, which means they love to over-fit. The mitigations against over-fitting are external validation and regularization, which refers to techniques that shrink the coefficients so that you don't have so much complexity in your model. The other issues are data drift: data changes over time, patterns of diagnosis change over time, patterns of prescribing change over time, so it can be very hard to get two data sets to match. Data leakage is a function of not splitting your data properly, so information that shouldn't inform the target cheats and gives your upstream features information about the future; that's much more pertinent to time series. Again, there are processing errors and, again, just inappropriate validation. So what you have is a failure to generalize. When you cannot externally validate your model, when your model cannot make predictions at the same level of accuracy on new data, it doesn't generalize, which means it can't cross contexts. So this is an example I gave: if we were looking at a model predicting agranulocytosis during clozapine treatment from gene expression, this is about inappropriate evaluation metrics. Depending on what you pick, you could appear to have an excellent, almost perfectly predicting model; by accuracy, this is 97%. But it does not predict the minority class that you actually care about, because the data are imbalanced. So you can cheat the metrics to look like you have a very good model. But again, data quality is a big issue. Garbage in, garbage out. You can have the converse, which is bad modeling, okay data in, garbage out, or you can just have bad data quality. Psychiatry has always had a data quality issue; the problem is that now we're doing it at scale with machine learning, so it's really highlighting the issues with the data. You have a lot of noise in the labels, a lot of measurement error, and, again, sample size is a huge one. All the benefits of machine learning cannot function properly when you don't have enough data. And when you are missing samples, in terms of representation and algorithmic bias, if you didn't have an example of the kind of patient you want to predict on, it's no better than the RCTs when the patient walks in the door: you have very little certainty about your prediction if they weren't present in your data. So it's not simply that more data is better. More data has all the same inherent problems and can cause all the same issues, and you sometimes need a lot of data to overcome those issues. So in terms of moving ahead, we need to look at our data. We need to do data-centric machine learning. Andrew Ng at Stanford has really been a proponent of this in other sectors, which says that you have to start looking at your data quality more so than your algorithmic complexity. And, not to be all doom and gloom, we have lots of ways to improve the data with machine learning; in some ways, improving data quality for machine learning involves more machine learning. And again, to advance the field, we have to be accountable through peer review.
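To illustrate the metric-cheating point from the agranulocytosis example above, here is a tiny simulated sketch: with a heavily imbalanced outcome, a model that always predicts the majority class scores near-perfect accuracy while having zero recall for the minority class that actually matters. The 1% event rate is an assumption chosen only for illustration.

```python
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score, recall_score

rng = np.random.default_rng(3)

# Simulated labels: ~1% of 10,000 patients develop the outcome (minority class = 1)
y_true = (rng.random(10_000) < 0.01).astype(int)

# A degenerate "model" that always predicts the majority class (no event)
y_pred = np.zeros_like(y_true)

print(f"accuracy:          {accuracy_score(y_true, y_pred):.3f}")           # looks excellent
print(f"balanced accuracy: {balanced_accuracy_score(y_true, y_pred):.3f}")  # 0.5 = chance
print(f"minority recall:   {recall_score(y_true, y_pred):.3f}")             # the class you care about
```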
We have to commit to reproducibility, reporting standards, and improved data and code sharing. If you want a primer on machine learning that's a little more detailed, there's one that everyone on this stage contributed to. Well, thank you. Thank you so much for breaking down a very complex topic and laying out where we need to go next. So finally, I invite Dr. Nemerov to give us a big-picture overview, his 5,000-foot overview of biomarkers in psychiatry. An easy feat for a 15-minute discussion, but you go for it. Thank you, Nina. Thanks for having me here. This panel is a work product of the APA work group on biomarkers and novel treatments that I've had the privilege of chairing. And I'm going to do the unthinkable and not show you any PowerPoint. I'm actually going to talk to you, but I'm going to tell you a joke first. The joke has to do with a research psychiatrist who is walking on the beach at Big Sur, and he sees a bottle wash ashore, and he picks it up. It's an old, dinky-looking bottle. He rubs it, and a genie jumps out and says, this is fabulous, you've rescued me, I've been in that bottle for so long. You can have any wish you want. And the psychiatrist thinks for a while, and he goes, you know, I want you to build a bridge between the United States and Europe so that I could drive there with my family. It would be so fabulous. The genie says, that's a really big ask. Do you have another request, just one other wish you might want instead? He says, I'd like a biomarker to predict response to antidepressants. And the genie says, you want two lanes or four? Okay? So fundamentally, what biomarker research is about is really, as the other speakers have alluded to, personalized medicine in psychiatry. And that burgeoning field has three goals. One is to identify people who have disease vulnerability, people who are likely to develop a disorder, so that we can practice preventative psychiatry. The second goal is to have a diagnostic marker of the disease, of which there are two types: a trait marker, which you show whether you're symptomatic or not, and a state marker, meaning when you're depressed you show the marker, and when you're not depressed you don't. And then the true holy grail is, of course, a marker that will predict treatment response. So why is it so difficult for us? Well, let's start with the fact that all of you in this room picked a really difficult organ to study, right? The human brain has 86 billion neurons and roughly an equal number of glial cells, and it's not like the liver, okay? The liver has three cell types. The brain has thousands of cell types, and no one brain region is like any other. So it shouldn't surprise you that we've made a lot more advances in hepatology than we have in neuroscience, because we have this really, really complex organ which, if you haven't figured it out, is really hard to access. You know, I had a resident in the research colloquium ask me today, do you ever do brain biopsies? I said, well, not legally. I mean, no, okay? That's a real problem. So a lot of our markers are blood markers, and all of you use biomarkers all the time; I just want to remind you of this. There are lots of validated biomarkers you use every day in your practice. You measure TSH. TSH is a great biomarker for hypothyroidism; you don't really need anything else to tell you that someone has either incipient or full-blown hypothyroidism, right? You can measure copper and diagnose Wilson's disease, right?
So we have validated laboratory tests that we use all the time, but they simply aren't available for psychiatric disorders, either for diagnosis or for treatment. So why aren't they? Why is this so difficult? Well, beyond the complexity of the disease, there's an ongoing debate about whether what we measure in blood has anything to do with the brain, and that's a very legitimate question. So, you know, there are a hundred papers about BDNF as a serum marker for depression. Where does BDNF come from? Well, it ain't coming from the brain. It's made in the GI tract, it's made in the adrenal medulla, it's made in a whole bunch of peripheral structures. When we were youngsters, well, you guys are youngsters, but when Anand and I were youngsters, we measured serotonin in urine, we measured serotonin in blood. And you know where serotonin in blood comes from? It comes from the GI tract, and it's stored in platelets. It doesn't tell us a damn thing about what's going on in the brain. So we were sort of like the drunk looking for his wallet under the streetlamp when he lost it three blocks away, right? Every time you read a blood marker paper, you need to think about what it has to do with the brain. Now, one of the promising areas is genomics; that's been alluded to. And you all know about GWAS studies. What I like about GWAS studies is that they are hypothesis-free. There are roughly 22,000 genes in the human genome, and with whatever small technical limitations we have, we can measure the genetic variation across a genome, and it's pretty cheap now; you can do it for a couple of hundred bucks. And it will be part of your patients' permanent electronic medical record, because your genome doesn't change in structure: what you have at birth is what you're going to have forever. And it turns out that when you study 100,000 people with PTSD and 100,000 matched controls, there are some single nucleotide polymorphisms, genetic variations, that stand out as different in the disease group compared to the control group. When you study 200,000 people, and I'm no statistician, but Adrian can tell you, you have a lot of statistical power. So you get big, significant differences, but the magnitude of the effect is tiny. No single gene variant is responsible for developing PTSD or depression or schizophrenia. That's why we've moved away from that, and now we're summing them all together in what are called polygenic risk scores. Polygenic risk scores sum up all of the individual variations that differ between the patients and the controls into a single number, and then you can ask a very simple question, which has been done in other diseases, like heart disease. If you take polygenic risk scores for coronary artery disease and add them to the other risk factors that you know about, high cholesterol, lack of exercise, family history, cigarette smoking, polygenic risk scores really help in predicting who is vulnerable to have a heart attack. And in psychiatry, we will be able to look at polygenic risk scores for PTSD, for schizophrenia, for bipolar disorder, and we can say with a reasonable degree of likelihood that, in the general population, if you're in the top 1% of polygenic risk scores for schizophrenia, you're much more likely, in fact, to develop that disease. Among people who are psychotic, there is a tremendous increase now using polygenic risk scores, but it's not ready for prime time.
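As a rough illustration of what "summing all of the individual variations into a number" means, here is a toy polygenic-risk-score calculation; the effect sizes, allele dosages, and the top-1% cutoff are simulated for illustration only and do not come from any real GWAS.

```python
# Toy polygenic risk score: a weighted sum of risk-allele dosages (0, 1, or 2
# copies per variant), with per-variant effect sizes as the weights.
# All numbers here are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(42)
n_people, n_variants = 10_000, 500

# Simulated genotype dosages and per-variant effect sizes (log odds ratios).
dosages = rng.integers(0, 3, size=(n_people, n_variants))
effect_sizes = rng.normal(0, 0.02, size=n_variants)

# PRS_i = sum_j beta_j * dosage_ij, then standardized across the population.
prs = dosages @ effect_sizes
prs_z = (prs - prs.mean()) / prs.std()

# Compare the top 1% of scores with everyone else (purely descriptive here).
top_1pct = prs_z >= np.quantile(prs_z, 0.99)
print(f"People in top 1% of PRS: {top_1pct.sum()}")
print(f"Mean standardized PRS, top 1% vs rest: "
      f"{prs_z[top_1pct].mean():.2f} vs {prs_z[~top_1pct].mean():.2f}")
```

In a real analysis, the effect sizes would come from an independent discovery GWAS, and the score would be evaluated alongside the clinical risk factors mentioned above rather than on its own.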
Because genetic variation is only one of the risk factors for developing a disease, right? Remember, genes are expressed, and I'll just remind you that different genes are expressed in different brain regions all the time. So what we're looking at in blood are these genetic variations, but we're not looking at gene expression, and that's what's important: which genes are being expressed. And obviously, brain genes are expressed in brain tissue, not in liver tissue, hopefully, and vice versa, right? But then there's epigenetic modification of gene expression, and ultimately there's protein expression. No one, with the exception of Dr. Kumar, mentioned proteomics, and that's a new frontier, because in the end the most important product of gene expression is the proteome. What we need to understand is whether proteomics might represent part of the holy grail; I personally think it may well. Other fields of medicine have been more fortunate, because personalized medicine in oncology has revolutionized diagnosis and treatment. All of you know about BRCA1 and BRCA2. You know, Jimmy Carter, my old friend from Georgia, is 99 years old. He's in hospice, but he has had stage four malignant melanoma for 15 years. And why is he still alive? Because of personalized medicine, right? He had a biological marker on his melanoma, and when he was treated with a drug designed to address that particular protein, what happened? All of his tumors went away, including his mets in the central nervous system. So we are making progress, but again, in some ways cancer is a little easier to understand. It's more accessible, and we understand the genetics better. It wouldn't be a good discussion if I weren't a little critical, so I'm going to be a little critical and raise some other issues. I love Nina. I tried to get her to come live with me in the department, to be clear, in Austin. I wasn't able to do that. I love the work she's doing, and first-episode psychosis is really a frontier. But I don't know what changes in cortical volumes mean, and I would say none of us knows what they mean. Is it axonal pruning? Is it dendritic pruning? Is it neurodegeneration? Is it a lack of development of neurons? Is it inflammation? Is it a change in water content? There has never been a study in psychiatry in which changes observed on structural MRI have then been demonstrated to be present in post-mortem brain tissue. So we don't understand the underlying pathophysiology behind brain imaging findings. Now, it seems like it would be a great idea to use fMRI, PET, and EEG as information in developing personalized medicine in psychiatry. It would make sense. But let's just think about it. Most of your patients don't have instant access to fMRI, and certainly not to PET. EEG does have some promise, and there are two reports of EEG predicting antidepressant response, one from the EMBARC sertraline study and one from a study in Canada with Lexapro, escitalopram. So I think that's helpful. I do want to say something about pharmacogenomics: No. Okay? While I was sitting there, I got an e-mail from some yo-yo asking why I am against pharmacogenomic prediction of antidepressant response. I've written extensively about this. The APA task force is about to submit our new meta-analysis, which shows that it doesn't help at all. All it does is spend your patient's money and insurance money. Pharmacogenomic testing for antidepressants in psychiatry does not predict antidepressant response.
So, you know, I know that you get bullied into prescribing that crap all the time. Don't do it. Okay? I want to say something about ethics in personalized medicine in psychiatry, and then lastly something about potential new biomarkers. So in terms of ethics, we learned a serious lesson when the gene for Huntington's disease was discovered and predictive testing became possible. The first thing we learned was that a very substantial number of family members of patients with Huntington's disease refused to get tested. They fundamentally said, you know what, what would I do? There's no treatment, so what would I do? And we could say the same thing for Alzheimer's disease. Secondly, we have to worry about protected health information being available broadly, particularly to third-party payers. Right? I don't want a situation where we develop a great biomarker that shows you're vulnerable to, I don't know, schizophrenia, and then Blue Cross and Blue Shield, I'm picking on them, decide we'd better not insure anybody who has that marker. Now, I know that's evil, but there are evil people in the world, right? So we need to think about that. In the end, I think what we've learned is that these tests need both sensitivity and specificity. So, you know, EEG as an adjunct to diagnose epilepsy, if you do a 24-hour EEG, has relatively good sensitivity and specificity. It's not a great yield, but compared with pure clinical observation it certainly can be helpful. So that's what we want to strive for. My gut feeling is that there are a tremendous number of variables that predict whether one is likely to develop a particular psychiatric disorder. There's genetics, think bipolar disorder, schizophrenia, and to some degree depression and PTSD. There are environmental factors like childhood maltreatment. There are other environmental factors like air pollution, poverty, and family disruption. And then there are lots of other demographic characteristics. So there are, as Adrian can tell you, thousands of potential data points that can contribute towards raising or lowering the threshold for developing a psychiatric disorder, and probably equally for determining treatment response. And therefore AI has to be part of the solution, because we're not going to be able to figure this out with a single marker. The idea of identifying the bipolar gene or the panic disorder gene, you know, seemed like a great idea at the time, but it was pretty silly. Right? So we understand that. I am an optimist by design, and I think the future is very bright here. There are a number of startup companies beginning to look at this area, and they've gone outside the box. There's a company I've worked with that realized, you know, get this: the eye is connected to the brain. Well, if the eye is connected to the brain, can you look at some ocular measures, pupillary response, stress response, oculomotor muscle movements, saccades, gaze, and can that be helpful for diagnosing a stress-related disorder? And we're pretty good clinicians, all of you are, so you just want a little help. Right? And then the second issue has to do with comorbidity. You know, I love it when I write grants to study PTSD and the reviewers come back and say, no, you're studying PTSD in patients who also have depressive symptoms. Yeah, all of my PTSD patients have depressive symptoms. No, they want me to study pure PTSD. Well, where am I going to find that? Right?
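The point that a test needs both sensitivity and specificity can be made concrete with a back-of-the-envelope look at positive predictive value; the numbers below (90% sensitivity, 90% specificity, prevalences of 30%, 5%, and 1%) are hypothetical and are not figures quoted in the talk.

```python
# Why sensitivity and specificity alone aren't enough: at low prevalence,
# even a seemingly good test yields mostly false positives.
# All numbers are hypothetical.
def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """PPV = P(disease | positive test), via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for prevalence in (0.30, 0.05, 0.01):
    ppv = positive_predictive_value(sensitivity=0.90, specificity=0.90,
                                    prevalence=prevalence)
    print(f"Prevalence {prevalence:.0%}: PPV = {ppv:.0%}")
# At 1% prevalence, PPV is only about 8%: most positive results are false alarms.
```

This is why a marker that performs well in an enriched clinic sample can still be a poor screening tool in the general population.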
And that gets back to the point the other speakers made: if randomized clinical trials only enroll homogeneous, "pure" patients, what is that going to tell you, how is it going to inform you about your patients? So last week, just to give you an example, I saw a 27-year-old woman who, starting at age 13, marinated herself in a variety of substances, starting with marijuana, then heroin, methamphetamine, occasionally cocaine, hallucinogens now and then, and she comes to me with intractable auditory hallucinations, which she could describe to me in great detail, and they're just the kind of auditory hallucinations you see in your patients all the time. You know, self-deprecatory: you're the worst person in the world, you really ought to kill yourself, you're worse than Adolf Hitler. I mean, literally, right? She has failed a number of antipsychotics, and I have no clue and no guide as to how I should treat this woman. You know why? Because the hundreds of patients I've seen with this history have never been in a clinical trial, because you would never let a patient with a polysubstance abuse history into a clinical trial, which leaves all of us begging for personalized medicine in psychiatry. So I think Dana's going to lead a question-and-answer period, but thank you very much. APPLAUSE Well, thank you, Charlie. That was a wonderful talk, minus the comments on the fMRI, but I shall forgive you. We're happy to take some questions now. That was great. That was great. You guys are fantastic. David Holliday, private practice. I have a question for Dr. Kumar regarding a bacterium in our mouths, Porphyromonas gingivalis, I believe, a keystone bacterium that's associated with amyloid plaques. And to underline your point that the plaque may not be the cause, it's postulated that the plaque may be a defense against this bacterium getting into our brain. Are there biomarkers for this? What's the research in that area? Tell me more about the gut microbiome and the brain, all of that. Thank you. Thank you. Next question, please. LAUGHTER So, there is a minority opinion in neuroscience that used to argue that plaques are part of a repair mechanism: the more severe the dementia gets, the larger the plaque count, but not because plaques are causing the dementia, rather because the brain is trying to fight it. The evidence for that is not strong, but that viewpoint exists. A variety of agents have been tried, or people are talking about agents that modify plaques and tangles. So, the gut microbiome is another hot area, like proteomics. It's more recent; in the last five years it's gotten a lot of attention. There are a lot of findings in terms of, could a bacterium... Remember, there's the prion hypothesis, right? The so-called mad cow disease, Creutzfeldt-Jakob disease in humans. So it's not entirely out of the realm of possibility that a microorganism could cause a major brain disease. We know from encephalitis and so on that that's certainly true. But now we're talking about degenerative brain diseases that people thought were genetic, maybe traumatic, maybe stroke-related; prion disease points to the possibility. In fact, with many of the long COVID studies that are being proposed, they're wondering whether five, 10, 15 years from now that would be a risk factor for dementia. We don't know. But from interesting biological hypotheses to therapeutic interventions, that's a big step. That's where we can make mistakes.
That's where we can prematurely approve drugs. I just think one needs to be cautious. But who knows what's really contributing? And going by the quote from Dr. Mesulam that I used, even if amyloid is a factor, it's very likely one of many factors. It could be the microbiome, it could be proteins, it could be some other combination of things that contribute to this. I may have ducked your question, but hopefully I did it in a kind of a... These bacteria make toxic proteases, right. So is that causing it? I would say it's possible. I know that's not terribly helpful, but that's the most honest answer I can give you. But thank you for the question, it's a very good one. Thank you. Okay, microphone in the back, please. Yes, I'm curious about sorting out the heterogeneity in the biomarkers, particularly what was brought up in first-break psychosis, but I'm sure it applies across a variety of diagnoses, given the understanding that our DSM diagnoses likely represent heterogeneous sets of presentations themselves. Do you have any work trying to sort that out? In particular, does a combination of biomarkers and particular features of a clinical presentation help create a more homogeneous population where maybe certain biomarkers start to, you know, come out as statistically significant? I mean, that's the dream, right? Like, that's the dream. Okay, microphone in the front. Hey, so I'm a PGY-2, and I do quantitative microbiome analysis. The latest direction the labs I've been working with have taken our research is applying shallow learning to our microbiome datasets to help understand how they impact psychiatric outcomes. And it's brought to our attention that there really isn't an established gold standard for, you know, validation of data quality. And then the generalizability question, I think, is really critical. Do you have any insight about gold standards being developed within this area of research that could enhance the quality of evidence? And a second question: I noticed that a lot of the machine learning concepts presented have a biological focus. Are you aware of any efforts to include biopsychosocial models, to subtype illness as driven by psychosocial factors versus biological factors, to improve the sensitivity of biomarker identification? Right, so for the first question about standards, it's a huge problem. There are some organizations, consortiums, that have come out with guidelines, like TRIPOD and CHARMS, that have tried to get at it. There's little to no enforcement, unfortunately. So even though TRIPOD and some of those don't necessarily address data quality and things like that, the fact is it would already be a huge step forward if they were standards universally upheld by journals. They're just not. Part of it is that people don't like to read math-heavy papers; there have been studies on this, so the machine learning math frightens people, and for good reason. And there would be incredibly long technical supplements, but that's what we need, because we're not going to solve the generalization problem otherwise. And to the question that was just asked, and I ran short on time for this, we have so much uncertainty in what we're using as our predictors and what we're targeting.
So it's like quicksand on both ends. The actual biological data are not too bad; they're actually fairly consistent. But when you bookend them with all this uncertainty, you're chasing from very uncertain places. On the generalization issue, there are no real standards. We believe, although we've not actually crossed that implementation gap, that a model, whether prognostic, diagnostic, or treatment-based, would probably need to generalize across time, across clinical sites, and across, you know, temporal trends and changes and things like that. So that's the first step, and most people don't undertake it. I actually suspect most people do attempt to undertake it, and the resulting performance is so bad that they don't bother to report that they tried. That's a problem. Your second question, about looking at psychosocial factors and the like: absolutely. People are sometimes using those as the subgrouping analysis, which they then tie to the search for biomarkers or treatment response. And of course, they've found, you know, the DSM is totally heterogeneous; either the subgroups share genetics or they don't. So subgrouping analyses have definitely happened. But we've also found symptom-level heterogeneity in most of our psychometric scales. So you're still chasing a target that isn't quite right; we have both a prior and a posterior knowledge problem. And that is something machine learning can help with, by doing unsupervised learning to find those groupings and get a cleaner starting point for the predictive target. Thank you. We have time for one last question if it can be answered in one minute. That's the challenge. Okay, one minute. Just briefly, you know, we are trying to find biomarkers for different diseases and disorders, and also for predisposing factors, vulnerabilities, and ACEs, adverse childhood experiences, the trauma and neglect that, you know, create these possibilities. We may be looking for biomarkers for all these problems, but is there any comment, especially from Dr. Nemerov, on the fact that there are people who go through all these traumas, neglect, and very adverse circumstances and come out resilient and stronger and happier? If you would comment on that kind of biomarker. Well, you know, I've spent the last 30 years of my life trying to understand the consequences of childhood maltreatment. And part of your question is, what about resilience, and what about the biology of resilience? That has received a lot of lip service and not a lot of research. We just finished a study of 1,700 children in the immediate aftermath of trauma as part of the Texas Child Trauma Research Network, and we looked at resilience markers using the Connor-Davidson Resilience Scale. What we found was that lower resilience levels predicted the development of depression, but surprisingly not PTSD. So, 1,700, not the biggest sample in the world. What the biological underpinning is, I don't know. But I just got an NIH grant in which I'm beginning to look at inflammatory markers as potentially mediating this. One of the things we know about child abuse and neglect is that in many people it results in persistent inflammation for the rest of their lifetime. So you're on the right track; we're just not there yet. Well, thank you everybody for coming to our talk. Thank you to the speakers, I appreciate it. Thank you.
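A minimal sketch of the kind of unsupervised subgrouping mentioned in that last answer, clustering patients on standardized features and scoring candidate cluster counts, appears below; the synthetic data, the choice of k-means, and the silhouette criterion are illustrative assumptions rather than anything the panel specified.

```python
# Illustrative unsupervised subgrouping: cluster patients on standardized
# features and pick the number of clusters by silhouette score.
# Data and method choices are assumptions for illustration only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# Synthetic "patients": three latent subgroups with shifted feature means.
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(100, 8)) for m in (-2, 0, 2)])
X_scaled = StandardScaler().fit_transform(X)

best_k, best_score = None, -1.0
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_scaled)
    score = silhouette_score(X_scaled, labels)
    print(f"k={k}: silhouette={score:.2f}")
    if score > best_score:
        best_k, best_score = k, score

print(f"Best candidate subgrouping: k={best_k}")
```

Any groupings found this way would still need the same external validation as a supervised model before being treated as clinically meaningful subtypes.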
Video Summary
The panel discussion centered around the potential of biomarkers in psychiatry, including their roles in diagnosing and predicting treatment responses in mental health disorders. Dr. Nina Kraguljak emphasized the need for precise biomarkers in schizophrenia to predict treatment courses effectively, lamenting the challenges faced due to the lack of such tools. Dr. Kumar explored Alzheimer's disease, highlighting the distinction between biomarkers and molecular targets, and critiqued the current reliance on amyloid-based treatments despite their limited clinical efficacy and significant side effects. Dr. Grisenda discussed the promises and pitfalls of using machine learning in biomarker discovery, stressing issues like data quality, validation, and the need for a structured approach to overcome current limitations. Finally, Dr. Nemerov provided a broader overview, noting the complexities of brain disorders and the challenges of developing meaningful biomarkers. He underscored the ethical considerations in biomarker use, such as privacy concerns and the implications of genetic testing. Nemerov pointed out that the complexity of the brain and psychiatric disorders requires a multifaceted approach, utilizing AI to handle the plethora of data, and suggested that while progress is slow, the future holds potential for breakthroughs in personalized psychiatry. The discussion touched upon biogenetic and psychosocial factors, emphasizing the interconnectedness of environmental and biological influences on mental health.
Keywords
biomarkers
psychiatry
schizophrenia
Alzheimer's disease
machine learning
treatment prediction
brain disorders
genetic testing
ethical considerations
personalized psychiatry
biogenetic factors
psychosocial factors