Computational Psychiatry and Future Perspectives
Video Transcription
Good afternoon, everybody. I am starting the session today. In this session, we will talk about computational psychiatry. After a general introduction on what the brain's electrical activity promises us and what computational psychiatry promises from a future perspective, I will give the first presentation. Let me state the program correctly: in the second presentation, Dr. Sermon will talk about brain waves, rhythm, entropy, and chaos; in the third presentation, Dr. Turker will talk about deep learning applications in disorder classification; and in the last presentation, Dr. Mahan will talk about neuromodulation in brain injury, current perspectives from neuroimaging and proteomic biomarkers.

Computational psychiatry, as you know, is a new perspective in psychiatric diagnosis and treatment. It is even challenging our usual thoughts and beliefs. This is an area where concepts such as mathematical modeling using information technology, deep learning, the recording of the brain's biological signals, and entropy are discussed. It is also criticized as biological reductionism. Classical psychiatry proposes making a treatment plan after a diagnosis based on symptoms. This approach is considered the gold standard; however, we cannot reach the correct diagnosis and the correct treatment without measuring the pathology. That is why we need new biomarkers. We have no doubt that psychiatric diseases have a brain equivalent, so we need to improve our ability to measure brain function. As you see, there are biomarker subtypes: predictive biomarkers, prognostic biomarkers, pharmacodynamic biomarkers, toxicity biomarkers, and efficacy-response biomarkers. Today we will mainly discuss predictive and prognostic biomarkers.

The universe was previously considered matter-based and material, and mental illnesses were shifting between concrete and abstract information. Now it is understood that the universe is energy-based and digital. So we have to bring psychiatry together with the new sciences; we have to discuss genetics, neural networks, and information technology. We cannot discuss neuroquantology yet, but we will be able to generate mathematical models linking brain waves and diseases. The classical EEG records the electrical activity of the brain; quantitative EEG can convert this into digital data; LORETA can provide electromagnetic mapping; PET, SPECT, and fMRI can show hemodynamics and metabolism; and entropic calculation can give a thermodynamic mapping. Cordance is a different calculation method relating brain electrical activity to brain metabolism. As you see, there are genetic, neuroimaging, EEG, event-related potential, autonomic system, general cognition, and social cognition biomarker types with different mechanisms for MCI, Alzheimer's disease, schizophrenia, depression, PTSD, and ADHD. On the other side, we as clinicians look for the clinical correlation. The latest cornerstone development in this regard is the FDA approval of a QEEG-based measure in ADHD in 2013; as you see in the FDA release from 2013, I think this is an important cornerstone for psychiatric illnesses. Perhaps within the next ten years, mathematical models of every psychiatric illness will be found, and we will be able to guide treatment through this modeling. This is the human in a changing world; this is the Industrial Revolution's case.
The first Industrial Revolution, in the 18th century, was the mechanical revolution, mechanical steam power, and money, power, and domination began to change hands in that century. The second Industrial Revolution was related to electricity, and the third Industrial Revolution was related to electronics in the 20th century; then, in the 21st century, there is Industry 4.0, and we are talking about networking in industry. And what makes up Industry 4.0? Autonomous robots and simulation, system integration, the Internet of Things, cybersecurity, cloud computing, augmented reality, and big data. Big data is very important for psychiatric diagnosis and treatment. We used to say that data is everything, but now everything is data. We say that money, power, and domination are in the hands of those who hold the data. We healthcare professionals have to make big data available for our patients.

Let's briefly recall the relationship between brain and behavior. In this slide, you see how an ecstasy pill creates false happiness in the brain. The image of the healthy brain is on the left; on the right is the brain after ecstasy, with serotonin stores emptied in the anterior brain region. However, it can return to normal after two to three weeks. These brain-change models are very important for creating biomarkers. This is a normal brain PET image, and on the right is the schizophrenic brain, where there is hypofunction in the frontal brain area; the same is seen in the antisocial brain PET image, where, compared to the healthy brain, there is hypofunction in the frontal area. So in schizophrenia and in the antisocial patient the frontal region shows hypofunction, but the clinical presentation of these patients is completely different. We need more advanced technology. As you see, the Rorschach is a clinical psychodynamic test; as you know, it is a bit like fortune telling. It works very well, and we can make some psychodynamic comments, but it is not verifiable and falsifiable. So we need new tools and a scientific methodology; for this reason, we need new perspectives and new views on the problematic issues in mental illnesses.

Mental illnesses might account for about 60% of total health care costs and 30% of disability claims. Diagnosis and prognosis are problematic, and beyond the currently available biomarkers we need new biomarkers. Biomarkers are objectively measured characteristics that indicate the causes of an illness, its clinical course, and its modification by treatment. According to the Biomarkers Definitions Working Group, in the context of drug discovery, biomarkers are objectively measured indicators of a pharmacological response or a biological process that are quantifiable, precise, and reproducible. An ideal biomarker should be reliable, objective, and readily and easily available. Regarding treatment in psychiatry, initial treatments frequently do not lead to remission and recovery, and the current treatment approach involves lengthy trial-and-error periods. Failure to achieve remission brings a more recurrent and chronic course of illness, increased medical and psychiatric comorbidities, greater functional burden, and increased social and economic costs.
As you see, here are some biomarker and entropy measurements from our own studies: EEG biomarkers in depression, EEG-based connectivity studies in this publication, and, in a different publication, entropy studies of brain activity.

What are EEG and QEEG? EEG is a measure of the brain's spontaneous electrical activity recorded from electrodes placed on the scalp. The recorded activity at each electrode is a gross measure of the electrical activity arising from a number of different neurons in the cortical areas surrounding the electrode. The rhythmic EEG spectrum is categorized into various oscillation frequencies: delta waves accompany slow-wave sleep, theta waves reflect a state of drowsiness, alpha waves accompany a relaxed state, and beta waves reflect an engaged or active brain. In QEEG, the EEG signals are described in terms of either absolute or relative power. Absolute power is the amount of power in an EEG frequency band at a given electrode, measured in microvolts. Relative power is the percentage of power in any one frequency band compared with the total power of the entire EEG spectrum. Quantitative EEG has become a popular tool for exploring potential biomarkers of several psychiatric disorders by classifying the electrical activity of the brain in different frequency bands. You will also see the biomarker the FDA approved in 2013 for ADHD, the theta/beta ratio. Electrical signals from the brain are converted to digital form, which allows patterns undetectable by the naked eye to be revealed. Quantitative EEG is cost-effective and easier to apply than other modalities of brain-function measurement such as PET and SPECT. However, QEEG biomarkers have not been independently validated.

QEEG shows increased alpha power in depressed patients who respond to SSRIs, as well as raised frontal theta power; theta power was the best EEG predictor of change in Hamilton depression scores. Alpha-band asymmetry with left lateralization has been demonstrated in treatment response. As predictors of clinical response, neuroimaging, neurocognitive, and electrophysiological measurements show differences related to subsequent clinical response to antidepressant drugs, as do event-related potentials, neuropsychological factors, genetic polymorphisms, and sleep. Regarding EEG as a predictor of treatment outcome, increased posterior alpha was found in depressed patients who subsequently responded to amitriptyline; responders had greater alpha than non-responders. SSRI responders were also found to differ from non-responders in alpha asymmetry, with responders showing relatively less cortical activity over the right posterior region, as you see.

What is cordance? Cordance is an interesting EEG calculation method. It combines complementary information about the absolute amount of power in a given frequency band at a given electrode with the relative power, the percentage of the power content in that frequency band relative to the total EEG spectrum. The cordance computation uses, for each electrode site and frequency band, the normalized absolute power and the normalized relative power. If a site is discordant, which is associated with white matter lesions, cordance is negative; if the site is concordant, cordance is positive. Regarding PET studies, similar results are seen. Now I want to show you three cases, before and after treatment. The first is a 21-year-old male with fear of death, religious obsessions, perceived anxiety, palpitations, and sleep problems.
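Since absolute power, relative power, and cordance are defined only verbally above, here is a minimal sketch of how those quantities could be computed for a single EEG channel. It is illustrative only: the sampling rate, band limits, and the simplified cordance-style normalization are my assumptions, not the presenters' pipeline.

```python
# A minimal sketch (not the presenters' pipeline) of absolute and relative
# band power for one EEG channel with SciPy, plus a simplified
# cordance-style combination of the two. Band limits, sampling rate, and
# the toy normalization are illustrative assumptions.
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate in Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs=FS):
    """Absolute power per band and relative power (fraction of total)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    keep = (freqs >= 1) & (freqs <= 30)
    total = np.trapz(psd[keep], freqs[keep])
    absolute, relative = {}, {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        absolute[name] = np.trapz(psd[mask], freqs[mask])
        relative[name] = absolute[name] / total
    return absolute, relative

def cordance_like(abs_p, rel_p, band="theta"):
    """Toy cordance-style score: sum of z-normalized absolute and relative
    power for one band; positive ~ concordant, negative ~ discordant.
    Real cordance normalizes across electrodes, not across bands."""
    a = np.array([abs_p[b] for b in BANDS])
    r = np.array([rel_p[b] for b in BANDS])
    za = (abs_p[band] - a.mean()) / a.std()
    zr = (rel_p[band] - r.mean()) / r.std()
    return za + zr

# Example on synthetic data: 60 s of noise with a 10 Hz alpha rhythm.
t = np.arange(0, 60, 1 / FS)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
abs_p, rel_p = band_powers(signal)
print({k: round(v, 3) for k, v in rel_p.items()})
print("theta cordance-like score:", round(cordance_like(abs_p, rel_p), 2))
```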
According to DSM, the diagnosis was obsessive-compulsive disorder. He was treated with an SSRI for six weeks, with only the SSRI used. Beck anxiety decreased from 33 to 7, and the Yale-Brown score decreased from 35 to 20. In comparison with the normative database, using Z-scores, there was increased dominant beta-band activity, which returned to the normal level for his age following the treatment. Notably, we also observed relatively decreased alpha-band activity in the frontal region before the treatment. As you see in this Z-scored brain mapping, beta waves are increased before treatment; after the SSRI treatment, beta waves decrease in the frontal area. This is a significant result, and this is how we can interpret this biomarker.

The second case is a 22-year-old female with fear of death, overthinking, hypochondriacal thoughts and behavior, sleep problems, perceived anxiety, and palpitations. According to DSM, the diagnosis was anxiety disorder. After six months of SSRI treatment, Beck anxiety decreased from 36 to 12, and the increased frontal-central theta activity resolved following the treatment. In addition, we observed increased left frontal-central beta power, which returned to the normal level for this age following the treatment. As you see, there is high theta power before treatment and decreased theta power after treatment. These are meaningful results, I think.

The last case is a 43-year-old female factory worker with palpitations, excessive thinking, hypochondria, and depressive mood. According to DSM-IV, the diagnosis was anxiety disorder. After six months of SSRI treatment, Beck anxiety decreased from 38 to 11. What changed in the brain in this case? Before treatment there was increased absolute power in the beta band, lower relative power in the delta band, and decreased relative power in the high theta band. The relative power of the high beta and delta activity was not affected by the treatment. The absolute power in the beta band returned to the normal level for her age after the treatment, with a moderate increase in the theta-band relative power. As you see, there is high beta activity according to the normative databases before treatment; after six months of treatment, the beta waves decrease. The clinical results and the bioelectrical results are highly correlated, as you see. At the end of this presentation, I want to say that, as a result, this will add a new horizon to psychiatry. Thank you very much for your attention.
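The before/after comparisons in these cases rely on Z-scores of QEEG band power against an age-matched normative database. Below is a minimal sketch of that comparison; the normative means and standard deviations are made-up placeholders, not values from any real database.

```python
# A minimal sketch (illustrative, not the clinic's software) of z-scoring a
# patient's QEEG band power against an age-matched normative database.
NORMS = {            # hypothetical norms: band -> (mean, sd) of absolute power
    "theta": (12.0, 3.0),
    "beta": (6.0, 1.5),
}

def qeeg_z(value, band):
    """Z-score of a measured band power relative to the normative database."""
    mean, sd = NORMS[band]
    return (value - mean) / sd

# E.g., frontal beta power of 10.5 before treatment vs 6.8 after (made-up values):
print("pre :", round(qeeg_z(10.5, "beta"), 2))   # well above the norm
print("post:", round(qeeg_z(6.8, "beta"), 2))    # close to the normal range
```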
Good morning. The ability to produce a quality alpha wave is associated with an individual's capacity for affective repair. A mood disorder is an irregularity between slow- and fast-wave activity. It has been suggested that the relationship between slow- and fast-wave activity reflects cortical-subcortical interactions, including emotion processing, of which the other component is attention. In fact, it has been defined as a mood regulation capacity index, in line with fMRI findings showing cortical-subcortical coupling between the amygdala and the frontal cortex. Increased delta-beta coupling in the frontoparietal region means better executive function, such as increased inhibitory control and healthier decision-making processes. In bipolar cases, these functions are impaired. In our studies, cross-frequency coupling appears both as a trait related to coping with depressive symptoms and as a state that reveals mixed symptoms.

Fast brain synchronizations occur in the same way in healthy individuals: one, in processes that develop suddenly, in an emergent manner; two, in processes that stimulate past memories; and three, in decision-making processes. Moreover, under healthy circumstances, these processes tend to consistently create such fast synchronizations. Chaos occurs when entering and exiting these synchronizations, and when entering and exiting them, uncertainties in the phase change are captured by the change of entropy. Multi-channel spontaneous EEG is space-time joint data, which is called spatio-temporal data. Depending on the sampling frequency, the number of bits representing each sample, and the number of electrodes, this type of data is big data, recorded over an ongoing duration of at least three minutes. A one-hertz component of the EEG completes one cycle in one second. Chaos has the characteristic of distinct frequencies spreading across a widely coupled spectrum; in other words, it is a degree of growth of change in phase space, which creates uncertainty and instability under certainty. Periods of low frequencies diffuse into higher frequencies, bottom-up, and vice versa, top-down, in a mutual manner during a chaotic event in an intermittent period. This happens very smoothly, as mixing, which is a condition for creating chaos, and it happens over a short time interval. The prolongation or shortening of this time interval during chaos may reveal a disorder. The strength of the diffusion of different frequencies at different time points causes a single event to be selected from among infinite possibilities very quickly. Meanwhile, entropy increases to a point during a chaotic process, and this increment is directly proportional to the homogeneity of the diffusion, the spread rate. Just as, from the density of a scrambled egg at different points, we understand how well the egg has been whipped and how homogenized it is, so we understand this from the change of entropy. For any psychiatric disorder, the change of entropy over any recording time interval can be a biomarker. In our latest study, we examined the chaotic entropy change at the end of the first hour following baseline and an oral 300-milligram lithium carbonate intake in 10 bipolar cases. In eight of the 10 patients, we showed that chaos decreases in association with a decrease in entropy.

Neurons show a different collective behavior at the mesoscopic scale than single-neuron behavior, and even what takes place at the single-neuron level is not simply superimposed at the mesoscopic level. Chaos occurs in the dynamics of the collective behavior of neuron populations, emerging with phase transitions, and this is reflected in mood, decision-making processes, and creativity. At this scale, a question like "which electrode and which brain region" is not so meaningful. The dynamics at the mesoscopic scale of subcortical neural populations can work in space-time at the macroscopic scale of the cortical structures.
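The talk treats the change of entropy over a recording interval as a candidate biomarker. Below is a minimal sketch of one common way to track such a change, spectral (Shannon) entropy over sliding EEG windows; this is an assumption chosen for illustration, not the presenters' specific chaotic-entropy measure, and the sampling rate and window length are arbitrary.

```python
# A minimal sketch (not the presenters' measure) of tracking entropy change
# over an EEG recording: Shannon entropy of the normalized power spectrum in
# consecutive windows. Window length and sampling rate are assumptions.
import numpy as np
from scipy.signal import welch

FS = 250          # assumed sampling rate (Hz)
WIN = 10 * FS     # 10-second windows

def spectral_entropy(segment, fs=FS):
    """Shannon entropy of the normalized PSD, in bits."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), fs * 2))
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_trace(eeg, win=WIN):
    """Entropy value for each consecutive window of the recording."""
    n = len(eeg) // win
    return np.array([spectral_entropy(eeg[i * win:(i + 1) * win]) for i in range(n)])

# Example: compare entropy traces in two synthetic 3-minute recordings
# standing in for "baseline" vs. "one hour after an intervention".
rng = np.random.default_rng(0)
baseline = rng.standard_normal(3 * 60 * FS)                      # broadband, higher entropy
post = np.sin(2 * np.pi * 10 * np.arange(3 * 60 * FS) / FS) \
       + 0.3 * rng.standard_normal(3 * 60 * FS)                  # more rhythmic, lower entropy
print("baseline mean entropy:", entropy_trace(baseline).mean().round(2))
print("post mean entropy:   ", entropy_trace(post).mean().round(2))
```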
There is no linear transformation from a single neuron up to the global cortical scales. Thank you very much for your attention.

Wonderful. Thank you very much for having me. I'm very excited to be presenting at the American Psychiatric Association, and I'm very excited to present with my collaborators from Istanbul. Thank you very much. I will be talking today about neuromodulation and brain injury and current perspectives from neuroimaging and proteomic biomarkers. I'm currently a clinical associate professor in neurosurgery at Stanford Medical School, and I'm also a senior scientist in the rehabilitation department at the VA Palo Alto Healthcare System. I have no disclosures other than the fact that I am a scientific advisor for True Genomics Health as well as NeuroFit; neither of those companies has anything to do with the data that I'm presenting today.

Today we're going to talk, first of all, about detecting the neural signature of brain injury. Brain injury is a very heterogeneous problem: you can get hit by different things in different parts of the brain, or you can be in a blast, and we will talk about that a little bit because the data I'm presenting today has to do with brain injury. We will talk about how we diagnose the structural and the functional changes in the brain that happen with brain injury. Then we will talk about how I'm using a method called repetitive transcranial magnetic stimulation to treat patients with these brain injuries, and about how we integrate behavior, neuroimaging, and other biomarker techniques to understand whether the treatment is working or not.

First, I want to give a brief background on what happens when you actually get hit on the head. You can either get hit or you can be in a blast; you can fall, you can be playing sports; it can be a result of sexual trauma; it can be part of a military operation. What happens with an impact is that you get rotational forces that result in brain contusions. Then there are the blast waves that you might experience in a military setting or a suicide bombing attack or something like that. We now have data from Pakistan showing that people who were part of a group that went through a suicide bombing attack were in blasts, and their brains have gone through very similar things to what military personnel go through. The blast waves actually shear and distort neuronal tracts, and biochemical changes happen with that tearing that lead to diffuse axonal injury. You can see on the slide what happens with coup-contrecoup injury and how axonal swellings can lead to diffuse axonal injury. A very important point about traumatic brain injury is that we have really studied it with a focus on military personnel, and it's known as the signature injury of Operation Enduring Freedom, Operation Iraqi Freedom, and other conflicts. With this, you have a very high rate of post-traumatic stress disorder, which has been found to have biological components in the brain and which interacts very heavily with traumatic brain injury in anyone who has it. You can see in the graph that, especially with mild and moderate traumatic brain injury, you may actually have a very large co-occurrence with PTSD and also with depression.
One of the things we have really looked at in our studies is traumatic brain injury patients, mapping the fiber tracts in their brains through a technique called diffusion tensor imaging. When we map the big white matter fiber tracts in the brain, we do so by different methods, such as receiver operating characteristic analysis or different templates that we use. We figure out where in these big white matter fiber tracts there are any breaks due to injury, which means that the flow of water, the integrity of water diffusion, is going to be disturbed. What we have found in our papers is that we have discovered certain clinical thresholds for these water-diffusion properties that can help us determine that a person who had a brain injury in a certain area of the brain did show a change in water integrity, and that this area of the white matter tracts is damaged. In our paper published in 2017, we were able to show that the right anterior thalamic tract and the right inferior longitudinal tract were very salient: these two tracts were a very good way of telling us that even if you had a mild or moderate traumatic brain injury years ago, ten years ago or eight years ago, I think the average was ten years, we are still able to see damaged integrity of water diffusion in these two fiber tracts. And we did this in a big analysis where we looked at all the different fiber tracts in the brain. So structurally, the brain still shows changes from brain injury years after the injury, and we are able to locate this change by looking at the white matter fiber tract structure. The implication is that future diagnostic and interventional applications of DTI may focus on the thalamic and inferior longitudinal tracts. Now, these are really big tracts; they're not very small, and they connect different brain regions.

So far we're looking at the structure of the brain, and the next question is: what about the function of the brain? In order to look at what happens in an injured brain in terms of brain connectivity or function, we look at functional MRI, as opposed to the structure I was just talking about in which we looked at the fibers. Recent work indicates that there are changes in brain connectivity in functional MRI, in the way the blood response changes across different regions of the brain during resting-state activity, when the brain is at rest. It's a very low-level response, but we can look at that brain connectivity and ask whether there are any alterations in brain function years after mild to moderate TBI. So we talked about structure first, and now we're talking about function: the brain's blood response, its activity. We focused on a model that was constructed to investigate the networks responsible for the most debilitating problems in the chronic stage of TBI. One of the biggest problems we have seen in the literature with brain injury is that you can't plan, you can't organize; you have what's called executive dysfunction. What we found in our data, again, this has been revised and is under revision at NeuroImage: Clinical, is that when people have had a traumatic brain injury, years afterwards they are still reporting some memory problems.
And when they report these memory problems, they actually show a lot of functional connectivity changes, and these functional connectivity changes really stem from an area called the left dorsolateral prefrontal cortex. The connectivity from that area, the left dorsolateral prefrontal cortex, is really not very good in people who had a brain injury eight to ten years before and are complaining about memory problems. Their blood response in that area, and going to three other areas in the brain, is not good at all compared to people who do not report problems or are normal. In the literature, you will see that the left dorsolateral prefrontal cortex is the hub of the central executive network: it's involved in planning, it's involved in organization, and it's also involved in cognitive control.

So far, what we have told you about our research in this presentation is that we're able to define areas in the brain, structurally, big fibers that are damaged, and we're able to define functional areas in the brain that are damaged years after injury. And we've now focused on one area, the left dorsolateral prefrontal cortex, where connectivity is not as good as it is in controls. So we're cross-pollinating, borrowing from different advanced neuroimaging techniques to tell us where we should treat for brain injury. What we decided to do is this: okay, we know these areas are important, so why don't we stimulate, or try to target, a certain area in the brain which is damaged so it can get better. We decided to look at networks for executive function and emotional performance, including the cingulo-opercular network. If you stimulate the dorsolateral prefrontal cortex, it can stimulate certain other networks downstream: it can stimulate the cingulo-opercular network, and it can also stimulate the executive network. Why did we do that? Well, this was my first trial, which I did, gosh, four years ago, and since then we've become involved in so many different trials. This was a double-blind randomized clinical trial. We stimulated the dorsolateral prefrontal cortex with what's called repetitive transcranial magnetic stimulation. We ended up recruiting and testing 32 veterans with mild and moderate TBI, and the outcome measure was executive function; we wanted to see if we could improve executive function or brain activity.

A little background: this is how TMS actually used to look in 1911, and this is how it looks now, so you can see how much we have improved how we stimulate the brain. TMS is basically electrical energy. I know that my collaborators in Istanbul, at Üsküdar and the hospital, have this technique, and a lot of other hospitals around the world have it; we're using it for research, and it's also being used clinically for the treatment of depression. Hopefully it will be FDA-approved soon for other indications, but right now it's FDA-approved for major depressive disorder. It's electrical energy within an insulated coil that induces MRI-strength magnetic fields. The magnetic fields pass unimpeded through the cranium for two to three centimeters, and in turn they induce an electric current in the brain; this stimulates the firing of nerve cells and the release of neurotransmitters in the area we call the synaptic cleft.
These neurotransmitters are released and hopefully will cause plasticity, create a lot of crosstalk between the neurons, and help the injured brain heal. One important point is that it does this through BDNF, brain-derived neurotrophic factor; I wanted to mention that because we're going to talk about it in the next few slides.

Really quickly, our data: we had 17 patients in active rTMS, people who got active treatment, and 16 in sham who did not get active treatment. We did not randomize on PTSD; we randomized on mild versus moderate TBI. We were pretty well balanced, but all the females ended up in the active rTMS group for some reason; we did not randomize on sex, and we don't have enough females in the veteran population. The first thing you see in the behavioral results is that, like some of the other trials that have used rTMS for depression, we did not see an improvement in executive function: there was no difference between the sham and active groups. In fact, sham showed a larger magnitude of change from baseline compared to active. These are basically our behavioral results. We tried to explain this; we wondered what was going on. It turns out that, because we had not randomized on PTSD, a lot of participants with higher scores on the PTSD checklist ended up in the sham group. We think, as in some previous trials, that the attention that was provided, together with their higher PTSD, being in a clinical trial for two weeks or more and getting two or three treatments a day, meant they showed more improvement than people who had lower PTSD. PTSD kind of took over everything.

I would not be presenting this analysis if I only had those results. We also did a diffusion tensor imaging analysis in which we looked at the difference between sham and active at baseline and then after treatment, and we looked at changes in white matter integrity; that is the same thing I talked about before, what the white matter fiber tracts look like. It turns out that we found a difference in what's called the cingulum-hippocampus fiber: there was a greater change in active than in sham at time two. Time one is baseline, and at baseline there was no difference between active and sham; obviously they had not gone through any treatment. But when we corrected for multiple comparisons across all the fibers and looked at active versus sham after treatment, there was a difference, and it was in the right direction: there was an increase in white matter water-diffusion integrity in a fiber that is responsible for many things such as memory as well as emotional processing, the cingulum-hippocampus fiber. So we just talked about behavior and about structure; now we can talk about functional connectivity. When we looked at the overall change in functional connectivity in active versus sham, we were able to see a decrease in connectivity between the stimulation site and the cingulo-opercular network at session two, active versus sham, a change in brain activity with rTMS. It was very nice that we were able to show differences in the brain even though we didn't see any differences in behavior. We were also able to compare the mood, which is depression, and the executive changes.
And we were able to compile profiles showing that, after two weeks of treatment, executive function shows a different connectivity profile that has improved, and mood, a secondary outcome, also changes and gets better with rTMS. We were also able to show some regions involved in this clinical efficacy, which is something very preliminary.

The last thing I wanted to mention: we went from behavior to structure to function. The last thing we looked at, and this is something we are also preparing for submission as a paper, is what is going on in the synaptic cleft. Are we really releasing BDNF, and is that really the biomarker that is the basis of why TMS works in the brain? We took blood samples at baseline and then throughout the two weeks of treatment, I think two more draws during treatment, and then one post-treatment sample and one at six months. We extracted the plasma, measured BDNF, and used ELISA and Western blot approaches. We did this at baseline, during treatment, and at the six-month follow-up. What we ended up showing, there are no standard error bars here; there are better graphs now that we are preparing for the paper, and I didn't want to show too much, but you can see that you have baseline, then 10 treatments, then 20 treatments, and then six months. We were able to get one blood draw between treatments; I tried to do more than that, but it was very difficult logistically. The blue is active, the orange is sham, and you can see that the active group stays statistically significantly higher, not just at the end of the 20th treatment: the BDNF to pro-BDNF ratio also stays higher than sham at six months. Brain-derived neurotrophic factor is the good molecule that you need in your synaptic cleft; pro-BDNF is the bad one, which cleaves to make BDNF, so you want more BDNF. Now you might ask me: why is it that at baseline you have higher BDNF in the active group? The answer is that if you go and look at the Western blot and at the polymorphism of BDNF and break it down into Val and Met, you will see that, at baseline, the Val/Val homozygotes had higher circulating BDNF levels than Met carriers. This graph is really important for understanding how rTMS can target not just a specific area in the brain, but also, very specifically, the genetic makeup of each individual; this is the highlight of precision medicine. If we knew from the start that we had certain Val/Val people in our population who already have high circulating BDNF, then we would know that rTMS is not going to show that much of an effect on them; if you are Val/Met, you have a lower level of BDNF and you may respond better to rTMS as things go on. Again, genotype is a significant factor; we should look at genotype, and we should look at structure and function as well as behavior, to really understand all the different things that are happening with TMS. All of these things that I've told you about, this is a very nice way of illustrating all the different mechanisms we're using to understand TMS, what we should randomize on, and what kind of comprehensive model we need that incorporates measures from different sources.
I think what is important is that you need multiple data sources, and when you have multiple data sources, you are able to do really good analyses, but you are also able to publish guidelines. We just published guidelines for the North American Neuromodulation Society; I am part of the Steering Committee, and we published the guidelines of the Post-Traumatic Headache Task Group, "Transcranial Magnetic Stimulation for Pain, Headache, and Comorbid Depression: INS-NANS Expert Consensus Panel Review and Recommendation." This was published because we combined functional and structural neuroimaging with rTMS in people who have headaches and in people who have TBI and depression, and we were able to look at all the different findings, put them together, and present to the scientific community what works and what does not work. Since I have been involved in this pilot study, I have been funded for about three more rTMS grants: one looking at improvement of memory in older adults with TBI; one looking at TMS for alleviating pain and comorbid symptoms of Gulf War illness in veterans; and one for neuronavigation-guided TMS, also looking at Gulf War illness, headache, and pain symptoms, which is a Department of Defense grant. And that's the end of my presentation. I wanted to thank all the people who have really helped me put this together, and if you want to see more, you can visit my website at abslab.stanford.edu. Thank you.

Next presenter: Dr. Turker. Professor Mahan, thank you for your contribution; we are all from Üsküdar University, and as a partner of our research group, Professor Mahan is taking part in our presentation, so we highly appreciate her participation in our session. The last presentation is about deep learning applications in disorder classification. As Professor Karahan shared, the new approach, the new studies, are mainly focusing on big data, and since the beginning of Industry 4.0, the data collected from all fields of science has been massive. Since the data size and the resolution of the data are increasing, the regular standard classification methods, approaches like artificial neural networks or support vector machines, are not good enough to work on that massive, high-resolution data, so data-mining processes are quite valuable at that step. Since the data resolution is increasing, both voice recognition and image processing are gaining importance, because the data collected from subjects are mostly image or signal data, like EEG or MR data. Today we will be sharing a recent study focusing on EEG data, using a deep learning approach to classify healthy subjects versus MDD subjects. Before starting, let me share the main difference between the standard, classical machine learning approach and the deep learning approach. The standard or classical machine learning approaches are called shallow learning approaches. Since the data resolution is increasing, the new technologies, algorithms, and hardware are good enough to process that massive, high-resolution data, while standard artificial neural networks or support vector machines are not, since they are mostly shallow learning algorithms. So for the high-resolution data collected from our MDD and healthy control subjects, we did not employ the standard classical machine learning approaches.
Recent studies underline the importance of deep learning for the coming years, since the resolution of all types of data is increasing. In order to underline the importance of the collaboration of new technologies, I wanted to share the figure on the left. The combination, or hybrid use, of Internet of Things technology with artificial intelligence is quite promising, since the collaboration and mutual interaction of those technologies will be quite important in the coming years, because medical treatment processes are going to rely mostly on wireless communication technologies. For example, you will swallow a pill, and the pill will collect biological data from your body and share that data over a wireless link with an algorithm, a machine learning or deep learning algorithm. The algorithm will process that raw biological data in order to predict or estimate some disorder with a given probability. So that is quite a valuable step, the collaboration of Internet of Things and artificial intelligence technologies, and the future of those two technologies working together is quite important, especially for healthcare studies.

There is another chart here, taken from PricewaterhouseCoopers International Limited. This chart also underlines the importance of healthcare; in the middle, it shows the current AI adoption of the given industries. As you can guess, financial services and high-tech telecommunication companies are mostly the ones using AI technologies, and in healthcare the tendency to use artificial intelligence technologies is increasing. So healthcare is a new investment field for the developing technologies and a new field to work on. This curve, called the Hype Cycle, is published every year by the company Gartner; it gives an impression of the coming years for investment fields. For example, when you look at the peak point of that graph, you see deep neural nets, or deep learning, at the top; it sits at the peak of inflated expectations. This means the expectation for deep learning is quite high, and the blue circle icon indicates that in two to five years it is going to reach the plateau of productivity. It means that in two to five years, products of deep learning algorithms will be used in our daily life. So it's quite important: it's a progressing technology, some applications are already being used in healthcare studies, and it's going to be very widespread and well known in two to five years. It's quite an important chart for all kinds of technologies and their peak levels of expectation.

Before the video I'm going to share on the left, let me share some statistics, some numerical values, that are important for taking a snapshot of current AI technologies in the healthcare industry. I'm going to start with the first one. By a United Nations estimate, China's population over 65 will reach 303 million by 2050. That's a lot, because it's within 30 years; in 30 years, the population of China older than 65 will be more than the overall population of the United States. So it's quite important: it's not possible to serve all those people in hospitals.
Instead, AI technologies will be employed to serve people before they come to the hospital, which will eliminate the application process and will serve the people and the healthcare system in terms of diagnosis and treatment-prediction processes. So it's quite important that China is investing in AI technologies, especially in medical studies, and the video on the left, which I'm going to run soon, is related to those studies. Another company, I'm just going to skip the text on the right, Infervision, is helping 280 hospitals worldwide to detect cancers from images. And as far as I know, it's a company of Jack Ma; you probably already know who Jack Ma is, the owner of the e-commerce company Alibaba. He's also investing in healthcare studies, so that transformation is quite important in terms of AI technologies. I'm going to run the video on the left. It underlines the importance of AI-supported technologies because it compares the performance of real doctors with AI technologies, which does not mean that AI-supported technologies will surpass the medical expertise of our hospitals. It's important to understand that with the support of AI technologies, healthcare will improve: standing alone, with the doctors, the system still works properly, but with the support of AI technologies, applications, and programs, the doctors will have more time to focus on the subjects who need care. So I'm going to run the video, to compare the performance of the doctors and the AI-supported software. It was a two-round competition. The first round was brain tumor diagnosis: the AI was correct 87% of the time in 15 minutes for 225 cases, while the doctors achieved 66% and it took them 30 minutes. In the second round, on prediction, the AI achieved 83% in three minutes, while the doctors achieved 33% and it took them 20 minutes. That comparison is important for understanding the contribution of AI technologies, because with their support, diagnosis processes will be faster and have high classification accuracy, which is quite valuable for doctors and healthcare systems.

So what is required for deep learning algorithms? The hardware is quite different from standard computer hardware: we need GPUs for deep learning algorithms because they require powerful hardware, and GPUs are used for the data-processing and classification algorithms. A typical GPU is shown at the bottom left, alongside the regular standard CPUs. And these are the frameworks used for AI and deep learning; mostly TensorFlow is used to develop these programs, and I'm not going to focus on the other frameworks, since TensorFlow is the widely used one.

So what is missing, what is the deficient part, of the standard artificial neural network, and why are we using deep learning algorithms? When you add hidden layers to your neural network, the number of layers increases, and since the learning process goes from the output back to the input, which is called backpropagation, and through that propagation the derivative of the error is computed, when you increase the number of hidden layers, the error value at the beginning of your neural network structure will be almost zero, which is not correct.
I mean, because of the number of neurons or hidden layers, the derivative of the error will look like zero, although it is not zero. Because of that problem, it is not recommended to use more than, let's say, three or four layers in standard artificial neural networks. This is called the vanishing gradient problem, and in order to overcome it, we use deep learning algorithms and the rectified linear unit (ReLU) transfer function. This is quite a high-performance solution to the problem, and it is part of deep learning structures.

There is a figure on the screen, and I'm going to introduce the deep learning classification process with an everyday example. When we see a number of different leaves on the screen, it's quite difficult for us to discriminate them. For example, on this screen we have 50 different leaves, but it's not possible, for example for me, to tell which leaf belongs to which type of tree. But when you feed a deep learning algorithm with different types of leaves, since there are 50 leaf samples on the screen, when we give the 51st, a new leaf, to the system, the system will try to find the features that are similar to the former 50 leaves. It's like this: suppose you have a cup; a cup has some distinctive features, like its handle, its shape, its height, and its material, say ceramic or porcelain. When we see an object, we first try to find its discriminative or distinctive features; for a cup, as I told you, we look for some discriminating features, some features peculiar to that cup. And the same is valid for deep learning algorithms. Take another image, this one: it is difficult for us, for me or for my kids, to tell whether the picture in each square is a chihuahua or a muffin. They look quite similar, but for a deep learning algorithm it's not that difficult, because there are some specific features of a chihuahua dog and some specific features of a muffin. So when we feed the deep learning algorithm with different chihuahua and muffin pictures, the system automatically learns, understands, or, let's say, memorizes; that is actually called the learning process. When the model learns the main difference between those two classes, it predicts the next, newly arriving image. How does the deep learning algorithm learn that difference? It's like us, like humans: we learn by experiencing. We know that the chihuahua and muffin images are quite different, but when I give those pictures to, let's say, my son, who is five years old, he will have some difficulty discriminating them. He will first try to find the distinctive features of the dog and the muffin. Deep learning algorithms also work that way: they try to find the specific features, and that is called feature extraction. We do not explain it explicitly; for example, I'm not going to teach my son that dogs have a nose, two eyes, and a symmetric face. As he experiences that the images are quite different, the model, the deep learning model, will likewise understand the main difference, or will try to extract the features that make an image specific to muffins or specific to dogs. That is how deep learning algorithms work.
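To make the vanishing gradient point above concrete, here is a minimal sketch, purely illustrative and not the presenter's code, that pushes a gradient backwards through many layers and compares a sigmoid-style activation derivative with a ReLU-style one. The layer count, width, and weight scaling are arbitrary assumptions.

```python
# A minimal sketch of why deep sigmoid networks suffer from vanishing
# gradients and why ReLU helps: push a gradient back through N layers and
# watch its magnitude.
import numpy as np

rng = np.random.default_rng(1)

def backprop_gain(activation_grad, n_layers=20, width=64):
    """Rough magnitude of a gradient after flowing back through n_layers."""
    grad = np.ones(width)
    for _ in range(n_layers):
        w = rng.normal(scale=1.0 / np.sqrt(width), size=(width, width))
        pre = rng.normal(size=width)               # stand-in pre-activations
        grad = (w.T @ grad) * activation_grad(pre) # chain rule through one layer
    return float(np.abs(grad).mean())

sigmoid = lambda x: 1 / (1 + np.exp(-x))
sigmoid_grad = lambda x: sigmoid(x) * (1 - sigmoid(x))   # always <= 0.25
relu_grad = lambda x: (x > 0).astype(float)              # 0 or 1

print("sigmoid, 20 layers:", backprop_gain(sigmoid_grad))  # collapses toward zero
print("relu,    20 layers:", backprop_gain(relu_grad))     # orders of magnitude larger
```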
So, I'm going to give a specific example with two letters: the first one is X and the second one is O. On the left there are four different versions of X, but they are all X to all of us: both the first and second, and the third and fourth, are all X symbols, even though they are quite different from each other. Then why do we say they are all Xs? Or focus on the O symbols at the bottom left: they are all the letter O, and although they differ from each other, we classify them, we see them, as the letter O. Since they all carry the main, specific features of being an X or an O, we classify them as X or O. The next layer, called CNN, the abbreviation of convolutional neural network, is the kind of layer used to look for the specific features peculiar to X or O. For example, the ones on the left, the green one, the purple one, and the orange one, are specific features of X: the squares in the green one, the squares in the orange one, and the squares in the purple one are not seen in the O symbol, and we cannot see them, for example, in a 3 either. So they are specific features peculiar to X. First of all, you try to find the specific features of the given image. The one on the left is the left diagonal, the one in the middle is the X crossing, and the one on the right is the right diagonal; these are the main features of X. We will use these as filters; I call them filters because they carry specific features and they all belong to the character X. Using those filters, we scan the original image; it's a kind of matching filter. When we see how well they match, and that value indicates how well a filter matches the image, we get the green, orange, and purple maps you see on the right. The one in the middle is the filter, and when we look at the original image with this filter, we get the map on the right, the green one; the black-and-white image on the left is our original data, and we look at the original data through the filters given in the middle. When we apply the filter in the middle to the original image on the left, we get the filtered matrix on the right, and the same is valid for the last one, the purple one: when we look at the original image with the purple filter, which has specific features belonging to X, it gives the map on the right, the purple one. You can probably see the left diagonal for the green one, the X crossing in the middle, and the right diagonal for the purple one. So when we apply a filter to the original image, we get the map on the right. This is the convolutional layer, and it is used to generate a new layer that contains the patterns similar to our original image. So the main purpose of a deep learning algorithm is looking for a specific pattern; it's like a fingerprint, it's like face recognition. Deep learning algorithms mostly focus on distinctive patterns or features. The next layer is the pooling layer, used to reduce the matrix size: from 7 by 7, we converted our original matrix to 4 by 4 in order to diminish the size of the data. That is called pooling.
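Here is a minimal sketch of the convolution and pooling steps just described: sliding a small diagonal filter over a binary image of an "X" and then max-pooling the resulting feature map. The image, filter, and pooling size are illustrative, not the slides' actual matrices.

```python
# Convolution ("filter matching") followed by max pooling on a toy X image.
import numpy as np

# 7x7 binary image of an X (1 = ink, 0 = background)
X_IMG = np.array([
    [1, 0, 0, 0, 0, 0, 1],
    [0, 1, 0, 0, 0, 1, 0],
    [0, 0, 1, 0, 1, 0, 0],
    [0, 0, 0, 1, 0, 0, 0],
    [0, 0, 1, 0, 1, 0, 0],
    [0, 1, 0, 0, 0, 1, 0],
    [1, 0, 0, 0, 0, 0, 1],
])

# 3x3 "left diagonal" filter, one of the features peculiar to an X
DIAG_FILTER = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
])

def convolve2d(img, kernel):
    """Valid 2D cross-correlation: how well the filter matches each position."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling to shrink the feature map."""
    h, w = feature_map.shape[0] // size, feature_map.shape[1] // size
    pooled = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            pooled[i, j] = feature_map[i*size:(i+1)*size, j*size:(j+1)*size].max()
    return pooled

feature_map = convolve2d(X_IMG, DIAG_FILTER)   # high values along the X's diagonal
print(feature_map)
print(max_pool(feature_map))                   # smaller map, strongest responses kept
```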
This is the rectified linear unit layer, which eliminates the negative values from the feature map; then the white and the black regions are separated, and you can easily see the diagonal in the middle. When convolutional, rectified-linear-unit, and pooling layers are stacked one after another in consecutive order, we finally reach an output function called softmax. The softmax function is used to classify, for example, whether a color is green, blue, or purple; as you can see, the probabilities of the colors are listed. Instead of saying "this is green" or "this is blue," the system automatically calculates the probabilities using softmax. When we use probabilities, it becomes an estimation: we predict how green this color is. So this output says it's probably green, and with less probability it's blue. Why is that so important? If you feed the system with an image, and the model does not know whether it is a car, a truck, or a ship, when we feed the system with an original image, let's say a car, the model, the deep learning algorithm, will try to find the main features of a car: it has four wheels, it has mirrors, its shape is like the one on the left, its height, its weight. Those features belong to a car. At the end of the layers, the convolution, pooling, and rectified linear unit layers, the model calculates a probability using the softmax function. So it says it's probably a car; with less probability it's a truck, and with decreasing probability it's one of the other types of transportation vehicles. From that probability, we say it is probably a car.

Here is a specific example: a deep learning model is fed with an original image, and we want the model to predict whether the images are dogs, people, horses, or cars. As you see on the left, the deep learning algorithm, called AlexNet, catches the car with a probability of one, so the model says it's a car. What about this one, a dog? What is the probability of its being a dog? It's 99%. Or this one: with 97, or let's say 98, percent, it's a person; it gives the probability of being a person. When I switch back to the image on the right, it gives the probability of being a dog. Why does the model classify this object as a dog? Because it mostly carries the features of dogs; that's why it says that, with 99% probability, it's a dog. The same is valid for a cat.

Benefiting from all this information, we used such a model, a deep learning algorithm, in a recent study by our research group, which has already been published in Clinical EEG and Neuroscience: we classified major depressive disorder subjects versus healthy controls using a deep learning approach called ResNet. I'm going to show some screenshots of our original data, how we converted that original data, and then I will share the results of our study. This is the process we followed. We first collected the original data using the EEG cap. The raw data is collected from each and every electrode; we then converted the 10-20 system placement of the electrodes to a head-shaped matrix, and then converted that circular matrix to a square matrix. As you can probably see, all those numbers, excluding the zeros, represent the electrodes. So each and every matrix represents one sample.
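Since the electrode-to-matrix step is only described in words, here is a minimal sketch of how one time sample of a 10-20 recording could be placed into a small square matrix. The grid positions below are rough assumptions for illustration, not the authors' exact mapping.

```python
# Turn one time sample of a 10-20 EEG recording into a square matrix.
import numpy as np

# Rough row/column positions of a few 10-20 electrodes on a 5x5 scalp grid.
GRID_POS = {
    "Fp1": (0, 1), "Fp2": (0, 3),
    "F3": (1, 1), "Fz": (1, 2), "F4": (1, 3),
    "C3": (2, 1), "Cz": (2, 2), "C4": (2, 3),
    "P3": (3, 1), "Pz": (3, 2), "P4": (3, 3),
    "O1": (4, 1), "O2": (4, 3),
}

def sample_to_matrix(sample, grid_size=5):
    """Place one amplitude value per electrode into a square matrix;
    positions without an electrode stay zero."""
    mat = np.zeros((grid_size, grid_size))
    for ch, value in sample.items():
        r, c = GRID_POS[ch]
        mat[r, c] = value
    return mat

# One time sample: amplitude (uV) per channel at a single instant.
sample = {ch: v for ch, v in zip(GRID_POS, np.random.randn(len(GRID_POS)) * 20)}
print(sample_to_matrix(sample))
# At a 125 Hz sampling rate, 125 such matrices are produced per second.
```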
If the sampling rate is, let's say, 125 hertz, then for each and every second we get this matrix 125 times. With that high-resolution, massive data, we converted the original raw EEG into an image file. This is a raw EEG trace; let's say a particular electrode gives that raw trace, but it is not an image file. Since this is a signal, an EEG signal, we converted the raw EEG signal into an image file. How? First of all, we normalized the EEG data to a number between zero and one: if the maximum value of the amplitude is some specific value, we set it to one, and we set the minimum value of the amplitude to zero, so we normalized the amplitude value of each and every electrode between zero and one. Then we converted that value between zero and one to a grayscale color, where one represents black and zero represents white. The next screen shows the image representation of the raw EEG. You see a white separator line between the two images: the one on the left is the image representation of the raw EEG for a healthy person, while the one on the right is the grayscale representation of the raw EEG signal of a subject suffering from MDD. It is quite important for us to convert an original EEG signal into an image file, because the raw EEG, even on the screen, is not simply two-dimensional: it has its time scale and it has its amplitude. So we converted it from the time and amplitude domain to a two-dimensional image. Since deep learning algorithms mostly use images, that grayscale image is given to the system for a group of subjects suffering from MDD and for healthy controls; as far as I remember, we had 46 members in each group. We then used different types of deep learning algorithms, and that is the process we followed for separating the raw data and converting it into two-dimensional matrices for the creation of grayscale image files.

This is the output and the performance of our study. As you can see, for the delta frequency band, ResNet-50 classified with 90% accuracy, which is quite high. As you can see from the numbers of subjects suffering from MDD and healthy controls, we had 46 members, and 45 of our subjects suffering from MDD were correctly classified as MDD; you can probably see how sensitive and how specific this model is. As for the receiver operating characteristic curve, abbreviated as the ROC curve, it shows the performance of the models, and the area under the ROC curve is quite important for comparing the performance of different models. As you can see from this ROC curve, the one with the continuous black line, which represents ResNet, its performance is quite a bit better compared to Inception and MobileNet. We used ResNet, and as I shared in the previous table, the performance of ResNet-50 is better than the other deep learning architectures, and we underline the importance of the delta frequency band. Compared to the theta, alpha, and beta frequency bands, the classification accuracy is quite high when I focus on delta, while the classification performance for alpha is low for all types of deep learning architectures. It is quite important to underline delta as the key frequency band for MDD subjects too. This is the ROC curve, and it is really important to assess the performance of the models by using ROC. In the course of time, alongside the contribution of deep learning algorithms, we are also going to need some human-centered studies.
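To tie the pipeline together, here is a minimal sketch of the two steps described above: min-max normalizing raw EEG amplitudes to [0, 1] as a grayscale image and training a ResNet-50 with a two-class softmax head. It is an illustrative assumption-laden sketch using Keras, not the published code; the image size, random stand-in data, and training settings are arbitrary.

```python
# (1) Min-max normalize raw EEG amplitudes to [0, 1] as grayscale,
# (2) fine-tune a ResNet-50 to separate MDD from healthy-control images.
import numpy as np
import tensorflow as tf

def eeg_to_grayscale(eeg_matrix):
    """Min-max normalize to [0, 1]; 1 renders as black, 0 as white."""
    lo, hi = eeg_matrix.min(), eeg_matrix.max()
    return (eeg_matrix - lo) / (hi - lo + 1e-12)

def build_classifier(input_shape=(224, 224, 3)):
    """ResNet-50 backbone with a 2-class softmax head (MDD vs. healthy)."""
    base = tf.keras.applications.ResNet50(include_top=False, weights=None,
                                          input_shape=input_shape, pooling="avg")
    out = tf.keras.layers.Dense(2, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Toy example: random "EEG images" standing in for the real grayscale files.
images = np.stack([
    np.repeat(eeg_to_grayscale(np.random.randn(224, 224))[..., None], 3, axis=-1)
    for _ in range(8)
])
labels = np.array([0, 1] * 4)           # 0 = healthy control, 1 = MDD
model = build_classifier()
model.fit(images, labels, epochs=1, verbose=0)
print(model.predict(images[:1]))        # softmax probabilities for the two classes
```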
In 2017, Prime Minister Shinzo Abe underlined Society 5.0 rather than Industry 5.0. So really, the contribution of those smart systems is quite important for human beings, and it is really important to teach the next generation, our kids, with values. I'm going to share a video; it is quite important, and I really like it. It is narrated by the head of the e-commerce company Alibaba, Jack Ma, and I hope you will agree with me. Yes, that is quite important for me and for our kids too: it is not possible to compete with the machines. Instead, we have to teach our kids soft skills, which are quite important for our kids, for the next generation, and for society. Finally, I want to thank all the audience and all our session partners for their valuable contributions. Please visit our research page, parge.uskudar.edu.tr, to get information about our neuroscience-based studies, and I highly appreciate the American Psychiatric Association for the valuable organization we are taking part in. Thanks.

[Video excerpt, Jack Ma:] Education is a big challenge now. If we do not change the way we teach, 30 years later we will be in trouble, because the way we teach, the things we teach our kids, are the things of the past 200 years; it is knowledge-based, and we cannot teach our kids to compete with machines, which are smarter. We have to teach something unique, so that a machine can never catch up with us: values, believing, independent thinking, teamwork, care for others. These are the soft parts; knowledge may not teach you that. That's why I think we should teach our kids sports, entertainment, music, painting, art, making sure humans are different. Everything we teach should be different from the machine. If the machine can do it better, you have to think about it.
Video Summary
The video discussed the concept of computational psychiatry and its promises for the future of psychiatric diagnosis and treatment. It highlighted the use of computational methods such as mathematical modeling, deep learning, and measurement of brain activity to improve the understanding and treatment of psychiatric disorders. The video also presented several presentations on topics such as neuromodulation in psychiatry and brain injury, deep learning for disorder classification, and the use of EEG biomarkers. It emphasized the importance of collaboration and the integration of new technologies, such as internet of things and artificial intelligence, in the field of psychiatry. The speaker discussed the limitations of traditional machine learning approaches and the need for more advanced techniques, such as deep learning. A recent study was presented that used deep learning algorithms to classify major depressive disorder subjects from healthy controls using EEG data. The study found high accuracy in the classification of subjects and highlighted the importance of specific frequency bands in distinguishing between groups. The speaker also emphasized the importance of soft skills and values in education to ensure that humans can differentiate themselves from machines in the future.
Keywords
computational psychiatry
psychiatric diagnosis
psychiatric treatment
mathematical modeling
deep learning
brain activity measurement
neuromodulation
brain injury
disorder classification
EEG biomarkers
internet of things
artificial intelligence