TikTok, Tweets and…Trouble? - A Conversation about ...
Video Transcription
Good morning, everybody. I think we'll get started. In deference to those who came early and have been sitting in this freezing cold for way too long, in case you didn't hear, I and others have requested that building services be notified. Apparently, they have been multiple times and are working on it. So, I am very sorry for the temperature. The good news is, I hope we have a hot topic. And I hope we have a hot topic that you'll be eager to think and talk with folks about. So, let's get it rolling. So, my name is Sandra DeYoung. I am a child psychiatrist in Cambridge, Massachusetts, where I've primarily been a clinician educator. Now, it is advancing. Sorry, they switched the system on me. So, these are my disclosures. The main one is that I wrote a book on e-professionalism, and I get some royalties for that. So, this talk is going to be in part about the risks of technology, including social media, what I call the compliance issues. But we're also going to talk about what I think is an increasingly important topic, which is that of competence. The days of just say no are gone, and we all have to be prepared to be competent in the use of social media and technology in our practices. And then, finally, I'm going to invite you to think with me about some what I call meta-ethical issues, which are broad, overarching issues that I think sort of actually underscore everything that we do. So, the first thing I like to say is that you could argue that technology is just another tool, and that is, in fact, what the author Margaret Atwood has written. And I need to quote Margaret Atwood because she's Canadian and I'm Canadian, and when I grew up, we were required to do a percentage of Canadian content in everything that we did. So, that's my Canadian content. 
But as you can see, she compares it to an ax and says, you know, it's like an ax, you can cut down a tree, you can murder somebody with it, and then there's the stupid side you hadn't thought about, which is you can accidentally cut your foot off with it. And I'm going to argue today that that's true, but it's not sufficiently true. So, in terms of compliance, the first thing I just want to point out is how quickly we have all had to adapt to technology. And this is just to remind you of the timeline that we have been dealing with. But it's really been 35 years of very rapid change. Compliance really first became an issue in medical education when a group of prescient researchers did a survey of medical school deans back in 2009, and asked them, have you had any professionalism issues among your students? And here are some examples. And in that survey, 60% of medical school deans said that yes, indeed, they had, and it included things like profanity, discriminatory language and behavior, intoxication, and sexual content. So, that's really what got the field started, thinking about this new horizon. A similar study of state medical boards found that 90% actually had had some kind of e-professionalism complaint. So, not a lot of complaints per board, but a lot of boards had had at least one complaint. And then, of course, we started to see in the news the stories that the media loved to drag up of doctors doing things they shouldn't be doing, and this was the case of a Rhode Island pediatrician who was busted for having patient-identifying information on her Facebook page. She lost her job, and she lost her license. 
So, a group of us in 2010 who were training directors at the American Association of Directors of Psychiatric Residency Training got together and said, we really need to be thinking about doing something about this, this is clearly a new big area in our field, and I just wanted to recognize those folks who were involved in that task force. So, the first thing that we did was to try to think about what were the domains of compliance that we were really talking about here. And so, I like to think of professionalism as this intersection between what's professional behavior, what's legal, and what's ethical. And, obviously, the three have a major intersection. There are many definitions of professionalism. There's no unifying or gold standard definition, but this is one I like from a group that got together in the early part of the century, and one of the important things that they say is that professionalism is the basis of medicine's contract with society. It aspires to altruism, accountability, excellence, duty, honor, integrity, and respect for others. So, I think this notion of the social contract is going to come back as we think about our use of social media. The other thing I would say is that our clinical practice as psychiatrists, I would argue, is quite unique when it comes to technology. Our patients are particularly vulnerable, and their struggles are very much in their thoughts and feelings and behaviors, right? And those are absolutely perfect things to get transmitted on technology. At the same time, our work is highly intimate, right? We come to know very intimate details of our patients' lives. And, at the same time, I would argue that technology itself has unique properties. It's accessible 24-7. It's extremely fast at disseminating information. You can synthesize different kinds of communication all in one missive, and there are no really obvious physical boundaries to it. 
You can do it in bed, in your pajamas at midnight. You can do it at your desk in the morning, whatever, right? So, there's a whole bunch of context here. In this country, there's been a debate, I would say, between forces that have been in favor of more regulation of technology and forces that have been advocating for less regulation. So, the folks who argue for less regulation have really talked about the importance of open communication, better access to care, and innovation as things that we wouldn't want to hinder. And, I think, in terms of ethical principles, what they're really getting at is the principle of distributive justice, right, this idea of access, and of beneficence, of doing good. Those who have argued for more regulation have emphasized the importance of privacy and security of the information, and also the particular vulnerability of our populations, right? So, as psychiatry patients, our patients are vulnerable. I see kids. I'm a child psychiatrist. So, my patients are doubly vulnerable. And those folks, I think, are really espousing the ethical principles of autonomy, confidentiality, and non-maleficence, do no harm. So, we've had this problem over the last 35 years, which has been that we are always racing to catch up, that as new technologies come along, we're always just lagging behind in terms of understanding them, thinking about them, and knowing how to use them. And, in fact, real-life practice is always outpacing us, particularly our patients' real-life practice, but also many clinicians' as well. So, what we felt when we were thinking about this issue was that there really needs to be a conceptual approach. You can't worry too much about the details of every new technology. What's needed is a conceptual framework that invokes ethical, professionalism, and legal standards to help people avoid pitfalls, but also to develop competency over time. So, we put our heads together. 
We collected a whole bunch of vignettes from workshops like this and lectures like this. And, we came up with a list of eight potential professionalism pitfalls. So, you can see them here on the screen. I'm not going to read through them all. We don't have time to go through all of them today, but I'm going to try to give you a flavor of how we try to teach how to approach these issues by focusing on the topics that are in red here. So, let's start with a vignette. A nurse leaves a busy emergency room at a community hospital and posts on her Facebook page about the difficult day she has had. She comments particularly about a 20-year-old patient who's been seen over many months at the clinic due to cutting behaviors. In her post, she refers to the patient as a, quote, flaming borderline and complete nutter with a huge tattoo of a skull and bones on her left arm. The next day, the emergency physician, who is a Facebook friend of the nurse, is called in to evaluate a 20-year-old girl who has just cut herself. On entering the interview room, the ER attending notes a tattoo on the patient's arm. Let that sink in for a second. Anybody have concerns here? Anybody think there are professionalism concerns here? No? Shaking heads. Raise your hand if you think there's a professionalism concern here. Great. Okay. So, there are many concerns here, but let's see. Anybody want to shout out one of the things that they think is a concern here? Identifier. So, what's the identifier here? Tattoo, yes. And it turns out in the medical literature, a tattoo that is described as specifically as this one is legally considered a patient identifier. Yeah. So, confidentiality, privacy concerns. Anybody have any other kinds of concerns? Unprofessional derogatory comments, yes, yes. Made on a Facebook post. And what do we think is the impact, for example, if members of the public who are also on her Facebook page see that? 
Is that going to help us as psychiatrists in our profession with our reputation? It's certainly negative, exactly. So, you can see how, although there are some obvious concerns here around confidentiality, there are other kinds of concerns here as well. So, let's try another one. If I can click this. A psychiatrist seeing a female patient with a chronic medical illness notices that the patient is unforthcoming in sessions, but is very active online. With her permission, he decides to follow her on Tumblr, so this is an old anecdote, under a pseudonym so that he can better see how she's doing by monitoring her re-blogs and posts. Soon, he starts to message her and she responds. Anybody feeling a little nervous? Okay. Initially, the messages are about minor daily events in the patient's life and the clinician responds supportively. However, they soon become more intimate. The patient asks the clinician questions about his family and interests. The psychiatrist responds. The patient describes how she thinks about the clinician while she's lying in bed at night. The psychiatrist describes thinking about the patient while in the shower. One day, the psychiatrist attends a workshop on e-professionalism, not unlike this one, and becomes uncomfortable that his Tumblr contact with the patient may be inappropriate. He stops responding to her messages. The patient feels abandoned and abruptly discontinues the treatment. Anybody want to name a concern here? Boundaries, yes. And boundaries are sort of an interesting theme that tend to sort of permeate through those eight professionalism pitfalls that I was describing. Should we be on social media with our patients? I see some shaking of heads. Anybody think we should be? Want to argue that we should be? Okay. I will say there have been some very moving cases of physicians doing end-of-life care who have used social media with their patients and they're described in the literature. 
So I think there are certain exceptions, but in general, the Federation of State Medical Boards says we should not, and I'll come back to that as a guideline. So here's another example. An early career psychiatrist discovers that on an online rating site, someone has submitted a negative review of him. The writer alleges that the clinician, quote, occasionally violated my civil rights. The psychiatrist suspects the writer is a former patient with chronic mental illness who presented regularly to the emergency room during the clinician's training and was occasionally committed to the inpatient unit. The psychiatrist considers whether to submit positive reviews under various pseudonyms, pretending that they are written by real patients, to create a more favorable impression of the psychiatrist on the website. Anybody got concerns here? Shouldn't do that. Why not? What's wrong? Anywhere from nefarious to downright not a good idea. Yeah. So I had an interesting discussion after I presented this anecdote to an audience, and one of the people was a lawyer, and she questioned whether this, in fact, constituted any kind of slander or potential grounds for legal action, and I'll come back to that. So, generally not a good idea to pose as somebody else on your social media sites, and we'll come back to that, too. And I would argue that two wrongs don't make a right, right? That lying is just not a great idea here. So I have some very practical tips here about avoiding trouble. 
One of the ones that has been going around for a long time is the grandmother rule, which is don't post anything that you wouldn't be comfortable with your grandmother seeing. But in general, one of the themes of guidance around use of tech is that it can be used to support but not replace the doctor-patient relationship. We are increasingly using tech for a whole variety of reasons, but in general, the expectation is that face-to-face care is still happening, so that's one of the reasons why, for example, in telehealth, there's an expectation that there be a face-to-face visit before initiating care. I also like to consider a written informed consent. So, for example, in my practice, I do use texting. I don't use social media sites, but I do use texting, and I have all my patients sign a texting consent that outlines all the potential pros and cons. I would also just remind people that there are a lot of different ways to communicate something, and so it's important to use the appropriate means of communication for the task at hand, and to know how to use that particular form of communication safely and effectively. Encryption is our friend and should be used as much as possible. And it's really important, I'll talk about some national guidelines, but it's really important to know your own institution's, department's, or IT guidelines. Those are the ones that are the most likely to get you into trouble. Your malpractice carrier also will have thoughts about what you should be doing on social media and technology, especially with your patients, and so knowing their view on it and their policy is also important. And then, finally, your state board, and so knowing whether your state board is one that likes to go after folks around technology issues. This is more of a conceptual frame. This was an article written about Googling your patients many years ago, but I love these questions. I find them incredibly helpful in my own practice. 
So asking yourself why you want to use technology with a patient, whether your use of technology would advance or compromise the treatment, whether you need to get informed consent, and whether you are going to share the results of technology with the patient. So, for example, if you're on their Facebook page or you're following their Twitter feed or on TikTok with them, are you going to bring into the room stuff that concerns you that you have seen on it? Should I document the findings from the use of technology in the medical record? And the answer to that is almost always yes, but it has to be up for discussion. And how do I monitor my motivations and the ongoing risk-benefit profile of using technology? And I think that one is probably the most important, because we all have certain curiosities, right? We all like to know what people are up to. If we have a patient who tells us they've just written a book, we kind of want to know more about it, and all kinds of stuff, right? And we're all sort of a little addicted to tech, right? I mean, that's just the bottom line. We're all on our phones. We're all on them all the time for compelling reasons. And so it's easy to click, click, click, and we may fall into practices that we don't consciously intend to. Whoops, sorry, I'm just... So one of the things that I would like us to think about together, and I'm gonna ask you, if you can, to turn to a neighbour and spend a few minutes thinking about some of these questions. Is it okay for a psychiatrist to post photographs and other content of themselves that is personal on a dating site? Is it okay to express controversial political opinions? Is it okay to present themselves intoxicated, semi-nude, completely nude, to advertise their sexual services, or committing violent acts? These are all based, as all of these vignettes are, on actual cases that I've been consulted on or asked to consult on. 
So turn to your neighbour and see if you can sort out: are any of these okay? Any not okay? And that was a quick one. Okay. Lots of heated discussion. I knew we would heat up this room. Let's go through these. How many of you think it's okay to be on a dating site? Lots of you, not all of you, but quite a few of you. How about to express controversial political positions? This is particularly relevant in the current context, right? Okay. One says it's okay. Really? That's it? Some shaking. Okay. So less comfort. Anybody think it's just downright wrong? Okay. So this is a grey zone. That's really interesting. Okay. What about intoxicated? That's a no. Semi-nude? No. Bathing suit? You could argue that that could be... Okay. Completely nude? Okay. Advertising sexual services? Committing violent acts? No. Okay. I like you guys. Okay. So the violent acts one, I haven't been consulted on, because I think most people know that that's just wrong, right? But everything else has come up. And the sexual services one is another one that actually has been more contested when we have talked about it: is it our job to monitor and regulate what someone does in addition to their full-time job? So just throwing it out there. So here's my two cents about these personal-professional boundaries. I still think it's a really good idea to have separate usernames and accounts for personal and professional use. It's not okay to use pseudonyms for your professional content. If you are posting anything about psychiatry or mental health, then the reader of your post has a right to know who you are. But if you're posting stuff on a dating site or whatever, I don't know what the rules are. I've never been on a dating site. I'm a happily married woman of 32 years. But you have to follow the rules, obviously, and I think pseudonyms are a way to protect your professional identity. 
The FSMB, the Federation of State Medical Boards, reminds us to stay professional even on personal sites, and of the importance of thinking through unintended consequences before launching an account. So this is probably the biggest issue I see, is that people haven't thought things through. I work with a lot of trainees, and they haven't necessarily thought through what's gonna happen when they're applying for jobs, and people are gonna be looking at their social media sites, which is now routine practice. And so trying to help them think it through. And then it's so important to do regular inventories of your own content online, seeing what comes up when you Google yourself, and trying to be aware of that. So here's what I was referring to earlier, that there are national guidelines. Among the oldest is from the American Medical Association, and the American College of Physicians and the Federation of State Medical Boards also issued standards on online medical professionalism in 2013. The American Medical Association has also issued a very nice document about the use of apps, and there's even a healthcare blogger's code of ethics, which I love to show my trainees, because they had no clue that that existed. It's not a national standard, but it's an agreed-upon set of standards. I'm gonna be talking more about the APA's App Advisor and app evaluation model. Of course, we all should be familiar with our codes of ethics, both AMA, APA, and AACAP. And then the most recent guideline is from the FSMB on social media and electronic communication, which they issued as an update to their earlier statement. So these are things that I think we all should know, because these are the kinds of things that may live on to haunt us if we ever, God forbid, end up before a board or something else having to explain our behaviors. I also want to say that I think that leaders need to take responsibility for this as well. 
So I would highly recommend that training programs, departments, clinical services, and institutions think about having their own policy around these issues, and that it be a policy that is applied equitably, with clear and reasonable consequences and remediation. So this can't be a policy that just sits in the staff folder on the platform that you use. It has to be something that is actively used and talked about. I'm a huge believer in teaching about this stuff, teaching your peers, thinking about it as peers, talking about it in peer supervision, and providing opportunities to reflect. So sharing cases, obviously in a de-identified way, about struggles that we have had. And to focus on competencies, not just compliance, not just the horror stories, but struggling with how best to use technology in our practice. It's incredibly useful, I find, to have social media mentors. Those are typically people who are younger than I am, and they are really helpful in not only figuring out how to use stuff creatively, but also how to be protective about things like privacy settings. And then I think we need to be learning about integrating digital health into patient care, teaching, mentoring, and administration to model best practices, so that we're all learning as we face this new frontier. So let's shift gears and talk a little bit about competence. Technology is now a useful professional tool, I think we would all agree. It's used in education, it's used in a whole variety of ways to monitor data, including sensor-based data, and it's used to deliver care, right? We have cognitive and behavior-based interventions delivered through technology. It can enhance adherence to medications and appointments. I'm sure many of you use texting to remind your patients about appointments and refills due. It can be important for lifestyle medicine, which is sort of the current term for good old health maintenance issues, so things like diet and exercise. 
Very importantly, we now have the capacity to analyze data and do machine learning for research and QI, and there are many sessions at this meeting on that topic. We use technology, very importantly, to disseminate evidence and scholarship, right? So that's how we learn about the state of the field. And if you submit an article to a journal these days, you'll typically be asked for your Twitter handle, right? Because that's how these things get sent out. Advocacy is as important as ever, and using our social media to advocate, I think, is for all of us an important part of what we do. And then, of course, we have our own professional identities. I'm imagining many of you have websites, and maybe professional Facebook pages, and other kinds of ways of having a presence and sharing your identity online, and in fact, thinking about your own identity and how you want to portray yourself online. I'm a clinician educator and former training director and was involved in the development of the standards for psychiatric education. And in 2022, when the milestones for child psychiatry were being revised, we added a new digital health sub-competency. This is the first time anything so technological has actually been in the requirements for training. These are not in the ACGME written requirements. These are in the milestones, which are a part of the evaluation of fellows and residents. So moving even beyond telehealth and the electronic medical record, this sub-competency talks about incorporating technology into clinical care, education, and research, and says that graduating fellows should be able to integrate multiple different digital technologies to augment the clinical experience appropriately. So this is really, I think, entering a whole new area, and this is the CAP milestone. Patient Care 7 is the sub-competency. 
There's also been work among psychiatric educators to try to develop entrustable professional activities in digital technology. So an entrustable professional activity is a skill that we would expect a graduating resident to have, something that tells us we would entrust them with our care. And a group got together to develop an expert consensus on EPAs in digital technology and psychiatry, and this is what they say: establishing an authentic digital identity; building a diverse professional network using social media; disseminating one's own and others' work; sharing perspectives to further professional development goals, which is, I think, an interesting one; managing conversations online professionally, so being able to maintain that frame; amplifying advocacy efforts for improving healthcare; using social media for education; and providing the latest evidence-based health information to peers, patients, and learners. So you can see some themes here. So here I have another question for you. There's no current answer to this question, but I'd love to hear your thoughts, and so we'll do another sort of turn-to-your-neighbor activity. And here's the question. Should technological competencies in psychiatry be assessed as part of your initial licensing in medicine, in continuing medical education under the ACCME, and in continuing certification with the American Board of Psychiatry and Neurology? All of those? None of those? Some of those? You tell me. Let's have five minutes to chat with your neighbor. Okay, let's see what you all have to say, if we could bring it back together. And for those of you who are online, I just wanted to let you know, we will be taking questions at the end of the session, but please do keep your questions coming, and I will monitor them and sit down to answer them when I'm done speaking. So, should tech competencies be assessed as part of licensing requirements? Yes? 
Anybody want to raise their hand? Yes, we have one yes. A few yeses. No? More noes. Okay. How about in continuing medical education? A lot of yeses. Okay. Can someone just say why they felt that that was the case? Okay, so tech is changing constantly, and for us to keep up and be assessed on it, we need to have it as part of our CME. How about for continuing certification with the ABPN? A few people raising their hands. Do you want to say why? Same reason? Yes, same reason. Yeah, okay. Okay, that's really interesting. I happen to agree with you. I teach fellows, and one of the things I see is that a lot of them are going off into practice, and a lot of it's on telehealth, and they're using tech, and I'm not sure any of us knows what they're doing, so I would like to see some continuing education around that. So let's think a little bit about resources for ongoing learning. One of the things that I, as an educator, really believe is that we all need to be engaged in lifelong learning. So full disclosure, I am not a techie. I happen to be married to a techie, but I am not a techie, and I came to this field really because I could see how tech was rapidly becoming integrated, and so as with every other part of our field, I would say that we need to engage in lifelong learning. So the APA has really tried to make inroads in this area, and they have some, I think, really nice resources online. One of them is this Digital Mental Health 101, which is what clinicians need to know when getting started, so that's for the digital immigrants in the room, not those of you who are digital natives. There's also a Social Media Best Practices for Psychiatrists, which is also incredibly useful, and this is what that website looks like, and you can see, there's a video to watch, and then there are further resources down the page. 
This is the App Advisor model, which I alluded to earlier, and what it does is, instead of taking the approach of trying to say, here are 50 good apps, it says that we, as clinicians, ought to be able to evaluate the apps that we use, and it gives you a series of questions to ask about an app before you adopt its use in your practice. So, for example, things like: what is the funding for this app? Who's funding it? And what are their privacy practices? John Torous and others did a study a while back looking at the privacy practices of apps, and most of them were not private, so that's one of the very big concerns that we have to have. Oh yeah, sorry, I forgot I put this here. So here are the eight questions to ask yourself when you're wanting to think about using an app. There's also constantly new work coming out in the scholarly literature about this. Howard Liu and others wrote this very nice piece on social media skills for professional development in psychiatry and medicine, which was in the Psychiatric Clinics. Other universities have also started to develop websites and online learning centers for this, so there's the Center for Innovative Teaching and Learning at Indiana. Stanford has a very nice source of learning that they keep updated as well. So here are some thoughts in terms of lifelong learning. I would say start by knowing how to use your EMR effectively, and I say that because the EMR has been, in my view, one of the most challenging technologies that has come along for us in the last 10 to 15 years, and it's a huge source of burnout. 
And so if you have time to spend on technology, I would say get really good at your EMR, because that's going to prevent you from writing your notes up until midnight on weekdays and so on. And increasingly, EMRs are developing to include other kinds of elements and are becoming increasingly important for research, and so producing a really good online note is actually going to be increasingly important. If you're new to social media, I suggest considering one social media platform for professional use. I happen to use Twitter, now X, but you can choose one. I also use LinkedIn, full disclosure. Consider one app and one online resource for clinical use. So don't start using 20 different apps in your practice. Choose one that you think you're going to get a lot of mileage out of with a lot of patients and get to know that app well, and similarly with online resources. Master those first, and then expand your toolkit gradually, and I use this phrase, to stay within your zone of proximal development. That's a child psychiatrist phrase from the psychologist Vygotsky, and it means to stretch yourself, but to always be within reach in your new learning, so that you should be learning and growing, but don't take huge risks. Don't try to do things that are way beyond your comfort zone. Consider getting a social media or tech mentor, again, very helpful, and keep up with new concerns as they are raised. So this stuff is, as you know, always in the media, and it is, I think, part of our jobs to be aware of what the new concerns are. Okay, so now we're going to shift gears a little bit to some meta-ethical questions around media use. You remember I suggested this tension in our country between those who are pro-regulation and those who are anti-regulation when it comes to the technology space. 
I just want to point out that this has been a historic difference between us and Europe. In this country the decision has been very much to prefer industry-driven solutions and professional self-regulation, and the idea that we need voluntary codes of conduct. Those are my emphases here. So in other words, we're being left to our own devices and to monitor ourselves, and that is very different from Europe's approach, which is still true to this day. In 2018 the General Data Protection Regulation took effect, which you all know about because it has shown up in various ways on all of the websites you use here if they have an international presence, and it's an interesting difference in assumptions. So the U.S. assumption has been that the default setting should be that everything's open. It's sort of, you know, you're in until you opt out, and the European mode has been exactly the opposite: you're out until you opt in, and the default setting should be safe, you know, protection of data. So it's a very interesting thing to consider, and as you know, there are lawsuits going on right now around this kind of issue in the States, so we'll see how this plays out. I also just always need to underscore that at its heart technology is about money, right? So this is an Economist cover. It's from 2017, but I think it still holds true, where the oil storage tanks at this harbor have been replaced by data storage tanks from Amazon, Uber, and Google, and I think that we can never forget that our data is a hot commodity, right? Lots of people want our data, and the more you give it to them, the more they'll go after you for it, right? So it's just important not to be misled, I think, about what some of the goals of these products are really all about. Okay, so having said that, let's talk a little bit about some meta-ethical issues. So one is the privacy issue. 
So we know, for example, from studies that have been done on Facebook's data that, in fact, their data storage is not private. It can be accessed. As I mentioned, apps are not private. That data can be accessed. There's also, frankly, a varying evidence base for the use of technology: some apps, it turns out, have a good evidence base, and some apps have no evidence base. Is there an evidence base for texting your patients about, you know, alcohol consumption? Mostly, we don't know; there just have not been enough studies done. I think we also have to, as physicians, be concerned about the impact of tech on patients' mental health and well-being. This is particularly true of youth, who literally go to bed with their phones, and where there's lots of evidence of decreased sleep and decreased nutrition and decreased physical exercise among youth associated with social media. But I think it's true for all of our patients, and so I think that we need to be, as physicians, thinking about that as well. I mentioned youth. You know, we do need to pay even more attention to vulnerable populations. The elderly are obviously another at-risk group, and the amount of scamming that goes on via technology is just quite extraordinary. So we need to pay special attention to them. And I would say we have a duty to advocate and to protect and promote safe use. So if you have a patient who is using technology or social media unsafely, I would say that falls in our bailiwick: we need to be knowing about that, asking about that, and helping them with that. And then, interestingly, there are some developing legal issues in this area, and I'll give you a couple of examples, because there's, I think, an emerging question of how liable psychiatrists will be if they know about dangerous use of media and they don't do anything about it with their patients. 
And of course, underscoring all of this is the fact that we have a workforce shortage and access problems in our field, right? So there is a huge push to use technology to address some of these issues, and I would say that's appropriate. But again, we have to go back to that framework of trying to balance the clinical usefulness with the potential risks. So, has anybody seen websites like this, these kinds of online therapy sites, you know, advertisements saying call your therapist? These are increasingly online. And then, of course, gazoodles of therapy apps, and you know, there is a real problem that not all of these meet specific criteria for FDA approval. The FDA has historically not taken a position on approving apps, saying that they are not under its jurisdiction. But as I mentioned, not all have an evidence base, and many of them breach privacy. Again, looking for the evidence: this is a very nice paper that came out at the beginning of last year, and it's a very thorough summary of where the evidence base is to date in using technology in mental health. These are the kinds of articles that I think we need to be sharing with each other, apprising each other of, and certainly teaching to our students, because it is an evolving field. And, you know, if something has a solid evidence base, sure, we ought to be considering using it. And then there's this whole question of doing no harm and the impact on health and well-being. And again, I say this as a child psychiatrist, where this has been, I think, a particularly contentious issue. And we have, for the first time, a surgeon general who has sounded the alarm around the relationship between technology and social media and youth mental health. These are all books that I took pictures of from my bookshelf, but, you know, we've been worrying about this issue for a long time. 
And the whole notion that maybe social media isn't good for us has been around for a long time, but it's been hard to really nail down and get clear about and establish standards around. So I think it's still an evolving field, but I think many of us would say that the mental health crisis that we're facing probably isn't just due to COVID, and that when you look at rising rates of suicide, anxiety, and depression, there's an association with the rise of these technologies that we can't deny. We also know that online habits have an impact on mental health in certain ways. So the y-axis here is mental well-being and the x-axis is daily digital screen engagement, and you can see, for the yellow, which is TV and movies, the red, video games, the green, computers, and the blue, smartphones, that as you spend more hours on screen using digital devices, your mental well-being goes down. Similarly, we know that there are actually changes in kids' brains when we look at their screen habits. For example, a 2018 study by Dowling demonstrated that once you got beyond seven hours a day of screen use, there was evidence of thinning of the cerebral cortex. So the evidence is by far most robust in heavy users of technology. And this again gets back to this question of balance, right? If somebody isn't using technology at all, I worry a little bit, just a little bit. If they're using a ton of technology, I worry a lot. What I'm looking for is a balance. And then of course, I think we can't deny this issue of persuasive design. There have been a number of studies and very persuasive documentaries now about this problem, that the technology itself is designed to get you hooked, so that, you know, they can get access to more and more of your data. So the idea that any one of us has the wherewithal to resist these forces, I think, is not fair on us. 
I also happen to think it's not fair to place the burden solely on parents for their kids. I think this actually needs to be a community-wide issue. Happily, there has been a movement recently to disallow smartphones in schools during educational time, and I think that's the kind of thing we need to be thinking about. So what are our ethical obligations for kids and other vulnerable populations? The AACAP Code of Ethics clearly states that we have a role in promoting the optimal well-being, functioning, and development of youth, both as individuals, our individual patients, and as a group, and that it's our job to minimize youth exposure to injustice. So that's part of the advocacy mandate that I'm suggesting to you. So I think we need to be assessing technology use in our patients. Nick Carson and others have written really nicely about doing a digital inventory, but you want to know about the kinds of things that might be worrisome. Are they practicing in ways that you think are leaving them open to privacy breaches through things like hacking and ransomware, or just lack of privacy protection? Are they aware of the risks that they might be facing? Are they aware of issues like ownership of personal data? Are you concerned that technology is actually having a negative impact on their mental health or behaviors? Are they becoming, for lack of a better word, addicted? In some cases, there's tracking, there's spying. So some parents, you could say, are spying on their kids, and people spy on their spouses and so on, and so that's important to be asking about. Then there's enforcing legal parameters and preventing illicit technology use. So we do have age limits. For example, Facebook has an age limit of 13. It's not enforced. We need to be asking about that. And then, of course, there's the dark net. And we know that one of the ways that sex trafficking occurs and that illicit substances are sold is through the dark net. 
So that's another thing to be asking about when you have concerns. There are some great organizations out there that provide both education and advocacy. Jim Steyer is a professor of education at Stanford who started a not-for-profit called Common Sense Media. They have a whole digital citizenship curriculum, which I think is a great model for us, and they also rate videos and things like that, so that's worth looking at. There's an organization based in New York called Children and Screens, and they focus on the effect on youth. So lots of things to learn from. I'm going to switch. In terms of technology and well-being, I think we do need to be asking about the lifestyle medicine issues, like what is the opportunity cost of screen time for our patients? What about the psychic importance of privacy, peace, being in nature? The Japanese talk about forest bathing, right? And there's evidence about the therapeutic effect of nature on mental health. If people are on their screens, they're mostly not in nature. And then, of course, there's the potential for addiction. So an emerging question, I think, is: can social media use actually be unsafe? And these are the legal cases that I wanted to share with you. In 2017, there was a case of a young woman who was found guilty of involuntary manslaughter for persuading her boyfriend through text messages to die by suicide. This was a case from my home state of Massachusetts. And so when I saw this, I was thinking, huh, what if Michelle Carter was my patient? Should I have known that she was sending these messages? Should I have been asking her about them? And what's my liability going to be if I know, in fact, that she is doing this when I take my technology inventory and I learn that she's texting a lot? Another case occurred in the UK, in what is considered a huge victory for parents, where the blame for a teen's suicide was actually placed on social media itself. 
And so that's also, I think, going to be an interesting area. As I mentioned, the Surgeon General has actually put out this advisory around social media and youth mental health. It affects all of us. How many people think that the APA has a position statement on social media and technology? Is that the kind of thing the APA would do? Yes? Raise your hand if you think yes. Oh, OK. How many people think no, that's not something they would touch? OK, so a few more yeses than noes. Well, in fact, we do. In 2018, the APA issued a statement on the risks of adolescent online behavior, and the last sentence reads that psychiatrists can play a role in community efforts to promote safe engagement with social media and other online activities. So this is something that our Communications Council and our Council on Children, Adolescents and Their Families have taken up. We now also have a position statement on the regulatory oversight of data apps and novel technologies. And I'm sorry, I don't know what happened with the formatting here; it went a little berserk. But the idea is that privacy and patient protection frameworks should be updated frequently, that any platform that hosts sensitive information should meet data privacy standards, and that treatment-focused apps should have high standards of evidence-based practice and function as a supplement to, but not a substitute for, care. We also just passed a statement on promoting health and protecting vulnerable populations from social media and online harm. So we took a position supporting federal oversight of security and privacy standards, setting reasonable content standards, and transparent self-policing efforts. And the last sentence reads: the APA supports targeted funding for culturally informed research on the impact of social media on health, the difference between passive versus active consumption of social media, and mental health messaging facilitated by these platforms. 
So this is considered by the APA to be a very important area of research. Let me just say a word about this question of active versus passive use. Some people have argued that when you're using technology in an active way, like you're playing a video game with your child, for example, or you're using a technology to teach, that that's OK, and it's really the passive stuff, where you're kind of a drifter on technology and social media, that is the problem. I will say that recent work from England suggests that the results have been mixed, and we really don't know the answer to that question yet; in fact, according to this study, there was no support for the hypothesized association. So it's an area of ongoing research. So this is my last question for you, and I'm, again, very interested in your answers. What do you think is psychiatrists' ethical responsibility in advocating for protections from the potential negative impacts of technology? So let's take a few minutes, and then share your thoughts. OK. Let's hear some thoughts. I'm going to start with a simple yes or no. How many people think it is our ethical responsibility to advocate for these protections? Raise your hand. OK. And how many people think it's not, that it's beyond our purview? OK. Interesting. Can I pick on you and ask you first why you would say it's beyond our purview and what's... Okay, so I'm just going to repeat that for those online, so let me know if I'm not getting it right, but the idea being that there's so much we already have to do in psychiatric care that to do this as well feels beyond our purview, and you might do it in an initial evaluation or talk about it a little bit, but to keep following up and following through on that seems excessive. Did I catch it? Okay. Anybody want to give voice to the other side? A yes person? Yeah. Could I ask you just to step up to the mic, since you're so close? 
That would be lovely, and then I won't have to repeat it. Okay, I was just going to say I think it's a pretty nuanced question, but I tend to think of it more like screening for sleep, to be honest. I think we will not necessarily be able to change it, but screening for it, understanding who is at risk of problematic social media use in particular, I think has a really big impact. It's a daily thing for many, many of our patients, and I think it's a bit of a blind spot for us, and at least being able to give some psychoeducation around it, being able to identify people that might be at higher risk for vulnerability and misinformation, things like that, and just having that as part of our understanding of patients, I think would be helpful. And in the future, I think considering things like restricting or asking patients to scale back their social media use as an intervention might be something to consider. In the same way one might ask them to scale back their alcohol use? Or having social media hygiene or mindful consumption, in the same way that we might counsel them on sleep hygiene. I think that would be kind of our role. Great, thank you. I think I'm hearing another Canadian accent here, yes? Yes. Great, so thank you for that, and I think there are arguments to be made on both sides. I will say that having seen patients who have been deeply harmed by technology use, I tend to want to include it. So let me just summarize here, and then we'll go into some questions. E-professionalism encompasses compliance, competence, and attention to meta-ethical issues. The guidelines are out there, but really it is this conceptual thought process of thinking through the potential risks and benefits, including the unforeseen risks and benefits, and particularly being aware of local guidelines for where you practice. 
E-professionalism includes using technology for education, quality improvement of care, research, and advocacy, and trainees and practitioners are now being required to show competency, with lifelong learning likely to follow. And it sounds like many of you would agree that it should be part of continuing medical education. Competency assessments, and exactly what those standards should be and how they would be evaluated, are really being worked through now. For sure, technology is here to stay, and just historically, I can tell you that when I first gave a talk on this topic 15 years ago, most of the psychiatrists in the room said just say no, nobody should be using technology. So oh, how far we have come. And then I just wanted to put out there this question of artificial intelligence. Obviously this is now the hottest question in technology, and there are some interesting presentations at this meeting, but I'm going to suggest that what we have learned about technology and digital media has given us some tools to think about AI, and that the very same things that I was talking about in terms of professionalism, legal requirements, and ethics are really the same principles that one would apply to artificial intelligence. And I hope that as psychiatrists we take this on, because if we don't, well, we live in a land where the culture has historically supported lack of regulation. So as that earlier slide said, we are going to have to monitor ourselves. So let's go on to some questions and discussion. I'm moving over here because our online folks have some questions, and since they've not been able to participate so far, I'm going to start with them, and then I'll be happy to... A couple of them. Can you hear me? Everyone can hear me? Yeah. So we have some questions from the audience. So if you want to have a seat, I'll be with you in two seconds. 
So this question is: what can be done to mitigate the chance of a psychiatrist getting a negative reputation from a negative online review by angry patients who did not get the meds or services they wanted? So this is talking about those physician rating sites, but also postings online and all of that. So the issue here is one of libel, like in that vignette that we looked at. Libel is written slander, and an electronic posting falls under the category of libel. I'm not a lawyer, but my understanding of libel is that the claims made have to be false, that they have to have had a negative impact on the person, and that they cannot be an opinion. So if somebody goes online and says, I thought this was the worst doctor I've ever seen, they are protected under free speech. That is an opinion, and they can say that. If they say, my doctor has systematically given me medications that have caused me heart disease, and that is true, right? So in other words, they've been on antipsychotics and things that have impacted their heart. We may not like it, but, you know, you can't argue that it's a false statement. If they say something that is patently untrue, that is sort of systematic and not a one-time deal, and you can demonstrate harm, and your lawyer can demonstrate harm in court, then you may have a case for libel. If you have negative reviews online and you don't want to take legal action, the most common and successful strategy is to try to generate your own online content so that when people search you, that's what comes up at the top of the Google search list. I shouldn't say Google, now that they're under a lawsuit; I'll say the search engine results list. So that's one strategy. It is very difficult to get content taken down, though I mean, you can try. 
There's nothing to stop you from trying, but my understanding from people who have tried to do it is that it's very difficult. You cannot coerce patients into giving positive reviews online; that strategy is considered unprofessional and in some places illegal. There was a legal case of dentists who tried to do that. Let me take one more from them. Yeah? Do you want to use the microphone? Yeah. It'll be easier to hear you. What about the opinion of a patient giving a review like, oh, this is the worst doctor I've been to, for instance, or something like that, and the patient continues to come to the doctor, so there's still a patient-doctor relationship. Maybe it's an anonymous review, or it's open, but you found out without the patient telling you, and there's still this relationship going on. How would you advise, or how would you have to think about this in a legal way? Okay, so I'm interested that you say, how would you think about this in a legal way? The first thing I would do is... The relationship way, that... Yeah. ...boundaries, to legal... Can you just say, well, if you don't trust me, or you don't want to come to me, then let's just end this relationship and I'll send you to someone else, or do you maybe suggest they take down their review, or how do you... Yeah. Yeah. So the first thing I would say is to address it clinically, because there's something going on, right? Why is this patient posting negative stuff about you and yet continuing to see you, right? So there's some conflict; whatever language, however you're oriented in your practice, there's something going on for this patient, right? And then I think you explore it with them, and you work it through to the point where you have some kind of resolution. I think that's exactly what we're trained to do, and that's what we should do, yeah. We had a question up here. Go ahead. 
Also in the line of negative reviews: assuming that you have a patient who divulged their own name when they put in their review, and vented everything because they didn't get their medication. As a clinician, can we vehemently defend ourselves by responding to their review? Can we defend ourselves? Yeah. And since they mentioned their name, would that do away with the HIPAA violation? Right. Right. If they mention their name, it's a problem. If they don't mention their name, if it's an anonymous posting, then I would say you can, but I would get legal consultation before doing anything. That's what I would do myself. I would ask about the specifics of the situation. But if they have identified themselves and you respond, then you are confirming that they are your patient, and then you have revealed private information. So I would advise against that. But even though they saw you, they went to your office, this is my name, so they exposed themselves already. Yeah. They're allowed to expose themselves; people do that all the time online. But we're not allowed to publicly call them out as our patient, right? That's the difference. Yeah. Thank you. We have a similar question here from the online folks: what can be done to mitigate the chances of a negative reputation for a psychiatrist from a negative online review by angry patients who did not get the meds or services they wanted? So, if you develop a negative reputation based on one negative online review, I think you've got your work cut out for you; that shouldn't happen. I mean, our best defense against all of these things is always providing the best care we can provide. That's true ethically, legally, and professionally, right? So if you are following standards of care and doing your best and can demonstrate that you've been doing your best, that will always be your best defense. And so, you know, we always wish that those patients who loved us posted more about us online, right? 
It doesn't always happen, and we can't force them to. But that's still, I think, our best defense. Next question. Hi. Good morning. Thank you for your talk. I am a second-year resident planning to fast-track into child and adolescent psychiatry. And something I see often with this generation in particular, Gen Z and Gen Alpha, is they love TikTok. And oftentimes, I'm running into patients telling me that they have ADHD or autism spectrum disorder or some sort of neurodivergent diagnosis. How do we, as a larger community of psychiatrists, maybe combat that? Something we were just talking about in our discussion is maybe even having a disclaimer, such as when someone presents news and it may be false, there's a disclaimer that this headline or whatever might be untrue, you know, please go here for more information, like the CDC or whatever. Can we as psychiatrists advocate for something like that, or what is your approach? Yeah. Did everybody hear the question? So, it is really tricky in terms of what to do with disclosures online that you wish had never happened. As I understand it, that's what your question is sort of about. This is why I think educating people early and often is so important. And as you will learn in child and adolescent psychiatry, young people do not have the sense of time that adults do, right? They're often not thinking way down the road. They're thinking in the moment, and so they do things that they might later regret. Disclaimers, unfortunately, in general, are not effective. So you can have disclaimers of all sorts, and in general, they don't seem to hold up in court. Again, I'm not a lawyer, and you'd need to get specific consultation around any particular use of a disclaimer, but I'm sort of wondering, have you thought of other solutions besides disclaimers, or have you seen people use other solutions to that? 
I mean, to clarify, there are individuals online who are not board certified, or who are not actually licensed, and who are providing this information to individuals. So I guess, how do we combat that? Yeah. Misinformation is a huge problem online, and again, one of the ways I think we combat it is by being very clear about our credentials. So if you read Howard Liu's article on using social media, one of the things he'll say is that it's really important for us to be online and in social media because we are trained and educated in mental health, and we can say, you know, this is Dr. DeYoung, I'm a board-certified child and adolescent psychiatrist, here's some information about using aripiprazole in young people, that kind of thing. And that quality of information needs to be increased in volume so that we can offset some of the misinformation. Mostly, we don't have the capacity to bring down other people's posted misinformation. That changed a little bit, for example, during COVID: because it was a public health emergency, there were doctors who were sanctioned in various ways for providing misinformation about COVID. We don't have that for mental health. I really wish we did. It's a free country; people, you know, can do that. I personally think it's really important for all of us, both online and in person, if we hear information that we have a different understanding of, where we have knowledge or evidence that doesn't support it, that we speak up, right? Because we, in a sense, are sort of the protectors of the best truths that we have about mental health, and it's going to be our job to really disseminate them. Thanks for your question. Thank you. A lot of good food for thought. My question is, do you have any particular questions you might ask a patient to get a sense of whether their technology use is inappropriate, just to get a sense of, like, are they using too much, or how they're using it? 
And secondly, are there any red flags we should be looking out for, like that one you mentioned about the girlfriend texting her boyfriend? I guess, how would you even know that that's happening? Are there any questions or things we should be looking for? Yeah. Great question. So I think one of the things I generally start with, it's a little bit like, for those of you who are child psychiatrists, the S2BI we use for substance use, which is a quick way of screening for substance use. So I tend to ask things like: Are you on social media? Do you like technology? What sorts of platforms are you using? What do you use them for? And then I ask them about time, like roughly how much time do you think you spend? And then I'm always interested in the impairment question: Do you ever have trouble falling asleep, or does your friend texting you every five minutes keep you up at night, that kind of stuff? And then it's like everything else in psychiatric interviewing: you're kind of snorkeling along the surface of the water, scanning, and then when you find something that perks you up, you take a deep dive and explore it more, right, and then you come back up and keep scanning. That's, I think, what we need to do. So I would just have it on your radar, be thinking about it. If they're talking about a new boyfriend or new girlfriend, how did they meet that person? Is it a real, you know, in-person relationship? Is it just online? Do they want to meet up with that person? All of that stuff is grist for the mill and all needs to be pursued, yeah. Around addiction, there are some specific screening instruments now being used, so you can Google those, and there's actually some good scholarly literature about the use of addiction instruments in this area, yeah. Yes? 
Thank you very much for your excellent presentation, very comprehensive. I am a Venezuelan clinical psychologist, and I also represent the Association of Behavioral and Cognitive Therapies, and I'm also involved with the UN in media and mental health, where we work with several things like TikTok. You know, the title of your talk is TikTok, Tweets, and Trouble, and I'm also in the Academy of Child Psychiatry social media group, so. Oh, good. So with TikTok, I really would like to know your opinion about that. Thank you very much. So, thank you. I didn't talk too much about TikTok specifically today, and part of the reason is that it's one of those politically hot issues, right? The whole question of whether TikTok should be banned or not. I'm not going to lie, I've had particular concerns about TikTok, particularly during the pandemic, where its use seemed to just really skyrocket. There was all this concern about kids, you know, these TikTok tics, motor tics affecting kids, and so it seemed to be taking on almost a life of its own, and so I have some additional concerns about TikTok, I guess I would say. But look, it's a moving target, so for example, I said I'm on Twitter, now X, and some people say to me, well, look who owns it. How could you use a social media site when that's the owner of the site? And I think that's a legitimate concern, right? We need to be thinking about that, and so, again, I don't think there's a single right answer here, but I think it's so important for us to be talking with each other and with our colleagues and coming up with decisions, maybe in your peer supervision group or in your clinical practice group, about how you'd like to think about approaching these things until, you know, some new question comes along. Sure. Question. 
So I have a question. Social media is kind of universal, and if we were to screen each and every one, everybody would be on social media. My thing is, do we have a hotline number, like we have for a crisis, that we can give out so that parents can reach out when they're seeing excessive use, or the kids themselves can reach out when they are being cyberbullied and don't want to speak about it? A hotline would probably be helpful. There is a hotline for cyberbullying. In fact, there are a number of different organizations that provide hotlines for cyberbullying. I don't have them right here with me, but if you give me your email address, I can send them to you. Not so much for these other issues, and it's an interesting idea, sort of like call your tech consultant, not to help you fix the tech problems, but to help you figure out what to do about a patient's use or a colleague's use that you're concerned about. But it's an interesting question, and this is also where I think having policies in programs and practices is so important, thinking it through ahead of time, right? So I'm going to take another online question here. This is from a child and adolescent psychiatrist in Ohio who says that on a state level, they're trying to restrict social media use below a certain age by law. What are my thoughts on this? This person does not see it as realistically being enforced, even if passed. What are the things that we can encourage our lawmakers to do instead? So yes, I mentioned we've had an age limit for Facebook use for some time now. The cutoff is 13, and it has not been enforced. And that has been a huge issue with a lot of these regulations: enforcement is so difficult to do. 
I think the only way we're really going to have success with this is if we embrace these issues as communities, and that schools, law enforcement, health professionals, and government folks get together and think about how we're going to keep each other accountable. There needs to be some kind of system in place for raising issues and concerns and learning how to manage them. Pure regulation, just saying no, or you can't do this under 13, isn't working. But I also want to reiterate what I said before, which is that I think it is absolutely unfair to expect parents to manage this on their own. The literature will say that the best predictor of someone's online and social media use is their parents' behavior. That seems to be true based on the literature, and I think we've all seen this, right? But parents need help. It's an impossible job to do alone. And so this is where I think schools and law enforcement and policy makers need to get together. It has to be from the ground up; it really can't be from the top down. Yes. Question from the audience. Again, along the same lines about reviews. What's the most that we can write in response to a patient who, you know, was just substance seeking or wasn't happy with the service, and they wrote a negative review about you? What's the most that we can respond, other than saying, I'm sorry for your experience? I'm sorry, and this was a patient who does or does not include their name? They included their name, yeah: I saw this doctor, and she or he didn't give me my Xanax, and now I'm writing this bad review about this person. Yeah. I think if you respond, you are acknowledging that this person is your patient. So it's better just not to respond to that? I would let it go. Okay. You could bring it up with them. You could call them individually and say, I'm so sorry you had this experience. Let's talk about it. Right. Right? But you can't post it. Okay. Yeah. One more online question. 
Do you think that the type of video game influences behavior, or is it more the conversations that come up in online play with others, and ideologies that people may not have been previously exposed to, or is it a combination of both? So I didn't address video games. This is a very hot topic. If you ever have a chance to hear my colleague Paul Weigel on video games, he is really the expert, or an expert, in this area. I will say that there is literature specifically about aggressive behavior and video games: exposure to very violent and aggressive video games does seem to increase the aggressive behavior of the player. Beyond that, it's not my area of expertise, and I refer you to Paul. Question from the audience. Thank you. More of a comment. Yes, go ahead. So first of all, I'm a proud parent of a 16-year-old, because when I asked him a month ago what social media he uses, he told me none. Ah. Awesome. But on the bigger picture, I sit on the statewide committee on school safety, and we focus a lot on social media use by school students. We had a couple of panels of students, from sixth grade through 12th grade, and most of them, I think, are much more skilled than their parents or school officials in the use of social media. So it's not even unusual for some of them to have multiple social media accounts on the same platform. Oh, yeah. They use one for their business of selling clothes, another one to chat with their parents, and they honestly don't think that there is anything in policy or law that would stop them from doing whatever they are doing. We're actually conducting a pretty large survey now on social media. We already have more than 3,000 responses, so we're pretty eager to know what the results are going to be. 
But considering that youth really don't see school rules or policies as influencing them, and, while you said parents model their behavior, the students seem to think that they're way past their parents in that, is there anything in a conversation you could have with a larger group, not on an individual level like with your patient, but when addressing the community, where you could actually influence social media behavior? Wow. Okay. That's a big question. First of all, thanks for sharing your thoughts. I hope you'll share your survey results with us all. And I also refer you to some of the interesting literature coming out of Norway, where they have banned smartphone use in schools, and they have some interesting data coming out now. I think we should be talking about social media in all kinds of community meetings. And the first thing I want people to know is that the Surgeon General of the United States has issued this kind of alarm, which has never happened before. So it's important that everybody know that that has happened. There are increasing numbers of lawsuits, and, again, these are still pending; a couple of them are under appeal. But it's going to be important to follow that. What I'm getting at is that the consensus seems to be that while technology is useful, it can have bad effects on mental health, and that we need to be educating the public about that. In terms of how we choose to address that, I think that's where it gets much more complicated: it can't just be regulations, it has to be shifting cultural norms. You know, from studies we know that the best way to change behavior in youth is to get the coolest kids to adopt what you want adopted, and then everybody else wants to emulate them, right? 
And so that's what I think we need to do: get those kids with lots of social capital, get people with lots of social capital, educating others about the potential harms, and that's ultimately what's going to shift things. And that's why it's so important to have, you know, the testimony of former Facebook employees and things like that, because I think that with time that will help shift things a little bit. I see we're out of time. Thank you for being such a terrific audience and for attending today.
Video Summary
In a lecture by Dr. Sandra DeYoung, a child psychiatrist, the risks and responsibilities associated with technology, especially social media, in medical practice were discussed. Dr. DeYoung, who authored a book on e-professionalism, highlighted the rapid shift and adaptation to technology over the past 35 years, emphasizing the necessity for professionals to remain competent in its use. She addressed concerns regarding privacy, compliance, and the ethical implications of technology in psychiatry. A key focus was the impact of social media on professionalism, with examples of past professionalism pitfalls and the significance of having a conceptual framework to navigate tech-related issues in medical practice.

Dr. DeYoung emphasized the unique challenges faced by psychiatrists due to the intimate nature of their work and the vulnerability of their patients, especially when technology is entwined. She outlined the conflict between advocating for regulatory oversight to protect privacy and autonomy versus encouraging open communication and innovation.

The session included discussions about the appropriateness of personal content on social media for professionals, emphasizing the separation of personal and professional identities online. It also touched on the necessity of ongoing education in tech use, suggesting that technological competencies be integrated into medical training and continuing education.

Furthermore, the ethical responsibility of psychiatrists to advocate for safe tech use among vulnerable populations was underscored, alongside the importance of balancing technology's benefits with its potential harms, especially concerning children's mental health. Overall, Dr. DeYoung advocated for a collaborative, informed approach to integrating technology in psychiatry, emphasizing guidelines, education, and ethical practice.
Keywords
Dr. Sandra DeYoung
child psychiatry
technology risks
social media
e-professionalism
privacy compliance
ethical implications
professionalism pitfalls
regulatory oversight
tech education
vulnerable populations
mental health