
As artificial intelligence (AI) tools become increasingly mainstream, they can potentially transform neurology clinical practice by improving patient care and reducing clinician workload. Critically evaluating these AI tools for clinical practice is important for successful implementation.
In this episode, Katie Grouse, MD, FAAN speaks with Peter Hadar, MD, MS, coauthor of the article “Clinical Applications of Artificial Intelligence in Neurology Practice” in the Continuum® April 2025 Neuro-ophthalmology issue.
Dr. Grouse is a Continuum® Audio interviewer and a clinical assistant professor at the University of California San Francisco in San Francisco, California.
Dr. Hadar is an instructor of neurology at Harvard Medical School and an attending physician at the Massachusetts General Hospital in Boston, Massachusetts.
Additional Resources
Read the article: Clinical Applications of Artificial Intelligence in Neurology Practice
Subscribe to Continuum®: shop.lww.com/Continuum
Continuum® Aloud (verbatim audio-book style recordings of articles available only to Continuum® subscribers): continpub.com/Aloud
More about the American Academy of Neurology: aan.com
Social Media
facebook.com/continuumcme
@ContinuumAAN
Guest: @PeterNHadar
Full episode transcript available here
Dr Jones: This is Dr Lyell Jones, Editor-in-Chief of Continuum. Thank you for listening to Continuum Audio. Be sure to visit the links in the episode notes for information about subscribing to the journal, listening to verbatim recordings of the articles, and exclusive access to interviews not featured on the podcast.
Dr Grouse: This is Dr Katie Grouse. Today I'm interviewing Dr Peter Hadar about his article on clinical applications of artificial intelligence in neurology practice, which he wrote with Dr Lydia Moura. This article appears in the April 2025 Continuum issue on neuro-ophthalmology. Welcome to the podcast, and please introduce yourself to our audience.
Dr Hadar: Hi, thanks for having me on, Katie. My name is Dr Peter Hadar. I'm currently an instructor over at Mass General Hospital, Harvard Medical School, and I'm excited to talk more about AI and how it's going to change our world, hopefully for the better.
Dr Grouse: We're so excited to have you. The application of AI in clinical practice is such an exciting and rapidly developing topic, and I'm so pleased to have you here to talk about your article, which I found to be absolutely fascinating. To start, I'd like to hear what you hope will be the key takeaway from your article with our listeners.
Dr Hadar: Yeah, thank you. The main point of the article is that AI in medicine is a tool. It's a wonderful tool that we should be cautiously optimistic about. But the important thing is for doctors, providers to be advocates on their behalf and on behalf of their patients for the appropriate use of this tool, because there are promises and pitfalls just with any tool. And I think in the article we detail a couple ways that it can be used in diagnostics, in clinical documentation, in the workflow, all ways that can really help providers. But sometimes the devil is in the details. So, we get into that as well.
Dr Grouse: How did you become interested in AI and its application, specifically in the practice of neurology?
Dr Hadar: When I was a kid, as most neurologists are, I was- I nerded out on a lot of sci-fi books, and I was really into Isaac Asimov and some of his robotics, which kind of talks about the philosophy of AI and how AI will be integrated in the future. As I got into neurology, I started doing research in neurology, and a lot of folks- if you're familiar with AI and machine learning, statistics can overlap a lot with machine learning. So slowly but surely, I started using statistical methods, machine learning methods, in some of my neurology research, and that's kind of what brought me to where I am today.
Dr Grouse: And thinking about and talking about AI, could you briefly summarize a few important terms that we might be talking about, such as artificial intelligence, generative AI, machine learning, etcetera?
Dr Hadar: It's a little difficult, because some of these terms are nebulous and some of these terms are used by the lay public differently than other folks would use them. But in general, artificial intelligence is kind of the ability of machines or computers to communicate independently, similarly to how humans would do so. And there are kind of different levels of AI. There's this very hard AI that people worry about, with kind of a Terminator-like full ability to replicate a human, effectively. And there are other forms of narrow AI, which are actually more of what we're talking about today, where it's very kind of specific, task-based applications of machine learning in which, even if it's very complex, the AI tools, the machine learning tools are able to give you a result. And just some other terms, I guess, out there. You hear a lot about generative AI. There are a lot of these companies and different algorithms that incorporate generative AI, and that usually kind of creates something, kind of from scratch, based on a lot of data. So, it can create pictures, it can create new text if you just ask it. Other terms that can be used are natural language processing, which is a big part of some of the hospital records: when AI tools read hospital records and can summarize something, or can translate things. So, it turns human speech into these results that you look for. And I guess other terms like large language models are something that have also come into prominence, and they rely a lot on natural language processing, being able to understand human speech, interpret it, and come up with the results that you want.
Dr Grouse: Thank you, that's really helpful. Building on that, what are some of the current clinical applications of AI that we may already be using in our neurologic practice and may not even be aware that that's what that is?
Dr Hadar: It depends on which medical record system you use, but a very common one are some of the clinical alerts that people might get, although some of them are pretty basic and they can say, you know, if the sodium is this level, you get an alert. But sometimes they do incorporate fancier machine learning tools to say, here's a red flag. You really should think about contacting the patient about this. And we can talk about it as well. It might encourage burnout with all the different flags. So, it's not a perfect tool. But these sorts of things, typically in the setting of alerts, are the most common use. Sorry, and another one is in folks who do stroke, there are a lot of stroke algorithms with imaging that can help detect where the strokes occur. And that's a heavy machine learning field of image processing, image analysis for rapid detection of stroke.
Dr Grouse: That's really interesting. I think my understanding is that AI has been used specifically for radiology interpretation applications for some time now. Is that right?
Dr Hadar: In some ways. Actually, my background is in neuroimaging analysis, and we've been doing a lot of it. I've been doing it for years. There's still a lot of room to go, but it's really getting there in some ways. My suspicion is that in the coming years, it’s going to be similar to how anesthesiologists at one point were actively bagging people in the fifties, and then you develop machines that can kind of do it for you. At some point there's going to be a prelim radiology read that is not just done by the resident or fellow, but is done by the machine. And then another radiologist would double check it and make sure. And I think that's going to happen in our lifetime.
Dr Grouse: Wow, that's absolutely fascinating. What are some potential applications of AI in neurologic practice that may be most high-yield to improve patient care, patient access, and even reduce physician burnout?
Dr Hadar: These are separate sort of questions, but they're all sort of interlinked. I think one of the big aspects of patient care in the last few years, especially with the electronic medical record, is patients have become much more their own advocates and we focus a lot more on patient autonomy. So, they are reaching out to providers outside of appointments. This can kind of lead to physician burnout. You have to answer all these messages through the electronic medical record. And so having, effectively, digital twins of yourself, AI version of yourself, that can answer the questions for the patient on your off times is one of the things that can definitely help with patient care. In terms of access, I think another aspect is having integrated workflows. So, being able to schedule patients efficiently, effectively, where more difficult patients automatically get one-hour appointments, patients who have fewer medical difficulties might get shorter appointments. That's another big improvement. Then finally, in terms of physician burnout, having ambient intelligence where notes can be written on your behalf and you just need to double-check them after allows you to really have a much better relationship with the patients. You can actually talk with them one on one and just focus on kind of the holistic care of the patient. And I think that's- being less of a cog in the machine and focusing on your role as a healer would be actually very helpful with the implementation of some of these AI tools.
Dr Grouse: You mentioned ambient technology and specifically ambient documentation. And certainly, this is an area that I feel a lot of excitement about from many physicians, a lot of anticipation to be able to have access to this technology. And you mentioned already some of the potential benefits. What are some of the potential… the big wins, but then also potential drawbacks of ambient documentation?
Dr Hadar: Just to kind of summarize, the ambient intelligence idea is using kind of an environmental AI system that, without being very obtrusive, just is able to record, able to detect language and process it, usually into notes. So, effectively like an AI scribe that is not actually in the appointment. So, the clear one is that---and I've seen this as well in my practice---it's very difficult to really engage with the patient and truly listen to what they're saying and form that relationship when you're behind a computer and behind a desk. And having that one-on-one interaction where you just focus on the patient, learn everything, and basically someone else takes notes for you is a very helpful component of it.
Some of the drawbacks, though, some of it has to do with the existing technology. It's still not at the stage where it can do everything. It can have errors in writing down the medication, writing down the exact doses. It can't really, at this point, detect some of the apprehensions and some of the nonverbal cues that patients and providers may kind of state. Then there's also the big one where a lot of these are still done by startups and other companies where privacy may be an issue, and a lot of patients may feel very uncomfortable with having ambient intelligence tools introduced into their clinical visit, having a machine basically come between the doctor and the patient. But I think that over time these apprehensions will lessen. A lot of the security will improve and be strengthened, and I think that it's going to be incorporated a lot more into clinical practice.
Dr Grouse: Yeah, well, we'll all be really excited to see how that technology develops. It certainly seems like it has a lot of promise. You mentioned in your article a lot about how AI can be used to improve screening for patients for certain types of conditions, and that certainly seems like an obvious win. But as I was reading the article, I couldn't help but worry that, at least in the short term, these tools could translate into more work for busy neurologists and more demand for access, which is, you know, already, you know, big problems in our field. How can tools like these, such as, like, for instance, the AI fundoscopic screening for vascular cognitive risk factors help without adding to these existing burdens?
Dr Hadar: It's a very good point. And I think it's one of the central points of why we wanted to write the article is that these AI in medicine, it's, it's a tool like any other. And just like when the electronic medical record came into being, a lot of folks thought that this was going to save a lot of time. And you know, some people would say that it actually worsened things in a way. And when you use these diagnostic screening tools, there is an improvement in efficiency, there is an improvement in patient care. But it's important that doctors, patients advocate for this to be value-based and not revenue-based, necessarily. And it doesn't mean that suddenly the appointments are shorter, that now physicians have to see twice as many patients and then patients just have less of a relationship with their provider. So, it's important to just be able to integrate these tools in an appropriate way in which the provider and the patient both benefit.
Dr Grouse: You mentioned earlier about the digital twin. Certainly, in your article you mentioned, you know, that idea along with the idea of the potential development of virtual chatbot visits or in-person visits with a robot neurologist. And I read all this with, I think, equal parts excitement, horror, and fear. Can you tell us more about what these concepts are, and how far are we from seeing technology like this in our clinics, and maybe even, what are the risks we need to be thinking about with these?
Dr Hadar: Yeah. So, I mean, I definitely think that we will see implementation of some of these tools in our lifetime. I'm not sure if we're going to have a full walking, talking robot doing some of the clinical visits. But I do think that, especially as we start doing a lot more virtual visits, it is very easy to imagine that there will be some sort of video AI doctor that can serve as, effectively, a digital twin of me or someone else, that can see patients and diagnose them. The idea behind the digital twin is that it's kind of like an AI version of yourself. So, while you only see one patient, an AI twin can go and see two or three other patients. They could also, if the patients send you messages, can respond to those messages in a way that you would, based on your training and that sort of thing. So, it allows for the ability to be in multiple places at once.
One of the risks of this is, I guess, overreliance on the technology, where if you just say, we're just going to have a chatbot do everything for us and then not look at the results, you really run the risk of the chatbot just recommending really bad things. And there is training to be had. Maybe in fifty years the chatbot will be at the same level as a physician, but there's still a lot of room for improvement. I personally, I think that my suspicion as to where things will go are for very simple visits in the future and in our lifetime. If someone is having a cold or something like that and it goes to their primary care physician, a chatbot or something like that may be of really beneficial use. And it'll help segment out the different groups of simple diagnosis, simple treatments can be seen by these robots, these AI, these machine learning tools; and some of the more complex ones, at least for the early implementation of this will be seen by more specialized providers like neurologists and subspecialist neurologists too.
Dr Grouse: That certainly seems reasonable, and it does seem that the more simple algorithmic things are always where these technologies will start, but it'll be interesting to see where things can go with more complex areas. Now I wanted to switch gears a little bit in the article- and I thought this was really important because I see it as being certainly one of the bigger drawbacks of AI, is that despite the many benefits of artificial intelligence, AI can unfortunately perpetuate systemic bias. And I'm wondering if you could tell us a little bit more about how this happened?
Dr Hadar: I know I'm beating a dead horse on this, but AI is a tool like any other. And the problem with it is that what you put in is very similar to what you get out. And there's this idea in computer science of “garbage in, garbage out”. If you include a lot of data that has a lot of systemic biases already in the data, you're going to get results that perpetuate these things. So, for instance, if in dermatologic practices, if you just had a data set that included people of one skin color or one race and you attempted to train a model that would be able to detect skin cancer lesions, that model may not be easily applicable to people of other races, other ethnicities, other skin colors. And that can be very damaging for care. And it can actually really, really hurt the treatments for a lot of the patients.
So that is one of the, kind of, main components of the systemic biases in AI. The way we mitigate them is by being aware of it and actually implementing, I guess, really hard stops on a lot of these tools before they get into practice. Being sure, did your data set include this breakdown of sex and gender, of race and ethnicity? So that the stuff you have in the AI tool is not just a very narrow, focused application, but can be generalized to a large population, not just of one community, one ethnic group, racial group, one country, but can really be generalized throughout the world for many patients.
Dr Grouse: The first step is being aware of it, and hopefully these models will be built thoughtfully to help mitigate this as much as possible. I wanted to ask as well, another concern about AI is the safety of private data. And I'm wondering, as we're starting to do things like use ambient documentation, AI scribe, and other types of technologies like this, what can we tell our patients who are concerned about the safety of their personal data collected via these programs, particularly when they're being stored or used with outside companies that aren't even in our own electronic medical records system?
Dr Hadar: Yeah, it's a very good question, and I think it's one of the major limitations of the current implementation of AI into clinical practice, because we still don't really have great standards---medical standards, at least---for storing this data, how to analyze this data. And my suspicion is that at some point in the future, we're going to need to have a HIPAA compliance that's going to be updated for the 21st century, that will incorporate the appropriate use of these tools, the appropriate use of these data storage, of data storage beyond just PHI. Because there's a lot more that goes into it. I would say that the important thing for how to implement this, and for patients to be aware of, is being very clear and very open with informed consent. If you're using a company that isn't really transparent about their data security and their data sharing practices, that needs to be clearly stated to the patient. If their data is going to be shared with other people, reanalyzed in a different way, many patients will potentially consider not participating in an AI implementation in clinic. And I think the other key thing is that this should be, at least initially, an opt-in approach as opposed to an opt-out approach. So patients really have- can really decide and have an informed opinion about whether or not they want to participate in the AI implementation in medicine.
Dr Grouse: Well, thank you so much for explaining that. And it does certainly sound like there's a lot of development that's going to happen in that space as we are learning more about this and the use of it becomes more prevalent. Now, I also wanted to ask, another good point that you made in your article---and I don't think comes up enough in this area, but likely will as we're using it more---AI has a cost, and some of that cost is just the high amount of data and computational processing needed to use it, as well as the effects on the environment from all this energy usage. Given this drawback of AI, how can we think about potential costs versus the benefits, the more widespread use of this technology? Or how should we be thinking about it?
Dr Hadar: It's part of a balance of the costs and benefits, effectively, is that AI---and just to kind of name some of them, when you have these larger data centers that are storing all this data, it requires a lot of energy consumption. It requires actually a lot of water to cool these things because they get really hot. So, these are some of the key environmental factors. And at this point, it's not as extreme as it could be, but you can imagine, as the world transitions towards an AI future, these data centers will become huge, massive, require a lot of energy. And as long as we still use a lot of nonrenewable resources to power our world, our civilization, I think this is going to be very difficult. It's going to allow for more carbon in the atmosphere, potentially more climate change.
So, being very clear about using sustainable practices for AI usage, whether it be having data centers specifically use renewable resources, have clear water management guidelines, that sort of thing, will allow for AI to grow, but in a sustainable way that doesn't damage our planet. In terms of the financial costs… so, AI is not free. However, on a given computer, if you want to run some basic AI analysis, you can definitely do it on any laptop you have, and sometimes even on your phone. But for some of these larger models, kind of the ones that we're talking about in the medical field, it really requires a lot of computational power. And this stuff can be very expensive and can get very expensive very quickly, as anyone who's used any of these web service providers can attest to. So, it's very important to be clear-eyed about problems with implementation, because some of these costs can be very prohibitive. Costs can run into the thousands, and you can quickly rack up a lot of money for some very basic analyses if you want to do them in a very rapid way, in a very effective way.
Dr Grouse: That's a great overview. You know, something that I think we're all going to be having to think about a lot more as we're incorporating these technologies. So, important conversations I hope we're all having, and in our institutions as we're making these decisions. I wanted to ask, certainly, as some of our listeners who may be still in the training process are hearing you talk about this and are really excited about AI and implementation of technology in medicine, what would you recommend to people who want to pursue a career in this area as you have done?
Dr Hadar: So, I think one of the important things for trainees to understand are, there are different ways that they can incorporate AI into their lives going forward as they become more seasoned doctors. There are clinical ways, there are research ways, there are educational ways. A lot of the research ways, I'm one of the researchers, you can definitely incorporate AI. You can learn online. You can learn through books about how to use machine learning tools to do your analysis, and it can be very helpful. But I think one of the things that is lacking is a clinician who can traverse both the AI and patient care fields and be able to introduce AI in a very effective way that really provides value to the patients and improves the care of patients. So that means if a hospital system that a trainee is eventually part of wants to implement ambient technology, it's important for physicians to understand the risks, the benefits, how they may need to adapt to this. And to really advocate and say, just because we have this ambient technology doesn't mean now we see fifty different patients, and then you're stuck with the same issue of a worse patient-provider relationship. One of the reasons I got into medicine was to have that patient-provider interaction to not only be kind of a cog in the hospital machine, but to really take on a role as a healer and a physician. And one of the benefits of these AI tools is that in putting the machine in medicine, you can also put the humanity back in medicine at times. And I think that's a key component that trainees need to take to heart.
Dr Grouse: I really appreciate you going into that, and sounds like there's certainly need. Hoping some of our listeners today will consider careers in pursuing AI and other types of technologies in medicine. I really appreciate you coming to talk with us today. I think this is just such a fascinating topic and an area that everybody's really excited about, and hoping that we'll be seeing more of this in our lives and hopefully improving our clinical practice. Thank you so much for talking to us about your article on AI in clinical neurology. It was a fascinating topic and I learned a lot.
Dr Hadar: Thank you very much. I really appreciate the conversation, and I hope that trainees, physicians, and others will gain a lot and really help our patients through this.
Dr Grouse: So again, today I've been interviewing Dr Peter Hadar about his article on clinical applications of artificial intelligence in neurology practice, which he wrote with Dr Lydia Moura. This article appears in the most recent issue of Continuum on neuro-ophthalmology. Be sure to check out Continuum Audio episodes from this and other issues. And thank you to our listeners for joining today.
Dr Monteith: This is Dr Teshamae Monteith, Associate Editor of Continuum Audio. If you've enjoyed this episode, you'll love the journal, which is full of in-depth and clinically relevant information important for neurology practitioners. Use the link in the episode notes to learn more and subscribe. Thank you for listening to Continuum Audio.
4.6
7474 ratings
As artificial intelligence (AI) tools become increasingly mainstream, they can potentially transform neurology clinical practice by improving patient care and reducing clinician workload. Critically evaluating these AI tools for clinical practice is important for successful implementation.
In this episode, Katie Grouse, MD, FAAN speaks with Peter Hadar, MD, MS, coauthor of the article “Clinical Applications of Artificial Intelligence in Neurology Practice” in the Continuum® April 2025 Neuro-ophthalmology issue.
Dr. Grouse is a Continuum® Audio interviewer and a clinical assistant professor at the University of California San Francisco in San Francisco, California.
Dr. Hadar is an instructor of neurology at Harvard Medical School and an attending physician at the Massachusetts General Hospital in Boston, Massachusetts.
Additional Resources
Read the article: Clinical Applications of Artificial Intelligence in Neurology Practice
Subscribe to Continuum®: shop.lww.com/Continuum
Continuum® Aloud (verbatim audio-book style recordings of articles available only to Continuum® subscribers): continpub.com/Aloud
More about the American Academy of Neurology: aan.com
Social Media
facebook.com/continuumcme
@ContinuumAAN
Guest: @PeterNHadar
Full episode transcript available here
Dr Jones: This is Dr Lyell Jones, Editor-in-Chief of Continuum. Thank you for listening to Continuum Audio. Be sure to visit the links in the episode notes for information about subscribing to the journal, listening to verbatim recordings of the articles, and exclusive access to interviews not featured on the podcast.
Dr Grouse: This is Dr Katie Grouse. Today I'm interviewing Dr Peter Hadar about his article on clinical applications of artificial intelligence in neurology practice, which he wrote with Dr Lydia Moura. This article appears in the April 2025 Continuum issue on neuro-ophthalmology. Welcome to the podcast, and please introduce yourself to our audience.
Dr Hadar: Hi, thanks for having me on, Katie. My name is Dr Peter Hadar. I'm currently an instructor over at Mass General Hospital, Harvard Medical School, and I'm excited to talk more about AI and how it's going to change our world, hopefully for the better.
Dr Grouse: We're so excited to have you. The application of AI in clinical practice is such an exciting and rapidly developing topic, and I'm so pleased to have you here to talk about your article, which I found to be absolutely fascinating. To start, I'd like to hear what you hope will be the key takeaway from your article with our listeners.
Dr Hadar: Yeah, thank you. The main point of the article is that AI in medicine is a tool. It's a wonderful tool that we should be cautiously optimistic about. But the important thing is for doctors, providers to be advocates on their behalf and on behalf of their patients for the appropriate use of this tool, because there are promises and pitfalls just with any tool. And I think in the article we detail a couple ways that it can be used in diagnostics, in clinical documentation, in the workflow, all ways that can really help providers. But sometimes the devil is in the details. So, we get into that as well.
Dr Grouse: How did you become interested in AI and its application, specifically in the practice of neurology?
Dr Hadar: When I was a kid, as most neurologists are, I was- I nerded out on a lot of sci-fi books, and I was really into Isaac Asimov and some of his robotics, which kind of talks about the philosophy of AI and how AI will be integrated in the future. As I got into neurology, I started doing research neurology and a lot of folks, if you're familiar with AI and machine learning, statistics can overlap a lot with machine learning. So slowly but surely, I started using statistical methods, machine learning methods, in some of my neurology research and kind of what brought me to where I am today.
Dr Grouse: And thinking about and talking about AI, could you briefly summarize a few important terms that we might be talking about, such as artificial intelligence, generative AI, machine learning, etcetera?
Dr Hadar: It's a little difficult, because some of these terms are nebulous and some of these terms are used in the lay public differently than other folks would use it. But in general, artificial intelligence is kind of the ability of machines or computers to communicate independently. It’s similar to as humans would do so. And there are kind of different levels of AI. There's this very hard AI where people are worried about with kind of terminator-full ability to replicate a human, effectively. And there are other forms of narrow AI, which are actually more of what we're talking about today, and where it's very kind of specific, task-based applications of machine learning in which even if it's very complex, the AI tools, the machine learning tools are able to give you a result. And just some other terms, I guess out there. You hear a lot about generative AI. There's a lot of these companies and different algorithms that incorporate generative AI, and that usually kind of creates something, kind of from scratch, based on a lot of data. So, it can create pictures, it can create new text if you just ask it. Other terms that can be used are natural language processing, which is a big part of some of the hospital records. When AI tools read hospital records and can summarize something, if it can translate things. So, it turns human speech into these results that you look for. And I guess other terms like large language models are something that also have come into prominence and they rely a lot on natural language processing, being able to understand human speech, interpret it and come up with the results that you want.
Dr Grouse: Thank you, that's really helpful. Building on that, what are some of the current clinical applications of AI that we may already be using in our neurologic practice and may not even be aware that that's what that is?
Dr Hadar: It depends on which medical record system you use, but a very common one are some of the clinical alerts that people might get, although some of them are pretty basic and they can say, you know, if the sodium is this level, you get an alert. But sometimes they do incorporate fancier machine learning tools to say, here's a red flag. You really should think about contacting the patient about this. And we can talk about it as well. It might encourage burnout with all the different flags. So, it's not a perfect tool. But these sorts of things, typically in the setting of alerts, are the most common use. Sorry, and another one is in folks who do stroke, there are a lot of stroke algorithms with imaging that can help detect where the strokes occur. And that's a heavy machine learning field of image processing, image analysis for rapid detection of stroke.
Dr Grouse: That's really interesting. I think my understanding is that AI has been used specifically for radiology interpretation applications for some time now. Is that right?
Dr Hadar: In some ways. Actually, my background is in neuroimaging analysis, and we've been doing a lot of it; I've been doing it for years. There's still a long way to go, but it's really getting there in some ways. My suspicion is that in the coming years it's going to be similar to how anesthesiologists were actively bagging people in the fifties, and then machines were developed that could do it for you. At some point there's going to be a prelim radiology read that is done not just by the resident or fellow but by the machine, and then a radiologist will double-check it and make sure. And I think that's going to happen in our lifetime.
Dr Grouse: Wow, that's absolutely fascinating. What are some potential applications of AI in neurologic practice that may be most high-yield to improve patient care, patient access, and even reduce physician burnout?
Dr Hadar: These are separate sorts of questions, but they're all interlinked. I think one of the big aspects of patient care in the last few years, especially with the electronic medical record, is that patients have become much more their own advocates, and we focus a lot more on patient autonomy. So, they are reaching out to providers outside of appointments. This can lead to physician burnout: you have to answer all these messages through the electronic medical record. Having, effectively, a digital twin of yourself, an AI version of yourself, that can answer questions for the patient during your off times is one of the things that can definitely help with patient care. In terms of access, I think another aspect is having integrated workflows. Being able to schedule patients efficiently and effectively, where more complicated patients automatically get one-hour appointments and patients with fewer medical difficulties might get shorter appointments, is another big improvement. Then finally, in terms of physician burnout, having ambient intelligence, where notes can be written on your behalf and you just need to double-check them afterward, allows you to have a much better relationship with the patient. You can actually talk with them one on one and just focus on the holistic care of the patient. And I think that being less of a cog in the machine and focusing on your role as a healer would actually be very helpful with the implementation of some of these AI tools.
Dr Grouse: You mentioned ambient technology, and specifically ambient documentation. Certainly, this is an area where I sense a lot of excitement from many physicians, and a lot of anticipation about getting access to this technology. You've mentioned some of the potential benefits already. What are some of the big wins, but also the potential drawbacks, of ambient documentation?
Dr Hadar: Just to summarize, the idea of ambient intelligence is using an environmental AI system that, without being very obtrusive, is able to record, detect language, and process it, usually into notes. So, effectively, an AI scribe that is not physically present in the appointment. The clear win, and I've seen this as well in my practice, is that it's very difficult to really engage with the patient, truly listen to what they're saying, and form that relationship when you're behind a computer and behind a desk. Having a one-on-one interaction where you just focus on the patient and learn everything, while effectively someone else takes notes for you, is a very helpful component of it.
Some of the drawbacks, though, have to do with the existing technology. It's still not at the stage where it can do everything. It can make errors in writing down medications and exact doses. It can't really, at this point, detect some of the apprehensions and nonverbal cues that patients and providers may convey. Then there's the big one: a lot of these tools are still made by startups and other companies, where privacy may be an issue, and a lot of patients may feel very uncomfortable having ambient intelligence tools introduced into their clinical visit, having a machine basically come between the doctor and the patient. But I think that over time these apprehensions will lessen, a lot of the security will improve and be strengthened, and it's going to be incorporated a lot more into clinical practice.
Dr Grouse: Yeah, well, we'll all be really excited to see how that technology develops. It certainly seems to have a lot of promise. Your article also discusses how AI can be used to improve screening of patients for certain types of conditions, and that certainly seems like an obvious win. But as I was reading the article, I couldn't help but worry that, at least in the short term, these tools could translate into more work for busy neurologists and more demand for access, which are already big problems in our field. How can tools like these, such as AI fundoscopic screening for vascular cognitive risk factors, help without adding to these existing burdens?
Dr Hadar: It's a very good point, and it's one of the central points of why we wanted to write the article: AI in medicine is a tool like any other. When the electronic medical record came into being, a lot of folks thought it was going to save a lot of time, and some people would say it actually worsened things in a way. When you use these diagnostic screening tools, there is an improvement in efficiency and an improvement in patient care. But it's important that doctors and patients advocate for this to be value-based, not necessarily revenue-based. It shouldn't mean that suddenly the appointments are shorter, that physicians now have to see twice as many patients, and that patients just have less of a relationship with their provider. So, it's important to integrate these tools in an appropriate way in which the provider and the patient both benefit.
Dr Grouse: You mentioned the digital twin earlier. In your article, you discuss that idea along with the potential development of virtual chatbot visits, or even in-person visits with a robot neurologist. And I read all of this with, I think, equal parts excitement and fear. Can you tell us more about what these concepts are, how far we are from seeing technology like this in our clinics, and what risks we need to be thinking about?
Dr Hadar: Yeah. I definitely think we will see implementation of some of these tools in our lifetime. I'm not sure we're going to have a full walking, talking robot doing clinical visits. But I do think, especially as we start doing a lot more virtual visits, it's very easy to imagine some sort of video AI doctor that can serve as, effectively, a digital twin of me or someone else, and can see patients and diagnose them. The idea behind the digital twin is that it's an AI version of yourself. So, while you see one patient, an AI twin can go and see two or three other patients. It could also, if patients send you messages, respond to those messages the way you would, based on your training. So, it allows for the ability to be in multiple places at once.
One of the risks is overreliance on the technology. If you just say, we're going to have a chatbot do everything for us, and then you don't look at the results, you really run the risk of the chatbot recommending really bad things. And there is training to be had; maybe in fifty years the chatbot will be at the same level as a physician, but there's still a lot of room for improvement. Personally, my suspicion as to where things will go, in our lifetime, is toward very simple visits. If someone has a cold or something like that and goes to their primary care physician, a chatbot may be really beneficial. It will help segment out the different groups: simple diagnoses and simple treatments can be seen by these AI and machine learning tools, and some of the more complex ones, at least in the early implementation, will be seen by more specialized providers like neurologists and subspecialist neurologists.
Dr Grouse: That certainly seems reasonable, and simple, algorithmic tasks do seem to be where these technologies always start; it'll be interesting to see where things can go in more complex areas. Now I wanted to switch gears a little. You made a point in the article that I thought was really important, because I see it as certainly one of the bigger drawbacks of AI: despite its many benefits, AI can unfortunately perpetuate systemic bias. I'm wondering if you could tell us a little more about how this happens?
Dr Hadar: I know I'm beating a dead horse on this, but AI is a tool like any other. And the problem is that what you put in is very similar to what you get out; there's this idea in computer science of “garbage in, garbage out.” If you include a lot of data that already has systemic biases in it, you're going to get results that perpetuate those biases. For instance, in dermatologic practice, if you had a dataset that included people of only one skin color or one race and you attempted to train a model to detect skin cancer lesions, that model may not be easily applicable to people of other races, other ethnicities, other skin colors. That can be very damaging for care, and it can really hurt treatment for a lot of patients.
So that is one of the main components of systemic bias in AI. The way we mitigate it is by being aware of it and actually implementing really hard stops on a lot of these tools before they get into practice: making sure your dataset includes an appropriate breakdown of sex and gender, of race and ethnicity, so that what goes into the AI tool is not just a very narrow, focused application but can be generalized to a large population. Not just one community, one ethnic group, one racial group, one country, but really generalized throughout the world for many patients.
Dr Grouse: The first step is being aware of it, and hopefully these models will be built thoughtfully to help mitigate this as much as possible. I wanted to ask as well, another concern about AI is the safety of private data. And I'm wondering, as we're starting to do things like use ambient documentation, AI scribe, and other types of technologies like this, what can we tell our patients who are concerned about the safety of their personal data collected via these programs, particularly when they're being stored or used with outside companies that aren't even in our own electronic medical records system?
Dr Hadar: Yeah, it's a very good question, and I think it's one of the major limitations of the current implementation of AI in clinical practice, because we still don't really have great standards, medical standards at least, for storing and analyzing these data. My suspicion is that at some point in the future we're going to need HIPAA compliance updated for the 21st century, incorporating the appropriate use of these tools and the appropriate storage of these data, beyond just PHI, because there's a lot more that goes into it. I would say the important thing for implementation, and for patients to be aware of, is being very clear and very open with informed consent. If you're using a company that isn't really transparent about its data security and data sharing practices, that needs to be clearly stated to the patient. If their data are going to be shared with other people or reanalyzed in a different way, many patients may consider not participating in an AI implementation in clinic. And I think the other key thing is that this should be, at least initially, an opt-in approach as opposed to an opt-out approach, so patients can really decide and have an informed opinion about whether or not they want to participate in the AI implementation in medicine.
Dr Grouse: Well, thank you so much for explaining that. It certainly sounds like there's a lot of development still to happen in that space as we learn more and the use of these tools becomes more prevalent. Now, I also wanted to ask about another good point you made in your article, one that I don't think comes up enough in this area but likely will as we use AI more: AI has a cost, and part of that cost is the high amount of data and computational processing needed to use it, as well as the effects on the environment from all this energy usage. Given this drawback, how should we think about the potential costs versus the benefits of more widespread use of this technology?
Dr Hadar: It's effectively a balance of costs and benefits. Just to name some of the costs: when you have these large data centers storing all these data, it requires a lot of energy consumption, and it actually requires a lot of water to cool them, because they get really hot. Those are some of the key environmental factors. At this point it's not as extreme as it could be, but you can imagine that as the world transitions toward an AI future, these data centers will become huge, massive, and require a lot of energy. And as long as we still use a lot of nonrenewable resources to power our world, our civilization, I think this is going to be very difficult; it's going to put more carbon into the atmosphere and potentially drive more climate change.
So, being very clear about using sustainable practices for AI, whether that means having data centers specifically use renewable resources, having clear water management guidelines, or that sort of thing, will allow AI to grow in a sustainable way that doesn't damage our planet. In terms of financial costs: AI is not free. On a given computer, if you want to run some basic AI analysis, you can definitely do it on any laptop you have, and sometimes even on your phone. But some of these larger models, the ones we're talking about in the medical field, really require a lot of computational power, and that can get very expensive very quickly, as anyone who's used any of these web service providers can attest. So, it's very important to be clear-eyed about problems with implementation, because some of these costs can be prohibitive; you can run thousands of analyses and quickly rack up a lot of money for some very basic work if you want to do it in a very rapid, very effective way.
Dr Grouse: That's a great overview. It's something that I think we're all going to have to think about a lot more as we incorporate these technologies, and these are important conversations I hope we're all having in our institutions as we make these decisions. I wanted to ask: for our listeners who may still be in the training process, who hear you talk about this and are really excited about AI and the implementation of technology in medicine, what would you recommend to those who want to pursue a career in this area, as you have done?
Dr Hadar: I think one of the important things for trainees to understand is that there are different ways they can incorporate AI into their lives going forward as they become more seasoned doctors: clinical ways, research ways, educational ways. On the research side, and I'm one of those researchers, you can definitely incorporate AI; you can learn online and through books how to use machine learning tools for your analyses, and they can be very helpful. But I think one of the things that is lacking is a clinician who can traverse both the AI and patient care fields and introduce AI in a very effective way that really provides value to patients and improves their care. That means if a hospital system a trainee is eventually part of wants to implement ambient technology, it's important for physicians to understand the risks, the benefits, and how they may need to adapt, and to really advocate and say: just because we have this ambient technology doesn't mean we now see fifty different patients, leaving us stuck with the same issue of a worse patient-provider relationship. One of the reasons I got into medicine was to have that patient-provider interaction, to not just be a cog in the hospital machine but to really take on a role as a healer and a physician. And one of the benefits of these AI tools is that in putting the machine in medicine, you can also put the humanity back in medicine. I think that's a key component trainees need to take to heart.
Dr Grouse: I really appreciate you going into that, and it sounds like there's certainly a need. I'm hoping some of our listeners today will consider careers pursuing AI and other technologies in medicine. I really appreciate you coming to talk with us today. This is such a fascinating topic and an area that everybody's really excited about, and I hope we'll be seeing more of it in our lives, hopefully improving our clinical practice. Thank you so much for talking with us about your article on AI in clinical neurology. It was a fascinating topic, and I learned a lot.
Dr Hadar: Thank you very much. I really appreciate the conversation, and I hope that trainees, physicians, and others will gain a lot and really help our patients through this.
Dr Grouse: So again, today I've been interviewing Dr Peter Hadar about his article on clinical applications of artificial intelligence in neurology practice, which he wrote with Dr Lydia Moura. This article appears in the most recent issue of Continuum on neuro-ophthalmology. Be sure to check out Continuum Audio episodes from this and other issues. And thank you to our listeners for joining today.
Dr Monteith: This is Dr Teshamae Monteith, Associate Editor of Continuum Audio. If you've enjoyed this episode, you'll love the journal, which is full of in-depth and clinically relevant information important for neurology practitioners. Use the link in the episode notes to learn more and subscribe. Thank you for listening to Continuum Audio.