


Far from a future add-on, artificial intelligence is already embedded in the drug safety cycle, from case processing to signal detection. Versatile generative AI models have expanded what is possible, but they have also raised the stakes. How do we use them without losing trust, and where do we set the limits?
In this two-part episode, Niklas Norén, Head of Research at Uppsala Monitoring Centre, unpacks how artificial intelligence can add value to pharmacovigilance and where it should – or shouldn’t – go next.
Tune in to find out:
Want to know more?
Listen to the first part of the interview here.
In May 2025, the CIOMS Working Group XIV drafted guidelines for the use of AI in pharmacovigilance. The draft report received more than a thousand comments during public consultation and is now being finalised.
Earlier this year, the World Health Organization issued guidance on the use of large multi-modal models – a type of generative AI – in healthcare.
Niklas has spoken extensively on the potential and risks of AI in pharmacovigilance, including in this presentation at the University of Verona and in this Uppsala Reports article.
Other recent UMC publications cited in the interview or relevant to the topic include:
For more on the ‘black box’ issue and maintaining trust in AI, revisit this interview with GSK’s Michael Glaser from the Drug Safety Matters archive.
Join the conversation on social media
Follow us on Facebook, LinkedIn, X, or Bluesky and share your thoughts about the show with the hashtag #DrugSafetyMatters.
Got a story to share?
We’re always looking for new content and interesting people to interview. If you have a great idea for a show, get in touch!
About UMC
Read more about Uppsala Monitoring Centre and how we promote safer use of medicines and vaccines for everyone everywhere.
