Evidence-Based Health Care

Artificial Intelligence and Health Security: Managing the Risks



Professor Karl Roberts of the University of New England, NSW, Australia gives a talk on generative AI and large language models as applied to healthcare. Karl Roberts is Head of the School of Health and Professor of Health and Wellbeing at the University of New England, NSW, Australia. He has over thirty years' experience working in academia at institutions in Australia, the UK and the USA. He has also acted as an advisor for various international bodies and governments on issues related to wellbeing, violence prevention and professional practice. Notably, this has included working with policing agencies to develop policy and practice on suicide, stalking and homicide prevention; working with Interpol to develop guidance for organisational responses to deliberate events such as biological weapon use; serving on the UK government's SAGE advisory group throughout the COVID-19 pandemic, with a focus on security planning; advising the European Union on biological terrorism and extremist use of AI; and working in a World Health Organisation unit developing policy and practice related to deliberate biological threat events.
There has been substantial recent interest in the benefits and risks of artificial intelligence (AI). Views range from extolling its virtues as a harmless aid to decision making, a tool in research and a means of improving economic productivity, to claims that unchecked AI is a significant threat to human wellbeing and could pose an existential threat to humanity. One area of significant recent advancement in AI has been the field of large language models (LLMs). Exemplified by tools such as ChatGPT or DALL-E, these so-called generative AI models allow individuals to generate new outputs by interacting with the models using simple natural-language inputs. Various versions of LLMs have been applied to healthcare and have been shown to be useful in areas as diverse as case formulation, diagnosis, novel drug discovery and policy development. However, as with any new technology, there is a potential 'dark side', and it is possible to use these tools for nefarious purposes.

This talk will give a brief introduction to generative AI and large language models as applied to healthcare. It will then discuss the potential for misuse of these models, seeking to highlight how they may be misused and how significant a threat they could pose to health security. Finally, it will consider strategies for managing the risks, set against the possible benefits of generative AI. The talk is based on work carried out by the author and colleagues at the World Health Organisation and the Royal United Services Institute.

Evidence-Based Health Care, by Oxford University

4.5

2 ratings

