
In this eye-opening episode, we dive deep into the world of Large Language Models (LLMs) with Joel Rorseth, a computer science PhD student from the University of Waterloo. Rorseth introduces us to RAGE (Retrieval-Augmented Generation Explainability), a groundbreaking tool that unveils the sources behind LLM-generated content. Our discussion covers the importance of prompt engineering, document upload order, and the surprising truth about "fuzzy citations." As we navigate the intersection of innovation and regulation in AI, this technically rich episode offers crucial insights for educators looking to understand and responsibly implement LLM technologies in their classrooms.
Joel Rorseth is a Computer Science PhD student at the University of Waterloo, supervised by Dr. Lukasz Golab. His research focuses on explainable AI, a critical effort to rationalize the predictions and decision-making behaviours of increasingly complex AI models. Joel's work, which has recently focused on explaining large language models like ChatGPT, has been published at several top conferences and has earned him multiple scholarships. Joel earned a Bachelor of Computer Science from the University of Windsor in 2019 and has worked as a software engineer for a wide variety of clients. He leverages diverse expertise in AI, data, and software engineering to create software tools that solve real-world research problems.
You can find Joel on:
LinkedIn: https://www.linkedin.com/in/joelrorseth/
Twitter: https://x.com/JoelExplainsAI
Article: Know your source: RAGE tool unveils ChatGPT’s sources
Feedback? You can ask your questions or give us feedback on the show here
Want to know more?
You can check out our: WCDSB GenAI Guidelines, infographics, and Innovation website: https://innovate.wcdsb.ca/
Hosted on Acast. See acast.com/privacy for more information.