ATGO AI | Accountability, Trust, Governance and Oversight of Artificial Intelligence
By ForHumanity Center
The podcast currently has 53 episodes available.
About the episode:
In this episode, Heramb and I delve into why alignment is a critical factor to examine in the current era of LLMs, and what he urges policymakers and developers to explore together.
Heramb tackles these questions by drawing on his experience in these areas.
OPENBOX aims to make open problems easier to understand and, in doing so, to help find solutions to them. To that end, I interview researchers and practitioners who have published work on open problems across a variety of areas of Artificial Intelligence and Machine Learning, and gather a simplified understanding of those problems. The conversations are published as a podcast series.
My name is Sundar. I am an Ethics and Risk professional and an AI Ethics researcher. I am the host of this podcast.
Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.
This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more visit https://forhumanity.center/.
Welcome to OPENBOX
OPENBOX aims to make open problems easier to understand and, in doing so, to help find solutions to them. To that end, I interview researchers and practitioners who have published work on open problems across a variety of areas of Artificial Intelligence and Machine Learning, and gather a simplified understanding of those problems. The conversations are published as a podcast series.
My name is Sundar. I am an Ethics and Risk professional and an AI Ethics researcher. I am the host of this podcast.
Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.
This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more visit https://forhumanity.center/.
Today, we have with us Gemma. Gemma Galdon-Clavell is a leading voice on technology ethics and algorithmic accountability. Her focus areas include Responsible AI, algorithmic auditing, data policy, educational technology, and ethics oversight. She is a pioneer in AI safety and auditing, ensuring that machine learning tools truly serve society, and is the founder and CEO of Eticas.AI, a venture-backed organization that identifies, measures, and corrects algorithmic vulnerabilities, bias, and inefficiencies in predictive and LLM tools.
Her impactful work has earned her recognition as a Hispanic Star Awardee at the United Nations in 2023, as well as accolades from influential media outlets such as the BBC and Forbes.
Dr. Galdon-Clavell is an active advisor to international and regional institutions such as the United Nations (UN), the Organization for Economic Cooperation and Development (OECD), the European Institute of Innovation and Technology (EIT) and the European Commission, among others.
This is part 2 of the podcast.
Welcome to OPENBOX
OPENBOX aims to make open problems easier to understand and, in doing so, to help find solutions to them. To that end, I interview researchers and practitioners who have published work on open problems across a variety of areas of Artificial Intelligence and Machine Learning, and gather a simplified understanding of those problems. The conversations are published as a podcast series.
My name is Sundar. I am an Ethics and Risk professional and an AI Ethics researcher. I am the host of this podcast.
Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.
Today, we have with us Ismael.
Welcome to OPENBOX
OPENBOX aims to make open problems easier to understand and, in doing so, to help find solutions to them. To that end, I interview researchers and practitioners who have published work on open problems across a variety of areas of Artificial Intelligence and Machine Learning, and gather a simplified understanding of those problems. The conversations are published as a podcast series.
My name is Sundar. I am an Ethics and Risk professional and an AI Ethics researcher. I am the host of this podcast.
Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.
Today, we have with us Yasmin. Yasmin is the co-founder of the initiative Responsible Technology Hub (RTH) and also founded the non-profit organization Diverse Young Leaders e.V., which works toward making the youth leadership arena in Germany more accessible and inclusive for BPoC and other minority groups.
Yasmin has worked with the American Institute for Contemporary German Studies (AICGS), the German Federal Foreign Office, the GIZ, the Welthungerhilfe, and TechQuartier, and was named a Young Global Changemaker 2022 by the Global Solutions Initiative. She has been nominated for several awards for her engagement and received the ‘2021 Volunteer of the Year’ award. She is a Landecker Democracy Fellow.
This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more visit https://forhumanity.center/.
OPENBOX aims to make open problems easier to understand and, in doing so, to help find solutions to them. To that end, I interview researchers and practitioners who have published work on open problems across a variety of areas of Artificial Intelligence and Machine Learning, and gather a simplified understanding of those problems. The conversations are published as a podcast series.
My name is Sundar. I am an Ethics and Risk professional and an AI Ethics researcher. I am the host of this podcast.
Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.
Aditya, who holds a PhD from IIT Bombay and Monash University, is a Lecturer/Assistant Professor at the University of New South Wales's School of Computer Science & Engineering, specializing in natural language processing (NLP). He has worked extensively across multiple areas of NLP, including designing and introducing a new NLP course at UNSW in 2024, and he authored a book on NLP published by Wiley in 2023. In addition to his academic work, he has applied NLP techniques to epidemic intelligence and cybersecurity. He supervises undergraduate, Masters, and PhD students in their NLP research projects. He is an associate at the Human Rights Institute at UNSW, where he co-leads the Community of Practice for Inclusive Research with Queer, Trans & people with variations of sex characteristics.
Dipankar is a Master's student at the University of New South Wales, specializing in Natural Language Processing, Large Language Models, Machine Learning, and Data Analytics. Aditya Joshi leads a Google Research Scholar grant at UNSW, under which Dipankar works as a Research Assistant focused on developing a benchmark for sentiment and sarcasm classification in Australian and Indian dialects of English. Dipankar is also a tutor for Deep Learning based courses.
Our focus is their recent paper, ‘Evaluating Dialect Robustness of Language Models via Conversation Understanding’.
In this episode, Aditya and Dipankar explain the dialect nuances that are critical in the context of language models.
OPENBOX aims to make open problems easier to understand and, in doing so, to help find solutions to them. To that end, I interview researchers and practitioners who have published work on open problems across a variety of areas of Artificial Intelligence and Machine Learning, and gather a simplified understanding of those problems. The conversations are published as a podcast series.
My name is Sundar. I am an Ethics and Risk professional and an AI Ethics researcher. I am the host of this podcast.
Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.
This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more visit https://forhumanity.center/.
Today, we have with us Balagopal Unnikrishnan, an AI researcher focused on building algorithms and tools that aim to advance healthcare through technology. He is currently a PhD student at the University of Toronto, advised by Dr. Chris McIntosh and Dr. Michael Brudno, where his research revolves around developing algorithms that address biases in AI models applied to medical data. He is also a Schwartz Reisman Institute graduate fellow.
He did his Masters at the National University of Singapore (NUS), specializing in Computational Intelligence. Before joining the PhD program, he worked as an AI Engineer, developing IP and building tools for AI-augmented healthcare applications. We are going to be speaking about a paper he recently co-authored, ‘Shortcut learning in medical AI hinders generalization’.
In this episode, we cover the causal effects of data acquisition bias in healthcare environments.