
In this episode of TechnologIST Talks, IST CEO Philip Reiner is joined by Dr. Margaret Mitchell, a computer scientist and researcher focused on machine learning and ethics-informed AI development. She currently serves as Chief Ethics Scientist at Hugging Face, where she studies ML data processing, responsible AI development, and AI ethics. In her previous role at Google, she founded and co-led Google's Ethical AI group, which advances foundational AI ethics research and operationalizes AI ethics internally at Google. Through Hugging Face, Margaret contributes to IST's AI Risk Reduction Initiative Working Group, developing technical- and policy-oriented strategies to mitigate risks associated with AI foundation models.
Philip and Margaret sat down to discuss agentic, autonomous, and transparent models – and the pathway to truly secure AI.
“I think that we can do a lot of really great work with AI by having rigorous data curation that corresponds to specific kinds of use cases, and building models based on that,” Margaret said.
What’s the current state of AI agents in the marketplace? How autonomous are so-called autonomous models? And once you’ve identified a model’s vulnerabilities, how do you go about protecting against unauthorized actions? Join us for this and more on this episode of TechnologIST Talks.
Learn more about IST: https://securityandtechnology.org/