
Do you trust the AI tools that you use? Are they ethical and safe? We often overlook the safety behind AI, and it's something we should pay attention to. Mark Surman, President of the Mozilla Foundation, joins us to discuss how we can trust and use ethical AI.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Mark Surman and Jordan questions about AI safety
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: [email protected]
Connect with Jordan on LinkedIn
Timestamps:
[00:01:05] Daily AI news
[00:03:15] About Mark and Mozilla Foundation
[00:06:20] Big Tech and ethical AI
[00:09:20] Is AI unsafe?
[00:11:05] Responsible AI regulation
[00:16:33] Creating balanced government regulation
[00:20:25] Is AI too accessible?
[00:23:00] Resources for AI best practices
[00:25:30] AI concerns to be aware of
[00:30:00] Mark's final takeaway
Topics Covered in This Episode:
1. Future of AI regulation
2. Balancing interests of humanity and government
3. How to make and use AI responsibly
4. Concerns with AI
Keywords:
AI space, risks, guardrails, AI development, misinformation, national elections, deep fake voices, fake content, sophisticated AI tools, generative AI systems, regulatory challenges, government accountability, expertise, company incentives, Meta's responsible AI team, ethical considerations, faster development, friction, balance, innovation, governments, regulations, public interest, technology, government involvement, society, progress, politically motivated, Jordan Wilson, Mozilla, show notes, Mark Surman, societal concerns, individual concerns, misinformation, authenticity, shared content, data, generative AI, control, interests, transparency, open source AI, regulation, accuracy, trustworthiness, hallucinations, discrimination, reports, software, OpenAI, CEO, rumors, high-ranking employees, Microsoft, discussions, Facebook, responsible AI team, Germany, France, Italy, agreement, future AI regulation, public interest, humanity, safety, profit-making interests
Send Everyday AI and Jordan a text message. (We can't reply unless you leave contact info.)
Try Google Veo 3 today! Sign up at gemini.google to get started.