


The term ‘Responsible AI’ is more than a buzzword; it’s a call to action. Responsible AI isn’t about fancy tech tools; it’s about power, ethics, leadership, and long-term consequences. The questions we need to ask are: who defines what’s safe, and who decides what’s ethical? At present, a handful of tech giants shape the answers. And while using AI might feel low-impact, developing these models carries a significant environmental cost. In a world moving at hyperspeed, it isn’t healthy to treat AI as just another tech tool. The time has come for nonprofits and their leaders to step up and lead with responsibility.
In this week’s episode, Scott and Nathan explore the ever-evolving landscape of AI and the foundations of AI governance. AI technology is developing at a remarkable rate, while governance is progressing only slowly; this mismatch means regulation is playing catch-up rather than preventing harm. Opening the conversation, Nathan shares his thoughts on the need for an adaptive, forward-thinking governance framework for AI, one that anticipates risk rather than merely responding to it. Next, Nathan and Scott discuss the concentration of technological and geopolitical power in AI, where a handful of tech giants control the system, leaving the rest of us to decide whether their definition of ‘Responsible AI’ matches ours. Nathan explains why responsible AI should be everyone’s responsibility: we are in dire need of drawing ethical lines, defining values, and demanding transparent AI governance before harm scales beyond our grasp. Later in the conversation, Nathan and Scott cover challenges in AI governance, guardrails for using AI, the role of leadership in responsible AI use, the environmental impact of developing AI, and more.
HIGHLIGHTS
[01:06] Governance and AI.
[04:04] The lack of progress in the governance framework.
[09:11] Challenges in AI governance.
[13:02] The concentration of technological and geopolitical power in AI.
[15:10] The importance of having guardrails for how to use AI.
[20:21] The role of leadership in responsible AI.
[25:31] The choice between acting in the fog or becoming irrelevant.
[27:50] Navigating ethics and safety in AI.
[30:15] Environmental impact of AI.
RESOURCES
Laying the Foundation for AI Governance.
aiforgood.itu.int/event/building-secure-and-trustworthy-ai-foundations-for-ai-governance/
LinkedIn (Nathan): linkedin.com/in/nathanchappell/
LinkedIn (Scott): linkedin.com/in/scott-rosenkrans
Website: fundraising.ai/
By Nathan Chappell and Scott Rosenkrans
