
Join Jeff Zweerink and computer scientist Dustin Morley as they discuss new discoveries taking place at the frontiers of science that have theological and philosophical implications, including the reality of God’s existence.
Self-Supervised Learning
Recent major breakthroughs in public-facing artificial intelligence (AI), such as OpenAI's ChatGPT and Tesla's self-driving software, owe part of their success to complex, multi-component deep learning architectures in which each component can be trained or fine-tuned while the others stay fixed, effectively decoupling different steps or subtasks from one another. A new paper (still in preprint) demonstrates significant success with self-supervised learning, pushing this kind of AI versatility even further. What does this mean for the near-term future of AI, and what are the implications for the age-old comparison between AI and human intelligence?
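For readers who want a concrete picture of component-wise training, here is a minimal sketch in PyTorch. The model, layer sizes, and names are hypothetical illustrations, not taken from the episode or the preprint; the point is only the pattern of freezing one component while fine-tuning another:

```python
# Minimal sketch (hypothetical names): fine-tune one component of a
# multi-part model while leaving the other component frozen.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # "pretrained" component
head = nn.Linear(64, 10)                                # new task-specific component

# Freeze the encoder: its weights receive no gradient updates.
for p in encoder.parameters():
    p.requires_grad = False

# Only the head's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 128)          # dummy input batch
y = torch.randint(0, 10, (32,))   # dummy labels

logits = head(encoder(x))
loss = loss_fn(logits, y)
loss.backward()                   # gradients flow only into the head
optimizer.step()
```

Because the frozen component never changes, it can be reused across many downstream tasks, which is the decoupling the episode describes.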
AI with an Off-Switch?
As we contemplate what a world with true AI (general or super, rather than narrow, artificial intelligence) would look like, the question of how we interact with AI inevitably arises. Specifically, what do we do when an AI pursues a path that is harmful to humanity? One proposed safeguard is an off switch that humans control, but would the AI leave that switch intact? One study showed that programming uncertainty into the AI about its own objective can give it an incentive to keep the off switch functional. However, that same uncertainty reduces the AI's effectiveness at achieving its objective. We discuss some of the apologetic implications of this study.
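As a toy illustration of the study's core idea (our own sketch, not the study's model or code): suppose the AI is uncertain about the true utility u of its plan. Acting unilaterally yields u, while deferring to a human who hits the off switch whenever u turns out negative yields max(u, 0). On average, deferring never does worse:

```python
# Toy off-switch game (illustrative only): the AI's belief about the
# true utility u of its plan is a Gaussian. Compare the expected value
# of disabling the switch and acting (E[u]) with keeping the switch so
# a human can stop any harmful plan (E[max(u, 0)]).
import random

random.seed(0)
samples = [random.gauss(mu=0.2, sigma=1.0) for _ in range(100_000)]

act = sum(samples) / len(samples)                          # E[u]
defer = sum(max(u, 0.0) for u in samples) / len(samples)   # E[max(u, 0)]

print(f"disable switch: {act:.3f}")    # ≈ 0.20
print(f"keep switch:    {defer:.3f}")  # ≈ 0.51: uncertainty rewards deference
```

Shrinking sigma toward zero makes the two values converge, which mirrors the trade-off noted above: the less uncertain the AI is about its objective, the less incentive it has to tolerate the switch.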
Links and Resources: