
New @greenpillnet pod! Kevin chats with Joe Edelman, founder of the Meaning Alignment Institute, about his Full Stack Alignment paper. They dive into why current AI alignment methods fall short, explore richer “thick” models of value, lessons from social media, and four bold moonshots for AI and institutions that support human flourishing.
Links:
https://meaningalignment.substack.com/p/introducing-full-stack-alignment
https://meaninglabs.notion.site/The-Full-Stack-Alignment-Project-List-21cc5bada1d08016a496ca729476d970
@edelwax @meaningaligned @greenpillnet @owocki
Timestamps:
00:00 – Introduction to Green Pill’s new season and Joe Edelman
01:59 – Joe’s background and the Meaning Alignment Institute
03:43 – Why alignment matters for AI and institutions
05:46 – Lessons from social media and the attention economy
09:06 – Critique of shallow AI alignment approaches (RLHF, values-as-text)
13:20 – Thick models of value: going deeper than abstract ideals
15:11 – Full stack alignment across models, metrics, and institutions
17:00 – Reconciling values with capitalist incentive structures
19:17 – Avoiding dystopian economies and building value-driven markets
21:32 – Four moonshots: super negotiators, public resource regulators, market intermediaries, value stewardship agents
27:32 – Intermediaries vs. value stewardship agents explained
29:09 – How builders and academics can get involved in full stack alignment projects
31:10 – Why cross-institutional collaboration is critical
32:46 – Joe’s vision of the world in 10 years with full stack alignment
34:51 – Food system analogy: from “sugar” to nourishing AI
36:40 – Long-term vs. short-term incentives in markets
38:25 – Hopeful outlook: building integrity into AI and institutions
39:04 – Closing remarks and links to Joe’s work