Andrej Karpathy joins Sarah and Elad on this week's episode of No Priors. Andrej, a founding team member of OpenAI and former Senior Director of AI at Tesla, needs no introduction. In this episode, Andrej discusses the evolution of self-driving cars, comparing Tesla's and Waymo's approaches, and the technical challenges ahead. They also cover Tesla's Optimus humanoid robot, the bottlenecks of AI development today, and how AI capabilities could be further integrated with human cognition. Andrej shares more about his new company Eureka Labs and his insights into AI-driven education, peer networks, and what young people should study to prepare for the reality ahead.
Sign up for new podcasts every week. Email feedback to [email protected]
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Karpathy
Show Notes:
(0:00) Introduction
(0:33) Evolution of self-driving cars
(2:23) The Tesla vs. Waymo approach to self-driving
(6:32) Training Optimus with automotive models
(10:26) Reasoning behind the humanoid form factor
(13:22) Existing challenges in robotics
(16:12) Bottlenecks of AI progress
(20:27) Parallels between human cognition and AI models
(22:12) Merging human cognition with AI capabilities
(27:10) Building high performance small models
(30:33) Andrej’s current work in AI-enabled education
(36:17) How AI-driven education reshapes knowledge networks and status
(41:26) Eureka Labs
(42:25) What young people should study to prepare for the future
Dr. Joscha Bach introduces a surprising idea called "cyber animism" in his AGI-24 talk - the notion that nature might be full of self-organizing software agents, similar to the spirits in ancient belief systems. Bach suggests that consciousness could be a kind of software running on our brains, and wonders if similar "programs" might exist in plants or even entire ecosystems.
MLST is sponsored by Brave:
The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval-augmented generation. Try it now - get 2,000 free queries monthly at https://brave.com/api.
Joscha takes us on a tour de force through history, philosophy, and cutting-edge computer science, challenging us to rethink what we know about minds, machines, and the world around us. Joscha believes we should blur the lines between human, artificial, and natural intelligence, and argues that consciousness might be more widespread and interconnected than we ever thought possible.
Dr. Joscha Bach
https://x.com/Plinz
This is video 2/9 from our coverage of AGI-24 in Seattle https://agi-conf.org/2024/
Watch the official MLST interview with Joscha, recorded right after this talk, now in early access on our Patreon - https://www.patreon.com/posts/joscha-bach-110199676 (you also get access to our private Discord and biweekly calls)
TOC:
00:00:00 Introduction: AGI and Cyberanimism
00:03:57 The Nature of Consciousness
00:08:46 Aristotle's Concepts of Mind and Consciousness
00:13:23 The Hard Problem of Consciousness
00:16:17 Functional Definition of Consciousness
00:20:24 Comparing LLMs and Human Consciousness
00:26:52 Testing for Consciousness in AI Systems
00:30:00 Animism and Software Agents in Nature
00:37:02 Plant Consciousness and Ecosystem Intelligence
00:40:36 The California Institute for Machine Consciousness
00:44:52 Ethics of Conscious AI and Suffering
00:46:29 Philosophical Perspectives on Consciousness
00:49:55 Q&A: Formalisms for Conscious Systems
00:53:27 Coherence, Self-Organization, and Compute Resources
YT version (very high quality, filmed by us live)
https://youtu.be/34VOI_oo-qM
Refs:
Aristotle's work on the soul and consciousness
Richard Dawkins' work on genes and evolution
Gerald Edelman's concept of Neural Darwinism
Thomas Metzinger's book "Being No One"
Yoshua Bengio's concept of the "consciousness prior"
Stuart Hameroff's theories on microtubules and consciousness
Christof Koch's work on consciousness
Daniel Dennett's "Cartesian Theater" concept
Giulio Tononi's Integrated Information Theory
Mike Levin's work on organismal intelligence
The concept of animism in various cultures
Freud's model of the mind
Buddhist perspectives on consciousness and meditation
The Genesis creation narrative (for its metaphorical interpretation)
California Institute for Machine Consciousness
This is an interview with Nick Bostrom, the Founding Director of the Future of Humanity Institute at Oxford.
This is the first installment of The Worthy Successor series - where we unpack the preferable and non-preferable futures humanity might strive towards in the years ahead.
This episode referred to the following other essays and resources:
-- The Intelligence Trajectory Political Matrix: danfaggella.com/itpm
-- Natural Selection Favors AIs over Humans: https://arxiv.org/abs/2303.16200
-- The SDGs of Strong AGI: https://emerj.com/ai-power/sdgs-of-ai/
Watch this episode on The Trajectory YouTube channel: https://youtu.be/_ZCE4XZ9doc?si=RXptg0y6JcxelXkF
Read the Nick Bostrom episode highlight: danfaggella.com/bostrom1/
...
There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?
If this sounds like it's up your alley, then be sure to stick around and connect:
Blog: danfaggella.com/trajectory
X: x.com/danfaggella
LinkedIn: linkedin.com/in/danfaggella
Newsletter: bit.ly/TrajectoryTw
I talked with Patrick McKenzie (known online as patio11) about how a small team he ran out of a Discord server got vaccines into Americans' arms: a story of broken incentives, outrageous incompetence, and how a few individuals with high agency saved thousands of lives.
Enjoy!
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here.
Follow me on Twitter for updates on future episodes.
Sponsor
This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes and grow their revenue.
Timestamps
(00:00:00) – Why hackers on Discord had to save thousands of lives
(00:17:26) – How politics crippled vaccine distribution
(00:38:19) – Fundraising for VaccinateCA
(00:51:09) – Why tech needs to understand how government works
(00:58:58) – What is crypto good for?
(01:13:07) – How the US government leverages big tech to violate rights
(01:24:36) – Can the US have nice things like Japan?
(01:26:41) – Financial plumbing & money laundering: a how-not-to guide
(01:37:42) – Maximizing your value: why some people negotiate better
(01:42:14) – Are young people too busy playing Factorio to found startups?
(01:57:30) – The need for a post-mortem
"You can’t charge what something is worth during a pandemic. So we estimated that the value of one course of COVID vaccine in January 2021 was over $5,000. They were selling for between $6 and $40. So nothing like their social value. Now, don’t get me wrong. I don’t think that they should have charged $5,000 or $6,000. That’s not ethical. It’s also not economically efficient, because they didn’t cost $5,000 at the marginal cost. So you actually want low price, getting out to lots of people.
"But it shows you that the market is not going to reward people who do the investment in preparation for a pandemic — because when a pandemic hits, they’re not going to get the reward in line with the social value. They may even have to charge less than they would in a non-pandemic time. So prepping for a pandemic is not an efficient market strategy if I’m a firm, but it’s a very efficient strategy for society, and so we’ve got to bridge that gap." —Rachel Glennerster
In today’s episode, host Luisa Rodriguez speaks to Rachel Glennerster — associate professor of economics at the University of Chicago and a pioneer in the field of development economics — about how her team’s new Market Shaping Accelerator aims to leverage market forces to drive innovations that can solve pressing world problems.
Links to learn more, highlights, and full transcript.
They cover:
Chapters:
Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
Salman Rushdie’s 1988 novel, “The Satanic Verses,” made him the target of Ayatollah Ruhollah Khomeini, who denounced the book as blasphemous and issued a fatwa calling for his assassination. Rushdie spent years trying to escape the shadow the fatwa cast on him, and for some time, he thought he succeeded. But in 2022, an assailant attacked him onstage at a speaking engagement in western New York and nearly killed him.
“I think now I’ll never be able to escape it. No matter what I’ve already written or may now write, I’ll always be the guy who got knifed,” he writes in his new memoir, “Knife: Meditations After an Attempted Murder.”
In this conversation, I asked Rushdie to reflect on his desire to escape the fatwa; the gap between the reputation of his novels and their actual merits; how his “shadow selves” became more real to millions than he was; how many of us in the internet age also have to contend with our many shadow selves; what Rushdie lives for now; and more.
Mentioned:
Midnight’s Children by Salman Rushdie
Book Recommendations:
Don Quixote by Miguel de Cervantes, translated by Edith Grossman
One Hundred Years of Solitude by Gabriel García Márquez
The Trial by Franz Kafka
The Castle by Franz Kafka
Thoughts? Guest suggestions? Email us at [email protected].
You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.
This episode of “The Ezra Klein Show” was produced by Annie Galvin. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Isaac Jones. Our senior editor is Claire Gordon. The show’s production team also includes Rollin Hu, Kristin Lin and Aman Sahota. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks to Sonia Herrero and Mrinalini Chakravorty.
Episode 122
I spoke with Professor David Thorstad about:
* The practical difficulties of doing interdisciplinary work
* Why theories of human rationality should account for boundedness, heuristics, and other cognitive limitations
* Why EA epistemics suck (OK, it's a little more nuanced than that)
Professor Thorstad is an Assistant Professor of Philosophy at Vanderbilt University, a Senior Research Affiliate at the Global Priorities Institute at Oxford, and a Research Affiliate at the MINT Lab at Australian National University. One strand of his research asks how cognitively limited agents should decide what to do and believe. A second strand asks how altruists should use limited funds to do good effectively.
Reach me at [email protected] for feedback, ideas, guest suggestions.
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter.
Outline:
* (00:00) Intro
* (01:15) David’s interest in rationality
* (02:45) David’s crisis of confidence, models abstracted from psychology
* (05:00) Blending formal models with studies of the mind
* (06:25) Interaction between academic communities
* (08:24) Recognition of and incentives for interdisciplinary work
* (09:40) Movement towards interdisciplinary work
* (12:10) The Standard Picture of rationality
* (14:11) Why the Standard Picture was attractive
* (16:30) Violations of and rebellion against the Standard Picture
* (19:32) Mistakes made by critics of the Standard Picture
* (22:35) Other competing programs vs Standard Picture
* (26:27) Characterizing Bounded Rationality
* (27:00) A worry: faculties criticizing themselves
* (29:28) Self-improving critique and longtermism
* (30:25) Central claims in bounded rationality and controversies
* (32:33) Heuristics and formal theorizing
* (35:02) Violations of Standard Picture, vindicatory epistemology
* (37:03) The Reason Responsive Consequentialist View (RRCV)
* (38:30) Objective and subjective pictures
* (41:35) Reason responsiveness
* (43:37) There are no epistemic norms for inquiry
* (44:00) Norms vs reasons
* (45:15) Arguments against epistemic nihilism for belief
* (47:30) Norms and self-delusion
* (49:55) Difficulty of holding beliefs for pragmatic reasons
* (50:50) The Gibbardian picture, inquiry as an action
* (52:15) Thinking how to act and thinking how to live — the power of inquiry
* (53:55) Overthinking and conducting inquiry
* (56:30) Is thinking how to inquire as an all-things-considered matter?
* (58:00) Arguments for the RRCV
* (1:00:40) Deciding on minimal criteria for the view, stereotyping
* (1:02:15) Eliminating stereotypes from the theory
* (1:04:20) Theory construction in epistemology and moral intuition
* (1:08:20) Refusing theories for moral reasons and disciplinary boundaries
* (1:10:30) The argument from minimal criteria, evaluating against competing views
* (1:13:45) Comparing to other theories
* (1:15:00) The explanatory argument
* (1:17:53) Parfit and Railton, norms of friendship vs utility
* (1:20:00) Should you call out your friend for being a womanizer?
* (1:22:00) Vindicatory Epistemology
* (1:23:05) Panglossianism and meliorative epistemology
* (1:24:42) Heuristics and recognition-driven investigation
* (1:26:33) Rational inquiry leading to irrational beliefs — metacognitive processing
* (1:29:08) Stakes of inquiry and costs of metacognitive processing
* (1:30:00) When agents are incoherent, focuses on inquiry
* (1:32:05) Indirect normative assessment and its consequences
* (1:37:47) Against the Singularity Hypothesis
* (1:39:00) Superintelligence and the ontological argument
* (1:41:50) Hardware growth and general intelligence growth, AGI definitions
* (1:43:55) Difficulties in arguing for hyperbolic growth
* (1:46:07) Chalmers and the proportionality argument
* (1:47:53) Arguments for/against diminishing growth, research productivity, Moore’s Law
* (1:50:08) On progress studies
* (1:52:40) Improving research productivity and technology growth
* (1:54:00) Mistakes in the moral mathematics of existential risk, longtermist epistemics
* (1:55:30) Cumulative and per-unit risk
* (1:57:37) Back and forth with longtermists, time of perils
* (1:59:05) Background risk — risks we can and can’t intervene on, total existential risk
* (2:00:56) The case for longtermism is inflated
* (2:01:40) Epistemic humility and longtermism
* (2:03:15) Knowledge production — reliable sources, blog posts vs peer review
* (2:04:50) Compounding potential errors in knowledge
* (2:06:38) Group deliberation dynamics, academic consensus
* (2:08:30) The scope of longtermism
* (2:08:30) Money in effective altruism and processes of inquiry
* (2:10:15) Swamping longtermist options
* (2:12:00) Washing out arguments and justified belief
* (2:13:50) The difficulty of long-term forecasting and interventions
* (2:15:50) Theory of change in the bounded rationality program
* (2:18:45) Outro
Links:
* David’s homepage and Twitter and blog
* Papers mentioned/read
* Bounded rationality and inquiry
* Why bounded rationality (in epistemology)?
* Against the newer evidentialists
* The accuracy-coherence tradeoff in cognition
* There are no epistemic norms of inquiry
* Permissive metaepistemology
* Global priorities and effective altruism
* What David likes about EA
* Against the singularity hypothesis (+ blog posts)
* Three mistakes in the moral mathematics of existential risk (+ blog posts)
* The scope of longtermism
* Epistemics
In this conversation recorded live in Miami, Tyler and Peter Thiel dive deep into the complexities of political theology, including why it’s a concept we still need today, why Peter’s against Calvinism (and rationalism), whether the Old Testament should lead us to be woke, why Carl Schmitt is enjoying a resurgence, whether we’re entering a new age of millenarian thought, the one existential risk Peter thinks we’re overlooking, why everyone just muddling through leads to disaster, the role of the katechon, the political vision in Shakespeare, how AI will affect the influence of wordcels, Straussian messages in the Bible, what worries Peter about Miami, and more.
Read a full transcript enhanced with helpful links, or watch the full video.
Recorded February 21st, 2024.
Other ways to connect
Sam Harris speaks with William MacAskill about the implosion of FTX and the effect that it has had on the Effective Altruism movement. They discuss the logic of “earning to give,” the mind of SBF, his philanthropy, the character of the EA community, potential problems with focusing on long-term outcomes, AI risk, the effects of the FTX collapse on Will personally, and other topics.
If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.