


Ah yes, the “maybe the nerds in cargo shorts are right and we’re casually sprinting into the abyss” episode. It’s basically a guided tour of the 20% P(doom): not “AI will definitely kill us,” but “if there’s a one-in-five chance your new toy ends civilization, maybe don’t ship it just because Q4 needs profits.” Why are we racing to AGI/ASI at all? What is the concrete human need here? Curing cancer, sure, but why does that require building a thing that could out-strategize everyone and then hoping it stays emotionally attached to our carbon-based welfare?

And then the real villain strolls in wearing a name tag that says “Market Incentives.” Because if the last few centuries taught us anything, it’s that systems optimized for shareholder value will absolutely eat the planet, workers, and social stability with a smile. So it’s not a huge leap to imagine them also rolling the dice on extinction if the upside is “we’d better get to Artificial Superintelligence (ASI) first.” Not evil cackling, just the banality of evil.

The episode gets darker (and more real) when it drags the conversation out of sci-fi and into “this is already happening.” The Israel examples, the Lavender / “Gospel” / “Where’s Daddy” style targeting pipelines, aren’t hypotheticals about robot overlords; they’re about algorithmic bureaucracy stapled to lethal power, where responsibility evaporates into “the system flagged it” while the Israeli-inflicted genocide in Palestine continues. Same with autonomous policing: you don’t need Skynet, you just need a cheap, scalable way to automate suspicion and remove friction from violence. That’s the “shades of grey” part: it’s not one big apocalypse switch, it’s a thousand little automations that make cruelty efficient.

So the punchline is grim: even if the machines never “turn on us,” we might still take a beating, because we’ll deploy them in ways that already hurt people.
The doomer case isn’t only “ASI kills everyone”; it’s also “institutions plus incentives plus automation” quietly producing a world where human rights are treated like a rounding error.

https://pauseai.info
https://www.youtube.com/shorts/4rr8LU2_05Y

#artificialintelligence #existentialcrisis #CodingLife #TechJobs #AIFuture #Trades #CollegeDebate #CareerChange #futureofwork #AICoding #TradeSkills #Skills #CareerAdvice #JobMarket #trendingnow #Robots #Cops #Bias #Funny #Satire #Trending #ICE #aibubble #Doomsday #chatgpt #chatbot #chatgpt4 #ExistentialRisk #internetofthings
By The BHerd