Please support this podcast by checking out our sponsors:
- Invest Like the Pros with StockMVP - https://www.stock-mvp.com/?via=ron
- Consensus: AI for Research. Get a free month - https://get.consensus.app/automated_daily
- Discover the Future of AI Audio with ElevenLabs - https://try.elevenlabs.io/tad
Support The Automated Daily directly:
Buy me a coffee: https://buymeacoffee.com/theautomateddaily
Today's topics:
Device attestation threatens open access - GrapheneOS warns Apple App Attest and Google Play Integrity are becoming de facto requirements for banking, government, payments, and web verification—tightening platform control and reducing OS choice.
On-device AI versus cloud dependencies - A developer argues many apps bolt on AI via cloud API calls, creating privacy, uptime, and compliance risks; on-device models can handle common tasks like summarization and classification without sending user data away.
Vibe-coding fallout and rewrites - A Kubernetes TUI author explains how AI-assisted “vibe-coding” accelerated features but collapsed architecture into a fragile ‘god object,’ prompting a Rust rewrite and clearer design guardrails.
AI agents and maintenance economics - Software consultant James Shore says AI coding agents only help long-term if they reduce maintenance cost per unit of code; higher output alone can create lasting productivity drag via growing maintenance load.
Obsidian plugin attack with blockchain C2 - Researchers tracked REF6598, a targeted campaign that weaponizes Obsidian shared vaults and trojanized community plugins to install the PHANTOMPULSE RAT, using Ethereum transactions to hide command-and-control.
GPU terminals and richer workflows - Ratty is a GPU-rendered terminal experiment that can show inline 3D graphics, signaling a push beyond text-only terminals toward hardware-accelerated visualization inside developer workflows.
Running local LLMs on M4 - A hands-on report finds local LLMs on a 24GB M4 MacBook Pro can be useful with the right model and settings, but still struggle with reliability on longer autonomous tasks compared to hosted AI.
Phone accelerometer guitar tuning - A browser-based tool turns a phone’s accelerometer into a guitar tuner by sensing physical vibrations through the instrument body—useful where microphone-based pitch detection fails in noisy rooms.
James Burke’s timeless TV moment - A revisited 1978 ‘Connections’ clip shows James Burke delivering a perfectly timed, one-take rocket-launch explanation, a reminder of how strong storytelling can make technical history feel urgent again.
Satire of supply-chain disaster - A satirical incident report exaggerates a multi-ecosystem dependency compromise, mocking real problems like maintainer account security, transitive dependency sprawl, and automated updates in CI.
- Ratty Terminal Emulator Promises GPU Rendering and Inline 3D Graphics
- GrapheneOS warns Apple and Google device attestation is spreading to the web and locking out alternatives
- unix.foo
- After Seven Months of AI ‘Vibe-Coding,’ Developer Archives k10s and Rewrites It for Better Architecture
- Open Culture Revisits James Burke’s One-Take Rocket Launch Moment in "Connections"
- Qwen 3.5-9B Emerges as a Practical Local LLM Choice on a 24GB M4 Mac
- Web App Uses Phone Accelerometer to Tune Guitar Strings
- Obsidian Shared Vaults Used in Social Engineering Campaign to Deploy PHANTOMPULSE RAT
- James Shore Warns AI Coding Speedups Fail Without Lower Maintenance Costs
- Satirical Report Mocks a Multi-Ecosystem Supply-Chain Attack That ‘Resolves’ by Accident
Episode Transcript
Device attestation threatens open access
Let’s start with a big-picture warning from GrapheneOS about hardware-based device attestation—checks like Google’s Play Integrity API and Apple’s App Attest. The argument is simple: these systems are increasingly pitched as “security,” but they also give platforms and service providers a switch that can deny access to people using non-approved devices or operating systems.
What makes this especially consequential is the direction of travel. GrapheneOS says banks, governments, and payment-related services are being nudged toward making attestation mandatory. And it’s not just apps: they’re also pointing to a push toward the web, where desktop users might be forced to verify with a certified iOS or Android device—sometimes by scanning a QR code—just to proceed.
If this becomes normal for essentials like payments, digital IDs, or age verification, it changes the nature of open computing. The risk isn’t only privacy—it’s the possibility that access itself becomes gated by two vendors’ approval pipelines.
Obsidian plugin attack with blockchain C2
Staying in security, researchers described a targeted social-engineering campaign—tracked as REF6598—that uses the Obsidian note-taking app as a delivery mechanism for a newly identified remote access trojan called PHANTOMPULSE.
The playbook is painfully modern: attackers approach finance and crypto professionals on LinkedIn, migrate the conversation to Telegram, then invite the target into a shared Obsidian vault. The trap is hidden in trust and convenience—victims are coaxed into enabling synchronization for community plugins, and those plugins turn out to be trojanized.
The standout detail is resilience: PHANTOMPULSE reportedly uses the Ethereum blockchain to retrieve command-and-control information from transaction data, which can make takedowns and simple blocking harder. The lesson here isn’t just “don’t click links.” It’s that collaboration features and plugin ecosystems are now prime real estate for high-value compromises—especially when the workflow feels routine.
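To make the blockchain C2 trick concrete: a minimal sketch, assuming an attacker simply hex-encodes a string into a transaction's input-data field. This is not the actual PHANTOMPULSE protocol, whose encoding has not been described here; the field layout and the endpoint string are invented for illustration.

```python
# Illustrative only: how arbitrary bytes can ride along in an Ethereum
# transaction's "input" field. This is NOT the real PHANTOMPULSE protocol;
# the encoding and the endpoint below are invented for the sketch.

def encode_payload(message: str) -> str:
    """Hex-encode a string the way a sender might embed it in tx input data."""
    return "0x" + message.encode("utf-8").hex()

def decode_payload(input_data: str) -> str:
    """Recover the embedded string from a transaction's input-data hex blob."""
    hex_blob = input_data[2:] if input_data.startswith("0x") else input_data
    return bytes.fromhex(hex_blob).decode("utf-8")

tx_input = encode_payload("c2.example.net:443")  # hypothetical C2 endpoint
assert decode_payload(tx_input) == "c2.example.net:443"
```

The defensive problem is visible in the sketch: reading public chain data is indistinguishable from legitimate blockchain traffic, so there is no single server to seize or domain to sinkhole.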
Satire of supply-chain disaster
On a lighter—but still pointed—note, one Hacker News item making the rounds is a satirical incident report about a cascading supply-chain compromise. It begins with a popular npm package maintainer losing a hardware 2FA key and getting phished, and then spirals across ecosystems—JavaScript to Rust to Python—until “millions” of developer machines are supposedly owned via ordinary installs and CI builds.
It’s satire, but it lands because it’s built out of real ingredients: maintainer account security as a single point of failure, deep transitive dependency trees, and the fact that routine automation can spread a bad update with incredible speed. The comedy is a reminder that, structurally, we’re still not great at answering a simple question: what exactly is running inside our build pipeline today?
On-device AI versus cloud dependencies
Now, a theme that showed up in multiple posts: the growing backlash against “AI-by-API” as the default product decision.
One author argues developers are being lazy—shipping AI features by calling cloud models for tasks that could run locally. The criticism isn’t anti-AI; it’s pro-reliability. When a basic UX enhancement depends on an external vendor, you inherit outages, rate limits, account problems, and billing failures. And when you ship user content off-device, you also inherit a very different privacy and compliance posture—retention questions, consent, audit trails, breach risk, and government requests.
The more interesting counterexample in that same discussion: building summarization directly on-device on iOS using Apple’s local model APIs. The takeaway is practical—summarize, classify, extract, rewrite, normalize… many of these are transformations of user-owned data that don’t necessarily need a round trip to someone else’s servers. Cloud models still matter for the truly heavy work, but the argument is that we should stop turning simple features into distributed systems by default.
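The iOS example uses Apple's on-device model APIs, but the underlying point is language-agnostic: many "AI features" are plain transformations of local data. As a deliberately tiny stand-in, here is a frequency-based extractive summarizer; it is not what the app in the discussion ships, just a sketch of the class of task that never needs a network call.

```python
# A toy extractive summarizer: picks the sentence whose words are most
# frequent across the whole text. Everything runs locally; no user data
# leaves the machine. A stand-in for richer on-device model APIs.
import re
from collections import Counter

def summarize(text: str, k: int = 1) -> list[str]:
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> int:
        # A sentence scores the total corpus frequency of its words.
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    return sorted(sentences, key=score, reverse=True)[:k]
```

Swapping this function for a local model call changes the quality of the summary, not the privacy posture: either way, the input stays on the device.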
Running local LLMs on M4
That dovetails nicely with another hands-on report: trying to run useful local LLMs on a 24GB M4 MacBook Pro. The author walked through the reality behind the hype—figuring out runtimes, testing models that technically fit, and discovering that “fits in memory” doesn’t mean “pleasant to use.”
They ultimately landed on a smaller quantized model—Qwen 3.5 at 9B parameters—as a good balance of responsiveness and capability, and wired it into local, OpenAI-compatible endpoints for tooling.
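"OpenAI-compatible" has a concrete meaning here: tooling posts a standard chat-completions JSON body to a local port. A minimal sketch of that request shape follows; the port and model identifier are assumptions, since the report doesn't specify which server the author used.

```python
# Sketch of talking to a local OpenAI-compatible server (e.g. one serving a
# quantized Qwen model). The URL, port, and model id are assumptions; any
# server implementing the /v1/chat/completions shape would accept this body.
import json
import urllib.request

def build_request(prompt: str,
                  model: str = "qwen3.5-9b",           # hypothetical local model id
                  base_url: str = "http://localhost:8080") -> urllib.request.Request:
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Summarize this note in one sentence.")
# urllib.request.urlopen(req) would send it, assuming a server is listening.
```

Because the wire format is the de facto standard, editors and CLI tools built for hosted APIs work against the local endpoint with only a base-URL change.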
The conclusion is grounded: local models can be great for interactive work, offline use, and reducing dependence on big cloud providers. But for longer autonomous tasks, reliability still lags behind state-of-the-art hosted systems. It’s a useful reminder to match the deployment to the job, instead of treating “local” or “cloud” as ideology.
Vibe-coding fallout and rewrites
AI also showed up in a more introspective way: a developer archived and began rewriting their GPU-aware Kubernetes TUI dashboard after months of what they call “vibe-coding” with Claude.
Early on, it felt like a superpower—features arrived quickly. But over time, the codebase reportedly collapsed into a giant, tangled core: one mega model, one sprawling update handler, view-specific conditionals everywhere, and bugs from concurrency touching UI state in unsafe ways.
The point isn’t that AI can’t help. It’s that an agent often optimizes for the next visible feature, not for architecture that stays stable under change. The author’s response is also telling: rewrite in Rust, not as a trend move, but because they feel it helps them steer design and catch wrongness earlier. The practical advice here is to treat AI like a very fast junior contributor—powerful, but in need of clear boundaries, ownership rules, and a firm architectural map.
AI agents and maintenance economics
And if you want the economic framing for that, software consultant James Shore offered it: AI coding agents only pay off long-term if they reduce maintenance costs, not just increase output.
His argument is that maintenance is the tax that always rises. If an agent doubles the amount of code you ship, even “good” code creates more surface area to support—bugs, upgrades, refactors, security fixes. If the generated code is even slightly harder to maintain, the math gets ugly fast: the early speed boost can flip into a lasting productivity penalty.
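A back-of-envelope model makes the flip visible. All the numbers below are invented for illustration, not drawn from Shore's piece: a team ships some units of code per quarter, and every existing unit charges a recurring upkeep cost.

```python
# Toy model of the maintenance-tax argument. All figures are invented:
# `new_units` shipped this quarter, `maint_per_unit` hours of upkeep per
# existing unit, `hours_per_unit` to build each new unit.

def quarterly_hours(new_units: float, maint_per_unit: float,
                    existing_units: float, hours_per_unit: float = 2.0) -> float:
    build = new_units * hours_per_unit
    maintain = (existing_units + new_units) * maint_per_unit
    return build + maintain

# Baseline: 100 units/quarter at 2 h each, 0.5 h upkeep, 1000 units shipped.
baseline = quarterly_hours(100, 0.5, 1000)                        # 200 + 550 = 750 h
# Agent: twice the output at half the build cost, but upkeep creeps to 0.6 h.
with_agent = quarterly_hours(200, 0.6, 1000, hours_per_unit=1.0)  # 200 + 720 = 920 h
```

Even with building twice as fast, the slightly higher upkeep on a faster-growing codebase already costs more per quarter, and the gap widens every quarter as the shipped base compounds.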
The most useful takeaway is a question teams can actually apply: are we measurably lowering maintenance effort per unit of software as we adopt AI, or are we simply producing more to maintain later?
GPU terminals and richer workflows
Switching gears to developer tools: a new terminal emulator called Ratty is getting attention for being GPU-rendered and for experimenting with inline 3D graphics inside the terminal.
This matters less as a must-install tool today and more as a signal. Terminals have been text-first for decades, and for good reasons—simplicity and predictability. But as GPUs become ubiquitous and developer workflows increasingly blend data, visualization, and interaction, there’s a plausible future where the terminal becomes a canvas for richer output without abandoning its core ergonomics.
Even if Ratty stays experimental, it’s part of a wider push: making foundational tools feel less stuck in the past, without turning them into bloated IDEs.
Phone accelerometer guitar tuning
For a small, clever bit of everyday engineering: someone built a browser-based “Accel Tuner” that turns a phone’s accelerometer into a guitar tuner. Instead of listening through the microphone, you press the phone against the guitar body and read vibrations directly.
Why it’s interesting is the use case: noisy environments. Microphone tuners struggle in a loud room; vibration sensing can cut through that. It’s also a reminder that modern devices have powerful sensors that are often underused in web apps, as long as users explicitly grant permission and the experience is transparent.
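The core signal-processing step is the same whether the samples come from a microphone or an accelerometer: estimate the fundamental frequency of a vibration. The actual tool runs in the browser on motion-sensor events; the sketch below shows the autocorrelation idea in Python on a synthetic 110 Hz signal (the low A string), with a sample rate chosen for the example rather than taken from any real phone sensor.

```python
# Sketch of pitch detection by autocorrelation, assuming raw vibration
# samples. A real phone's motion sensor rate is much lower; the 4 kHz
# rate here is an assumption chosen to keep the example simple.
import math

def estimate_pitch(samples: list[float], sample_rate: float) -> float:
    n = len(samples)
    best_lag, best_score = 1, float("-inf")
    # Search lags covering roughly 50-400 Hz; the signal correlates best
    # with itself when shifted by one full period.
    for lag in range(int(sample_rate / 400), int(sample_rate / 50)):
        score = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return sample_rate / best_lag

rate = 4000.0
wave = [math.sin(2 * math.pi * 110 * t / rate) for t in range(2000)]
pitch = estimate_pitch(wave, rate)  # lands near 110 Hz
```

A tuner then just compares the estimate against the target frequency for the string being tuned; because vibration is sensed through the body, ambient noise never enters the samples.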
James Burke’s timeless TV moment
And finally, a cultural throwback that still resonates with technologists: an Open Culture piece revisited an 80-second clip from James Burke’s 1978 BBC series “Connections,” often described as one of the greatest shots in television.
Burke explains rocket propellants and cryogenic storage while a rocket launch unfolds behind him, timed like choreography. The real charm, though, is what the clip represents: the payoff of connecting mundane technologies to world-changing outcomes. In an era where tech explanations are often either too shallow or too long, it’s a neat reminder that clarity plus storytelling can make complex subjects feel both accessible and important.
Subscribe to edition-specific feeds:
- Space news
* Apple Podcast English
* Spotify English
* RSS English Spanish French
- Top news
* Apple Podcast English Spanish French
* Spotify English Spanish French
* RSS English Spanish French
- Tech news
* Apple Podcast English Spanish French
* Spotify English Spanish French
* RSS English Spanish French
- Hacker news
* Apple Podcast English Spanish French
* Spotify English Spanish French
* RSS English Spanish French
- AI news
* Apple Podcast English Spanish French
* Spotify English Spanish French
* RSS English Spanish French
Visit our website at https://theautomateddaily.com/
Send feedback to [email protected]
Youtube
LinkedIn
X (Twitter)