Why I Don’t Think AGI Is Right Around the Corner
Continual learning is the primary bottleneck: current LLMs lack humans' ability to learn on the job and improve from feedback (a conceptual sketch of the distinction follows this item).
Without continual learning, AI’s automation of complex white-collar work is capped below 25% in the near term.
Sophisticated AI agents handling tasks like taxes face steep computational and data challenges.
Emerging tools and models (e.g., Claude Code, Gemini 2.5) show early reasoning abilities that suggest initial steps toward AGI.
Patel projects small business tax automation by around 2028 and human-level on-the-job AI learning by about 2032, contingent on breakthroughs in adaptive learning.
Long-term progress depends more on algorithmic innovation than on scaling compute alone.
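
To make the continual-learning gap concrete, here is a minimal, purely conceptual Python sketch (my illustration, not from the essay): a frozen model keeps repeating its mistakes, while an online learner folds each piece of feedback back into its parameters. The perceptron update rule and all variable names are illustrative stand-ins, not a real LLM training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(w, x):
    return 1 if w @ x > 0 else 0

def feedback_update(w, x, y_true, lr=0.1):
    # Perceptron-style correction: nudge weights only when the prediction is wrong.
    return w + lr * (y_true - predict(w, x)) * x

w_true = rng.normal(size=3)    # hidden rule generating the "correct" answers
w_frozen = rng.normal(size=3)  # deployed once, never updated again
w_online = w_frozen.copy()     # same starting point, but learns from feedback

frozen_errors = online_errors = 0
for _ in range(500):
    x = rng.normal(size=3)
    y = predict(w_true, x)                       # ground truth, e.g. a user's correction
    frozen_errors += predict(w_frozen, x) != y
    online_errors += predict(w_online, x) != y
    w_online = feedback_update(w_online, x, y)   # the continual-learning step

print(f"frozen: {frozen_errors} errors, online: {online_errors} errors")
```

On a stream where feedback reflects a stable underlying rule, the online learner's error count falls well below the frozen model's; that is the gap the essay argues current LLMs cannot close.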

Nobody Has A Personality Anymore: We Are Products With Labels
Society increasingly interprets personality traits via mental health diagnoses rather than as unique human qualities.
Common behaviors (e.g., tardiness, shyness) are reframed as clinical symptoms like ADHD or autism.
This trend risks eroding individuality, romanticism, and acceptance of normal human complexity.
Mental health identity is particularly salient among younger generations; e.g., 72% of Gen Z girls reportedly view mental-health challenges as core components of their identity.
The article calls for embracing unknowable human aspects and resisting self-reduction to diagnostic labels.

Jane Street barred from Indian markets over alleged Nifty 50 manipulation
India’s SEBI froze $566 million from Jane Street amid accusations of deliberately manipulating the Nifty 50 index using complex trades in banking sector stocks, futures, and options.
Tactics allegedly involved aggressively buying constituent stocks early in the session to inflate the index, then unwinding those positions while a much larger short options exposure profited as the index fell, causing market distortions (a simplified numeric sketch follows this item).
SEBI described the trades as lacking economic rationale beyond manipulation, continuing despite prior warnings.
This enforcement highlights challenges in regulating foreign algorithmic traders in emerging derivatives markets with large retail participants.
The case fuels debate on the boundary between aggressive market making, arbitrage, and manipulation.
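
To see why such a pattern can pay despite losing money on the stock leg, here is a toy Python calculation. Every number below is hypothetical and mine, not SEBI's, and the linear option payoff is a deliberate simplification.

```python
# Toy illustration of the alleged "lift the index, profit on options" pattern.
# All figures are hypothetical; they are not SEBI's numbers.

index_open = 50_000.0
index_high = index_open * 1.01          # constituent buying lifts the index ~1%
index_close = index_open * 0.995        # unwinding drags it below the open

# Leg 1: cash/futures. Buy near the open, dump into the close (a losing leg).
stock_notional = 4_000e6                # hypothetical $4bn of constituent buying
stock_pnl = stock_notional * (index_close / index_high - 1)

# Leg 2: options. A far larger effective short exposure to the index,
# approximated linearly via delta for simplicity.
options_notional = 30_000e6             # hypothetical $30bn short exposure
options_pnl = options_notional * (index_high - index_close) / index_high

print(f"stock leg P&L:   {stock_pnl / 1e6:+.1f} mm (loses on the round trip)")
print(f"options leg P&L: {options_pnl / 1e6:+.1f} mm (profits as the index falls)")
print(f"net:             {(stock_pnl + options_pnl) / 1e6:+.1f} mm")
```

The point of the toy numbers is the asymmetry SEBI alleges: a tolerable loss on the cash leg buys a price move that a much larger derivatives position monetizes.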

applegenerativemodelsafetydecrypted: Inside Apple’s AI safety filters
The article documents decrypted Apple generative model safety files used to filter content in on-device AI.
It covers methods to extract Apple’s encryption key via Xcode LLDB debugging and decrypt JSON-based safety overrides.
Filters include exact-phrase rejects, replacements, removals, and regex patterns that block offensive or harmful outputs (a minimal sketch of applying such rules follows this item).
Apple’s layered filter architecture enforces strict content moderation aligned with corporate safety policies.
The article appeals to technically adept readers interested in AI safety engineering and reverse engineering corporate AI controls.
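
As a flavor of how such a layered filter could be applied, here is a minimal Python sketch. The JSON schema below (`reject`, `replace`, `remove`, `regexReject`) mirrors the four rule types the article describes, but the field names and example rules are my assumptions, not Apple's actual format.

```python
import json
import re

# Hypothetical override file mirroring the four rule types described in the
# article. The schema is illustrative, not Apple's actual on-disk format.
overrides = json.loads("""
{
  "reject": ["example banned phrase"],
  "replace": {"teh": "the"},
  "remove": ["[internal]"],
  "regexReject": ["(?i)\\\\bforbidden\\\\s+topic\\\\b"]
}
""")

def apply_safety_filter(text: str) -> str | None:
    """Return filtered text, or None if the output should be blocked."""
    for phrase in overrides["reject"]:             # exact-phrase block
        if phrase in text:
            return None
    for pattern in overrides["regexReject"]:       # regex-based block
        if re.search(pattern, text):
            return None
    for old, new in overrides["replace"].items():  # substitutions
        text = text.replace(old, new)
    for fragment in overrides["remove"]:           # deletions
        text = text.replace(fragment, "")
    return text

print(apply_safety_filter("teh quick [internal] fox"))  # -> "the quick  fox"
print(apply_safety_filter("a Forbidden  Topic here"))   # -> None
```

Ordering matters in such a pipeline: reject rules run before rewrites so that a blocked phrase cannot be laundered into an allowed one by a replacement.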

“The AI-flooding of Show HN is real”
Analysis of Hacker News Show HN data reveals that by 2025, over 20% of posts mention AI or GPT, rising sharply since 2023.
Despite volume, AI posts receive fewer votes and comments, suggesting less community engagement or interest.
The influx is described as disruptive to the original intent of Show HN as a platform for passion projects and hard work.
The author refrains from anti-AI rhetoric, focusing instead on a data-driven critique of community content quality and culture shift.
SQL queries and BigQuery analysis support the findings, inviting nuanced discussion of AI’s impact on developer communities (a representative query is sketched below).
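
For flavor, here is a minimal Python + BigQuery sketch of the kind of query involved, run against the public `bigquery-public-data.hacker_news.full` dataset. The AI-keyword regex is my own simplification; the article's actual queries may differ.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

# Rough share of Show HN posts mentioning AI-ish keywords, per year.
QUERY = r"""
SELECT
  EXTRACT(YEAR FROM timestamp) AS year,
  COUNT(*) AS total_posts,
  ROUND(100 * COUNTIF(REGEXP_CONTAINS(LOWER(title), r'\b(ai|gpt|llm)\b'))
        / COUNT(*), 1) AS ai_share_pct
FROM `bigquery-public-data.hacker_news.full`
WHERE type = 'story'
  AND STARTS_WITH(title, 'Show HN')
GROUP BY year
ORDER BY year
"""

client = bigquery.Client()  # assumes GCP credentials are configured
for row in client.query(QUERY).result():
    print(f"{row.year}: {row.ai_share_pct}% of {row.total_posts} Show HN posts")
```

The per-year share this produces can be compared against the article's figure of over 20% of Show HN posts mentioning AI or GPT by 2025.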