
In this episode of Stewart Squared, host Stewart Alsop explores the critical role of ontologies in computing with his father, guest Stewart Alsop II. The conversation covers how early internet pioneers like Yahoo and Amazon used ontologies to organize information, making it machine-readable, and examines whether companies like Apple might be leveraging ontological approaches for knowledge management. The discussion ranges from the historical Dewey Decimal System to modern applications in AI, the evolution of hardware-software integration, Apple's strategic positioning in the AI landscape, and the development of cloud computing infrastructure. Stewart Alsop II provides insights on technology readiness levels, the nature of LLMs as databases rather than active systems, and Apple's trust-focused strategy under Tim Cook's leadership. The hosts also touch on the geopolitical implications of cloud infrastructure, including China's data center investments in Brazil, and debate the future of personal computing devices in an AI-driven world.
Timestamps
00:00 Welcome and ontology introduction, discussing how Yahoo and Amazon created ontologies for search and product catalogs to make data machine-readable.
05:00 Dewey Decimal System analogy for ontologies, explaining how Yahoo organized the web by subject matter before LLMs eliminated the need for directories.
10:00 AI's strengths in structured domains like coding, law, and music, versus its inability to create genuinely new solutions independently.
15:00 Regulated industries using ontologies for documentation, and the challenge AI faces with unpredictable regulatory shifts such as RFK Jr.'s vaccine positions.
20:00 Hardware-software boundaries, and Apple's success virtualizing across different processor architectures with small, cathedral-style teams.
25:00 Apple's neural accelerators in M5 chips for local AI workloads, plus Apple Intelligence missteps and a team restructuring away from Google-style thinking.
30:00 LLMs as inert databases requiring tools for activation, distinguishing between large and small language models on devices.
35:00 Apple's personal computing vision with local LLMs, and the challenge of real-time data versus statically trained models.
40:00 Cloud computing's evolution from company data centers to modern real-time databases, and a search for the origins of the term "cloud."
45:00 Technology readiness levels for hardware versus the artistic squishiness of software; the principle that hardware fails hard while software fails soft.
Key Insights
1. Ontologies as Machine Reading Systems: Ontologies serve as structured frameworks that let machines read and understand data, much as the Dewey Decimal System organized libraries. Early internet companies like Yahoo and Amazon built ontologies for search and product catalogs, making information machine-readable (see the first sketch after this list). While LLMs have reduced reliance on traditional directories, ontologies remain crucial for regulated industries requiring extensive documentation.
2. AI Excels in Structured Domains: Large language models perform exceptionally well in highly structured environments like coding, law, and music because these domains follow predictable patterns. AI can convert legacy code across programming languages and help with legal document creation precisely because these fields have inherent logical structures that neural networks can learn and replicate effectively.
3. AI Cannot Innovate Beyond Structure: A fundamental limitation is that AI cannot create truly novel solutions outside existing structures. It excels at solving specific, well-defined problems within known frameworks but struggles with unstructured challenges requiring genuine innovation. This suggests AI will augment human capabilities rather than replace creative problem-solving entirely.
4. Apple's Device-Centric AI Strategy: Apple is uniquely positioned to fulfill the original personal computing vision by building AI directly into devices rather than relying on cloud-based solutions. Their integration of neural accelerators into M-series chips enables local LLM processing, potentially creating truly personal AI assistants that understand individual users while maintaining privacy.
5. The Trust Advantage in Personal AI: Trust becomes a critical differentiator as AI becomes more personal. Apple's long-term focus on privacy and user trust, formalized under Tim Cook's leadership, positions them favorably for personal AI applications. Unlike competitors focused on cloud-based solutions, Apple's device-centric approach aligns with growing privacy concerns about personal data.
6. LLMs as Intelligent Databases, Not Operating Systems: Rather than active agents, LLMs are better understood as sophisticated databases where intelligence emerges from relationships between data points. An LLM is essentially inert until activated by tools or applications (see the second sketch after this list), similar to how a brain requires a nervous system to act on the world.
7. Hardware-Software Integration Drives AI Performance: The boundary between hardware and software increasingly blurs as AI capabilities are built directly into silicon. Apple's ability to design custom chips with integrated neural processing units, communications chips, and optimized software creates performance advantages that pure software solutions cannot match, representing a return to tightly integrated system design.
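
To make the ontology idea concrete, here is a minimal Python sketch of a directory-style ontology. The categories are hypothetical, not Yahoo's actual taxonomy; the point is that a hierarchy lets a program navigate subject matter mechanically rather than by reading prose.

```python
# Hypothetical directory-style ontology: nested categories ending in leaf
# entries. The structure, not the text, is what a machine "reads".
ontology = {
    "Computers & Internet": {
        "Hardware": ["Apple Silicon", "GPUs"],
        "Software": ["Operating Systems", "Databases"],
    },
    "Science": {
        "Biology": ["Genetics"],
    },
}

def find_path(tree, target, path=()):
    """Walk the hierarchy and return the category path to a leaf entry."""
    for category, children in tree.items():
        if isinstance(children, dict):
            result = find_path(children, target, path + (category,))
            if result:
                return result
        elif target in children:
            return path + (category, target)
    return None

print(find_path(ontology, "Databases"))
# -> ('Computers & Internet', 'Software', 'Databases')
```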
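And here is a minimal sketch of the "LLM as inert database" point. The llm function below is a stub standing in for any real model call, and it does nothing but map text to text; only the surrounding loop, which routes the model's output to a tool, actually acts. Names like TOOLS and agent are illustrative, not any particular framework's API.

```python
from datetime import datetime

def llm(prompt: str) -> str:
    # Stub for a real model call: text in, text out, no side effects.
    return "TOOL:clock" if "time" in prompt else "I can only emit text."

# The tools live outside the model; they are what make it "active".
TOOLS = {"clock": lambda: datetime.now().isoformat()}

def agent(prompt: str) -> str:
    reply = llm(prompt)
    if reply.startswith("TOOL:"):  # the loop, not the model, decides to act
        return TOOLS[reply.split(":", 1)[1]]()
    return reply

print(agent("what time is it?"))  # runs the clock tool
```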
By Stewart Alsop II, Stewart Alsop III