In this episode, host Mohsin Ali speaks with Myles Harrison, Company Principal at PRAKTIKAI, about how human-AI collaboration has evolved over the past year and why effective prompting is becoming a core organizational capability. They discuss reframing prompting as context engineering, balancing AI autonomy with human oversight, and avoiding both over-trust in and over-control of AI systems. They also explore how leadership can drive responsible, scalable AI adoption by focusing on culture, education, and practical governance.
PureLogics Pulse Episode Chapters
00:00 – 01:06 | Opening Hook: Human-AI Collaboration in a New Era
The episode opens by framing the rapid shift from asking what AI can do to the more strategic question of how humans and AI should work together, setting the stage for collaboration over automation hype.
01:06 – 02:16 | Podcast Welcome & Episode Focus
Host Mohsin Ali welcomes listeners to PureLogics Pulse and introduces the episode’s focus on human-AI collaboration, effective prompting, and achieving meaningful synergy between people and AI systems.
02:16 – 03:40 | Guest Introduction & Background
Mohsin introduces Myles Harrison, Company Principal at PRAKTIKAI, highlighting his 17 years of experience in data and AI, consulting leadership roles, and community-building work in the AI ecosystem.
03:40 – 05:30 | How Human-AI Collaboration Has Changed
Myles explains how collaboration has shifted with generative AI—from static, embedded machine learning systems to conversational, multimodal, and agent-assisted workflows that enable real back-and-forth interaction.
05:30 – 07:24 | From AI Evangelism to Pragmatic Adoption
The discussion explores how organizations are moving out of the hype cycle, recognizing AI as a force multiplier that automates low-level knowledge work while still requiring human judgment and accountability.
07:24 – 10:36 | Effective Prompting as Context Engineering
Myles reframes prompting as a collaboration skill, emphasizing the importance of context, specificity, iteration, and breaking problems into smaller components to prevent AI systems from going off track.
10:36 – 13:16 | AI Autonomy vs. Human Oversight
The conversation examines agentic tools and AI-assisted coding, highlighting where organizations should draw clear boundaries and why human-in-the-loop oversight remains critical as stakes increase.
13:16 – 16:05 | Over-Trusting vs. Over-Restricting AI
Myles outlines the risks of both extremes—blind trust leading to fragile systems, and excessive restriction driving shadow AI usage—advocating for balanced, responsible adoption.
16:05 – 20:07 | Proactive AI Systems & Risk Management
The discussion shifts to proactive AI that predicts and intervenes, reinforcing the need for critical evaluation, bias awareness, and human review as AI-generated signals scale.
20:07 – 23:17 | Culture, Education & Incentivizing Adoption
Myles emphasizes change management, education, and empowerment, advising leaders to position AI as a job-enabling tool rather than a threat, using incentives instead of mandates.
23:17 – 26:09 | Defining AI’s Role in the Workforce
The episode covers how organizations should choose between collaboration, augmentation, and automation by grounding AI initiatives in clear use cases, clean data, and business readiness.
26:09 – 29:26 | Evaluation, Explainability & Scaling Responsibly
Myles highlights evaluation and transparency as key challenges, especially with large language models, and explains why responsible scaling from MVP to production requires strong governance.
29:26 – Conclusion | Final Takeaways
The episode concludes with a clear message: sustainable AI success depends on practical adoption, strong human oversight, and treating AI as a collaborative capability—not an autonomous decision-maker.
By PureLogics