The Cyber Business Podcast

Where AI Helps, Where It Hurts, and Why Governance Matters with Olivia Phillips



Olivia Phillips is the founder of Wolfbyte Technologies, an AI-focused consulting firm that helps organizations understand where artificial intelligence truly fits within their existing technology and security foundations. In addition to leading Wolfbyte Technologies, Olivia serves as Vice President of the USA Chapter of the Global Council for Responsible AI, where she works alongside global stakeholders to promote structured, ethical, and secure AI adoption. With a background spanning cybersecurity, intelligence, and hands-on operations, Olivia brings a practical, security-minded perspective to conversations that are often dominated by hype. Her work consistently centers on preparedness, responsible implementation, and protecting people as technology accelerates.

Here's a Glimpse of What You'll Learn
  • Why AI should be layered onto a strong foundation rather than rushed into production

  • How self-learning AI differs from large language models in security use cases

  • Why responsible AI requires structure, governance, and human oversight

  • How deepfakes and AI-driven fraud are impacting real people today

  • Why separation of systems and access still matters in a highly automated world

  • How AI can support security teams without replacing human judgment

  • What aspiring professionals should understand about careers, certifications, and networking

In This Episode

Olivia Phillips explains why many organizations approach AI backwards, focusing on tools before understanding their own environments. She describes how Wolfbyte Technologies helps clients inventory assets, understand dependencies, and ensure foundations are stable before introducing AI. Without that groundwork, she warns, AI can amplify existing weaknesses rather than solve problems.

The conversation dives deeply into AI and cybersecurity, particularly the difference between self-learning machine learning systems and large language models. Olivia outlines why self-learning systems are better suited for threat detection, while LLMs introduce risks such as hallucinations and prompt injection. She emphasizes that AI should reduce analyst workload, not create more busywork or new attack paths.

As Vice President of the Global Council for Responsible AI's USA Chapter, Olivia shares real-world examples of AI misuse, including deepfakes targeting family members. She stresses that responsible AI means placing structure around how systems are built, accessed, and monitored. Throughout the episode, she reinforces that technology alone cannot solve trust issues and that verification, separation, and human awareness remain essential.


The Cyber Business Podcast, by Matthew Connor