The Vernon Richard Show

Six Principles of Automation in Testing: Still Relevant in 2026?



In this episode, Richard Bradshaw and Vernon Richards discuss the six principles of Automation in Testing and how relevant they remain in the context of AI advancements. They explore how the principles hold up in 2026, the challenges faced in automation, and the future of testing strategies.



00:00 - Intro

01:47 - Welcome (Richard is not at home 👀)

02:07 - Ramadan, cooking without tasting, and plastic teeth 🦷

04:01 - Today's topic: revisiting the AiT principles ahead of a keynote

04:58 - What is Automation in Testing (AiT)?

06:49 - Principle 1: Supporting Testing over Replicating Testing

07:01 - Vernon's take: testing is a performance, not a click sequence

08:22 - What the industry promised vs what automation actually does

08:49 - The serendipity you lose when a human isn't testing

09:59 - Agentic testing: observing more, but still not replicating humans

10:56 - The danger of anthropomorphising AI output

12:10 - LLMs always give an answer — and that's the problem

13:03 - Principle 2: Testability over Automatability

13:14 - Vernon's take: narrow vs broad — operate, control, observe

14:38 - Making apps automatable for the robots but not the humans

15:37 - The shiniest framework in a broken testing context

16:40 - If it's testable, it's probably automatable — but not vice versa

16:55 - Automation strategy vs testing strategy: when they compete, everyone loses

17:46 - The problem has always been testing, not automation

19:57 - Principle 3: Testing Expertise over Coding Expertise

20:18 - Vernon's take: testing expertise lets you leverage the tools

21:47 - The spoonfed tests problem: great at automating, lost without guidance

22:36 - The "code school" era: everyone told to learn to code

22:51 - Coding agents have changed the maths on this

26:01 - The new nuance: test design and framework knowledge over writing the code

28:44 - Evaluating code is a testing problem — and LLMs can help you do it

30:43 - Are agents as good as a junior developer?

31:42 - Outcome Engineering (O16G) and the race to write the AI principles

32:13 - Simon Wardley: we're in the wild west again

33:22 - Principle 4: Problems over Tools

33:29 - Vernon's take: the hammer and the nail

34:07 - Don't let your problems be shaped by the framework you have

34:36 - New automation opportunities beyond testing: PRs, logs, story review

35:30 - Principle 5: Risk over Coverage

36:12 - Vernon's take: 100% coverage ≠ 100% risk coverage

38:00 - The one test case, one automated test fallacy

39:04 - Where in the system is the risk? Do you even know your layers?

39:49 - Probabilistic vs non-deterministic: refining the language around AI

40:53 - Coverage as intentional vs coverage as a number someone picked once

43:15 - Principle 6: Observability over Understanding

43:24 - Vernon's take: just-in-time understanding vs reading everything upfront

44:12 - What the principle was actually about: making automation results observable

47:00 - Does this principle belong in testing, or has it grown into quality?

49:00 - So... what's missing?

50:00 - The four pillars: Strategy, Creation, Usage, and Education

57:05 - Automation in Quality: the bigger opportunity

01:01:00 - Wrap up + Vern's Lead Dev panel


Links to stuff we mentioned during the pod:

  • 04:00 - Automation in Testing (AiT)
    • The principles live at automationintesting.com
    • AiT was co-created by Richard Bradshaw and Mark Winteringham
  • 04:00 - Test Automation Days
    • The conference where Richard is giving his keynote — testautomationdays.com
  • 24:48 - James Thomas
    • The "kid in a candy shop" himself — James's blog and LinkedIn
  • 31:42 - Outcome Engineering (O16G)
    • The article Richard shared before recording — worth tracking down if you're interested in where agentic development practices are heading
  • 32:13 - Simon Wardley
    • If you're not following Simon Wardley, please follow Simon Wardley! His work on Wardley Maps and situational awareness in strategy is essential reading
    • Simon's LinkedIn
  • 43:30 - Abby Bangser
    • Vern's go-to person for all things observability. Abby's LinkedIn
  • 46:04 - Noah Susman
    • As it turns out, the quote Vern references — advanced monitoring as "indistinguishable from testing" — wasn't Noah's at all! It was Ed Keyes at GTAC 2007.
    • Noah's blog and LinkedIn
  • 59:30 - Angie Jones
    • Vern's been reading Angie's work on testing AI-enabled applications here and here.
    • Angie's website and LinkedIn
  • 01:01:30 - The Lead Dev panel Vernon will be part of
    • "How to Measure the Business Impact of AI" — happening 25th February, free to sign up
  • 01:02:00 - Richard's Selenium Conf talk
    • "Redefining Test Automation" — the talk that the Test Automation Days keynote is shaping up to be a spiritual successor to.


The Vernon Richard Show, by Vernon Richards and Richard Bradshaw

