AI is transforming the world of work at speed—but with great potential comes a need for clarity, accountability, and smart, human-centred decisions. That’s why we developed the TRUSTED AI framework at LACE Partners: a practical, values-led model to help organisations adopt AI in ways that are responsible, secure, and truly useful.
In this episode of the HR on the Offensive podcast, Chris Howard sits down with LACE AI experts Martin Colyer and Charlie Frost to unpack the framework and explain what it means for forward-thinking HR teams that want to put their trust in AI.
Introducing the TRUSTED AI Framework
We’ve also published two companion blogs that go deeper into each element of the model—but here’s a quick guide to what TRUSTED stands for:
T — Technology
Make sure your AI tools and infrastructure match the needs of your people and business—not the other way around.
R — Regulation
Stay compliant with evolving laws and ethical expectations. Responsible AI starts with knowing the rules.
U — Usability
If AI isn’t intuitive, it won’t get used. Prioritise design that blends seamlessly into everyday workflows.
S — Security
Keep systems and data safe. Robust protection is essential when dealing with sensitive HR information.
T — Transparency
Build trust by making it clear how AI decisions are made, and how tools are trained and evaluated.
E — Ethics
AI must be fair and accountable. Bias and unintended consequences must be actively addressed—not left to chance.
D — Data
High-quality, well-governed data is the fuel for reliable AI. Without it, everything else breaks down.