Guest post by Ronnie Hamilton, Pre-Sales Director, Climb Channel Solutions Ireland
There have been hundreds of headlines about the AI skills gap. Analysts are warning that millions of roles could go unfilled. Universities and education providers are launching fast-track courses and bootcamps. And in the channel, partners are under pressure to bring in the right capabilities or risk being left behind.
But the challenge isn't always technical. Often, it's much more basic: for many, the biggest question is simply where to begin. More often than not, organisations are keen to explore the potential of AI but don't know how to approach it in a structured way. It's not a lack of intelligence, initiative or skill holding them back - far from it. It's the absence of a shared framework, a common language, or a clear starting point.
From marketing departments using ChatGPT to create content to developers trialling Copilot to streamline workflows, individuals are already experimenting with AI. However, these activities tend to happen in isolation, with such tools used informally rather than strategically. Without a roadmap or any kind of unifying policy, businesses are left with a fragmented approach - and the result is that AI becomes something that happens around the organisation rather than being part of it.
This can also introduce more risks, particularly when employees input sensitive data into external tools without proper controls or oversight. As models become more integrated and capable, even seemingly innocuous actions, like granting access to an email inbox or uploading internal documents, can expose large volumes of confidential company data. Without visibility into how that data is handled and used, organisations may unknowingly be increasing their risk surface.
Rethinking what 'AI skills' means
The term "AI skills" is often used to describe high-end technical roles like data scientists, machine learning engineers, or prompt specialists. Such an interpretation has its drawbacks. After all, organisations don't just need deep technical expertise, they need an understanding of how AI can be applied in a business context to deliver value.
For example, organisations may want to consider how these tools can be used to support customers or to automate existing processes. Framing AI adoption in this way encourages communication around it and allows people to engage with AI confidently and constructively, regardless of their technical background.
Unfortunately, the industry's obsession with large language models (LLMs) has narrowed the conversation. AI has become almost entirely associated with a select number of tools. The focus has moved to interacting with models, rather than applying AI to support and improve existing work.
Yet for many partners, the most valuable AI use cases will be far more understated - including automating support tickets, streamlining compliance checks, and improving threat detection. These outcomes won't come from prompt engineering, but from thoughtful experimentation with process optimisation and orchestration.
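To make that concrete, the sketch below is a minimal illustration, not a recommended implementation, of what automating support-ticket triage might look like. It assumes the OpenAI Python SDK, an OPENAI_API_KEY in the environment, and an illustrative model name and category list; none of these details come from the use cases above.

```python
# Minimal sketch: classify an inbound support ticket so it can be routed
# automatically. The SDK usage is real (openai>=1.0); the model name,
# categories, and fallback queue are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = ["billing", "licensing", "technical-fault", "security-incident"]

def triage_ticket(ticket_text: str) -> str:
    """Place a ticket into exactly one known category, or defer to a human."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": "Classify the support ticket into exactly one of: "
                           + ", ".join(CATEGORIES)
                           + ". Reply with the category name only.",
            },
            {"role": "user", "content": ticket_text},
        ],
    )
    category = response.choices[0].message.content.strip().lower()
    # Fall back to a human queue rather than guessing on unexpected output.
    return category if category in CATEGORIES else "human-review"

print(triage_ticket("Our licence key stopped working after renewal."))
```

Note that the value here sits in the orchestration around the model - the fixed category list and the human-review fallback - rather than in the prompt itself.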
Removing the barriers to adoption
For many businesses, the real blocker to full-scale AI adoption isn't technical complexity; it's structural uncertainty. AI adoption is happening, but not in a coordinated way. There are few formal policies in place, and often no designated owner. In many cases, tools are actively blocked due to data security concerns or regulatory ambiguity.
That caution isn't misplaced. The EU AI Act, for example, requires organisations operating within or doing business with the EU to ensure that the people deploying and overseeing AI systems are adequately trained in their use. That alone raises important questions about accountability and strategy. This lack of ownership - rather than the technology itself - is where the real risk lies.
There's also an emotional barrier at play. We hear it all the time: the sense that others are further ahead, and that trying to catch...