Across EMEA, Artificial Intelligence (AI) is redefining industries, inspiring innovation, improving operations, and driving growth. Government and Irish businesses alike are embracing and capitalising on AI's potential to enhance customer experiences and gain a competitive advantage. But as adoption accelerates, new security challenges arise, demanding vigilant attention to protect these investments.
Forecasts indicate that AI could contribute trillions to the global economy by 2030, with Ireland well positioned to capture a significant share of this value. According to Dell Technologies' Innovation Catalyst Study, 76% of respondents say AI and Generative AI (GenAI) are a key part of their organisation's business strategy, while 66% of organisations are already in the early to mid stages of their AI and GenAI journey.
As AI becomes more embedded in everything from customer management to critical infrastructure, safeguarding these investments and tackling the evolving cyber threat landscape must be a priority. To that end, the success of AI integration in the region depends on addressing three critical security imperatives: managing the risks associated with AI usage, proactively defending against AI-enhanced attacks, and employing AI to strengthen the overall security posture.
Managing the Risks of AI Usage
Ireland, as a digital hub within the EU, must navigate a complex regulatory environment that includes the Digital Operational Resilience Act (DORA), the NIS2 Directive, the Cyber Resilience Act and the recently launched EU AI Act. These frameworks introduce stringent cybersecurity requirements that businesses leveraging AI must meet to ensure resilience and compliance.
AI's reliance on vast amounts of data presents unique challenges: models are built, trained, and fine-tuned on data sets, making the protection of that data paramount.
To meet these challenges, Irish organisations must embed cybersecurity principles such as least privilege access, robust authentication controls, and real-time monitoring into every stage of the AI lifecycle. However, technology alone isn't enough; these measures must also be implemented effectively, and the Innovation Catalyst Study highlighted that a lack of skills and expertise ranks as one of the top three challenges faced by organisations looking to modernise their defences. Bridging this skills gap is vital to delivering secure and scalable AI solutions: only with the right talent, governance, and a security-first mindset can Ireland unlock the full potential of AI innovation in a resilient and responsible way.
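To make the first of these principles a little more concrete, the sketch below shows what a minimal least-privilege wrapper with audit logging around a model inference call might look like. The roles, permissions, and predict() stub are hypothetical assumptions for illustration, not a prescribed implementation; a real deployment would hook into an organisation's own identity, serving, and monitoring stack.

```python
# Minimal sketch: least-privilege access and audit logging around a model call.
# All names (roles, the predict() stub) are hypothetical placeholders.
import logging
from datetime import datetime, timezone
from functools import wraps

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_audit")

# Role-to-permission mapping: each role gets only the actions it needs.
ROLE_PERMISSIONS = {
    "analyst": {"predict"},
    "ml_engineer": {"predict", "retrain"},
    "auditor": set(),  # reviews audit logs elsewhere; no model access
}

def require_permission(action):
    """Deny by default; allow only roles explicitly granted the action."""
    def decorator(func):
        @wraps(func)
        def wrapper(user, role, *args, **kwargs):
            allowed = action in ROLE_PERMISSIONS.get(role, set())
            # Every attempt, allowed or not, is logged for real-time review.
            audit_log.info(
                "%s user=%s role=%s action=%s allowed=%s",
                datetime.now(timezone.utc).isoformat(), user, role, action, allowed,
            )
            if not allowed:
                raise PermissionError(f"{role} may not perform '{action}'")
            return func(user, role, *args, **kwargs)
        return wrapper
    return decorator

@require_permission("predict")
def predict(user, role, features):
    # Placeholder for a real model inference call.
    return sum(features) / len(features)

if __name__ == "__main__":
    print(predict("alice", "analyst", [0.2, 0.4, 0.9]))  # allowed and logged
    try:
        predict("bob", "auditor", [0.1, 0.3])             # denied and logged
    except PermissionError as exc:
        print("blocked:", exc)
```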
A further step that Irish businesses can take to address AI risks is to integrate risk considerations across ethical, safety, and cultural domains. A multidisciplinary approach can help ensure that AI is deployed responsibly. Establishing comprehensive AI governance frameworks is essential. These frameworks should include perspectives from experts across the organisation to balance security, compliance, and innovation within a single, cohesive risk management strategy.
Countering AI-Powered Threats
While AI has enormous potential, bad actors are leveraging AI to enhance the speed, scale, and sophistication of attacks. Social engineering schemes, advanced fraud tactics, and AI-generated phishing emails are becoming more difficult to detect, with some leading to significant financial losses. Deepfakes, for instance, are finding their way into targeted scams aimed at compromising organisations. A 2024 ENISA report highlighted that AI-enhanced phishing attacks have surged by 35% in the past year, underscoring the need for stronger cybersecurity measures.
To stay ahead, organisations must prepare for an era where cyberattacks operate at machine speed. Transitioning to a defensive approach anchored in automation is key to responding swiftly and effectively and minimising the impact of advanced attacks. The future of AI agents in the cybersecurity domain may not be far off.
This means deploying AI-powered security tools that can detect anomalies in real time...
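As a rough illustration of the kind of real-time anomaly detection such tools rely on, the sketch below flags unusual login telemetry with scikit-learn's IsolationForest. The feature names, values, and thresholds are hypothetical assumptions rather than a reference design; a production tool would stream live events and feed alerts into an automated response pipeline.

```python
# Minimal sketch: unsupervised anomaly detection over hypothetical login telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=7)

# Baseline behaviour: [login_hour, failed_attempts, MB_downloaded]
normal = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around mid-morning
    rng.poisson(0.2, 500),    # failed attempts are rare
    rng.normal(50, 15, 500),  # modest data transfer per session
])

model = IsolationForest(contamination=0.01, random_state=7).fit(normal)

# New events: one ordinary session, one resembling credential abuse at 3 a.m.
events = np.array([
    [11.0, 0, 55.0],
    [3.0, 12, 900.0],
])
for event, score in zip(events, model.predict(events)):
    label = "ANOMALY" if score == -1 else "ok"
    print(f"{label}: hour={event[0]:.0f} fails={event[1]:.0f} MB={event[2]:.0f}")
```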