ISACA’s Chris Dimitriadis discusses the security concerns of mismanaged AI and why Ireland needs to prioritise effective AI governance.
“Good governance doesn’t slow innovation,” says Chris Dimitriadis. “It enables it.”
Dimitriadis is the chief global strategy officer at the Information Systems Audit and Control Association (ISACA), a professional association focused on IT governance that provides education, training, guidance and credentials to companies worldwide.
With the rise of AI in the workplace, ISACA has been working to equip companies with the skills necessary for proper AI governance, including the introduction of two new advanced credentials: the Advanced in AI Audit and the Advanced in AI Security Management certifications.
According to Dimitriadis, AI governance is critically important, especially in Ireland.
“Ireland should treat AI governance as a strategic capability,” he tells SiliconRepublic.com. “The country has a world-class talent base and a pivotal role in the global technology ecosystem.
“But to harness AI safely, organisations must invest in the people who provide oversight.”
Here he talks about why AI governance is vital, especially for audit and security professionals.
What kind of skills and knowledge do governance professionals need to build in relation to AI?
They don’t need to become data scientists, but they do need to understand how AI systems behave, where risk accumulates, how to audit models, and how controls must evolve.
What has fundamentally changed is that governance can no longer be treated as a static, compliance-driven exercise. AI introduces systems that learn, adapt and sometimes behave in unexpected ways, which means governance professionals need to think in terms of resilience and decision-making under uncertainty, not just predefined controls.
This requires an understanding of how AI can amplify operational risk – for example, how automation can propagate errors at scale, or how reliance on AI outputs can weaken human judgement if oversight is not clearly defined. Governance professionals must be able to assess not only whether controls exist, but whether they remain effective as systems evolve over time.
Data governance becomes central in this context. AI forces organisations to confront long-standing issues around data ownership, quality and access. Weak data discipline is no longer a background problem – it directly affects the reliability, fairness and security of AI systems, and therefore the credibility of decisions made using them.
Finally, effective AI governance depends on professionals who can translate complexity into action. That means working across cybersecurity, privacy, legal and business functions to establish governance models that are practical, auditable and aligned with how the organisation actually operates. When done well, governance does not slow innovation; it enables organisations to deploy AI with confidence, knowing they can explain, defend and correct outcomes when things go wrong.
‘Mismanaged AI doesn’t just introduce new risks; it makes existing risks harder to see and harder to contain’
What specific AI-enabled threats are worrying security teams the most?
The shift we’re seeing is that traditional attacks are being supercharged by AI. Deepfake-enabled fraud, personalised phishing created in seconds, voice spoofing: these threats are becoming faster and more convincing. We are also seeing existing AI algorithms weaponised into hacking tools in the hands of adversaries, as well as a marketplace of advanced hacking tools that lets attackers operate at the speed of intent.
What is particularly worrying for cybersecurity teams is not just the technical sophistication, but the democratisation of attack capability. AI has lowered the barrier to entry to the point where individuals with very limited skills can launch high-volume, highly credible attacks. This is driving a surge in opportunistic campaigns that target scale rather than precision, while at the same time enabling more advanced actors to operate with greater speed and persistence.
Another concern is that AI is amplifying the impact of existing weaknesses rather than introducing entirely new ones. Organisations at very different levels of cybersecurity maturity are being affected, because AI-driven attacks exploit gaps in processes, behaviour and decision-making, not just technology.
Even highly mature organisations are discovering new exposures as AI-driven techniques accelerate reconnaissance, automate lateral movement and identify weak points faster than human-led defence models can react.
ISACA’s findings reflect this: two-thirds of security professionals are very concerned that AI will be used against their organisations, and almost all expect attackers to exploit it.
For a country like Ireland – with high-value tech, finance and public-sector targets – this creates a disproportionate exposure. A single successful AI-enabled attack can ripple across supply chains, public services and international operations, well beyond the initial point of compromise.
What are the security risks of mismanaged AI integration?
The biggest risk of mismanaged AI integration is the false sense of security it can create. Organisations may assume that because AI is powerful, automated or “intelligent”, it inherently improves security. In reality, poorly governed AI can expand the attack surface, accelerate the spread of errors and obscure accountability.
AI systems often operate across multiple datasets, tools and third parties. Without clear governance, this creates blind spots around data exposure, access privileges and supplier risk. When something goes wrong, organisations may struggle to understand why a decision was made or how an incident unfolded – which complicates both response and accountability.
In effect, mismanaged AI doesn’t just introduce new risks; it makes existing risks harder to see and harder to contain.
What are the risks unique to audit and security professionals?
Audit and security professionals face a dual challenge. On one hand, they are increasingly targeted because they control access, approvals and oversight. On the other, they are expected to provide assurance over systems that behave dynamically and do not always produce repeatable outcomes.
AI challenges traditional audit assumptions. Models evolve, decisions may not be deterministic, and evidence must often be assessed over time rather than at a single point. This requires new approaches to assurance and monitoring.
There is also a cognitive risk. As AI tools are used to accelerate analysis and decision-making, professionals must guard against over-reliance on automated outputs. Maintaining professional judgement – knowing when to trust AI and when to challenge it – becomes a critical skill in its own right.
In your opinion, what are the most important considerations for AI governance going into 2026?
Going into 2026, the most important shift organisations need to make is moving from AI adoption to AI governance at scale. Data from ISACA’s latest Tech Trends and Priorities Pulse Poll shows that while AI and machine learning are now top technology priorities, only 13pc of organisations say they feel very prepared to manage generative AI risks. That gap between ambition and readiness is where governance becomes critical.
One key consideration is resilience, not just compliance. Regulatory requirements will continue to expand, but governance cannot stop at meeting minimum standards. Growing concern around AI-driven social engineering, ransomware and business continuity reflects the reality that AI is becoming embedded in core operations. Governance therefore needs to focus on how organisations detect failure, respond to incidents and maintain trust when things go wrong – not just how they prevent issues on paper.
Another major factor is regulatory complexity. With frameworks such as NIS2, DORA and the EU AI Act coming into force, many organisations still don’t feel ready. AI governance in 2026 will require translating overlapping regulatory expectations into coherent internal controls, clear accountability and auditable processes, or compliance risk will quickly become operational and reputational risk.
Skills are the third pillar. Effective AI governance depends on people who understand not only how AI works, but how it reshapes risk, decision-making and accountability across the organisation. Without that capability inside audit, risk and security teams, governance frameworks remain theoretical.
Finally, organisations need to address the foundations that AI relies on. Legacy systems, fragmented data environments and cloud security weaknesses continue to constrain governance efforts. Modernising infrastructure and strengthening data and cloud controls are not parallel initiatives – they are prerequisites for governing AI responsibly.