European Companies Are Flying Blind on AI Cyberattacks, New Study Warns
A new survey of 681 digital trust professionals across Europe reveals a troubling blind spot: 35% of organizations cannot say whether they have been hit by an AI-powered cyberattack. The finding exposes a dangerous gap between rapid AI adoption and the security oversight needed to keep pace, part of a broader pattern in which businesses deploy AI tools at scale without the governance frameworks to manage the risks those tools create.
Why Are AI-Powered Attacks So Hard to Detect?
The challenge isn't just that attackers are using AI; it's that defenders struggle to recognize when they've been targeted. According to the ISACA research, 71% of respondents believe AI-powered phishing and social engineering attacks are harder to detect than traditional threats. The problem compounds further: 58% said AI has made it significantly harder to authenticate digital information, undermining a fundamental security practice.
Misinformation and disinformation emerged as the top AI-related risk in the survey, cited by 87% of respondents, followed by privacy violations at 75% and social engineering at 60%. These aren't hypothetical concerns; they reflect real operational challenges that security teams face daily.
The Governance Gap: Why Adoption Outpaces Oversight
The disconnect between AI adoption and governance is stark. Across European workplaces, 82% of organizations expressly permit AI use, and 74% specifically allow generative AI tools. The most common applications include creating written content (69%), increasing productivity (63%), automating repetitive tasks (54%), and analyzing large datasets (52%). Many organizations report tangible benefits: 77% cited time savings, and 40% said AI increased capacity without additional headcount.
Yet despite this widespread deployment, only 42% of organizations have a formal, comprehensive AI policy in place. Even more concerning, 33% do not require employees to disclose when AI has contributed to work products. This lack of visibility creates operational risk and hands an advantage to malicious actors.
How Organizations Can Close the AI Governance Gap
- Establish Formal AI Policies: Develop comprehensive, documented policies that define where and how AI can be used across the organization, with clear approval workflows and oversight mechanisms.
- Mandate AI Disclosure Requirements: Require employees to document when AI has been used in work products, creating an audit trail that helps identify unauthorized or risky applications (a minimal sketch of such a record follows this list).
- Invest in AI-Specific Security Training: Build structured upskilling programs for security and assurance staff. More than half of respondents (54%) said they need to upskill within six months to retain their jobs, yet 21% of organizations provide no formal AI training at all.
- Deploy AI for Defensive Purposes: While 43% of organizations report that AI has improved their ability to detect and respond to threats, only 34% are actively deploying AI specifically to support cybersecurity efforts.
- Align with Regulatory Frameworks: The EU AI Act was the most widely referenced governance framework, cited by 45% of organizations, yet 26% still follow no framework at all.
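The disclosure requirement above lends itself to a concrete illustration. Below is a minimal sketch of what a single audit-trail entry might look like, assuming an append-only JSON Lines log; the schema, field names, and file path are hypothetical illustrations, not a format prescribed by ISACA or the EU AI Act.

```python
# Minimal sketch of an AI-usage disclosure record, assuming an
# append-only JSON Lines audit log. The schema and field names
# (tool, purpose, reviewed_by, work_product) are hypothetical
# illustrations, not a standard from the survey or any regulation.
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDisclosure:
    author: str        # employee who used the AI tool
    tool: str          # which approved tool contributed
    purpose: str       # what the tool was used for
    work_product: str  # identifier of the affected document or artifact
    reviewed_by: str   # human reviewer, tying into the approval workflow
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_disclosure(record: AIDisclosure,
                   path: str = "ai_disclosures.jsonl") -> None:
    """Append one disclosure entry to the audit trail."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    log_disclosure(AIDisclosure(
        author="j.doe",
        tool="approved-genai-assistant",
        purpose="drafted the first version of a report summary",
        work_product="reports/q3-summary.docx",
        reviewed_by="a.smith",
    ))
```

Keeping the log append-only and tying each record to a named human reviewer is what turns simple disclosure into a usable audit trail: entries accumulate chronologically and can be queried later to spot unauthorized or risky tool use.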
The skills challenge is particularly acute. According to the survey, 41% of respondents named the growing skills gap as one of the biggest risks posed by AI. Over the next year, 79% said they need to upskill to remain competitive, yet many organizations are not investing in the training needed to build that expertise.
Chris Dimitriadis, Chief Global Strategy Officer at ISACA, emphasized the urgency of the moment. "AI has fundamentally changed the threat landscape. Attackers can now hack at the speed of intent, and too many organisations don't even know whether they've already been on the receiving end," he stated. He added that "ungoverned AI doesn't just create operational risk. It actively hands an advantage to those who want to cause harm."
What Role Is the EU AI Act Playing in Governance?
The EU AI Act has become the primary reference point for organizations seeking regulatory guidance. The ISACA survey found that 45% of organizations cited the EU AI Act as their governance framework, compared to 26% who referenced NIST standards. However, awareness of the regulation does not automatically translate into compliance or effective implementation.
The European Data Protection Board (EDPB) has recognized this challenge and is working to bridge the gap. In 2025, the EDPB adopted the Helsinki Statement on Enhanced Clarity, Support, and Engagement, which outlines new initiatives to make GDPR compliance easier and strengthen consistency across the EU. The Board is also developing joint guidelines with the European Commission on how the AI Act and EU data protection laws interact, with adoption planned for 2026.
This regulatory support is essential because the complexity of the digital regulatory landscape has grown significantly. The EDPB noted that "the rapid expansion of the EU's digital regulatory framework has added complexity to the data protection ecosystem." To help organizations navigate this complexity, the EDPB has prioritized enhancing legal certainty, making compliance more achievable in practice, and strengthening cooperation among regulators.
Dimitriadis concluded that the challenge is not a departure from established risk management principles but a test of whether organizations can apply them quickly enough in a more complex environment. "The fundamentals of good risk management have not changed. What has changed is the complexity and speed of what practitioners are now being asked to govern," he explained. Organizations that invest in AI governance capability now will not only be better protected; they will also be better positioned to realize AI's benefits.