A new study has found that 68% of organisations have experienced data leaks linked to the use of AI tools, yet only 23% have formal security policies in place to address these risks. The findings are based on data security platform Metomic’s ‘2025 State of Data Security Report’. The company prepared the report in collaboration with Harris Interactive, after surveying 404 chief information security officers (CISOs) and security leaders in the US and UK.

The report highlights growing challenges as AI becomes more embedded in daily operations. Despite 90% of respondents expressing confidence in their organisation’s security measures and 91% believing their employee training programmes are effective, many still reported frequent incidents. These included malware, phishing, and data breaches, often linked to the improper use of AI systems. The data underscores a critical gap in AI-specific security protocols even as adoption of these tools accelerates.

“Our research shows that employees using AI applications without proper guardrails are unwittingly exposing sensitive company data at an alarming rate,” said Metomic’s co-founder and CTO, Ben van Enckevort. “The gap between security leaders’ confidence and the actual threat landscape represents one of the most significant blind spots in modern cybersecurity.”

Data breaches remain top concern amid AI adoption

Customer and internal data breaches remain a primary concern for 84% of security leaders. However, concern about ransomware attacks and breaches involving third-party suppliers has grown since 2024, with these now cited by 83% and 80% of respondents, respectively. The shift reflects a trend in which cyber threats increasingly target extended networks and external partners.

In the US, internal breaches (88%) and ransomware attacks (87%) were top concerns. In the UK, phishing and compromised accounts (84%) were the most cited issues, followed by customer data breaches (83%).

The survey also found that AI adoption continues to rise across industries. In the UK, 34% of organisations now use AI for customer service, up from 19% in 2024. In the US, AI is more commonly used for employee security training, with 22% of respondents using it for this purpose, compared to 17% in the UK.

Organisational culture remains a key obstacle to improving data security. When asked about barriers to programme success in 2025, 80% of security leaders identified the challenge of building a strong internal security culture. Looking ahead, 44% of security leaders plan to focus on infrastructure oversight and implementation, particularly in securing AI systems. This marks a shift from previous years when security operations took priority.

The report also outlined five recommended actions for CISOs, including strengthening security culture, improving AI governance, adopting adaptive security models, and deploying multi-layered AI defences.

Last month, research by Mimecast revealed that over 55% of organisations lack specific strategies to tackle AI-driven cyber threats. The report highlighted increasing concerns regarding AI-related vulnerabilities, insider threats, and gaps in cybersecurity budgets.