How AI is shaping the future of cybersecurity

AI is revolutionising business at lightning speed – but are we ready for the risks it brings? Cybercriminals are already leveraging generative AI to craft phishing attacks that are nearly undetectable and to spread highly convincing disinformation.

Cybersecurity has entered a new era, and while adopting AI is no longer optional, the real challenge lies in doing so responsibly without compromising your organisation’s cyber resilience.

The stakes are high: 57% of organisations have limited their Generative AI (GenAI) rollout to low-risk users, and 40% have delayed deployment by three months or more due to data security and governance concerns.

GenAI is cybercrime’s new weapon

In its 2025 Global Threat Report, CrowdStrike highlights how GenAI has rapidly emerged as a preferred tool for cybercriminals, owing to its accessibility and ease of use.

In 2024, GenAI played a bigger role in cyberattacks, especially in social engineering and information operations. Cybercriminals used GenAI to create realistic, convincing content without needing much training or effort, making it perfect for spreading deception on a large scale.

For example, the North Korean group FAMOUS CHOLLIMA used GenAI to create fake LinkedIn profiles and trick recruiters, even using it to generate responses during interviews. Cybercriminals have also leveraged GenAI for financial scams. In 2024, cases included deepfake videos of executives used to steal millions of dollars and voice cloning to pull off business email compromise (BEC) schemes.

The connection between GenAI and social engineering is becoming clearer in malware trends. For example, GoldPickaxe, a mobile malware targeting biometric data, has been used in the Asia Pacific to create deepfake videos since late 2023.

On top of that, research shows LLMs are better than humans at crafting phishing emails, with much higher success rates. These trends highlight how GenAI is changing the threat landscape and why strong defences against its misuse are critical.

Top insights from Gartner’s AI risk report

Gartner’s latest Market Guide for AI Trust, Risk and Security Management also dives into the challenges of adopting AI without proper governance – scenarios that impact every facet of business, including data management.

Their findings highlight the importance of trust, risk, and security management (TRiSM) in AI systems (more about that later). For now, here is a summary of Gartner’s key findings:

  • Organisations face various risks when using AI, with top concerns including data breaches, risks from third-party systems, and inaccurate or harmful outputs.
  • While attacks targeting enterprise AI are still rare, incidents involving uncontrolled, harmful chatbots and internal data sharing issues are frequently reported.
  • Layered measures for managing AI TRiSM apply to all types of AI, including built-in, custom-built, and advanced autonomous systems. These measures work alongside traditional security technologies.
  • A new market is forming around AI governance and enforcement tools, with unique offerings specifically designed to address AI-related risks.
  • The demand for GenAI TRiSM tools is growing, drawing competition from vendors of all sizes. Some vendors focus on security and risk mitigation, while others prioritise ethical practices, safety, and meeting compliance requirements. However, no single solution currently addresses all AI risks and challenges.
  • Managing AI trust and security often highlights gaps between organisational teams, prompting them to work together across departments to find effective solutions.

How TRiSM is forecast to shape AI in business

Gartner’s Market Guide for AI TRiSM recommends proactively managing AI risks by establishing a TRiSM framework that ensures responsible AI adoption before deployment.

Their AI TRiSM model is built on four key layers of technical capabilities, supported by a foundational fifth layer that includes more conventional technology controls, like network, endpoint, and cloud security solutions.

The top two layers of Gartner’s AI TRiSM framework are the newest additions to the party: AI governance and runtime solutions. These two functions are merging into a new market segment designed to oversee AI interactions more effectively.

By combining AI inventory management and continuous evaluations with runtime inspection and enforcement, teams can perform real-time risk analysis of AI systems, tied back to a continuously updated, risk-scored inventory.

This new category builds on the foundation of traditional tools found in the bottom layers of AI TRiSM, which focus on AI information and workloads.
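To make the idea concrete, here is a minimal sketch of how a risk-scored AI inventory might feed runtime enforcement. All names (`AISystem`, `AIInventory`, the 0.7 threshold) are hypothetical illustrations, not part of Gartner’s framework or any vendor’s product.

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    # Hypothetical inventory record for one AI system in the estate
    name: str
    risk_score: float = 0.0  # continuously updated by evaluations


class AIInventory:
    """Illustrative risk-scored inventory with a runtime policy check."""

    def __init__(self):
        self._systems = {}

    def register(self, system: AISystem) -> None:
        self._systems[system.name] = system

    def update_risk(self, name: str, score: float) -> None:
        # Continuous evaluation feeds revised scores back into the inventory
        self._systems[name].risk_score = score

    def allow_request(self, name: str, threshold: float = 0.7) -> bool:
        # Runtime enforcement: block interactions with high-risk systems
        return self._systems[name].risk_score < threshold


inv = AIInventory()
inv.register(AISystem("support-chatbot", risk_score=0.2))
print(inv.allow_request("support-chatbot"))  # low risk: interaction allowed
inv.update_risk("support-chatbot", 0.9)      # an evaluation flags a problem
print(inv.allow_request("support-chatbot"))  # now blocked at runtime
```

The point of the sketch is the feedback loop: evaluations update the inventory, and every runtime decision consults the latest score rather than a one-off pre-deployment review.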

Building cyber resilience for tomorrow

Looking ahead, it’s clear that the future of AI isn’t just about innovation – it’s about finding the sweet spot between moving fast and staying accountable. AI is evolving rapidly, bringing incredible opportunities, as well as big responsibilities.

Gartner predicts that by 2027, ‘AI TRiSM as a service’ will emerge as a viable outsourced option for organisations lacking resources to implement comprehensive AI TRiSM services internally.

Additionally, Gartner expects that by 2028, 25% of large organisations will establish consolidated information governance teams – up from less than 1% in 2023.

These predictions highlight a critical reality: The organisations that thrive will be those that view AI risk management not as a barrier to innovation, but as an enabler of responsible, scalable AI adoption.

Get in touch for a free, no-obligation consultation

Arrange a chat with our experienced team to discuss your data protection, disaster recovery, cloud or security requirements.

  • Arrange an introductory chat about your requirements
  • Gain a proposal and quote for our services
  • View an interactive demo of our service features

Prefer to call now?
Sales and Support
1300 88 38 25

© 2021 Global Storage. All rights reserved.