AI. The great cloud optimiser.

Wondering how AI will transform cloud services? Here it is, from the horse’s mouth (Gartner):

“The adoption of AI within cloud services is poised to revolutionize IT operations, embedding AI as a fundamental element across everything from infrastructure management to application deployment.” ~ Dennis Smith, Distinguished VP Analyst, Gartner.

So, what could go wrong?

Why AI is driving up the cost of cloud

While AI-infused cloud services are set to revolutionise IT operations, this transformation will come at a high cost.

Gartner warns that not only could the energy demands of handling AI requirements increase by more than 300% in the next four years, but “by 2030, companies that fail to optimize the underlying AI compute environment will pay over 50% more than those that do.”

With Gartner additionally predicting that “over 80% of enterprises will deploy industry-specific AI agents in support of critical business objectives by 2030” (compared with less than 10% today), and that “more than 60% will conduct intensive AI model activity across multiple clouds”, the heat is on. But on whom?

The impact on data centres? A total overhaul of power and cooling infrastructures.

The impact on your organisation? The ongoing challenge of balancing the cost of AI workloads within a financial management framework. In other words, you’ll need to diligently measure the business value and ROI of AI-enabled cloud solutions to avoid overspending.  

But on the other hand…

Adopting AI cloud services has the potential to blow out your IT budget, but the good news is that AI-powered tools also have the superpower to slash it.

How? Let’s count (just some of) the ways.

AI-powered cloud management tools can reduce costs through several mechanisms:

1. Demand forecasting and right-sizing

    Using AI, you can analyse your current versus historical cloud usage, seasonal patterns, and workload queues to proactively predict your future demand. With this information, you’ll always be able to provision just the right amount of resources. No more over-allocation and no waste!

    That right-sizing can also be applied to your instances and services. AI can compare your actual utilisation (CPU, memory, I/O) to your instance sizes and recommend smaller or more appropriate types. Again, you can reduce expensive over-provisioning without hurting performance.
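
To make that concrete, here’s a minimal Python sketch of the kind of comparison a right-sizing recommender might run. The instance catalogue, utilisation figures and 30% headroom factor are illustrative assumptions, not any particular vendor’s logic:

```python
# Minimal right-sizing sketch: compare observed peak utilisation to instance
# capacity and suggest the smallest type that still fits, with headroom.
# The catalogue and thresholds below are illustrative only.

INSTANCE_CATALOGUE = [
    # (name, vCPUs, memory_GiB) – ordered smallest to largest
    ("small", 2, 4),
    ("medium", 4, 8),
    ("large", 8, 16),
    ("xlarge", 16, 32),
]

def recommend_size(current: str, peak_cpu_pct: float, peak_mem_pct: float,
                   headroom: float = 1.3) -> str:
    """Return the smallest instance that still fits peak usage plus headroom."""
    names = [name for name, _, _ in INSTANCE_CATALOGUE]
    _, cur_cpu, cur_mem = INSTANCE_CATALOGUE[names.index(current)]
    need_cpu = cur_cpu * (peak_cpu_pct / 100) * headroom
    need_mem = cur_mem * (peak_mem_pct / 100) * headroom
    for name, vcpus, mem in INSTANCE_CATALOGUE:
        if vcpus >= need_cpu and mem >= need_mem:
            return name
    return current  # nothing fits; keep the current size

# A workload peaking at 20% CPU / 30% memory on a "large" drops to "medium".
print(recommend_size("large", peak_cpu_pct=20, peak_mem_pct=30))
```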

    2. Leveraging discounts

Based on your forecasted usage, AI can also show you where you can get more value by maximising long-term discounts (via reserved instances and savings plans) without making the mistake of underutilising them. AI can combine real-time telemetry with machine learning (ML) to scale resources up or down before demand spikes – so you never end up under-provisioning (and over-spending) during peaks.

And of course, you can automate all of this reporting and these recommendations, reducing human input to reviewing and decision-making.

    3. Opportunity hunting (for savings) and troubleshooting

AI tools can also save your organisation money and effort by spotting and pre-empting potential issues. For example, they can identify a workload that won’t be affected by a shift to cheaper spot instances – and schedule the move.

AI can flag suspicious or unusual spend patterns (like sudden traffic increases) that can send costs spiralling if left unchecked. You can set alerts for a range of deviations so you’re warned in good time and can immediately stop and remediate the activity.
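
As a simple illustration, spend anomalies of this kind can be caught with nothing more exotic than a deviation check against recent history. This Python sketch (with made-up daily figures) flags a day that strays too far from the baseline:

```python
# Illustrative spend-anomaly flag: warn when the latest day's spend deviates
# from the recent baseline by more than a set number of standard deviations.
from statistics import mean, stdev

def flag_unusual_spend(daily_spend: list[float], threshold: float = 3.0) -> bool:
    """True if the latest day's spend is an outlier versus prior history."""
    history, today = daily_spend[:-1], daily_spend[-1]
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

spend = [410, 395, 420, 405, 415, 400, 980]  # last value is a sudden spike
if flag_unusual_spend(spend):
    print("Spend anomaly detected – review before costs spiral")
```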

In situations where you have high-spend areas, AI can identify the owner(s) and allocate costs per department. And it can spot and clean up money wasters like idle databases, unattached volumes, unused snapshots, and stale backups. That gives you transparency over everything you’re potentially paying for but not using, so you can put it all under the financial microscope.
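
For a flavour of how the ‘unattached volume’ hunt works in practice, here’s a hedged sketch using AWS’s boto3 SDK. It only lists candidates – a real FinOps tool would also score, report on and schedule their removal – and it assumes configured AWS credentials:

```python
# Sketch of the "unattached volume" hunt on AWS using boto3.
import boto3

ec2 = boto3.client("ec2")

# EBS volumes with status "available" are not attached to any instance –
# you're paying for them, but nothing is using them.
resp = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)
for vol in resp["Volumes"]:
    print(f"Unattached: {vol['VolumeId']} ({vol['Size']} GiB, "
          f"created {vol['CreateTime']:%Y-%m-%d})")
```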

    Why is this all so important?

27% of cloud spend is wasted, according to Flexera’s 2024 State of the Cloud Report. And that’s something few organisations can afford.

In its 2025 report, Flexera reports that 84% of respondents see managing cloud spend as the top cloud challenge for today’s organisations. Understandably, with cloud spend expected to increase by 28% in the coming year (2026), many are rethinking their existing cloud cost management strategies.

While 87% of Flexera’s respondents name cost efficiency/savings as their #1 cloud goal, a focus on cost avoidance has jumped from 28% (2024) to 64% (2025). Cost avoidance, of course, is the practice of not incurring preventable and unnecessary expenses in the first place – something that AI tools (notably AI-driven FinOps tools) excel at.

    While the potential for cloud cost reduction and ROI varies across vendors and research agencies, what is clear is that AI and automation are critical enablers of such reductions.

As the journey to an AI-enabled workplace accelerates and we turn to AI to control the costs it generates as a byproduct, the old saying “Physician, heal thyself” seems all too fitting – and an essential strategy for survival.

    What are the best practices for building a resilient DR plan in 2026?

    If there’s one thing we’re all hyperaware of these days, it’s that nothing is set and forget.

A new year typically signals that it’s time to review our disaster recovery (DR) processes, practices and technology. For most of us, it’s not because we ‘got it wrong’ last year, but because the pace of change means we need to re-evaluate what we got right, learn from others less fortunate, track advances in technology, and decide what to take on board and apply to our own organisations.

    With a significant array of external forces – from cybercrime to floods to system failures – keeping us on our toes and second-guessing our own vulnerability, a near-enough DR plan isn’t nearly good enough.

    Three key strategies to ensure business continuity

    1. Make it (semi) permanent

    An investment in immutable backups as part of your disaster recovery strategy will dramatically improve your organisation’s resilience.

You’ve likely already got backup under control with your 3-2-1-1 strategy. The 3-2-1-1, of course, refers to the best-practice approach of making three copies of your data, stored on two different media, with one copy kept off-site and one kept as an immutable or air-gapped backup (often cloud-based).

    It’s tweaking that last backup option that’s potentially a game-changer for your business.

    If you’ve opted for air-gapped backups, then you’re relying on the practice of disconnecting your storage medium from your systems – it’s completely offline and safe from malware, viruses or ransomware. The only problem is that, even though it’s not connected to your network, a disgruntled admin or a malicious actor planted within your company can still sign in to the server and delete, corrupt, or encrypt your data.     

If you opt for an immutable backup, on the other hand, you’re locking that data down. This approach uses write-once, read-many (WORM) policies or object-lock technology to make your data impervious to change. Yes, it can be accessed and read on demand, but it can’t ever be overwritten or altered – regardless of the user’s permissions.

The data lockdown period can be set (to, say, 90 days), and at the end of that period, it’s unlocked, and your data is no longer immutable. You can choose to lock it down permanently, but since out-of-date data is generally of no use, a permanent lock is neither recommended nor necessary.
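
For the technically curious, here’s a minimal sketch of what an object-lock upload looks like using Amazon S3’s Object Lock via boto3. The bucket name and file are hypothetical, and the bucket is assumed to have been created with Object Lock enabled:

```python
# WORM-style immutability via S3 Object Lock: a 90-day retention window.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

# COMPLIANCE mode means nobody – not even the root account – can shorten
# the retention period or delete this object version until the date passes.
with open("db-2026-01-31.bak", "rb") as backup:
    s3.put_object(
        Bucket="example-immutable-backups",   # hypothetical Object Lock bucket
        Key="backups/db-2026-01-31.bak",
        Body=backup,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
    )
```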

    Some key benefits of immutable backups include:

• Audit trails show who accesses the data, with controls to determine who can access it
• Protection from ransomware and anyone trying to make malicious changes
• Clean, trustworthy data at all times
• Guaranteed data integrity – no bit-rot (the slow, silent corruption of digital data over time), corruption or accidental overwrites
• Compliance with the immutability and retention requirements of a wide range of industry frameworks (HIPAA, etc.)
• Easy, fast recovery with data that’s never corrupted and always ready to use
• Reduced operational and human-error risk, since accidental deletion is impossible
• Lower costs with cloud-based immutable solutions

    2. Set realistic targets, and stick to them

Your DR strategy should never be built on unrealistic, unachievable expectations. It should reflect realistic, appropriate RTOs (recovery time objectives) and RPOs (recovery point objectives) that together will protect your business and boost its resilience.

    Here’s why – together – they’re important:

    1. They protect your bottom line

    Every minute of downtime counts. The inability to operate can lead to significant financial losses, especially for e-commerce, financial services, or SaaS companies. Lower RPO values mean you’ve lost less data between backups. And having no gaps in your data is critical for maintaining data integrity, meeting regulatory or compliance requirements, and retaining customer trust.

    2. It’s all about balance (the right balance, that is)

The relationship between RTO/RPO and cost is exponential: the closer you push your targets to zero, the steeper the investment in infrastructure and resources. So, the key is finding targets that align with your actual real-world business needs (and budget) rather than pursuing goals that simply make you look good.

    For example, you could consider a tiered approach where, for your mission-critical systems, you have an RTO of between one and four hours and an RPO of 15 minutes. Whereas for your non-critical systems, an RTO of 48 hours and RPO of 24 hours may be perfectly acceptable.
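
A tiered policy like this is easy to express (and test against) in code. The sketch below is illustrative – the tier names and the middle tier’s targets are our assumptions, not a standard:

```python
# An illustrative tiered RTO/RPO policy, with a helper to check whether a
# DR test actually met the targets for its tier.
from datetime import timedelta

TIERS = {
    "mission-critical": {"rto": timedelta(hours=4), "rpo": timedelta(minutes=15)},
    "business-standard": {"rto": timedelta(hours=24), "rpo": timedelta(hours=4)},
    "non-critical": {"rto": timedelta(hours=48), "rpo": timedelta(hours=24)},
}

def met_targets(tier: str, recovery_time: timedelta, data_loss: timedelta) -> bool:
    """Compare a DR test result against the tier's targets."""
    t = TIERS[tier]
    return recovery_time <= t["rto"] and data_loss <= t["rpo"]

# A mission-critical system restored in 3 hours with 10 minutes of lost data passes.
print(met_targets("mission-critical", timedelta(hours=3), timedelta(minutes=10)))
```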

    In terms of best practice, get the balance right by:

• Carrying out a business impact analysis to determine your business’s actual (not imagined) tolerance for downtime and data loss
• Basing your targets on the requirements of your business, not just your IT capabilities
• Testing regularly to ensure your targets are achievable in real scenarios as well as on paper
• Communicating the costs to your leadership team so they understand the trade-offs you’re recommending
• Reviewing your RTO/RPO annually (or whenever you’re going through a significant phase of change or growth)

    The goal isn’t to have the most aggressive targets possible – few businesses can either afford them or even need them. As long as your targets are achievable and appropriate, you’ll still deliver operational resilience without breaking the bank.

    3. Test, don’t guess

    Thoughts and prayers are never enough if you’re planning to survive a cyberattack or a natural disaster. And guesswork is not your friend either.

    What looks and sounds like best practice on paper doesn’t necessarily translate into a smooth, successful, and reliable recovery in real life. An untested disaster recovery plan is…a disaster waiting to happen when you can least afford it.

The rollover into a new year is the ideal time to put your current plan through its paces – and put theories to the test. Only testing will reveal whether your disaster recovery plan has critical flaws, including:

    • Configuration errors in your backup systems or failover procedures
    • Documentation that’s out of date and doesn’t capture your current infrastructure
    • Overlooked dependencies between your systems  
    • Not enough resources in terms of bandwidth or storage capacity
    • Team members who aren’t clear about their responses and responsibilities
    • System vulnerabilities that no one expected

    It’s only by applying the lens of best practice and diligently testing your disaster recovery plan regularly that you can transform it into a reliable lifeline when disaster strikes.   

    What next?

If you’re even the slightest bit unsure whether those best-laid plans will help you survive a disaster, then let’s chat. Improvement is always possible, and adding resilience is rarely regretted.

    The dawn of a new era – AI vs. cybercrime

    If you spend enough time reading cybersecurity headlines, you might be forgiven for thinking artificial intelligence (AI) is purely a weapon for the bad guys.

    And to be fair, the statistics tell a clear story. Since the rise of generative AI, we’ve seen a staggering 1,200% global surge in phishing attacks.

    It’s a topic we’ve covered before at Global Storage, specifically regarding how AI is shaping the future of cybersecurity risks. But focusing solely on AI as a threat vector ignores the other side of the coin. AI could also be the most potent shield we have.

    For Australian technology decision-makers, the conversation is shifting from ‘how do we defend against AI?’ to ‘how do we use AI to defend ourselves?’

    With 2026 projected to be a pivotal year for autonomous systems and digital sovereignty in our region, leveraging AI for breach response readiness isn’t just a competitive advantage – it’s fast becoming a regulatory necessity.

    The autonomous shift in Australia and New Zealand

    Change is happening at pace and has been for a while. But technology leaders anticipate that 2026 will bring a transition towards increasingly autonomous AI systems in Australia and New Zealand.

    This goes beyond faster chatbots – it’s about creating systems that can reason, plan, and handle security tasks with minimal delay and little need for human intervention.

    This shift coincides with stricter regulatory measures driving a stronger convergence between IT and security. In a world where digital sovereignty is a priority, organisations must prove they can detect and neutralise threats instantly, keeping Australian data safe on Australian shores.

    Speed is the new compliance currency

Regulatory frameworks in Australia have teeth, and they operate on strict timelines. Consider the Security of Critical Infrastructure (SOCI) Act, which requires significant-impact incidents to be reported within 12 hours.

    Or APRA CPS 234, which demands notification within 72 hours of a material incident.

    In the second half of 2024 alone, the OAIC received 595 data breach notifications, with 69% caused by malicious attacks. While 66% of breaches were identified in less than 30 days, that timeline is nowhere near fast enough to meet a 12-hour or 72-hour reporting window.

    This is where AI can become your compliance engine. Humans simply cannot sift through terabytes of log data fast enough to identify a patient zero event within 12 hours.

    AI, however, excels at this. It enables predictive threat detection and automated response, ensuring that when you do notify the regulator, you have the full picture, not just a guess.

    It’s no surprise that 93% of organisations indicate AI will influence their cybersecurity investment decisions over the next year.

    Outsmarting the supercharged social engineer

    The modern threat actor is no longer sending typo-riddled emails from a ‘prince in Nigeria’. They are using generative AI to create hyper-personalised, error-free campaigns.

Recent reports indicate that AI-powered spear phishing attacks now have a 47% success rate against trained security experts. A notable development is the rise of deepfake business email compromise (BEC). In one instance, a UK engineering firm lost US$25 million after an employee was duped by a deepfake video conference that mimicked their CFO perfectly.

    To embrace proactive cyber defence, we must fight fire with fire. Traditional signature-based detection (looking for known bad code) is useless against a unique, AI-generated email. We need AI-driven behavioural analysis. These tools establish a baseline of normal behaviour for your users – when they log in, what files they access, and how they write emails. 

    When an account suddenly deviates from that pattern (even if they have the correct password), the AI flags it instantly. It is the difference between finding a breach in 200 days versus 2 minutes.
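
As a toy illustration of behavioural baselining, the sketch below flags a login hour that deviates sharply from a user’s history. Real UEBA tools model many more signals (and handle the midnight wrap-around this toy ignores), but the principle is the same:

```python
# Toy behavioural baseline: flag a login whose hour-of-day deviates sharply
# from the user's history. Production tools model far more than one signal.
from statistics import mean, stdev

def is_anomalous_login(login_hours: list[int], new_hour: int,
                       threshold: float = 2.5) -> bool:
    """True if the new login hour sits far outside the user's usual pattern."""
    mu, sigma = mean(login_hours), stdev(login_hours)
    sigma = max(sigma, 0.5)  # floor to avoid division by near-zero
    return abs(new_hour - mu) / sigma > threshold

history = [9, 9, 10, 8, 9, 10, 9, 8]  # a reliable nine-to-five user
print(is_anomalous_login(history, 3))  # a 3 a.m. login -> True, flag it
```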

    The necessity of keeping a human in the loop

    Despite the power of automation, AI is not a set-and-forget magic wand. It is a force multiplier, not a replacement for human judgment.

    Arctic Wolf correctly notes that full automation without oversight is rarely advisable. AI models require fine-tuning to avoid false positives – you don’t want your automated response system quarantining your CEO’s laptop during a board meeting because they logged in from a new iPad.

    There is also a trust gap to bridge. Interestingly, research shows that Australians and New Zealanders are ready for AI in critical sectors like emergency response, but only when they are aware of how it is being used. Trust increases significantly with awareness.

    The same logic applies to your internal stakeholders. To leverage AI effectively for compliance, you need a strategy that blends algorithmic speed with human strategic oversight.

    This ensures your defence is nuanced enough to understand business context, but fast enough to stop a machine-speed attack.

    Moving beyond experimental AI

    As we dive into 2026, AI in cybersecurity is moving beyond the experimental phase and into full operational maturity.

    By integrating AI into your breach response strategy, you aren’t just ticking a box for the SOCI Act or APRA. You are building a resilient organisation capable of withstanding the next generation of threats.

    SOC vs. MDR: Why your cyber strategy needs both to survive

    In the world of cybersecurity, acronyms are everywhere. For tech decision-makers trying to prevent a breach, the distinction between these acronyms isn’t just semantics – it’s the difference between a secure network and a very expensive headache.

    Two of the most commonly confused terms are MDR (Managed Detection and Response) and SOC (Security Operations Centre).

    While they are often sold as interchangeable silver bullets, they are fundamentally different disciplines. Relying on one without the other is a bit like installing a high-tech alarm system but leaving your front door wide open.

    To build true cyber resilience, you need to cut through the noise and understand why SOC and MDR are simply better together.

    What is Managed Detection and Response (MDR)?

    At its core, MDR is a service designed to hunt, investigate, and respond to threats. It is, by nature, reactive. It assumes that the ‘bad thing’ has already happened or is currently happening, and its job is to detect it, capture it, and respond to it.

    Think of MDR as the digital equivalent of reviewing security footage after a break-in. You can see exactly how the intruder got in, what they touched, and where they went. It is vital for understanding the scope of an attack and remediating it, but it is often retrospective.

    You can purchase off-the-shelf MDR solutions from vendors like Arctic Wolf or CrowdStrike. These tools are excellent at investigating incidents – answering the ‘what,’ ‘how,’ and ‘who’ of a breach.

    However, according to the 2025 Security Operations Report from Arctic Wolf, attackers are increasingly launching their assaults during “off-business hours.” The report highlights that 51% of security alerts are now triggered outside of the standard workday, making continuous, 24×7 visibility across the entire IT environment an absolute necessity, not just a nice-to-have.

    What is a Security Operations Centre (SOC)?

    If MDR is the team reviewing the footage after the fact, then the SOC is the security guard watching the live monitors 24/7, patrolling the perimeter, and checking that the windows are locked before anyone tries to climb through.

    A SOC is proactive. It scans your entire environment – looking at logs, traffic analysis, and telemetry data – to ask, ‘Where are our vulnerabilities?’ and ‘Is this behaviour normal?’

Unlike a standalone MDR tool that might flag a specific malware signature, a SOC looks at the bigger picture. It might notice an open port that shouldn’t be there or user behaviour that deviates slightly from the norm. It leverages SIEM (Security Information and Event Management) data to aggregate logs and identify patterns that a single lens of telemetry might miss.

    The power of combining MDR and SOC

    As mentioned, the belief that deploying an endpoint MDR agent provides total coverage is a risky misconception.

    When you rely solely on MDR, you are often looking at the world through a limited perspective. You might see what’s happening on the endpoint, but you’re missing the network traffic, the cloud logs, and the identity management data. You are effectively blind to the ‘grey area’ activity that precedes an attack.

    Conversely, a SOC without strong response capabilities can suffer from ‘analysis paralysis’ – identifying threats but lacking the tooling or authority to stop them instantly.

    As noted in recent industry analysis, while MDR focuses on rapid detection and containment, a SOC provides the broader organisational oversight required to maintain a hardened security posture.

The most secure organisations don’t choose between MDR and SOC – they combine them to build a stronger defence. Here’s why this integration is essential:

    • Clear insights
A SOC collects data from your entire infrastructure – firewalls, servers, cloud environments, and intrusion detection and prevention systems (IDPS). When you layer MDR on top of this, you give your ‘hunters’ a complete map of the terrain. They aren’t just seeing a virus alert – they are seeing the traffic that led to the download and the user account that authorised it.
    • Proactive and reactive mindset
      You need someone checking the locks (SOC) and someone ready to tackle the intruder (MDR). A SOC ensures your environment is hardened against attacks by identifying vulnerabilities proactively. If a sophisticated actor does slip through, the MDR capability kicks in to contain the threat immediately.
    • Smarter threat containment
One of the critical advantages of a combined approach is the ability to take an endpoint offline safely. In a standalone scenario, isolating a critical server might cause more business disruption than the attack itself. With the telemetry and context provided by a SOC, an MDR team can make informed decisions about containment – cutting off the attacker without cutting off your business (see the sketch after this list).
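
Here’s that idea in miniature: a simplified Python sketch that enriches an endpoint (MDR-style) alert with SOC-style network and identity telemetry before anyone decides what to isolate. All data structures and field names are illustrative:

```python
# Why SOC telemetry enriches an MDR alert: join an endpoint detection
# against nearby network and identity events before containing anything.

endpoint_alert = {"host": "srv-42", "time": 1700000000, "detection": "trojan.dl"}

network_flows = [
    {"host": "srv-42", "time": 1699999940, "dest": "203.0.113.9", "bytes": 48_210},
]
identity_events = [
    {"host": "srv-42", "time": 1699999900, "user": "j.doe", "event": "new-device-login"},
]

def enrich(alert, flows, ids, window=300):
    """Attach any network/identity activity within `window` seconds of the alert."""
    near = lambda e: e["host"] == alert["host"] and abs(e["time"] - alert["time"]) <= window
    return {**alert,
            "related_flows": [f for f in flows if near(f)],
            "related_identity": [i for i in ids if near(i)]}

# With the outbound traffic and the odd login that preceded the detection in
# hand, the team can isolate srv-42 (or just the session) with less guesswork.
print(enrich(endpoint_alert, network_flows, identity_events))
```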

    The verdict

    The message is clear. To keep your data safe in a landscape populated by increasingly sophisticated threats, you need the proactive vigilance of a SOC combined with the reactive speed of MDR.

    It’s not an ‘either/or’ decision. It’s about ensuring that when you leave the house, the doors are locked, the alarm is on, and someone is watching the cameras.

    Beyond the firewall: Embrace proactive cyber defence

    In the world of cybersecurity, the old saying ‘forewarned is forearmed’ has never been more relevant. Yet, too many organisations still operate on a ‘wait and see’ basis, only reacting to threats once the damage is done.

    This traditional, reactionary approach is like installing a smoke detector but having no plan for an actual fire. It’s a strategy that’s becoming increasingly ineffective against the sophisticated and persistent nature of modern cyberattacks.

    As cyber threats grow more sophisticated, cyberattacks have shifted from a potential threat to an unavoidable certainty. Despite massive global investment in cybersecurity, data breaches continue to be widespread.

    The Veeam 2025 Ransomware Trends and Proactive Strategies report highlights this trend, showing that 94% of companies plan to increase their recovery budgets for 2025, and 95% are allocating more funds toward prevention.

    The problem is, even with bigger budgets, many are still on the back foot. Instead, a proactive stance is essential for genuine cyber resilience.

    Let’s explore what proactive threat detection involves and how your organisation can shift from merely reacting to threats to actively hunting them down before they can cause significant harm.

    The overconfidence trap

    It’s easy to believe you’re more prepared than you actually are. In fact, Veeam’s 2025 Risk to Resilience Report reveals a stark reality: while 69% of ransomware victims felt prepared before an attack, that confidence plummeted by over 20% after the incident.

    This gap between perceived readiness and actual recovery capability highlights a critical flaw in many cybersecurity plans.

    Waiting for an alert means the adversary is already inside your network. A proactive strategy, on the other hand, assumes that threats may have already bypassed initial defences and actively seeks them out.

    This is the core principle of proactive threat detection.

    From defence to offence: The role of threat hunting

    Proactive threat detection involves a practice known as cyber threat hunting. Instead of waiting for automated security tools to flag a problem, threat hunting is the process of actively searching for cyber threats that are lurking undetected within a network.

    Think of it as the difference between a security guard who only responds to alarms and one who actively patrols the premises, looking for anything out of the ordinary.

    Threat hunters operate on the assumption that attackers may already be inside. They use their expertise, supported by advanced tools and threat intelligence, to uncover stealthy malicious actors who have slipped past initial defences.

    These adversaries can remain hidden for months, quietly gathering data, escalating privileges, and preparing for a larger attack. Threat hunting is crucial for finding them before they succeed.

    Adopting a Continuous Threat Exposure Management (CTEM) program

    To operationalise proactive detection, organisations are turning to structured approaches like Continuous Threat Exposure Management (CTEM). Gartner defines CTEM as ‘a pragmatic and systemic approach that organisations can use to continually evaluate the accessibility, exposure, and exploitability of their digital and physical assets.’

    Instead of just scanning infrastructure for vulnerabilities, a CTEM program aligns its focus with specific threat vectors or business projects.

    This allows for a more realistic assessment of risk and helps prioritise remediation efforts where they matter most. It highlights both patchable vulnerabilities and unpatchable threats that require different mitigation strategies.

    The potential impact is significant. Gartner predicts that by 2026, organisations that prioritise their security investments based on a CTEM program will experience a two-thirds reduction in breaches.

    Security leaders must consistently oversee their hybrid digital environments to quickly identify and effectively prioritise vulnerabilities, strengthening the organisation’s defences against potential attacks.

    Don’t forget the shared responsibility model

    A common misconception, particularly with the widespread adoption of cloud services, is that the cloud provider handles all aspects of security. This is dangerously incorrect.

    The shared responsibility model is a fundamental concept in cloud security that outlines the division of responsibilities between the cloud service provider (CSP) and the customer.

    While the CSP is responsible for the security of the cloud (i.e., the underlying infrastructure), the customer is responsible for security in the cloud.

    This includes securing your data, applications, access management, and network configurations.

    People often assume that because their data is in the cloud, it’s automatically backed up and protected from all threats. It is not.

    For example, with Microsoft 365, Microsoft ensures the service is running, but you are responsible for protecting your data from accidental deletion, internal threats, or ransomware attacks.

    This is why having a robust, third-party backup and disaster recovery strategy is non-negotiable, even for cloud-based data. It’s a critical component of your proactive defence, ensuring you can recover your data no matter what happens.

    Build a proactive defence today

    Moving from a reactive to a proactive cybersecurity posture is a strategic shift that requires expertise, the right tools, and a deep understanding of the threat landscape.

    Don’t wait for an attack to reveal the gaps in your defence. Take a proactive stance and build a security strategy that is as dynamic and relentless as the threats you face.

    Is the Government being a tad overprotective of our critical infrastructure?

    In our previous critical infrastructure blog, we discussed the Security Legislation Amendment (Critical Infrastructure Protection) Act 2022 – aka the SLACIP Act, whether it applies to you, and if yes, what you need to know.

    But backing up a bit – why exactly did this act come about? What’s changed in the last few years, and has our Government overreacted?

    Worrying trends

    Let’s look at the ASD (Australian Signals Directorate) Cyber Threat Report 2022-2023 to get some local perspective.

    In its report, ASD says upfront: “…Australian governments, critical infrastructure, businesses and households continue to be the target of malicious cyber actors…This threat extends beyond cyber espionage campaigns to disruptive activities against Australia’s essential services.”

    Key trends identified by ASD in FY 2022-23 (as relating to critical infrastructure) include:

1. State actors focused on critical infrastructure – data theft and business disruption. Here, ASD reports that, as part of their ongoing information-gathering campaigns or disruption activities, state cyber actors have targeted government and critical infrastructure networks globally. (A state actor is an entity sponsored by, or acting on behalf of, a nation-state government.) Cyber operations, says ASD, are “increasingly the preferred vector for state actors to conduct espionage and foreign interference.” In recognition of this, ASD joined international partners in 2022-23 to call out Russia’s Federal Security Service’s use of ‘Snake’ malware for cyber espionage. It also highlighted the actions of a People’s Republic of China state-sponsored cyber actor that used ‘living-off-the-land’ (LOTL) techniques to compromise critical infrastructure organisations. A LOTL attack uses legitimate and trusted system tools to launch its cyberattacks and evade detection. State actors often possess advanced capabilities and, given the nature of their backers, have significant resources at their disposal.
    2. Australian critical infrastructure was targeted via increasingly interconnected systems. ASD reports that ‘operational technology connected to the internet and into corporate networks provided opportunities for malicious cyber actors to attack these systems.’

    Stats and facts

    Over the 2020–21 financial year, ACSC (the Australian Cyber Security Centre) received over 67,500 cybercrime reports. This was an increase of nearly 13% over the previous year. The self-reported losses totalled $33 billion. Of these reported incidents, ACSC estimated that approximately 25% were associated with Australia’s critical infrastructure or essential services.

    During the 2022-23 period, ASD notified seven critical infrastructure entities of suspicious cyber activity (it was five the previous year).

    Over that time, ASD responded to 143 incidents that were directly reported by entities that self-identified as critical infrastructure (the previous year saw 95 incidents reported). Luckily, nearly all these incidents were low-level malicious attacks or isolated compromises.

57% of the incidents affecting critical infrastructure involved compromised accounts or credentials, compromised assets, networks or infrastructure, or DoS attacks. Other ‘popular’ attacks included data breaches and malware infections.

    So, why do bad actors attack?

    There’s no one reason for attacking critical infrastructure.

The sensitive information these entities hold, their high levels of connectivity with other organisations and critical infrastructure sectors, and the essential services they provide make them alluring targets for those keen to disrupt life as usual, profit from insider knowledge, or wreak revenge for perceived political slights.

From hospitals losing access to client records – as happened in France in 2022, when the health system reportedly sustained a number of cyber incidents that cancelled operations and shut down hospital systems – to the widespread fallout from a 2023 attack on Denmark’s energy infrastructure, the impacts are significant.

The reality is that it only takes one successful attack to cripple regions, economies, and communities – and it takes a huge amount of work (and can involve significant human distress) to restore the status quo.

    Why is critical infrastructure such a good target?

    Critical infrastructure networks are known for their interconnected nature. This, along with the third parties in their ICT supply chain, broadens the attack surface for many entities. Weak points include remote access and management solutions, which are becoming prevalent in critical infrastructure networks.

Operational technology (OT) and connected systems are also a dangling carrot for bad actors. They can target OT to access corporate networks – and vice versa. This allows them to move laterally through systems to reach their target destination. Even if an attack doesn’t directly target OT, striking via connected corporate networks can still disrupt operations.

    And, of course, any internet-facing system where the hardware or software isn’t updated with the latest security patches is vulnerable to exploitation, as are ICT supply chains and managed service providers.

    Is the Government overreacting?

    We’d say not.

    In justifying the need for further reforms to more tightly regulate Australia’s critical infrastructure, the Government stated in 2022 that ‘Australia is facing increasing cybersecurity threats to essential services, businesses and all levels of government’.

    At the time, the Prime Minister warned that cyberattacks were a ‘present threat’ and acknowledged they were a ‘likely response from Russia’ following the Government’s decision to impose sanctions in response to Russia’s recent aggression against Ukraine.

    In its overview of the 2022 SLACIP bill, the Government also noted that the Parliamentary Joint Committee on Intelligence and Security (PJCIS) had ‘received compelling evidence that the pervasive threat of cyber-enabled attack and manipulation of critical infrastructure assets is serious, considerable in scope and impact, and increasing at an unprecedented rate’.

    To be forewarned but not forearmed is a shortsighted strategy. We’re pleased to say that introducing SLACIP to protect our critical infrastructure shows that the Australian Government has paid close attention to ensuring we can protect what makes the world downunder go around.

    AI: 101 (part 2) – Making a business case for AI

    A quick recap: In our previous blog, we discussed the challenges and complexities behind the rush to embrace AI. We talked about what you needed to run AI (lots and lots of good data, and specialised hardware and software), the GenAI hype cycle, and the potential for use cases to simply bomb without delivering value.

    Lastly, we finished with some somewhat worrying stats that questioned the readiness of Australian organisations to harness AI, let alone understand what they’re going to do with it.

So, now that you’re up to date, let’s move on to use cases – which ones are well-defined and understood, and why others are just pie in the sky.

    AI legal eagles

    Earlier this year, Melbourne law firm Lander & Rogers set up their own AI Lab within the practice. They are currently working up “three or four” prototypes a day (mainly using Microsoft Copilot) to test how they can leverage AI to interact with various data types. Some of the most valuable use cases uncovered (from a flood of ideas generated internally) are those that save lawyers time in finding and surfacing the information they need.

Amongst Lander & Rogers’ winning use cases is engaging AI to build a chronology of events in legal cases.

    What’s not on their agenda, though, is using AI to rewrite a lawyer’s work. Their Head of AI Engineering, Jared Woodruff, says, “That’s not what AI is meant to do. The AI is there to give them [lawyers] all the information that they need to execute that decision and execute it with precision, saving them time.” 

Lander & Rogers has taken a strategic approach to its areas of focus and has defined where AI can be used to deliver definable business value in the legal profession.

    More great legal tech

    Working with AI service provider Automatise, Ethan, an Australian-owned technology service provider, has invested heavily in building Cicero, a pioneering AI tool specifically designed for the Australian legal sector.

    Ethan says that Cicero has already been adopted by several mid-tier and enterprise law firms in Australia and is transforming workflows and enhancing productivity. To quote: “As these firms integrate Cicero into their operations, they experience firsthand the benefits of high quality, coherent summaries and analyses of legal documents, a feat made possible by the fine tuning of LLMs for local use cases.”

    Again, this is another great, well-thought-out use case that meets specific industry needs. If all goes to plan, it will transform the Australian legal industry and deliver an impressive ROI.

    AI pie in the sky

    There are numerous high-profile examples of poor AI use cases. Some are simply ill-conceived, ethically irresponsible, dangerous, or just plain thoughtless. Others have used insufficient or inadequate training data, which produced skewed and reprehensible outcomes.

    This hasn’t daunted the would-be AI adopters, though.

    McKinsey’s 2024 global survey on AI reports that 65% of respondents said they regularly use generative AI for at least one business function. However, only 10% of those organisations had implemented gen AI – at scale – for any use case.

    Further to this, a senior partner at McKinsey, when speaking at the MIT Sloan CIO Symposium, said that while there are many organisational initiatives, “a lot of the efforts are scattershot and don’t contribute to the bottom line.” McKinsey’s survey confirms this, saying that only 15% of the responding companies realised an improvement in earnings for those AI initiatives.

    AI isn’t cheap or easy

    Why is the failure rate (or inability to generate an ROI or measurable business value) so high? This is where we dig out the old axiom: ‘fail to plan, plan to fail.’

    Like any technology project, there needs to be rigour around the ‘what, why, and how’ of the business case. Major considerations include:

    • Setting out your commercial objectives – in other words, defining the problems you’d like to solve within your business, as well as the desired outcomes. Then, deciding if AI is, in fact, the right solution.
• Ensuring your data is up to scratch – remembering that AI runs on data, your data needs to be up to date, accurate, relevant, ample – and used appropriately. All of this requires preparing and adhering to a sound data governance strategy.
    • Realistic expectations – yes, AI can be wonderful, but it’s not a magical cure-all. It’s critical not to overestimate the capabilities of AI, and it is essential to test and validate systems to ensure they meet the basic requirements of safety, compliance, accuracy, ethics, transparency, fairness, and security.
• Making sure you’re resourced up – adopting AI comes at a cost. Just as you wouldn’t let a newly qualified driver loose in your brand-new Tesla, you wouldn’t (or shouldn’t) place your trust in anyone who doesn’t understand the legal, ethical, and data considerations mentioned earlier. A successful AI project also requires an investment in technology, data and infrastructure. Poor infrastructure can result in performance issues and a failure to support the implementation of advanced AI models, compromising both their efficiency and reliability.
    • Scalability – it’s also critical to test AI projects at scale. What works perfectly as a test project may disappoint in terms of efficiency and reliability when rolled out to the entire organisation. 

Blue skies or uncertain horizons?

    We know of many businesses that are keen to increase their compute power so they can train their own AI. And we’re supportive of that; we love to see organisations innovate.

    But what concerns us is that few know what they actually want to train AI to do.

    Without clarity of purpose, a strong business case, and a structured, disciplined approach, AI has the potential to become an expensive toy rather than a transformative technology that contributes to the bottom line.

    AI: 101 (When, why, and what the hell?)

    AI is going to change the world. It’s bigger than the internet. All of our jobs will disappear.

    And so, the headlines continue. Everybody who’s anybody has made a meaningful quote about AI, and every technology business has jumped on the AI bandwagon with the same ready-or-not alacrity they embraced delivering cybersecurity services.

    But you’ll have to excuse us if we’re going to take a bit more time to think about this. Because AI poses significant new challenges and complexities, we want to take the time to process the implications, not just pick it up and run with it.

    If you’re feeling equally cautious about AI, you are not alone. In their (well-worth-a-read) feature article from April 2024, “Despite the Buzz, Executives Proceed Cautiously With AI,” Reworked raises the same concerns and cautions.   

    So, backing up a bit, let’s take a 101 approach to AI and start at the beginning.

    What does AI even mean?

We all know that AI stands for artificial intelligence. The term was coined in 1955 in a proposal for a two-month, 10-man study of artificial intelligence. The ‘AI’ workshop took place a year later, in July and August 1956, which is generally considered the field’s official birthdate.

    Today, AI describes the simulation of human intelligence processes by machines (mainly computers), and can be seen in expert systems, natural language processing (NLP), speech recognition, and machine vision. To note: AI is frequently confused, by vendors and users alike, with machine learning (aka ML). But whereas AI mimics human intelligence, machine learning identifies patterns and then uses that information to teach a machine how to perform specific tasks and produce accurate results.

    So, how does AI work? Basically, AI systems ingest large amounts of labelled training data. AI analyses the data, identifies relationships and patterns within it, and uses what it learns to make predictions about future states. Much as a human brain will access everything it knows at any given point and make a (hopefully) rational and informed decision about what happens next, so will AI. 
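
In code, that ‘labelled data in, predictions out’ loop can be shown in miniature. This sketch uses scikit-learn with toy maintenance data – the features and labels are invented for illustration:

```python
# The "labelled data in, predictions out" loop in miniature, via scikit-learn.
from sklearn.tree import DecisionTreeClassifier

# Labelled training data: [hours_of_use, error_count] -> needs_maintenance?
X = [[100, 0], [2500, 4], [300, 1], [4000, 9], [150, 0], [3200, 7]]
y = [0, 1, 0, 1, 0, 1]

model = DecisionTreeClassifier().fit(X, y)  # learn patterns from the labels

# Predict a future state from new, unseen data.
print(model.predict([[2800, 5]]))  # -> [1]: likely needs maintenance
```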

    What do you need to run AI?

    AI needs three things to work: 1. data – and lots of it; 2. specialised hardware; and 3. specialised software.

Let’s talk about the importance of hardware, though, as without access to it, you have nothing. What do you need to know? The process of using a trained AI model to make predictions and decisions based on new data is called AI inference. While you can, at a pinch, run AI inference tasks on a well-optimised CPU, you really need the parallel processing grunt of a GPU (graphics processing unit) for the compute-intensive task of AI training.
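
In practice, frameworks make the CPU-versus-GPU decision a one-liner. This PyTorch sketch checks for a GPU and runs one of the large matrix multiplications that AI training is built on – the matrix sizes are arbitrary:

```python
# Checking for GPU support in PyTorch: inference can limp along on a CPU,
# but training-scale matrix maths really wants the GPU's parallelism.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running on: {device}")

# The same tensor op runs on either device; only the throughput differs.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b  # one of the parallel matrix multiplies AI training is built on
```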

    GPUs play an important role in data centres as they deliver the performance needed to accelerate AI and ML tasks, facilitate video and graphics processing, and run scientific computing and simulation applications.

    Given the importance of AI training, it’s probably no surprise that leading vendors have advised that the demand for high-end, AI-ready GPUs has exceeded supply. The wait time for the average buyer can exceed 260 working days – which is over a year.

    (The silver lining is that you have more time to define your AI strategy rather than rushing in).

    Where are we in the AI hype cycle?

Gartner uses the term ‘Hype Cycle’ to describe “innovations and techniques that offer significant and even transformational benefits while also addressing the limitations and risks of fallible systems.” So, what’s hot now, what’s coming, and when can we expect these innovations to become mainstream – or fail to follow through on their promise?

Looking at Gartner’s 2023 Hype Cycle for AI, two kinds of GenAI innovation dominate. The first is innovations that will be fuelled by GenAI – impacting content discovery, creation, authenticity and regulation, as well as automating human work and the customer and employee experience.

The second is innovations that will fuel GenAI advancements. This includes using AI to build new GenAI tools and applications. In effect, it’s using innovation to create more innovation – so it’s a popular use case for business startups.

The Hype Cycle maps user expectations of AI – how and where it will be used – against where Gartner sees these innovations landing in two to ten years. So, while the ‘innovation trigger’ and ‘peak of inflated expectations’ are crammed full of use cases and solutions, some will fall by the wayside while others will surface and go on to be productive.

    The problem being?

While the list of potential use cases is exciting, Gartner’s Hype Cycle does show that AI isn’t going to deliver business value (or even last the distance) in every instance.

    Yes, you may get a head start on the competition by being an early adopter, but you may also become that cautionary tale shared in hushed tones in GenAI blogs and headlines of the future.

Forbes certainly reached the same conclusion in its article, ‘AI Reality Check: Why Data Is The Key To Breaking The Hype Cycle.’ There, Forbes suggests that GenAI reached its ‘peak of inflated expectations’ in August 2023, when many companies came face-to-face with the reality of extracting genuine and meaningful value from AI.

    Earlier, we listed access to data as a must-have for AI to work. And in its article, Forbes agrees, referring to its research, which firmly points the accusing finger of failure and disappointment at data silos. Nearly 75% of respondents who had implemented AI pilot projects in their organisations said that data silos were the primary barrier to enterprise-wide AI integration. “The number one thing keeping GenAI initiatives from reaching their fullest potential inside large corporations,” says Forbes, “is data.”

    What the hell are we going to do with AI, anyway?

    Worryingly, ADAPT’s CIO Edge Survey from February 2024 says that 66% of Australian CFOs say their organisations are unprepared to harness AI. 25% are non-committal, and only 9% say they’re AI-ready. AI-ready or not, 48% of the CIOs surveyed say they haven’t even defined any clear use cases for AI.

    This leaves us asking, where to from here? How do you ensure that your investment in AI not only delivers business value through the availability of data, hardware, and software but that you are ready to use it and can justify the investment?

    In part two of this topic, we discuss some real-world use cases in action and suggest some of the hard questions you should consider before committing to the shiny new thing that is AI.

    Chicken or egg: Cyber resistance vs cyber resilience

    In a digital world where data is the new ‘everything’, it’s unsurprising that it has become a prime target for criminals. Data is the modern-day equivalent of a stash of gold bullion – and it can be stolen, ransomed, and sold for profit with less effort and risk than a bank heist.

    The unrelenting waves of global cyberattacks mean that the cost of business survival is escalating – with the cost of cyberattacks doubling between 2022 and 2023. To combat this, Infosecurity Magazine reports that 69% of IT leaders saw or expected cybersecurity budget increases of between 10 and 100% in 2024.

    The cost of crime

    At the pointy end of the problem, organisations face damaged or destroyed data, plundered bank accounts, financial fraud, lost productivity, purloined intellectual property, the theft of personal and financial data, and more.

The blunt end is no less damaging. There’s the cost of recovering data, rebuilding your reputation, and getting your business back to a state of BAU as soon as possible, as well as the hefty price tag that comes with forensic investigation, restoring and deleting hacked data and systems, and even prosecution.

    Generative AI to the cyber-rescue?

    Many see the rise of generative AI and expansion into hybrid and multi-cloud environments as the means to alleviate the ongoing attacks. But, of course, the democratisation of generative AI (in other words, goodies and baddies have equal access to its powers) means that potential risks are also heightened.

Despite this, it’s hard to overcome the optimism that generative AI will be a cyber-saviour. According to Dell Technologies’ 2024 Global Data Protection Index (APJ Cyber Resiliency Multicloud Edition), 46% of respondents believe that generative AI can initially provide an advantage to their cybersecurity posture, and 42% are investing accordingly.

But here’s the rub: 85% agree that generative AI will create large volumes of new data that will need to be protected and secured. So generative AI will, by default, (A) potentially offer better protection and (B) increase the available attack surface through data sprawl and unstructured data.

    Resistance vs resilience

    Of the APJ organisations (excluding China) that Dell surveyed, 57% say they’ve experienced a cyberattack or cyber-related incident in the last 12 months.

    And a good 76% have expressed concern that their current data protection measures are unable to cope with malware and ransomware threats. 66% say they’re not even confident that they can recover all their business-critical data in the event of a destructive cyber-attack.

    So why, if 66% of organisations doubt their ability to recover their data, are 54% investing more in cyber prevention than recovery?

    Can you separate the cyber chicken from the egg?

    In a recent cybersecurity stats round-up, Forbes Advisor reported that in 2023, there were 2,365 cyberattacks impacting 343 million victims.

    Given the inevitability of cyberattack, it’s critical that your methods of resistance are robust, and if disaster strikes, your ability to recover is infallible.

Look at it this way: While a cruise liner obviously must have radar to detect and try to avoid approaching icebergs, angry orcas, and other collision-prone objects, it’s just as important that it has lifeboats, lifeboat drills, lifejackets, and locator devices available to minimise loss of life and keep everyone afloat.

    In the words of Harvard Business Review: “Simply being security-conscious is no longer enough, nor is having a prevention-only strategy. Companies must become cyber-resilient—capable of surviving attacks, maintaining operations, and embracing new technologies in the face of evolving threats.”

    So, how do you bolster your cyber resilience?

    According to Dell, 50% of the organisations they surveyed have brought in outside support (including cyber recovery services) to enhance cyber resilience.

    While AI will undoubtedly introduce some initial advantages, as suggested earlier, those could be quickly offset as cybercriminals leverage the very same tools. Not only are traditional system and software vulnerabilities under attack, but due to the sprawl of AI-generated data, there are more and newer opportunities.

    So – can we rely on generative AI to save the day? Probably not – or not yet anyway. What about outside help? Yes, most definitely. However, cyber resilience begins at home, with a top-down strategy based on some inarguable facts:  

    1. Attacks are inevitable. Once you accept that this is the new reality of the digital age, the logical next step is to develop a clear, holistic strategy focusing on business continuity and crisis planning.
    2. People are the first and best line of defence. Ensure your entire organisation takes responsibility and is cyber-aware – to the extent that your procedures are included in your company policies and onboarding processes.  This should include delivering ongoing cyber awareness training and introducing regular drills.
    3. When disaster strikes, survival is in your hands. Establish clear cybersecurity governance that aligns with your business objectives. Everyone in the organisation should know what they need to do to protect the organisation, its data, and its clients and ensure continuity of operations.  
    4. No one is trustworthy. Assume everything around your network is a potential threat. Adopt a zero-trust mindset that requires continual verification and rigidly controls access based on preset policies.  
    5. What you don’t know can hurt you. The ability to detect and prevent threats is critical. Invest in Security as a Service to provide visibility into your data, regardless of where it’s located, so that you can see and address your weaknesses.
6. Disaster will strike. We live in unexpected times, where cybercrime and unprecedented natural disasters conspire to stop us in our tracks. With cloud-based Disaster Recovery as a Service, the risk of permanently losing data and disrupting business as usual is significantly reduced.

    Get in touch for a Free, No‑Obligation Consultation

    Arrange a chat with our experienced team to discuss your data protection, disaster recovery, cloud or security requirements.

    • Arrange an introductory chat about your requirements
    • Gain a proposal and quote for our services
    • View an interactive demo of our service features

    Prefer to call now?
    Sales and Support
    1300 88 38 25
