Is the Government being a tad overprotective of our critical infrastructure?

In our previous critical infrastructure blog, we discussed the Security Legislation Amendment (Critical Infrastructure Protection) Act 2022 – aka the SLACIP Act, whether it applies to you, and if yes, what you need to know.

But backing up a bit – why exactly did this act come about? What’s changed in the last few years, and has our Government overreacted?

Worrying trends

Let’s look at the ASD (Australian Signals Directorate) Cyber Threat Report 2022-2023 to get some local perspective.

In its report, ASD says upfront: “…Australian governments, critical infrastructure, businesses and households continue to be the target of malicious cyber actors…This threat extends beyond cyber espionage campaigns to disruptive activities against Australia’s essential services.”

Key trends identified by ASD in FY 2022-23 (as relating to critical infrastructure) include:

  1. State actors focused on critical infrastructure – data theft and business disruption. Here, ASD reports that, as part of their ongoing information-gathering campaigns or disruption activities, state cyber actors have targeted government and critical infrastructure networks globally. (A state actor is a person or entity acting on behalf of, or at the direction of, a nation-state government.) Cyber operations, says ASD, are “increasingly the preferred vector for state actors to conduct espionage and foreign interference.” In recognition of this, ASD joined international partners in 2022-23 to call out Russia’s Federal Security Service’s use of ‘Snake’ malware for cyber espionage. It also highlighted the actions of a People’s Republic of China state-sponsored cyber actor that used ‘living-off-the-land’ (LOTL) techniques to compromise critical infrastructure organisations. A LOTL attack uses legitimate and trusted system tools to carry out malicious activity and evade detection. State actors often possess advanced capabilities and, due to the nature of their backers, have significant resources at their disposal.
  2. Australian critical infrastructure was targeted via increasingly interconnected systems. ASD reports that ‘operational technology connected to the internet and into corporate networks provided opportunities for malicious cyber actors to attack these systems.’

Stats and facts

Over the 2020–21 financial year, ACSC (the Australian Cyber Security Centre) received over 67,500 cybercrime reports. This was an increase of nearly 13% over the previous year. The self-reported losses totalled $33 billion. Of these reported incidents, ACSC estimated that approximately 25% were associated with Australia’s critical infrastructure or essential services.

During the 2022-23 period, ASD notified seven critical infrastructure entities of suspicious cyber activity (it was five the previous year).

Over that time, ASD responded to 143 incidents that were directly reported by entities that self-identified as critical infrastructure (the previous year saw 95 incidents reported). Luckily, nearly all these incidents were low-level malicious attacks or isolated compromises.

57% of the incidents affecting critical infrastructure involved compromised accounts or credentials, compromised assets, networks or infrastructure, or denial-of-service (DoS) attacks. Other common attack types included data breaches and malware infections.

So, why do bad actors attack?

There’s no one reason for attacking critical infrastructure.

The sensitive information these entities hold, their high levels of connectivity with other organisations and critical infrastructure sectors, and the essential services they provide make them alluring targets for those keen to disrupt life as usual, profit from insider knowledge, or wreak revenge for perceived political slights.

From hospitals losing access to patient records – as happened in France in 2022, when the health system reportedly sustained a number of cyber incidents that resulted in cancelled operations and shut-down hospital systems – to the widespread fallout from a 2023 attack on Denmark’s energy infrastructure, the impacts are significant.

The reality is that it only takes one successful attack to cripple regions, economies, and communities – and it takes a huge amount of work (and can involve significant human distress) to restore the status quo.

Why is critical infrastructure such a good target?

Critical infrastructure networks are known for their interconnected nature. This, along with the third parties in their ICT supply chain, broadens the attack surface for many entities. Weak points include remote access and management solutions, which are becoming prevalent in critical infrastructure networks.

Operational technology (OT) and connected systems are also a dangling carrot for bad actors. They can target OT to access corporate networks – and vice versa. This allows them to move laterally through systems to reach their intended target. Even if an attack doesn’t directly hit OT, compromising connected corporate networks can still disrupt operations.

And, of course, any internet-facing system where the hardware or software isn’t updated with the latest security patches is vulnerable to exploitation, as are ICT supply chains and managed service providers.

Is the Government overreacting?

We’d say not.

In justifying the need for further reforms to more tightly regulate Australia’s critical infrastructure, the Government stated in 2022 that ‘Australia is facing increasing cybersecurity threats to essential services, businesses and all levels of government’.

At the time, the Prime Minister warned that cyberattacks were a ‘present threat’ and acknowledged they were a ‘likely response from Russia’ following the Government’s decision to impose sanctions in response to Russia’s recent aggression against Ukraine.

In its overview of the 2022 SLACIP bill, the Government also noted that the Parliamentary Joint Committee on Intelligence and Security (PJCIS) had ‘received compelling evidence that the pervasive threat of cyber-enabled attack and manipulation of critical infrastructure assets is serious, considerable in scope and impact, and increasing at an unprecedented rate’.

To be forewarned but not forearmed is a shortsighted strategy. We’re pleased to say that introducing SLACIP to protect our critical infrastructure shows that the Australian Government has paid close attention to ensuring we can protect what makes the world downunder go around.

AI: 101 (part 2) – Making a business case for AI

A quick recap: In our previous blog, we discussed the challenges and complexities behind the rush to embrace AI. We talked about what you needed to run AI (lots and lots of good data, and specialised hardware and software), the GenAI hype cycle, and the potential for use cases to simply bomb without delivering value.

Lastly, we finished with some somewhat worrying stats that questioned the readiness of Australian organisations to harness AI, let alone understand what they’re going to do with it.

So, now that you’re up to date, let’s move on to use cases – which ones are well-defined and understood, and why others are just pie in the sky.

AI legal eagles

Earlier this year, Melbourne law firm Lander & Rogers set up their own AI Lab within the practice. They are currently working up “three or four” prototypes a day (mainly using Microsoft Copilot) to test how they can leverage AI to interact with various data types. Some of the most valuable use cases uncovered (from a flood of ideas generated internally) are those that save lawyers time in finding and surfacing the information they need.

Among Lander & Rogers’ winning use cases is using AI to build a chronology of events in legal cases.

What’s not on their agenda, though, is using AI to rewrite a lawyer’s work. Their Head of AI Engineering, Jared Woodruff, says, “That’s not what AI is meant to do. The AI is there to give them [lawyers] all the information that they need to execute that decision and execute it with precision, saving them time.” 

Lander & Rogers has taken a strategic approach to its areas of focus, pinpointing where AI can deliver definable business value in the legal profession.

More great legal tech

Working with AI service provider Automatise, Ethan, an Australian-owned technology service provider, has invested heavily in building Cicero, a pioneering AI tool specifically designed for the Australian legal sector.

Ethan says that Cicero has already been adopted by several mid-tier and enterprise law firms in Australia and is transforming workflows and enhancing productivity. To quote: “As these firms integrate Cicero into their operations, they experience firsthand the benefits of high quality, coherent summaries and analyses of legal documents, a feat made possible by the fine tuning of LLMs for local use cases.”

Again, this is another great, well-thought-out use case that meets specific industry needs. If all goes to plan, it will transform the Australian legal industry and deliver an impressive ROI.

AI pie in the sky

There are numerous high-profile examples of poor AI use cases. Some are simply ill-conceived, ethically irresponsible, dangerous, or just plain thoughtless. Others have used insufficient or inadequate training data, which produced skewed and reprehensible outcomes.

This hasn’t daunted the would-be AI adopters, though.

McKinsey’s 2024 global survey on AI reports that 65% of respondents said they regularly use generative AI for at least one business function. However, only 10% of those organisations had implemented gen AI – at scale – for any use case.

Further to this, a senior partner at McKinsey, when speaking at the MIT Sloan CIO Symposium, said that while there are many organisational initiatives, “a lot of the efforts are scattershot and don’t contribute to the bottom line.” McKinsey’s survey confirms this, saying that only 15% of the responding companies realised an improvement in earnings for those AI initiatives.

AI isn’t cheap or easy

Why is the failure rate (or inability to generate an ROI or measurable business value) so high? This is where we dig out the old axiom: ‘fail to plan, plan to fail.’

Like any technology project, there needs to be rigour around the ‘what, why, and how’ of the business case. Major considerations include:

  • Setting out your commercial objectives – in other words, defining the problems you’d like to solve within your business, as well as the desired outcomes. Then, deciding if AI is, in fact, the right solution.
  • Ensuring your data is up to scratch – remembering that AI runs on data, yours needs to be up to date, accurate, relevant, ample – and used appropriately. All of this requires preparing and adhering to a sound data governance strategy (a minimal sketch of the kind of data quality check involved appears after this list).
  • Realistic expectations – yes, AI can be wonderful, but it’s not a magical cure-all. It’s critical not to overestimate the capabilities of AI, and it is essential to test and validate systems to ensure they meet the basic requirements of safety, compliance, accuracy, ethics, transparency, fairness, and security.
  • Making sure you’re resourced up – adopting AI comes at a cost. Just as you wouldn’t let a newly qualified driver loose in your brand-new Tesla, you wouldn’t (or shouldn’t) place your trust in anyone who doesn’t understand the legal, ethical, and data considerations mentioned earlier. A successful AI project also requires an investment in technology, data and infrastructure. Poor infrastructure can result in performance issues and a failure to support the implementation of advanced AI models, compromising both their efficiency and reliability.
  • Scalability – it’s also critical to test AI projects at scale. What works perfectly as a test project may disappoint in terms of efficiency and reliability when rolled out to the entire organisation. 
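
To make the ‘up to scratch’ point above concrete, here is a minimal sketch (in Python, using pandas) of the kind of basic data quality check a governance process might run before data is fed into an AI initiative. The column names, dataset and thresholds are illustrative assumptions, not a prescribed standard.

```python
import pandas as pd

def basic_quality_report(df: pd.DataFrame, date_column: str = "last_updated") -> dict:
    """Return a few simple data-quality indicators for a dataset."""
    return {
        # Share of missing values per column - high ratios suggest incomplete data
        "missing_ratio": df.isna().mean().round(3).to_dict(),
        # Exact duplicate rows inflate training data and skew models
        "duplicate_rows": int(df.duplicated().sum()),
        # How stale the data is - AI trained on old data gives old answers
        "days_since_last_update": (
            pd.Timestamp.now() - pd.to_datetime(df[date_column]).max()
        ).days,
    }

# Example with a small, made-up customer dataset
customers = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "email": ["a@example.com", None, None, "d@example.com"],
    "last_updated": ["2024-01-10", "2023-11-02", "2023-11-02", "2024-02-01"],
})
print(basic_quality_report(customers))
```

A real governance strategy covers far more (lineage, access, retention, ethics), but even checks this simple tend to expose whether data is genuinely AI-ready.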

Blue skies or uncertain horizons?

We know of many businesses that are keen to increase their compute power so they can train their own AI. And we’re supportive of that; we love to see organisations innovate.

But what concerns us is that few know what they actually want to train AI to do.

Without clarity of purpose, a strong business case, and a structured, disciplined approach, AI has the potential to become an expensive toy rather than a transformative technology that contributes to the bottom line.

AI: 101 (When, why, and what the hell?)

AI is going to change the world. It’s bigger than the internet. All of our jobs will disappear.

And so, the headlines continue. Everybody who’s anybody has made a meaningful quote about AI, and every technology business has jumped on the AI bandwagon with the same ready-or-not alacrity they embraced delivering cybersecurity services.

But you’ll have to excuse us if we’re going to take a bit more time to think about this. Because AI poses significant new challenges and complexities, we want to take the time to process the implications, not just pick it up and run with it.

If you’re feeling equally cautious about AI, you are not alone. In their (well-worth-a-read) feature article from April 2024, “Despite the Buzz, Executives Proceed Cautiously With AI,” Reworked raises the same concerns and cautions.   

So, backing up a bit, let’s take a 101 approach to AI and start at the beginning.

What does AI even mean?

We all know that AI stands for artificial intelligence. The term was coined in a 1955 proposal for a two-month, 10-man study of artificial intelligence. The ‘AI’ workshop took place a year later, in July and August 1956, which is generally considered the field’s official birthdate.

Today, AI describes the simulation of human intelligence processes by machines (mainly computers), and can be seen in expert systems, natural language processing (NLP), speech recognition, and machine vision. To note: AI is frequently confused, by vendors and users alike, with machine learning (aka ML). In fact, ML is a subset of AI: where AI broadly refers to machines mimicking human intelligence, machine learning identifies patterns in data and then uses that information to teach a machine how to perform specific tasks and produce accurate results.

So, how does AI work? Basically, AI systems ingest large amounts of labelled training data. AI analyses the data, identifies relationships and patterns within it, and uses what it learns to make predictions about future states. Much as a human brain will access everything it knows at any given point and make a (hopefully) rational and informed decision about what happens next, so will AI. 
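
As a minimal illustration of that ‘learn from labelled data, then predict’ loop (not any particular vendor’s system), here is a tiny Python sketch using scikit-learn. The toy features and labels are invented purely for demonstration.

```python
from sklearn.linear_model import LogisticRegression

# Labelled training data: [monthly_spend, support_tickets] -> churned (1) or stayed (0).
# Values are made up to illustrate the pattern-learning step only.
X_train = [[20, 5], [25, 4], [90, 0], [85, 1], [30, 6], [95, 0]]
y_train = [1, 1, 0, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)  # "ingest labelled data, find the patterns"

# Predict a future state for customers the model has never seen
print(model.predict([[22, 5], [88, 1]]))  # on this toy data, likely [1 0]: churn vs stay
```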

What do you need to run AI?

AI needs three things to work: 1. data – and lots of it; 2. specialised hardware; and 3. specialised software.

Let’s talk about the importance of hardware, though, as without access to this, you have nothing. What do you need to know? The process of using a trained AI model to make predictions and decisions based on new data is called AI inference. While you can, at a pinch, run AI inference tasks on a well-optimised CPU, you really need the parallel processing grunt power of a GPU (graphics processing unit) for the compute-intensive task of AI training.
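
By way of a minimal sketch of the CPU-versus-GPU point (using PyTorch, which isn’t named in this article but is a common choice; the model and data below are placeholders), this is roughly how code chooses where the heavy lifting happens:

```python
import torch
import torch.nn as nn

# Use a GPU for the heavy, parallelisable work if one is present; fall back to CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(in_features=128, out_features=2).to(device)  # toy model
batch = torch.randn(64, 128).to(device)                        # toy training batch

# A single training step (compute-intensive - this is where GPUs earn their keep)
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss = model(batch).sum()
loss.backward()
optimiser.step()

# Inference (using the trained model on new data) can often run acceptably on a CPU
model_cpu = model.to("cpu")
with torch.no_grad():
    prediction = model_cpu(torch.randn(1, 128))
print(prediction.shape)
```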

GPUs play an important role in data centres as they deliver the performance needed to accelerate AI and ML tasks, facilitate video and graphics processing, and run scientific computing and simulation applications.

Given the importance of AI training, it’s probably no surprise that leading vendors have advised that the demand for high-end, AI-ready GPUs has exceeded supply. The wait time for the average buyer can exceed 260 working days – roughly a full year.

(The silver lining is that you have more time to define your AI strategy rather than rushing in).

Where are we in the AI hype cycle?

Gartner uses ‘Hype Cycle’ to describe “innovations and techniques that offer significant and even transformational benefits while also addressing the limitations and risks of fallible systems.” So, what’s hot now, what’s coming, and when can we expect these innovations to become mainstream – or fail to follow through on their promise?

Looking at Gartner’s 2023 Hype Cycle for AI, two kinds of GenAI innovation dominate. The first is innovations that will be fuelled by GenAI, impacting content discovery, creation, authenticity and regulation, as well as automating human work and the customer and employee experience.

The second is innovations that will fuel GenAI advancements. This includes using AI to build new GenAI tools and applications. In effect, it’s using innovation to create more innovation – which makes it a popular use case for business startups.

The hype cycle plots user expectations of AI – how and where it will be used – against where Gartner sees these innovations landing over the next two to ten years. So, while the ‘innovation trigger’ and ‘peak of inflated expectations’ are crammed full of use cases and solutions, some will fall by the wayside while others will surface and go on to be productive.

The problem being?

While the list of potential use cases is exciting, Gartner’s Hype Cycle does show that AI isn’t going to deliver business value (or even last the distance) in every instance.

Yes, you may get a head start on the competition by being an early adopter, but you may also become that cautionary tale shared in hushed tones in GenAI blogs and headlines of the future.

Forbes certainly reached that same conclusion in its article, ‘AI Reality Check: Why Data Is The Key To Breaking The Hype Cycle.’ Here, Forbes argues that GenAI reached its ‘peak of inflated expectations’ in August 2023, when many companies came face to face with the reality of extracting genuine and meaningful value from AI.

Earlier, we listed access to data as a must-have for AI to work. And in its article, Forbes agrees, referring to its research, which firmly points the accusing finger of failure and disappointment at data silos. Nearly 75% of respondents who had implemented AI pilot projects in their organisations said that data silos were the primary barrier to enterprise-wide AI integration. “The number one thing keeping GenAI initiatives from reaching their fullest potential inside large corporations,” says Forbes, “is data.”

What the hell are we going to do with AI, anyway?

Worryingly, ADAPT’s CIO Edge Survey from February 2024 says that 66% of Australian CFOs say their organisations are unprepared to harness AI. 25% are non-committal, and only 9% say they’re AI-ready. AI-ready or not, 48% of the CIOs surveyed say they haven’t even defined any clear use cases for AI.

This leaves us asking, where to from here? How do you ensure that your investment in AI not only delivers business value through the availability of data, hardware, and software but that you are ready to use it and can justify the investment?

In part two of this topic, we discuss some real-world use cases in action and suggest some of the hard questions you should consider before committing to the shiny new thing that is AI.

Chicken or egg: Cyber resistance vs cyber resilience

In a digital world where data is the new ‘everything’, it’s unsurprising that it has become a prime target for criminals. Data is the modern-day equivalent of a stash of gold bullion – and it can be stolen, ransomed, and sold for profit with less effort and risk than a bank heist.

The unrelenting waves of global cyberattacks mean that the cost of business survival is escalating – with the cost of cyberattacks doubling between 2022 and 2023. To combat this, Infosecurity Magazine reports that 69% of IT leaders saw or expected cybersecurity budget increases of between 10 and 100% in 2024.

The cost of crime

At the pointy end of the problem, organisations face damaged or destroyed data, plundered bank accounts, financial fraud, lost productivity, purloined intellectual property, the theft of personal and financial data, and more.

The blunt end is no less damaging. There’s the cost of recovering data, rebuilding your reputation, and getting your business back to a state of BAU as soon as possible, as well as the hefty price tag that comes with forensic investigation, restoring or deleting hacked data and systems, and even prosecution.

Generative AI to the cyber-rescue?

Many see the rise of generative AI and expansion into hybrid and multi-cloud environments as the means to alleviate the ongoing attacks. But, of course, the democratisation of generative AI (in other words, goodies and baddies have equal access to its powers) means that potential risks are also heightened.

Despite this, it’s hard to overcome the optimism that generative AI will be a cyber-saviour. According to Dell Technologies’ 2024 Global Data Protection Index (APJ Cyber Resiliency Multicloud Edition), 46% of respondents believe that generative AI can initially provide an advantage to their cybersecurity posture, and 42% are investing accordingly.

But here’s the rub: 85% agree that generative AI will create large volumes of new data that will need to be protected and secured. So generative AI will, by default, (A) potentially offer better protection and (B) increase the available attack surface due to data sprawl and unstructured data.

Resistance vs resilience

Of the APJ organisations (excluding China) that Dell surveyed, 57% say they’ve experienced a cyberattack or cyber-related incident in the last 12 months.

And a good 76% have expressed concern that their current data protection measures are unable to cope with malware and ransomware threats. 66% say they’re not even confident that they can recover all their business-critical data in the event of a destructive cyber-attack.

So why, if 66% of organisations doubt their ability to recover their data, are 54% investing more in cyber prevention than recovery?

Can you separate the cyber chicken from the egg?

In a recent cybersecurity stats round-up, Forbes Advisor reported that in 2023, there were 2,365 cyberattacks impacting 343 million victims.

Given the inevitability of cyberattack, it’s critical that your methods of resistance are robust, and if disaster strikes, your ability to recover is infallible.

Look at it this way: While a cruise liner obviously must have radar to detect and try and avoid approaching icebergs, angry orcas, and other collision-prone objects, it’s just as important that they have lifeboats, lifeboat drills, lifejackets, and locator devices available to minimise loss of life and keep everyone afloat.  

In the words of Harvard Business Review: “Simply being security-conscious is no longer enough, nor is having a prevention-only strategy. Companies must become cyber-resilient—capable of surviving attacks, maintaining operations, and embracing new technologies in the face of evolving threats.”

So, how do you bolster your cyber resilience?

According to Dell, 50% of the organisations they surveyed have brought in outside support (including cyber recovery services) to enhance cyber resilience.

While AI will undoubtedly introduce some initial advantages, as suggested earlier, those could be quickly offset as cybercriminals leverage the very same tools. Not only are traditional system and software vulnerabilities under attack, but due to the sprawl of AI-generated data, there are more and newer opportunities.

So – can we rely on generative AI to save the day? Probably not – or not yet anyway. What about outside help? Yes, most definitely. However, cyber resilience begins at home, with a top-down strategy based on some inarguable facts:  

  1. Attacks are inevitable. Once you accept that this is the new reality of the digital age, the logical next step is to develop a clear, holistic strategy focusing on business continuity and crisis planning.
  2. People are the first and best line of defence. Ensure your entire organisation takes responsibility and is cyber-aware – to the extent that your procedures are included in your company policies and onboarding processes.  This should include delivering ongoing cyber awareness training and introducing regular drills.
  3. When disaster strikes, survival is in your hands. Establish clear cybersecurity governance that aligns with your business objectives. Everyone in the organisation should know what they need to do to protect the organisation, its data, and its clients and ensure continuity of operations.  
  4. No one is trustworthy. Assume everything around your network is a potential threat. Adopt a zero-trust mindset that requires continual verification and rigidly controls access based on preset policies (a minimal illustration of this kind of policy check follows this list).
  5. What you don’t know can hurt you. The ability to detect and prevent threats is critical. Invest in Security as a Service to provide visibility into your data, regardless of where it’s located, so that you can see and address your weaknesses.
  6. Disaster will strike. We live in unexpected times, where cybercrime and unprecedented natural disasters conspire to stop us in our tracks. With cloud-based Disaster Recovery as a Service, the risk of permanently losing data and disrupting business as usual is significantly reduced.
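
As a purely illustrative sketch of the ‘continual verification’ idea in point 4 (not a production zero-trust implementation – every field name and policy value here is a made-up assumption), an access decision might look something like this in Python:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str
    mfa_passed: bool
    device_compliant: bool
    resource_sensitivity: str  # "low", "medium" or "high"

# Preset policy: which roles may touch which sensitivity levels (illustrative only)
POLICY = {
    "admin":      {"low", "medium", "high"},
    "engineer":   {"low", "medium"},
    "contractor": {"low"},
}

def allow_access(request: AccessRequest) -> bool:
    """Every request is verified on its own merits - nothing is trusted by default."""
    if not request.mfa_passed or not request.device_compliant:
        return False
    allowed_levels = POLICY.get(request.user_role, set())
    return request.resource_sensitivity in allowed_levels

print(allow_access(AccessRequest("engineer", True, True, "high")))    # False
print(allow_access(AccessRequest("engineer", True, True, "medium")))  # True
```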

Do you have a data rubbish dump, or a treasure trove?

Back in 2018, IDC predicted that by 2025, the Global Datasphere would have grown from 33 zettabytes to 175 zettabytes. Arcserve predicted 200 zettabytes, and Statista 180 zettabytes. Now, with 328.77 million terabytes of data being created daily in 2024, Statista’s prediction looks to be on the money.
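
As a rough back-of-the-envelope check (assuming decimal units, where one zettabyte is one billion terabytes), that daily figure does land in the same ballpark as those forecasts:

```python
# 328.77 million terabytes created per day (the 2024 figure quoted above)
daily_tb = 328.77e6
tb_per_zb = 1e9  # decimal units: 1 ZB = 1,000,000,000 TB

annual_zb = daily_tb * 365 / tb_per_zb
print(round(annual_zb, 1))  # ~120 ZB per year - the same order of magnitude as the 175-200 ZB forecasts
```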

While that’s all very impressive, what’s probably of more interest to most of us is what form that data will take. Why? Because data falls into two camps: 1. structured and immediately useful; and 2. unstructured – raw, unprocessed, often chaotic, and therefore challenging to utilise.

According to IDC, 90% of business data is unstructured – and consists of customer contracts, employee handbooks, product specs, video, imagery, IoT sensor data – and more. Only 46% of companies report that they analyse their unstructured data to extract value from it – and less than half of it at that.

The problem with unstructured data

Structured data, with its standardised format, is a low-hanging fruit, ripe for transformation into business insights. So, its value is readily appreciated.

Whereas, by its very nature, unstructured data isn’t easy to search or sort – and, more often than not, is sprawled across an organisation in siloes. But given the wealth (and breadth) of information it represents, it’s also immensely valuable – just harder to access.
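
To make the contrast concrete, here is a minimal Python sketch of why structured data answers questions directly while unstructured data needs extra work first. The table, fields and text are invented for illustration only.

```python
import sqlite3

# Structured data: a defined schema means one query answers the question directly.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE contracts (customer TEXT, value_aud REAL, renewal_date TEXT)")
db.execute("INSERT INTO contracts VALUES ('Acme Pty Ltd', 120000, '2025-03-01')")
rows = db.execute(
    "SELECT customer, value_aud FROM contracts WHERE renewal_date < '2025-06-30'"
).fetchall()
print(rows)

# Unstructured data: free text has no schema, so even a simple question
# means scanning, parsing, or applying NLP/AI before it gives anything up.
handbook_text = """Employees may work remotely up to three days per week.
Client contract renewals are handled by the accounts team."""
mentions_renewals = [line for line in handbook_text.splitlines() if "renewal" in line.lower()]
print(mentions_renewals)
```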

Stockpiling unstructured data, with its sensitive customer, company, and employee information, has inherent dangers, starting with security. In its report on unstructured data, CIO Dive says, ‘idle data’ brings higher costs: “Costs associated with security breaches double for companies with more unstructured data.”

Then, there is the cost of storing unstructured data. Logic dictates that as storage costs grow, so must budgets for storage and management. With 38% of businesses already saying that data costs are too high or unpredictable, allocating a hard-won budget to data you will potentially never use can be a bitter pill.

Data sprawl is also the enemy of efficiency as employees manually input data from multiple, decentralised sources. Time and time again.

Given all the challenges, it’s no surprise that 88% of organisations agree that data sprawl makes data management hard and complicates implementing an end-to-end data strategy.

So, why hang on to it?

Despite the cost, few are willing to discard that precious (if underutilised) unstructured data – with most organisations saying that if the cost weren’t a factor, they’d like to keep their data longer.

But why?

  • Never say no to an opportunity. You can count on it: the data you never got around to analysing will be the data that could have given you that edge you so desperately wanted, or significantly improved the customer experience and, as a result, cemented lifetime loyalty. No one wants to be the person responsible for deciding to bin it.
  • Compliance caution. Unstructured data poses a massive compliance problem for many. Perhaps a big part of the problem is that 96% of organisations with mostly siloed unstructured data don’t know what information lies hidden in that sprawl (whereas of those who have centralised their unstructured data, 98% know exactly what lies beneath the chaos). The crux of it is that if you don’t know what’s in your unstructured data or where it is, you can’t be sure you’re effectively complying with the regulatory standards that govern your business. So, unless you have centralised your unstructured data and got to grips with what you have, it’s safer to hang on to all your data.
  • AI (artificial intelligence). Harnessing the power of AI is an opportunity that forward-thinking organisations should ignore at their peril. However, if you’re already a convert, you need to get your unstructured data firmly under control with a centralised approach to content. As observed by IDC, while 84% of businesses are already using or exploring AI, “given that LLMs (large language models) are trained on unstructured data, IT leaders can only leverage the power of AI once they have a strategy to manage and secure their data on a single platform.”

Who in your organisation ‘owns’ all this unstructured data anyway?

Answer: Your CDO (chief data officer), aka chief data and analytics officer or just chief analytics officer. Whereas your CIO is typically more focused on technology, your CDO is charged with developing and implementing your data strategy.

While an evolving and relatively new C-suite role, the CDO mantra is ‘data-driven success.’ Part of the CDO’s role, says CIO.com, is to “break down silos and change the practice of data hoarding in individual company units.”

With IDC reporting that companies that used their unstructured data in the past 12 months experienced “improved customer satisfaction, data governance and regulatory compliance, among other positive outcomes,” the CDO role is a big step in the right direction.  

With most (93%) CDOs agreeing that AI success is a high priority, it’s no surprise that adding analytics and AI to their portfolio is regarded as a key step to success. As is driving value by transforming and curating data (both structured and unstructured), to make it easier to succeed with generative AI.

And as LLMs are powered by unstructured data, it’s clear that one person’s data rubbish dump is a CDO’s carefully curated treasure trove.

Contact us to have an obligation-free chat about our data management services.

Cloud and GenAI. It had to happen.

Whiskers and kittens? Fish and chips? Ben and Jerry? Cloud and GenAI are set to become an inevitable pairing – and one you need to prepare for.

More cloud, more smarts

In its 2023 CIO and Technology Executive Survey, Gartner says the results indicate that over 62% of Australian CIOs expected to spend more on the cloud this year – but are they architecting their cloud platforms to prepare for GenAI?

“Local CIOs have told us the top two technologies they plan on investing in next year are SASE (secure access service edge) to simplify the delivery of critical network and security services via the cloud, and generative AI for its potential to improve innovation and efficiencies across the organization,” says Rowsell-Jones, Distinguished VP Analyst at Gartner.

According to Gartner, the investment in GenAI will continue to increase alongside the continued shift to digital in Australia over 2024. And Gartner anticipates that enterprises will primarily look to incorporate GenAI through their existing spend in the long term – via the software, hardware, and services already in use.

How will GenAI be served up to users?

GenAI thrives on data and compute power – and the more, the better. So, cloud is an obvious vehicle. However, training AI models, such as the LLM (large language model) that powers ChatGPT, requires access to massive amounts of data and vast amounts of compute power. And that poses a problem for organisations that are keen to drive value from GenAI but lack the computing resources to leverage this amazing but power-hungry technology.

This is where the first of Forbes’ (10) predictions for computing trends in 2024 comes in: Get ready for AI-as-a-Service.

Just when we needed yet another technology acronym, AIaaS pops into frame. It’s all good, though: By accessing AI-as-a-service through cloud platforms, even those lacking the necessary cloud infrastructure and compute power can leverage AI’s powerful, transformative technology.
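
In practice, consuming AI-as-a-Service usually means calling a provider’s hosted model over HTTPS instead of buying GPUs. The endpoint, headers and payload below are hypothetical placeholders, not any particular vendor’s API – a minimal sketch only:

```python
import requests

# Hypothetical AIaaS endpoint and key - substitute your provider's real values.
ENDPOINT = "https://api.example-aiaas.com/v1/generate"
API_KEY = "YOUR_API_KEY"

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "prompt": "Summarise our Q3 customer feedback in three bullet points.",
        "max_tokens": 200,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())  # the provider's GPUs did the heavy lifting, not yours
```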

While AIaaS is exciting, the subject of cloud cybersecurity and GenAI is more sobering. Forbes warns that “encryption, authentication and disaster recovery are three functions of cloud computing services that will be increasingly in demand as we face up to the evolving threat landscape of 2024.” With data thefts and breaches increasing in frequency and severity as hackers use AI to develop new forms of attack, all systems accessible to humans will be at risk from social engineering attacks – leaving security and resilience high on the agenda of all cloud providers and customers.

Which brings us to governance and readiness.

Governance and GenAI

In its must-do guide for GenAI governance, Phil Moyer, Google Cloud’s global vice-president for AI and Business Solutions, observed, “Today’s leaders are eager to adopt generative AI technologies and tools. Yet the next question after what to do with it remains, ‘How do you ensure risk management and governance with your AI models?’ In particular, using generative AI in a business setting can pose various risks around accuracy, privacy and security, regulatory compliance, and intellectual property infringement.”

And he makes a very good point. But it’s too early to look to the Australian government for prescriptive guidance just yet; there is currently no AI-specific regulatory framework in place. However, the good news is that we can expect the expanding risks to accelerate focused legislation. While Australia’s 8 Artificial Intelligence (AI) Ethics Principles are designed to ensure AI is safe, secure, and reliable, they are voluntary.

That said, the Australian Government is all in favour of AI adoption, pledging $41.2 million to ‘support the responsible deployment of AI’ in its 2023/2024 budget. This includes strengthening the Responsible AI Network and launching the Responsible AI Adopt Program to help SMEs adopt AI.

Governance internationally, though, has raced ahead. The proposed EU AI Act will be the world’s first comprehensive AI law – watch this space. In 2023, Australia joined the EU and 27 other countries in signing the Bletchley Declaration, an international commitment to ensuring that AI should be designed, developed, deployed, and used in a safe, human-centric, trustworthy, and responsible manner.

Ready, set, go – easier said than done?

How do you ensure you are ready for GenAI and your cloud infrastructure to play nice? It’s one thing to give GenAI the nod but another to successfully integrate it into your cloud architecture. Without a carefully defined and agreed-upon approach, you risk not only failed projects but also a compromised security framework.

  • Articulate and agree on use cases within your organisation for AI so you can determine what changes should be made to your IT landscape to best suit your needs.
  • Remember that GenAI is data-centric, so ensure your data is clean, accessible, and compatible with cloud storage solutions.
  • Think ahead when it comes to security and privacy. It’s imperative to have a robust security architecture integrated at every step of the process.
  • Balance scalability with cost-efficiency to reap benefits, rather than drain finances.
  • Choose the right cloud infrastructure model for your use case.
  • Monitor, monitor, and monitor. Not only the performance of your AI models but also your cloud resource costs, to ensure operational and architectural efficiency (a minimal monitoring sketch follows this list).
  • Be ethical, stay legal. If GenAI is making decisions impacting your users or creating content, then ethical considerations must drive design principles. While specific AI legislation is not (yet) in place, Australia’s Privacy Act covers some of the considerations, and amendments are due to follow.
  • Disaster recovery and resilience. High availability can be the difference between value and disaster. It’s critical that your provider/s can minimise downtime and data loss in case of system failures.
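
As a minimal, illustrative sketch of the ‘monitor, monitor, monitor’ point (the figures and thresholds below are assumptions, not recommendations), even a simple check of model latency and cloud spend against agreed limits goes a long way:

```python
import statistics

# Recent inference latencies (seconds) and this month's cloud spend - illustrative figures
latencies_s = [0.42, 0.51, 0.38, 1.90, 0.47]
monthly_cloud_spend_aud = 18_500
LATENCY_SLO_S = 1.0           # agreed service-level objective
MONTHLY_BUDGET_AUD = 15_000   # agreed cost ceiling

p95_latency = statistics.quantiles(latencies_s, n=20)[18]  # 95th percentile
if p95_latency > LATENCY_SLO_S:
    print(f"ALERT: p95 latency {p95_latency:.2f}s exceeds SLO of {LATENCY_SLO_S}s")
if monthly_cloud_spend_aud > MONTHLY_BUDGET_AUD:
    print(f"ALERT: cloud spend ${monthly_cloud_spend_aud:,} exceeds budget ${MONTHLY_BUDGET_AUD:,}")
```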

Your cloud infrastructure is critical to your ability to leverage GenAI’s transformative power. We don’t want you to be left behind.


The Modern CIO: Building bridges between business and customers.

Once upon a time, the CIO was an unappreciated and largely unknown hero, relegated to the back room and responsible for keeping the lights on without fanfare or recognition. Now, the role has matured into one that is central (and critical) to achieving business goals.

As well as being charged with the responsibilities that come with a seat at the boardroom table, today’s CIO is accountable for building a digital customer-first foundation that can easily evolve to meet changing demands.

How did Customer Experience (CX) become a CIO responsibility?

One of the most telling comments in Forrester’s “The CIO’s Role In The Growth Agenda” report comes from a CIO who told the researchers: “It turns out, I actually own customer experience because I’m responsible for the systems that serve them.”

And with CX being increasingly reliant on technology, the choices the CIO makes now will underpin business growth. They’re important, and far-reaching.

Here’s why.

The case for exceptional CX being the norm, not the exception.

In Forbes’ article from late 2023, “Leading Digital Transformation: Why CIOs Should Keep CX Top Of Mind,” they observe that research has repeatedly shown that keeping customers happy and finding better ways to engage with them is not just crucial for survival but also key to thriving in a challenging economic climate.

Forbes also points to PwC’s Customer Loyalty Executive Survey 2023, where 87% of executives and 51% of consumers in the United States agreed that an online shopping experience can negatively impact loyalty if it’s not as easy or enjoyable as shopping in person.

What is apparent from this is that CX is critical to growth and loyalty (and profitability) across virtually every aspect of customer interaction – from websites to apps, support to fulfilment, to personalised omnichannel communications based on previous behaviour, preferences, and purchases. And key to this is your organisation’s ability to collect and meaningfully analyse masses of data – via technology.

Is there more to the CIO role than CX, though?

While important, CX isn’t the be-all and end-all – it’s a two-way bridge. Your technology environment needs to empower your internal stakeholders so they can derive deeper and more valuable insights into the market and make better decisions. From what to sell, when and how, and what next – impacting product development, sales, customer service, marketing, and growth strategies.

And of course, the better the technology, the more ownership and support you’ll see from your tech teams.

So, circling back around to the original point of this article – today’s CIO plays a critical role in deciding and guiding the use of technology (from your systems of engagement, systems of insight, security, and infrastructure – nothing is exempt) and data.

The decisions you make should enhance how the business interacts with your customers, optimise its processes, and align your business strategies with the needs and high-flying CX expectations of your customers – while bringing joy to your stakeholders.

That given, let’s look at how you can ‘make it so.’

The four key strategies to drive a customer-centric tech approach.

1. Be customer aware

Make sure your business is where and what your customers expect it to be, and that they can interact with you in the way they want to.

While it’s not as simple as ‘build it and they will come’, failing to build solutions that deliver the high-quality experience your customers expect (from web to mobile apps to self-help) is a sure-fire path to failure in a digital world.

2. Stand united

Your technology model should link your tech and business teams – from marketing, to sales, CX and product, and digital – together, not drive a ‘have/have-not’ wedge between them.

In Forrester’s “The CIO’s Role In The Growth Agenda” report, they say: “In our studies, respondents at enterprises with high levels of alignment across customer-facing functions report 2.4x higher revenue growth than those with some or no alignment. Those same aligned groups benefit from working with IT teams that are 3.7 times more likely to be highly or somewhat aligned with other functions.”

Also consider what new technologies like AI (artificial intelligence) and ML (machine learning) will bring to the table as part of your drive to improve your business operations and gain a competitive advantage. While you may prefer to develop custom models that work well with your current data sets, keep an eye out for records management application vendors who are incorporating AI directly into their products.

3. Discard complexity

Stop investing in old technology. Make now the time to move on from the cost and complications that come with legacy systems, and instead consolidate and build better customer-facing systems.

Reduce the complexity of your systems of records by ensuring you have a strong ability to retrieve data from your existing systems. This way you can be confident that you can access the data you need in the future – which is especially important if you are in a regulated industry.

For example, in the professional services sector, many organisations are switching to cloud-based records management systems to enable new business innovation and, as a result, are shutting down their old on-premises systems. Global Storage customers in this sector trust that their legacy data is secure and recoverable through our range of cloud services, which allows them to move forward and free up old capital and resources.

4. Invest in results

While it’s tempting to adopt one shiny, exciting new solution after another, step back and reconsider. The most important thing about technology is the result, not the way to achieve it.

Keeping this in mind will help you focus on what matters most to the business. For example, Global Storage offers an outcome-based service with strict SLAs that allows our customers to concentrate on innovation within the business. This saves them from getting bogged down in the essential but routine operational tasks and the effort and expense of keeping up with new technology and systems that ultimately add little value to the business.

In summary, building great bridges requires strong foundations – ones that are deep and true to support the weight of change and significant business growth.

Above all, the foundations you lay as CIO should enable fast and complete business recovery following a natural or maliciously contrived disaster.

Contact us to have an obligation-free chat.


Global Storage takes out Veeam VCSP Partner of the Year for ANZ

Veeam recently announced their ANZ Partner Awards to celebrate the success of their channel in 2022. Global Storage were delighted to accept the award for Veeam Cloud and Service Provider (VCSP) of the Year for Australia.

Laura Currie, Channel & Alliances Marketing Manager for ANZ, commented on the award:

“Your commitment to ongoing growth and valuable insights into our products and programs have truly set you apart. Your dedication to our partnership and active engagement within the Veeam community have significantly contributed to our mutual success.”

The partner awards celebrated 13 partners across ANZ for their achievement and activity with Veeam in the previous year.

“In the past year, Veeam has made great progress in helping its ANZ partners build their practices, in order to better serve their customers,” said Gary Mitchell, VP of ANZ at Veeam Software. Gary went on to say that “Veeam’s 100 per cent channel model firmly puts Veeam’s partners at the centre of the ecosystem and we are extremely proud to be working with them to provide customers with the resilience, availability, and business outcomes they need. We are thrilled to be able to celebrate their achievements at this year’s ANZ Partner Awards.”

This award reflects Global Storage’s ongoing commitment to delivering our innovative and secure Backup and Disaster Recovery as a Service offering.

As a Platinum Veeam VCSP partner, we invest in our people, with six certified Veeam Technical Sales Professionals forming part of our team. With over two decades of data management experience, the Global Storage team is uniquely qualified to help companies of all sizes realise agility, efficiency, and intelligent data management across diverse cloud environments.

Source: Veeam celebrates A/NZ channel — ARN (arnnet.com.au)


Written in partnership with Veeam.

Cloud: Simplifying an increasingly complex hybrid landscape with confidence

The challenges for today’s CISOs aren’t going away any time soon – especially when it comes to data management, protection and recovery in a multi-cloud or hybrid-cloud environment.

The complexities associated with cloud and tech environments were listed as a top 3 challenge in the Focus Networks Intel Report for the CIO & CISO Leaders Australia Summit 2023. And, according to ARN, cloud spending will top the list in 2024.

So, what does this mean for your organisation and its ability to manage your hybrid cloud environment?

Shouldn’t hybrid cloud be getting easier, not more complex?

You’d think the rush to hybrid cloud would be slowing down by now.

But, says Veeam, in its #1 Hybrid Cloud Backup Guide, hybrid cloud implementations are unlikely to go away. Whether by careful, strategic design or accidental evolution, 92% of businesses already have a hybrid or multi-cloud setup. Regardless of the route taken, hybrid cloud is today’s reality for most organisations.

Hybrid cloud, observes Veeam, no longer means a mix of on-premises and a (single) public cloud. These days, a hybrid environment is more likely to consist of specifically chosen platforms used to serve different purposes. For example, disaster recovery (DR), production, dev-test and more. Meaning there’s more to measure, manage, and protect.

So, it’s easy to see how, over time, the complexity of hybrid cloud – especially in terms of backing it up – has grown, not lessened.

Managing data protection and security is easy (said no one, ever)

As we adopt more modern platforms, the struggle to manage them and their dispersed, often locked-away data grows in the face of ever-evolving cyber threats. And legacy backup solutions won’t cut the mustard. They’re old news, high-risk, and only suitable for dangerously old and high-risk technology environments.

If you have a modern multi-cloud environment, it’s obvious you need to take a modern approach to protecting it. Even then, not all cloud backup solutions on offer are created equal. With the need to back up your physical and virtual machines (VMs), cloud-native infrastructure and platforms, SaaS, and Kubernetes – all of which benefit from purpose-built protection – it can be a big ask. While native backup tooling is available from both first- and third-party vendors, this multi-vendor approach can result in siloed management and often creates more challenges than it overcomes. At a time when the desire is to reduce costs and simplify management, it does the opposite.

Then, there are those public cloud vendors who lock your data into their platforms, meaning you need to compromise on performance, capabilities, and costs rather than embrace a move to a better, more suitable platform.

Multi-cloud and hybrid-cloud environments are now the norm, not the exception. So, the need for a single-pane-of-glass approach to data management, protection and recovery is more critical than ever before.

The lowdown on the future of cloud (and what it means for you)

First, let’s look at where cloud is heading. Because above all, as cloud evolves and transforms, you need to consider solutions that will go the distance.

In Forbes’ article on Cloud Computing In 2024: Unveiling Transformations And Opportunities, they open with this bold statement: “The dynamic realm of cloud computing is on the brink of remarkable transformations in 2024, as organizations and service providers brace themselves for an era characterized by innovation, challenges, and unprecedented opportunities.”

Sounds great, but what do they actually mean by this?

In its list of 11 key trends for 2024, Forbes says the era of one-size-fits-all cloud solutions is on the way out, and a more tailored, dynamic approach that combines public and private clouds is in. Hybrid and multi-cloud environments are set to become the new normal for organisations of all sizes – which comes as little surprise to most of us.

More importantly (in the context of this blog), Forbes says that with the shift to multi-cloud environments and serverless computing, IT departments will face novel challenges, including paying more attention to security. While specialised solutions that are designed to help simplify the inherently intricate nature of multi-cloud environments are emerging, Forbes cautions against tools that conceal complexity without genuinely streamlining or reducing it.

More positively, though, Forbes says that AI will optimise cloud management as it transitions from novelty to norm, bringing benefits that include streamlined overall cloud operations.

Another trend Forbes noted (one that’s far from new in a world strapped for skilled technology resources) is the challenge of bridging a skills gap as cloud adoption increases. That means solutions that reduce the need for specialised cloud-computing professionals will be welcomed with open arms.

So, where to from here?

Given the challenges, what’s important when considering a data protection, management, and security platform to support your ever-evolving hybrid-cloud environment?

  • Centralised management. Drive efficiency and reduce costs with a single view of all environments and just one toolset.
  • The ability to support everything. As hybrid environments grow in complexity, look for a solution that natively supports everything from SaaS to physical servers, Kubernetes, and more.
  • Own your own data. Eliminate data lock-in with a solution that allows you to move data freely across your infrastructure so it’s available where and when you need it.
  • Only use and pay for what you need. Choose a solution that allows you to cherry-pick the components you need without financial or licensing penalties.
  • A seamless experience. Protect, manage, and recover your hybrid cloud environment with a platform that delivers what it promises without downtime, data loss, or compromise.

Hybrid cloud offers benefits and challenges in equal measure – something we deal with daily. Reach out to Global Storage for an obligation-free chat about how we can help you simplify the complex.


Written in partnership with Veeam.
