Chicken or egg: Cyber resistance vs cyber resilience
In a digital world where data is the new ‘everything’, it’s unsurprising that it has become a prime target for criminals. Data is the modern-day equivalent of a stash of gold bullion – and it can be stolen, ransomed, and sold for profit with less effort and risk than a bank heist.
The unrelenting waves of global cyberattacks mean that the cost of business survival is escalating – with the cost of cyberattacks doubling between 2022 and 2023. To combat this, Infosecurity Magazine reports that 69% of IT leaders saw or expected cybersecurity budget increases of between 10% and 100% in 2024.
The cost of crime
At the pointy end of the problem, organisations face damaged or destroyed data, plundered bank accounts, financial fraud, lost productivity, purloined intellectual property, the theft of personal and financial data, and more.
The blunt end is no less damaging. There’s the cost of recovering data, rebuilding your reputation, and getting your business back to a state of BAU as soon as possible, as well as the hefty price tag that comes with forensic investigation, restoring and deleting hacked data and systems, and even prosecution.
Generative AI to the cyber-rescue?
Many see the rise of generative AI and expansion into hybrid and multi-cloud environments as the means to alleviate the ongoing attacks. But, of course, the democratisation of generative AI (in other words, goodies and baddies have equal access to its powers) means that potential risks are also heightened.
Despite this, it’s hard to overcome the optimism that generative AI will be a cyber-saviour. According to Dell Technologies’ 2024 Global Data Protection Index (APJ Cyber Resiliency Multicloud Edition), 46% of respondents believe that generative AI can initially provide an advantage to their cybersecurity posture, and 42% are investing accordingly.
But here’s the rub: 85% agree that generative AI will create large volumes of new data that will need to be protected and secured. So generative AI will, by default, (A) potentially offer better protection and (B) increase the available attack space due to data sprawl and unstructured data.
Resistance vs resilience
Of the APJ organisations (excluding China) that Dell surveyed, 57% say they’ve experienced a cyberattack or cyber-related incident in the last 12 months.
And a good 76% have expressed concern that their current data protection measures are unable to cope with malware and ransomware threats. 66% say they’re not even confident that they can recover all their business-critical data in the event of a destructive cyberattack.
So why, if 66% of organisations doubt their ability to recover their data, are 54% investing more in cyber prevention than recovery?
Can you separate the cyber chicken from the egg?
In a recent cybersecurity stats round-up, Forbes Advisor reported that in 2023, there were 2,365 cyberattacks impacting 343 million victims.
Given the inevitability of cyberattack, it’s critical that your methods of resistance are robust, and if disaster strikes, your ability to recover is infallible.
Look at it this way: While a cruise liner obviously must have radar to detect and try to avoid approaching icebergs, angry orcas, and other collision-prone objects, it’s just as important that it has lifeboats, lifeboat drills, lifejackets, and locator devices available to minimise loss of life and keep everyone afloat.
In the words of Harvard Business Review: “Simply being security-conscious is no longer enough, nor is having a prevention-only strategy. Companies must become cyber-resilient—capable of surviving attacks, maintaining operations, and embracing new technologies in the face of evolving threats.”
So, how do you bolster your cyber resilience?
According to Dell, 50% of the organisations they surveyed have brought in outside support (including cyber recovery services) to enhance cyber resilience.
While AI will undoubtedly introduce some initial advantages, as suggested earlier, those could be quickly offset as cybercriminals leverage the very same tools. Not only are traditional system and software vulnerabilities under attack, but the sprawl of AI-generated data is creating new and broader opportunities for attackers.
So – can we rely on generative AI to save the day? Probably not – or not yet anyway. What about outside help? Yes, most definitely. However, cyber resilience begins at home, with a top-down strategy based on some inarguable facts:
- Attacks are inevitable. Once you accept that this is the new reality of the digital age, the logical next step is to develop a clear, holistic strategy focusing on business continuity and crisis planning.
- People are the first and best line of defence. Ensure your entire organisation takes responsibility and is cyber-aware – to the extent that your procedures are included in your company policies and onboarding processes. This should include delivering ongoing cyber awareness training and introducing regular drills.
- When disaster strikes, survival is in your hands. Establish clear cybersecurity governance that aligns with your business objectives. Everyone in the organisation should know what they need to do to protect the organisation, its data, and its clients and ensure continuity of operations.
- No one is trustworthy. Assume everything around your network is a potential threat. Adopt a zero-trust mindset that requires continual verification and rigidly controls access based on preset policies.
- What you don’t know can hurt you. The ability to detect and prevent threats is critical. Invest in Security as a Service to provide visibility into your data, regardless of where it’s located, so that you can see and address your weaknesses.
- Disaster will strike. We live in unexpected times, where cybercrime and unprecedented natural disasters conspire to stop us in our tracks. With cloud-based Disaster Recovery as a Service, the risk of permanently losing data and disrupting business as usual is significantly reduced.
Do you have a data rubbish dump, or a treasure trove?
Back in 2018, IDC predicted that by 2025, the Global Datasphere would have grown from 33 zettabytes to 175 zettabytes. Arcserve predicted 200 zettabytes, and Statista 180 zettabytes. Now, with 328.77 million terabytes of data being created daily in 2024, Statista’s prediction looks to be on the money.
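As a rough back-of-envelope check (assuming decimal units, where one zettabyte equals one billion terabytes), that daily figure annualises to roughly 120 zettabytes of new data a year – broadly consistent with the trajectory towards Statista’s 180-zettabyte forecast for 2025:

```python
# Rough check of the daily data-creation figure quoted above (decimal units assumed).
TB_PER_ZB = 1_000_000_000           # 1 zettabyte = 10^21 bytes = 10^9 terabytes

daily_tb = 328.77e6                 # ~328.77 million terabytes created per day
daily_zb = daily_tb / TB_PER_ZB     # ~0.33 ZB per day

annual_zb = daily_zb * 365          # ~120 ZB per year
print(f"~{annual_zb:.0f} ZB of new data per year")
```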
While that’s all very impressive, what’s probably of more interest to most of us is what form that data will take. Why? Because data falls into two camps: structured data, which is immediately useful, and unstructured data, which is raw, unprocessed, often chaotic, and therefore challenging to utilise.
According to IDC, 90% of business data is unstructured – and consists of customer contracts, employee handbooks, product specs, video, imagery, IoT sensor data – and more. Only 46% of companies report that they analyse their unstructured data to extract value from it – and less than half of it at that.
The problem with unstructured data
Structured data, with its standardised format, is low-hanging fruit, ripe for transformation into business insights. So, its value is readily appreciated.
Unstructured data, by its very nature, isn’t easy to search or sort – and, more often than not, it’s sprawled across an organisation in silos. But given the wealth (and breadth) of information it represents, it’s also immensely valuable – just harder to access.
Stockpiling unstructured data, with its sensitive customer, company, and employee information, has inherent dangers, starting with security. In its report on unstructured data, CIO Dive says, ‘idle data’ brings higher costs: “Costs associated with security breaches double for companies with more unstructured data.”
Then, there is the cost of storing unstructured data. Logic dictates that as storage costs grow, so must budgets for storage and management. With 38% of businesses already saying that data costs are too high or unpredictable, allocating a hard-won budget to data you will potentially never use can be a bitter pill.
Data sprawl is also the enemy of efficiency as employees manually input data from multiple, decentralised sources. Time and time again.
Given all the challenges, it’s no surprise that 88% of organisations agree that data sprawl makes data management hard and complicates implementing an end-to-end data strategy.
So, why hang on to it?
Despite the cost, few are willing to discard that precious (if underutilised) unstructured data – with most organisations saying that if the cost weren’t a factor, they’d like to keep their data longer.
But why?
- Never say no to an opportunity. The data you never got around to analysing could be exactly what would have given you that edge you so desperately wanted, or significantly improved the customer experience and, as a result, cemented lifetime loyalty. No one wants to be the person responsible for deciding to bin it.
- Compliance caution. Unstructured data poses a massive compliance problem for many. Perhaps a big part of the problem is that 96% of organisations with mostly siloed unstructured data don’t know what information lies hidden in that sprawl (whereas of those who have centralised their unstructured data, 98% know exactly what lies beneath the chaos). The crux of it is that if you don’t know what’s in your unstructured data or where it is, you can’t be sure you’re effectively complying with the regulatory standards that govern your business. So, unless you have centralised your unstructured data and got to grips with what you have, it’s safer to hang on to all your data. (A minimal sketch of what a first-pass scan of that sprawl might look like follows this list.)
- AI (artificial intelligence). Harnessing the power of AI is an opportunity that forward-thinking organisations should ignore at their peril. However, if you’re already a convert, you need to get your unstructured data firmly under control with a centralised approach to content. As observed by IDC, while 84% of businesses are already using or exploring AI, “given that LLMs (large language models) are trained on unstructured data, IT leaders can only leverage the power of AI once they have a strategy to manage and secure their data on a single platform.”
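To make the compliance point above concrete, here is a minimal, hypothetical sketch of the kind of first-pass scan an organisation might run over a share of unstructured text files to discover where sensitive information is hiding. The patterns, file types, and paths are illustrative only – purpose-built data discovery and classification tools do this far more thoroughly:

```python
import re
from pathlib import Path

# Illustrative patterns only – real data discovery tools use far richer detection.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "au_tax_file_number": re.compile(r"\b\d{3} ?\d{3} ?\d{3}\b"),  # rough TFN shape
}

def scan_unstructured(root: str) -> dict[str, list[str]]:
    """Walk a directory of text files and report which files contain
    which categories of potentially sensitive data."""
    findings: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        hits = [name for name, pattern in PATTERNS.items() if pattern.search(text)]
        if hits:
            findings[str(path)] = hits
    return findings

if __name__ == "__main__":
    # Hypothetical file share holding exported contracts, handbooks, logs and so on.
    for file, categories in scan_unstructured("/data/unstructured").items():
        print(f"{file}: {', '.join(categories)}")
```

Even a crude inventory like this tells you where the riskiest files sit, which is the first step towards centralising and governing them properly.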
Who in your organisation ‘owns’ all this unstructured data anyway?
Answer: Your CDO (chief data officer), aka chief data and analytics officer or just chief analytics officer. Whereas your CIO is typically more focused on technology, your CDO is charged with developing and implementing your data strategy.
While an evolving and relatively new C-suite role, the CDO mantra is ‘data-driven success.’ Part of the CDO’s role, says CIO.com, is to “break down silos and change the practice of data hoarding in individual company units.”
With IDC reporting that companies that used their unstructured data in the past 12 months experienced “improved customer satisfaction, data governance and regulatory compliance, among other positive outcomes,” the CDO role is a big step in the right direction.
With most (93%) CDOs agreeing that AI success is a high priority, it’s no surprise that adding analytics and AI to their portfolio is regarded as a key step to success. As is driving value by transforming and curating data (both structured and unstructured), to make it easier to succeed with generative AI.
And as LLMs are powered by unstructured data, it’s clear that one person’s data rubbish dump is a CDO’s carefully curated treasure trove.
Contact us to have an obligation-free chat about our data management services.
Cloud and GenAI. It had to happen.
Whiskers and kittens? Fish and chips? Ben and Jerry? Cloud and GenAI are set to become an inevitable pairing – and one you need to prepare for.
More cloud, more smarts
In its 2023 CIO and Technology Executive Survey, Gartner says the results indicate that over 62% of Australian CIOs expected to spend more on the cloud this year – but are they architecting their cloud platforms to prepare for GenAI?
“Local CIOs have told us the top two technologies they plan on investing in next year are SASE (secure access service edge) to simplify the delivery of critical network and security services via the cloud, and generative AI for its potential to improve innovation and efficiencies across the organization,” says Rowsell-Jones, Distinguished VP Analyst at Gartner.
According to Gartner, the investment in GenAI will continue to increase alongside the continued shift to digital in Australia over 2024. And Gartner anticipates that enterprises will primarily look to incorporate GenAI through their existing spend in the long term – via the software, hardware, and services already in use.
How will GenAI be served up to users?
GenAI thrives on data and compute power – and the more, the better. So, cloud is an obvious vehicle. However, training AI models, such as the LLM (large language model) that powers ChatGPT, requires access to massive amounts of data and vast amounts of compute power. And that poses a problem for organisations that are keen to drive value from GenAI but lack the computing resources to leverage this amazing but power-hungry technology.
This is where the first of Forbes’ 10 predictions for computing trends in 2024 comes in: Get ready for AI-as-a-Service.
Just when we needed yet another technology acronym, AIaaS pops into frame. It’s all good, though: By accessing AI-as-a-service through cloud platforms, even those lacking the necessary cloud infrastructure and compute power can leverage AI’s powerful, transformative technology.
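To make ‘AI-as-a-Service’ a little more tangible, here is a minimal, hypothetical sketch of calling a cloud-hosted text-generation endpoint over HTTPS. The endpoint URL, header, and payload shape are placeholders (every provider defines its own API), but the basic pattern – send a prompt, receive generated text, pay per use, with no GPUs of your own – is the same:

```python
import os
import requests  # third-party HTTP client: pip install requests

# Placeholder endpoint and key – substitute your provider's actual values.
ENDPOINT = "https://api.example-ai-provider.com/v1/generate"
API_KEY = os.environ["AIAAS_API_KEY"]

def generate_text(prompt: str, max_tokens: int = 200) -> str:
    """Send a prompt to a hosted LLM endpoint and return the generated text.
    The request/response shape here is illustrative, not a specific vendor's API."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": max_tokens},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]

if __name__ == "__main__":
    print(generate_text("Summarise our Q3 incident reports in three bullet points."))
```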
While AIaaS is exciting, the subject of cloud cybersecurity and GenAI is more sobering. Forbes warns that “encryption, authentication and disaster recovery are three functions of cloud computing services that will be increasingly in demand as we face up to the evolving threat landscape of 2024.” With data thefts and breaches increasing in frequency and severity as hackers use AI to develop new forms of attack, all systems accessible to humans will be at risk from social engineering attacks – leaving security and resilience high on the agenda of all cloud providers and customers.
Which brings us to governance and readiness.
Governance and GenAI
In its must-do guide for GenAI governance, Phil Moyer, Google Cloud’s global vice-president for AI and Business Solutions, observed, “Today’s leaders are eager to adopt generative AI technologies and tools. Yet the next question after what to do with it remains, ‘How do you ensure risk management and governance with your AI models?’ In particular, using generative AI in a business setting can pose various risks around accuracy, privacy and security, regulatory compliance, and intellectual property infringement.”
And he makes a very good point. But it’s too early to look to the Australian government for prescriptive guidance just yet; there is currently no AI-specific regulatory framework in place. However, the good news is that we can expect the expanding risks to accelerate focused legislation. While Australia’s 8 Artificial Intelligence (AI) Ethics Principles are designed to ensure AI is safe, secure, and reliable, they are voluntary.
That said, the Australian Government is all in favour of AI adoption, pledging $41.2 million to ‘support the responsible deployment of AI’ in its 2023/2024 budget. This includes strengthening the Responsible AI Network and launching the Responsible AI Adopt Program to help SMEs adopt AI.
Governance internationally, though, has raced ahead. The proposed EU AI Act will be the world’s first comprehensive AI law – watch this space. In 2023, Australia joined the EU and 27 other countries in signing the Bletchley Declaration, an international commitment to ensuring that AI should be designed, developed, deployed, and used in a safe, human-centric, trustworthy, and responsible manner.
Ready, set, go – easier said than done?
How do you ensure you’re ready for GenAI and that your cloud infrastructure will play nice? It’s one thing to give GenAI the nod but another to successfully integrate it into your cloud architecture. Without a carefully defined and agreed-upon approach, you risk not only failed projects but also a compromised security framework.
- Articulate and agree on use cases within your organisation for AI so you can determine what changes should be made to your IT landscape to best suit your needs.
- Remember that GenAI is data-centric, so ensure your data is clean, accessible, and compatible with cloud storage solutions.
- Think ahead when it comes to security and privacy. It’s imperative to have a robust security architecture integrated at every step of the process.
- Balance scalability with cost-efficiency to reap benefits, rather than drain finances.
- Choose the right cloud infrastructure model for your use case.
- Monitor, monitor, and monitor. Not only the performance of your AI models but also your cloud resource costs to ensure operational and architectural efficiency.
- Be ethical, stay legal. If GenAI is making decisions impacting your users or creating content, then ethical considerations must drive design principles. While specific AI legislation is not (yet) in place, Australia’s Privacy Act covers some of the considerations, and amendments are due to follow.
- Disaster recovery and resilience. High availability can be the difference between value and disaster. It’s critical that your provider/s can minimise downtime and data loss in case of system failures.
Your cloud infrastructure is critical to your ability to leverage GenAI’s transformative power. We don’t want you to be left behind.
The Modern CIO: Building bridges between business and customers.
Once upon a time, the CIO was an unappreciated and largely unknown hero; relegated to the back room and responsible for keeping the lights on without fanfare or recognition. Now, the role has matured to one which is central (and critical) to achieving business goals.
As well as being charged with the responsibilities that come with a seat at the boardroom table, today’s CIO is accountable for building a digital customer-first foundation that can easily evolve to meet changing demands.
How did Customer Experience (CX) become a CIO responsibility?
One of the most telling comments in Forrester’s “The CIO’s Role In The Growth Agenda” report is this: “One CIO we spoke with told us, ‘It turns out, I actually own customer experience because I’m responsible for the systems that serve them.’”
And with CX being increasingly reliant on technology, the choices the CIO makes now will underpin business growth. They’re important, and far-reaching.
Here’s why.
The case for exceptional CX being the norm, not the exception.
In Forbes’ article from late 2023, “Leading Digital Transformation: Why CIOs Should Keep CX Top Of Mind,” they observe that research has repeatedly shown that keeping customers happy and finding better ways to engage with them is not just crucial for survival but also key to thriving in a challenging economic climate.
Forbes also points to PwC’s Customer Loyalty Executive Survey 2023, where 87% of executives and 51% of consumers in the United States agreed that an online shopping experience can negatively impact loyalty if it’s not as easy or enjoyable as shopping in person.
What is apparent from this is that CX is critical to growth and loyalty (and profitability) across virtually every aspect of customer interactions – from websites to apps, support to fulfilment, to personalised omnichannel communications based on previous behaviour, preferences, and purchases. And key to this is your organisation’s ability to collect and meaningfully analyse masses of data – via technology.
Is there more to the CIO role than CX, though?
While important, CX isn’t the be-all and end-all – it’s a two-way bridge. Your technology environment needs to empower your internal stakeholders so they can derive deeper and more valuable insights into the market and make better decisions. From what to sell, when and how, and what next – impacting product development, sales, customer service, marketing, and growth strategies.
And of course, the better the technology, the more ownership and support by your tech teams.
So, circling back around to the original point of this article – today’s CIO plays a critical role in deciding and guiding the use of technology (from your systems of engagement, systems of insight, security, and infrastructure – nothing is exempt) and data.
The decisions you make should enhance how the business interacts with your customers, optimise its processes, and align your business strategies with the needs and high-flying CX expectations of your customers – while bringing joy to your stakeholders.
That given, let’s look at how you can ‘make it so.’
The four key strategies to drive a customer-centric tech approach.
1. Be customer aware
Make sure your business is where and what your customers expect it to be with the ability to interact with you how they want to.
While it’s not as simple as ‘build it and they will come’, failing to build solutions that deliver the high-quality experience your customers expect (from web to mobile apps to self-help) is a sure-fire path to failure in a digital world.
2. Stand united
Your technology model should link your tech and business teams – from marketing to sales, CX, product, and digital – together, not drive a ‘have/have-not’ wedge between them.
In Forrester’s “The CIO’s Role In The Growth Agenda” report, they say: “In our studies, respondents at enterprises with high levels of alignment across customer-facing functions report 2.4x higher revenue growth than those with some or no alignment. Those same aligned groups benefit from working with IT teams that are 3.7 times more likely to be highly or somewhat aligned with other functions.”
Also consider what new technologies like AI (artificial intelligence) and ML (machine learning) will bring to the table as part of your drive to improve your business operations and gain a competitive advantage. While you may prefer to develop custom models that work well with your current data sets, keep an eye out for records management application vendors who are incorporating AI directly into their products.
3. Discard complexity
Stop investing in old technology. Make now the time to move on from the cost and complications that come with legacy systems, so you can consolidate and build better customer-facing systems.
Reduce the complexity of your systems of records by ensuring you have a strong ability to retrieve data from your existing systems. This way you can be confident that you can access the data you need in the future – which is especially important if you are in a regulated industry.
For example, in the professional services sector, many organisations are switching to cloud-based records management systems to enable new business innovation, and as a result, are shutting down their old on-premises systems. Global Storage customers in this sector trust that their legacy data is secure and recoverable through our range of cloud services, which allows them to move forward and free up old capital and resources.
4. Invest in results
While it’s tempting to adopt one shiny, exciting new solution after another, step back and reconsider. The most important thing about technology is the result, not the way to achieve it.
Keeping this in mind will help you focus on what matters most to the business. For example, Global Storage offers an outcome-based service with strict SLAs that allows our customers to concentrate on innovation within the business. This saves them from getting bogged down in the essential but routine operational tasks and the effort and expense of keeping up with new technology and systems that ultimately add little value to the business.
In summary, building great bridges requires strong foundations – ones that are deep and true to support the weight of change and significant business growth.
Above all, the foundations you lay as CIO should enable fast and complete business recovery following a natural or maliciously contrived disaster.
Contact us to have an obligation-free chat.
Global Storage takes out Veeam VCSP Partner of the Year for ANZ
Veeam recently announced their ANZ Partner Awards to celebrate the success of their channel in 2022. Global Storage were delighted to accept the award for Veeam Cloud and Service Provider (VCSP) of the Year for Australia.
Laura Currie, Channel & Alliances Marketing Manager for ANZ, commented on the award.
“Your commitment to ongoing growth and valuable insights into our products and programs have truly set you apart. Your dedication to our partnership and active engagement within the Veeam community have significantly contributed to our mutual success.”
The partner awards celebrated 13 partners across ANZ for their achievement and activity with Veeam in the previous year.
“In the past year, Veeam has made great progress in helping its ANZ partners build their practices, in order to better serve their customers,” said Gary Mitchell, VP of ANZ at Veeam Software. Gary went on to say that “Veeam’s 100 per cent channel model firmly puts Veeam’s partners at the centre of the ecosystem and we are extremely proud to be working with them to provide customers with the resilience, availability, and business outcomes they need. We are thrilled to be able to celebrate their achievements at this year’s ANZ Partner Awards.”
This award reflects Global Storage’s ongoing commitment to delivering our innovative and secure Backup and Disaster Recovery as a Service offering.
As a Platinum Veeam VCSP partner, we invest in our people, with six certified Veeam Technical Sales Professionals forming part of our team. With over two decades of data management experience, the Global Storage team is uniquely qualified to help companies of all sizes realise agility, efficiency, and intelligent data management across diverse cloud environments.

Source: Veeam celebrates A/NZ channel — ARN (arnnet.com.au)
Written in partnership with Veeam.
Cloud: Simplifying an increasingly complex hybrid landscape with confidence
The challenges for today’s CISOs aren’t going away any time soon – especially when it comes to data management, protection and recovery in a multi-cloud or hybrid-cloud environment.
The complexities associated with cloud and tech environments were listed as a top-three challenge in the Focus Networks Intel Report for the CIO & CISO Leaders Australia Summit 2023. And, according to ARN, cloud spending will top the list in 2024.
So, what does this mean for your organisation and its ability to manage your hybrid cloud environment?
Shouldn’t hybrid cloud be getting easier, not more complex?
You’d think the rush to hybrid cloud would be slowing down by now.
But, says Veeam, in its #1 Hybrid Cloud Backup Guide, hybrid cloud implementations are unlikely to go away. Whether by careful, strategic design or accidental evolution, 92% of businesses already have a hybrid or multi-cloud setup. Regardless of the route taken, hybrid cloud is today’s reality for most organisations.
Hybrid cloud, observes Veeam, no longer means a mix of on-premises and a (single) public cloud. These days, a hybrid environment is more likely to consist of specifically chosen platforms used to serve different purposes. For example, disaster recovery (DR), production, dev-test and more. Meaning there’s more to measure, manage, and protect.
So, it’s easy to see how, over time, the complexity of hybrid cloud – especially in terms of backing it up – has grown, not lessened.
Managing data protection and security is easy (said no one, ever)
As we adopt more modern platforms, the struggle to manage them and their dispersed, often locked-away data grows in the face of ever-evolving cyber threats. And legacy backup solutions won’t cut the mustard. They’re old news, high-risk, and only suitable for dangerously old and high-risk technology environments.
If you have a modern multi-cloud environment, it’s obvious you need to take a modern approach to protecting it. Even then, not all cloud backup solutions on offer are created equal. With the need to back up your physical and virtual machines (VMs), cloud-native infrastructure and platforms, SaaS, and Kubernetes – all of which benefit from purpose-built protection, it can be a big ask. While native backup tooling is available from both first- and third-party vendors, this multi-vendor approach can result in siloed management and often creates more challenges than it overcomes. At a time when the desire is to reduce costs and simplify management, it does the opposite.
Then, there are those public cloud vendors who lock your data into their platforms, meaning you need to compromise on performance, capabilities, and costs rather than embrace a move to a better, more suitable platform.
Multi-cloud and hybrid-cloud environments are now the norm, not the exception. So, the need for a single-pane-of-glass approach to data management, protection, and recovery is more critical than ever before.
The lowdown on the future of cloud (and what it means for you)
First, let’s look at where cloud is heading. Because above all, as cloud evolves and transforms, you need to consider solutions that will go the distance.
In Forbes’ article “Cloud Computing In 2024: Unveiling Transformations And Opportunities”, they open with this bold statement: “The dynamic realm of cloud computing is on the brink of remarkable transformations in 2024, as organizations and service providers brace themselves for an era characterized by innovation, challenges, and unprecedented opportunities.”
Sounds great, but what do they actually mean by this?
In its list of 11 key trends for 2024, Forbes says the era of one-size-fits-all cloud solutions is on the way out and a more tailored and dynamic approach that combines public and private clouds is in. Hybrid and multi-cloud environments are set to become the new normal for organizations of all sizes – which comes as little surprise to most of us.
More importantly (in the context of this blog), Forbes says that with the shift to multi-cloud environments and serverless computing, IT departments will face novel challenges, including paying more attention to security. While specialised solutions that are designed to help simplify the inherently intricate nature of multi-cloud environments are emerging, Forbes cautions against tools that conceal complexity without genuinely streamlining or reducing it.
More positively, though, Forbes says that AI will optimise cloud management, transitioning from novelty to norm and bringing benefits that include streamlined overall cloud operations.
Another trend Forbes noted (one that’s far from new in a world strapped for skilled technology resources) is the challenge of bridging a skills gap as cloud adoption increases. Meaning solutions that reduce the need for specialised cloud-computing professionals will be welcomed with open arms.
So, where to from here?
Given the challenges, what’s important when considering a data protection, management, and security platform to support your ever-evolving hybrid-cloud environment?
- Centralised management. Drive efficiency and reduce costs with a single view of all environments and just one toolset.
- The ability to support everything. As hybrid environments grow in complexity, look for a solution that natively supports everything from SaaS to physical servers, Kubernetes, and more.
- Own your own data. Eliminate data lock-in with a solution that allows you to move data freely across your infrastructure so it’s available where and when you need it.
- Only use and pay for what you need. Choose a solution that allows you to cherry-pick the components you need without financial or licensing penalties.
- A seamless experience. Protect, manage, and recover your hybrid cloud environment with a platform that delivers what it promises without downtime, data loss, or compromise.
Hybrid cloud offers benefits and challenges in equal measure – something we deal with daily. Reach out to Global Storage for an obligation-free chat about how we can help you simplify the complex.
Written in partnership with Veeam.
The new NIST list – what you need to know
How time flies. It’s already been almost 10 years since the NIST (National Institute of Standards and Technology) Cybersecurity Framework was first rolled out to provide technical guidance for those responsible for critical infrastructure interests, including energy, banking, and public health.
By early November, we can expect to see a sixth function officially added to the famous five functions of an effective cybersecurity program – namely: Identify, protect, detect, respond, and recover.
And we’re glad to say that the final function is ‘govern’.
It’s expected that the addition of the sixth function will expand the usefulness of the NIST framework to all those sectors outside of critical infrastructure and provide guidance to support their overall cybersecurity strategies.
Celebrating the new NIST framework
So, why does NIST 2.0 make us quietly happy? Possibly because it’s something we’ve taken to heart.
From the Global Storage perspective, governance has long been the missing piece in the cybersecurity puzzle. Having gone through the intensive processes of earning ISO 27001 certification several years ago, it’s good to see NIST catching up with the technology partners (like us) who adopted ‘govern’ as a central premise to support and protect their customers more effectively.
And the Australian Government obviously agrees. Its current principles of cybersecurity governance are grouped into four key activities:
- Govern: identifying and managing security risks.
- Protect: implementing controls to reduce security risks.
- Detect: detecting and understanding cyber security events to identify cyber security incidents.
- Respond: responding to and recovering from cyber security incidents.
In its discussion paper, “Strengthening Australia’s cybersecurity regulations and incentives,” the government is actively seeking views about how it can incentivise businesses to invest in cybersecurity, including through possible regulatory changes. The first of the proposed new policies up for discussion is governance standards for large businesses. Suggested governance approaches include alignment with international standards and frameworks (like ISO 27001 and NIST).
Governance (and the associated reporting) is clearly a timely new focus for those non-critical infrastructure Australian businesses that haven’t yet fully developed a robust and all-encompassing cybersecurity plan. ASIC has started to actively fine businesses that fail to take remedial action after breaches – and they are unlikely to accept excuses based on size and lack of capability from the SMB sector.
It’s been interesting for us to watch some of our larger customers, who previously aligned themselves with the ASD Essential Eight, now realigning themselves with NIST due to its depth, breadth and maturity. And we expect the addition of the ‘govern’ function to cement that move even more firmly.
Catching the curve ball
While we’d like to say we were ahead of the curve in becoming ISO 27001 certified, the reality is that many technology partners saw the writing on the wall. We could see that “govern” would be recognised as an important function over and above the five technical, control-based standards championed by NIST up until now – and that our commitment to going further should come sooner rather than later.
What Global Storage’s ISO accreditation (and statement of applicability) means for our customers is that we keep the necessary governance records for them. So, if they are audited or even prosecuted, we can prove that the principles and controls of ‘govern’ were fully followed. In effect, they can leverage our external certification against their compliance requirements, making it easier for them to do business with confidence. And in turn, we leverage the certifications of our own ISO-accredited service providers.
While committing to ISO 27001 five years ago was a market differentiator, it’s now a prerequisite for most partners like us. Now, from a sales perspective, it accelerates the conversations and removes roadblocks. Whereas before, our customers had no dedicated security resources, today’s organisations typically have multiple internal staff whose primary responsibility is security. But they are the lucky ones. Given the huge global deficit in cybersecurity resources, even they are fortunate if they can afford to hire and retain the people they need. All of which makes it even more important that a partner can offer the certified support needed.
New framework, new challenges
But going back to a cybersecurity framework that includes ‘govern’: for those already in a regulated industry (for example, health and banking), it shouldn’t pose too much of a problem – they are used to being audited.
In the case of non-regulated and often less mature industries, though, it will pose a challenge despite growing customer demand that they level up. For these organisations in particular, having a service provider that’s already got all those ‘govern’ boxes ready-ticked will alleviate the time, pain, and distraction of completing additional paperwork.
As I’ve said, we’ve made a significant investment in ISO 27001, and that accreditation requires us to achieve and maintain precise standards and undergo a yearly external audit. It’s also shaped the way we run our business. We can’t afford mistakes; we put our reputation on the line daily. These days, saying “oops, sorry, my bad” isn’t good enough for us or our customers (and in our books, it never has been) – meaning we’re very prescriptive about how we run our cybersecurity functions and services.
Feel good about the company you keep
Like practically every company in the world, we’ve had cybercriminals trying to attack us – but every attempt has been detected, contained, and dealt with in keeping with our governance system. We’ve never had a breach.
With NIST soon to be updated and the Australian Government looking likely to enforce governance for all organisations regardless of size, it’s critical these businesses can turn to a trusted service provider who has been there, done that – and actually lives and breathes the concept of “govern”. Only by doing that can they quickly and directly move forward and comply while reducing risk.
Service partners like Global Storage are no longer just the clean-up crew when something goes wrong. We’re not just the people you lean on for (exceptional) backup and recovery as a service and disaster recovery as a service to provide 24/7 protection, but the in-depth reporting needed to keep you compliant, auditable, and accountable for everything cybersecurity.
So, when your performance and strategy are held up against NIST standards, ISO standards, or government governance regulations, you can be confident that you, too, are ahead of the cybercrime curve ball.
When it comes to cybercrime, you are not a unicorn.
At the risk of sounding like a broken record, cybercrime is only getting worse. And no matter how ‘special’ and ‘unique’ you are, you are unlikely to remain unscathed.
Ransomware is now the rule, not the exception
In Veeam’s 2022 Ransomware Trends Report, they summarised the learnings gained by interviewing 1,000 organisations that had all experienced ransomware attacks. So, not those living in fear of an attack, but those who had been through one and come out the other side in varying degrees of health. The researchers talked to security professionals, IT operations, backup administrators, and CISOs (or equivalent IT executives).
Veeam’s ransomware report dovetails with their 2022 Data Protection Trends report, where 76% of the 3,393 organisations surveyed had suffered at least one ransomware attack, and 24% had avoided attack or were totally unaware that they’d been attacked. For the ransomware report mentioned above, the criterion for inclusion was that each organisation must have experienced at least one attack in 2021.
Between these two pieces of research, two important trends were uncovered:
- Cybercriminals were double dipping. To quote Veeam: “Only about one in four (27%) organizations suffered just one attack, presumably with bad actors attempting to return for more ransom.”
- No unicorn is safe. Again, to quote Veeam: “Organizations of all sizes appear relatively equal in the persistence of attacks from small-to-medium-sized businesses (SMBs) (100–249 employees) to large enterprises (>5,000 employees). Said another way, just like any other disaster (fire/flood), ransomware attacks are universally pervasive.”
Veeam also noted that ransomware survey respondents reported that an average of 47% of their data was encrypted by ransomware.
As a result of this research, one of Veeam’s primary conclusions was that “the best way to reduce the risk of a cyberattack like ransomware is to have a comprehensive and tested disaster response plan.”
Move your mouse away from that!
Despite our increased awareness and training, humans remain the greatest point of failure when it comes to inviting cyberattacks into our businesses. Phishing emails, malicious links, and malicious websites are still the most common points of entry for criminals.
One positive observation made by Veeam was that only 1% of their respondents reported they could not identify the entry point. In other words, 99% of the time, the monitoring and investigation tools they used pinpointed their vulnerabilities – human and otherwise – so they could be addressed.
Once a bad actor has gained entry into your environment, Veeam says that 94% of the time, your backup repositories are their primary target. And that 68% of repositories are impacted as a result.
Veeam adds:
“Specific production platform or application types were targeted in 80% of successful ransomware attacks, presumably based on known vulnerabilities within common platform types, such as mainstream hypervisors and operating systems or wide-spread workloads like NAS filers or database servers.”
We get it: Protecting your data isn’t simple
Organisational data is often spread across multiple clouds and systems, as well as geographies and locations, which only adds to the challenge of ensuring your data is not only available and scalable – but also protected.
Faced with today’s cyber challenges (and new threats looming as AI becomes part of the baddies’ arsenal), your ability to be cyber resilient and recover to a business-as-usual state as quickly as possible is more critical than ever. No one can count on being the fairy-tale exception to the rule when it comes to ransomware attacks.
To rehash that well-worn saying: It’s not a matter of if your unicorn breaks its horn, but when.
According to Veeam’s 2023 Data Protection Trends report, “…many legacy IT environments are running legacy backup solutions that were designed for the physical data center era. This specifically hinders an enterprise’s ability to focus on cloud-based SaaS and IaaS, which puts your data at risk of data breach and can lead to unoptimized large-scale data management.”
Interestingly, Veeam reports that 52% of those organisations with encrypted data paid the ransom demand (mainly with the help of their cyber insurance policies) and successfully recovered it. As for the rest? 25% paid up but didn’t recover their data. The remainder undertook remediation to recover their data successfully, but this took an average of 18 days, which is a long time to be out of the business-as-usual loop.
It’s time to join the rest of the herd
While cybercrime is pervasive and seemingly unavoidable, it doesn’t absolve your business from taking its share of responsibility from a legal, commercial, and ethical standpoint.
It’s hard (and for some, impossible) to recover from a massive fine, the sense of betrayal experienced by your customers when their data is sold off to the highest bidder, or the paralysis of employees unable to work as every line-of-business application freezes – for days, weeks, and even months.
And yet, knowing this, only one out of every six organisations tests whether their backup solutions work by restoring and verifying their data. So, when a ransomware attack strikes, most businesses are still winging it with backups they’ve never proven will work.
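For the avoidance of doubt, ‘restoring and verifying’ means regularly restoring a backup to an isolated location and confirming that what comes back matches what you protected. Here is a minimal, hypothetical sketch of that check; the paths are illustrative, and enterprise backup tools automate this far more thoroughly:

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(source_dir: str, restored_dir: str) -> list[str]:
    """Compare every file in the protected source against its restored copy.
    Returns a list of files that are missing or whose contents differ."""
    problems = []
    source, restored = Path(source_dir), Path(restored_dir)
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        restored_file = restored / src_file.relative_to(source)
        if not restored_file.exists():
            problems.append(f"missing: {restored_file}")
        elif sha256(src_file) != sha256(restored_file):
            problems.append(f"mismatch: {restored_file}")
    return problems

if __name__ == "__main__":
    # Illustrative paths: production data vs a test restore into a sandbox.
    issues = verify_restore("/data/production", "/restore-test/production")
    print("Restore verified" if not issues else "\n".join(issues))
```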
Unicorn or not, the only certainty in life for today’s businesses is the importance of weathering that inevitable cyber storm. And that includes ensuring you have:
- Reliable, innovative, industrial-strength cybersecurity solutions
- A well-understood, committed and tested cyber resiliency strategy
Feel free to talk to us if you’re unsure about either. We’ll even throw in some love and rainbows.
Written in partnership with Veeam.
Zero trust given. And why that’s a good thing for hybrid cloud environments.
While it makes perfect sense to push your workloads to the public cloud, especially if they can be moved into SaaS environments, this doesn’t work for all legacy workloads. This is why we continue to see – and advocate – hybrid cloud environments.
For many organisations, juggling workloads is not a matter of taking a cloud-first approach but of opting for cloud-fit instead. This involves finding the ideal cloud environment for each workload – one that’s cost-effective and ticks all the security boxes.
But this is when it gets tricky. If you’re taking a cloud-fit approach, how do you ensure cyber resiliency across all your platforms? And what happens when your data is moving between those platforms?
Data breach statistics aren’t getting any prettier, with a 26% increase in notifiable data breaches reported to the OAIC in the latter half of 2022. Which is where zero trust comes to the fore.
But first, let’s back up a bit – what is zero trust, why is it the hot new approach, and how do you get some?
Trust no one, question everything
Two of the best cybersecurity rules to live by are: 1. Trust no one. 2. Question everything. And those rules, in a nutshell, are the key to zero trust.
Zero trust takes distrusting and questioning your users to a whole new level – but this is a good thing. Regardless of whether they’re inside or outside of your network, users are subjected to authentication, authorisation, and continuous validation of security configuration and posture. Only when they pass these conditions with flying colours are they a) granted access or b) allowed continued access to your applications and precious data.
Importantly to those who have gone the cloud-fit route, zero trust assumes that there is no traditional network edge. So, networks can be local, in the cloud, or a combination or hybrid with resources anywhere, as well as users in any location. Regarded as ‘perimeterless security’ (just think of networks without borders!), the zero-trust security model is also known as zero trust architecture (ZTA), zero trust network architecture or zero trust network access (ZTNA).
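In practical terms, that continuous validation means every single request is evaluated against preset policy – identity, device posture, and the resource being asked for – before access is granted, wherever the user or workload happens to sit. Here is a minimal, hypothetical sketch of that decision logic; real ZTNA products evaluate far richer signals than this:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool      # e.g. a valid, MFA-backed session token
    device_compliant: bool        # e.g. disk encrypted, endpoint agent healthy
    user_role: str
    resource: str

# Illustrative preset policy: which roles may reach which resources.
POLICY = {
    "finance-db": {"finance", "audit"},
    "hr-records": {"hr"},
}

def authorise(req: AccessRequest) -> bool:
    """Evaluate a single request against zero-trust rules.
    Nothing is trusted by default; every condition must pass, every time."""
    if not req.user_authenticated:
        return False                        # never assume identity
    if not req.device_compliant:
        return False                        # never assume a healthy device
    allowed_roles = POLICY.get(req.resource, set())
    return req.user_role in allowed_roles   # least privilege, per resource

if __name__ == "__main__":
    request = AccessRequest(True, True, "finance", "finance-db")
    print("granted" if authorise(request) else "denied")
```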
And while it’s so hot right now, zero-trust isn’t actually new. (You might like to check out this excellent article on the history of zero-trust here on TechTarget.) However, it is the way to go.
In a 2022 Forrester Opportunity Snapshot, the renowned researcher reports that 83% of Australian and New Zealand firms say zero trust is the future of their organisation’s security. And in tech news publisher VentureBeat’s article on zero-trust trends for 2022, they include zero-trust becoming the foundation of more hybrid cloud integrations as one of the big four trends to watch out for.
So, how and where do you get started?
It’s all about leadership
It’s important to remember that zero trust is a philosophy, not a product. And like most philosophies, it can take some effort to get everyone on the same page.
To quote John Engates, Field CTO for Cloudflare:
“To get zero trust across the finish line, some companies may appoint a zero trust officer. Showing leadership, demonstrating how important it is to the organisation, putting someone in charge of getting to a zero trust stance is really critical. No matter how you demonstrate that to your stakeholders, it’s really critical for someone to stand up and say, ‘We’ve got to do better at this; we have to do it comprehensively across the entire organisation. And we have to do it soon because the threats aren’t getting easier to deal with.’”
In their Opportunity Snapshot, Forrester agrees, saying it’s critical to “be a leader and communicator, not a technician.” They report that 48% of zero trust leaders in Australia and New Zealand said “their stakeholders struggled to understand the business value of adopting a Zero Trust approach. Only 41% listened and understood stakeholders’ criticism or feedback, then worked through their issues with the Zero Trust team, and returned with a solution.” Forrester concludes that this poses a challenge as zero trust leaders thought the most important trait in their role was to be technical (52%), compared to being communicative (13%).
Despite the challenges, Forrester says that these same zero trust firms reported a more empowered employee experience, with 74% reporting more flexibility to work from anywhere or on any network, 61% relieved of the burden of security responsibility through password-free authentication, and 27% enjoying increased choice to work with any device or program.
So, where to start?
Engates from Cloudflare is a fan of making the zero trust goal manageable by attacking it in bite-sized chunks. He says that the important thing is to “get started and get moving.” And we agree.
To help you address the challenges created by the shift to cloud hosting, remote work, and other modernisation, Zerotrustroadmap.org provides an excellent step-by-step vendor-agnostic roadmap, complete with an implementation timeline.
Or you’re welcome to just talk to us.
In partnership with Cloudflare, a global leader in zero trust services.