Breaking the bank: How email scams target financial institutions

Picture the scene. You’re running a financial organisation that is, for all intents and purposes, the digital equivalent of a Fort Knox. Your cyber defences are formidable, your staff well-trained, and your compliance paperwork, well, compliant.

Then you get hit by a simple-looking email that results in millions of dollars going walkabout, sensitive data leaked, and a regulatory finger-wagging of epic proportions.

No ransomware. Just one clever piece of business email compromise (BEC).

BEC attacks: What you need to know

BEC is essentially a simple but devastating deception. A threat actor leverages email to trick targets into transferring funds, sensitive data, or both.

As highlighted in the Arctic Wolf 2025 Threat Report, the current state of BEC is a concern because:

  • BEC makes up 27% of all incident response cases, making it the second-biggest source of security incidents.
  • Human error is behind 99.2% of BEC root causes, with phishing accounting for 73.5%.
  • The finance and insurance industry makes up 26.5% of all BEC cases—nearly double the next highest sector—and it’s the only industry where BEC has surpassed ransomware.

Why are BEC numbers so high? Simply put, because it works. And the strategy behind it is straightforward: using BEC, attackers target organisations that handle large sums of money and rely on email communication.

Common BEC tactics include:

  • Phishing emails that steal your login details.
  • Pretending to be a trusted contact, like your CEO or a supplier (or both).
  • Taking advantage of already hacked accounts.
  • Using AI-generated traps that look just like real messages.

The days of spotting a phishing email by its poor spelling are over. Today’s attackers craft messages with more precision than many corporate communications teams.

The critical change in status for financial institutions

Banks, super funds, insurers, and credit unions have been officially classified as ‘critical infrastructure’ under the Security Legislation Amendment (Critical Infrastructure Protection) Act 2022 (the SLACIP Act). This emphasises just how important it is to keep this industry secure and protected.

But why is the financial services industry such a popular BEC victim?

  1. Big money, big target: Financial institutions deal with large sums of money daily, making them an attractive target for attackers looking for big payouts.
  2. Access to sensitive data: These institutions store confidential information about wealthy individuals and businesses, which attackers can use for more fraud.
  3. Relying on trust: Employees often need to quickly process wire transfers and invoices to keep customers happy, which creates opportunities for social engineering attacks.
  4. Complex vendor networks: Financial firms depend on various third-party vendors, increasing the risk of compromised communications and fake payment requests.
  5. Advanced impersonation tactics: Attackers use tricks like spoofing and malware to pose as trusted contacts, slipping past basic cybersecurity measures.

Growing regulatory pressure and rising fines

Adding fuel to the fire is evidence that legal enforcement is no longer an empty threat. In fact, the Australian Securities and Investments Commission (ASIC) is clearly cracking down.

Recent lawsuits, like the one against FIIG Securities for “systemic and prolonged cybersecurity failures”, are a clear warning.

ASIC’s enforcement action, only the second of its kind, flags inadequate cybersecurity as grounds for civil penalties, highlighting the fact that financial institutions can no longer afford to drag their feet when it comes to following regulations.

Guidelines for taking back control

To tackle cybersecurity challenges in the industry, most financial organisations are required to follow the Australian Prudential Regulation Authority (APRA) CPS 234 Information Security Standard.

The CPS 234 standard is, in many ways, the playbook for safeguarding information and operational resilience. Finserv companies are required to:

  • Set clear roles: The board plays a big role in information security, setting expectations and ensuring proper oversight.
  • Stay cyber-strong: Companies need solid security systems that match their size and risks, regularly assessing potential threats and keeping them under control.
  • Monitor vendors: Keep a close eye on third-party providers, checking their security regularly and managing any risks they pose.
  • Implement solid policies: Have a clear set of cybersecurity rules everyone knows and follows, from employees to vendors.
  • Organise information: Rank data based on how critical or sensitive it is. Know what’s at stake if something goes wrong.
  • Test often: Regularly check if security systems are working, and fix any issues fast.
  • Report issues: Serious security problems? APRA wants to know within set timeframes.

7 ways to make a stand against BEC

If you now feel like you’re sitting under the Sword of Damocles, you’re not wrong. But all is not lost.

Here are some direct, tangible steps that teams at every level can take to cut down on risk:

  1. Ongoing phishing training. Teach staff to question every out-of-character email—even if it appears to be from their own CEO.
  2. Strong access controls. Implement biometric or possession-based multi-factor authentication.
  3. Credential management. Monitor for compromised credentials appearing on the dark web – see the sketch after this list for one simple, automatable check.
  4. Asset inventory. Know your systems, regularly update them, and close any open doors.
  5. Continuous monitoring and logging. Trust, but verify. Actually, don’t trust; just verify.
  6. Third-party risk management. Vendors’ security policies should be as solid as your own, or you could risk exposure despite your best intentions.
  7. Schedule and action regular security audits and penetration tests. If you can’t remember your last test, assume it never happened.
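
One way to approach the credential-management item above is to routinely check whether passwords in use have already appeared in public breach corpora. The sketch below is illustrative only and assumes Python with the requests library; it calls the public Have I Been Pwned ‘Pwned Passwords’ range API, which uses k-anonymity so only the first five characters of a SHA-1 hash ever leave your network. It checks passwords rather than full credential pairs, so treat it as one small piece of a broader credential-management programme, not a complete control.

    import hashlib
    import requests

    def pwned_count(password: str) -> int:
        """Return how many times a password appears in the public Pwned Passwords
        corpus. Uses the k-anonymity range API: only the first five characters
        of the SHA-1 hash are ever sent over the network."""
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
        resp.raise_for_status()
        for line in resp.text.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    if __name__ == "__main__":
        # A notoriously common password should return a very large count.
        print(pwned_count("P@ssw0rd"))

If the count is non-zero, the password has appeared in a breach and should be rotated; commercial dark-web monitoring services extend the same idea to usernames, email addresses, and full credential pairs.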

Is your business continuity and cybersecurity planning up to scratch?

BEC is here to stay, skilfully exploiting simple human error and complex organisational blind spots alike. The era of blaming bad luck or rogue emails is behind us. Regulators are watching, fines are increasing, and customers are ready to walk away at the first sign of weakness.

No one expects financial institutions to become Fort Knox overnight. But with the right mix of continuous education, robust controls, and a healthy dose of cyber-paranoia, you can be the hardest target on the block.

Because hope is not a strategy, and ignorance is expensive.

Why Data Governance Risk & Compliance is a risky business. And what you can do about it.

While Data GRCaaS (Data Governance Risk and Compliance as a Service) may sound like yet another incomprehensible IT acronym to many, it’s likely to greatly interest those in your business responsible for managing risk. They know exactly how important GRC is to the well-being of your business and its future.

Non-compliance with data protection regulations is both risky and expensive. It can cost dearly in terms of reputation and financial penalties that few can recover from. One such example is the Medibank data breach of 2022. The Office of the Australian Information Commissioner (OAIC) has started court proceedings against Medibank for failing to take reasonable steps to protect its customers’ personal information from misuse and unauthorised access or disclosure, in breach of the Privacy Act 1988. If the prosecution is successful, the maximum civil penalty order theoretically available under the Privacy Act in this case is an unimaginable AU$21.5 trillion. It’s unlikely that a fine of this magnitude will be awarded, but it signals how seriously the OAIC takes this case – and the importance of GRC.

But let’s back up a bit first and define Governance Risk and Compliance (GRC) and why it’s relevant to your data.

A (very quick) guide to GRC

Governance, Risk, and Compliance (GRC) is a structured approach used by organisations to align their IT and business goals while managing any risks. It helps to ensure compliance with regulations and maintain effective governance practices.

In plain language:

  • The Governance part refers to the framework of rules, processes, and practices your organisation follows. It encompasses establishing policies, taking accountability for meeting those policies, and overseeing your business performance.
  • The Risk aspect focuses on identifying, assessing, and managing risks that could impact your ability to achieve your objectives. It includes risk management strategies and practices to mitigate potential threats (for example, cyber threats).
  • And the Compliance part is the process of making sure you follow the letter of the law and adhere to both external regulations and your own internal policies. This includes monitoring and reporting on any compliance-related issues and ensuring your business meets its legal and ethical standards.

So, what’s Data GRC?

In adding Data to the GRC mix, the focus moves quite specifically to the areas of risk associated with data, like uncontrolled or illegal data access, exposure to data breaches, cyberattacks, and insider threats.

Safeguarding your data (and handling the ‘what comes next’) presents a unique set of challenges and adds still more layers of complexity to your GRC initiatives. Given the dynamic nature of cybercrime and the increasingly heavy fines for those who fail to protect their data, it’s a genuine worry for most businesses. And managing it internally, without expert and dedicated resources who have the time and knowledge to monitor, manage and protect your data 24/7, comes with its risks.

What are some of the everyday data risks we’re talking about?

  1. The effort of keeping up and responding to ever-evolving legal, industry, and internal requirements regarding how you protect your data, what you must do in case of a breach, and by when.
  2. Being blindsided by an incomplete view of your data.
  3. Slow responses when speed matters most, thanks to manual remediation processes for mitigating risks.
  4. The struggle to implement and maintain a zero-trust approach that strengthens your security posture and compliance initiatives.
  5. Without an audit trail, you have no idea who has accessed, deleted, created, or moved your data.
  6. The inability to identify, prioritise, and address data security needs in real time (before it’s too late).

What is Data GRCaaS?

Data GRCaaS uses a service-based modular strategy designed to help you safeguard your data and ensure it is managed according to an agreed data compliance framework. And because the service is cloud-based and therefore scalable (and supported by industry-leading best practices and committed resources), it replaces the costly in-house infrastructure and experts you’d need to do the same job. It works across your entire environment – on-prem, cloud, or hybrid.

In real-world terms, what’s in it for you? How will it improve your GRC? Let’s take a look.

How does Data GRCaaS deliver on your compliance wish list?

Regulatory compliance is at the top of the GRC list. The good news is that if you need to comply with and report on data standards like SOCI, ACSC ISM, GDPR, PCI, HIPAA, HITECH, SEC, SOX, CJIS, CMMC, or PIPEDA in addition to your internal policies, you’re covered. With Data GRCaaS, you can’t slip up.

Data GRCaaS allows you to get to grips with your data. You’ll be able to discover, identify, classify, and label your sensitive data at scale in preparation for implementing DLP (data loss prevention). And this is a very good thing; DLP solutions help you protect your critical information, whether stored on endpoints, in the cloud, or in transit. Deep integration with Microsoft means it will also identify and categorise sensitive information in your emails, Teams, and SharePoint and pick up any unauthorised data exposure or behaviours. You’ll also save money with your newfound ability to identify stale data and decide if it can be archived or deleted – driving down your data storage costs.
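
To make the ‘discover, identify, classify, and label’ step above a little more concrete, here is a deliberately simplified sketch of the idea – not how Data GRCaaS, Microsoft Purview, or any commercial DLP engine actually implements classification. It assumes Python, a folder of plain-text files, and two illustrative detectors (email addresses, and card-number-like digit runs validated with a Luhn checksum); real classifiers use far richer detection, labelling, and remediation.

    import re
    from pathlib import Path

    # Illustrative patterns only; real DLP engines use many more detectors,
    # proximity keywords, trainable classifiers, and sensitivity labels.
    PATTERNS = {
        "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def luhn_ok(candidate: str) -> bool:
        """Luhn checksum to weed out random digit runs that aren't card numbers."""
        digits = [int(d) for d in re.sub(r"\D", "", candidate)][::-1]
        total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                    for i, d in enumerate(digits))
        return len(digits) >= 13 and total % 10 == 0

    def classify(root: str) -> dict:
        """Walk a folder tree and report which files contain which patterns."""
        findings = {}
        for path in Path(root).rglob("*.txt"):
            text = path.read_text(errors="ignore")
            hits = []
            for name, pattern in PATTERNS.items():
                matches = pattern.findall(text)
                if name == "card_number":
                    matches = [m for m in matches if luhn_ok(m)]
                if matches:
                    hits.append(name)
            if hits:
                findings[str(path)] = hits
        return findings

    if __name__ == "__main__":
        # Hypothetical path; point it at a test folder, never production data.
        for file_path, labels in classify("./sample_share").items():
            print(file_path, "->", ", ".join(labels))

Once files are labelled like this, the downstream activities described above – DLP policy enforcement, archiving or deleting stale data, and permission reviews – have something concrete to act on.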

You’ll also improve your security posture. Data GRCaaS will help you mitigate the risk of data breaches, cyberattacks, and insider threats by adopting a zero-trust or least-privilege approach. Other compliance improvements include managing your permissions and understanding who is accessing, deleting, creating, and moving data – so you have control and visibility.

Peace of mind (and this can’t be overstated for those responsible for your GRC). Beyond reducing the likelihood of data breaches, cyberattacks, and insider threats, a Data GRCaaS solution will identify and action file-level security breaches as they happen – including insider threats, malware, and ransomware.

Lastly, your back is covered 24/7. Data GRCaaS is supported by real people who continuously oversee the management, reporting, and remediation of your data security, governance, and compliance risks – day in and day out.

What next?

With Data GRCaaS, you’ll be able to understand and remediate industry-relevant data risks by type, sensitivity, regulation, risk, policy, and more. And we guarantee that’s going to make a lot of people happy – and better able to sleep at night.

Beyond backup: The compelling case for data resilience

Thinking that simply backing up your data will save the day is a shortsighted strategy with little or no place in today’s world. Because when it happens – that inevitable cyberattack or natural disaster – you’ll find that just having a copy of your data is far from enough.

And if you have a hybrid cloud environment, with data sprawled across myriad locations and platforms, then you assuredly need more than just backups to save your bacon.

If you haven’t yet developed a data resilience strategy, there’s no time to waste. The latest Notifiable Data Breaches Report from the Office of the Australian Information Commissioner revealed a rapid rise nationwide in notifiable data breaches in the first six months of 2024.

At the risk of sounding like a broken record, we once again say: It’s no longer a matter of if (you’re attacked), but when.

Backup vs. data resiliency

Just so there’s no confusion:

Should you be creating backups? Obviously – that’s a yes. Backing up your data is essential for data recovery – but it’s a reactive approach, a pink band-aid applied after the accident in the hope that it will hasten recovery. Yes, backups restore your lost data. But they won’t prevent you from losing it in the first place, and the post-disaster backup process can lead to significant downtime, as your systems may need to be taken offline to restore data.

By comparison, data resilience is a proactive approach. It focuses on preventing data loss and ensuring continuous availability. So, when disaster strikes (as it will), your business can keep running, downtime is minimised, and data integrity is maintained.

In short, if you’re not thinking about data resilience, you’re not thinking far enough ahead.

What does disaster look like?

What happens to your business when you experience a natural disaster or cyber-attack? Why can this sort of event stop your people and operations in their tracks?

Here’s what can happen:

  1. Operational systems out of commission: Your core business applications and systems may become inaccessible, halting production, sales, or service delivery. Everything you rely on to run a business is in ‘off’ mode.  
  2. Employee productivity plummets: Your staff may be unable to perform their tasks effectively, leading to decreased productivity, frustration, fear, and low morale.
  3. No access to data: Being unable to access essential data, including customer information, financial records, and operational data, can severely impact your decision-making and operations.
  4. You can’t communicate: Your communication tools (think email, messaging platforms, etc.) can be compromised. Your team members can’t talk to each other, let alone to your customers and suppliers.
  5. Disrupted financial transactions: Your payment processing systems may be disrupted, preventing sales and impacting your cash flow.
  6. Zero customer service: If your customer support systems go down, it’s a red flag for your customer relationships. Few customers are impressed with delayed responses to their queries and requests for help and are fast to change loyalties.
  7. You can quickly get a bad rep: Trust can be rapidly eroded if customers learn of the breach, leading to potential loss of business and reputation damage.
  8. Failed regulatory compliance: Your compliance with data protection laws may be at risk, resulting in legal consequences and significant fines.
  9. Disrupted supply chains: If your suppliers or partners are affected, it may disrupt your supply chain, impacting inventory and delivery.
  10. The cost of recovery: Then, there’s the financial burden of remediation efforts, including IT forensics, system repairs, and potential legal fees. All of which can place a heavy strain on your people and your bank balance.

Given the potential impact on your business, relying on backups to dig you out of the deep hole of disaster is highly optimistic.

Data resilience – a holistic approach

Data resilience is about ensuring business continuity. It’s accepting that the impact of an attack can be wide and varied and that just restoring data via back-ups isn’t going to be enough to get you back in business.

Don’t get us wrong – backups are essential (and play an important role in a data resilience approach) – but they’re only part of the picture. Big-picture data resilience also encompasses recovery, redundancy, disaster recovery (DR) planning and cybersecurity. And it requires you to implement measures that ensure data availability, integrity, and security even in the face of unexpected events to minimise data loss and maintain business continuity.

Adopting a data resilience strategy can help your business before, during, and after an incident in three ways.

  1. It enables you to better withstand a cyber-attack.
  2. If you’re already impacted, it helps you to access your most important data and applications despite network disruptions or failures.
  3. It supports your rapid recovery and return to BAU.

How about data resiliency in a hybrid or multi-cloud environment?

Security and recovery are not assured simply because you’re in the cloud – whether public or private. And scarily, backup repositories are targeted in 96% of attacks, with bad actors ‘successfully’ affecting those repositories in 76% of cases.

If you count yourselves amongst the 89% of organisations with a multi-cloud strategy, you’re probably well aware of the challenges of backing up in the cloud. Legacy systems don’t deliver; relying on native backup tooling for each environment fragments management and creates inefficiencies and higher costs; and some first-party vendor solutions restrict flexibility and compromise performance, which drives up costs.

However, as said earlier, just investing in backup (no matter how good) on its own is a shortsighted strategy. Achieving data resilience requires your backup and cybersecurity teams to be aligned. To quote Veeam’s 2024 Ransomware Trends Report, “Recovery from a ransomware attack is a team sport.”

Yet most organisations struggle with this alignment, with 63% saying they need a complete overhaul or significant improvement to be fully aligned.

When asked why their teams weren’t better aligned, the most common answer (by respondents to Veeam’s report) was “a lack of integration between backup tools and cybersecurity tools.”

Summary

It’s been said that backup is easy, but recovery is hard – especially if you’re relying on your saved data to do more than it was ever intended. And with the rate at which we generate data and the increasing complexity of our technology environments, ‘hard’ isn’t a word that any of us want to hear.

A data resilience strategy that utilises integrated backup and cybersecurity tools is essential to survive D-day.

Whether it’s your first, tenth, or hundredth attack, you need to be able to face every event with the confidence that you will come out the other side with your data and business intact. Resilient to the end.

Chicken or egg: Cyber resistance vs cyber resilience

In a digital world where data is the new ‘everything’, it’s unsurprising that it has become a prime target for criminals. Data is the modern-day equivalent of a stash of gold bullion – and it can be stolen, ransomed, and sold for profit with less effort and risk than a bank heist.

The unrelenting waves of global cyberattacks mean that the cost of business survival is escalating – with the cost of cyberattacks doubling between 2022 and 2023. To combat this, Infosecurity Magazine reports that 69% of IT leaders saw or expected cybersecurity budget increases of between 10 and 100% in 2024.

The cost of crime

At the pointy end of the problem, organisations face damaged or destroyed data, plundered bank accounts, financial fraud, lost productivity, purloined intellectual property, the theft of personal and financial data, and more.

The blunt end is no less damaging. There’s the cost of recovering data, rebuilding your reputation, and getting your business back to a state of BAU as soon as possible, as well as the hefty price tag that comes with forensic investigation, restoring and deleting hacked data and systems, and even prosecution.

Generative AI to the cyber-rescue?

Many see the rise of generative AI and expansion into hybrid and multi-cloud environments as the means to alleviate the ongoing attacks. But, of course, the democratisation of generative AI (in other words, goodies and baddies have equal access to its powers) means that potential risks are also heightened.

Despite this, it’s hard to overcome the optimism that generative AI will be a cyber-saviour. According to the Dell Technologies 2024 Global Data Protection Index (APJ Cyber Resiliency Multicloud Edition), 46% of respondents believe that generative AI can initially provide an advantage to their cybersecurity posture, and 42% are investing accordingly.

But here’s the rub: 85% agree that generative AI will create large volumes of new data that will need to be protected and secured. So generative AI will, by default, (A) potentially offer better protection and (B) increase the available attack surface due to data sprawl and unstructured data.

Resistance vs resilience

Of the APJ organisations (excluding China) that Dell surveyed, 57% say they’ve experienced a cyberattack or cyber-related incident in the last 12 months.

And a good 76% have expressed concern that their current data protection measures are unable to cope with malware and ransomware threats. 66% say they’re not even confident that they can recover all their business-critical data in the event of a destructive cyber-attack.

So why, if 66% of organisations doubt their ability to recover their data, are 54% investing more in cyber prevention than recovery?

Can you separate the cyber chicken from the egg?

In a recent cybersecurity stats round-up, Forbes Advisor reported that in 2023, there were 2,365 cyberattacks impacting 343 million victims.

Given the inevitability of cyberattack, it’s critical that your methods of resistance are robust, and if disaster strikes, your ability to recover is infallible.

Look at it this way: While a cruise liner obviously must have radar to detect and try to avoid approaching icebergs, angry orcas, and other collision-prone objects, it’s just as important that it has lifeboats, lifeboat drills, lifejackets, and locator devices available to minimise loss of life and keep everyone afloat.

In the words of Harvard Business Review: “Simply being security-conscious is no longer enough, nor is having a prevention-only strategy. Companies must become cyber-resilient—capable of surviving attacks, maintaining operations, and embracing new technologies in the face of evolving threats.”

So, how do you bolster your cyber resilience?

According to Dell, 50% of the organisations they surveyed have brought in outside support (including cyber recovery services) to enhance cyber resilience.

While AI will undoubtedly introduce some initial advantages, as suggested earlier, those could be quickly offset as cybercriminals leverage the very same tools. Not only are traditional system and software vulnerabilities under attack, but due to the sprawl of AI-generated data, there are more and newer opportunities.

So – can we rely on generative AI to save the day? Probably not – or not yet anyway. What about outside help? Yes, most definitely. However, cyber resilience begins at home, with a top-down strategy based on some inarguable facts:  

  1. Attacks are inevitable. Once you accept that this is the new reality of the digital age, the logical next step is to develop a clear, holistic strategy focusing on business continuity and crisis planning.
  2. People are the first and best line of defence. Ensure your entire organisation takes responsibility and is cyber-aware – to the extent that your procedures are included in your company policies and onboarding processes.  This should include delivering ongoing cyber awareness training and introducing regular drills.
  3. When disaster strikes, survival is in your hands. Establish clear cybersecurity governance that aligns with your business objectives. Everyone in the organisation should know what they need to do to protect the organisation, its data, and its clients and ensure continuity of operations.  
  4. No one is trustworthy. Assume everything around your network is a potential threat. Adopt a zero-trust mindset that requires continual verification and rigidly controls access based on preset policies (a minimal sketch of the idea follows this list).
  5. What you don’t know can hurt you. The ability to detect and prevent threats is critical. Invest in Security as a Service to provide visibility into your data, regardless of where it’s located, so that you can see and address your weaknesses.
  6. Disaster will strike. We live in unexpected times, where cybercrime and unprecedented natural disasters conspire to stop us in our tracks. With cloud-based Disaster Recovery as a Service, the risk of permanently losing data and disrupting business as usual is significantly reduced.
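
To illustrate point 4: the essence of a zero-trust check is that every request is verified against preset policy and denied unless a rule explicitly allows it. The sketch below is a minimal, hypothetical Python illustration of that default-deny idea only – the roles, resources, and rules are made up, and real zero-trust enforcement lives in your identity provider, network, and application layers rather than in a single script.

    from dataclasses import dataclass

    # Hypothetical preset policies: every request is checked against these rules
    # and denied unless a rule explicitly allows it (default deny).
    POLICIES = [
        {"role": "finance-officer", "resource": "payments-api", "action": "write", "mfa_required": True},
        {"role": "analyst", "resource": "reports", "action": "read", "mfa_required": False},
    ]

    @dataclass
    class Request:
        role: str
        resource: str
        action: str
        mfa_passed: bool
        device_trusted: bool

    def allow(request: Request) -> bool:
        """Zero-trust style decision: verify every request against preset policy,
        never rely on network location, and default to deny."""
        if not request.device_trusted:
            return False
        for rule in POLICIES:
            if (rule["role"], rule["resource"], rule["action"]) == (
                    request.role, request.resource, request.action):
                return request.mfa_passed or not rule["mfa_required"]
        return False  # no matching rule: deny by default

    if __name__ == "__main__":
        print(allow(Request("analyst", "payments-api", "write", True, True)))          # False: no rule
        print(allow(Request("finance-officer", "payments-api", "write", True, True)))  # True

The point of the sketch is the shape of the decision: no implicit trust based on network location, and no access without a matching policy.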

Do you have a data rubbish dump, or a treasure trove?

Back in 2018, IDC predicted that by 2025, the Global Datasphere would have grown from 33 zettabytes to 175 zettabytes. Arcserve predicted 200 zettabytes, and Statista 180 zettabytes. Now, with 328.77 million terabytes of data being created daily in 2024, Statista’s prediction looks to be on the money.

While that’s all very impressive, what’s probably of more interest to most of us is what form that data will take. Why? Because data falls into two camps: structured data, which is immediately useful, and unstructured data, which – due to its raw, unprocessed, and often chaotic nature – is challenging to utilise.

According to IDC, 90% of business data is unstructured – and consists of customer contracts, employee handbooks, product specs, video, imagery, IoT sensor data – and more. Only 46% of companies report that they analyse their unstructured data to extract value from it – and less than half of it at that.

The problem with unstructured data

Structured data, with its standardised format, is a low-hanging fruit, ripe for transformation into business insights. So, its value is readily appreciated.

Whereas, by its very nature, unstructured data isn’t easy to search or sort – and, more often than not, is sprawled across an organisation in siloes. But given the wealth (and breadth) of information it represents, it’s also immensely valuable – just harder to access.

Stockpiling unstructured data, with its sensitive customer, company, and employee information, has inherent dangers, starting with security. In its report on unstructured data, CIO Dive says, ‘idle data’ brings higher costs: “Costs associated with security breaches double for companies with more unstructured data.”

Then, there is the cost of storing unstructured data. Logic dictates that as storage costs grow, so must budgets for storage and management. With 38% of businesses already saying that data costs are too high or unpredictable, allocating a hard-won budget to data you will potentially never use can be a bitter pill.

Data sprawl is also the enemy of efficiency as employees manually input data from multiple, decentralised sources. Time and time again.

Given all the challenges, it’s no surprise that 88% of organisations agree that data sprawl makes data management hard and complicates implementing an end-to-end data strategy.

So, why hang on to it?

Despite the cost, few are willing to discard that precious (if underutilised) unstructured data – with most organisations saying that if the cost weren’t a factor, they’d like to keep their data longer.

But why?

  • Never say no to an opportunity. The data you never got around to analysing could turn out to be exactly what would have given you that edge you so desperately wanted – or significantly improved the customer experience and, as a result, cemented lifetime loyalty. No one wants to be the person responsible for deciding to bin it.
  • Compliance caution. Unstructured data poses a massive compliance problem for many. Perhaps a big part of the problem is that 96% of organisations with mostly siloed unstructured data don’t know what information lies hidden in that sprawl (whereas of those who have centralised their unstructured data, 98% know exactly what lies beneath the chaos). The crux of it is that if you don’t know what’s in your unstructured data or where it is, you can’t be sure you’re effectively complying with the regulatory standards that govern your business. So, unless you have centralised your unstructured data and got to grips with what you have, it’s safer to hang on to all your data.
  • AI (artificial intelligence). Harnessing the power of AI is an opportunity that forward-thinking organisations ignore at their peril. However, if you’re already a convert, you need to get your unstructured data firmly under control with a centralised approach to content. As observed by IDC, while 84% of businesses are already using or exploring AI, “given that LLMs (large language models) are trained on unstructured data, IT leaders can only leverage the power of AI once they have a strategy to manage and secure their data on a single platform.”

Who in your organisation ‘owns’ all this unstructured data anyway?

Answer: Your CDO (chief data officer), aka chief data and analytics officer or just chief analytics officer. Whereas your CIO is typically more focused on technology, your CDO is charged with developing and implementing your data strategy.

While the CDO is an evolving and relatively new C-suite role, its mantra is ‘data-driven success.’ Part of the CDO’s role, says CIO.com, is to “break down silos and change the practice of data hoarding in individual company units.”

With IDC reporting that companies that used their unstructured data in the past 12 months experienced “improved customer satisfaction, data governance and regulatory compliance, among other positive outcomes,” the CDO role is a big step in the right direction.  

With most (93%) CDOs agreeing that AI success is a high priority, it’s no surprise that adding analytics and AI to their portfolio is regarded as a key step to success. As is driving value by transforming and curating data (both structured and unstructured), to make it easier to succeed with generative AI.

And as LLMs are powered by unstructured data, it’s clear that one person’s data rubbish dump is a CDO’s carefully curated treasure trove.

Contact us to have an obligation-free chat about our data management services.

Cloud and GenAI. It had to happen.

Whiskers and kittens? Fish and chips? Ben and Jerry? Cloud and GenAI are set to become an inevitable pairing – and one you need to prepare for.

More cloud, more smarts

In its 2023 CIO and Technology Executive Survey, Gartner reports that over 62% of Australian CIOs expected to spend more on the cloud this year – but are they architecting their cloud platforms to prepare for GenAI?

“Local CIOs have told us the top two technologies they plan on investing in next year are SASE (secure access service edge) to simplify the delivery of critical network and security services via the cloud, and generative AI for its potential to improve innovation and efficiencies across the organization,” says Rowsell-Jones, Distinguished VP Analyst at Gartner.

According to Gartner, the investment in GenAI will continue to increase alongside the continued shift to digital in Australia over 2024. And Gartner anticipates that enterprises will primarily look to incorporate GenAI through their existing spend in the long term – via the software, hardware, and services already in use.

How will GenAI be served up to users?

GenAI thrives on data and compute power – and the more, the better. So, cloud is an obvious vehicle. However, training AI models, such as the LLM (large language model) that powers ChatGPT, requires access to massive amounts of data and vast amounts of compute power. And that poses a problem for organisations that are keen to drive value from GenAI but lack the computing resources to leverage the amazing but power-hungry technology.

This is where the first of Forbes’ 10 predictions for computing trends in 2024 comes in: Get ready for AI-as-a-Service.

Just when we needed yet another technology acronym, AIaaS pops into frame. It’s all good, though: By accessing AI-as-a-service through cloud platforms, even those lacking the necessary cloud infrastructure and compute power can leverage AI’s powerful, transformative technology.

While AIaaS is exciting, the subject of cloud cybersecurity and GenAI is more sobering. Forbes warns that “encryption, authentication and disaster recovery are three functions of cloud computing services that will be increasingly in demand as we face up to the evolving threat landscape of 2024.” With data thefts and breaches increasing in frequency and severity as hackers use AI to develop new forms of attack, all systems accessible to humans will be at risk from social engineering attacks. Leaving security and resilience high on the agenda of all cloud providers and customers.

Which brings us to governance and readiness.

Governance and GenAI

In Google Cloud’s must-do guide for GenAI governance, Phil Moyer, its global vice-president for AI and Business Solutions, observed, “Today’s leaders are eager to adopt generative AI technologies and tools. Yet the next question after what to do with it remains, ‘How do you ensure risk management and governance with your AI models?’ In particular, using generative AI in a business setting can pose various risks around accuracy, privacy and security, regulatory compliance, and intellectual property infringement.”

And he makes a very good point. But it’s too early to look to the Australian government for prescriptive guidance just yet; there is currently no AI-specific regulatory framework in place. However, the good news is that we can expect the expanding risks to accelerate focused legislation. While Australia’s 8 Artificial Intelligence (AI) Ethics Principles are designed to ensure AI is safe, secure, and reliable, they are voluntary.

That said, the Australian Government is all in favour of AI adoption, pledging $41.2 million to ‘support the responsible deployment of AI’ in its 2023/2024 budget. This includes strengthening the Responsible AI Network and launching the Responsible AI Adopt Program to help SMEs adopt AI.

Governance internationally, though, has raced ahead. The proposed EU AI Act will be the world’s first comprehensive AI law – watch this space. In 2023, Australia joined the EU and 27 other countries in signing the Bletchley Declaration, an international commitment to ensuring that AI should be designed, developed, deployed, and used in a safe, human-centric, trustworthy, and responsible manner.

Ready, set, go – easier said than done?

How do you ensure you are ready for GenAI and your cloud infrastructure to play nice? It’s one thing to give GenAI the nod but another to successfully integrate it into your cloud architecture. Without a carefully defined and agreed-upon approach, you risk not only failed projects but also a compromised security framework.

  • Articulate and agree on use cases within your organisation for AI so you can determine what changes should be made to your IT landscape to best suit your needs.
  • Remember that GenAI is data-centric, so ensure your data is clean, accessible, and compatible with cloud storage solutions.
  • Think ahead when it comes to security and privacy. It’s imperative to have a robust security architecture integrated at every step of the process.
  • Balance scalability with cost-efficiency to reap benefits, rather than drain finances.
  • Choose the right cloud infrastructure model for your use case.
  • Monitor, monitor, and monitor. Not only the performance of your AI models but also your cloud resource costs to ensure operational and architectural efficiency.
  • Be ethical, stay legal. If GenAI is making decisions impacting your users or creating content, then ethical considerations must drive design principles. While specific AI legislation is not (yet) in place, Australia’s Privacy Act covers some of the considerations, and amendments are due to follow.
  • Disaster recovery and resilience. High availability can be the difference between value and disaster. It’s critical that your provider/s can minimise downtime and data loss in case of system failures.

Your cloud infrastructure is critical to your ability to leverage GenAI’s transformative power. We don’t want you to be left behind.


The Modern CIO: Building bridges between business and customers.

Once upon a time, the CIO was an unappreciated and largely unknown hero, relegated to the back room and responsible for keeping the lights on without fanfare or recognition. Now, the role has matured into one that is central (and critical) to achieving business goals.

As well as being charged with the responsibilities that come with a seat at the boardroom table, today’s CIO is accountable for building a digital customer-first foundation that can easily evolve to meet changing demands.

How did Customer Experience (CX) become a CIO responsibility?

One of the most telling comments in Forrester’s “The CIO’s Role In The Growth Agenda” report comes from a CIO who told the analysts: “It turns out, I actually own customer experience because I’m responsible for the systems that serve them.”

And with CX being increasingly reliant on technology, the choices the CIO makes now will underpin business growth. Those choices are important – and far-reaching.

Here’s why.

The case for exceptional CX being the norm, not the exception.

In Forbes’ article from late 2023, “Leading Digital Transformation: Why CIOs Should Keep CX Top Of Mind,” they observe that research has repeatedly shown that keeping customers happy and finding better ways to engage with them is not just crucial for survival but also key to thriving in a challenging economic climate.

Forbes also points to PwC’s Customer Loyalty Executive Survey 2023, where 87% of executives and 51% of consumers in the United States agreed that an online shopping experience can negatively impact loyalty if it’s not as easy or enjoyable as shopping in person.

What is apparent from this is that CX is critical to growth and loyalty (and profitability) across virtually every aspect of customer interactions – from websites to apps, support to fulfilment, to personalised omnichannel communications based on previous behaviour, preferences, and purchases. And key to this is your organisation’s ability to collect and meaningfully analyse masses of data – via technology.

Is there more to the CIO role than CX, though?

While important, CX isn’t the be-all and end-all – it’s a two-way bridge. Your technology environment needs to empower your internal stakeholders so they can derive deeper and more valuable insights into the market and make better decisions. From what to sell, when and how, and what next – impacting product development, sales, customer service, marketing, and growth strategies.

And of course, the better the technology, the more ownership and support you’ll see from your tech teams.

So, circling back around to the original point of this article – today’s CIO plays a critical role in deciding and guiding the use of technology (from your systems of engagement, systems of insight, security, and infrastructure – nothing is exempt) and data.

The decisions you make should enhance how the business interacts with your customers, optimise its processes, and align your business strategies with the needs and high-flying CX expectations of your customers – while bringing joy to your stakeholders.

That given, let’s look at how you can ‘make it so.’

The four key strategies to drive a customer-centric tech approach.

1. Be customer aware

Make sure your business is where and what your customers expect it to be, with the ability for them to interact with you however they want to.

While it’s not as simple as ‘build it and they will come’, failing to build solutions that deliver the high-quality experience your customers expect (from web to mobile apps to self-help) is a sure-fire path to failure in a digital world.

2. Stand united

Your technology model should link your tech and business teams – from marketing to sales, CX, product, and digital – together, not drive a ‘have/have-not’ wedge between them.

In Forrester’s “The CIO’s Role In The Growth Agenda” report, they say: “In our studies, respondents at enterprises with high levels of alignment across customer-facing functions report 2.4x higher revenue growth than those with some or no alignment. Those same aligned groups benefit from working with IT teams that are 3.7 times more likely to be highly or somewhat aligned with other functions.”

Also consider what new technologies like AI (artificial intelligence) and ML (machine learning) will bring to the table as part of your drive to improve your business operations and gain a competitive advantage. While you may prefer to develop custom models that work well with your current data sets, keep an eye out for records management application vendors who are incorporating AI directly into their products.

3. Discard complexity

Stop investing in old technology. Make now the time to move on from the cost and complications inherited with legacy systems, and to consolidate and build better customer-facing systems.

Reduce the complexity of your systems of record by ensuring you have a strong ability to retrieve data from your existing systems. This way, you can be confident that you can access the data you need in the future – which is especially important if you are in a regulated industry.

For example, in the professional services sector, many organisations are switching to cloud-based records management systems to enable business innovation, and as a result, are shutting down their old on-premises systems. Global Storage customers in this sector trust that their legacy data is secure and recoverable through our range of cloud services, which allows them to move forward and free up old capital and resources.

4. Invest in results

While it’s tempting to adopt one shiny, exciting new solution after another, step back and reconsider. The most important thing about technology is the result, not the way to achieve it.

Keeping this in mind will help you focus on what matters most to the business. For example, Global Storage offers an outcome-based service with strict SLAs that allows our customers to concentrate on innovation within the business. This saves them from getting bogged down in the essential but routine operational tasks and the effort and expense of keeping up with new technology and systems that ultimately add little value to the business.

In summary, building great bridges requires strong foundations – ones that are deep and true to support the weight of change and significant business growth.

Above all, the foundations you lay as CIO should enable fast and complete business recovery following a natural or maliciously contrived disaster.

Contact us to have an obligation-free chat.


Global Storage takes out Veeam VCSP Partner of the Year for ANZ

Veeam recently announced their ANZ Partner Awards to celebrate the success of their channel in 2022. Global Storage were delighted to accept the award for Veeam Cloud and Service Provider (VCSP) of the Year for Australia.

Laura Currie, Channel & Alliances Marketing Manager for ANZ, commented on the award:

“Your commitment to ongoing growth and valuable insights into our products and programs have truly set you apart. Your dedication to our partnership and active engagement within the Veeam community have significantly contributed to our mutual success.”

The partner awards celebrated 13 partners across ANZ for their achievement and activity with Veeam in the previous year.

“In the past year, Veeam has made great progress in helping its ANZ partners build their practices, in order to better serve their customers,” said Gary Mitchell, VP of ANZ at Veeam Software. Gary went on to say that “Veeam’s 100 per cent channel model firmly puts Veeam’s partners at the centre of the ecosystem and we are extremely proud to be working with them to provide customers with the resilience, availability, and business outcomes they need. We are thrilled to be able to celebrate their achievements at this year’s ANZ Partner Awards.”

This award reflects Global Storage’s ongoing commitment to delivering our innovative and secure Backup and Disaster Recovery as a Service offering.

As a Platinum Veeam VCSP partner, we invest in our people, with six certified Veeam Technical Sales Professionals forming part of our team. With over two decades of data management experience, the Global Storage team is uniquely qualified to help companies of all sizes realise agility, efficiency, and intelligent data management across diverse cloud environments.

Source: Veeam celebrates A/NZ channel — ARN (arnnet.com.au)


Written in partnership with Veeam.

Cloud: Simplifying an increasingly complex hybrid landscape with confidence

The challenges for today’s CISOs aren’t going away any time soon – especially when it comes to data management, protection and recovery in a multi-cloud or hybrid-cloud environment.

The complexities associated with cloud and tech environments were listed as a top 3 challenge in the Focus Networks Intel Report for the CIO & CISO Leaders Australia Summit 2023. And, according to ARN, cloud spending will top the list in 2024.

So, what does this mean for your organisation and its ability to manage your hybrid cloud environment?

Shouldn’t hybrid cloud be getting easier, not more complex?

You’d think the rush to hybrid cloud would be slowing down by now.

But, says Veeam in its #1 Hybrid Cloud Backup Guide, hybrid cloud implementations are unlikely to go away. Whether by careful, strategic design or accidental evolution, 92% of businesses already have a hybrid or multi-cloud setup. Regardless of the route taken, hybrid cloud is today’s reality for most organisations.

Hybrid cloud, observes Veeam, no longer means a mix of on-premises and a (single) public cloud. These days, a hybrid environment is more likely to consist of specifically chosen platforms used to serve different purposes. For example, disaster recovery (DR), production, dev-test and more. Meaning there’s more to measure, manage, and protect.

So, it’s easy to see how, over time, the complexity of hybrid cloud – especially in terms of backing it up – has grown, not lessened.

Managing data protection and security is easy (said no one, ever)

As we adopt more modern platforms, the struggle to manage them and their dispersed, often locked-away data grows in the face of ever-evolving cyber threats. And legacy backup solutions won’t cut the mustard. They’re old news, high-risk, and only suitable for dangerously old and high-risk technology environments.

If you have a modern multi-cloud environment, it’s obvious you need to take a modern approach to protecting it. Even then, not all cloud backup solutions on offer are created equal. With the need to back up your physical and virtual machines (VMs), cloud-native infrastructure and platforms, SaaS, and Kubernetes – all of which benefit from purpose-built protection – it can be a big ask. While native backup tooling is available from both first- and third-party vendors, this multi-vendor approach can result in siloed management and often creates more challenges than it overcomes. At a time when the desire is to reduce costs and simplify management, it does the opposite.

Then, there are those public cloud vendors who lock your data into their platforms, meaning you need to compromise on performance, capabilities, and costs rather than embrace a move to a better, more suitable platform.

Multi-cloud and hybrid-cloud environments are now the norm, not the exception. So, the need for a single-pane-of-glass approach to data management, protection and recovery is more critical than ever before.

The lowdown on the future of cloud (and what it means for you)

First, let’s look at where cloud is heading. Because above all, as cloud evolves and transforms, you need to consider solutions that will go the distance.

In Forbes’ article “Cloud Computing In 2024: Unveiling Transformations And Opportunities”, they open with this bold statement: “The dynamic realm of cloud computing is on the brink of remarkable transformations in 2024, as organizations and service providers brace themselves for an era characterized by innovation, challenges, and unprecedented opportunities.”

Sounds great, but what do they actually mean by this?

In its list of 11 key trends for 2024, Forbes says the era of one-size-fits-all cloud solutions is on the way out and a more tailored and dynamic approach that combines public and private clouds is in. Hybrid and multi-cloud environments are set to become the new normal for organizations of all sizes – which comes as little surprise to most of us.

More importantly (in the context of this blog), Forbes says that with the shift to multi-cloud environments and serverless computing, IT departments will face novel challenges, including paying more attention to security. While specialised solutions that are designed to help simplify the inherently intricate nature of multi-cloud environments are emerging, Forbes cautions against tools that conceal complexity without genuinely streamlining or reducing it.

More positively, though, Forbes says that AI will optimise cloud management as it transitions from novelty to norm, bringing benefits including streamlined overall cloud operations.

Another trend Forbes noted (one that’s far from new in a world strapped for skilled technology resources) is the challenge of bridging a skills gap as cloud adoption increases. Meaning solutions that reduce the need for specialised cloud-computing professionals will be welcomed with open arms.

So, where to from here?

Given the challenges, what’s important when considering a data protection, management, and security platform to support your ever-evolving hybrid-cloud environment?

  • Centralised management. Drive efficiency and reduce costs with a single view of all environments and just one toolset.
  • The ability to support everything. As hybrid environments grow in complexity, look for a solution that natively supports everything from SaaS to physical servers, Kubernetes, and more.
  • Own your own data. Eliminate data lock-in with a solution that allows you to move data freely across your infrastructure so it’s available where and when you need it.
  • Only use and pay for what you need. Choose a solution that allows you to cherry-pick the components you need without financial or licensing penalties.
  • A seamless experience. Protect, manage, and recover your hybrid cloud environment with a platform that delivers what it promises without downtime, data loss, or compromise.

Hybrid cloud offers benefits and challenges in equal measure – something we deal with daily. Reach out to Global Storage for an obligation-free chat about how we can help you simplify the complex.


Written in partnership with Veeam.

The new NIST list – what you need to know 

How time flies. It’s already been almost 10 years since the NIST (National Institute of Standards and Technology) Cybersecurity Framework was first rolled out to provide technical guidance for those responsible for critical infrastructure interests, including energy, banking, and public health.

By early November, we can expect to see a sixth function officially added to the famous five functions of an effective cybersecurity program – namely: Identify, Protect, Detect, Respond, and Recover.

And we’re glad to say that the final function is ‘govern’. 

It’s expected that the addition of the sixth function will expand the usefulness of the NIST framework to all those sectors outside of critical infrastructure and provide guidance to support their overall cybersecurity strategies.  

Celebrating the new NIST framework 

So, why does NIST 2.0 make us quietly happy? Possibly because it’s something we’ve taken to heart. 

From the Global Storage perspective, governance has long been the missing piece in the cybersecurity puzzle. Having gone through the intensive processes of earning ISO 27001 certification several years ago, it’s good to see NIST catching up with the technology partners (like us) who adopted ‘govern’ as a central premise to support and protect their customers more effectively. 

And the Australian Government obviously agrees. Its current principles of cybersecurity governance are grouped into four key activities: govern, protect, detect and respond.

  • Govern: Identifying and managing security risks.
  • Protect: Implementing controls to reduce security risks.
  • Detect: Detecting and understanding cyber security events to identify cyber security incidents.
  • Respond: Responding to and recovering from cyber security incidents.

 In its discussion paper, “Strengthening Australia’s cybersecurity regulations and incentives,” the government is actively seeking views about how it can incentivise businesses to invest in cybersecurity, including through possible regulatory changes. The first of the proposed new policies up for discussion is governance standards for large businesses. Suggested governance approaches include alignment with international standards and frameworks (like ISO 27001 and NIST).  

Governance (and the associated reporting) is clearly a timely new focus for those non-critical infrastructure Australian businesses that haven’t yet fully developed a robust and all-encompassing cybersecurity plan. ASIC has started to actively fine businesses that fail to take remedial action after breaches – and they are unlikely to accept excuses based on size and lack of capability from the SMB sector.  

It’s been interesting for us to watch some of our larger customers, who previously aligned themselves with the ASD Essential Eight, now realigning themselves with NIST due to its depth, breadth and maturity. And we expect the addition of the ‘govern’ function to cement that move even more firmly. 

Catching the curve ball 

While we’d like to say we were ahead of the curve in becoming ISO 27001 certified, the reality is that many technology partners saw the writing on the wall. We could see that “govern” would be recognised as an important function over and above the five technical, control-based functions championed by NIST up until now – and that our commitment to going further should come sooner rather than later.

What Global Storage’s ISO accreditation (and statement of applicability) means for our customers is that we keep the necessary governance records for them. So, if they are audited or even prosecuted, we can prove that the principles and controls of ‘govern’ were fully followed. In effect, they can leverage our external certification against their compliance requirements, making it easier for them to do business with confidence. And in turn, we leverage the certifications of our own ISO-accredited service providers.  

While committing to ISO 27001 five years ago was a market differentiator, it’s now a prerequisite for most partners like us. From a sales perspective, it accelerates conversations and removes roadblocks. Whereas before, our customers typically had no dedicated security resources, today’s organisations often have multiple internal staff whose primary responsibility is security. But they are the fortunate ones: with the huge global deficit in cybersecurity resources, many organisations struggle to hire and retain the people they need. All of which makes it even more important that a partner can offer the certified support required.

New framework, new challenges 

But going back to a cybersecurity framework that includes ‘govern’: for those already in a regulated industry (for example, health and banking), it shouldn’t pose too much of a problem – they are used to being audited.

In the case of non-regulated and often less mature industries, though, it will pose a challenge despite growing customer demand that they level up. For these organisations in particular, having a service provider that’s already got all those ‘govern’ boxes ready-ticked will alleviate the time, pain, and distraction of completing additional paperwork. 

As I’ve said, we’ve made a significant investment in ISO 27001, and that accreditation requires us to achieve and maintain precise standards and undergo a yearly external audit. It’s also shaped the way we run our business. We can’t afford mistakes; we put our reputation on the line daily. These days, saying “oops, sorry, my bad” isn’t good enough for us or our customers (and in our books, it never has been) – meaning we’re very prescriptive about how we run our cybersecurity functions and services.  

Feel good about the company you keep 

Like practically every company in the world, we’ve had cybercriminals trying to attack us – but every attempt has been detected, contained, and dealt with in keeping with our governance system. We’ve never had a breach. 

With NIST soon to be updated and the Australian Government looking likely to enforce governance for all organisations regardless of size, it’s critical these businesses can turn to a trusted service provider who has been there, done that – and actually lives and breathes the concept of “govern”. Only by doing that can they quickly and directly move forward and comply while reducing risk. 

Service partners like Global Storage are no longer just the clean-up crew when something goes wrong. We’re not just the people you lean on for (exceptional) backup and recovery as a service and disaster recovery as a service to provide 24/7 protection – we also deliver the in-depth reporting needed to keep you compliant, auditable, and accountable for everything cybersecurity.

So, when your performance and strategy are held up against NIST standards, ISO standards, or government governance regulations, you can be confident that you, too, are ahead of the cybercrime curve ball.  


Get in touch for a Free, No‑Obligation Consultation

Arrange a chat with our experienced team to discuss your data protection, disaster recovery, cloud or security requirements.

  • Arrange an introductory chat about your requirements
  • Gain a proposal and quote for our services
  • View an interactive demo of our service features

Prefer to call now?
Sales and Support
1300 88 38 25
