Cloud and GenAI. It had to happen.
Whiskers and kittens? Fish and chips? Ben and Jerry? Cloud and GenAI are set to become an inevitable pairing – and one you need to prepare for.
More cloud, more smarts
In its 2023 CIO and Technology Executive Survey, Gartner found that more than 62% of Australian CIOs expected to spend more on the cloud this year – but are they architecting their cloud platforms to prepare for GenAI?
“Local CIOs have told us the top two technologies they plan on investing in next year are SASE (secure access service edge) to simplify the delivery of critical network and security services via the cloud, and generative AI for its potential to improve innovation and efficiencies across the organization,” says Andy Rowsell-Jones, Distinguished VP Analyst at Gartner.
According to Gartner, investment in GenAI will keep growing alongside the continued shift to digital in Australia over 2024. And Gartner anticipates that, in the long term, enterprises will primarily incorporate GenAI through their existing spend – via the software, hardware, and services already in use.
How will GenAI be served up to users?
GenAI thrives on data and compute power – and the more, the better. So, cloud is an obvious vehicle. However, training AI models, such as the LLM (large language model) that powers ChatGPT, requires access to massive amounts of data and vast amounts of compute. And that poses a problem for organisations that are keen to drive value from GenAI but lack the computing resources to leverage this amazing but power-hungry technology.
This is where the first of Forbes’ 10 predictions for computing trends in 2024 comes in: get ready for AI-as-a-Service.
Just when we needed yet another technology acronym, AIaaS pops into frame. It’s all good, though: by accessing AI-as-a-service through cloud platforms, even organisations lacking the necessary cloud infrastructure and compute power can leverage this powerful, transformative technology.
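To make that concrete, here is a minimal sketch (in Python) of what consuming AIaaS can look like in practice: the model, the GPUs, and the scaling all sit with the cloud provider, and your application simply calls an HTTP endpoint. The endpoint URL, request fields, and response format below are illustrative placeholders rather than any particular vendor’s API.

```python
import os
import requests

# Illustrative placeholders only - substitute your provider's real endpoint and schema.
ENDPOINT = "https://genai.example-cloud.com/v1/chat"
API_KEY = os.environ["GENAI_API_KEY"]  # keep credentials out of source code

def summarise(text: str) -> str:
    """Ask a hosted model to summarise a document - no local GPUs required."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "general-purpose-llm",  # whatever model tier your subscription includes
            "prompt": f"Summarise for an executive audience:\n{text}",
            "max_tokens": 300,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["output"]  # field name is hypothetical; check your provider's docs

if __name__ == "__main__":
    print(summarise("Paste your quarterly cloud report here..."))
```

The point of the sketch is the shape of the arrangement, not the specific calls: the heavy lifting happens in the provider’s cloud, and your side of the integration is little more than a secured API request.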
While AIaaS is exciting, the subject of cloud cybersecurity and GenAI is more sobering. Forbes warns that “encryption, authentication and disaster recovery are three functions of cloud computing services that will be increasingly in demand as we face up to the evolving threat landscape of 2024.” With data thefts and breaches increasing in frequency and severity as hackers use AI to develop new forms of attack, every system accessible to humans will be at risk from social engineering attacks, leaving security and resilience high on the agenda of all cloud providers and customers.
Which brings us to governance and readiness.
Governance and GenAI
In its must-do guide for GenAI governance, Phil Moyer, Google Cloud’s global vice-president for AI and Business Solutions, observed, “Today’s leaders are eager to adopt generative AI technologies and tools. Yet the next question after what to do with it remains, ‘How do you ensure risk management and governance with your AI models?’ In particular, using generative AI in a business setting can pose various risks around accuracy, privacy and security, regulatory compliance, and intellectual property infringement.”
And he makes a very good point. But don’t look to the Australian government for prescriptive guidance just yet: there is currently no AI-specific regulatory framework in place. Australia’s 8 Artificial Intelligence (AI) Ethics Principles are designed to ensure AI is safe, secure, and reliable, but they are voluntary. The good news is that we can expect the expanding risks to accelerate focused legislation.
That said, the Australian Government is all in favour of AI adoption, pledging $41.2 million to ‘support the responsible deployment of AI’ in its 2023/2024 budget. This includes strengthening the Responsible AI Network and launching the Responsible AI Adopt Program to help SMEs adopt AI.
Governance internationally, though, has raced ahead. The proposed EU AI Act will be the world’s first comprehensive AI law – watch this space. In 2023, Australia joined the EU and 27 other countries in signing the Bletchley Declaration, an international commitment to ensuring that AI is designed, developed, deployed, and used in a manner that is safe, human-centric, trustworthy, and responsible.
Ready, set, go – easier said than done?
How do you ensure GenAI and your cloud infrastructure will play nicely together? It’s one thing to give GenAI the nod, but another to successfully integrate it into your cloud architecture. Without a carefully defined and agreed-upon approach, you risk not only failed projects but also a compromised security framework.
- Articulate and agree on use cases within your organisation for AI so you can determine what changes should be made to your IT landscape to best suit your needs.
- Remember that GenAI is data-centric, so ensure your data is clean, accessible, and compatible with cloud storage solutions.
- Think ahead when it comes to security and privacy. It’s imperative to have a robust security architecture integrated at every step of the process.
- Balance scalability with cost-efficiency to reap benefits, rather than drain finances.
- Choose the right cloud infrastructure model for your use case.
- Monitor, monitor, and monitor. Track not only the performance of your AI models but also your cloud resource costs, to ensure operational and architectural efficiency (see the sketch after this list).
- Be ethical, stay legal. If GenAI is making decisions impacting your users or creating content, then ethical considerations must drive design principles. While specific AI legislation is not (yet) in place, Australia’s Privacy Act covers some of the considerations, and amendments are due to follow.
- Disaster recovery and resilience. High availability can be the difference between value and disaster. It’s critical that your provider(s) can minimise downtime and data loss in the event of system failures.
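To illustrate the monitoring point above, here is a rough Python sketch that wraps each model call so that latency, token usage, and an estimated cost are captured alongside the answer. The per-token price, the token count returned by the call, and the logging destination are all assumptions; in practice you would plug in your provider’s real pricing and your own observability or FinOps tooling.

```python
import time
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-metrics")

# Assumed figure for illustration only - use your provider's actual pricing.
COST_PER_1K_TOKENS_AUD = 0.03

@dataclass
class CallMetrics:
    latency_s: float
    tokens_used: int
    estimated_cost_aud: float

def call_model_with_metrics(call_model, prompt: str) -> tuple[str, CallMetrics]:
    """Wrap any model-calling function so every request is measured and logged."""
    start = time.perf_counter()
    answer, tokens_used = call_model(prompt)  # assumed to return (text, token count)
    latency = time.perf_counter() - start
    metrics = CallMetrics(
        latency_s=round(latency, 3),
        tokens_used=tokens_used,
        estimated_cost_aud=round(tokens_used / 1000 * COST_PER_1K_TOKENS_AUD, 4),
    )
    # In production, send this to your cloud monitoring / FinOps platform rather than a log line.
    log.info("model call: %s", metrics)
    return answer, metrics
```

Capturing cost and performance per call, rather than waiting for the monthly cloud bill, is what keeps GenAI experiments from quietly draining the budget.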
Your cloud infrastructure is critical to your ability to leverage GenAI’s transformative power. We don’t want you to be left behind.