PwC reports that 80% of businesses expect AI to drive major changes in the next decade. That’s a staggering figure, but here’s the kicker: most of them aren’t ready for the risks that come with it. It’s easy to get swept up in the excitement around AI-developed software, but we’ve been here before.
Remember when Microsoft Access databases started popping up everywhere? Departments created their own custom solutions without IT oversight. It was convenient, until it wasn’t. Critical business processes became dependent on systems that no one knew how to maintain. Sound familiar?
Today, AI-driven software is threatening to do the same thing, just on a much bigger scale. In business continuity management, we’re seeing more departments developing AI tools to manage everything from customer service to supply chain predictions. It’s powerful, it’s innovative, and it’s fast. But there’s a major risk that businesses are overlooking: those AI systems, like the MS Access databases of the past, can become critical to operations and yet remain dangerously fragile.
The MS Access Debacle All Over Again?
In the 90s and early 2000s, Microsoft Access was a godsend for non-technical employees. Need a quick fix? Build your own database. No need to wait for IT. The problem? Many of those databases quickly became mission-critical, yet were often poorly documented, impossible to scale, and reliant on a single person who built them. When that person left, chaos followed.
We’re seeing the same thing today with AI-driven software. People are excited to jump on the AI bandwagon, using tools like ChatGPT, custom machine learning models, and AI-driven analytics to solve problems. But here’s the danger: without proper oversight, these AI solutions can become the next “forgotten database”, critical to business functions but nearly impossible to maintain or scale.
The Business Continuity Nightmare
From a business continuity perspective, this trend is alarming. Why? Because AI tools are now being embedded in everyday operations, often without a clear understanding of how they work, who will maintain them, or what happens when they fail. AI software can be incredibly complex, making it hard to fix or replace if something goes wrong. In fact, a Gartner report suggests that by 2025, over 30% of critical business functions will rely on AI technology that isn’t controlled or understood by IT.
This is the worst kind of risk. When an AI-driven system goes down, how quickly can it be recovered? Who is responsible? What’s the backup plan? Just like with those rogue MS Access databases, most businesses won’t have answers until it’s too late.
How to Prevent the Next AI Crisis in Business Continuity
Let’s talk about prevention. You can avoid AI becoming your company’s Achilles heel by taking a few simple, yet critical steps.
1. Bring AI under the IT Umbrella
AI tools should not exist in silos. IT departments must be involved in the development, deployment, and maintenance of any AI-driven solutions. Just because someone in marketing or finance can build an AI model doesn’t mean they should. AI needs to be treated as an integral part of your IT infrastructure, with proper governance, monitoring, and disaster recovery plans.
2. Document Everything
One of the biggest problems with MS Access databases was the lack of documentation. People built solutions and moved on, leaving behind a tangled mess. AI is no different. Every AI model, every dataset, and every integration must be well-documented. Who built it? How does it work? What data does it rely on? Without this information, recovering from an AI failure will be nearly impossible.
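What does “documented” look like in practice? As a minimal sketch, even a small machine-readable record per model answers those questions; the field names and the model_registry.json file below are illustrative, not a standard your tools will expect.

# A minimal sketch of the kind of record worth keeping for every AI tool.
# Field names and the registry filename are illustrative assumptions.
import json
from datetime import date

model_record = {
    "name": "supply-chain-demand-forecast",        # hypothetical model name
    "owner": "jane.doe@example.com",                # who built it and who maintains it
    "built_with": "scikit-learn 1.4, Python 3.11",  # runtime it depends on
    "data_sources": ["erp_orders_2019_2024", "weather_feed_v2"],  # illustrative datasets
    "retraining_schedule": "monthly",
    "downstream_consumers": ["warehouse planning dashboard"],
    "last_reviewed": date.today().isoformat(),
}

with open("model_registry.json", "w") as f:
    json.dump(model_record, f, indent=2)

Even a record this small tells you who to call, what data the model needs, and what breaks downstream if it disappears.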
3. Train, Don’t Outsource Responsibility
Relying on a single person or a small team to manage AI-driven tools is a recipe for disaster. AI is a constantly evolving field, and it requires ongoing maintenance and training. Make sure your teams are up to speed on how these tools work and what to do when they fail. Train people across departments, so no one person holds the keys to the AI kingdom.
4. Create AI-Specific Business Continuity Plans
Most businesses have continuity plans for their servers, applications, and data—but what about AI? If an AI system fails, what’s the backup? Can operations continue without it? Can you quickly rebuild the model, or will it take weeks of downtime? Your business continuity plan needs to account for AI tools and processes, including disaster recovery scenarios that cover the potential loss or failure of AI-driven systems.
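To make that concrete, here’s a minimal sketch of a documented fallback path, assuming a hypothetical demand-forecasting call. The specific functions are illustrative; the point is that operations can keep running in a degraded but known mode while the AI piece is repaired or rebuilt.

# A minimal sketch of a fallback path. predict_with_model() is a stand-in
# for a real AI call (API request, model inference, etc.) and is assumed,
# not taken from any particular library.
from statistics import mean

def predict_with_model(history: list[float]) -> float:
    # Stand-in for the real AI integration; here it simply fails,
    # as it would during an outage.
    raise ConnectionError("AI forecasting service unreachable")

def forecast_demand(history: list[float]) -> float:
    try:
        return predict_with_model(history)
    except Exception:
        # Degraded mode: a plain moving average keeps planning running.
        # Less accurate, but documented and recoverable without the model.
        return mean(history[-4:])

print(forecast_demand([120, 130, 125, 140, 150]))  # falls back to 136.25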
5. Regularly Test Your AI Failover Capabilities
Don’t wait for a crisis to discover that your AI system has no backup. Just like you would test the failover for a server or a database, you need to test what happens when an AI system goes offline. Run simulations to confirm you can recover quickly. If you can’t, adjust your plan. Better to find out now than in the middle of a real incident.
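A drill can be as simple as an automated test that forces the failure and checks that the fallback engages. This sketch assumes the hypothetical forecast_demand()/predict_with_model() pair from the previous example lives in a module called forecasting, and it runs under pytest.

# A minimal sketch of a failover drill. The forecasting module name and its
# functions are assumptions carried over from the previous sketch.
from unittest import mock

import forecasting

def test_operations_survive_ai_outage():
    history = [120, 130, 125, 140, 150]
    # Simulate the AI model or service being unreachable.
    with mock.patch.object(
        forecasting, "predict_with_model",
        side_effect=ConnectionError("simulated outage"),
    ):
        result = forecasting.forecast_demand(history)
    # The business process should still produce a usable forecast.
    assert result > 0

If a drill like this can’t pass, you’ve just learned your real recovery time before an incident teaches it to you.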
The Future of AI in Business Continuity
AI is here to stay, and it’s going to be a critical part of many businesses moving forward. But it’s also a double-edged sword. We can’t let the same mistakes that plagued MS Access databases come back to haunt us. By treating AI as a core part of your infrastructure, involving IT, documenting everything, and regularly testing your recovery plans, you’ll keep your business safe from the potential pitfalls of AI-driven software.
So, next time your department’s looking at building or buying an AI tool to solve a quick problem, ask yourself: Are we about to create the next MS Access crisis? If the answer is yes, it’s time to rethink your approach before it’s too late.