Organisations are rapidly adopting AI models to improve efficiency and automate business processes. However, this adoption introduces serious security risks that many companies haven’t fully considered. This article breaks down the key security concerns and what organisations need to do about them.
Key Security Concerns When Deploying AI Models
Trained AI Models Are Prime Targets for Attackers
When cybercriminals break into a company’s network, they typically steal large amounts of data but struggle to identify what’s actually valuable. Companies accumulate massive volumes of outdated files, old databases, and irrelevant documents. Attackers often exfiltrate terabytes of data only to find that most of it has little value.
Trained AI models completely change this dynamic. If an organisation has trained an AI model to handle customer service, manage sales processes, or support product development, that model concentrates business value into a single asset: intellectual property, business processes, customer information, and operational knowledge, all in one place.
For attackers, stealing a trained AI model means getting everything valuable in a single theft. There’s no need to sort through junk data.
The risk is even higher when AI models are used for actual business operations rather than just providing recommendations. If an AI model is making decisions or directly interacting with customers, compromising it can shut down critical business functions. If the model contains unreleased product information or strategic plans, competitors could gain significant advantages.
Organisations need to treat trained AI models as crown jewels and protect them accordingly with strong access controls, encryption, and monitoring.
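As one illustration of treating a model file as a crown jewel, the sketch below encrypts a serialized model artifact at rest. The file path is hypothetical, and the use of the cryptography package's Fernet primitive simply stands in for whatever encryption and key-management tooling an organisation already has; in practice the key would come from a secrets manager or HSM rather than being generated inline.

```python
# Sketch: encrypting a trained model artifact at rest.
# The path "models/customer_service_model.bin" is hypothetical; the key
# should be issued and stored by a secrets manager or HSM, not kept on
# disk next to the model.
from cryptography.fernet import Fernet

def encrypt_model(model_path: str, key: bytes) -> str:
    """Write an encrypted copy of the model file and return its path."""
    cipher = Fernet(key)
    with open(model_path, "rb") as f:
        plaintext = f.read()
    encrypted_path = model_path + ".enc"
    with open(encrypted_path, "wb") as f:
        f.write(cipher.encrypt(plaintext))
    return encrypted_path

key = Fernet.generate_key()  # in production: fetch from a key management service
encrypt_model("models/customer_service_model.bin", key)
```

Encryption at rest only addresses theft of the stored artifact; access restrictions and monitoring of who loads or queries the model are still needed alongside it.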
AI Models Can Drift and Give Bad Advice Over Time
AI models are designed to learn and improve from experience. They find patterns and develop better ways to handle common situations. This is useful, but it creates a problem called “drift”, where the model’s behaviour gradually changes in unwanted ways.
Over time, an AI model might start giving responses that don’t align with company policies or values. It could learn patterns that lead to culturally offensive suggestions. In customer service, it might gradually become less helpful or even recommend competitor products. In healthcare applications, drift could result in dangerous medical advice.
The model interacts with thousands or millions of users and learns from those interactions. Without oversight, it can develop responses that harm the business or customers.
This means organisations cannot simply deploy an AI model and forget about it. They need to:
- Regularly test the model’s responses to ensure accuracy
- Monitor for responses that fall outside acceptable ranges
- Set limits on how creative or autonomous the model can be
- Have processes to correct the model when it drifts
Think of it like quality control in manufacturing. Just as you wouldn’t produce widgets without checking them, you can’t run AI models without verifying their outputs.
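One minimal way to put that quality control into practice is a fixed regression suite that replays known prompts and flags responses that break policy. In the sketch below, the `model(prompt)` call, the test cases, and the keyword rules are all placeholder assumptions; a real suite would use the organisation's actual inference interface and evaluation criteria.

```python
# Sketch: a scheduled drift check. `model` stands in for whatever inference
# call is actually in use; the test cases and keyword rules are illustrative.
BASELINE_SUITE = [
    {"prompt": "A customer asks for a refund outside the return window.",
     "must_contain": ["refund policy"],
     "must_not_contain": ["competitor"]},
    # ...more cases drawn from policy documents and past incidents
]

def check_for_drift(model, suite=BASELINE_SUITE):
    """Return the prompts whose responses no longer meet expectations."""
    failures = []
    for case in suite:
        response = model(case["prompt"]).lower()
        missing = [t for t in case["must_contain"] if t not in response]
        forbidden = [t for t in case["must_not_contain"] if t in response]
        if missing or forbidden:
            failures.append(case["prompt"])
    return failures  # a non-empty list should trigger an alert or a rollback
```

Run on a schedule, a check like this gives an early signal that the model's behaviour has moved outside acceptable ranges, before customers notice.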
Governance and Accountability Considerations When Deploying AI Models
Decide Whether to Use One AI Model or Multiple Separate Models
Organisations face a choice: deploy one large AI model that handles multiple functions across the company, or use separate AI models for different departments.
A single unified model might seem more efficient, but it creates risks. The model could absorb and reinforce problematic aspects of company culture like departmental conflicts or biased decision-making. If the sales and operations teams have competing priorities, the AI might encode those conflicts into its recommendations.
Using separate models for different departments reduces this risk. Each model stays focused on its specific function without interference from other parts of the organisation. The trade-off is losing some efficiency and the ability to share insights across functions.
Another consideration: humans still need to make final decisions. AI models should support decision-making, not replace it entirely. Some decisions require judgment, ethics, and accountability that AI cannot provide.
The technology exists to create fully AI-run organisations using AI decision-making, blockchain, and automation. This might become viable in three to six years. However, legal and regulatory frameworks don’t support this yet, and the risks are poorly understood.
Understand Insurance Coverage for AI Mistakes
When an AI model makes a mistake or gives bad advice that causes damage, who is responsible? Does the company’s insurance cover it?
These are unresolved questions. Traditional Errors & Omissions insurance policies weren’t written with AI in mind. As AI adoption increases, insurance companies are starting to evaluate how dependent organisations are on AI systems and to price that risk accordingly.
This pattern mirrors what happened with cyber insurance. When ransomware attacks became widespread, insurance companies wrote policies with little historical data, minimal security requirements, and premiums based on guesswork. As losses mounted, premiums skyrocketed and coverage became more restrictive.
AI insurance is heading down the same path. As AI-related incidents occur, organisations will face difficult questions:
- Is an executive liable for following AI advice that turns out wrong?
- What happens when an executive ignores AI recommendations and things go badly?
- Who pays for damages when an AI model malfunctions?
These scenarios create governance complications that organisations need to address before problems occur. Review current insurance policies to understand what’s covered and what isn’t. Consider whether additional coverage is needed as AI deployment expands.
What Organisations Should Do
AI models can improve efficiency and reduce costs, but they introduce new security and operational risks. Organisations deploying AI should:
Protect AI models as critical assets. Use the same security controls applied to intellectual property and sensitive data. Implement access restrictions, encryption, and activity monitoring.
Monitor AI behaviour continuously. Set up processes to regularly test AI outputs for accuracy and appropriateness. Don’t assume the model will stay consistent over time.
Use segmented models where appropriate. Consider separate AI models for different functions to reduce risk and prevent problems in one area from affecting others.
Clarify insurance coverage. Review policies to understand AI-related gaps and obtain additional coverage if needed.
Keep humans in control. Maintain human oversight for important decisions. AI should assist decision-making, not make final calls on critical matters.
Document AI decision processes. Create clear policies on when AI recommendations should be followed and when human judgment should override them.
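As a sketch of what such a documented policy can look like in practice, the routing rule below only lets an AI recommendation apply automatically when the decision is low-impact and the model’s confidence is high; the categories and threshold are illustrative assumptions, not recommended values.

```python
# Sketch: a documented escalation rule for AI recommendations.
# The critical categories and confidence threshold are illustrative assumptions.
CRITICAL_CATEGORIES = {"pricing", "medical", "legal", "hr"}

def route_recommendation(category: str, confidence: float,
                         threshold: float = 0.9) -> str:
    """Decide whether an AI recommendation is applied or escalated."""
    if category in CRITICAL_CATEGORIES or confidence < threshold:
        return "escalate_to_human"  # record the reviewer and their decision
    return "auto_apply"             # record the model version and inputs used
```

Writing the rule down, whether in code or in policy documents, is what creates the audit trail needed when an AI-driven decision is later questioned.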
The pressure to adopt AI quickly is strong, but rushing deployment without addressing these risks is dangerous. Taking time to implement proper controls is responsible management of technology that’s still evolving and whose full risks aren’t yet understood.
