Governance, Risk and the Future of AI Policy Making
Category: News
Published: 2nd December 2025
Artificial intelligence is no longer an experimental technology reserved for innovators and early adopters. It is now embedded in everyday business operations, often in ways organisations are not fully aware of.
Staff use tools like ChatGPT to speed up tasks, marketing teams deploy automated content generators, and customer service teams rely on AI-driven chatbots, with some businesses incorporating autonomous AI agents directly into their products and services.
For IT and security teams, this new reality represents both a tremendous opportunity and a significant governance challenge. AI systems, whether internal or external, known or unknown, all process sensitive data, make decisions that affect customers, and create outputs that can expose an organisation to legal, ethical, and reputational risk.
Yet most companies still have no formal AI governance framework, no structured oversight, and no visibility into where AI is being used across the business.
The Emerging Standards: ISO 27001 and ISO 42001
Many IT leaders ask whether AI is covered by existing compliance frameworks. The answer is yes, but with important distinctions.
ISO 27001 remains crucial since AI systems inevitably process sensitive information. Organisations must continue identifying what data requires protection, where it resides, and which controls apply. But AI brings new operational, ethical, and security risks that ISO 27001 alone does not address.
This is why ISO 42001, the first AI Management System Standard, was published. It provides a structured framework for responsible, transparent and well-governed AI adoption.
Demand for AI policy is surging as businesses recognise the need for documented AI oversight. Risk Crew, a Red Helix company, is already guiding organisations through early adoption of ISO 42001 and helping them formalise their governance approach.
What Regulators and Governments Expect Today
Governments, regulators and industry bodies are becoming increasingly vocal about AI governance. Their guidance consistently emphasises understanding where AI is used, establishing clear policies for acceptable use, performing AI-specific risk assessments, assigning accountability, and providing staff training to minimise data leakage or misuse.
Although enforcement and policies vary, the direction of travel is clear: organisations will soon be expected to prove they have control over their AI systems.
The Most Urgent AI Controls for the Enterprise
Step 1: Identify AI use across your organisation
The first and most critical step for IT Directors is gaining full visibility into how AI is being used across the organisation. This means identifying two distinct areas: AI incorporated into products, services and websites, and AI used informally by employees through third-party tools such as ChatGPT or Google Gemini.
Most organisations have almost no oversight of staff usage. Employees may upload sensitive information, customer records or confidential IP into external AI platforms without realising the implications. Since many AI tools run through browsers or SaaS models, existing monitoring tools often fail to detect them.
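As an illustration of what this visibility exercise can look like in practice, the sketch below flags traffic to well-known generative AI endpoints in an exported web proxy or DNS log. The CSV layout (user and domain columns), the domain list and the file name are assumptions made for the example, not a definitive inventory method; a real deployment would draw on your secure web gateway, CASB or DNS filtering platform.

```python
# Minimal sketch: flag outbound requests to well-known generative AI
# endpoints in an exported proxy/DNS log. Assumes a CSV with 'user' and
# 'domain' columns; adapt the format and domain list to your own tooling.
import csv
from collections import defaultdict

KNOWN_AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "gemini.google.com",
    "claude.ai", "copilot.microsoft.com",
}

def summarise_ai_usage(log_path: str) -> dict[str, set[str]]:
    """Return a mapping of user -> AI domains they have contacted."""
    usage: dict[str, set[str]] = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in KNOWN_AI_DOMAINS:
                usage[row["user"]].add(domain)
    return usage

if __name__ == "__main__":
    for user, domains in summarise_ai_usage("proxy_log.csv").items():
        print(f"{user}: {', '.join(sorted(domains))}")
```

Even a crude report like this is often the first time an organisation sees how widely staff are already relying on external AI tools.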
Step 2: Create an AI risk assessment
Once usage is understood, organisations must conduct a formal AI risk and impact assessment. This should evaluate the likelihood of model bias, discriminatory outcomes, hallucinations, misinformation, data misuse, and unauthorised access. Both customer-facing systems and internal productivity tools require a high level of scrutiny.
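As a simple illustration of how such an assessment might be recorded, the sketch below scores each identified risk with a standard likelihood-by-impact matrix. The example entries, the 1 to 5 scales and the escalation threshold are illustrative assumptions rather than a prescribed methodology.

```python
# Minimal sketch of an AI risk register using a likelihood x impact matrix.
# The categories mirror those discussed above; scales and threshold are
# illustrative assumptions, not a mandated scoring scheme.
from dataclasses import dataclass

@dataclass
class AIRisk:
    system: str       # e.g. "customer service chatbot"
    category: str     # e.g. "hallucination", "model bias", "data misuse"
    likelihood: int   # 1 (rare) to 5 (almost certain)
    impact: int       # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    AIRisk("customer service chatbot", "hallucination", 4, 3),
    AIRisk("staff use of ChatGPT", "data misuse", 3, 5),
    AIRisk("marketing content generator", "misinformation", 2, 3),
]

# Risks scoring 12 or above (an assumed threshold) are escalated for treatment.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    flag = "ESCALATE" if risk.score >= 12 else "monitor"
    print(f"{risk.score:>2}  {flag:<8}  {risk.system} / {risk.category}")
```

Whatever format is used, the point is that each AI system, internal or customer-facing, has a documented owner, score and treatment decision.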
Step 3: Create tailored policies
From there, IT leaders can establish tailored AI policies. These typically cover acceptable use, the handling of confidential information, rules for internal generative AI tools, and guidelines for how AI and machine learning may be applied to business operations.
There is no universal policy. Each organisation requires its own approach based on its structure, risk appetite and existing technology stack.
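To show how a confidential-information rule in such a policy could be backed by lightweight tooling, the sketch below runs a crude pattern check before text is sent to an external AI tool. The patterns and categories are illustrative assumptions only; production data loss prevention controls are considerably more sophisticated.

```python
# Illustrative sketch only: a crude pre-submission check that an acceptable-use
# policy might back with tooling, flagging obvious sensitive patterns before
# text is pasted into an external AI tool. Patterns here are assumptions.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK National Insurance number": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def policy_violations(text: str) -> list[str]:
    """Return the sensitive-data categories detected in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

prompt = "Summarise this complaint from jane.doe@example.com about invoice 4471."
hits = policy_violations(prompt)
if hits:
    print("Blocked by acceptable-use policy:", ", ".join(hits))
```

The value of a check like this is less in catching every leak and more in reinforcing the policy at the moment staff are about to share data.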
Why Enforcing AI Policies Is Difficult
First, it is essential to understand the capabilities and limitations of your current security stack when it comes to monitoring the AI usage identified in Step 1.
While it is technically possible to block access to AI platforms entirely, doing so is counterproductive. Staff will simply find workarounds, and productivity gains will be lost.
The practical solution is governance, not prohibition. Clear policies, appropriate controls, and ongoing training are essential. Employees must understand what information is safe to enter into AI systems, how to recognise AI errors such as hallucinations or bias, and when to escalate potentially harmful outputs.
What AI Policy Will Look Like in Five Years
AI governance will evolve rapidly as models become more autonomous and interconnected. Organisations can expect mandatory documentation of AI usage, stricter auditing requirements, sector-specific regulations, clearer accountability for automated decisions, and enhanced transparency obligations.
As agent-based systems become the norm, policies will need to address interactions between AI models and how they collectively influence business decisions.
AI policy will not be a box-ticking exercise. It will be as central to digital governance as information security policies are today.
AI adoption is accelerating faster than most companies can manage. But the solution is clear and achievable: identify where AI is used, assess the risks, set enforceable policies, and align with recognised standards such as ISO 42001.
Businesses that take these steps now will secure a competitive advantage, protect their data, safeguard their customers, and prepare for the coming wave of AI regulation.