AI Governance Established and £200M AI Portfolio Assessed for Auto OEM
Industry: Manufacturing
Client
A global automotive manufacturer
Goal
To establish a Global AI Governance Framework for a multi-billion-pound enterprise operating in a regulated sector, enabling responsible AI adoption at scale and preparing the organisation for emerging AI regulations (e.g., EU AI Act).
Challenges
- A strong desire for AI education and adoption, combined with organisational risk aversion.
- Lack of oversight and governance of AI tool use, development, and procurement across the enterprise.
- Emerging regulatory expectations (e.g., EU AI Act).
Solution
Defined and implemented an Ethical Data and AI Governance Framework, including:
* Authored the Data and AI Ethics Principles;
* Authored the AI Policy;
* Established and chaired the enterprise AI Steering Group;
* Established a Risk Management Framework; and
* Implemented AI governance processes, including an automated AI impact and risk assessment process and supporting tooling.
Our Fractional Head of AI prioritised AI training:
* Delivered numerous AI training sessions, workshops, and immersion days to increase AI literacy;
* Facilitated team workshops to brainstorm ideas and define AI roadmaps;
* Provided role-based enablement (e.g., executives, senior leaders, product managers) and playbooks to drive adoption grounded in trustworthy AI principles.
The team delivered EU AI Act regulatory compliance initiatives:
* Implemented an AI system inventory and risk-tiering process;
* Educated product managers to identify, risk-categorise, and inventory AI initiatives;
* Developed playbooks for building the AI solution inventory;
* Defined AI solution lifecycle controls.
Impact
* Established enterprise-wide AI governance with clear accountability, improving executive confidence and reducing unmanaged AI risk.
* Conducted risk and impact assessments for a £200m portfolio of AI initiatives using a newly introduced assessment process and tooling.
* Created an AI inventory and a risk-tiering approach.
* Embedded trustworthy AI practices in delivery teams through training and playbooks, increasing adoption and consistency across business units.
Context
A global automotive manufacturer operating in a highly regulated sector sought to establish a Global AI Governance Framework to enable responsible AI adoption at scale and prepare the organisation for emerging regulations such as the EU AI Act. The enterprise, a multi-billion-pound company with distributed product and engineering teams across regions, required a single, coherent approach to govern AI tool use, development, procurement, and deployment while maintaining innovation velocity in manufacturing, supply chain and customer-facing systems. The programme aimed to balance a strong desire for AI education and use with a culturally embedded risk aversion by introducing practical controls, accountability and scalable processes.
Challenges
The organisation faced several intertwined challenges: there was a lack of oversight and governance over AI tool use, development and procurement across business units, creating pockets of unmanaged technical and regulatory risk. Leaders wanted to accelerate AI literacy and adoption but were risk-averse and uncertain how to combine rapid experimentation with robust controls. Emerging regulatory expectations such as the EU AI Act increased urgency: product teams needed clarity on classification, documentation and compliance obligations. The absence of an enterprise-wide AI inventory, inconsistent risk categorisation, and the lack of a standardised AI impact assessment approach undermined executive confidence and made it difficult to prioritise remediation or investment decisions.
Implementation
The programme defined and implemented an Ethical Data and AI Governance Framework to address these gaps. Our Fractional Head of AI authored the Data and AI Ethics Principles and the enterprise AI Policy, providing clear behavioural and technical guardrails aligned to regulatory requirements and manufacturing safety standards. An enterprise AI Steering Group was established and chaired to provide senior oversight, approve high-risk initiatives and drive cross-functional alignment. A Risk Management Framework was created to integrate AI-specific risks into existing enterprise risk processes and to define escalation paths and accountability.
To operationalise governance, the team implemented automated AI governance processes, including an AI impact and risk assessment process supported by tooling that allowed teams to evaluate privacy, safety, fairness and regulatory risk early in the lifecycle. EU AI Act readiness was advanced by rolling out an AI system inventory and risk-tiering process. Product managers were educated to identify, risk-categorise and inventory AI initiatives; the programme developed playbooks and templates for building the AI solution inventory and for classifying systems against the EU AI Act risk tiers. Lifecycle controls were defined for design, validation, deployment and monitoring phases to ensure ongoing compliance and traceability.
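The inventory and risk-tiering step described above can be sketched in code. The following is a minimal, hypothetical illustration: the tier names mirror the EU AI Act's broad risk categories, but the use-case lists, class names and mapping logic are simplified assumptions for illustration only, not the client's actual tooling or a substitute for legal classification.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Abbreviated, illustrative category lists. The EU AI Act's actual
# prohibited practices and Annex III high-risk areas are far broader
# and require legal interpretation, not simple tag matching.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_AREAS = {"recruitment", "credit_scoring", "safety_component"}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation"}


@dataclass
class AISystem:
    """One entry in the AI solution inventory."""
    name: str
    owner: str
    use_cases: set = field(default_factory=set)


def classify_risk_tier(system: AISystem) -> RiskTier:
    """Assign a provisional risk tier from tagged use cases."""
    if system.use_cases & PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if system.use_cases & HIGH_RISK_AREAS:
        return RiskTier.HIGH
    if system.use_cases & TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


# Build a small inventory and tier each entry.
inventory = [
    AISystem("CV screening model", "HR", {"recruitment"}),
    AISystem("Factory chatbot", "Ops", {"chatbot"}),
]
tiers = {s.name: classify_risk_tier(s).value for s in inventory}
```

In practice, a provisional tier like this would feed the steering group's review queue, with high-risk and prohibited classifications escalated for human and legal assessment rather than acted on automatically.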
AI training was made a priority to move from policy to practice. The team delivered multiple AI training sessions, workshops and immersion days to increase AI literacy across functions. Team workshops were used to brainstorm ideas and define AI roadmaps. Role-based enablement—targeting executives, senior leaders and product managers—was delivered through focused sessions and playbooks to drive adoption grounded in “trustworthy AI” principles. The combined approach of policy, tooling and learning ensured adoption was not only mandated but achievable for delivery teams.
Results
The initiative established enterprise-wide AI governance with clear accountability, improving executive confidence and materially reducing unmanaged AI risk. Using the newly introduced automated risk and impact assessment process and tooling, the organisation conducted assessments for a portfolio of AI initiatives valued at £200m, producing consistent risk ratings and remediation plans. An AI inventory and risk-tiering approach provided a single source of truth for AI systems across the business and enabled prioritised compliance workstreams for high-risk systems. Trustworthy-AI practices were embedded in delivery teams through targeted training and playbooks, increasing adoption and consistency across business units and accelerating the organisation’s readiness for regulatory change such as the EU AI Act. Senior leaders reported improved visibility into AI investments and risks, and the steering group now provides a repeatable governance forum to sustain responsible AI adoption at scale.
*Case studies reflect work undertaken by our Heads of AI either during their tenure with Head of AI or in prior roles before they were part of the Head of AI network; they are provided for illustrative purposes only and are based on conversations with our Heads of AI.