Practical AI Governance: Moving Past the Hype Phase
- Luke Schafer

- Jan 14
- 6 min read
How achieving a Microsoft Advanced Specialisation taught us to govern AI for speed, not fear
SixPivot recently achieved the Microsoft AI Platform on Azure Advanced Specialisation. It’s a badge we’re proud of, but importantly, it signifies that we’ve moved past the "hype" phase and into the "how do we actually control this thing" phase.
AI isn’t traditional software. It’s non-deterministic, meaning you can give it the same input and get different outputs. Despite that unpredictability, organisations are deploying it into production right now. Generally, we see people falling into two groups. Some are concerned and cautious. Others are quite a bit more… cavalier. I’m not going to call them cowboys, but if the hat fits…
Both groups need the same thing: practical AI governance.
AI Governance isn’t about red tape or slowing down. It’s about building a framework that lets you ship faster because you trust what you’ve built.
This post breaks down how to wrap concrete governance around probabilistic systems so you can ship faster without breaking the bank, the law, or your customers' trust.
Note: While this guidance centres on Microsoft Foundry in Azure, these same concepts apply to all workloads that leverage AI.

Why AI Governance is Different
If you try to govern AI using the same playbook you use for a standard CRUD [Create/Read/Update/Delete] app, you’re going to have a bad experience. Traditional software is deterministic. If the code says 2 + 2, the answer is 4 every time (we hope!). You can test it, audit it, and set your watch by it.
AI systems are probabilistic. They deal in likelihoods, not certainties, creating a control paradox. If you try to lock the system down too tightly, you strip away the reasoning capabilities that make AI useful. If you leave it wide open, you’re one hallucination or jailbreak away from a front-page headline.
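To make the non-determinism concrete, here’s a minimal sketch, assuming an Azure OpenAI deployment called through the openai Python package (the endpoint, key, API version and deployment name are placeholders). Running it twice with the same prompt can return two differently worded answers:

```python
# Minimal sketch: the same prompt, sent twice, can come back worded differently.
# Endpoint, key, API version and deployment name below are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-key>",
    api_version="2024-06-01",
)

prompt = "Summarise our refund policy in one sentence."
for attempt in range(2):
    response = client.chat.completions.create(
        model="<your-deployment-name>",  # the deployment name, not the base model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,  # a higher temperature makes the variation easier to see
    )
    print(f"Run {attempt + 1}: {response.choices[0].message.content}")
```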
We are also seeing the regulatory landscape shifting. Between the EU AI Act and emerging Australian regulation and guidelines, "we didn't know it would say that" is no longer a valid defence. Ungoverned AI creates massive technical and legal debt that must eventually be repaid with interest.
Practical governance acts as an accelerator. When you have clear guardrails, your developers don't have to guess where the boundaries are. They can build with confidence, knowing the safety net is already in place.
Six Essential Components of Practical AI Governance
To move beyond the theory, we focus on six core pillars that move "Responsible AI" from lip service to production.
1. Guardrails
These are the technical controls that keep the AI’s behaviour within acceptable bounds. We look at this in three layers:
Input Guardrails: Preventing prompt injection attacks and ensuring sensitive data doesn't reach the model
Output Guardrails: Using tools like Azure AI Content Safety to filter harmful content or verify that responses stay on-topic (see the sketch after this list)
Behavioural/Technical Guardrails: Implementing rate limits and "human-in-the-loop" triggers for high-stakes actions, like processing a large refund
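As an example of the output layer, here’s a minimal sketch assuming the azure-ai-contentsafety Python SDK; the endpoint, key and severity threshold are placeholders to adapt to your own content policy:

```python
# Minimal output guardrail: block a model response if any harm category
# exceeds a severity threshold. Assumes the azure-ai-contentsafety SDK;
# endpoint, key and the threshold of 2 are placeholders for your own policy.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    "https://<your-resource>.cognitiveservices.azure.com",
    AzureKeyCredential("<your-key>"),
)

def is_safe_to_return(model_output: str, max_severity: int = 2) -> bool:
    # Analyse the model's response across the service's harm categories
    # and only release it if every category is at or below the threshold.
    result = client.analyze_text(AnalyzeTextOptions(text=model_output))
    return all(
        (category.severity or 0) <= max_severity
        for category in result.categories_analysis
    )
```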
2. Identity & Access Control
The "who" and "what" are just as important as the "how." You need to know exactly who is interacting with the AI and what data it is allowed to access, both in general and on behalf of different users.
Implement Row Level Security on all data sources, for example, using Azure RBAC and Entra ID
This is a large topic on its own. Essentially, you should aim for all ‘ingested’ data to carry its permission structure so that users can only access via AI what they can access via other means
Never use shared service accounts for AI integrations. Not only is auditing impossible, but RBAC also becomes, at best, problematic and, at worst, impossible. Instead, consider Delegated/OBO Access (see the sketch after this list)
Apply the principle of least privilege so the AI only accesses the specific data sources it needs for a task
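To show what that delegated access looks like, here’s a minimal sketch assuming MSAL for Python and an Entra ID app registration; the IDs, secret handling and downstream scope are placeholders:

```python
# Minimal sketch of delegated (on-behalf-of) access using MSAL for Python.
# Client ID, tenant, secret and scope are placeholders; the point is that the
# AI integration exchanges the caller's token rather than using a shared
# service account, so downstream RBAC and audit logs reflect the real user.
import msal

app = msal.ConfidentialClientApplication(
    client_id="<api-app-client-id>",
    client_credential="<client-secret-or-certificate>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)

def get_downstream_token(incoming_user_token: str) -> str:
    # Exchange the user's token for one scoped to the downstream data source.
    result = app.acquire_token_on_behalf_of(
        user_assertion=incoming_user_token,
        scopes=["https://<downstream-resource>/.default"],
    )
    if "access_token" not in result:
        raise PermissionError(result.get("error_description", "OBO token exchange failed"))
    return result["access_token"]
```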
3. Data Lineage & AIBOM
An AIBOM (AI Bill of Materials) is a complete inventory of all models, libraries, and data sources involved. If a vulnerability is discovered in a specific version of a model, your AIBOM tells you instantly if you’re at risk. Coupling this with data lineage allows you to trace an output back to its source, which may be important for compliance audits.
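An AIBOM doesn’t need heavyweight tooling to be useful. The sketch below is illustrative only: a version-controlled inventory whose field names are our assumptions rather than a formal schema, but which is already enough to answer "are we exposed?" when a model or library advisory lands:

```python
# Illustrative only: a minimal AIBOM entry kept in version control next to the
# workload. Field names are assumptions, not a formal schema; map them to a
# standard format if your compliance tooling requires one.
AIBOM_ENTRY = {
    "workload": "customer-support-copilot",
    "models": [
        {"name": "gpt-4o", "version": "<model-version>", "provider": "Azure OpenAI"},
    ],
    "libraries": [
        {"name": "azure-ai-contentsafety", "version": "<pinned-version>"},
    ],
    "data_sources": [
        {"name": "support-kb-index", "type": "Azure AI Search", "lineage": "<source-export-reference>"},
    ],
    "last_reviewed": "<date>",
}
```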
4. Evaluations & Red-Teaming
You can't just "vibe check" an AI and call it a day. Evaluations provide an effective way to validate your workloads (a minimal harness is sketched after this list):
Functional Testing: Is it actually accurate and consistent?
Safety Testing: Can we trick it into being biased or harmful?
Red-Teaming: This is where a person or system acts as the "adversary," trying to find the cracks in the system before a malicious user does
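Here’s a deliberately framework-agnostic sketch of the functional-testing idea; call_model, the test cases and the pass criteria are placeholder assumptions, and in practice Microsoft Foundry’s built-in evaluations can carry much of this load:

```python
# Framework-agnostic sketch of a functional evaluation run. call_model, the
# test cases and the pass criteria are placeholder assumptions - swap in your
# own workload and acceptance rules.
from typing import Callable

def run_functional_evals(call_model: Callable[[str], str]) -> float:
    test_cases = [
        {"prompt": "What is our standard refund window?", "must_contain": "30 days"},
        {"prompt": "Summarise the privacy policy in one sentence.", "must_contain": "personal information"},
    ]
    passed = 0
    for case in test_cases:
        # Run each case several times: a probabilistic system should be judged
        # on consistency, not on one lucky output.
        responses = [call_model(case["prompt"]) for _ in range(3)]
        if all(case["must_contain"].lower() in r.lower() for r in responses):
            passed += 1
    return passed / len(test_cases)
```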

5. Responsibility & Accountability
Before a system becomes operational, it’s important to establish clear responsibility and accountability, especially for high-risk or high-cost actions like system shutdowns, model rollbacks, and changes to safety features. Defining this governance structure is a fundamental requirement and aligns with standards like ISO 42001.
We recommend formalising this structure using an AI RACI (Responsible, Accountable, Consulted, Informed) matrix. This matrix delineates not only who contributes to the system's development, but who holds the authority and accountability in an emergency (or just for the change process).
Key governance areas requiring RACI definition include:
Emergency Authority: Defining the person/s or role with the definitive authority to initiate a system shutdown, perform a critical rollback, or temporarily disable a safety guardrail. This role is the primary Accountable party for runtime stability
Model Lifecycle Management: Defining who is responsible for managing routine updates, performing model swaps (e.g., migrating to a newer foundation model), and executing necessary platform configuration changes
Safety Guardrail Ownership: Determining who owns the configuration, review process, and adjustment authority for content safety filters, input/output controls, and other safety mechanisms based on feedback and red-teaming exercises
Data Governance Oversight: Identifying the party ultimately accountable for the quality, fairness (bias mitigation), lineage, and permissioning of all data used for training and retrieval
6. Operations & Monitoring
Governance doesn't end at deployment. AI models can "drift" over time as the data they interact with changes. It’s a good idea to set up continuous monitoring to track the following (a minimal telemetry sketch follows the list):
Model Performance: Request time, latency, etc.
Confidence Scores: How sure is the AI about this answer? This can be combined with Evaluations
Guardrail Triggers: How often is the system trying to go out of bounds?
Logging and Auditing: Capture interactions and refinements for future improvements and, importantly, security auditing
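Here’s a minimal sketch of what per-request telemetry might capture. The field names and the result shape are assumptions; in practice you’d wire this into your existing monitoring stack rather than a plain logger:

```python
# Minimal per-request telemetry sketch. Field names and the shape of `result`
# are assumptions; wire this into your existing monitoring stack
# (e.g. Application Insights) rather than a plain logger.
import json
import logging
import time

logger = logging.getLogger("ai.monitoring")

def invoke_with_telemetry(call_model, prompt: str, user_id: str) -> dict:
    start = time.perf_counter()
    result = call_model(prompt)  # assumed to return a dict of text + metadata
    logger.info(json.dumps({
        "user_id": user_id,  # who asked - supports security auditing
        "latency_ms": round((time.perf_counter() - start) * 1000, 1),
        "confidence": result.get("confidence"),  # if the workload surfaces one
        "guardrail_triggered": result.get("blocked", False),
        "prompt_chars": len(prompt),  # log sizes, not raw content, for sensitive data
    }))
    return result
```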
Standing on Shoulders: Tools and Frameworks
You don't need to reinvent the wheel. Instead, lean on established frameworks and the Azure ecosystem to quickly get these controls in place.
ISO 42001: The international standard for AI management systems, which is becoming the gold standard for regulated industries.
Microsoft Responsible AI Standard: A practical framework that works at scale.
Azure Well-Architected AI Workload Assessment: A tool for self-assessing your AI workload designs and governance.
Microsoft Foundry (formerly Azure AI Studio): Our primary hub for running safety evaluations and managing model deployments, agents, and agent workflows.
Azure AI APIM: A specialised AI-centric enhancement to Azure API Management Gateway.
Microsoft Purview: For data governance, sensitivity labels, and lineage. We are new to this space, but excited about how well it integrates.
Our advice? Start with the Azure Well-Architected AI Workload Assessment. It provides the most direct path to a secure Azure implementation without getting bogged down in academic theory. Design your solution, then complete a workload assessment to view your score and areas for improvement.
You should do this even if you don’t intend to use Azure. It’s mostly platform-agnostic.
The SixPivot Approach to AI Governance
When we work with clients, we don't just hand over a document. We build.
Phase 1: Assessment: We complete a Suitability Checklist to surface your risk profile and requirements, and identify where the "scary" stuff is
Phase 2: Solution Blueprint: We work with your team to produce a preliminary design for your workload and resources
Phase 3: Implementation: We iteratively help you build, deploy, configure and validate your solution
Phase 4: Documentation: We ensure the system is well documented, including a comprehensive “As-Built” artefact
Achieving the Microsoft AI Platform Advanced Specialisation wasn't about passing a single test. It required us to prove we've done this multiple times in the real world. Independent auditors reviewed our technical architectures, verified our client successes, and checked that our team knows how to implement these guardrails at scale.
It demonstrates that when we talk about AI risk management, we're speaking from experience, not from a brochure.
How to Get Started
If you’re in the Concerned camp, start small. Pick a low-risk internal use case, implement basic identity controls and monitoring, and ship it. You’ll learn more from one week of production than from six months of theory.
If you're in the Cavalier camp, do an inventory this week. Identify your highest-risk AI implementation and start an AIBOM. Governance debt compounds just like technical debt, and it's much cheaper to fix problems now than after an incident.
Additionally, everyone should:
Create a simple incident response plan specifically for AI failures
Set up basic monitoring for model confidence levels
Evaluate which framework (ISO 42001, NIST, or Microsoft RAI) fits your industry best
Moving Forward
AI governance isn't a "solved" problem because the technology is still evolving, but the path to doing it safely is becoming clearer. SixPivot has clocked hundreds of hours of implementation and hard-won lessons across cloud providers, and while the landscape keeps shifting, the core principles outlined here have emerged from that experience.
Good governance is a competitive advantage. It's what allows you to move from a "cool experiment" to a production system that customers trust.
Need help implementing an AI governance framework that works? Let's talk about your specific context and how we can help you ship faster.



