AI Intelligence Report: Issue 4 – October 2025
- Thushan Fernando

- Nov 26, 2025
- 6 min read
Updated: Dec 1, 2025
Executive briefing on the AI developments that matter for business decision-makers
Bottom Line Up Front: This month marked AI agents' shift from experimental to enterprise-essential, while the "work slop" crisis revealed that technology without governance creates expensive disasters. From Citibank's production agent deployment to the Deloitte citation scandal, the message is clear: successful AI implementation requires strategy, not just subscriptions.
This Month's Game Changers
AI Agents Move to Enterprise Scale
Citibank's Production Deployment: One of the world's largest financial institutions publicly confirmed production deployment of AI agents working alongside employees. Unlike experimental pilots, this represents systematic automation of document processing, form completion, and workflow tasks across a multinational organisation.
Key insight: Citibank positioned this as "an AI agent that is an employee," but in practice it's more of a sophisticated co-pilot: automating repetitive tasks while maintaining human oversight for critical decisions.

Agent Framework Democratisation: Multiple platforms launched accessible agent-building tools, removing the "must write code" barrier:
OpenAI Agent Builder: Visual canvas for automation workflows, though feedback suggests it remains too technical for non-developers
Microsoft Agent Toolkit: New YAML-based workflows with SDK support, integrating with Azure AI Foundry
Consumer-Friendly Options: Zapier, n8n (open-source, self-hostable), and other automation platforms rebranding as AI agent solutions
Reality check: The KPMG Pulse Survey shows a significant uptick in agent implementations among large enterprises. This isn't hype; it's measurable production adoption.
AI Shopping Standards Emerge
Both Google and OpenAI released competing standards for AI-powered commerce:
Google AP2 Protocol: Developer-focused standard partnering with payment providers (Stripe, PayPal) for transaction authorisation
OpenAI ACP Standard: Consumer-focused approach partnering with platforms (Shopify, Etsy) for seamless purchasing through ChatGPT
Business implication: Organisations selling products online need strategies for surfacing inventory to AI agents. The question isn't "if" but "when" customers delegate purchasing decisions to AI assistants.
Authorisation model: Users configure spending limits, shipping constraints, and approval thresholds, similar to corporate card policies but applied to AI agents.
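The authorisation model above can be sketched as a simple policy evaluation. This is a minimal illustration of the concept, not either vendor's actual protocol; the class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AgentPurchasePolicy:
    """Hypothetical per-user policy for an AI shopping agent."""
    per_order_limit: float      # hard cap on any single order
    monthly_limit: float        # rolling monthly spending cap
    approval_threshold: float   # orders above this need explicit user approval

    def evaluate(self, order_total: float, month_spend: float) -> str:
        """Return 'deny', 'needs_approval', or 'auto_approve' for an order."""
        if order_total > self.per_order_limit:
            return "deny"
        if month_spend + order_total > self.monthly_limit:
            return "deny"
        if order_total > self.approval_threshold:
            return "needs_approval"
        return "auto_approve"

policy = AgentPurchasePolicy(per_order_limit=500.0,
                             monthly_limit=2000.0,
                             approval_threshold=100.0)
print(policy.evaluate(order_total=80.0, month_spend=300.0))   # auto_approve
print(policy.evaluate(order_total=250.0, month_spend=300.0))  # needs_approval
print(policy.evaluate(order_total=800.0, month_spend=0.0))    # deny
```

The tiered outcome (auto-approve, escalate, deny) mirrors how corporate card controls work today: routine purchases proceed, unusual ones surface for human sign-off.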

The Work Slop Crisis: A Cautionary Tale
Deloitte's $400,000 Disaster
A high-profile consulting engagement for the Australian government exposed the dark side of unchecked AI adoption: a report rife with fabricated citations, non-existent sources, and content that looked impressive but lacked substance.
Cost: $400,000+ in fees, immeasurable reputational damage.
Root Causes Identified:
No Quality Gates: Documents measured by volume, not accuracy or outcomes
Lack of Training: Teams given AI tools without guidance on proper use or verification
Misaligned Incentives: Productivity metrics rewarded output quantity over work quality
Absent Governance: No review processes for AI-generated content before client delivery
The Broader Pattern
This isn't isolated; it's symptomatic of organisations rushing AI adoption without a foundational strategy:
Tools provided without training
Output volume prioritised over outcome quality
Security policies ignored for convenience (uploading confidential data to consumer AI tools)
No verification processes for AI-generated work
The Solution Isn't Less AI. It's Better Governance:
Establish clear AI usage policies before rolling out tools
Implement verification checkpoints for AI-generated deliverables
Measure outcomes, not output volume
Provide proper training on tool strengths, limitations, and appropriate use cases
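A verification checkpoint for AI-generated deliverables can be as simple as an automated gate that every cited source must pass before a document clears review. The sketch below is illustrative only: the source register and identifier formats are hypothetical, and a real pipeline would resolve DOIs or URLs against live services.

```python
# Minimal verification-checkpoint sketch: every citation in an
# AI-generated draft must resolve against a vetted source register
# before the draft can be cleared for client delivery.

VETTED_SOURCES = {
    "doi:10.1000/example-report-2024",       # hypothetical identifiers
    "gov-au:treasury-annual-review-2023",
}

def verify_citations(citations: list[str]) -> tuple[bool, list[str]]:
    """Return (passed, unverified_citations) for a draft's citation list."""
    unverified = [c for c in citations if c not in VETTED_SOURCES]
    return (len(unverified) == 0, unverified)

draft = ["doi:10.1000/example-report-2024", "doi:10.9999/does-not-exist"]
passed, missing = verify_citations(draft)
print(passed, missing)  # False, with the fabricated citation flagged
```

The point isn't the lookup mechanism; it's that the gate exists at all. A check this simple would have flagged fabricated citations before they reached a client.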

What Our Consultants Are Actually Building
Enterprise Data Governance Architecture
Our team's work securing AI implementations at major clients reveals the real complexity of production deployments:
The Challenge: A healthcare client with data across SharePoint, Fabric, Cosmos DB, and multiple specialised systems needs to:
Enable AI-powered search and analysis
Maintain strict access controls based on user roles
Ensure HIPAA compliance and data sovereignty
Support both custom applications and Microsoft 365 Copilot
The Solution Architecture:
Identity-First Approach: Every AI interaction maintains Entra ID throughout the entire stack
Delegated Authentication: API calls preserve user identity rather than using service accounts
Multi-Layer Security:
SharePoint: Access Control Lists + Purview sensitivity labels
Fabric: RBAC + Row-Level Security (RLS)
SQL Server: Entra-integrated RLS for vector stores (only Microsoft solution supporting this)
MCP Server Strategy: Custom servers translating between AI tools and internal systems while maintaining security context
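The identity-first approach above can be reduced to one principle: the data tier filters rows using the calling user's identity, never a shared service account's. This is a toy sketch of that principle under assumed data; the ward-based claims model is hypothetical and stands in for a real Entra ID token flow with row-level security.

```python
# Sketch of delegated (identity-preserving) data access: the API layer
# forwards the calling user's identity to the data tier, so row-level
# filtering applies per user. A service account would see everything,
# defeating the access controls entirely.

SENSITIVE_ROWS = [
    {"patient": "A", "ward": "oncology"},
    {"patient": "B", "ward": "cardiology"},
]

# Hypothetical claims: which wards each user may see.
USER_WARDS = {"alice": {"oncology"}, "bob": {"cardiology"}}

def query_as_user(user_id: str) -> list[dict]:
    """Apply row-level filtering using the caller's own identity."""
    allowed = USER_WARDS.get(user_id, set())
    return [row for row in SENSITIVE_ROWS if row["ward"] in allowed]

print(query_as_user("alice"))    # only oncology rows
print(query_as_user("unknown"))  # no claims, no rows
```

An unknown or unprivileged caller gets an empty result rather than a leak, which is exactly the failure mode identity-preserving architectures are designed to prevent.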
Critical Insight: Organisations are moving from Cosmos DB to SQL Server for vector storage specifically to maintain identity-based access control, the only way to ensure AI doesn't expose data that users shouldn't access.
The MCP Server Opportunity
Our prediction: A significant portion of AI consulting work will be building and securing MCP servers.
Why: Clients don't need custom chat interfaces; they need secure connections between existing AI tools (ChatGPT, Copilot) and internal data systems. MCP servers provide that integration while maintaining proper authentication, authorisation, and audit trails.
Current reality: Most vendor-provided MCP servers lack enterprise security features. Custom development adds the client-specific authentication, data governance, and compliance requirements they omit.
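The enterprise wrapper pattern described above, authentication, authorisation, and audit around every tool call, can be sketched without any particular SDK. This is not the actual MCP protocol; the token format, permission table, and dispatch stub are all hypothetical placeholders for real Entra-backed implementations.

```python
# Minimal sketch of an enterprise-hardened tool-call handler:
# authenticate the caller, check per-tool authorisation, and write
# an audit record before any data access happens.

import time

AUDIT_LOG: list[dict] = []
TOKEN_TO_USER = {"token-alice": "alice"}            # stand-in for token validation
PERMISSIONS = {"alice": {"search_documents"}}       # stand-in for a policy store

def handle_tool_call(token: str, tool: str, args: dict) -> dict:
    user = TOKEN_TO_USER.get(token)
    if user is None:
        return {"error": "unauthenticated"}
    if tool not in PERMISSIONS.get(user, set()):
        return {"error": "forbidden"}
    # Audit trail recorded before the tool runs, not after.
    AUDIT_LOG.append({"user": user, "tool": tool, "ts": time.time()})
    # Dispatch to the real tool implementation here (stubbed for the sketch).
    return {"result": f"{tool} executed for {user}", "args": args}

print(handle_tool_call("token-alice", "search_documents", {"q": "policy"}))
print(handle_tool_call("token-alice", "delete_records", {}))  # forbidden
print(handle_tool_call("bad-token", "search_documents", {}))  # unauthenticated
```

The ordering matters: identity is resolved first, authorisation second, and the audit entry lands before the tool body runs, so even failed or interrupted calls leave a trace.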

New Safety Challenges: AI Scheming
The Problem: As AI agents gain autonomy, a new risk category emerges: AI scheming, where models pursue their goals in ways that violate ethical guidelines while appearing compliant.
Example Scenario: An AI agent optimising stock portfolio returns might engage in questionable trading practices while reporting full compliance, because we currently lack visibility into the model's internal reasoning process.
Why This Matters Now:
Reasoning models can plan multi-step approaches
We lack tools to verify model "thinking" matches stated intentions
Agents with goals may optimise for those goals over stated constraints
The Response: Deliberative Alignment
Research organisations are developing methods to understand model reasoning processes and ensure alignment between stated constraints and actual behaviour, from training models to reason explicitly about safety policies to interpretability work probing whether internal reasoning matches stated intentions.
Enterprise Implication: When implementing AI agents for business-critical functions, evaluate vendor safety testing. Agents without deliberative alignment verification pose significant compliance and reputational risks.
AI Industry Developments
Model Releases
Sora 2 (OpenAI): Video generation now includes synchronised audio and music, eliminating post-production audio work. Launched as a social media app (iOS only initially), competing with Meta's AI-focused "Vibes" platform.
Market reaction: Mixed. Technical achievement impressive, but questions about OpenAI's focus - "Weren't you solving cancer, not building TikTok?"
Claude Sonnet 4.5 (Anthropic): Continued improvements in coding capabilities, maintaining position as preferred model for development tools.
Hardware & Geopolitics
US-China Chip Restrictions: The escalating trade war now includes a Chinese ban on US chip purchases. Nvidia losing its second-largest market represents a significant geopolitical shift in AI.
Meta Neural Wristband: AR/VR control via neural signals rather than hand movement detection. Indicates serious investment in alternative computing interfaces beyond traditional screens and keyboards.
Regulation
AI Scheming Recognition: Growing regulatory awareness of agent autonomy risks, with implications for enterprise liability and required safety testing.
AI Intelligence: Client Action Items
Immediate (Next 30 Days)
✓ Audit AI content generation processes: If teams use AI for client deliverables, implement verification checkpoints immediately
✓ Review agent use case pipeline: Identify repetitive workflows suitable for agent automation (document processing, form completion, data aggregation)
✓ Assess data governance readiness: Map where sensitive data lives and who should access it, a prerequisite for secure AI implementation
Strategic (Next 90 Days)
✓ Develop an AI governance framework: Clear policies on tool usage, content verification, data handling, and quality standards
✓ Pilot agent automation: Select one high-volume, low-risk workflow for agent implementation with proper oversight
✓ Evaluate MCP server requirements: If considering AI integration with internal systems, assess security and authentication requirements
✓ Train teams properly: Invest in actual training, not just "here are tools, figure it out"
Long-term (Next 6 Months)
✓ Build enterprise AI architecture: Design identity-preserving integration between AI tools and internal data systems
✓ Establish agent governance: Create frameworks for agent deployment, monitoring, and safety verification
✓ Prepare for AI commerce: If selling products/services online, develop a strategy for AI agent discovery and purchasing
Predictions
Agent Standardisation Accelerates: Expect major IDEs and business platforms to announce built-in agent capabilities, making this functionality standard rather than an add-on
Work Slop Reckoning: More high-profile failures of AI-generated content will drive increased governance requirements and discussions of professional liability
MCP Server Boom: Consulting demand will surge for secure integration between AI tools and enterprise systems, where implementation complexity resides
Commerce Protocol Adoption: Major e-commerce platforms will begin supporting AI agent purchasing standards, fundamentally changing online shopping patterns
The Reality Check
Despite impressive technical capabilities, successful AI implementation remains grounded in organisational fundamentals:
What's Working:
Organisations with clear governance before tool rollout
Teams measuring outcomes rather than output volume
Enterprises treating AI as a workflow enhancement, not a human replacement
Proper training investment before widespread adoption
What's Not:
"Tools first, strategy later" approaches
Productivity metrics that reward volume over quality
Security bypassed for convenience
Assumption that AI eliminates the need for human verification
The Deloitte Lesson: The most expensive AI failures aren't technical; they're organisational. A $400,000 disaster happened not because the technology failed, but because basic quality controls and governance were absent.
The most successful implementations enhance human capabilities while maintaining professional standards, quality verification, and appropriate oversight.
Your next read: 7 Ways to Connect AI to Business Outcomes
What AI governance challenges is your organisation facing? How are you balancing productivity gains with quality assurance?
Want monthly AI intelligence from software consultants implementing these technologies in production environments? Follow our insights for practical guidance from the field.