
"Human in the Loop" Is Not a Control
Automation bias, rubber-stamping, and the most dangerous assumption in AI governance
AI governance frameworks, ethics, risk management, and compliance.

Shadow tools, agentic risk, and the governance gap in AI-assisted development

When your AI agent holds the keys

Recovery, containment, authority

Ten minutes before the pre-read goes out, the general counsel forwards an email thread marked "URGENT."

Earlier this week, I had the privilege of presenting to India’s business leaders via Economic Times’ livestream, where thousands watched live.

A Fortune 500 CTO called me last week, frustrated. His company had just burned through $2.3 million on an AI implementation that barely moved the needle.

In the rapidly evolving landscape of artificial intelligence, generative AI stands out as a groundbreaking innovation.
Navigating AI regulations, building ethics frameworks, and staying compliant in India.
An easy-to-understand introduction to bias in AI systems with real-world examples and everyday analogies.
Technical deep-dive into bias in machine learning systems, including detection methods, mitigation strategies, and implementation best practices.
What is AI bias, why it's especially consequential in India, and a practical framework for detecting and mitigating it in your AI systems.
When your AI agent holds the keys: the new governance gap for agentic assistants.

How AI learns human prejudice through four channels — data, labeling, design, and deployment — and what you can do to spot it.

What to think about before you paste anything into an AI tool — storage, training, access risks, and practical protection steps.

Four ethical principles and a five-step decision framework for when workplace AI rules do not give you a clear answer.

Existing laws already apply to AI — civil rights, HIPAA, GLBA, and new state rules. Here is what matters for your work.
What Is AI Governance: AI systems are fundamentally different from traditional software — they are probabilistic, opaque, autonomous, and data-dependent.
AI Risks and Harms: Discrimination in automated decisions (hiring, lending, insurance).
Responsible AI Principles: Bias testing using demographic parity, equalized odds, and disparate impact analysis.
Building an AI Governance Program: AI governance officer or chief AI ethics officer.
AI Lifecycle Policies: Use case assessment and approval: when AI is (and isn't) the right solution.
AI Developers vs. Deployers vs. Providers: Developer: builds the model or AI system.
Third-Party AI Risk: Most organizations don't build AI — they buy it, embed it, or use it as a service.
AI Governance Maturity: Level 1 — Ad hoc: AI experiments without oversight or policy.
Cross-Functional AI Governance: AI impacts cannot be understood by examining technology alone.
How Data Privacy Laws Apply to AI: Notice requirements for AI-processed data.
AI and Intellectual Property Law: Can copyrighted material be used for AI training? Current legal landscape.
AI and Non-Discrimination Law: Title VII and AI in hiring, promotion, termination.
AI and Consumer Protection Law: Section 5 unfair or deceptive practices applied to AI.
AI and Product Liability: Design defects: flawed training, biased data, inadequate testing.
The EU AI Act Explained: Social scoring, manipulative AI, untargeted facial scraping, real-time biometric identification.
EU AI Act: Risk management system: continuous, iterative, throughout lifecycle.
General-Purpose AI Under the EU AI Act: What qualifies as a general-purpose AI model under the Act.
NIST AI Risk Management Framework: Core functions, categories, and subcategories.
ISO 42001 Explained: The first certifiable AI Management System (AIMS) standard.
NIST AI RMF vs. ISO 42001 vs. EU AI Act: OECD = principles, NIST = voluntary framework, ISO 42001 = certifiable standard, EU AI Act = law.
The OECD AI Principles: Inclusive growth and sustainable development.
AI Incident Management: Brittleness, opacity, and cascading effects distinguish AI incidents from IT incidents.
Transparency Obligations for AI: Transparency by risk tier: prohibited, high-risk, limited risk, minimal risk.
AI Impact Assessments: Privacy Impact Assessment (PIA/DPIA).
AI Vendor Contracts: Data ownership and data handling provisions.
Deploying AI Responsibly: Translating organizational policies to the deployment context.
Ongoing AI Governance: Internal audit vs. external audit vs. algorithmic audit.
Secondary Risks and Unintended Uses: Function creep: when AI is used beyond its intended purpose.
AI Decommissioning: Regulatory changes that render the system non-compliant.
External Communication Plans for AI: What stakeholders need to know about your AI — before anything goes wrong.
Agentic AI Governance: The shift from recommendation to action — and why it changes everything.
AI Governance for Financial Services: SR 11-7: Federal Reserve model risk management guidance applied to AI.
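The bias-testing metrics named under "Responsible AI Principles" above can be sketched in a few lines. This is an illustrative sketch, not part of any framework's reference implementation: the group data is hypothetical, decisions are assumed binary (1 = favorable outcome), and the 0.8 cutoff reflects the common "four-fifths rule" convention for disparate impact.

```python
# Illustrative bias-metric sketch. Assumes binary decisions per person
# (1 = favorable, e.g. advanced to interview); all data below is hypothetical.

def selection_rate(decisions):
    """Fraction of favorable decisions within one group."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in selection rates between two groups (0 = parity)."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

def disparate_impact_ratio(protected, reference):
    """Selection rate of the protected group divided by the reference
    group's rate. Values below 0.8 are often flagged under the
    four-fifths rule convention."""
    return selection_rate(protected) / selection_rate(reference)

# Hypothetical hiring outcomes for two demographic groups
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # reference group: 6/8 selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # protected group: 3/8 selected

dp_gap = demographic_parity_difference(group_a, group_b)
di = disparate_impact_ratio(group_b, group_a)
flagged = di < 0.8  # four-fifths rule check
```

Equalized odds, the third metric named above, extends the same idea by comparing true-positive and false-positive rates across groups rather than raw selection rates, so it additionally requires ground-truth labels.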