

White House National AI Policy Framework: What CISOs Need to Know and Do Now

The White House released its National Policy Framework for Artificial Intelligence on March 20, 2026, and every CISO needs to read past the headlines. The document is not a law. It is not a regulation. It is a set of legislative recommendations directed at Congress — non-binding by design — outlining how the Trump administration believes the federal government should approach AI governance. What it is, practically speaking, is the clearest signal yet of where federal AI policy is headed and how that trajectory should reshape your organization’s approach to AI risk management, compliance planning, and governance program design. The framework follows Executive Order 14365, signed in December 2025, which directed federal agencies to identify and challenge state AI laws that conflict with national AI strategy. Together, these actions set up the central tension that enterprise security leaders now have to navigate: a federal posture that is explicitly moving toward preempting state-level AI...
Recent posts

IAM Metrics in Practice: Real Numbers, Real Scenarios, Real Conversations

A companion post to: IAM Metrics That Actually Matter: Proving Risk Reduction and Value to Every Level of the Organization The previous post laid out the framework: which IAM metrics matter, why they matter, and how to use them to tell a risk reduction and value story that resonates at every level of the organization. But frameworks without numbers are just theory. Security leaders need to see what these metrics actually look like when you run them against a real environment — the before states, the after states, the calculations, and the language you use to present them. This post walks through each major metric category with concrete examples drawn from the kinds of environments I have seen across more than two decades in this field. The numbers are composites — realistic representations of what organizations at different maturity levels actually look like — not a single case study. But they are close enough to reality that you should be able to map them directly to your own en...
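The before/after framing the post describes can be sketched in a few lines. The figures below are illustrative placeholders in the spirit of the composites the post mentions, not numbers from any real environment, and `percent_change` is a hypothetical helper introduced here for illustration:

```python
# Hypothetical before/after figures for two common IAM metrics.
# All values are illustrative placeholders, not real measurements.

def percent_change(before: float, after: float) -> float:
    """Return the percentage change from the before state to the after state."""
    return (after - before) / before * 100

# Orphaned accounts: accounts with no accountable owner (illustrative counts)
orphaned_before, orphaned_after = 4200, 350
orphaned_delta = percent_change(orphaned_before, orphaned_after)

# MFA coverage: share of workforce identities with MFA enforced (illustrative)
mfa_before, mfa_after = 0.61, 0.97

print(f"Orphaned accounts: {orphaned_before} -> {orphaned_after} "
      f"({orphaned_delta:+.1f}%)")
print(f"MFA coverage: {mfa_before:.0%} -> {mfa_after:.0%}")
```

The point of the calculation is the conversation it enables: a "-91.7% orphaned accounts" figure lands with a CFO in a way that connector configurations never will.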

OpenClaw and Personal AI Assistants: Emerging Threats and What CISOs Need to Do Now

OpenClaw became the fastest-growing GitHub project in history almost overnight. It crossed 300,000 stars in early 2026, surpassing milestones that took Linux and React years to reach. That kind of adoption velocity is a signal security teams cannot afford to miss — because it means OpenClaw is almost certainly already running inside your organization, on devices you manage, connected to accounts and data your security program is responsible for protecting. The security community has described OpenClaw as “an absolute nightmare” from a risk perspective. That assessment is accurate, and understanding why requires understanding what OpenClaw actually is and how it operates — because it is not a chatbot. It is something with fundamentally different security implications. What OpenClaw Actually Is OpenClaw markets itself as “the AI that actually does things.” That description is technically precise and should raise immediate flags for any security practitioner. Where traditional AI tools an...

IAM Metrics That Actually Matter: Proving Risk Reduction and Value to Every Level of the Organization

I have been in information security for more than twenty years, and one of the conversations I have had more times than I can count goes something like this: the security team has spent eighteen months building out an identity and access management program. They have deployed a new IGA platform, cleaned up thousands of orphaned accounts, enforced multi-factor authentication across the enterprise, and automated the joiner-mover-leaver lifecycle. And then someone in the CFO’s office asks a simple question: what did we actually get for that investment? If your answer is a technical presentation about policy enforcement rules and connector configurations, you have already lost the room. If your answer is a blank stare because you never built a metrics framework to begin with, you have lost the budget cycle too. IAM is one of the highest-value security investments an organization can make. Identity is the new perimeter. Credential-based attacks are the dominant breach vector. And access...

IAM for AI Agents: Why Your Identity Program Isn't Ready

AI agents are multiplying inside enterprise environments faster than identity governance programs can track them. They are being deployed by developers, operations teams, and business analysts — often without security involvement, without formal registration, and without the kind of access scoping discipline that any human identity would require. The service accounts they run under accumulate permissions. The credentials they use do not rotate. The ownership of those identities is tied to whoever built the agent, and when that person moves on, the agent keeps running with nobody accountable for what it can access or what it is doing. This is not a theoretical future risk. It is the current state in most organizations that have started adopting AI automation in any meaningful way. And it represents a significant gap in the IAM frameworks most security programs are built around — because those frameworks were designed for human identities, and AI agents are something fundamentally differ...