The White House released its National Policy Framework for Artificial Intelligence on March 20, 2026, and every CISO needs to read past the headlines. The document is not a law. It is not a regulation. It is a set of legislative recommendations directed at Congress — non-binding by design — outlining how the Trump administration believes the federal government should approach AI governance. What it is, practically speaking, is the clearest signal yet of where federal AI policy is headed, and of how that trajectory should reshape your organization’s approach to AI risk management, compliance planning, and governance program design.

The framework follows Executive Order 14365, signed in December 2025, which directed federal agencies to identify and challenge state AI laws that conflict with national AI strategy. Together, these actions set up the central tension that enterprise security leaders now have to navigate: a federal posture that is explicitly moving toward preempting state-level AI...
A companion post to: IAM Metrics That Actually Matter: Proving Risk Reduction and Value to Every Level of the Organization

The previous post laid out the framework: which IAM metrics matter, why they matter, and how to use them to tell a risk reduction and value story that resonates at every level of the organization. But frameworks without numbers are just theory. Security leaders need to see what these metrics actually look like when you run them against a real environment — the before states, the after states, the calculations, and the language you use to present them.

This post walks through each major metric category with concrete examples drawn from the kinds of environments I have seen across more than two decades in this field. The numbers are composites — realistic representations of what organizations at different maturity levels actually look like — not a single case study. But they are close enough to reality that you should be able to map them directly to your own en...