NIST CSF 2.0 Risk Management Strategy (GV.RM): Turning Risk Tolerance Into Actionable Cyber Decisions

Recent posts

NIST CSF 2.0 Organizational Context (GV.OC): Governing Cybersecurity With Business Clarity

As a CISO in a large, global organization, I’ve learned that most cybersecurity failures are not caused by missing controls or weak tools. They are caused by misalignment between security, business priorities, risk tolerance, and decision-making authority. That is precisely why NIST CSF 2.0 elevated governance and introduced greater clarity around Organizational Context (GV.OC).

GV.OC is not a documentation exercise. It is the discipline of ensuring cybersecurity risk management is firmly grounded in who the organization is, how it operates, and what truly matters to the business. When Organizational Context is weak, security programs drift. When it is strong, cybersecurity becomes an integrated business capability rather than a defensive cost center.

What Organizational Context (GV.OC) Really Is

In NIST CSF 2.0, GV.OC focuses on ensuring the organization’s mission, objectives, stakeholders, risk environment, and operating constraints are clearly understood and incorporated into ...

Generative AI Policies: Aligning Organizational Governance with the NIST AI Risk Management Framework

Generative AI is moving faster than most organizational control structures. Employees are already using tools like ChatGPT, Copilot, Claude, and image generators to write code, summarize documents, build presentations, and analyze data, often without security or legal review. Banning generative AI outright is rarely effective. Ignoring it is worse.

What organizations need is a clear, enforceable Generative AI policy that:

- Enables productivity
- Protects sensitive data
- Manages legal, ethical, and security risk
- Aligns with a recognized framework

The NIST AI Risk Management Framework (AI RMF) provides a strong foundation for doing exactly that.

Why Generative AI Policies Matter

Generative AI introduces new risk categories that traditional IT or acceptable-use policies do not fully address:

- Data leakage through prompts and outputs
- Model hallucinations treated as fact
- Intellectual property exposure
- Bias and ethical risk
- Shadow AI adoption
- Regulatory and compliance gaps

A well-designed pol...

Generative AI Governance: Using the NIST Framework to Build Trust, Reduce Risk, and Lead Secure AI Adoption

Generative AI has moved faster than nearly any technology security leaders have dealt with. Tools that can generate text, code, images, and data insights are now embedded into productivity platforms, security tooling, development workflows, and business operations, often before security teams are formally involved.

For CISOs, this creates a familiar but amplified challenge: innovation is happening faster than governance, and unmanaged generative AI introduces material risk across confidentiality, integrity, availability, compliance, and trust. For aspiring information security professionals, AI governance represents a growing and valuable discipline where strategic thinking matters just as much as technical depth.

The good news? We don’t need to invent governance from scratch. NIST’s AI Risk Management Framework (AI RMF) provides a practical, flexible structure that security leaders can use today to govern generative AI responsibly and defensibly.

Why Generative AI Governance Matt...