AI Risk Classification Model (NIST AI RMF / EU AI Act)

As organizations deploy AI across business, customer, and operational workflows, not every system carries the same level of risk.

This model provides a structured method for classifying AI use cases based on potential impact, regulatory exposure, and operational criticality. Clear risk tiers enable proportionate controls, consistent decision-making, and traceable governance without slowing responsible adoption.

The model is designed to integrate with existing legal, security, risk, and compliance processes rather than replace them.

  • This classification model is used to:

    • Apply consistent risk logic across all AI use cases

    • Ensure governance controls scale with impact

    • Support defensible decisions during audit and regulatory review

    • Reduce ambiguity in approvals, monitoring, and escalation

    Risk classification occurs early in the AI lifecycle and is revisited when material changes occur.

  • The tiers below are informed by NIST AI RMF concepts and are intended to support consistency in governance decisions rather than serve as a compliance determination or scoring mechanism; an illustrative sketch of the tier logic follows the tier definitions.

  • Tier 1 — Low Risk

    Characteristics

    • Internal or productivity-focused use

    • No automated decision-making

    • No sensitive or regulated data

    Example Use Cases

    • Internal summarization or knowledge tools

    • Non-customer-facing analytics

    Governance Requirements

    • Business owner approval

    • Basic documentation of purpose and data sources

    • Periodic review

  • Tier 2 — Medium Risk

    Characteristics

    • Customer-facing or decision-support systems

    • Limited personal or regulated data

    • Human-in-the-loop oversight

    Example Use Cases

    • Customer service copilots

    • HR screening support tools

    Governance Requirements

    • Legal and security review

    • Documented safeguards and usage constraints

    • Defined monitoring plan

  • Tier 3 — High Risk

    Characteristics

    • Automated or materially impactful decisions

    • Regulated or sensitive data

    • Financial, customer, or safety implications

    Example Use Cases

    • Eligibility or suitability determinations

    • Financial recommendations or scoring systems

    Governance Requirements

    • Executive and risk approval

    • Formal risk assessment or DPIA (Data Protection Impact Assessment) equivalent

    • Ongoing monitoring and defined incident escalation

  • Tier 4 — Prohibited or Restricted

    Characteristics

    • Unacceptable risk under policy or regulation

    • Incompatible with organizational risk appetite

    Example Use Cases

    • Uses explicitly restricted by law, regulation, or internal policy

    • Deployments lacking minimum required safeguards

    Governance Requirements

    • Deployment blocked

    • Exception process documented if applicable
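
The tier logic above can be read as a simple intake check: a use case is assigned the highest tier whose characteristics it matches. The sketch below is a minimal illustration of that idea only; the attribute names, boolean simplifications, and boundary conditions are assumptions made for the example, and actual classification relies on reviewer judgment rather than flags.

# Illustrative sketch only; not an official NIST AI RMF or EU AI Act determination.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "Tier 1 - Low Risk"
    MEDIUM = "Tier 2 - Medium Risk"
    HIGH = "Tier 3 - High Risk"
    RESTRICTED = "Tier 4 - Prohibited or Restricted"


@dataclass
class UseCaseProfile:
    # Characteristics drawn from the tier definitions above (names are assumptions).
    prohibited_by_policy: bool = False          # restricted by law, regulation, or policy
    lacks_minimum_safeguards: bool = False      # missing required safeguards
    automated_or_material_decisions: bool = False
    regulated_or_sensitive_data: bool = False
    financial_customer_or_safety_impact: bool = False
    customer_facing_or_decision_support: bool = False
    limited_personal_data: bool = False


def classify(profile: UseCaseProfile) -> RiskTier:
    """Assign the highest applicable tier, checking the most severe conditions first."""
    if profile.prohibited_by_policy or profile.lacks_minimum_safeguards:
        return RiskTier.RESTRICTED
    if (profile.automated_or_material_decisions
            or profile.regulated_or_sensitive_data
            or profile.financial_customer_or_safety_impact):
        return RiskTier.HIGH
    if profile.customer_facing_or_decision_support or profile.limited_personal_data:
        return RiskTier.MEDIUM
    return RiskTier.LOW


# Example: a customer service copilot handling limited personal data.
print(classify(UseCaseProfile(customer_facing_or_decision_support=True,
                              limited_personal_data=True)))   # RiskTier.MEDIUM

Checking the most severe conditions first mirrors the principle that controls scale with impact: in this sketch, any single high-risk characteristic is enough to raise the tier, and ambiguity resolves upward rather than downward.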

  • NIST AI Risk Management Framework

    • Govern: Risk ownership, accountability, and approval authority

    • Map: Use case context, intended purpose, and impact

    • Measure: Risk indicators and monitoring expectations

    • Manage: Controls, escalation paths, and response mechanisms

  • EU AI Act (Operational Alignment)

    • Supports early identification of limited-risk versus high-risk use cases

    • Enables readiness for conformity assessment planning

    • Does not replace legal interpretation or formal regulatory determination
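
As an operational illustration, the mapping below pairs each tier with indicative expectations under the four NIST AI RMF functions and a provisional EU AI Act triage label. The entries are assumptions drawn from the governance requirements listed earlier and are intended only for planning; as noted above, they do not replace legal interpretation or a formal regulatory determination.

# Indicative alignment for triage planning only; not a legal or conformity determination.
TIER_ALIGNMENT = {
    "Tier 1": {
        "govern":  "Business owner accountable",
        "map":     "Purpose and data sources documented",
        "measure": "Periodic review",
        "manage":  "Standard change control",
        "eu_ai_act_indicative": "Minimal risk",
    },
    "Tier 2": {
        "govern":  "Legal and security sign-off",
        "map":     "Intended use, safeguards, and constraints documented",
        "measure": "Defined monitoring plan",
        "manage":  "Human-in-the-loop oversight",
        "eu_ai_act_indicative": "Limited risk (transparency obligations)",
    },
    "Tier 3": {
        "govern":  "Executive and risk approval",
        "map":     "Formal risk assessment or DPIA equivalent",
        "measure": "Ongoing monitoring with defined risk indicators",
        "manage":  "Incident escalation and response",
        "eu_ai_act_indicative": "Potential high risk (conformity assessment planning)",
    },
    "Tier 4": {
        "govern":  "Deployment blocked; documented exception process only",
        "map":     "Rationale for restriction documented",
        "measure": "Not applicable",
        "manage":  "Not applicable",
        "eu_ai_act_indicative": "Prohibited or unacceptable risk",
    },
}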

  • In regulated financial environments, additional considerations apply:

    • Customer impact and suitability risk

    • Data retention, auditability, and traceability

    • Second-line review for high-risk classifications

    • Documentation sufficient for regulatory examination and internal assurance

    High-risk classifications trigger enhanced governance and monitoring expectations consistent with financial services oversight.
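
One way to meet these documentation and traceability expectations is to capture each classification as a structured record that second-line reviewers and examiners can inspect. The sketch below is hypothetical; the field names illustrate the kind of evidence described above rather than a prescribed schema.

# Hypothetical record structure for audit traceability; field names are assumptions.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ClassificationRecord:
    use_case_id: str
    tier: str                       # e.g. "Tier 3 - High Risk"
    business_owner: str
    rationale: str                  # why this tier was assigned
    data_categories: list[str]      # personal, regulated, or sensitive data in scope
    approvals: list[str]            # e.g. ["Legal", "Security", "Second-line risk"]
    monitoring_plan: str
    classified_on: date
    next_review: date               # revisited on material change or on schedule
    second_line_reviewed: bool = False
    evidence_links: list[str] = field(default_factory=list)  # retained audit artifacts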

  • Clear risk classification reduces ambiguity, accelerates responsible approvals, and ensures AI governance decisions are defensible under audit and regulatory review.

    This model enables organizations to adopt AI intentionally while maintaining accountability at scale.

