AI Risk Taxonomy

Navigate the AI Risk Landscape

Our comprehensive taxonomy classifies 47 risk categories across 6 domains with severity levels, real-world examples, regulatory framework mapping, and mitigation pathways.

Critical / Systemic
High
Medium
Low
Bias & Discrimination
Systemic Risk8 categories

Systematic unfairness in AI outputs that disproportionately affects specific demographic groups, leading to discriminatory outcomes in hiring, lending, healthcare, and criminal justice.

Gender BiasHigh

Differential treatment based on gender in model predictions and recommendations.

Racial BiasCritical

Systematic disadvantage to racial or ethnic groups in automated decision-making.

Cultural BiasHigh

Western-centric training data leading to misrepresentation of non-Western cultures.

Socioeconomic BiasHigh

Models that perpetuate or amplify existing economic inequalities.

Age BiasMedium

Differential accuracy or treatment across age demographics.

Disability BiasHigh

Inaccessible AI systems or biased outputs affecting disabled individuals.

Language BiasMedium

Reduced performance for non-English or minority language speakers.

Geographic BiasMedium

Models trained predominantly on data from specific regions.

Mapped to:EU AI Act Art. 10NIST AI RMF MAP 2.3ISO 42001 A.6.2
Security Threats
High Risk8 categories

Vulnerabilities in AI systems that can be exploited by adversaries to compromise model integrity, steal proprietary data, or cause unauthorized system behavior.

Prompt InjectionCritical

Crafted inputs that override system instructions to produce unauthorized outputs.

Data PoisoningCritical

Manipulation of training data to introduce backdoors or degrade model performance.

JailbreakingHigh

Techniques to bypass safety guardrails and content filters.

Model TheftHigh

Extraction of model weights or architecture through API queries.

Adversarial ExamplesHigh

Imperceptible input perturbations that cause misclassification.

Supply Chain AttacksCritical

Compromised dependencies, libraries, or pre-trained model weights.

Inference AttacksMedium

Extracting training data or membership information from model outputs.

Model InversionHigh

Reconstructing sensitive training data from model predictions.

Mapped to:NIST AI RMF MANAGE 2.4ISO 27001 A.12MITRE ATLAS
Misuse & Manipulation
High Risk8 categories

Intentional or unintentional deployment of AI for harmful purposes including disinformation, fraud, surveillance, and social manipulation at scale.

DeepfakesCritical

AI-generated synthetic media used for impersonation, fraud, or disinformation.

MisinformationCritical

Large-scale generation of false or misleading content.

Social EngineeringHigh

AI-powered phishing, voice cloning, and targeted manipulation.

Fraud & ScamsHigh

Automated generation of fraudulent communications and documents.

Surveillance AbuseCritical

Unauthorized mass surveillance and tracking using AI systems.

Autonomous WeaponsCritical

AI systems deployed in lethal autonomous weapon systems.

Market ManipulationHigh

AI-driven trading strategies that destabilize financial markets.

Political ManipulationCritical

AI-generated content designed to influence elections or public opinion.

Mapped to:EU AI Act Art. 5OECD AI Principle 1.2UNESCO Recommendation
Privacy Violations
High Risk8 categories

Risks to individual privacy through unauthorized data collection, processing, or exposure of personal information by AI systems.

PII LeakageCritical

Models that memorize and reproduce personally identifiable information from training data.

Training Data ExposureHigh

Unintended disclosure of copyrighted or confidential training data.

Inference AttacksHigh

Determining whether specific data was used in model training.

Re-identificationCritical

De-anonymizing individuals from supposedly anonymous datasets.

Behavioral ProfilingHigh

Building detailed profiles of individuals without consent.

Cross-system TrackingMedium

Linking user data across multiple AI systems to create comprehensive profiles.

Consent ViolationsHigh

Processing personal data without adequate informed consent.

Data RetentionMedium

Retaining personal data beyond necessary periods in model weights.

Mapped to:GDPR Art. 22CCPA §1798.100ISO 27701NIST Privacy Framework
Reliability & Safety
Medium Risk8 categories

Failures in AI system performance, consistency, and safety that can lead to incorrect outputs, system failures, or harmful real-world consequences.

HallucinationsHigh

Generation of plausible but factually incorrect information.

Output InconsistencyMedium

Different responses to semantically identical queries.

Edge Case FailuresHigh

Catastrophic failures on inputs outside the training distribution.

Model DriftMedium

Degradation of model performance over time as data distributions shift.

Cascading FailuresCritical

Errors in one AI component propagating through interconnected systems.

OverrelianceHigh

Users placing excessive trust in AI outputs without verification.

Lack of ExplainabilityMedium

Inability to understand or explain model decision-making processes.

Robustness FailuresHigh

Poor performance under noisy, corrupted, or adversarial conditions.

Mapped to:NIST AI RMF MEASURE 2.6ISO 42001 A.8EU AI Act Art. 15
Socioeconomic Impact
Systemic Risk7 categories

Broad societal consequences of AI deployment including labor market disruption, wealth concentration, democratic erosion, and environmental costs.

Job DisplacementHigh

Automation of human labor leading to unemployment and economic disruption.

Inequality AmplificationCritical

AI systems that widen the gap between technology haves and have-nots.

Market ConcentrationHigh

AI advantages accruing to a small number of dominant technology firms.

Digital DivideHigh

Unequal access to AI benefits across demographics and geographies.

Environmental ImpactMedium

Energy consumption and carbon footprint of training and running large AI models.

Democratic ErosionCritical

AI-enabled surveillance, censorship, and manipulation of democratic processes.

Cultural HomogenizationMedium

AI systems that promote dominant cultural norms at the expense of diversity.

Mapped to:OECD AI Principle 1.4UNESCO AI EthicsEU AI Act Recital 47

Ready to Assess Your AI Risk Exposure?

Access full risk scores, compliance mapping, and governance recommendations for 117+ AI models.