AI Security · 2026-03-18 · 7 min read

The AI Act Is Here: What Security Managers Need to Know

What Is the AI Act?

The EU AI Act is the first comprehensive regulation specifically targeting AI systems. It establishes a risk-based framework that categorizes AI systems into four risk levels, each with corresponding compliance requirements.

Risk Categories

Unacceptable Risk - AI practices that are outright banned. This includes social scoring systems, real-time biometric identification in public spaces (with limited exceptions), and AI that exploits the vulnerabilities of specific groups.

High Risk - AI used in critical areas like employment, education, law enforcement, and critical infrastructure. These systems face the strictest requirements: risk assessments, data quality obligations, transparency, human oversight, and accuracy requirements.

Limited Risk - AI systems such as chatbots, which are subject primarily to transparency obligations. Users must be informed that they're interacting with an AI system.

Minimal Risk - AI systems like spam filters or AI-enabled video games. No specific requirements beyond existing legislation.

What Security Managers Need to Do

  1. Inventory your AI systems - Catalog every AI system in use, including third-party AI integrated into your products and processes
  2. Classify by risk level - Determine which risk category each system falls into
  3. Assess high-risk systems - For high-risk AI, conduct conformity assessments covering data governance, technical documentation, transparency, human oversight, and robustness
  4. Update governance frameworks - Extend your existing security governance to include AI-specific policies
  5. Assess supply chain implications - If your vendors embed AI in the services they deliver to you, understand their compliance posture
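The inventory-and-classify steps above can be sketched as a simple triage script. This is an illustrative starting point only: the system names, the `HIGH_RISK_DOMAINS` keyword map, and the `classify` heuristic are assumptions for demonstration, not a substitute for legal review against the Act's actual annexes.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical domain list for first-pass triage; a real program
# maps systems to the Act's annexes with legal counsel involved.
HIGH_RISK_DOMAINS = {"employment", "education", "law enforcement",
                     "critical infrastructure"}

@dataclass
class AISystem:
    name: str
    vendor: str        # "internal" for in-house systems (step 5: track vendors)
    domain: str        # business area where the system is used
    user_facing: bool  # e.g. a chatbot -> transparency obligations

def classify(system: AISystem) -> RiskLevel:
    """Rough first-pass classification; anything HIGH goes to human review."""
    if system.domain in HIGH_RISK_DOMAINS:
        return RiskLevel.HIGH
    if system.user_facing:
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL

# Step 1: catalog every AI system, including third-party ones.
inventory = [
    AISystem("resume-screener", "VendorX", "employment", False),
    AISystem("support-chatbot", "internal", "customer service", True),
    AISystem("spam-filter", "internal", "email", False),
]

# Step 2: classify each system and surface the high-risk ones.
for s in inventory:
    print(f"{s.name} ({s.vendor}): {classify(s).value}")
```

The value of even a toy script like this is that it forces the two hard questions early: do we actually know every AI system in use, and do we know which business domain each one touches?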

Timeline

The AI Act entered into force in August 2024, with a phased implementation. Prohibited practices became enforceable in February 2025. Requirements for general-purpose AI models apply from August 2025. High-risk system requirements phase in through 2026 and 2027.

The Security Angle

For security managers, the AI Act creates a new governance dimension. AI systems introduce risks that traditional security frameworks don't fully address - model drift, training data poisoning, adversarial attacks, and output manipulation. Your security program needs to evolve to cover these AI-specific threats.