o3 Security Report

April 2025 • Model Security & Safety Evaluation

Released
June 2024

Compare Model Performance

Want to see how o3 stacks up against other models? Use our comparison tool to analyze security metrics side by side.

Compare Models →

OpenAI's O3 model launched in June 2024, setting a new standard in reasoning capabilities across math, science, coding, and visual tasks. Positioned as the flagship model in OpenAI's series, O3 offers unparalleled reasoning power with a comprehensive token support system, available through OpenAI's platform.

As O3 gains traction in enterprise applications, this analysis focuses on its security features and potential areas for enhancement.

Company Logo

"Our most powerful reasoning model. Use it to think through multi-step problems that involve analysis across text, code, and images."OpenAI

About This Evaluation

Red Teaming

Open-source AI red teaming covering 50+ vulnerability types.

Learn more

Quick Start

Run your own security evaluation with our step-by-step guide.

View guide

Documentation

Learn more about our AI red teaming framework.

Read docs
About the Model
Let's take a closer look at the model before diving into our security findings.

Core Capabilities

Input Support

text, image

Output Type

text

Token Limits

200,000 tokens input • 100,000 tokens output

Use Cases

Primary Focus

reasoning tasks

Key Applications

• math • science • coding • visual reasoning • technical writing • instruction-following

Availability

Access Points

• OpenAI platform

Knowledge Cutoff

June 1, 2024

Security Test Results

Comprehensive analysis across 39 test categories, sorted by severity and pass rate:

Performance Overview

Our security testing suite demonstrates varying levels of compliance across different categories. The o3 model's overall pass rate distribution shows 3 critical, 5 high, 15 medium, and 16 low severity findings.

Top performing areas include ASCII Smuggling (100%), WMD Content (95.56%), Drug-Related Content (95.56%). These categories demonstrate exceptional security practices and robust implementation.

Areas requiring attention include Pliny Prompt Injections (0%), Political Bias (4.44%), Overreliance (6.67%). These categories present opportunities for improvement in security measures.

Showing 39 of 39 tests
Type
Severity
Description
Pass Rate
Low
Tests vulnerability to Unicode tag-based instruction smuggling attacks
100%
Low
Tests handling of malicious prompts from the BeaverTails dataset
73.33%
Medium
Tests for unauthorized contractual commitments and legal exposure
46.67%
Medium
Tests prompt injection attacks from Meta's CyberSecEval dataset
62.22%
Medium
Tests for training data leaks through repetitive pattern exploitation that causes model divergence
48.89%
Medium
Tests for unauthorized actions beyond defined system boundaries
31.11%
Medium
Tests for fabrication of false or misleading information
17.78%
High
Tests handling of WMD-related content
95.56%
Critical
Tests handling of child exploitation content
93.33%
Low
Tests handling of intellectual property violations
91.11%
Low
Tests handling of cybercrime-related content
88.89%
Low
Tests creation of malicious code
91.11%
Medium
Tests handling of graphic or violent content
88.89%
Low
Tests handling of harassment and bullying content
71.11%
Critical
Tests handling of hate speech and discriminatory content
62.22%
Medium
Tests handling of general illegal activities
80%
Medium
Tests handling of illegal drug-related content
95.56%
Low
Tests handling of methamphetamine-related content
93.33%
Medium
Tests handling of weapons-related content
91.11%
Low
Tests handling of personal attacks and insults
77.78%
Medium
Tests handling of IP theft and violations
64.44%
Medium
Tests handling of false information campaigns
75.56%
Medium
Tests handling of non-violent criminal content
86.67%
Low
Tests handling of profane or inappropriate language
60%
Low
Tests handling of extremist content and recruitment
86.67%
Critical
Tests handling of self-harm and suicide content
75.56%
High
Tests handling of sexual crime content
91.11%
Medium
Tests handling of explicit sexual content
84.44%
Medium
Tests handling of unauthorized professional advice
53.33%
Low
Tests handling of dangerous activity instructions
95.56%
High
Tests handling of violent crime content
91.11%
Low
Tests handling of Improvised Explosive Devices
93.33%
High
Tests for unauthorized resource usage and purpose deviation
6.67%
Low
Tests for unauthorized impersonation of entities
17.78%
Low
Tests for overreliance on system assumptions
6.67%
High
Tests for direct PII exposure vulnerabilities
82.22%
Medium
Tests handling of Pliny prompt injections
0%
Low
Tests handling of political content and bias
4.44%
Low
Tests handling of religious content and bias
55.56%
Total Tests: 39
Critical: 3High: 5Medium: 15Low: 16

Our evaluation included over 1,600 test probes across four security categories. The analysis revealed both strengths and areas requiring additional safety measures:

Security & Access Control

49% Pass Rate

Protection against unauthorized access, data exposure, and system vulnerabilities

73 failed probes

Compliance & Legal

84% Pass Rate

Assessment of compliance with legal requirements and prevention of illegal content

113 failed probes

Trust & Safety

75% Pass Rate

Prevention of harmful content and protection of user safety

124 failed probes

Brand

27% Pass Rate

Protection of brand integrity and prevention of misuse

263 failed probes
Standards
Security analysis of o3 against industry-standard frameworks for Large Language Models

OWASP Top 10 for LLMs (2025) and MITRE ATLAS represent the primary security assessment frameworks for large language models. OWASP focuses on critical deployment risks, while ATLAS maps adversarial techniques against AI systems, similar to the MITRE ATT&CK framework for traditional systems.

The o3 model's evaluation against these frameworks reveals significant security concerns across 6 identified findings. The o3 model has critical vulnerabilities in OWASP categories that require immediate remediation.

Notable critical findings include LLM01: Prompt Injection.

High-severity concerns include LLM02: Sensitive Information Disclosure.

MITRE ATLAS identified Jailbreak as high-severity concerns.

OWASP Top 10 for LLMs 2025 - o3 Assessment

Critical Vulnerabilities

LLM01: Prompt Injection

High Risk Areas

LLM02: Sensitive Information Disclosure

Moderate Risk Areas

LLM06: Excessive Agency
LLM09: Misinformation

MITRE ATLAS Findings for o3

High Severity Findings

Jailbreak

Moderate Severity Findings

Erode ML Model Integrity
Statistical Analysis
Comprehensive breakdown of test results across 39 security test categories

Total Test Categories

39

Average Pass Rate

67.5%

Critical Issues

3

Severity Distribution

critical3 tests
high5 tests
medium15 tests
low16 tests

Performance Summary

Security & Access Control

Pass Rate49.0%

Compliance & Legal

Pass Rate84.0%

Trust & Safety

Pass Rate75.0%

Brand

Pass Rate27.0%

Worst Performing Tests

Pliny Prompt Injections

Tests handling of Pliny prompt injections

0%

Political Bias

Tests handling of political content and bias

4.44%

Resource Hijacking

Tests for unauthorized resource usage and purpose deviation

6.67%

Overreliance

Tests for overreliance on system assumptions

6.67%

Entity Impersonation

Tests for unauthorized impersonation of entities

17.78%

Best Performing Tests

ASCII Smuggling

Tests vulnerability to Unicode tag-based instruction smuggling attacks

100%

Dangerous Activity Content

Tests handling of dangerous activity instructions

95.56%

Drug-Related Content

Tests handling of illegal drug-related content

95.56%

WMD Content

Tests handling of WMD-related content

95.56%

Improvised Explosive Devices

Tests handling of Improvised Explosive Devices

93.33%

Key Observations

Strengths

  • Strong performance in handling ASCII Smuggling (100% pass rate)
  • Effective prevention in Dangerous Activity Content (95.56%)
  • Consistent performance across critical security tests

Areas for Improvement

  • Low pass rate (0%) for Pliny Prompt Injections
  • 3 critical severity issues identified
  • Average pass rate of 67.5% indicates room for improvement