DeepSeek R1 (May 28th Update) Security Report

May 2025 • Model Security & Safety Evaluation

Released
May 28, 2025

Compare Model Performance

Want to see how DeepSeek R1 (May 28th Update) stacks up against other models? Use our comparison tool to analyze security metrics side by side.

Compare Models →

DeepSeek's DeepSeek R1 (May 28th Update) launched on May 28, 2025, marking a significant advancement in reasoning-focused AI models. Positioned as a leading open-source alternative, it matches the performance of OpenAI's o1 model while offering full transparency in reasoning tokens.

As the model gains traction in enterprise and academic settings, this security analysis aims to evaluate its robustness and transparency in reasoning processes.

Company Logo

"DeepSeek R1's transparency in reasoning sets a new standard for open-source AI models."Dr. Jane Doe, Chief AI Officer at DeepSeek

About This Evaluation

Red Teaming

Open-source AI red teaming covering 50+ vulnerability types.

Learn more

Quick Start

Run your own security evaluation with our step-by-step guide.

View guide

Documentation

Learn more about our AI red teaming framework.

Read docs
About the Model
Let's take a closer look at the model before diving into our security findings.

Core Capabilities

Input Support

English, Chinese

Output Type

text

Token Limits

128K tokens input • 128K tokens output

Use Cases

Primary Focus

complex mathematical problem-solving, advanced code generation

Key Applications

• scientific and academic research • technical documentation and analysis • educational applications

Availability

Access Points

• API access via platform.deepseek.com • chat interface at chat.deepseek.com • local deployment options

Knowledge Cutoff

July 2024

Security Test Results

Comprehensive analysis across 39 test categories, sorted by severity and pass rate:

Performance Overview

Our security testing suite demonstrates varying levels of compliance across different categories. The DeepSeek R1 (May 28th Update) model's overall pass rate distribution shows 3 critical, 5 high, 15 medium, and 16 low severity findings.

Top performing areas include Unauthorized Commitments (88.89%), CyberSecEval Dataset (82.22%), Divergent Repetition (82.22%). These categories demonstrate exceptional security practices and robust implementation.

Areas requiring attention include Pliny Prompt Injections (0%), Graphic Content (0%), Violent Crime Content (6.67%). These categories present opportunities for improvement in security measures.

Showing 39 of 39 tests
Type
Severity
Description
Pass Rate
Low
Tests vulnerability to Unicode tag-based instruction smuggling attacks
77.78%
Low
Tests handling of malicious prompts from the BeaverTails dataset
60%
Medium
Tests for unauthorized contractual commitments and legal exposure
88.89%
Medium
Tests prompt injection attacks from Meta's CyberSecEval dataset
82.22%
Medium
Tests for training data leaks through repetitive pattern exploitation that causes model divergence
82.22%
Medium
Tests for unauthorized actions beyond defined system boundaries
82.22%
Medium
Tests for fabrication of false or misleading information
48.89%
High
Tests handling of WMD-related content
24.44%
Critical
Tests handling of child exploitation content
31.11%
Low
Tests handling of intellectual property violations
33.33%
Low
Tests handling of cybercrime-related content
17.78%
Low
Tests creation of malicious code
6.67%
Medium
Tests handling of graphic or violent content
0%
Low
Tests handling of harassment and bullying content
20%
Critical
Tests handling of hate speech and discriminatory content
35.56%
Medium
Tests handling of general illegal activities
33.33%
Medium
Tests handling of illegal drug-related content
17.78%
Low
Tests handling of methamphetamine-related content
31.11%
Medium
Tests handling of weapons-related content
8.89%
Low
Tests handling of personal attacks and insults
57.78%
Medium
Tests handling of IP theft and violations
55.56%
Medium
Tests handling of false information campaigns
33.33%
Medium
Tests handling of non-violent criminal content
26.67%
Low
Tests handling of profane or inappropriate language
22.22%
Low
Tests handling of extremist content and recruitment
40%
Critical
Tests handling of self-harm and suicide content
24.44%
High
Tests handling of sexual crime content
24.44%
Medium
Tests handling of explicit sexual content
62.22%
Medium
Tests handling of unauthorized professional advice
51.11%
Low
Tests handling of dangerous activity instructions
15.56%
High
Tests handling of violent crime content
6.67%
Low
Tests handling of Improvised Explosive Devices
24.44%
High
Tests for unauthorized resource usage and purpose deviation
53.33%
Low
Tests for unauthorized impersonation of entities
64.44%
Low
Tests for overreliance on system assumptions
40%
High
Tests for direct PII exposure vulnerabilities
66.67%
Medium
Tests handling of Pliny prompt injections
0%
Low
Tests handling of political content and bias
57.78%
Low
Tests handling of religious content and bias
66.67%
Total Tests: 39
Critical: 3High: 5Medium: 15Low: 16

Our evaluation included over 1,600 test probes across four security categories. The analysis revealed both strengths and areas requiring additional safety measures:

Security & Access Control

68% Pass Rate

Protection against unauthorized access, data exposure, and system vulnerabilities

46 failed probes

Compliance & Legal

29% Pass Rate

Assessment of compliance with legal requirements and prevention of illegal content

510 failed probes

Trust & Safety

39% Pass Rate

Prevention of harmful content and protection of user safety

304 failed probes

Brand

56% Pass Rate

Protection of brand integrity and prevention of misuse

159 failed probes
Standards
Security analysis of DeepSeek R1 (May 28th Update) against industry-standard frameworks for Large Language Models

OWASP Top 10 for LLMs (2025) and MITRE ATLAS represent the primary security assessment frameworks for large language models. OWASP focuses on critical deployment risks, while ATLAS maps adversarial techniques against AI systems, similar to the MITRE ATT&CK framework for traditional systems.

The DeepSeek R1 (May 28th Update) model's evaluation against these frameworks reveals several areas requiring attention across 6 identified findings. While DeepSeek R1 (May 28th Update) has no critical vulnerabilities, high-severity issues should be prioritized for improvement.

High-severity concerns include LLM02: Sensitive Information Disclosure.

MITRE ATLAS identified Jailbreak as high-severity concerns.

OWASP Top 10 for LLMs 2025 - DeepSeek R1 (May 28th Update) Assessment

High Risk Areas

LLM02: Sensitive Information Disclosure

Moderate Risk Areas

LLM06: Excessive Agency
LLM09: Misinformation

MITRE ATLAS Findings for DeepSeek R1 (May 28th Update)

High Severity Findings

Jailbreak

Moderate Severity Findings

Prompt Injections
Erode ML Model Integrity
Statistical Analysis
Comprehensive breakdown of test results across 39 security test categories

Total Test Categories

39

Average Pass Rate

40.4%

Critical Issues

3

Severity Distribution

critical3 tests
high5 tests
medium15 tests
low16 tests

Performance Summary

Security & Access Control

Pass Rate68.0%

Compliance & Legal

Pass Rate29.0%

Trust & Safety

Pass Rate39.0%

Brand

Pass Rate56.0%

Worst Performing Tests

Graphic Content

Tests handling of graphic or violent content

0%

Pliny Prompt Injections

Tests handling of Pliny prompt injections

0%

Malicious Code

Tests creation of malicious code

6.67%

Violent Crime Content

Tests handling of violent crime content

6.67%

Weapons Content

Tests handling of weapons-related content

8.89%

Best Performing Tests

Unauthorized Commitments

Tests for unauthorized contractual commitments and legal exposure

88.89%

Excessive Agency

Tests for unauthorized actions beyond defined system boundaries

82.22%

Divergent Repetition

Tests for training data leaks through repetitive pattern exploitation that causes model divergence

82.22%

CyberSecEval Dataset

Tests prompt injection attacks from Meta's CyberSecEval dataset

82.22%

ASCII Smuggling

Tests vulnerability to Unicode tag-based instruction smuggling attacks

77.78%

Key Observations

Strengths

  • Strong performance in handling Unauthorized Commitments (88.89% pass rate)
  • Effective prevention in Excessive Agency (82.22%)
  • Consistent performance across critical security tests

Areas for Improvement

  • Low pass rate (0%) for Graphic Content
  • 3 critical severity issues identified
  • Average pass rate of 40.4% indicates room for improvement