Gemini 2.5 Flash Security Report

May 2025 • Model Security & Safety Evaluation

Released
April 17, 2025

Compare Model Performance

Want to see how Gemini 2.5 Flash stacks up against other models? Use our comparison tool to analyze security metrics side by side.

Compare Models →

Google's Gemini 2.5 Flash launched on April 17, 2025, marking a significant advancement in hybrid reasoning capabilities while maintaining the speed and cost efficiency of its predecessor. Available through the Gemini API via Google AI Studio and Vertex AI, this model is currently in preview mode.

As the model enters the preview phase, this analysis aims to evaluate its security features and identify areas for potential improvement, ensuring robust protection as enterprise adoption grows.

Company Logo

"Building upon the popular foundation of 2.0 Flash, this new version delivers a major upgrade in reasoning capabilities, while still prioritizing speed and cost. Gemini 2.5 Flash is our first fully hybrid reasoning model, giving developers the ability to turn thinking on or off."Google

About This Evaluation

Red Teaming

Open-source AI red teaming covering 50+ vulnerability types.

Learn more

Quick Start

Run your own security evaluation with our step-by-step guide.

View guide

Documentation

Learn more about our AI red teaming framework.

Read docs
About the Model
Let's take a closer look at the model before diving into our security findings.

Core Capabilities

Input Support

text

Output Type

text

Token Limits

24,576 tokens input • 24,576 tokens output

Use Cases

Primary Focus

hybrid reasoning

Key Applications

• complex task analysis • math problem solving • research question analysis

Availability

Access Points

• Gemini API via Google AI Studio • Vertex AI

Knowledge Cutoff

January 2025

Security Test Results

Comprehensive analysis across 39 test categories, sorted by severity and pass rate:

Performance Overview

Our security testing suite demonstrates varying levels of compliance across different categories. The Gemini 2.5 Flash model's overall pass rate distribution shows 3 critical, 5 high, 15 medium, and 16 low severity findings.

Top performing areas include Unauthorized Commitments (82.22%), Divergent Repetition (77.78%), Excessive Agency (75.56%). These categories show satisfactory security practices.

Areas requiring attention include Disinformation Campaigns (20%), Graphic Content (35.56%), Dangerous Activity Content (37.78%). These categories present opportunities for improvement in security measures.

Showing 39 of 39 tests
Type
Severity
Description
Pass Rate
Low
Tests vulnerability to Unicode tag-based instruction smuggling attacks
55.56%
Low
Tests handling of malicious prompts from the BeaverTails dataset
62.22%
Medium
Tests for unauthorized contractual commitments and legal exposure
82.22%
Medium
Tests prompt injection attacks from Meta's CyberSecEval dataset
48.89%
Medium
Tests for training data leaks through repetitive pattern exploitation that causes model divergence
77.78%
Medium
Tests for unauthorized actions beyond defined system boundaries
75.56%
Medium
Tests for fabrication of false or misleading information
62.22%
High
Tests handling of WMD-related content
57.78%
Critical
Tests handling of child exploitation content
42.22%
Low
Tests handling of intellectual property violations
46.67%
Low
Tests handling of cybercrime-related content
46.67%
Low
Tests creation of malicious code
46.67%
Medium
Tests handling of graphic or violent content
35.56%
Low
Tests handling of harassment and bullying content
66.67%
Critical
Tests handling of hate speech and discriminatory content
68.89%
Medium
Tests handling of general illegal activities
53.33%
Medium
Tests handling of illegal drug-related content
60%
Low
Tests handling of methamphetamine-related content
40%
Medium
Tests handling of weapons-related content
48.89%
Low
Tests handling of personal attacks and insults
51.11%
Medium
Tests handling of IP theft and violations
53.33%
Medium
Tests handling of false information campaigns
20%
Medium
Tests handling of non-violent criminal content
44.44%
Low
Tests handling of profane or inappropriate language
51.11%
Low
Tests handling of extremist content and recruitment
62.22%
Critical
Tests handling of self-harm and suicide content
64.44%
High
Tests handling of sexual crime content
44.44%
Medium
Tests handling of explicit sexual content
57.78%
Medium
Tests handling of unauthorized professional advice
62.22%
Low
Tests handling of dangerous activity instructions
37.78%
High
Tests handling of violent crime content
60%
Low
Tests handling of Improvised Explosive Devices
57.78%
High
Tests for unauthorized resource usage and purpose deviation
57.78%
Low
Tests for unauthorized impersonation of entities
46.67%
Low
Tests for overreliance on system assumptions
40%
High
Tests for direct PII exposure vulnerabilities
66.67%
Medium
Tests handling of Pliny prompt injections
40%
Low
Tests handling of political content and bias
68.89%
Low
Tests handling of religious content and bias
53.33%
Total Tests: 39
Critical: 3High: 5Medium: 15Low: 16

Our evaluation included over 1,600 test probes across four security categories. The analysis revealed both strengths and areas requiring additional safety measures:

Security & Access Control

67% Pass Rate

Protection against unauthorized access, data exposure, and system vulnerabilities

48 failed probes

Compliance & Legal

53% Pass Rate

Assessment of compliance with legal requirements and prevention of illegal content

341 failed probes

Trust & Safety

55% Pass Rate

Prevention of harmful content and protection of user safety

223 failed probes

Brand

53% Pass Rate

Protection of brand integrity and prevention of misuse

169 failed probes
Standards
Security analysis of Gemini 2.5 Flash against industry-standard frameworks for Large Language Models

OWASP Top 10 for LLMs (2025) and MITRE ATLAS represent the primary security assessment frameworks for large language models. OWASP focuses on critical deployment risks, while ATLAS maps adversarial techniques against AI systems, similar to the MITRE ATT&CK framework for traditional systems.

The Gemini 2.5 Flash model's evaluation against these frameworks reveals several areas requiring attention across 7 identified findings. While Gemini 2.5 Flash has no critical vulnerabilities, high-severity issues should be prioritized for improvement.

High-severity concerns include LLM02: Sensitive Information Disclosure.

MITRE ATLAS identified Prompt Injections and Jailbreak as high-severity concerns.

OWASP Top 10 for LLMs 2025 - Gemini 2.5 Flash Assessment

High Risk Areas

LLM02: Sensitive Information Disclosure

Moderate Risk Areas

LLM06: Excessive Agency
LLM09: Misinformation

MITRE ATLAS Findings for Gemini 2.5 Flash

High Severity Findings

Prompt Injections
Jailbreak

Moderate Severity Findings

Erode ML Model Integrity
LLM Meta Prompt Extraction
Statistical Analysis
Comprehensive breakdown of test results across 39 security test categories

Total Test Categories

39

Average Pass Rate

54.3%

Critical Issues

3

Severity Distribution

critical3 tests
high5 tests
medium15 tests
low16 tests

Performance Summary

Security & Access Control

Pass Rate67.0%

Compliance & Legal

Pass Rate53.0%

Trust & Safety

Pass Rate55.0%

Brand

Pass Rate53.0%

Worst Performing Tests

Disinformation Campaigns

Tests handling of false information campaigns

20%

Graphic Content

Tests handling of graphic or violent content

35.56%

Dangerous Activity Content

Tests handling of dangerous activity instructions

37.78%

Methamphetamine Content

Tests handling of methamphetamine-related content

40%

Pliny Prompt Injections

Tests handling of Pliny prompt injections

40%

Best Performing Tests

Unauthorized Commitments

Tests for unauthorized contractual commitments and legal exposure

82.22%

Divergent Repetition

Tests for training data leaks through repetitive pattern exploitation that causes model divergence

77.78%

Excessive Agency

Tests for unauthorized actions beyond defined system boundaries

75.56%

Hate Speech

Tests handling of hate speech and discriminatory content

68.89%

Political Bias

Tests handling of political content and bias

68.89%

Key Observations

Strengths

  • Strong performance in handling Unauthorized Commitments (82.22% pass rate)
  • Effective prevention in Divergent Repetition (77.78%)
  • Consistent performance across critical security tests

Areas for Improvement

  • Low pass rate (20%) for Disinformation Campaigns
  • 3 critical severity issues identified
  • Average pass rate of 54.3% indicates room for improvement