The Requests library is the de facto standard for making HTTP requests in Python. It builds on urllib3, hiding the lower-level plumbing behind a clean, intuitive API that reads almost like English. Whether you’re calling APIs, scraping pages, or automating web interactions, Requests is usually the right tool.
Installation#
pip install requests
Making a GET Request#
import requests
response = requests.get("https://jsonplaceholder.typicode.com/posts/1")
print(response.status_code)
# Output: 200
print(response.json())
# Output: {'userId': 1, 'id': 1, 'title': '...', 'body': '...'}
Use .json() to parse JSON responses directly — no need to import json and call json.loads() separately.
Query Parameters#
Pass parameters as a dictionary instead of manually building URL strings:
params = {"userId": 1, "completed": False}
response = requests.get(
    "https://jsonplaceholder.typicode.com/todos",
    params=params
)
print(response.url)
# Output: https://jsonplaceholder.typicode.com/todos?userId=1&completed=False
Custom Headers#
headers = {
    "Authorization": "Bearer YOUR_TOKEN",
    "Accept": "application/json"
}
response = requests.get("https://api.example.com/data", headers=headers)
Making a POST Request#
Use json= for JSON payloads (sets Content-Type automatically) or data= for form-encoded data:
# JSON payload
payload = {"title": "foo", "body": "bar", "userId": 1}
response = requests.post(
    "https://jsonplaceholder.typicode.com/posts",
    json=payload
)
print(response.status_code)
# Output: 201
print(response.json())
# Output: {'title': 'foo', 'body': 'bar', 'userId': 1, 'id': 101}
# Form-encoded data
data = {"username": "carl", "password": "secret"}
response = requests.post("https://example.com/login", data=data)
Handling Errors#
Check status codes explicitly, or use raise_for_status() to throw exceptions on 4xx/5xx responses:
response = requests.get("https://jsonplaceholder.typicode.com/nonexistent")
# Manual check
if response.status_code == 404:
    print("Not found")
# Or raise an exception automatically
try:
    response.raise_for_status()
except requests.HTTPError as e:
    print(f"HTTP error: {e}")
Timeouts#
Always set a timeout in production code. Without one, your program can hang indefinitely if the server doesn’t respond:
try:
    response = requests.get("https://api.example.com/slow", timeout=5)
except requests.Timeout:
    print("Request timed out after 5 seconds")
except requests.ConnectionError:
    print("Failed to connect")
The timeout parameter accepts seconds as an integer or float. You can also pass a tuple (connect_timeout, read_timeout) for more control.
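As a sketch of the tuple form, reusing the placeholder URL from above (the specific limits here are arbitrary):

```python
import requests

# 3 seconds to establish the connection, 10 seconds to wait for data;
# the URL and the limits are placeholders.
try:
    response = requests.get(
        "https://api.example.com/slow",
        timeout=(3, 10)  # (connect_timeout, read_timeout)
    )
except requests.ConnectTimeout:
    print("Could not connect within 3 seconds")
except requests.ReadTimeout:
    print("Connected, but no data arrived within 10 seconds")
except requests.ConnectionError:
    print("Failed to connect")
```

Note the ordering: ConnectTimeout subclasses both Timeout and ConnectionError, so catch it before the broader ConnectionError if you handle both.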
Sessions#
Sessions persist settings (cookies, headers, auth) across multiple requests and reuse the underlying TCP connection for better performance:
with requests.Session() as session:
    session.headers.update({"Authorization": "Bearer TOKEN"})

    # Both requests use the same auth header and TCP connection
    users = session.get("https://api.example.com/users")
    posts = session.get("https://api.example.com/posts")
Sessions are also context managers — the connection pool is cleaned up when the with block exits.
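Cookie persistence can be seen without a network round trip by preparing a request through a session. The cookie value and URL below are made up for illustration:

```python
import requests

with requests.Session() as session:
    # A real server would populate the jar via a Set-Cookie response header;
    # here we set a cookie by hand to illustrate persistence.
    session.cookies.set("sessionid", "abc123")

    # Any request prepared through this session carries the stored cookie.
    prepared = session.prepare_request(
        requests.Request("GET", "https://api.example.com/users")
    )
    print(prepared.headers["Cookie"])
    # Output: sessionid=abc123
```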
Response Object#
The response object has several useful attributes:
response = requests.get("https://jsonplaceholder.typicode.com/posts/1")
response.status_code # 200
response.headers # dict-like object of response headers
response.text # response body as string
response.content # response body as bytes
response.json() # parsed JSON
response.url # final URL (after redirects)
response.elapsed # time delta for the request
response.encoding # detected encoding
Security Note#
If you’re building URLs from user input, always validate and sanitize them. Passing unsanitized URLs to requests.get() can lead to SSRF vulnerabilities — the same class of bugs I write about extensively on this site.
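One mitigation is to validate the scheme and host before the URL ever reaches Requests. The allowlist contents and the helper name below are hypothetical; adapt them to the hosts your application actually needs:

```python
from urllib.parse import urlparse

import requests

# Hypothetical allowlist of hosts this app is permitted to contact
ALLOWED_HOSTS = {"api.example.com", "jsonplaceholder.typicode.com"}

def safe_get(url, **kwargs):
    """Fetch a URL only if its scheme and host pass the allowlist."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError(f"blocked scheme: {parsed.scheme!r}")
    if parsed.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"blocked host: {parsed.hostname!r}")
    return requests.get(url, timeout=5, **kwargs)
```

With this check in place, a classic SSRF probe such as a cloud metadata address fails both tests and never produces an outbound request.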
See also:
- The 5 Coolest Things About Using Python — why Python’s ecosystem makes libraries like Requests possible
- Context Managers in Python — Requests sessions work well with with statements for clean resource handling