🚀 Lesson 8 — Middleware, Dependency Injection, Background Tasks & Caching (Redis)
This lesson transforms your FastAPI app from a “basic backend” into an enterprise-grade architecture.
These skills are used heavily in:
✔ Large-scale microservices
✔ AI model inference platforms
✔ Logging, monitoring, caching systems
✔ High-performance APIs
Let’s begin. 🔥
🎯 What You Will Learn Today
✔ Middleware
- Logging
- Timing
- Auth checks
- Request/response modification
✔ Dependency Injection
- DB sessions
- Shared services
- Reusable logic
✔ Background Tasks
- Emails
- Logs
- Notifications
- Async workflows
✔ Caching
- Redis
- In-memory cache
- Speed up ML inference
🧠 PART A — Middleware in FastAPI
Middleware = code that runs before and after every request.
Used for:
- Logging
- Performance timing
- Authentication
- Rate limiting
- Security filters
🔥 1. Basic Logging Middleware
from fastapi import FastAPI, Request
import time
app = FastAPI()
@app.middleware("http")
async def log_requests(request: Request, call_next):
    start = time.time()
    response = await call_next(request)
    duration = time.time() - start
    print(f"{request.method} {request.url} took {duration:.4f}s")
    return response
Output example:
GET /users took 0.0023s
This is extremely important for monitoring.
🔐 2. Middleware for Authentication
(Not as secure as JWT, but useful for internal systems.)
from fastapi.responses import JSONResponse

@app.middleware("http")
async def check_api_key(request: Request, call_next):
    if request.headers.get("x-api-key") != "SECRET":
        return JSONResponse(status_code=403, content={"error": "Forbidden"})
    return await call_next(request)
Clients must send:
x-api-key: SECRET
🧰 PART B — Dependency Injection (DI)
One of FastAPI’s most powerful features.
Used to inject:
✔ DB Sessions
✔ AI Models
✔ Config objects
✔ Cache clients
✔ Shared services
🔧 1. Basic DI Example
from fastapi import Depends
def get_message():
    return "Hello from DI!"

@app.get("/test")
def test(msg: str = Depends(get_message)):
    return {"msg": msg}
🏦 2. DI with Database Session (Most Common)
From Lesson 6:
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
Used in routes:
@app.get("/users")
def list_users(db: Session = Depends(get_db)):
    return db.query(User).all()
This is industry-standard code.
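The yield-then-close pattern is easy to see outside SQLAlchemy too. A stdlib-only sketch that drives the generator the same way FastAPI's DI system does (hypothetical table names, not the lesson's models):

```python
import sqlite3

# Same shape as the SQLAlchemy get_db: acquire, yield, always release
def get_db():
    db = sqlite3.connect(":memory:")
    try:
        yield db
    finally:
        db.close()

gen = get_db()
db = next(gen)          # this is the value FastAPI "injects" into the route
db.execute("CREATE TABLE users (name TEXT)")
db.execute("INSERT INTO users VALUES ('alice')")
rows = db.execute("SELECT name FROM users").fetchall()  # [('alice',)]
try:
    next(gen)           # resuming after the response runs the finally: block
except StopIteration:
    pass                # generator is exhausted; the connection is now closed
```

The `finally:` block is the whole point: the session is closed even if the route raises, which is why this is the standard pattern for per-request resources.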
🤖 3. DI for Loading AI Models (Excellent for ML apps)
from functools import lru_cache
from transformers import pipeline

@lru_cache(maxsize=1)
def get_summarizer():
    return pipeline("summarization")

@app.post("/summarize")
def summarize(text: str, summarizer=Depends(get_summarizer)):
    return summarizer(text)
Note: `Depends` calls the factory on every request, so the `lru_cache` decorator is what actually guarantees a single load.
Advantages:
✔ Model loads only once (lru_cache makes the dependency a singleton)
✔ Reused across requests
✔ Very fast
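One caveat: `Depends` invokes the factory on each request, so heavy objects are usually wrapped in `functools.lru_cache` to guarantee a single load. The singleton effect is visible in plain Python, without FastAPI (a sketch with a fake "model" standing in for `pipeline("summarization")`):

```python
from functools import lru_cache

LOAD_COUNT = 0

@lru_cache(maxsize=1)
def get_model():
    """Pretend to load a heavy ML model; lru_cache makes this a one-time cost."""
    global LOAD_COUNT
    LOAD_COUNT += 1
    return {"name": "fake-summarizer"}  # stand-in for the real pipeline object

m1 = get_model()
m2 = get_model()
# m1 is m2 → the same object is reused; LOAD_COUNT stays at 1
```

The first call pays the load cost; every later call returns the cached object instantly.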
🧵 PART C — Background Tasks
Used for:
- Emails
- Logging
- Post-processing
- Analytics
- Sending notifications
- Saving reports
📩 1. Background Task Example
from fastapi import BackgroundTasks
def send_email(email):
    print(f"Sending email to {email}")

@app.post("/register")
def register(email: str, bg: BackgroundTasks):
    bg.add_task(send_email, email)
    return {"msg": "User registered"}
FastAPI returns instantly → email is sent in background.
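Conceptually, `BackgroundTasks` is just a queue of callables that FastAPI drains after the response is sent. A simplified sketch of that idea (the real class lives in `starlette.background`; this is an illustration, not its actual implementation):

```python
class SimpleBackgroundTasks:
    """Toy version of the add_task pattern: collect work now, run it later."""

    def __init__(self):
        self.tasks = []

    def add_task(self, func, *args, **kwargs):
        # Nothing runs yet; the call is just recorded
        self.tasks.append((func, args, kwargs))

    def run_all(self):
        # FastAPI does the equivalent of this after the response goes out
        for func, args, kwargs in self.tasks:
            func(*args, **kwargs)

sent = []
bg = SimpleBackgroundTasks()
bg.add_task(sent.append, "alice@example.com")  # queued, not executed
bg.run_all()                                   # now the "email" is sent
# sent == ["alice@example.com"]
```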
🧪 2. Background Task for Logging Events
def save_log(data):
    with open("logs.txt", "a") as f:
        f.write(data + "\n")

@app.post("/action")
def do_action(bg: BackgroundTasks):
    bg.add_task(save_log, "User performed action")
    return {"status": "done"}
Perfect for analytics, tracking, auditing.
🚀 PART D — Caching (Redis + In-memory)
Caching = SPEED.
Used to:
- Reduce DB calls
- Speed up ML inference
- Handle high-traffic APIs
- Avoid repeated heavy computations
🔥 1. Install Redis Client
pip install redis
📌 2. Connect to Redis
import redis
cache = redis.Redis(host='localhost', port=6379, db=0)
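If a Redis server isn't running locally, the `get`/`set` pattern used in the examples below can be mimicked with a tiny in-process cache. A sketch, not a Redis substitute (real Redis adds persistence, eviction, and cross-process sharing):

```python
import time

class TinyCache:
    """Dict-backed cache mirroring the redis-py get/set(ex=...) calls."""

    def __init__(self):
        self._store = {}

    def set(self, key, value, ex=None):
        # ex = time-to-live in seconds, like redis-py's set(..., ex=...)
        expires = time.time() + ex if ex else None
        self._store[key] = (value, expires)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires = item
        if expires is not None and time.time() > expires:
            del self._store[key]  # expired entries behave like a cache miss
            return None
        return value

cache = TinyCache()
cache.set("fib:10", 55)
cache.get("fib:10")   # → 55
cache.get("missing")  # → None
```

This is the "in-memory cache" option from the lesson goals: fine for a single process, but every worker gets its own copy, which is exactly the problem Redis solves.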
📦 3. Cache GET API Results
@app.get("/fib/{n}")
def fib(n: int):
    # Check cache (prefixed key avoids collisions with other integer keys)
    cached = cache.get(f"fib:{n}")
    if cached is not None:
        return {"value": int(cached), "cached": True}
    # Compute Fibonacci
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    # Save to cache
    cache.set(f"fib:{n}", a)
    return {"value": a, "cached": False}
If you run /fib/50 several times:
- First time computes
- Next times return instantly via Redis
🤖 4. Cache ML model results (HUGE SPEEDUP)
sentiment = pipeline("sentiment-analysis")  # load once at startup

@app.post("/sentiment")
def analyze(text: str):
    key = f"sentiment:{text}"
    # Check cache
    cached = cache.get(key)
    if cached:
        return {"cached": True, "result": cached.decode()}
    # Run the model
    result = sentiment(text)[0]["label"]
    cache.set(key, result)
    return {"cached": False, "result": result}
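Raw user text makes an awkward Redis key (long, whitespace, arbitrary unicode). A common refinement is to hash the input first; a sketch with `cache_key` as a hypothetical helper, not part of the lesson's code:

```python
import hashlib

def cache_key(text: str) -> str:
    """Derive a fixed-length, key-safe cache key from arbitrary input text."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return f"sentiment:{digest}"

cache_key("I love this!")  # same text → same key, any text → 64 hex chars
```

The trade-off: hashed keys are no longer human-readable in `redis-cli`, but they are bounded in size and deterministic, which is what the cache actually needs.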
This is exactly what real-world AI apps do.
🏭 5. Complete Enterprise Pattern
FastAPI
↓
Middleware
↓
Dependency Injection (DB, Redis, ML model)
↓
Route
↓
Background tasks
↓
Cache Layer
↓
DB / ML Model / Other services
This pattern is used by:
- Swiggy / Zomato
- Amazon
- Uber
- Every modern AI SaaS backend
📌 Lesson 8 Summary
You learned:
✔ Middleware (logging, auth, tracing)
✔ Dependency Injection (clean & reusable services)
✔ Background tasks (emails, logs, async jobs)
✔ Redis caching (massive speed boost)
✔ Enterprise-grade backend design
This is EXACTLY the knowledge required for senior-level backend/ML platform roles.
🚀 Ready for Lesson 9 — Scaling & Deployment (Docker, Uvicorn workers, CI/CD, Kubernetes)?
Shall I continue with Lesson 9?