


🔥 MODULE 0 — Python Backend Foundations
(Deep Notes + 20 Problems + Parallel Flask & FastAPI Notebooks)
This module is the bedrock.
If this layer is weak → Flask/FastAPI mastery collapses in interviews and production.
From Day 1, everything is side-by-side:
- 🧪 Flask Notebook
- ⚡ FastAPI Notebook
- 🧠 Same concept → different execution model
- 🏗️ One project evolves: Flask → FastAPI
🧠 What You’ll Build in Module 0
A User & Order Backend Core (logic-first, framework-second)
users/
orders/
auth/
validation/
error_handling/
👉 In Module 0, we build pure Python backend logic
👉 In later modules, Flask/FastAPI simply host this logic
PART A — Python Backend Internals (DEEP THEORY)
0.1 Python Execution Model (Interview Gold)
What REALLY happens when Python runs?
- Source code → Bytecode
- Bytecode executed by Python Virtual Machine
- Stack-based execution
- Frames, locals, globals
Stack vs Heap
| Concept | Stack | Heap |
|---|---|---|
| Stores | Function frames | Objects |
| Speed | Fast | Slower |
| Lifetime | Scope-based | Reference-based |
⚠️ Interview Trap:
“Python passes objects by reference” ❌
Correct: Pass-by-object-reference
0.2 Memory Management & Garbage Collection
Reference Counting
a = []
b = a
del a
Object lives because b still points to it.
Cyclic Garbage Collection
- Detects reference cycles
- Runs periodically
- Can pause execution (important for APIs)
Why Backend Engineers Care
- Memory leaks in long-running APIs
- Flask apps dying slowly
- FastAPI async services ballooning memory
0.3 Mutability, Identity & Equality
a = [1,2]
b = [1,2]
a == b # True
a is b # False
Backend Bug Factory
- Mutable default arguments
- Shared state across requests
- Global variables in APIs
⚠️ Interview Trap:
def add_item(item, lst=[]):
    lst.append(item)
    return lst
0.4 Functions, Closures & Decorators (Core for APIs)
Why Decorators Matter
- Flask routes
- FastAPI dependencies
- Auth
- Logging
- Rate limiting
def timing(fn):
    def wrapper(*args, **kwargs):
        ...
    return wrapper
You will later:
- Build auth decorators (Flask)
- Build dependency injectors (FastAPI)
0.5 Iterators, Generators & Streaming APIs
Generator = Lazy Execution
def read_large_file():
    with open("big.log") as f:
        for line in f:
            yield line
Why APIs Love Generators
- Streaming responses
- Reduced memory
- Faster first byte
0.6 Exceptions & Error Propagation
try:
    ...
except ValueError as e:
    raise CustomAPIError(str(e))
You’ll map:
- Python exceptions → HTTP responses
- Flask error handlers
- FastAPI exception middleware
0.7 Concurrency Fundamentals (CRITICAL)
Threading
- I/O parallelism
- GIL limits CPU usage
Multiprocessing
- True parallelism
- Heavy memory cost
Async / Await
- Event loop
- Cooperative multitasking
- NOT magic speed
⚠️ Interview Trap:
Async makes code faster ❌
Async improves concurrency, not CPU speed
PART B — LIVE TEST DATA (Used Everywhere)
USERS = [
    {"id": 1, "name": "Raj", "email": "raj@gmail.com"},
    {"id": 2, "name": "Amit", "email": None},  # NULL
]
ORDERS = [
    {"id": 101, "user_id": 1, "amount": 2500},
    {"id": 102, "user_id": 2, "amount": None},  # NULL
]
We intentionally inject:
- None values
- Missing fields
- Bad types
PART C — PARALLEL NOTEBOOK STRUCTURE (MANDATORY)
📓 Notebook 0A — Core Python Logic
Pure Python (framework-agnostic)
- validation.py
- services.py
- exceptions.py
📓 Notebook 0B — Flask Integration
@app.route("/users")
def get_users():
    return jsonify(get_all_users())
📓 Notebook 0C — FastAPI Integration
@app.get("/users")
async def get_users():
    return get_all_users()
👉 SAME logic
👉 DIFFERENT execution model
PART D — 20 HANDS-ON PROBLEMS (INTERVIEW-GRADE)
🧩 Level 1 (Foundations)
- Detect mutable default argument bug
- Implement custom exception hierarchy
- Write deep copy manually
- Implement custom context manager
- Validate nested dict with nulls
🧩 Level 2 (Backend-Oriented)
- Implement request-scoped storage (no globals)
- Write decorator for timing API calls
- Create lazy data loader using generator
- Convert sync function → async-safe
- Handle partial failures gracefully
🧩 Level 3 (Concurrency & Traps)
- Identify blocking code in async function
- Implement retry with exponential backoff
- Simulate race condition & fix it
- Design thread-safe counter
- Memory leak detection exercise
🧩 Level 4 (Framework-Ready)
- Build Flask error handler from Python exception
- Build FastAPI exception middleware
- Write shared validation used by both
- Convert logic from sync to async
- Design framework-agnostic service layer
PART E — PROJECT EVOLUTION (Module 0)
Phase 1 — Pure Python
core/
├── models.py
├── services.py
├── validators.py
├── errors.py
Phase 2 — Flask Wrapper
flask_app/
├── routes.py
├── app.py
Phase 3 — FastAPI Wrapper
fastapi_app/
├── main.py
├── dependencies.py
👉 Same core
👉 Different orchestration
INTERVIEW OUTCOMES AFTER MODULE 0
You will confidently answer:
- Why FastAPI async can be slower
- Why Flask globals break under load
- How Python memory leaks kill APIs
- Difference between coroutine & thread
- How decorators power frameworks
🚀 NEXT STEP
If you say “Start Module 0.1”, I will:
- Begin Python Execution Model
- Provide code + diagrams
- Solve Problem #1 live
- Show Flask vs FastAPI impact immediately
Just say Start Module 0.1 👇




🔥 MODULE 0.1 — Python Execution Model (Backend Engineer Level)
This is one of the MOST asked-but-poorly-answered interview topics.
If you master this, you automatically sound like a senior backend engineer.
We’ll go in this exact order:
- How Python really executes code
- Stack, Heap, Frames
- Locals vs Globals vs Closures
- Why this matters for Flask & FastAPI
- One real interview-grade problem (solved)
1️⃣ From .py File → Running Program (REAL FLOW)
What Python ACTUALLY does
your_code.py
↓
Lexer & Parser
↓
Abstract Syntax Tree (AST)
↓
Bytecode (.pyc)
↓
Python Virtual Machine (PVM)
📌 Key Insight
- Python is interpreted + bytecode compiled
- NOT line-by-line interpretation
- NOT native machine code
2️⃣ Bytecode (Why Backend Engineers Care)
Let’s see it.
def add(a, b):
    return a + b

add(2, 3)
Python converts this to bytecode instructions like:
LOAD_FAST
LOAD_FAST
BINARY_ADD
RETURN_VALUE
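You can inspect this yourself with the standard-library dis module (opcode names vary slightly by CPython version — 3.11+ shows BINARY_OP instead of BINARY_ADD):

```python
import dis

def add(a, b):
    return a + b

# Disassemble to see the stack-based instructions:
# LOAD_FAST pushes arguments, a BINARY op pops two and pushes the result
dis.dis(add)
```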
Why this matters
- Bytecode is stack-based
- Explains why Python is slower than C
- Explains async overhead
- Explains decorator cost
⚠️ Interview Trap
“Python is slow because it’s interpreted” ❌
Correct:
Python is slow because it runs on a stack-based VM with dynamic typing
3️⃣ Python Stack vs Heap (CRITICAL)
Call Stack
Stores:
- Function calls
- Local variables
- Execution state
Heap
Stores:
- Objects
- Lists, dicts, class instances
def foo():
    x = [1,2,3]
    bar(x)

def bar(y):
    y.append(4)
📌 x and y are references on stack
📌 [1,2,3] is object on heap
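A quick check with the built-in id() confirms both names are stack references to the same heap object:

```python
def bar(y):
    y.append(4)
    return id(y)            # identity of the heap object bar sees

def foo():
    x = [1, 2, 3]
    assert bar(x) == id(x)  # x and y: two references, one heap list
    return x

print(foo())  # [1, 2, 3, 4] — bar's append mutated foo's list
```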
4️⃣ Frames (The Hidden Backbone)
Each function call creates a frame object:
- Instruction pointer
- Local variables
- Reference to globals
- Reference to enclosing scope
def outer():
    a = 10
    def inner():
        return a
    return inner()
Here:
- inner()'s frame holds a reference to outer()'s scope
- This is how closures work
5️⃣ Locals vs Globals vs Builtins
Python name lookup order (LEGB Rule):
- Local
- Enclosing
- Global
- Builtins
x = 10

def f():
    print(x)
Backend Impact
- Global variables are shared across requests
- Dangerous in Flask
- Dangerous in FastAPI workers
⚠️ Interview Trap:
“Globals are safe if I don’t modify them” ❌
They are still shared memory
6️⃣ PASS-BY-OBJECT-REFERENCE (VERY IMPORTANT)
def modify(lst):
    lst.append(100)

a = []
modify(a)
print(a)
✔️ Output: [100]
Python:
- Passes reference to object
- Rebinding ≠ modifying
def reassign(x):
    x = 10
Does NOT change original object.
7️⃣ Why This Matters in Flask & FastAPI
Flask
- One process
- Multiple threads
- Shared globals = 🔥 race conditions
FastAPI
- Multiple workers
- Event loop
- Shared memory inside worker
❌ BAD
users = []
✔️ GOOD
- Request-scoped data
- Dependency injection
- Database/session per request
8️⃣ VISUAL MENTAL MODEL (USE IN INTERVIEWS)
[ Request ]
↓
[ Thread / Event Loop Task ]
↓
[ Stack Frame ]
↓
[ Heap Objects ]
Multiple requests:
- Separate stack frames
- Same heap → shared objects
🧪 PROBLEM #1 — Interview-Grade
❓ Question
What is the output?
def add_item(item, lst=[]):
    lst.append(item)
    return lst

print(add_item(1))
print(add_item(2))
❌ Wrong Answer (Common)
[1]
[2]
✅ Correct Answer
[1]
[1, 2]
WHY?
- lst is created ONCE
- Stored in the function object
- Lives on heap
- Shared across calls
✅ Correct Fix
def add_item(item, lst=None):
    if lst is None:
        lst = []
    lst.append(item)
    return lst
9️⃣ Flask vs FastAPI IMPACT (Mini Demo)
Flask
cache = []

@app.route("/add/<int:x>")
def add(x):
    cache.append(x)
    return {"cache": cache}
❌ Race conditions
❌ Shared state
❌ Not thread-safe
FastAPI
cache = []

@app.get("/add/{x}")
async def add(x: int):
    cache.append(x)
    return {"cache": cache}
❌ Still broken
❌ Async does NOT fix shared memory
🎯 What You MUST Remember (Interview Gold)
- Python uses stack-based VM
- Objects live on heap
- Frames hold references
- Globals are shared
- Async ≠ safe
- Flask & FastAPI BOTH suffer if Python fundamentals are weak
🚀 NEXT STEP
If you say “Continue to Module 0.2”, we’ll deep dive into:
🔥 Module 0.2 — Memory Management & Garbage Collection
- Reference counting
- Cyclic GC
- Memory leaks in APIs
- Profiling memory
- Flask vs FastAPI memory behavior
Just say: Continue to Module 0.2 👇




🔥 MODULE 0.2 — Python Memory Management & Garbage Collection
(Backend • Production • Interview-Critical)
This module explains why APIs slowly die, why memory keeps growing, and why “just restart the pod” is not engineering.
If you master this, you’ll sound senior+ immediately.
HOW THIS MODULE CONNECTS TO FLASK & FASTAPI
- Flask/FastAPI apps are long-running processes
- Memory bugs don’t crash immediately
- They surface under load, concurrency, async
- GC behavior directly affects latency & throughput
1️⃣ Python Memory Model (REALITY)
Python uses TWO garbage collection systems:
🧠 1. Reference Counting (Primary)
🔁 2. Cyclic Garbage Collector (Backup)
2️⃣ Reference Counting (Core Mechanism)
Every Python object has:
ref_count = number of references pointing to it
Example
a = []
b = a
[] ← a
[] ← b
ref_count = 2
del a
[] ← b
ref_count = 1
When ref_count == 0 → object is destroyed immediately.
Why This Is FAST
- No “stop-the-world” GC
- Deterministic cleanup
- Good for APIs
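You can watch the count directly with sys.getrefcount — a CPython-specific sketch; note the result includes one extra reference for the function's own argument:

```python
import sys

a = []
b = a
# References: a, b, plus getrefcount's own argument → 3 in CPython
print(sys.getrefcount(a))

del b
# Only a and the argument remain → 2
print(sys.getrefcount(a))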
3️⃣ The BIG PROBLEM: Cyclic References
Classic Cycle
a = []
a.append(a)
a → list → a
Ref count NEVER becomes 0 ❌
Result
- Memory leak
- Reference counting fails
- GC needed
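A sketch showing the cyclic collector reclaiming what reference counting cannot — a weakref lets us observe the object's death without keeping it alive:

```python
import gc
import weakref

class Node:
    pass

node = Node()
node.self_ref = node        # reference cycle: node → node
probe = weakref.ref(node)   # does NOT increase the ref count

del node                    # ref_count never reaches 0 — the cycle remains
gc.collect()                # cyclic GC finds the unreachable cycle
print(probe())              # None — the object is finally freed
```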
4️⃣ Cyclic Garbage Collector (GC)
Python’s GC detects reference cycles.
How it works (simplified)
- Tracks container objects (list, dict, class)
- Periodically scans for unreachable cycles
- Frees them
⚠️ GC is:
- CPU expensive
- Can pause execution
- BAD for latency-sensitive APIs
5️⃣ GC Generations (IMPORTANT)
Python divides objects into 3 generations:
| Generation | Objects | Frequency |
|---|---|---|
| Gen 0 | New objects | Frequent |
| Gen 1 | Survived Gen 0 | Medium |
| Gen 2 | Long-lived | Rare |
Objects surviving GC move up generations.
📌 Long-lived API objects often end in Gen 2
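The generation thresholds and per-generation counters are visible via the gc module (exact numbers vary by CPython version):

```python
import gc

# Collection thresholds for generations 0, 1, 2
# (commonly (700, 10, 10) in CPython)
print(gc.get_threshold())

# Current allocation/collection counters per generation
print(gc.get_count())
```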
6️⃣ Why Backend Engineers CARE About GC
Flask / FastAPI Problems
- Memory grows slowly
- Latency spikes
- Random slow requests
- OOM kills in Kubernetes
Causes
- Caches
- Closures
- Global variables
- Reference cycles
- Unclosed DB sessions
7️⃣ REAL MEMORY LEAK (API STYLE)
❌ BAD CODE
cache = []

def handle_request(data):
    cache.append(data)

- cache grows forever
- GC will NOT save you
- Reference count never drops
❌ BAD WITH CLOSURE
handlers = []

def register_handler():
    data = [1,2,3]
    def handler():
        return data
    handlers.append(handler)
📌 data is trapped in closure
📌 Never freed
📌 Classic production leak
8️⃣ Flask vs FastAPI Memory Behavior
Flask
- Thread-based
- Globals shared across threads
- Memory leaks affect entire app
FastAPI
- Multiple workers
- Each worker leaks separately
- Looks “fine” until traffic increases
⚠️ Interview Trap:
“FastAPI is async so memory leaks are less” ❌
Memory leaks are Python-level, not framework-level.
9️⃣ Manual GC Control (Advanced)
import gc
gc.collect()
Useful When?
- Batch jobs
- Controlled cleanup
- Rare edge cases
⚠️ NEVER call GC on every request ❌
It will destroy performance.
🔬 GC Inspection (INTERVIEW GOLD)
import gc
gc.get_count()
gc.get_stats()
Use cases:
- Detect leaks
- Debug memory spikes
- Production diagnostics
10️⃣ Context Managers = Memory Safety
❌ BAD
conn = db.connect()
# exception happens
conn.close() # never called
✅ GOOD
with db.connect() as conn:
    ...
📌 Why?
- __exit__ always runs
- Releases references
- Prevents leaks
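A minimal sketch of the same guarantee using contextlib.contextmanager — the finally branch plays the role of __exit__, and the dict stands in for a real connection:

```python
from contextlib import contextmanager

@contextmanager
def managed_conn():
    conn = {"open": True}     # stand-in for a real DB connection
    try:
        yield conn
    finally:
        conn["open"] = False  # always runs, even if the body raises

with managed_conn() as conn:
    assert conn["open"]       # resource usable inside the block

print(conn["open"])  # False — released on exit
```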
11️⃣ FastAPI & Async Memory Trap
async def handler():
    data = await fetch_big_data()
    return process(data)
❌ If data is stored globally or cached improperly
❌ Async does NOT clean memory automatically
12️⃣ PROBLEM #2 — INTERVIEW-GRADE
❓ What leaks memory?
handlers = []

def register():
    data = {"x": 100}
    def handler():
        return data["x"]
    handlers.append(handler)
✅ ANSWER
- data lives forever
- The closure holds a reference to it
- handlers keeps the function alive
- GC cannot free it
✅ FIX
def register():
    data = {"x": 100}
    def handler(d=data):
        return d["x"]
    return handler

⚠️ Note: the default argument removes the closure cell, but d still references the dict — it lives as long as the handler does.
OR better:
- Avoid closures for long-lived handlers
- Use stateless functions
13️⃣ Flask & FastAPI PARALLEL NOTEBOOK DEMO
Flask (Memory Leak)
leak = []

@app.route("/leak")
def leak_memory():
    leak.append([1]*100000)
    return "leaking"
FastAPI (Same Bug)
leak = []

@app.get("/leak")
async def leak_memory():
    leak.append([1]*100000)
    return "leaking"
❌ Both leak
❌ Framework irrelevant
✔️ Python fundamentals matter
🎯 INTERVIEW SUMMARY (MEMORIZE)
- Python uses reference counting
- Cycles require GC
- GC causes pauses
- Globals + closures cause leaks
- Async does NOT fix memory
- Flask & FastAPI behave the same at memory level
🧪 HOMEWORK (MANDATORY)
- Write a program that leaks memory
- Fix it using context managers
- Inspect GC stats before & after
- Run the same logic in Flask & FastAPI
🚀 NEXT STEP
If you say “Continue to Module 0.3”, we will cover:
🔥 Module 0.3 — Mutability, Identity & Shared State
- is vs ==
- Mutable defaults
- Shared state bugs
- Thread-safety
- Flask vs FastAPI request isolation
Say Continue to Module 0.3 👇




🔥 MODULE 0.3 — Mutability, Identity & Shared State
(The #1 Hidden Cause of Production Bugs in Flask & FastAPI)
If you understand this module deeply, you will instantly detect bugs that many senior engineers miss.
This module explains:
- Why APIs behave differently under load
- Why bugs appear only in production
- Why async doesn’t save you
- Why “works on my machine” is meaningless
1️⃣ Mutability vs Immutability (REAL MEANING)
Immutable Objects
- int, float, str, tuple, frozenset
- Cannot be changed in-place
- New object created on change
x = 10
x += 1 # new object
Mutable Objects
- list, dict, set, custom classes
- Modified in-place
- Same memory location
lst = [1,2]
lst.append(3)
📌 Backend impact:
- Mutable objects are shared dangers
- Immutable objects are safe by default
2️⃣ Identity (is) vs Equality (==) — Interview Favorite
a = [1,2]
b = [1,2]
a == b # True (values equal)
a is b # False (different objects)
Why is exists
- Checks memory identity
- Used for:
  - None checks
  - Singleton checks
  - Performance optimizations
if x is None:
    ...
⚠️ Interview Trap:
a = 256
b = 256
a is b # True (implementation detail)
DO NOT rely on this.
3️⃣ Mutable Default Arguments (API KILLER)
❌ BUGGY CODE
def add_user(user, users=[]):
    users.append(user)
    return users
Why this is dangerous
- users is created once
- Stored in the function object
- Shared across requests
In APIs
- Data leaks between users
- Race conditions
- Security bugs
✅ Correct Pattern
def add_user(user, users=None):
    if users is None:
        users = []
    users.append(user)
    return users
4️⃣ Shared State Across Requests (CRITICAL)
❌ Global State
users = []

def create_user(u):
    users.append(u)
In APIs:
- Flask threads → race conditions
- FastAPI async tasks → race conditions
- Multiple workers → inconsistent state
⚠️ Interview Trap
“FastAPI async avoids shared state issues” ❌
Async only changes scheduling, not memory sharing.
5️⃣ Flask Shared State Example (BROKEN)
users = []

@app.route("/add")
def add():
    users.append(request.args["name"])
    return {"users": users}
What goes wrong
- Two requests at same time
- Inconsistent list
- Data corruption
- Non-deterministic bugs
6️⃣ FastAPI Shared State Example (ALSO BROKEN)
users = []

@app.get("/add")
async def add(name: str):
    users.append(name)
    return {"users": users}
❌ Same bug
❌ Async ≠ isolation
7️⃣ Thread Safety vs Async Safety (VERY IMPORTANT)
Thread Safety
- Multiple OS threads
- Shared memory
- Requires locks
Async Safety
- Single thread
- Cooperative multitasking
- Shared memory still exists
📌 Both require careful state management
8️⃣ Race Condition (Core Concept)
counter = 0

def increment():
    global counter
    counter += 1
Two threads:
- Read counter = 0
- Increment
- Write 1 twice
Expected: 2
Actual: 1 ❌
9️⃣ Fixing Shared State (Correct Patterns)
Pattern 1 — Request-Scoped Data
- Local variables
- Function parameters
- Dependency injection
Pattern 2 — Immutable Data
config = ("host", "port")
Pattern 3 — Locks (LAST RESORT)
from threading import Lock

lock = Lock()

with lock:
    counter += 1
⚠️ Locks reduce performance
⚠️ Avoid in async unless necessary
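A runnable sketch of Pattern 3 — with the lock serializing each read-modify-write, four threads of 100,000 increments always total exactly 400,000:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # serialize the read-modify-write
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 — deterministic only because of the lock
```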
10️⃣ Flask: Correct State Handling
@app.route("/sum")
def compute():
    data = [1,2,3]  # request-scoped
    return {"sum": sum(data)}
✔️ Safe
✔️ Isolated
11️⃣ FastAPI: Correct State Handling
@app.get("/sum")
async def compute():
    data = [1,2,3]
    return {"sum": sum(data)}
✔️ Safe
✔️ Stateless
12️⃣ Shared State via Dependency Injection (FastAPI)
def get_db():
    db = connect()
    try:
        yield db
    finally:
        db.close()
📌 Lifecycle-controlled
📌 Clean memory
📌 No leaks
13️⃣ PROBLEM #3 — INTERVIEW-GRADE
❓ What is the output?
x = [1,2,3]

def f():
    x.append(4)

f()
print(x)
✅ Answer
[1,2,3,4]
Why?
- x is mutable
- The function modifies the same object
❓ Variant (TRAP)
x = [1,2,3]

def f():
    x = x + [4]

f()
✅ Answer
UnboundLocalError
Why?
- Assignment makes x local
- It is read before assignment
14️⃣ Flask & FastAPI Parallel Notebook Exercise
Task
- Build a /counter endpoint
- Hit it concurrently
- Observe incorrect results
- Fix using:
- Request scope
- External storage (Redis)
- Locks (last)
🎯 INTERVIEW TAKEAWAYS (MEMORIZE)
- Mutability causes shared-state bugs
- is ≠ ==
- Globals break APIs
- Async does NOT isolate memory
- Stateless APIs scale
- Frameworks don’t save bad Python
🧪 HOMEWORK
- Write a Flask API with shared mutable list
- Break it under concurrency
- Fix it using request-scoped logic
- Repeat in FastAPI
- Explain why async didn’t help
🚀 NEXT STEP
If you say “Continue to Module 0.4”, we’ll cover:
🔥 Module 0.4 — Functions, Closures & Decorators (Auth, Logging, DI)
- How Flask routes are decorators
- How FastAPI dependencies work
- Closure memory traps
- Writing your own auth decorator
Say Continue to Module 0.4 👇




🔥 MODULE 0.4 — Functions, Closures & Decorators
(Auth • Logging • Dependency Injection • Interview Gold)
This module explains how Flask and FastAPI are actually built.
If you master this, frameworks stop feeling “magical”.
Everything you’ll see here directly powers:
- Flask @app.route
- FastAPI Depends()
- Auth middleware
- Logging, rate limiting, retries
1️⃣ Functions Are Objects (CORE TRUTH)
In Python:
def add(a, b):
    return a + b
This means:
- add is an object
- Can be passed, stored, returned
- Has attributes
- Has a closure
print(type(add)) # <class 'function'>
📌 Backend impact:
- Routes are functions
- Middlewares wrap functions
- Decorators replace functions
2️⃣ Higher-Order Functions (Foundation)
A function that:
- Takes a function
- Returns a function
def apply(fn, x, y):
    return fn(x, y)

apply(add, 2, 3)
This concept powers every web framework.
3️⃣ Closures (POWERFUL + DANGEROUS)
What is a Closure?
A function that remembers variables from its outer scope.
def outer():
    x = 10
    def inner():
        return x
    return inner

fn = outer()
print(fn()) # 10
📌 x survives even after outer() exits
📌 Stored inside inner.__closure__
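You can see the captured variable directly — it lives in a cell attached to the returned function:

```python
def outer():
    x = 10
    def inner():
        return x
    return inner

fn = outer()
# The captured variable survives in a closure cell on the function object
print(fn.__closure__[0].cell_contents)  # 10
```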
4️⃣ Closure Memory Trap (PRODUCTION BUG)
handlers = []
def register():
big_data = [1] * 1_000_000
def handler():
return len(big_data)
handlers.append(handler)
❌ big_data is never freed
❌ Memory leak
❌ GC cannot help
✅ Safer Alternative
def handler(big_data):
    return len(big_data)
📌 Stateless functions scale
📌 Closures should be short-lived
5️⃣ Decorators — How They REALLY Work
Decorator = Closure + Higher-Order Function
def my_decorator(fn):
    def wrapper(*args, **kwargs):
        print("Before")
        result = fn(*args, **kwargs)
        print("After")
        return result
    return wrapper
Usage:
@my_decorator
def hello():
    print("Hello")
Equivalent to:
hello = my_decorator(hello)
⚠️ Interview Trap:
Decorators modify functions ❌
Correct:
Decorators replace functions
6️⃣ Why functools.wraps Is CRITICAL
from functools import wraps
def deco(fn):
@wraps(fn)
def wrapper(*args, **kwargs):
return fn(*args, **kwargs)
return wrapper
Without @wraps:
- Function name lost
- Docstring lost
- Flask routing breaks
- FastAPI OpenAPI breaks
📌 Always use @wraps
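A side-by-side sketch of what @wraps preserves — without it, the wrapper's identity shadows the original function's:

```python
from functools import wraps

def bad_deco(fn):
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

def good_deco(fn):
    @wraps(fn)                       # copy name, docstring, metadata
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

@bad_deco
def ping():
    """Health check."""

@good_deco
def pong():
    """Health check."""

print(ping.__name__)  # wrapper — identity lost
print(pong.__name__)  # pong    — identity (and docstring) preserved
```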
7️⃣ Flask Routes ARE Decorators
@app.route("/users")
def get_users():
    return users
Internally:
- route() returns a decorator
- Registers the function in the routing table
- Returns wrapped function
Simplified mental model:
get_users = app.route("/users")(get_users)
8️⃣ Flask Custom Auth Decorator (REAL)
from functools import wraps
from flask import request, abort
def require_token(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        if request.headers.get("X-TOKEN") != "secret":
            abort(401)
        return fn(*args, **kwargs)
    return wrapper
Usage:
@app.route("/secure")
@require_token
def secure():
    return "OK"
📌 Closure captures fn
📌 Decorator injects auth logic
9️⃣ FastAPI Does NOT Use Decorators for Auth (IMPORTANT)
FastAPI uses Dependency Injection instead.
from fastapi import Depends, Header, HTTPException
def require_token(x_token: str = Header()):
    if x_token != "secret":
        raise HTTPException(status_code=401)
Usage:
@app.get("/secure")
async def secure(dep=Depends(require_token)):
    return "OK"
📌 No function wrapping
📌 No closure trap
📌 Better composability
10️⃣ Decorators vs Dependencies (INTERVIEW COMPARISON)
| Topic | Flask | FastAPI |
|---|---|---|
| Auth | Decorators | Dependencies |
| Execution | Wrap function | Inject arguments |
| Order | Stacking sensitive | Deterministic |
| Testability | Harder | Easier |
| Async | Awkward | Native |
11️⃣ Parameterized Decorators (ADVANCED)
def rate_limit(limit):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            print(f"Limit = {limit}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator
Usage:
@rate_limit(10)
def api():
    pass
📌 Flask uses this pattern internally
12️⃣ PROBLEM #4 — INTERVIEW-GRADE
❓ What prints?
funcs = []
for i in range(3):
    def f():
        return i
    funcs.append(f)

print([fn() for fn in funcs])
❌ Common Answer
[0, 1, 2]
✅ Correct Answer
[2, 2, 2]
WHY?
- The closure captures the variable, not its value
- i ends the loop as 2
✅ Fix
for i in range(3):
    def f(i=i):
        return i
13️⃣ Flask vs FastAPI — Same Logic, Different Wiring
Flask Logging Decorator
def log(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        print("Request")
        return fn(*args, **kwargs)
    return wrapper
FastAPI Logging Dependency
def log():
    print("Request")

@app.get("/items")
async def items(dep=Depends(log)):
    return []
📌 Same goal
📌 Cleaner execution in FastAPI
🎯 INTERVIEW TAKEAWAYS (MEMORIZE)
- Functions are objects
- Decorators replace functions
- Closures can leak memory
- Flask relies on decorators
- FastAPI relies on DI
- @wraps is mandatory
- Closure bugs appear under load
🧪 HOMEWORK (MANDATORY)
- Write a Flask auth decorator
- Break it by forgetting @wraps
- Rewrite same logic using FastAPI dependency
- Explain why FastAPI approach scales better
🚀 NEXT STEP
If you say “Continue to Module 0.5”, we’ll cover:
🔥 Module 0.5 — Iterators, Generators & Streaming APIs
- Lazy execution
- Streaming responses
- Memory-safe large data APIs
- Flask vs FastAPI streaming differences
Say Continue to Module 0.5 👇




🔥 MODULE 0.5 — Iterators, Generators & Streaming APIs
(Memory-Safe • High-Scale • Interview-Critical)
This module explains how Netflix-scale APIs send massive data without dying
and why generators are mandatory knowledge for backend engineers.
If you master this, you’ll:
- Eliminate memory spikes
- Build streaming APIs
- Explain async vs streaming correctly in interviews
1️⃣ Iterators vs Generators (CORE DIFFERENCE)
Iterator
- Object with __iter__() and __next__()
- Manually managed state
class Counter:
    def __init__(self, n):
        self.n = n
        self.i = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.i >= self.n:
            raise StopIteration
        val = self.i
        self.i += 1
        return val
⚠️ Verbose
⚠️ Error-prone
Generator
- Uses yield
- Python manages state
- Lazy execution
def counter(n):
    for i in range(n):
        yield i
✔️ Cleaner
✔️ Safer
✔️ Faster to write
2️⃣ Why Generators Matter in APIs
❌ BAD (Loads Everything)
def get_users():
    return list(db.fetch_all())
- Loads entire dataset
- High memory usage
- Slow first byte
✅ GOOD (Streaming)
def get_users():
    for user in db.fetch_all():
        yield user
📌 Data sent gradually
📌 Low memory
📌 Faster response start
3️⃣ Generator Execution Model (INTERVIEW GOLD)
def gen():
    print("A")
    yield 1
    print("B")
    yield 2
Execution:
call gen() → no execution
next() → prints A, yields 1
next() → prints B, yields 2
next() → StopIteration
📌 Code executes only when requested
4️⃣ Generators + Exceptions
def read():
    try:
        yield "data"
    finally:
        print("cleanup")
📌 finally executes even if client disconnects
📌 CRITICAL for API cleanup
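A sketch you can run — closing the generator (what happens when a client disconnects mid-stream) still triggers the cleanup; an events list records it so we can verify:

```python
events = []

def read():
    try:
        yield "data"
    finally:
        events.append("cleanup")  # runs even if the consumer stops early

g = read()
print(next(g))  # data
g.close()       # simulates a client disconnect
print(events)   # ['cleanup']
```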
5️⃣ Streaming APIs — REAL WORLD NEED
Use cases
- Large CSV export
- Logs streaming
- ML inference output
- Video chunks
- Event feeds
6️⃣ Flask Streaming API (WSGI)
from flask import Response
def generate():
for i in range(5):
yield f"{i}\n"
@app.route("/stream")
def stream():
return Response(generate(), mimetype="text/plain")
📌 Generator → chunks
📌 Memory safe
📌 Blocking per request thread
⚠️ Flask uses thread per request
7️⃣ FastAPI Streaming API (ASGI)
from fastapi.responses import StreamingResponse
def generate():
    for i in range(5):
        yield f"{i}\n"

@app.get("/stream")
async def stream():
    return StreamingResponse(generate(), media_type="text/plain")
📌 Non-blocking
📌 Event-loop friendly
📌 Scales better for many clients
8️⃣ Streaming + Async (SUBTLE TRAP)
async def gen():
    for i in range(5):
        yield i

⚠️ This IS a valid async generator — the trap is consuming it like a sync one
❌ A plain for x in gen() or next(gen()) raises TypeError
✔️ It must be consumed with async for:
async for x in gen():
    ...
FastAPI supports both sync & async generators
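A runnable sketch — consuming the async generator requires async for inside a coroutine:

```python
import asyncio

async def gen():
    for i in range(3):
        yield i               # async generator: pauses at each yield

async def main():
    out = []
    async for x in gen():     # a plain `for x in gen()` raises TypeError
        out.append(x)
    return out

print(asyncio.run(main()))    # [0, 1, 2]
```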
9️⃣ Memory Impact Comparison (INTERVIEW TABLE)
| Pattern | Memory | Latency | Scale |
|---|---|---|---|
| List return | High | Slow start | Poor |
| Generator | Low | Fast start | Good |
| Async generator | Lowest | Best | Excellent |
10️⃣ Streaming with Live Test Data
ORDERS = [
    {"id": 1, "amount": 100},
    {"id": 2, "amount": None},
    {"id": 3, "amount": 300},
]
Generator with Validation
def stream_orders():
    for o in ORDERS:
        if o["amount"] is None:
            continue
        yield f"{o}\n"
📌 Null-safe
📌 Lazy
📌 Memory-safe
11️⃣ Generator Pitfall — One-Time Use
g = (i for i in range(3))
list(g) # [0,1,2]
list(g) # []
⚠️ Interview Trap:
Generators are reusable ❌
12️⃣ Generator + Database Cursor (REAL PROD)
def fetch_rows(cursor):
    while True:
        row = cursor.fetchone()
        if not row:
            break
        yield row
✔️ No full table load
✔️ Cursor-based streaming
13️⃣ PROBLEM #5 — INTERVIEW-GRADE
❓ What prints?
def gen():
    yield 1
    yield 2

g = gen()
print(next(g))
print(list(g))
✅ Answer
1
[2]
📌 Generator resumes from last yield
14️⃣ Flask vs FastAPI — STREAMING SUMMARY
| Topic | Flask | FastAPI |
|---|---|---|
| Streaming | Yes | Yes |
| Blocking | Thread-blocking | Non-blocking |
| Async | Awkward | Native |
| Scale | Medium | High |
🎯 INTERVIEW TAKEAWAYS (MEMORIZE)
- Generators are lazy
- Streaming avoids memory spikes
- Generators are one-time use
- Flask streaming blocks threads
- FastAPI streaming scales better
- Async ≠ streaming automatically
🧪 HOMEWORK (MANDATORY)
- Create API that streams 1M numbers
- Measure memory usage
- Implement same API using list (compare)
- Do it in Flask & FastAPI
- Explain difference in interviews
🚀 NEXT STEP
If you say “Continue to Module 0.6”, we’ll cover:
🔥 Module 0.6 — Exceptions, Error Propagation & API Error Design
- Custom exception hierarchies
- Flask error handlers
- FastAPI exception middleware
- Mapping Python errors → HTTP responses
Say Continue to Module 0.6 👇




🔥 MODULE 0.6 — Exceptions, Error Propagation & API Error Design
(Production-Grade • Interview-Critical • Framework-Agnostic Core)
This module separates toy APIs from real backend systems.
Great APIs don’t just work — they fail correctly.
You’ll learn:
- How Python exceptions actually propagate
- How to design clean error hierarchies
- How Flask & FastAPI convert exceptions → HTTP
- How to avoid leaking internals (security + stability)
1️⃣ Python Exception Model (REAL MECHANICS)
What happens when an exception is raised?
raise Exception
↓
Unwind current stack frame
↓
Look for matching except
↓
Bubble up until handled or crash
📌 Python does stack unwinding, not jumping
📌 Cleanup happens via finally
2️⃣ Exception Hierarchy (INTERVIEW FAVORITE)
All exceptions inherit from:
BaseException
├── Exception
│ ├── ValueError
│ ├── TypeError
│ ├── KeyError
│ └── ...
├── SystemExit
└── KeyboardInterrupt
⚠️ Interview Trap:
except BaseException:
    ...
❌ Catches KeyboardInterrupt
❌ Breaks graceful shutdown
✔️ Always catch Exception
3️⃣ Why Backend APIs NEED Custom Exceptions
❌ BAD (Leaky)
int("abc") # ValueError
Client sees:
{"error": "invalid literal for int()"}
❌ Leaks internals
❌ Inconsistent API responses
✅ GOOD (Controlled)
class ValidationError(Exception):
    pass
Raise domain-level errors, not raw Python ones.
4️⃣ Designing an API Exception Hierarchy (CORE)
class APIError(Exception):
    status_code = 500
    message = "Internal Server Error"

class ValidationError(APIError):
    status_code = 400
    message = "Invalid input"

class NotFoundError(APIError):
    status_code = 404
    message = "Resource not found"
📌 Framework-agnostic
📌 Reusable in Flask & FastAPI
📌 Clean mapping to HTTP
5️⃣ Error Propagation Across Layers
[ Route ]
↓
[ Service ]
↓
[ Validation ]
↓
raise ValidationError
↑
[ Error Handler ]
↑
HTTP Response
📌 Lower layers never return HTTP
📌 Only raise Python exceptions
📌 Framework maps them
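The whole flow can be sketched framework-free — the hypothetical handle() below stands in for the framework's error handler, and parse_amount() for the validation layer:

```python
class APIError(Exception):
    status_code = 500
    message = "Internal Server Error"

class ValidationError(APIError):
    status_code = 400
    message = "Invalid input"

def parse_amount(raw):
    # Validation layer: raises domain errors, never returns HTTP
    try:
        return int(raw)
    except ValueError as e:
        raise ValidationError() from e

def handle(raw):
    # Framework layer: maps the exception to an HTTP-style response
    try:
        return 200, {"amount": parse_amount(raw)}
    except APIError as e:
        return e.status_code, {"error": e.message}

print(handle("42"))   # (200, {'amount': 42})
print(handle("abc"))  # (400, {'error': 'Invalid input'})
```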
6️⃣ Flask Error Handling (WSGI STYLE)
Global Error Handler
from flask import jsonify

@app.errorhandler(APIError)
def handle_api_error(err):
    return jsonify({"error": err.message}), err.status_code
Route Example
@app.route("/users/<int:id>")
def get_user(id):
    if id not in USERS:
        raise NotFoundError()
📌 Flask matches exception type
📌 Converts to HTTP response
7️⃣ FastAPI Exception Handling (ASGI STYLE)
Custom Exception Handler
from fastapi import Request
from fastapi.responses import JSONResponse

@app.exception_handler(APIError)
async def api_error_handler(request: Request, exc: APIError):
    return JSONResponse(
        status_code=exc.status_code,
        content={"error": exc.message},
    )
📌 Async-friendly
📌 Non-blocking
📌 Cleaner separation
8️⃣ Validation Errors: Flask vs FastAPI
Flask (Manual)
if not email:
    raise ValidationError("email required")
FastAPI (Automatic)
from pydantic import BaseModel

class User(BaseModel):
    email: str
FastAPI auto-raises:
- 422 Unprocessable Entity
- Structured error response
⚠️ Interview Trap:
FastAPI has no validation errors ❌
FastAPI raises them before your function runs
9️⃣ Exception Chaining (ADVANCED)
try:
    int("abc")
except ValueError as e:
    raise ValidationError("Bad number") from e
📌 Keeps root cause
📌 Better debugging
📌 Cleaner logs
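The root cause survives on the new exception's __cause__ attribute — a runnable sketch:

```python
class ValidationError(Exception):
    pass

cause = None
try:
    try:
        int("abc")
    except ValueError as e:
        raise ValidationError("Bad number") from e
except ValidationError as err:
    cause = err.__cause__  # the original ValueError, preserved by `from e`

print(type(cause).__name__)  # ValueError
```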
10️⃣ Finally Blocks = Resource Safety
def process():
    conn = connect()
    try:
        do_work(conn)
    finally:
        conn.close()
📌 Always runs
📌 Prevents leaks
📌 Critical in APIs
11️⃣ Error Logging vs Error Response (IMPORTANT)
❌ BAD
return {"error": str(e)}
✅ GOOD
- Log full exception internally
- Return sanitized message externally
log.exception(e)
return {"error": "Something went wrong"}
12️⃣ Flask vs FastAPI Error Flow (INTERVIEW TABLE)
| Topic | Flask | FastAPI |
|---|---|---|
| Handling | Decorators | Middleware |
| Async | No | Yes |
| Validation | Manual | Automatic |
| Error Schema | Manual | OpenAPI |
| Testability | Medium | High |
13️⃣ PROBLEM #6 — INTERVIEW-GRADE
❓ What happens?
def f():
    try:
        return 1
    finally:
        return 2

print(f())
✅ Answer
2
📌 finally overrides return
📌 Very common interview trap
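The same trap also swallows exceptions: a `return` inside `finally` discards any in-flight exception entirely.

```python
def g():
    try:
        raise ValueError("lost forever")
    finally:
        return "swallowed"  # discards the ValueError -- avoid in real code


assert g() == "swallowed"  # no exception escapes
```

This is why linters flag `return` (and `break`/`continue`) inside `finally` blocks.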
14️⃣ Flask & FastAPI Parallel Notebook Exercise
Task
- Build `/divide?a=10&b=0`
- Raise a Python error
- Convert to clean API error
- Ensure:
- HTTP 400
- Safe message
- Logged internally
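The exercise can be solved framework-first: a pure service function raises, and one mapper converts domain errors to a `(status, body)` pair. Names here (`divide`, `to_response`) are illustrative; Flask/FastAPI just plug in around them.

```python
class ValidationError(Exception):
    status_code = 400


def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError as e:
        # translate the low-level error into a domain error, keeping the cause
        raise ValidationError("b must be non-zero") from e


def to_response(fn, *args):
    """Map domain errors to a safe (status, body) pair."""
    try:
        return 200, {"result": fn(*args)}
    except ValidationError as e:
        return e.status_code, {"error": str(e)}


assert to_response(divide, 10, 2) == (200, {"result": 5.0})
assert to_response(divide, 10, 0) == (400, {"error": "b must be non-zero"})
```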
🎯 INTERVIEW TAKEAWAYS (MEMORIZE)
- Exceptions unwind stack
- Catch `Exception`, not `BaseException`
- Design domain errors
- Never leak internals
- Flask & FastAPI differ in plumbing, not concept
- `finally` always executes
🧪 HOMEWORK (MANDATORY)
- Create APIError hierarchy
- Raise errors in service layer
- Map them in Flask
- Map same errors in FastAPI
- Explain difference in interviews
🚀 NEXT STEP
If you say “Continue to Module 0.7”, we’ll cover:
🔥 Module 0.7 — Concurrency: Threads, Async & Event Loop (CRITICAL)
- Threading vs multiprocessing
- Async internals
- Event loop mechanics
- Flask vs FastAPI concurrency model
- Why async can be slower
Say Continue to Module 0.7 👇



🔥 MODULE 0.7 — Concurrency: Threads, Async & the Event Loop
(The Most Misunderstood Topic in Backend Interviews)
This module explains why async sometimes feels slower, why FastAPI doesn’t magically fix performance, and how concurrency actually works in Python.
If you master this, you can:
- Choose the right concurrency model
- Debug production slowdowns
- Answer “Why async?” confidently in interviews
BIG PICTURE (MEMORIZE THIS)
Concurrency ≠ Parallelism
Async ≠ Faster
FastAPI ≠ Magic
1️⃣ Concurrency vs Parallelism (FOUNDATION)
Concurrency
- Multiple tasks in progress
- Interleaved execution
- Single CPU core possible
Parallelism
- Multiple tasks at the same time
- Requires multiple CPU cores
- True simultaneous execution
📌 Python async → concurrency
📌 Python multiprocessing → parallelism
2️⃣ The GIL (Global Interpreter Lock)
What is the GIL?
- Mutex protecting Python bytecode execution
- Only one thread executes Python code at a time
Thread A → holds GIL → executes
Thread B → waits
Why GIL Exists
- Simplifies memory management
- Faster single-threaded performance
- Safer reference counting
⚠️ Interview Trap:
GIL makes Python single-threaded ❌
Correct:
GIL limits CPU-bound threads, not I/O-bound ones
3️⃣ Threading in Python (I/O Concurrency)
When threads HELP
- Network calls
- File I/O
- Database calls
- Waiting on external services
from threading import Thread

def task():
    fetch_api()  # placeholder for a blocking I/O call

Thread(target=task).start()
📌 While one thread waits for I/O → another runs
📌 GIL is released during I/O
When threads FAIL
- CPU-heavy loops
- Data processing
- ML inference (pure Python)
📌 GIL prevents parallel execution
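A sketch of I/O-style thread concurrency, with `time.sleep` standing in for a network or DB call: the waits overlap because the GIL is released while sleeping.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def fake_io(n):
    time.sleep(0.05)  # stand-in for a blocking network/DB call
    return n * 2


start = time.perf_counter()
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(fake_io, range(5)))  # map preserves input order
elapsed = time.perf_counter() - start

assert results == [0, 2, 4, 6, 8]
assert elapsed < 0.2  # sequential would take ~0.25s; the sleeps overlapped
```

Swap `fake_io` for a CPU-bound loop and the speedup disappears: the GIL is never released.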
4️⃣ Multiprocessing (TRUE PARALLELISM)
from multiprocessing import Process
Process(target=task).start()
How it works
- Separate OS processes
- Separate memory
- No shared GIL
Downsides
- High memory usage
- Slow inter-process communication
- Serialization overhead
📌 Used by Gunicorn workers
📌 Used for CPU-bound workloads
5️⃣ Async / Await (Event Loop Model)
What async REALLY is
- Single-threaded
- Cooperative multitasking
- Explicit yield points
async def task():
    await fetch_data()
📌 await = “pause here, run someone else”
📌 No preemption like threads
6️⃣ The Event Loop (CORE MECHANICS)
Think of the event loop as:
while tasks_exist:
    pick_ready_task()
    run_until_await()
Components
- Task queue
- Futures
- Callbacks
- Await points
📌 Async code runs until it hits await
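The loop mechanics above are observable with a tiny `asyncio` sketch: both tasks start before either finishes, and the shorter sleep resumes first.

```python
import asyncio


async def worker(name, delay, log):
    log.append(f"{name}:start")
    await asyncio.sleep(delay)   # yield point -- the event loop switches tasks here
    log.append(f"{name}:done")
    return name


async def main():
    log = []
    results = await asyncio.gather(
        worker("a", 0.02, log),
        worker("b", 0.01, log),
    )
    return results, log


results, log = asyncio.run(main())
assert results == ["a", "b"]              # gather preserves argument order
assert log[:2] == ["a:start", "b:start"]  # both started before either finished
assert log[2] == "b:done"                 # shorter sleep resumes first
```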
7️⃣ Async ≠ Non-Blocking (CRITICAL TRAP)
async def bad():
    time.sleep(5)  # BLOCKS EVENT LOOP ❌
This:
- Blocks entire server
- Freezes all requests
- Destroys concurrency
Correct
async def good():
    await asyncio.sleep(5)
8️⃣ Flask Concurrency Model
Flask = WSGI
- One request → one thread
- Blocking execution
- Scales via:
- Threads
- Processes (Gunicorn workers)
Client → Thread → Handler → Response
📌 Simple
📌 Predictable
📌 Good for sync workloads
9️⃣ FastAPI Concurrency Model
FastAPI = ASGI
- Event loop
- Async tasks
- Non-blocking I/O
Client → Event Loop → Task → await → switch
📌 Excellent for I/O-heavy systems
📌 Bad if blocking code sneaks in
10️⃣ WHY ASYNC CAN BE SLOWER (INTERVIEW GOLD)
Scenario
async def compute():
    total = 0
    for i in range(10_000_000):
        total += i
    return total
❌ CPU-bound
❌ Blocks event loop
❌ Worse than sync
📌 Async adds overhead
📌 Context switching costs
📌 No GIL escape
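The fix is to move the loop off the event loop. A sketch with `run_in_executor` (passing `None` uses the default thread pool, which unblocks the loop; for true CPU parallelism you would pass a `ProcessPoolExecutor` instead):

```python
import asyncio


def compute():
    # CPU-bound work -- would block the loop if run inline in a coroutine
    return sum(range(1_000_000))


async def handler():
    loop = asyncio.get_running_loop()
    # None => default thread pool; use a ProcessPoolExecutor to escape the GIL
    return await loop.run_in_executor(None, compute)


result = asyncio.run(handler())
assert result == sum(range(1_000_000))
```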
11️⃣ Mixing Sync & Async (REAL PRODUCTION)
FastAPI allows BOTH
@app.get("/sync")
def sync_route():
    return slow_io()

@app.get("/async")
async def async_route():
    return await async_io()
📌 Sync runs in threadpool
📌 Async runs in event loop
⚠️ Interview Trap:
FastAPI endpoints must be async ❌
12️⃣ Blocking Detection (VERY IMPORTANT)
❌ Dangerous Calls in Async
- `time.sleep`
- `requests.get`
- CPU-heavy loops
- Pandas operations
✅ Safe Alternatives
- `asyncio.sleep`
- `httpx.AsyncClient`
- Move CPU work to a process pool
13️⃣ Flask vs FastAPI — CONCURRENCY TABLE
| Topic | Flask | FastAPI |
|---|---|---|
| Model | Threads | Event loop |
| Async | Limited | Native |
| CPU tasks | OK | BAD |
| I/O tasks | OK | Excellent |
| Debugging | Easier | Harder |
14️⃣ PROBLEM #7 — INTERVIEW-GRADE
❓ What happens?
import asyncio

async def f():
    print("A")
    await asyncio.sleep(0)
    print("B")

asyncio.run(f())
✅ Answer
A
B
But internally:
- Task yields control
- Event loop schedules resume
📌 This explains async scheduling
15️⃣ Flask & FastAPI Parallel Experiment
Experiment
- Endpoint calls external API
- Hit with 100 concurrent requests
Result
- Flask → threads pile up
- FastAPI → handles smoothly
Change ONE line:
- Replace the async HTTP call with `requests.get`
💥 FastAPI collapses
🎯 INTERVIEW TAKEAWAYS (MEMORIZE)
- GIL blocks CPU-bound threads
- Async improves I/O concurrency
- Async does NOT speed up CPU
- Blocking calls kill event loops
- Flask = thread concurrency
- FastAPI = cooperative multitasking
- Wrong model → worse performance
🧪 HOMEWORK (MANDATORY)
- Build CPU-heavy endpoint
- Run in Flask
- Run same in FastAPI async
- Measure latency
- Explain why FastAPI was slower
🚀 NEXT STEP
If you say “Continue to Module 0.8”, we’ll finish Module 0 with:
🔥 Module 0.8 — Context Managers, Resource Lifecycles & Cleanup
- DB sessions
- File handles
- Request lifecycle safety
- Flask teardown vs FastAPI lifespan
Say Continue to Module 0.8 👇




🔥 MODULE 0.8 — Context Managers, Resource Lifecycles & Cleanup
(DB Sessions • Files • Network • Production Safety)
This is the final pillar of Module 0.
If you get this wrong, memory leaks, connection leaks, and outages are guaranteed.
This module explains:
- How Python guarantees cleanup
- Why `with` is non-negotiable in backend code
- How Flask & FastAPI manage request lifecycles
- How to design leak-free services
1️⃣ Why Resource Management Matters in APIs
APIs deal with:
- Database connections
- File handles
- Network sockets
- Locks
- Caches
❌ BAD
conn = db.connect()
data = conn.query()
# crash
📌 Connection never closed
📌 Pool exhausted
📌 Production outage
✅ GOOD
with db.connect() as conn:
    data = conn.query()
📌 Guaranteed cleanup
📌 Exception-safe
📌 Thread-safe
2️⃣ What a Context Manager REALLY Is
A context manager is any object that implements:
__enter__()
__exit__(exc_type, exc, tb)
Execution Flow
__enter__ → block → __exit__
Even if:
- Exception occurs
- Function returns early
- Client disconnects
3️⃣ Custom Context Manager (INTERVIEW CORE)
class DBSession:
    def __enter__(self):
        print("OPEN")
        return self
    def __exit__(self, exc_type, exc, tb):
        print("CLOSE")
Usage:
with DBSession():
    print("WORK")
Output:
OPEN
WORK
CLOSE
📌 __exit__ always runs
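One detail worth knowing for interviews: if `__exit__` returns a truthy value, the exception is suppressed. A sketch (this is essentially what the stdlib's `contextlib.suppress` does):

```python
class Suppress:
    """Sketch of a context manager that swallows chosen exception types."""

    def __init__(self, *exc_types):
        self.exc_types = exc_types

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Returning True tells Python the exception has been handled
        return exc_type is not None and issubclass(exc_type, self.exc_types)


with Suppress(ValueError):
    raise ValueError("swallowed")
# execution continues here -- the error never propagated
```

Returning `None` (the default) lets the exception propagate, which is what you usually want for cleanup-only managers.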
4️⃣ Contextlib (Cleaner Pattern)
from contextlib import contextmanager
@contextmanager
def db_session():
    conn = connect()
    try:
        yield conn
    finally:
        conn.close()
📌 Preferred in backend code
📌 Shorter
📌 Safer
5️⃣ Context Managers + Exceptions (CRITICAL)
@contextmanager
def resource():
    print("Acquire")
    try:
        yield
    finally:
        print("Release")
Even if:
with resource():
    raise Exception("Boom")
Output:
Acquire
Release
📌 Leak-proof design
6️⃣ Flask Request Lifecycle & Cleanup
Flask Flow
Request
↓
before_request
↓
view function
↓
after_request
↓
teardown_request
Correct DB Usage
@app.before_request
def open_db():
    g.db = connect()

@app.teardown_request
def close_db(exc):
    db = g.pop("db", None)  # teardown runs even if before_request failed
    if db is not None:
        db.close()
📌 Request-scoped
📌 Thread-safe
📌 Clean
7️⃣ Flask Anti-Pattern (VERY COMMON)
db = connect() # global ❌
❌ Shared across threads
❌ Leaks under exceptions
❌ Breaks scaling
8️⃣ FastAPI Resource Lifecycle (CLEANER)
FastAPI uses lifespan events.
from contextlib import asynccontextmanager
@asynccontextmanager
async def lifespan(app):
    app.state.db = connect()
    yield
    app.state.db.close()

app = FastAPI(lifespan=lifespan)
📌 Startup → acquire
📌 Shutdown → cleanup
📌 Explicit lifecycle
9️⃣ FastAPI Request-Scoped Dependencies (BEST PRACTICE)
def get_db():
    db = connect()
    try:
        yield db
    finally:
        db.close()
Usage:
@app.get("/users")
def users(db=Depends(get_db)):
    return db.query()
📌 Per-request cleanup
📌 No global leaks
📌 Testable
10️⃣ Async Context Managers (IMPORTANT)
from contextlib import asynccontextmanager
@asynccontextmanager
async def resource():
    await open_conn()       # placeholder async acquire (e.g. driver connect)
    try:
        yield
    finally:
        await close_conn()  # placeholder async release
📌 Required for async DB drivers
📌 Works with FastAPI async routes
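A runnable version of the pattern, with stand-in async open/close coroutines that just record events (names `open_conn`/`close_conn` are illustrative):

```python
import asyncio
from contextlib import asynccontextmanager

events = []


async def open_conn():
    events.append("open")


async def close_conn():
    events.append("close")


@asynccontextmanager
async def resource():
    await open_conn()
    try:
        yield "conn"
    finally:
        await close_conn()  # runs even if the body raises


async def main():
    async with resource() as conn:
        events.append(f"use:{conn}")


asyncio.run(main())
assert events == ["open", "use:conn", "close"]
```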
11️⃣ Flask vs FastAPI — Lifecycle Comparison
| Aspect | Flask | FastAPI |
|---|---|---|
| Startup hooks | Limited | Native |
| Shutdown hooks | Weak | Strong |
| Request cleanup | teardown_request | Dependencies |
| Async cleanup | Hard | Native |
| Safety | Manual | Structured |
12️⃣ PROBLEM #8 — INTERVIEW-GRADE
❓ What prints?
class A:
    def __enter__(self):
        print("enter")
        return self
    def __exit__(self, exc_type, exc, tb):
        print("exit")

with A():
    print("inside")
    raise ValueError()
✅ Answer
enter
inside
exit
📌 Exception does NOT skip cleanup
📌 The ValueError still propagates after __exit__ (which returned None)
13️⃣ Real Production Bug (COMMON)
def get_users():
    with db.connect() as conn:
        return conn.query()
    # conn.query() is evaluated first, then __exit__ closes the connection,
    # then the value is handed to the caller
✔️ SAFE
Why?
- The `with` block completes before the function returns
- `__exit__` executes before the caller receives the value
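The ordering is observable with a recording context manager: the return value is computed while the connection is open, cleanup runs next, and only then does the caller receive the value.

```python
events = []


class Conn:
    """Fake connection that records its lifecycle."""

    def __enter__(self):
        events.append("enter")
        return self

    def __exit__(self, exc_type, exc, tb):
        events.append("exit")

    def query(self):
        events.append("query")
        return [1, 2, 3]


def get_users():
    with Conn() as conn:
        return conn.query()  # value evaluated BEFORE __exit__ runs


assert get_users() == [1, 2, 3]
assert events == ["enter", "query", "exit"]
```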
14️⃣ Flask & FastAPI Parallel Notebook Exercise
Task
Build /items endpoint that:
- Opens DB
- Queries data
- Raises exception sometimes
- NEVER leaks connection
Implement:
- Flask using `g` + teardown
- FastAPI using a dependency + `yield`
🎯 MODULE 0 — FINAL INTERVIEW SUMMARY
You now understand:
- Python execution model
- Memory & GC
- Mutability & shared state
- Closures & decorators
- Streaming & generators
- Exceptions & error mapping
- Concurrency models
- Resource lifecycles
📌 This is senior-level Python backend knowledge
🚀 NEXT STEP — MODULE 1
If you say “Start Module 1”, we’ll begin:
🔥 MODULE 1 — HTTP, REST & API Design (DEEP)
- HTTP internals
- Idempotency
- Status code traps
- REST vs RPC
- API contracts
- Flask vs FastAPI routing
Say Start Module 1 👇