Excellent.
DAY 26 covers one of the most misunderstood topics in Python. Today we kill the myths and replace them with clear mental models.
If you understand this day properly, you will:
- Stop misusing threads
- Choose the right concurrency model
- Answer GIL questions confidently in interviews
- Design scalable Python systems
🧠 DAY 26 — Python Concurrency Model
(GIL, threading, multiprocessing & asyncio)
🔑 CORE IDEA OF DAY 26
Python concurrency is limited by the GIL for CPU-bound work,
but excels at I/O-bound concurrency.
Everything today revolves around choosing the right model.
1️⃣ What Is Concurrency vs Parallelism
- Concurrency: multiple tasks in progress
- Parallelism: multiple tasks executing simultaneously
Python supports:
- Concurrency very well
- Parallelism only via multiprocessing (not threads)
2️⃣ The GIL (Global Interpreter Lock) — No Myths
What the GIL is:
- A mutex inside CPython
- Allows only one thread to execute Python bytecode at a time
- Protects Python memory management (refcounting)
What the GIL is NOT:
❌ A Python language feature
❌ Present in all Python implementations
❌ A bug
3️⃣ Why the GIL Exists (Design Reality)
CPython uses:
- Reference counting for GC
- Non-thread-safe memory operations
Without GIL:
- Every object access needs a lock
- Python would become much slower
Design choice:
Make single-threaded code fast and simple
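A tiny illustration of the reference counts the GIL protects (sys.getrefcount is in the standard library; the exact count it prints is implementation detail):

import sys

x = []
# Every attach/detach of a reference bumps or drops this counter;
# the GIL keeps those updates consistent across threads.
print(sys.getrefcount(x))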
4️⃣ Threads in Python (threading)
Key facts:
- OS-level threads
- Share memory
- Subject to GIL
Consequence:
- ❌ No CPU parallelism for Python code
- ✅ Excellent for I/O-bound tasks
Example:
import threading

def task():
    download_file()  # placeholder for an I/O-bound call

t1 = threading.Thread(target=task)
t2 = threading.Thread(target=task)
t1.start(); t2.start()
t1.join(); t2.join()
Useful for:
- Network calls
- Disk I/O
- Waiting on external systems
5️⃣ Why Threads Still Matter (IMPORTANT)
While one thread:
- Waits on I/O
- Releases GIL
Another thread:
- Executes Python code
So threads do overlap I/O latency.
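To make that concrete, here is a minimal sketch of overlapping network I/O with a thread pool (the URL list is an illustrative assumption):

from concurrent.futures import ThreadPoolExecutor
import urllib.request

URLS = ["https://example.com"] * 10  # illustrative URLs (assumption)

def fetch(url):
    # While this thread waits on the network, the GIL is released
    with urllib.request.urlopen(url, timeout=10) as resp:
        return len(resp.read())

with ThreadPoolExecutor(max_workers=10) as pool:
    sizes = list(pool.map(fetch, URLS))
print(sizes)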
6️⃣ Multiprocessing (multiprocessing)
What it does:
- Spawns separate processes
- Each has its own GIL
- True parallelism on multiple cores
from multiprocessing import Process

def cpu_task():
    heavy_compute()  # placeholder for CPU-bound work

if __name__ == "__main__":
    p = Process(target=cpu_task)
    p.start()
    p.join()
Tradeoffs:
- High memory usage
- Inter-process communication (IPC) cost
- Serialization (pickle) overhead
Use for:
- CPU-bound workloads
- Data processing
- Parallel computation
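For illustration, a minimal process-pool sketch (the summing workload is a stand-in for real CPU-heavy code):

from multiprocessing import Pool

def square_sum(n):
    # Stand-in for heavy CPU work; each worker process has its own GIL
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(square_sum, [1_000_000] * 8)
    print(results[:2])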
7️⃣ Threads vs Processes (Decision Table)
| Task Type | Best Choice |
|---|---|
| CPU-bound | multiprocessing |
| I/O-bound | threading |
| Many sockets | asyncio |
| Mixed | hybrid approach |
8️⃣ Asyncio — Cooperative Concurrency
Async is:
- Single-threaded
- Single-process
- Event-loop based
async def fetch():
    await network_call()  # placeholder for a non-blocking I/O call
Key idea:
- await gives control back to the event loop
- No blocking allowed
9️⃣ Asyncio Mental Model
- One thread
- Many coroutines
- Cooperative scheduling
- No preemption
If a coroutine blocks → everything blocks.
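A minimal sketch of that model (asyncio.sleep stands in for real non-blocking I/O):

import asyncio

async def fetch(i):
    await asyncio.sleep(0.1)  # stands in for a non-blocking network call
    return i

async def main():
    # 100 coroutines run cooperatively on one thread
    results = await asyncio.gather(*(fetch(i) for i in range(100)))
    print(len(results))

asyncio.run(main())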
🔟 When Asyncio Is Perfect
Use asyncio when:
- Thousands of concurrent I/O tasks
- Network-heavy systems
- APIs, microservices, web servers
Avoid asyncio for:
- CPU-heavy computation
- Blocking libraries
- Simple scripts
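If you are stuck with a blocking library inside async code, the standard-library helper asyncio.to_thread (Python 3.9+) pushes the call off the event loop. A minimal sketch (time.sleep stands in for the blocking call):

import asyncio
import time

def blocking_io():
    time.sleep(1)  # stands in for a blocking library call
    return "done"

async def main():
    # Runs the blocking call in a worker thread so the loop stays responsive
    result = await asyncio.to_thread(blocking_io)
    print(result)

asyncio.run(main())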
11️⃣ Why Python Threads Don’t Speed Up CPU Work (Interview Gold)
Because:
- GIL allows only one thread to execute Python bytecode
- Threads context-switch but don’t run in parallel
- CPU-bound threads fight over GIL
Correct interview answer:
“Threads improve concurrency, not parallelism, in CPython.”
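To see it yourself, here is a timing sketch (exact numbers are machine-dependent; on stock CPython the threaded version is usually no faster than the sequential one):

import threading
import time

def count(n):
    while n:
        n -= 1

N = 10_000_000

start = time.perf_counter()
count(N); count(N)
print("sequential:", time.perf_counter() - start)

start = time.perf_counter()
t1 = threading.Thread(target=count, args=(N,))
t2 = threading.Thread(target=count, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
print("two threads:", time.perf_counter() - start)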
12️⃣ Releasing the GIL (Advanced Insight)
Some C extensions:
- NumPy
- Pandas
- SciPy
Release the GIL during heavy computation → real parallelism.
This is why Python works well in data science.
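A hedged sketch, assuming NumPy is installed: np.dot runs in C/BLAS code that releases the GIL, so a thread pool can genuinely overlap the work (any speedup depends on the BLAS build and core count):

from concurrent.futures import ThreadPoolExecutor
import numpy as np

# Eight medium-sized matrices; np.dot releases the GIL inside BLAS
matrices = [np.random.rand(800, 800) for _ in range(8)]

def multiply(m):
    return np.dot(m, m)

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(multiply, matrices))
print(len(results))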
🔥 INTERVIEW TRAPS (DAY 26)
Q1
Does Python support multithreading?
✔ Yes (but GIL-limited)
Q2
Why is multiprocessing slower to start?
✔ Process creation
✔ Memory duplication
✔ IPC overhead
Q3
Can asyncio use multiple cores?
❌ No
✔ One event loop per thread/process
Q4
Which model for web server handling 10k connections?
✔ Asyncio
🧠 DAY 26 MENTAL CHECKLIST
Before adding concurrency:
- CPU-bound or I/O-bound?
- Shared memory needed?
- Task count?
- Blocking calls?
- Debuggability concerns?
📝 DAY 26 ASSIGNMENT (MANDATORY)
1️⃣ Predict behavior (NO RUNNING):
import threading

x = 0

def f():
    global x
    for _ in range(1_000_000):
        x += 1

t1 = threading.Thread(target=f)
t2 = threading.Thread(target=f)
t1.start(); t2.start()
t1.join(); t2.join()
print(x)
Will it be exactly 2_000_000? Why/why not?
2️⃣ Explain clearly:
- Why Python threads don’t scale CPU-bound tasks
- Difference between threading and asyncio
3️⃣ Design question:
How would you design a Python service that downloads 50k URLs and then performs heavy CPU parsing on results?
(Hint: hybrid model)
🔜 DAY 27 PREVIEW
DAY 27 — Performance & Optimization
(Profiling, time complexity, memory tricks & real tuning)
You’ll learn:
- How to find real bottlenecks
- Profiling tools
- Micro-optimizations that matter
- Optimizations that don’t
When ready, say 👉 “START DAY 27”