Perfect.
DAY 8 is where Python performance myths die and real data-structure engineering begins.
Lists are the most abused structure in Python — and also the most powerful when understood correctly.
🧠 DAY 8 — Lists: Dynamic Arrays, Over-Allocation & Copy Traps
🔑 CORE IDEA OF DAY 8
Python lists are dynamic arrays, not linked lists.
Their performance comes from smart over-allocation, not magic.
1️⃣ What a Python List REALLY Is
In CPython, a list is:
- A contiguous array of object references
- NOT a linked list
- NOT storing values directly
Conceptually:
list
├── size (used slots)
├── allocated (capacity)
└── array → [ref, ref, ref, ...]
Each element is a pointer to a PyObject, not the object itself.
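A minimal sketch of this reference-array model (CPython-specific, since it relies on `sys.getsizeof`): a list holding three references to a huge string stays tiny, because the list stores only pointer slots, never the object's data.

```python
import sys

big = "x" * 10_000           # one large string object (~10 KB)
lst = [big, big, big]        # list stores three REFERENCES to it

# The list's footprint is just its header plus three pointer slots,
# far smaller than the string it points at.
assert sys.getsizeof(lst) < sys.getsizeof(big)
```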
2️⃣ Visual Memory Model (CRITICAL)
Example:
lst = [10, "hi", [1, 2]]
Memory:
lst ──▶ [ ref → 10, ref → "hi", ref → list ]
This explains:
- Heterogeneous lists
- Shallow copy traps
- Mutation side effects
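Those three points can be sketched in a few lines: because the list holds references, a second name (or an inner list reached through either name) sees every mutation.

```python
lst = [10, "hi", [1, 2]]     # heterogeneous: each slot is just a reference
alias = lst                  # second name bound to the SAME list object
lst[2].append(3)             # mutate the inner list through one reference

# The mutation is visible through the other name too.
assert alias[2] == [1, 2, 3]
assert alias is lst
```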
3️⃣ Why append() Is Fast (Amortized O(1))
When you do:
lst.append(x)
Python:
- Checks if there’s spare capacity
- If yes → just add reference (O(1))
- If no → allocate bigger array, copy refs
Over-allocation strategy (CPython, simplified):
new_capacity ≈ old_capacity * 1.125 + small constant
This ensures:
- Few reallocations
- Amortized O(1) append
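You can observe the over-allocation indirectly (CPython-specific sketch): track `sys.getsizeof` across many appends and most appends reuse spare capacity, so the size changes far less often than once per append.

```python
import sys

lst = []
sizes = []
for i in range(20):
    lst.append(i)
    sizes.append(sys.getsizeof(lst))

# Capacity grows in chunks: far fewer distinct sizes than appends,
# because most appends fill pre-allocated spare slots.
assert len(set(sizes)) < len(sizes)
```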
4️⃣ Why insert() Is Slow
lst.insert(0, x)
Python must:
- Shift every existing reference one slot to the right
- Do copy work proportional to the list length
Time complexity:
O(n)
Same for:
del lst[0]
lst.pop(0)
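When you genuinely need fast operations at the front, the standard-library answer is `collections.deque`, which supports O(1) appends and pops at both ends:

```python
from collections import deque

d = deque([1, 2, 3])
d.appendleft(0)        # O(1), unlike lst.insert(0, x)
assert list(d) == [0, 1, 2, 3]

first = d.popleft()    # O(1), unlike lst.pop(0)
assert first == 0
```

The tradeoff: deques pay for this with O(n) indexing in the middle, so pick the structure by access pattern.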
5️⃣ Indexing vs Slicing (Huge Difference)
x = lst[5] # O(1)
y = lst[2:8] # O(n)
Why slicing is O(n):
- New list created
- References copied
⚠️ Slicing never returns a view.
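A quick sketch of the copy-not-view behavior: mutating a slice never touches the original list.

```python
lst = [0, 1, 2, 3, 4]
sub = lst[1:4]         # brand-new list with copied references
sub[0] = 99            # rebind a slot in the COPY...

assert lst == [0, 1, 2, 3, 4]   # ...original is untouched
assert sub == [99, 2, 3]
```

(Contrast with NumPy arrays, where basic slicing does return a view.)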
6️⃣ Shallow Copy vs Deep Copy (INTERVIEW CLASSIC)
Shallow copy
a = [[1], [2]]
b = a[:]
Memory:
a ──▶ [ ref → [1], ref → [2] ]
b ──▶ [ ref → [1], ref → [2] ]
Mutate inner list:
a[0].append(100)
Result:
Both a and b affected
Deep copy
import copy
b = copy.deepcopy(a)
Now inner objects are copied too.
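Putting both copies side by side makes the difference concrete:

```python
import copy

a = [[1], [2]]
shallow = a[:]               # copies only the OUTER references
deep = copy.deepcopy(a)      # recursively copies inner lists too

a[0].append(100)

assert shallow[0] == [1, 100]   # shares the inner list with a
assert deep[0] == [1]           # fully independent
```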
7️⃣ The Most Dangerous List Trap (🔥🔥🔥)
a = [[0]] * 3
Looks like:
[[0], [0], [0]]
Reality:
[ ref → [0], ref → [0], ref → [0] ]
a[0][0] = 99
print(a)
Output:
[[99], [99], [99]]
🚨 This causes real production bugs.
8️⃣ += vs + on Lists (IMPORTANT)
a = [1, 2]
b = a
a += [3]
- In-place mutation
- b affected
a = a + [4]
- New list created
- b unchanged
Same pattern as strings — but lists are mutable.
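An identity check (`is`) makes the difference observable:

```python
a = [1, 2]
b = a
a += [3]            # in-place: list.__iadd__ mutates the existing object
assert b == [1, 2, 3]
assert a is b       # still the same list

a = a + [4]         # concatenation: builds a NEW list, rebinds only a
assert b == [1, 2, 3]   # b still points at the old object
assert a is not b
```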
9️⃣ List Comprehensions (More Than Syntax Sugar)
[x*x for x in range(10)]
Why faster than loops?
- Implemented in C
- Fewer Python bytecode instructions
- Local variable optimization
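For reference, the comprehension and its explicit-loop equivalent produce identical results; the comprehension just does the appending inside the interpreter's C loop instead of through repeated attribute lookups and bytecode dispatch.

```python
squares = [x * x for x in range(10)]

# Equivalent explicit loop: same result, more bytecode per iteration
result = []
for x in range(10):
    result.append(x * x)

assert squares == result == [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```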
🔟 When NOT to Use Lists
Avoid lists when:
- Frequent membership tests → use set
- Key-value mapping → use dict
- Need immutability → use tuple
Choosing the wrong structure = silent performance bug.
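A minimal sketch of each alternative in action:

```python
items = list(range(1000))
lookup = set(items)          # O(1) average membership vs O(n) list scan
assert 999 in lookup
assert -1 not in lookup

pairs = {"a": 1, "b": 2}     # dict for key → value mapping
assert pairs["a"] == 1

point = (3, 4)               # tuple: immutable, hashable → usable as a dict key
assert {point: "corner"}[(3, 4)] == "corner"
```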
🔥 INTERVIEW TRAPS (DAY 8)
Q1
a = [1, 2, 3]
b = a
a.pop()
print(b)
✔ [1, 2]
(Mutation)
Q2
a = [1, 2, 3]
b = a[:]
a.pop()
print(b)
✔ [1, 2, 3]
(Shallow copy)
Q3
lst = []
for i in range(10000):
    lst = lst + [i]
❌ O(n²)
✔ Use append
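The fixed version, for comparison: `append` reuses spare capacity instead of copying the whole list on every iteration, so the build is O(n) overall.

```python
lst = []
for i in range(10000):
    lst.append(i)        # amortized O(1) per element → O(n) total

assert lst == list(range(10000))
```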
🧠 DAY 8 MENTAL CHECKLIST
Before using a list, ask:
- Am I mutating or rebinding?
- Is this copy shallow or deep?
- Is slicing happening in a loop?
- Is list really the right structure?
📝 DAY 8 ASSIGNMENT (MANDATORY)
1️⃣ Predict output (NO RUNNING):
a = [[1, 2]] * 2
b = a[:]
a[0].append(3)
print(a)
print(b)
2️⃣ Explain clearly:
- Why list append is amortized O(1)
- Why slicing is O(n)
3️⃣ Design question:
Why does Python choose dynamic arrays instead of linked lists?
🔜 DAY 9 PREVIEW
DAY 9 — Tuples: Immutability, Hashing & Performance Tradeoffs
You’ll learn:
- Why tuples exist
- When tuples are faster than lists
- Tuple packing/unpacking internals
- Why tuples can be dict keys
When ready, say 👉 “START DAY 9”