How to use this guide: Start with Pattern 0 every single time. Match the problem's "shape" to a pattern. Then apply the template. Most LeetCode problems are one of ~17 patterns in disguise — recognizing the pattern is 80% of the solution.
Read the prompt and match the "shape":
| Problem Shape | Pattern |
|---|---|
| Contiguous subarray/substring | Sliding Window or Prefix Sum |
| Sorted / need min/max boundary / "min feasible" | Binary Search (index or answer-space) |
| "Top K / Kth / merge K sorted / stream" | Heap |
| "All combinations / generate all / choose or not" | Backtracking |
| "Shortest path / fewest steps" | BFS (unweighted) / Dijkstra (weighted) |
| "Dependency / ordering / prerequisites" | Topological Sort |
| "Connected components / dynamic connectivity" | DFS/BFS / Union-Find |
| "Optimal value with repeated substructure" | DP (1D / 2D / interval / bitmask) |
| "Overlap intervals / meeting rooms" | Sort + sweep / heap ends |
| "Parentheses / next greater/smaller" | Stack (often monotonic) |
| Tree asks "subtree info / aggregate from leaves" | Postorder DFS returning a tuple |
| "Prefix search / autocomplete / word dictionary" | Trie |
| "Reach end / maximize reach / earliest finish" | Greedy |
| "Range sum / range min-max with updates" | Segment Tree / BIT |
| "Sliding window max/min" | Monotonic Deque |
| Bit operations / XOR / subsets of bits | Bit Manipulation |
| "Is palindrome / longest palindrome" | Two Pointers (expand or l/r shrink) |
| "Longest consecutive sequence" (unsorted) | Hash Set (Arrays & Hashing, not Sliding Window) |
Decision shortcuts:
- Two indices over the same array → Two Pointers or Sliding Window
- Two separate lists/strings → DP or Merge
- Graph with costs → Dijkstra; graph with 0/1 costs → 0-1 BFS; negative costs → Bellman-Ford
- "All" / "every" / "generate" → Backtracking or DP
- "Does pair/complement exist?" → Two-Sum hash map
- "k numbers sum to target" → sort + fix outer + two-pointer inner
- "Directed graph has cycle?" → DFS 3-color; "undirected?" → simple visited + skip parent
Recognize: duplicates, counts, grouping, "exists", anagrams, prefix sums, "subarray sum = k", "longest subarray with property"
```python
# Count occurrences
cnt = {}
for x in nums:
    cnt[x] = cnt.get(x, 0) + 1

# First-index map (earliest only — never overwrite)
pos = {}
for i, x in enumerate(nums):
    if x not in pos:
        pos[x] = i
```

Pitfall: "Longest/first" problems usually need the earliest index only — don't overwrite it.
Count subarrays with sum = k:
```python
def subarraySum(nums, k):
    count = 0
    prefix = 0
    seen = {0: 1}  # empty prefix has been seen once
    for n in nums:
        prefix += n
        if prefix - k in seen:  # explicit check: don't add 0 for missing keys
            count += seen[prefix - k]
        seen[prefix] = seen.get(prefix, 0) + 1
    return count
```

Why `if prefix - k in seen` instead of `count += seen.get(prefix - k, 0)`: the explicit guard makes the intent readable — we only count when a valid complement exists. Both are correct; this form is easier to explain in an interview.
Longest subarray with sum = k: store the earliest index of each prefix sum (not the count).
```python
def longestSubarraySum(nums, k):
    first = {0: -1}
    pref = best = 0
    for i, x in enumerate(nums):
        pref += x
        if pref - k in first:
            best = max(best, i - first[pref - k])
        if pref not in first:
            first[pref] = i  # earliest only
    return best
```

- Convert "pair/triple existence" → sort then two pointers
- Convert "grouping by value" → sort then sweep/merge
```python
def twoSum(nums, target):
    seen = {}
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
```

Generalize: "does a pair / triple / complement exist?" → build the map as you scan, look up what you need.
```python
def sortColors(nums):
    lo = mid = 0
    hi = len(nums) - 1
    while mid <= hi:
        if nums[mid] == 0:
            nums[lo], nums[mid] = nums[mid], nums[lo]
            lo += 1; mid += 1
        elif nums[mid] == 1:
            mid += 1
        else:
            nums[mid], nums[hi] = nums[hi], nums[mid]
            hi -= 1
```

Also called 3-pointer partition (Dutch national flag). Use whenever you need to partition an array into 3 groups in O(n) / O(1).
Recognize: "longest consecutive sequence", "no sorting allowed", elements are integers.
Common misconception: this looks like sliding window but the array is unsorted — sliding window needs a sorted/ordered structure. Use a set for O(1) membership instead.
```python
def longestConsecutive(nums):
    num_set = set(nums)
    best = 0
    for n in num_set:
        if n - 1 not in num_set:  # only start from the beginning of a sequence
            cur, length = n, 1
            while cur + 1 in num_set:
                cur += 1
                length += 1
            best = max(best, length)
    return best
```

The `if n - 1 not in num_set` guard ensures each sequence is traversed exactly once → O(n) total even though there's a nested loop.
Time/Space: O(n) / O(n) — or O(n log n) if sorting is required
Recognize: sorted array, palindrome check, partitioning, remove duplicates in-place, merge two sorted sequences
```python
# Converging pointers
l, r = 0, len(nums) - 1
while l < r:
    if condition_needs_smaller:
        r -= 1
    else:
        l += 1

# Read/write pointers (in-place filter)
write = 0
for read in range(len(nums)):
    if keep(nums[read]):
        nums[write] = nums[read]
        write += 1
return write  # write = new length
```

Pitfall: Be explicit about what `write` means — it's the next free slot, so the final valid array is `nums[:write]`.
Recognize: "valid palindrome", "is string/number a palindrome", "check if reversed equals original"
```python
# Basic palindrome (exact match)
def isPalindrome(s):
    l, r = 0, len(s) - 1
    while l < r:
        if s[l] != s[r]:
            return False
        l += 1; r -= 1
    return True

# Valid palindrome — skip non-alphanumeric, ignore case (LC 125)
def validPalindrome(s):
    l, r = 0, len(s) - 1
    while l < r:
        while l < r and not s[l].isalnum(): l += 1
        while l < r and not s[r].isalnum(): r -= 1
        if s[l].lower() != s[r].lower():
            return False
        l += 1; r -= 1
    return True
```

The two-pointer palindrome check is O(n) / O(1). For "almost palindrome" (allow removing one char), try both `isPalindrome(s[l+1:r+1])` and `isPalindrome(s[l:r])` when a mismatch is found.
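That "try both" idea, sketched in full for an LC 680-style "delete at most one character" check. Index-based helpers avoid the O(n) slice copies; the function name is illustrative:

```python
def validPalindromeII(s):
    def is_pal(l, r):  # is s[l..r] (inclusive) a palindrome?
        while l < r:
            if s[l] != s[r]:
                return False
            l += 1; r -= 1
        return True
    l, r = 0, len(s) - 1
    while l < r:
        if s[l] != s[r]:
            # skip either the left or the right char of the mismatch
            return is_pal(l + 1, r) or is_pal(l, r - 1)
        l += 1; r -= 1
    return True
```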
```python
def threeSum(nums):
    nums.sort()
    res = []
    for i in range(len(nums) - 2):
        if i > 0 and nums[i] == nums[i-1]: continue  # skip outer duplicates
        l, r = i + 1, len(nums) - 1
        while l < r:
            s = nums[i] + nums[l] + nums[r]
            if s < 0: l += 1
            elif s > 0: r -= 1
            else:
                res.append([nums[i], nums[l], nums[r]])
                while l < r and nums[l] == nums[l+1]: l += 1  # skip inner dups
                while l < r and nums[r] == nums[r-1]: r -= 1
                l += 1; r -= 1
    return res
```

Generalizes to k-Sum: fix (k-2) outer pointers recursively, run two-pointer at the innermost level. Always sort first.
Time/Space: O(n²) / O(1) extra (in-place sort, output list excluded)
Recognize: substring/subarray, "longest", "minimum window", "at most K distinct", "replace K chars", "all subarrays of size K"
Example — Longest Substring Without Repeating Characters (LC 3):
```python
def lengthOfLongestSubstring(s):
    seen = {}  # char → last seen index
    l = best = 0
    for r, ch in enumerate(s):
        if ch in seen and seen[ch] >= l:
            l = seen[ch] + 1  # jump left past the duplicate
        seen[ch] = r
        best = max(best, r - l + 1)
    return best
```

`seen[ch] >= l` is the guard — the previous occurrence must still be inside the current window to matter. Without it, a character seen before the window's left boundary would wrongly shrink the window.
General template (count-based shrink, works for "at most K distinct" etc.):
```python
def variable_window(s, is_invalid):
    cnt = {}
    l = best = 0
    for r, ch in enumerate(s):
        cnt[ch] = cnt.get(ch, 0) + 1
        while is_invalid(cnt):  # shrink left until valid
            cnt[s[l]] -= 1
            if cnt[s[l]] == 0:
                del cnt[s[l]]
            l += 1
        best = max(best, r - l + 1)
    return best
```

Fixed-size window (max sum of any window of size k):

```python
def max_sum_fixed_k(nums, k):
    cur = sum(nums[:k])
    best = cur
    for i in range(k, len(nums)):
        cur += nums[i] - nums[i - k]
        best = max(best, cur)
    return best
```

When asked for "exactly K distinct / sum / occurrences":

`exactly(K) = atMost(K) - atMost(K - 1)`

Implement `atMost(k)` with the variable window template above, then subtract.
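The subtraction trick sketched for an LC 992-style "subarrays with exactly K distinct integers" problem (`at_most` is the helper name assumed here):

```python
def subarraysWithKDistinct(nums, k):
    def at_most(k):
        cnt = {}
        l = total = 0
        for r, x in enumerate(nums):
            cnt[x] = cnt.get(x, 0) + 1
            while len(cnt) > k:  # shrink until at most k distinct
                cnt[nums[l]] -= 1
                if cnt[nums[l]] == 0:
                    del cnt[nums[l]]
                l += 1
            total += r - l + 1  # every window ending at r is valid
        return total
    return at_most(k) - at_most(k - 1)
```

`total += r - l + 1` counts all valid windows ending at `r` at once — that's why `at_most` is O(n).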
Pitfall: Sliding window fails when the condition is non-monotonic (e.g., "min-length subarray with sum ≥ k" with negatives present). Use prefix sums + binary search or DP instead.
Time/Space: O(n) / O(k) where k = window constraint space
Recognize: parentheses, parsing, "next greater/smaller element", histogram areas, temperature spans, buildings with ocean view
```python
def nextGreater(nums):
    ans = [-1] * len(nums)
    st = []  # stack of indices
    for i, x in enumerate(nums):
        while st and nums[st[-1]] < x:  # x is greater than top
            ans[st.pop()] = x
        st.append(i)
    return ans
```

For "next smaller": flip the comparison to `nums[st[-1]] > x`.
```python
def isValid(s):
    st = []
    match = {')': '(', ']': '[', '}': '{'}
    for ch in s:
        if ch in match:
            if not st or st[-1] != match[ch]:
                return False
            st.pop()
        else:
            st.append(ch)
    return not st
```

Largest rectangle in histogram: push index; when the stack top is "blocked" by a shorter bar, pop and compute area `h * (i - st[-1] - 1)`. Append a sentinel bar at the end to flush the stack.

Pitfall: "Strictly greater" vs "greater or equal" changes whether duplicates pop correctly. Use `<` for "next strictly greater".
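The histogram recipe above, sketched in full (LC 84-style; the `-1` in the stack is the left-boundary sentinel, and the appended `0` bar flushes everything at the end):

```python
def largestRectangleArea(heights):
    st = [-1]  # sentinel: left boundary for bars popped to an empty stack
    best = 0
    for i, h in enumerate(heights + [0]):  # trailing 0 flushes the stack
        while st[-1] != -1 and heights[st[-1]] >= h:
            height = heights[st.pop()]
            width = i - st[-1] - 1  # span between the new top and i, exclusive
            best = max(best, height * width)
        st.append(i)
    return best
```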
Time/Space: O(n) / O(n)
Recognize: sorted array OR answer is monotonic ("minimum feasible X", "maximize the minimum", "pack in M days", "minimum speed")
```python
def first_true(lo, hi, ok):
    """Returns the smallest value in [lo, hi] where ok(value) is True."""
    while lo < hi:
        mid = (lo + hi) // 2
        if ok(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

Usage: `first_true(1, max_val, lambda cap: can_ship_in_D_days(cap))`
Key idea: one half is always sorted → use that half to decide direction.
```python
def search_rotated(nums, target):
    l, r = 0, len(nums) - 1
    while l <= r:
        m = (l + r) // 2
        if nums[m] == target:
            return m
        if nums[l] <= nums[m]:  # left half is sorted
            if nums[l] <= target < nums[m]:
                r = m - 1
            else:
                l = m + 1
        else:  # right half is sorted
            if nums[m] < target <= nums[r]:
                l = m + 1
            else:
                r = m - 1
    return -1
```

```python
import bisect
def searchRange(nums, target):
    lo = bisect.bisect_left(nums, target)
    hi = bisect.bisect_right(nums, target) - 1
    if lo >= len(nums) or nums[lo] != target:
        return [-1, -1]
    return [lo, hi]
```

Or manually: for leftmost, use `hi = mid` when `nums[mid] >= target`; for rightmost, use `lo = mid + 1` when `nums[mid] <= target`.
```python
def findPeakElement(nums):
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if nums[mid] < nums[mid + 1]:
            lo = mid + 1  # ascending — peak is to the right
        else:
            hi = mid  # descending — peak is here or to the left
    return lo
```

Pitfall: Always define the loop invariant — "the answer is still in `[lo, hi]`" — and prove both branches maintain it.
Time/Space: O(log n) / O(1)
Recognize: reverse, detect cycle, find middle, merge sorted lists, reorder
```python
# Dummy-head pattern (build a list without special-casing the head)
dummy = ListNode(0)
tail = dummy
# ... build list by setting tail.next = new_node; tail = tail.next
return dummy.next

# Reverse a list
def reverse(head):
    prev, cur = None, head
    while cur:
        nxt = cur.next
        cur.next = prev
        prev, cur = cur, nxt
    return prev

# Fast/slow pointers — find the middle
slow = fast = head
while fast and fast.next:
    slow = slow.next
    fast = fast.next.next
# slow is now at the middle

def mergeTwoLists(l1, l2):
    dummy = ListNode(0); tail = dummy
    while l1 and l2:
        if l1.val <= l2.val:
            tail.next = l1; l1 = l1.next
        else:
            tail.next = l2; l2 = l2.next
        tail = tail.next
    tail.next = l1 or l2
    return dummy.next

def detectCycle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow == fast:  # cycle confirmed
            slow = head
            while slow != fast:  # find entry point
                slow = slow.next
                fast = fast.next
            return slow
    return None

def removeNthFromEnd(head, n):
    dummy = ListNode(0); dummy.next = head
    fast = slow = dummy
    for _ in range(n + 1):  # advance fast n+1 steps
        fast = fast.next
    while fast:  # move both until fast hits end
        slow = slow.next; fast = fast.next
    slow.next = slow.next.next  # delete target
    return dummy.next
```

When fast reaches `None`, slow is right before the Nth-from-end node. The `+1` offset lets slow stop at the predecessor (needed for deletion).

Pitfall: Save `nxt = cur.next` before any rewiring — you will lose the list otherwise.
Recognize: path sum, subtree property, LCA, diameter, validate BST, level order, zigzag
The "return-a-tuple" trick powers: diameter, balanced check, largest BST subtree, camera coverage.
```python
def dfs(node):
    if not node:
        return base_tuple  # e.g., (True, 0) for (is_balanced, height)
    left = dfs(node.left)
    right = dfs(node.right)
    return combine(left, right, node)
```

Example — tree diameter:
```python
def diameterOfBinaryTree(root):
    best = 0
    def dfs(node):
        nonlocal best
        if not node: return 0
        l, r = dfs(node.left), dfs(node.right)
        best = max(best, l + r)
        return 1 + max(l, r)
    dfs(root)
    return best

def levelOrder(root):
    if not root: return []
    current = [root]; res = []
    while current:
        level = []
        next_level = []
        for node in current:
            level.append(node.val)
            if node.left: next_level.append(node.left)
            if node.right: next_level.append(node.right)
        res.append(level)
        current = next_level
    return res
```

Two-list swap: `current` holds this level's nodes, `next_level` collects the next. Swap at the end of each level. No deque needed — it naturally expresses "process level by level".
Key insight: in-order DFS (left → node → right) visits a valid BST in strictly ascending order. Collect values into a list and check that every element is greater than the previous.
Validate BST:
```python
def isValidBST(root):
    vals = []
    def inorder(node):
        if not node:
            return
        inorder(node.left)
        vals.append(node.val)
        inorder(node.right)
    inorder(root)
    return all(vals[i] < vals[i + 1] for i in range(len(vals) - 1))
```

The list makes the logic obvious: a valid BST's in-order traversal must be strictly increasing. Use `<` not `<=` because duplicates are not allowed in a standard BST.
Kth Smallest in BST — same in-order traversal, stop early:
```python
def kthSmallest(root, k):
    vals = []
    def inorder(node):
        if not node or len(vals) == k:
            return
        inorder(node.left)
        vals.append(node.val)
        inorder(node.right)
    inorder(root)
    return vals[-1]
```

Closest value — binary search on the BST structure (no full traversal needed):

```python
def closestValue(root, target):
    closest = root.val
    while root:
        if abs(root.val - target) < abs(closest - target):
            closest = root.val
        root = root.left if target < root.val else root.right
    return closest

def lowestCommonAncestor(root, p, q):
    if not root or root is p or root is q:
        return root
    left = lowestCommonAncestor(root.left, p, q)
    right = lowestCommonAncestor(root.right, p, q)
    return root if left and right else left or right
```

Logic: if both left and right return non-None, the current node is the LCA. Otherwise bubble up whichever side found a match.
```python
def pathSum(root, target):
    res = []
    def dfs(node, remaining, path):
        if not node: return
        path.append(node.val)
        if not node.left and not node.right and remaining == node.val:
            res.append(path[:])
        else:
            dfs(node.left, remaining - node.val, path)
            dfs(node.right, remaining - node.val, path)
        path.pop()  # backtrack
    dfs(root, target, [])
    return res
```

Use this whenever you need to collect/aggregate along a root-to-leaf path. The `path.pop()` is the backtrack step.
Recognize: top K, kth largest/smallest, merge K sorted lists, streaming median, scheduling by earliest finish time
```python
import heapq
def topK(nums, k):
    h = []
    for x in nums:
        heapq.heappush(h, x)
        if len(h) > k:
            heapq.heappop(h)  # evict the smallest
    return sorted(h, reverse=True)
```

For top K largest: use a min-heap of size K (keeps K largest, evicts smallest). For top K smallest: use a max-heap of size K with negated values.
```python
import heapq
def mergeKLists(lists):
    dummy = ListNode(0); tail = dummy
    h = []
    for i, node in enumerate(lists):
        if node:
            heapq.heappush(h, (node.val, i, node))
    while h:
        val, i, node = heapq.heappop(h)
        tail.next = node; tail = tail.next
        if node.next:
            heapq.heappush(h, (node.next.val, i, node.next))
    return dummy.next
```

Time: O(N log K) where N = total elements, K = number of lists. The index `i` in the tuple breaks ties so `ListNode` objects are never compared directly.
```python
import heapq
class MedianFinder:
    def __init__(self):
        self.lo = []  # max-heap (negate values): stores lower half
        self.hi = []  # min-heap: stores upper half
    def addNum(self, num):
        heapq.heappush(self.lo, -num)
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        if len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))
    def findMedian(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2
```

Recognize: "generate all", subsets/permutations, constraint satisfaction, word search, N-Queens, Sudoku
Core template: make a choice → recurse → undo the choice (backtrack).
```python
def subsets(nums):
    res = []; path = []
    def bt(i):
        if i == len(nums):
            res.append(path[:]); return
        bt(i + 1)  # skip nums[i]
        path.append(nums[i])
        bt(i + 1)  # take nums[i]
        path.pop()
    bt(0)
    return res

def permute(nums):
    res = []; used = [False] * len(nums); path = []
    def bt():
        if len(path) == len(nums):
            res.append(path[:]); return
        for i in range(len(nums)):
            if used[i]: continue
            used[i] = True; path.append(nums[i])
            bt()
            path.pop(); used[i] = False
    bt()
    return res

def combinationSum(candidates, target):
    res = []; path = []
    def bt(i, remaining):
        if remaining == 0:
            res.append(path[:]); return
        if remaining < 0 or i == len(candidates):
            return
        # include candidates[i] — allow reuse: recurse with same i
        path.append(candidates[i])
        bt(i, remaining - candidates[i])
        path.pop()
        # skip candidates[i] — no reuse: recurse with i+1
        bt(i + 1, remaining)
    bt(0, target)
    return res
```

Pitfall — Duplicates: Sort first, then skip duplicates at the same recursion depth:

```python
if i > start and candidates[i] == candidates[i-1]: continue
```
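That skip applied end-to-end, in a Combination Sum II-style sketch (`start` marks where this depth's loop begins, so the guard only fires for siblings at the same depth, never for a child reusing the same value down a path):

```python
def combinationSum2(candidates, target):
    candidates.sort()
    res = []; path = []
    def bt(start, remaining):
        if remaining == 0:
            res.append(path[:]); return
        for i in range(start, len(candidates)):
            if i > start and candidates[i] == candidates[i - 1]:
                continue  # duplicate value at the same depth — skip
            if candidates[i] > remaining:
                break  # sorted: nothing further can fit
            path.append(candidates[i])
            bt(i + 1, remaining - candidates[i])  # i+1: each item used once
            path.pop()
    bt(0, target)
    return res
```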
```python
def exist(board, word):
    rows, cols = len(board), len(board[0])
    def dfs(r, c, i):
        if i == len(word): return True
        if not (0 <= r < rows and 0 <= c < cols): return False
        if board[r][c] != word[i]: return False
        tmp, board[r][c] = board[r][c], '#'  # mark visited
        found = any(dfs(r+dr, c+dc, i+1) for dr, dc in [(0,1),(0,-1),(1,0),(-1,0)])
        board[r][c] = tmp  # restore (backtrack)
        return found
    return any(dfs(r, c, 0) for r in range(rows) for c in range(cols))
```

Trick: mutate the board cell to `'#'` to mark it visited without an extra set — then restore it. Time: O(rows × cols × 4^len(word)).
Time: O(2^n) subsets, O(n!) permutations — backtracking prunes aggressively in practice.
Recognize: prefix search, dictionary lookup with wildcards, autocomplete, word search II (many words), "starts with"
```python
class TrieNode:
    def __init__(self):
        self.ch = {}
        self.end = False

class Trie:
    def __init__(self):
        self.root = TrieNode()
    def insert(self, word):
        node = self.root
        for c in word:
            if c not in node.ch:
                node.ch[c] = TrieNode()
            node = node.ch[c]
        node.end = True
    def search(self, word):
        node = self.root
        for c in word:
            if c not in node.ch: return False
            node = node.ch[c]
        return node.end
    def startsWith(self, prefix):
        node = self.root
        for c in prefix:
            if c not in node.ch: return False
            node = node.ch[c]
        return True
```

Autocomplete: store a list of top-3 suggestions at each Trie node, updated on insert. Query is then O(prefix_length) — no DFS needed at query time.
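A minimal sketch of that autocomplete idea (class name `SuggestTrie` is illustrative; nodes are plain dicts, and each node's `'$'` slot holds its 3 lexicographically smallest words, maintained on insert):

```python
class SuggestTrie:
    def __init__(self):
        self.root = {}  # children keyed by char; each child carries a '$' list
    def insert(self, word):
        node = self.root
        for c in word:
            node = node.setdefault(c, {'$': []})
            sug = node['$']
            if word not in sug:
                sug.append(word)
                sug.sort()
                if len(sug) > 3:
                    sug.pop()  # keep only the 3 smallest
    def suggest(self, prefix):
        node = self.root
        for c in prefix:
            if c not in node:
                return []
            node = node[c]
        return node['$']
```

Each `suggest` call is O(len(prefix)) — the work happens at insert time.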
Build a Trie from all words. DFS the grid, walk the Trie simultaneously. When node.end == True, record the found word and clear node.end to avoid duplicates.
Time: O(W × L) to build, O(prefix_len) to query
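The Word Search II recipe above, sketched (a dict-based Trie built from `words`; the grid DFS and the Trie walk advance together, and popping the `'$'` marker on a hit dedupes found words):

```python
def findWords(board, words):
    root = {}
    for w in words:  # build the trie; '$' marks a complete word
        node = root
        for c in w:
            node = node.setdefault(c, {})
        node['$'] = w
    rows, cols = len(board), len(board[0])
    found = []
    def dfs(r, c, node):
        ch = board[r][c]
        if ch not in node:  # '#' (visited) also fails this check
            return
        nxt = node[ch]
        if '$' in nxt:
            found.append(nxt.pop('$'))  # record once, remove the marker
        board[r][c] = '#'  # mark visited
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                dfs(nr, nc, nxt)
        board[r][c] = ch  # restore (backtrack)
    for r in range(rows):
        for c in range(cols):
            dfs(r, c, root)
    return found
```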
Recognize: connected components, "number of islands", shortest steps (unweighted), prerequisites/ordering, graph coloring
```python
from collections import deque  # deque: O(1) pops from the left (list.pop(0) is O(n))

def bfs_shortest(start, adj):
    q = deque([start]); dist = {start: 0}
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def numIslands(grid):
    if not grid: return 0
    rows, cols = len(grid), len(grid[0])
    visited = set(); count = 0
    def bfs(r, c):
        q = deque([(r, c)]); visited.add((r, c))
        while q:
            row, col = q.popleft()
            for dr, dc in [(1,0),(-1,0),(0,1),(0,-1)]:
                nr, nc = row+dr, col+dc
                if 0<=nr<rows and 0<=nc<cols and grid[nr][nc]=='1' and (nr,nc) not in visited:
                    visited.add((nr, nc)); q.append((nr, nc))
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '1' and (r,c) not in visited:
                bfs(r, c); count += 1
    return count

def topoSort(n, edges):
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v); indeg[v] += 1
    q = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0: q.append(v)
    return order if len(order) == n else []  # [] = cycle detected

def isBipartite(graph):
    color = {}
    for start in range(len(graph)):
        if start in color: continue
        q = deque([start]); color[start] = 0
        while q:
            u = q.popleft()
            for v in graph[u]:
                if v not in color:
                    color[v] = 1 - color[u]; q.append(v)
                elif color[v] == color[u]:
                    return False
    return True

def hasCycle(n, graph):
    state = [0] * n  # 0=unvisited, 1=visiting, 2=visited
    def dfs(node):
        if state[node] == 1:  # back edge → cycle
            return True
        if state[node] == 2:  # already fully processed
            return False
        state[node] = 1  # mark visiting
        for nei in graph[node]:
            if dfs(nei):
                return True
        state[node] = 2  # mark finished
        return False
    for i in range(n):
        if state[i] == 0:
            if dfs(i):
                return True
    return False
```

Use 3-color (not a simple `visited` set) for directed graphs. For undirected graphs, a simple `visited` set + "don't go to parent" is enough.
```python
import heapq
def dijkstra(n, adj, src):
    INF = float('inf')
    dist = [INF] * n; dist[src] = 0
    h = [(0, src)]
    while h:
        d, u = heapq.heappop(h)
        if d != dist[u]: continue  # stale entry — skip
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(h, (nd, v))
    return dist
```

Stale check: `if d != dist[u]: continue` is more efficient than a `visited` set.
```python
from collections import deque
def zeroOneBFS(n, adj, src):
    dist = [float('inf')] * n; dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                if w == 0: q.appendleft(v)  # 0-cost: to front
                else: q.append(v)           # 1-cost: to back
    return dist
```

Why it works: the deque acts as a priority queue when weights are only 0 or 1. O(V + E) vs O((V+E) log V) for Dijkstra.
```python
class DSU:
    def __init__(self, n):
        self.p = list(range(n))
        self.rank = [0] * n
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]  # path compression (halving)
            x = self.p[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb: return False  # already connected
        if self.rank[ra] < self.rank[rb]: ra, rb = rb, ra
        self.p[rb] = ra
        if self.rank[ra] == self.rank[rb]: self.rank[ra] += 1
        return True
```

Kruskal MST: sort edges by weight → greedily union if they connect different components.
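Kruskal sketched end-to-end (self-contained with an inline find/union so the snippet runs alone; edges are `(weight, u, v)` tuples, and the graph is assumed connected):

```python
def kruskal(n, edges):
    # edges: list of (weight, u, v)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    total = 0
    for w, u, v in sorted(edges):  # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:  # edge joins two different components — keep it
            parent[rv] = ru
            total += w
    return total
```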
```python
def bellmanFord(n, edges, src):
    dist = [float('inf')] * n; dist[src] = 0
    for _ in range(n - 1):  # relax n-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # Check for negative cycle: if any edge can still be relaxed
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None  # negative cycle exists
    return dist
```
return distRecognize: "max/min/count" with choices per index; "number of ways"; "rob"; "decode ways"; "jump game cost"
Rule: always define dp[i] in one sentence before writing the transition. The answer is usually dp[n] or max(dp).
Pattern 1 — Take/Skip (House Robber family)
dp[i] = max money robbing houses 0..i.
```python
def rob(nums):
    n = len(nums)
    if n == 1:
        return nums[0]
    dp = [0] * n
    dp[0] = nums[0]
    dp[1] = max(nums[0], nums[1])
    for i in range(2, n):
        dp[i] = max(dp[i - 1],            # skip house i
                    dp[i - 2] + nums[i])  # rob house i
    return dp[n - 1]
```

Pattern 2 — Count Ways (Climb Stairs)
dp[i] = number of ways to reach step i.
```python
def climbStairs(n):
    dp = [0] * (n + 1)
    dp[0] = 1  # one way to stand at the ground
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]  # came from step i-1 or i-2
    return dp[n]
```

Pattern 3 — Kadane (Max Subarray)
dp[i] = max subarray sum ending at index i.
```python
def maxSubArray(nums):
    n = len(nums)
    dp = [0] * n
    dp[0] = nums[0]
    for i in range(1, n):
        dp[i] = max(nums[i],              # start fresh at i
                    dp[i - 1] + nums[i])  # extend previous subarray
    return max(dp)
```

The transition `max(nums[i], dp[i-1] + nums[i])` is the core of Kadane: either start a new subarray here or extend the best one ending before here.

Pattern 4 — LIS (Longest Increasing Subsequence) — O(n²) dp[]
dp[i] = length of longest increasing subsequence ending at index i.
```python
def lengthOfLIS(nums):
    n = len(nums)
    dp = [1] * n  # every element is a subsequence of length 1
    for i in range(1, n):
        for j in range(i):
            if nums[j] < nums[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)
```

O(n log n) with patience sorting (binary search on `tails[]`) if needed for large inputs, but the `dp[]` version is easier to derive and explain in an interview.
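The O(n log n) version for reference. Invariant: `tails[i]` is the smallest possible tail of an increasing subsequence of length `i + 1`, so `tails` stays sorted and binary search applies:

```python
import bisect

def lengthOfLIS_fast(nums):
    tails = []
    for x in nums:
        i = bisect.bisect_left(tails, x)  # first tail >= x
        if i == len(tails):
            tails.append(x)   # x extends the longest subsequence
        else:
            tails[i] = x      # smaller tail found for length i+1
    return len(tails)
```

Note `tails` is not itself an LIS — only its length is the answer.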
Pattern 5 — Coin Change (min coins)
dp[i] = minimum coins needed to make amount i.
```python
def coinChange(coins, amount):
    dp = [float('inf')] * (amount + 1)
    dp[0] = 0  # 0 coins needed for amount 0
    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i:
                dp[i] = min(dp[i], dp[i - coin] + 1)
    return dp[amount] if dp[amount] != float('inf') else -1
```

Recognize: two indices i, j; grid movement; string alignment; "pick items up to weight W"
Pattern 1 — Grid DP (min path sum)
```python
def minPathSum(grid):
    m, n = len(grid), len(grid[0])
    dp = [[0] * n for _ in range(m)]
    dp[0][0] = grid[0][0]
    for i in range(m):
        for j in range(n):
            if i == j == 0: continue
            best = float('inf')
            if i > 0: best = min(best, dp[i-1][j])
            if j > 0: best = min(best, dp[i][j-1])
            dp[i][j] = best + grid[i][j]
    return dp[m-1][n-1]
```

Pattern 2 — String DP (LCS / Edit Distance family)
```python
def lcs(a, b):
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            dp[i][j] = (1 + dp[i+1][j+1]) if a[i] == b[j] else max(dp[i+1][j], dp[i][j+1])
    return dp[0][0]
```

Edit distance: `dp[i][j] = dp[i+1][j+1]` if chars match, else `1 + min(dp[i+1][j], dp[i][j+1], dp[i+1][j+1])`.
Pattern 3 — Knapsack
```python
# 0/1 knapsack (each item used at most once)
def knapsack01(weights, values, W):
    dp = [0] * (W + 1)
    for w, v in zip(weights, values):
        for cap in range(W, w - 1, -1):  # BACKWARDS prevents reuse
            dp[cap] = max(dp[cap], dp[cap - w] + v)
    return dp[W]

# Unbounded knapsack (item can be reused)
def knapsackUnbounded(weights, values, W):
    dp = [0] * (W + 1)
    for w, v in zip(weights, values):
        for cap in range(w, W + 1):  # FORWARDS allows reuse
            dp[cap] = max(dp[cap], dp[cap - w] + v)
    return dp[W]
```

Recognize: "buy/sell with cooldown / transaction limit / fee", multiple states per index
```python
# Best Time to Buy/Sell Stock with Cooldown
def maxProfit(prices):
    hold = float('-inf')  # holding a stock
    cash = 0              # not holding, can buy
    cool = 0              # cooldown (just sold)
    for p in prices:
        hold, cash, cool = max(hold, cash - p), max(cash, cool), hold + p
    return max(cash, cool)
```

Key insight: define states that capture all relevant history, then write transitions. With `k` transactions: `dp[i][k][0/1]` (day, txns left, holding/not).
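A sketch of that state definition for "at most k transactions" (LC 188-style), with the day dimension collapsed into rolling arrays; `buy[j]`/`sell[j]` are the best balances after the j-th buy/sell:

```python
def maxProfitK(k, prices):
    buy = [float('-inf')] * (k + 1)   # buy[j]: best balance holding, j buys used
    sell = [0] * (k + 1)              # sell[j]: best balance flat, j sells done
    for p in prices:
        for j in range(1, k + 1):
            buy[j] = max(buy[j], sell[j - 1] - p)  # buy (starts transaction j)
            sell[j] = max(sell[j], buy[j] + p)     # sell (ends transaction j)
    return sell[k]
```

Unused transactions cost nothing (a same-day buy+sell yields 0), so `sell[k]` already covers "fewer than k" cases.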
```python
# Count all palindromic substrings — O(n²) expand-from-center
def countSubstrings(s):
    count = 0
    def expand(l, r):
        nonlocal count
        while l >= 0 and r < len(s) and s[l] == s[r]:
            count += 1; l -= 1; r += 1
    for i in range(len(s)):
        expand(i, i)      # odd-length
        expand(i, i + 1)  # even-length
    return count

# 2D DP alternative: dp[i][j] = True if s[i..j] is a palindrome
# dp[i][j] = (s[i] == s[j]) and (j - i < 2 or dp[i+1][j-1])
```

Manacher's algorithm does this in O(n) but expand-from-center is interview-sufficient and far easier to recall.
Recognize: "best result for subarray [l..r]", burst balloons, palindrome partition, matrix chain multiplication, stone merging
```python
# Template: dp[l][r] = best answer for interval l..r
# Enumerate all interval lengths, then split points
n = len(arr)
dp = [[0] * n for _ in range(n)]
for length in range(2, n + 1):  # interval length
    for l in range(n - length + 1):
        r = l + length - 1
        for k in range(l, r):  # split point
            dp[l][r] = max(dp[l][r], dp[l][k] + dp[k+1][r] + cost(l, k, r))
```

Time: O(n³) — acceptable for n ≤ 500.
Recognize: n ≤ 20, "visit all nodes / assign all tasks / cover all states", TSP-like, "state = set of chosen items"
```python
# dp[mask] or dp[mask][i]
# mask is a bitmask representing which elements have been used
# iterate over all 2^n subsets
n = 5
dp = [float('inf')] * (1 << n)
dp[0] = 0  # base case: empty set
for mask in range(1 << n):
    for i in range(n):
        if mask & (1 << i): continue  # i already in mask
        new_mask = mask | (1 << i)
        dp[new_mask] = min(dp[new_mask], dp[mask] + cost(mask, i))
```

Recognize: "minimum operations", "can you reach the end", "choose earliest finishing", "maximize number of tasks/events", "assign to minimize max"
Key test: Can you state a monotonic invariant — a property that stays true and never needs to be undone? If yes, greedy works. If no, it's probably DP.
```python
def canJump(nums):
    far = 0
    for i, x in enumerate(nums):
        if i > far: return False  # can't reach index i
        far = max(far, i + x)
    return True

def maxEvents(intervals):
    intervals.sort(key=lambda x: x[1])  # sort by END time
    count = 0; last_end = float('-inf')
    for start, end in intervals:
        if start > last_end:  # no overlap
            count += 1; last_end = end
    return count

import heapq
def minMeetingRooms(intervals):
    intervals.sort()  # sort by start
    h = []  # min-heap of end times
    for start, end in intervals:
        if h and h[0] <= start:
            heapq.heapreplace(h, end)  # reuse earliest-freed room
        else:
            heapq.heappush(h, end)  # need new room
    return len(h)
```

Recognize: merge overlapping, minimum rooms/resources needed, event scheduling, sweep line
```python
def merge(intervals):
    intervals.sort()
    res = []
    for s, e in intervals:
        if not res or res[-1][1] < s:
            res.append([s, e])
        else:
            res[-1][1] = max(res[-1][1], e)
    return res

def insert(intervals, newInterval):
    res = []; i = 0; n = len(intervals)
    while i < n and intervals[i][1] < newInterval[0]:
        res.append(intervals[i]); i += 1
    while i < n and intervals[i][0] <= newInterval[1]:
        newInterval[0] = min(newInterval[0], intervals[i][0])
        newInterval[1] = max(newInterval[1], intervals[i][1])
        i += 1
    res.append(newInterval)
    res.extend(intervals[i:])
    return res

def minRooms_sweep(intervals):
    events = []
    for s, e in intervals:
        events.append((s, 1))
        events.append((e, -1))
    events.sort(key=lambda x: (x[0], x[1]))  # end (-1) before start (+1) at same time
    rooms = max_rooms = 0
    for _, delta in events:
        rooms += delta
        max_rooms = max(max_rooms, rooms)
    return max_rooms
```

Recognize: "maximum/minimum in every window of size K", "subarray min/max", "largest rectangle" variations, "shortest subarray with sum ≥ k"
Core insight: a standard sliding window with `max()` is O(nk). The deque keeps a decreasing sequence of candidates — the max is always at the front. Each element is pushed and popped at most once → O(n) total.

```python
from collections import deque
def maxSlidingWindow(nums, k):
    dq = deque()  # stores indices; front = index of current max
    res = []
    for i, x in enumerate(nums):
        # Remove indices outside the window
        while dq and dq[0] < i - k + 1:
            dq.popleft()
        # Remove indices whose values are smaller than x (they can never be max)
        while dq and nums[dq[-1]] < x:
            dq.pop()
        dq.append(i)
        if i >= k - 1:
            res.append(nums[dq[0]])  # front is always the max
    return res
```

Sliding window minimum: same template — flip `<` to `>` in the inner while loop.
When a DP recurrence is dp[i] = max(dp[j]) + cost over a valid range of j, a monotonic deque turns O(n²) DP into O(n).
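An example of that optimization, sketched on an LC 1696-style Jump Game VI (`dp[i] = nums[i] + max(dp[i-k..i-1])`; the deque holds indices with decreasing dp values, so the best predecessor is always at the front):

```python
from collections import deque

def maxResult(nums, k):
    dp = [0] * len(nums)
    dp[0] = nums[0]
    dq = deque([0])  # indices of dp in decreasing value order
    for i in range(1, len(nums)):
        while dq and dq[0] < i - k:  # front fell out of jump range
            dq.popleft()
        dp[i] = nums[i] + dp[dq[0]]  # best reachable predecessor
        while dq and dp[dq[-1]] <= dp[i]:  # dominated candidates
            dq.pop()
        dq.append(i)
    return dp[-1]
```

This is the naive O(nk) recurrence reduced to O(n): each index enters and leaves the deque once.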
Time/Space: O(n) / O(k)
Recognize: "range sum / range min / range max query", "point update then query", "inversion count", "count of smaller numbers after self"
Rule of thumb: If you need O(log n) point updates AND range queries, reach for a BIT (Fenwick Tree) for sums, or a Segment Tree for anything more complex (range min/max, lazy propagation).
class BIT:
def __init__(self, n):
self.n = n
self.tree = [0] * (n + 1) # 1-indexed
def update(self, i, delta):
while i <= self.n:
self.tree[i] += delta
i += i & (-i) # move to parent
def query(self, i): # prefix sum [1..i]
s = 0
while i > 0:
s += self.tree[i]
i -= i & (-i) # move to predecessor
return s
def range_query(self, l, r): # sum [l..r]
return self.query(r) - self.query(l - 1)class SegTree:
    def __init__(self, nums):
        self.n = len(nums)
        self.tree = [float('inf')] * (4 * self.n)
        self.build(nums, 1, 0, self.n - 1)
    def build(self, nums, node, start, end):
        if start == end:
            self.tree[node] = nums[start]
        else:
            mid = (start + end) // 2
            self.build(nums, 2*node, start, mid)
            self.build(nums, 2*node+1, mid+1, end)
            self.tree[node] = min(self.tree[2*node], self.tree[2*node+1])
    def update(self, node, start, end, idx, val):
        if start == end:
            self.tree[node] = val
        else:
            mid = (start + end) // 2
            if idx <= mid: self.update(2*node, start, mid, idx, val)
            else: self.update(2*node+1, mid+1, end, idx, val)
            self.tree[node] = min(self.tree[2*node], self.tree[2*node+1])
    def query(self, node, start, end, l, r):
        if r < start or end < l: return float('inf')  # out of range
        if l <= start and end <= r: return self.tree[node]  # fully covered
        mid = (start + end) // 2
        return min(self.query(2*node, start, mid, l, r),
                   self.query(2*node+1, mid+1, end, l, r))

Usage:
st = SegTree(nums)
st.update(1, 0, st.n-1, idx, new_val)
result = st.query(1, 0, st.n-1, l, r)

from math import gcd
# GCD / LCM — water jug, divisibility
lcm = lambda a, b: a * b // gcd(a, b)
can_fill = lambda x, y, t: t <= x + y and t % gcd(x, y) == 0
# Modular arithmetic (large counts)
MOD = 10**9 + 7
result = (a * b) % MOD
# Modular inverse (Fermat's little theorem, MOD must be prime)
inv = pow(a, MOD - 2, MOD)
# Fast exponentiation
pow(base, exp, mod) # Python built-in: O(log exp)
# Integer square root (avoid float precision issues)
import math
isqrt = math.isqrt(n)  # exact integer sqrt, no float

# nCr mod p using Pascal's triangle — O(n²), use when n ≤ 1000
def build_pascal(n, MOD):
    C = [[0] * (n+1) for _ in range(n+1)]
    for i in range(n+1):
        C[i][0] = 1
        for j in range(1, i+1):
            C[i][j] = (C[i-1][j-1] + C[i-1][j]) % MOD
    return C
# nCr mod p using Fermat's little theorem — O(n), use when n ≤ 10^6
def nCr(n, r, MOD):
    if r > n: return 0
    num = den = 1
    for i in range(r):
        num = num * (n - i) % MOD
        den = den * (i + 1) % MOD
    return num * pow(den, MOD - 2, MOD) % MOD  # den^(MOD-2) = modular inverse

When: counting arrangements, paths in grid (only right/down), parentheses expressions, or any "choose k from n" question.
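To ground the grid-paths claim: an m × n grid walked only right/down takes m+n-2 steps, of which you choose m-1 to be "down". A minimal sketch reusing the Fermat-based nCr (the default MOD here is an assumption, the usual prime):

```python
def nCr(n, r, MOD=10**9 + 7):
    # Product formula with Fermat modular inverse (MOD must be prime).
    if r < 0 or r > n:
        return 0
    num = den = 1
    for i in range(r):
        num = num * (n - i) % MOD
        den = den * (i + 1) % MOD
    return num * pow(den, MOD - 2, MOD) % MOD

def unique_paths(m, n):
    # All monotone lattice paths: choose the m-1 "down" steps
    # among the m+n-2 total steps.
    return nCr(m + n - 2, m - 1)
```

`unique_paths(3, 7)` gives 28, matching the classic DP answer for that grid.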
# Test if bit k is set
(n >> k) & 1
# Set bit k
n | (1 << k)
# Clear bit k
n & ~(1 << k)
# Remove lowest set bit
n & (n - 1)
# Count set bits
bin(n).count('1')  # or: n.bit_count() in Python 3.10+
# XOR cancellation (find single number)
def singleNumber(nums):
    x = 0
    for v in nums: x ^= v
    return x
# Iterate over all non-empty subsets of a mask
sub = mask
while sub:
    process(sub)
    sub = (sub - 1) & mask
# Check if n is a power of 2
n > 0 and (n & (n - 1)) == 0
# Lowest set bit (useful in BIT / DSU)
lowbit = n & (-n)

Covered partly in Patterns 14 & 15. Here are the classic interval problems and their exact approaches.
| Problem | Sort by | Data structure | Decision |
|---|---|---|---|
| Merge intervals | Start | Array | Merge if overlap |
| Non-overlapping intervals (min remove) | End | Counter | Keep if no overlap |
| Meeting Rooms II (min rooms) | Start | Min-heap of ends | Reuse or add room |
| Task scheduler (min idle) | Frequency | Max-heap | Always schedule most frequent next |
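The first row of the table, sketched as a minimal merge-intervals implementation (sort by start, then extend the last kept interval on overlap):

```python
def merge_intervals(intervals):
    intervals.sort(key=lambda iv: iv[0])  # sort by start
    merged = []
    for s, e in intervals:
        if merged and s <= merged[-1][1]:          # overlaps the last kept interval
            merged[-1][1] = max(merged[-1][1], e)  # extend its end
        else:
            merged.append([s, e])
    return merged
```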
| Pattern | Time | Space | Notes |
|---|---|---|---|
| Arrays / Hashing | O(n) | O(n) | |
| Two-Sum hash lookup | O(n) | O(n) | Single pass |
| Dutch National Flag | O(n) | O(1) | 3-pointer in-place partition |
| Longest Consecutive Seq | O(n) | O(n) | Hash set, start only at seq beginning |
| Two Pointers (l/r) | O(n) | O(1) | Sorted array |
| Palindrome check | O(n) | O(1) | l/r shrink, skip non-alnum for LC 125 |
| 3Sum | O(n²) | O(1) | Sort + fix outer + inner two pointers |
| Sliding Window | O(n) | O(k) | k = window constraint |
| Monotonic Stack | O(n) | O(n) | Each element pushed+popped once |
| Monotonic Deque | O(n) | O(k) | Sliding window max/min |
| Binary Search | O(log n) | O(1) | |
| Linked List ops | O(n) | O(1) | |
| Nth from end | O(n) | O(1) | Two pointers with gap |
| Tree DFS | O(n) | O(h) | h = height; O(log n) balanced, O(n) skewed |
| Tree BFS | O(n) | O(w) | w = max width ≈ n/2 last level |
| LCA | O(n) | O(h) | Recursive postorder |
| Heap (top K) | O(n log k) | O(k) | |
| K-way merge | O(N log k) | O(k) | N = total elements |
| Backtracking subsets | O(2^n) | O(n) | |
| Backtracking perms | O(n! × n) | O(n) | |
| Grid word search | O(rows×cols×4^L) | O(L) | L = word length |
| Trie insert/search | O(L) | O(A × L × N) | L=word len, A=alphabet, N=words |
| BFS / DFS (graph) | O(V + E) | O(V) | |
| Directed cycle (3-color) | O(V + E) | O(V) | DFS with white/gray/black states |
| Dijkstra | O((V+E) log V) | O(V) | Min-heap |
| 0-1 BFS | O(V + E) | O(V) | Deque instead of heap |
| Bellman-Ford | O(V × E) | O(V) | Handles negative edges |
| Union-Find | O(α(n)) ≈ O(1) | O(n) | With path compression + union by rank |
| Topological Sort | O(V + E) | O(V) | Kahn's (BFS-based) |
| 1D DP (dp[] bottom-up) | O(n) | O(n) | O(1) if space-optimised |
| State machine DP | O(n × states) | O(states) | Stock buy/sell family |
| Palindrome (expand) | O(n²) | O(1) | Expand-from-center |
| 2D DP | O(n × m) | O(m) rolling | |
| Interval DP | O(n³) | O(n²) | n ≤ 500 typical |
| Bitmask DP | O(2^n × n) | O(2^n) | n ≤ 20 typical |
| LIS (patience) | O(n log n) | O(n) | |
| BIT (Fenwick) | O(log n) per op | O(n) | Range sum + point update |
| Segment Tree | O(log n) per op | O(n) | Range min/max/sum + updates |
| nCr mod p (Fermat) | O(n) | O(1) | For n ≤ 10^6, MOD prime |
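One row worth seeing concretely: the O(n log n) LIS bound comes from patience sorting, where tails[i] holds the smallest possible tail of an increasing subsequence of length i+1. A minimal sketch:

```python
import bisect

def length_of_lis(nums):
    tails = []  # tails[i] = smallest tail of an increasing subseq of length i+1
    for v in nums:
        i = bisect.bisect_left(tails, v)  # leftmost position v can occupy
        if i == len(tails):
            tails.append(v)  # v extends the longest subsequence found so far
        else:
            tails[i] = v     # v gives a smaller tail for length i+1
    return len(tails)
```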
| Problem Type | Pattern Combo |
|---|---|
| "Shortest path with constraint (fuel/cost)" | BFS on (node, constraint_state) |
| "Top K with streaming updates" | Heap + Hash Map (for deletion) |
| "Count valid subarrays" | Sliding Window + Two Pointers or Prefix Sum |
| "Word search in grid with many words" | Backtracking + Trie (prune paths early) |
| "Schedule tasks with cooldown" | Greedy + Heap or math formula |
| "Range query with updates" | Segment Tree or BIT |
| "All paths / min-cost path in DAG" | Topological sort + DP |
| "Partition into subsets" | Backtracking + bitmask DP if n ≤ 20 |
| "Minimize max / maximize min" | Binary search on answer + greedy/BFS check |
| "Palindrome substrings / partitions" | Interval DP or expand-from-center |