DAA Guide
Quiz Syllabus · 30th April 2026

UCS415: Design & Analysis
of Algorithms

Complete zero-to-exam study guide. Every concept explained from scratch with examples, traces, and exam-style MCQs. Read top to bottom once — you will score full marks.

Complexity Analysis · Divide & Conquer · Greedy · Dynamic Programming · Backtracking · Graphs
01

Introduction & Complexity Analysis

Foundations: data structures, algorithm analysis, Big-O, recurrence relations

1.1 · Essential Data Structures

Before analysing algorithms, you need to know what each data structure does and its time complexity. Every algorithm uses one or more of these.

Structure | What it is | Key operations & time | Real use
Stack | Last In First Out (LIFO). Think of a stack of plates. | Push: O(1) · Pop: O(1) · Peek: O(1) | Function call stack, undo/redo, DFS
Queue | First In First Out (FIFO). Think of a queue at a counter. | Enqueue: O(1) · Dequeue: O(1) | BFS, scheduling, print spooler
Binary Tree | Each node has at most 2 children (left, right). | Search/Insert/Delete: O(n) worst, O(log n) in BST | Expression trees, Huffman coding
Binary Search Tree | Left child < parent < right child — always. | Search/Insert: O(log n) avg, O(n) worst | Ordered sets, maps
Min-Heap | Complete binary tree. Parent ≤ both children. Root = minimum element. | Insert: O(log n) · Extract-min: O(log n) · Peek-min: O(1) | Priority queue, Dijkstra, Huffman
Max-Heap | Parent ≥ both children. Root = maximum element. | Insert: O(log n) · Extract-max: O(log n) | Heap sort, scheduling
Heap Property

A heap is built from an array. For index i: left child = 2i+1, right child = 2i+2, parent = ⌊(i-1)/2⌋. Building a heap from n elements takes O(n) — NOT O(n log n), which surprises many students.
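
A minimal Python sketch of these index rules, using the standard-library heapq module (a min-heap over a plain list); the sample values are illustrative:

import heapq

# Index arithmetic for a 0-indexed, array-backed heap.
def left(i):   return 2 * i + 1
def right(i):  return 2 * i + 2
def parent(i): return (i - 1) // 2

h = [5, 2, 8, 1, 9]
heapq.heapify(h)           # builds a valid min-heap in O(n)
print(h[0])                # peek-min: 1, O(1)
print(heapq.heappop(h))    # extract-min: 1, O(log n)
heapq.heappush(h, 0)       # insert: O(log n)
print(h[0])                # 0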

1.2 · Asymptotic Notation

We analyse algorithms by how their running time grows as input size n → ∞. We ignore constants and lower-order terms because they don't matter at scale.

The Three Notations

Big-O — Upper Bound

f(n) = O(g(n)) means f grows no faster than g.
"The algorithm takes at most this long."

Example: Insertion sort is O(n²). It may be faster, but never worse than n².

Omega — Lower Bound

f(n) = Ω(g(n)) means f grows at least as fast as g.
"The algorithm takes at least this long."

Example: Any comparison sort needs Ω(n log n) comparisons in the worst case; no comparison sort can beat this bound.

Theta — Tight Bound (Most Useful)

f(n) = Θ(g(n)) means f grows at exactly the same rate as g (within constants).
This is what we usually mean when we say "the complexity is n log n".

Merge sort is Θ(n log n) — it's always n log n, no better or worse.

How to simplify: drop constants & lower terms

Simplification Rules

Given f(n) = 7n³ + 5n² + 100n + 9:

1
Drop the constant: 9 → gone
2
Drop lower-order terms: 100n and 5n² → gone (n³ dominates)
3
Drop the coefficient: 7 → gone
4
Result: f(n) = Θ(n³)

Order of Growth — Memorise this hierarchy

O(1) < O(log n) < O(√n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(n!)
Complexity | Name | Example algorithm | n=1000 operations (approx)
O(1) | Constant | Array access, hash lookup | 1
O(log n) | Logarithmic | Binary search | 10
O(n) | Linear | Linear search, Kadane's | 1,000
O(n log n) | Linearithmic | Merge sort, Quick sort avg | 10,000
O(n²) | Quadratic | Bubble sort, Insertion sort | 1,000,000
O(2ⁿ) | Exponential | Naive Fibonacci, brute-force | 2^1000 ≈ ∞
log n in algorithms always means log₂n (base 2). log₂1000 ≈ 10. Always check: if an algorithm halves the problem each time, it's logarithmic.

1.3 · Recurrence Relations

Recursive algorithms have running times defined by recurrences: equations that express T(n) in terms of T of smaller inputs. We solve them to find the closed-form complexity.

Method 1: Master Theorem (fastest method — use this first)

For recurrences of the form T(n) = aT(n/b) + f(n) where a ≥ 1, b > 1:

Master Theorem — Three Cases

First compute the watershed function: n^(log_b a)

1
Case 1: If f(n) = O(n^(log_b a − ε)) for some ε > 0
T(n) = Θ(n^(log_b a))
Meaning: subproblem work dominates.
2
Case 2: If f(n) = Θ(n^(log_b a))
T(n) = Θ(n^(log_b a) · log n)
Meaning: all levels contribute equally.
3
Case 3: If f(n) = Ω(n^(log_b a + ε)) for some ε > 0
T(n) = Θ(f(n))
Meaning: combining work dominates.

Worked Examples — trace every step

Example 1: Merge Sort

T(n) = 2T(n/2) + n

Here: a=2, b=2, f(n)=n
Watershed = n^(log₂ 2) = n^1 = n
Compare f(n)=n with watershed=n → they are equal → Case 2
T(n) = Θ(n log n)

Example 2: Binary Search

T(n) = T(n/2) + 1

Here: a=1, b=2, f(n)=1
Watershed = n^(log₂ 1) = n^0 = 1
Compare f(n)=1 with watershed=1 → equal → Case 2
T(n) = Θ(log n)

Example 3: T(n) = 2T(n/2) + n²

Here: a=2, b=2, f(n)=n²
Watershed = n^(log₂ 2) = n
Compare f(n)=n² with watershed=n → n² grows faster → Case 3
T(n) = Θ(n²)
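
A small Python helper for the common case where f(n) is a polynomial n^k: the three cases reduce to comparing k with log_b a. (The function name and tolerance are my own; for polynomial f(n), the Case 3 regularity condition holds automatically.)

import math

def master(a, b, k):
    """Classify T(n) = a*T(n/b) + n**k by the Master Theorem."""
    w = math.log(a, b)            # watershed exponent: log_b(a)
    if abs(k - w) < 1e-9:         # Case 2: f(n) matches the watershed
        return f"Theta(n^{w:g} * log n)"
    if k < w:                     # Case 1: subproblem work dominates
        return f"Theta(n^{w:g})"
    return f"Theta(n^{k:g})"      # Case 3: combine work dominates

print(master(2, 2, 1))   # merge sort    -> Theta(n^1 * log n)
print(master(1, 2, 0))   # binary search -> Theta(n^0 * log n), i.e. Theta(log n)
print(master(2, 2, 2))   # example 3     -> Theta(n^2)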

Common Recurrences — Must Memorise

Recurrence | Solution | Algorithm
T(n) = T(n/2) + 1 | Θ(log n) | Binary search
T(n) = 2T(n/2) + 1 | Θ(n) | Tree traversal
T(n) = 2T(n/2) + n | Θ(n log n) | Merge sort
T(n) = T(n/2) + n | Θ(n) | Quickselect (average case)
T(n) = T(n−1) + 1 | Θ(n) | Linear scan
T(n) = T(n−1) + n | Θ(n²) | Selection sort
T(n) = 2T(n−1) + 1 | Θ(2ⁿ) | Tower of Hanoi
T(n) = T(n−1) + T(n−2) + c | Θ(2ⁿ) | Naive Fibonacci
Master Theorem does NOT apply to: T(n) = T(n−1) + f(n) (not n/b form). Use substitution or back-substitution for these.

1.4 · Analysing Iterative & Recursive Algorithms

Rule for iterative analysis: count innermost operations

// Example 1: Single loop
for i = 1 to n:
    print(i)          // runs exactly n times → O(n)

// Example 2: Nested loops
for i = 1 to n:
    for j = 1 to n:
        print(i, j)   // runs n × n = n² times → O(n²)

// Example 3: Dependent loops
for i = 1 to n:
    for j = i to n:
        print(i, j)   // n + (n-1) + ... + 1 = n(n+1)/2 → O(n²)

// Example 4: Logarithmic inner loop
for i = 1 to n:
    j = 1
    while j < n:
        j = j * 2    // inner runs log₂n times → O(n log n) total
For the while loop with j = j*2: j takes values 1, 2, 4, 8, ..., n. It reaches n in log₂n steps. This pattern is extremely common in exam questions.
Practice MCQs — Complexity Analysis
Q1. T(n) = T(n−1) + T(n−2) + T(n−3) for n>3, T(n)=n otherwise. What relation must hold between T(4), T(5), T(6) for the algorithm's order to become constant?
A T(4) = T(5) = T(6)
B T(4) + T(6) = 2T(5)
C T(4) − T(6) = T(5)
D T(4) + T(5) = T(6)
Answer: A · From your previous paper Q1

For T(n) to be O(1) — constant — T(n) must stop growing. Constant growth means the function value does not change with n. The only way T(4), T(5), T(6) remain constant is if T(4) = T(5) = T(6). If they were different (like option D: T(4)+T(5)=T(6)), T would keep increasing — that's linear or exponential, not constant. The actual values here are T(4)=6, T(5)=11 — they are NOT equal, so the algorithm is NOT constant order in reality. The question asks what MUST be true IF it were constant.

Q2. What is the time complexity of T(n) = 4T(n/2) + n?
A Θ(n)
B Θ(n log n)
C Θ(n²)
D Θ(n² log n)
Answer: C

a=4, b=2, f(n)=n. Watershed = n^(log₂4) = n^2. Compare f(n)=n with watershed=n². Since n grows slower than n² (n = O(n^(2−1))), this is Case 1. T(n) = Θ(n^(log₂4)) = Θ(n²). The combine work (n per level) is dominated by the explosion of subproblems.

Q3. Which of the following is NOT O(n²)?
A 100n² + 50n + 1
B n² / 1000
C n² · log n
D n · n
Answer: C

O(n²) means the function grows no faster than cn² for some constant c. Options A (drop constants → n²), B (constant factor → n²), D (= n²) all qualify. Option C is n² · log n, which grows faster than n² since log n → ∞. So n² log n is NOT O(n²) — it's O(n² log n). A common mistake is thinking any function with n² is O(n²).


02

Divide & Conquer

Binary search, merge sort, quicksort, maximum subarray, peak element

2.1 · The General Method

Every divide-and-conquer algorithm follows exactly three steps:

1
Divide: Break the problem of size n into a smaller subproblems of size n/b each.
2
Conquer: Solve each subproblem recursively. Base case: when the subproblem is small enough, solve directly.
3
Combine: Merge the subproblem solutions into the final answer.
The recurrence T(n) = aT(n/b) + f(n) directly maps to this: a = number of subproblems, b = factor by which size reduces, f(n) = cost to divide + combine.

2.2 · Binary Search

Finds a target in a sorted array by repeatedly halving the search space.

BinarySearch(A, low, high, target):
    if low > high: return -1  // not found
    mid = ⌊(low + high) / 2⌋   // integer (floor) division
    if A[mid] == target: return mid
    else if A[mid] < target: return BinarySearch(A, mid+1, high, target)
    else: return BinarySearch(A, low, mid-1, target)
Complexity

Time: O(log n) — halves each step.
Space: O(log n) recursive, O(1) iterative.

Prerequisite

Array MUST be sorted. If unsorted: sort first (O(n log n)), then search — or just use linear search O(n).
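
An iterative Python version mirroring the pseudocode above, with O(1) space (a sketch; the sample arrays are illustrative):

def binary_search(a, target):
    """Return the index of target in sorted list a, or -1. O(log n)."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2    # floor division keeps mid an integer
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            low = mid + 1          # target can only be in the right half
        else:
            high = mid - 1         # target can only be in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1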

Finding a pair a,b in sorted array A[] such that a+b > x is O(1) — just check A[n-1] + A[n-2] (the two largest). Don't overthink it. This appeared as Q3 in your paper.

2.3 · Merge Sort

Splits array in half, recursively sorts each half, then merges the two sorted halves.

MergeSort(A, low, high):
    if low < high:
        mid = ⌊(low + high) / 2⌋
        MergeSort(A, low, mid)      // sort left half
        MergeSort(A, mid+1, high)  // sort right half
        Merge(A, low, mid, high)    // merge: O(n) step
Property | Value
Recurrence | T(n) = 2T(n/2) + n
Time (all cases) | Θ(n log n) — always, no exceptions
Space | O(n) extra — needs auxiliary array for merging
Stable? | Yes — equal elements maintain relative order
In-place? | No — needs extra space

Trace on [5, 2, 8, 1, 9, 3]

Split:  [5,2,8] | [1,9,3]
        [5,2] [8] | [1,9] [3]
        [5] [2] [8] | [1] [9] [3]

Merge:  [2,5] [8] | [1,9] [3]
        [2,5,8] | [1,3,9]
        [1,2,3,5,8,9]  ← final
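
A runnable Python sketch of the same algorithm (illustrative; slicing keeps it short at the cost of extra copies):

def merge_sort(a):
    """Stable merge sort. Theta(n log n) time, O(n) extra space."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge two sorted halves in O(n)
        if left[i] <= right[j]:               # <= keeps the sort stable
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

print(merge_sort([5, 2, 8, 1, 9, 3]))   # [1, 2, 3, 5, 8, 9]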

2.4 · Quick Sort — Critical for Exam

Picks a pivot, partitions array so all elements less than pivot are left and greater are right, then recursively sorts both halves.

QuickSort(A, low, high):
    if low < high:
        pivot_index = Partition(A, low, high)
        QuickSort(A, low, pivot_index - 1)   // left of pivot
        QuickSort(A, pivot_index + 1, high)  // right of pivot
Case | When? | Time | Recurrence
Best | Pivot always splits the array equally in half | Θ(n log n) | T(n) = 2T(n/2) + n
Average | Random pivot, random input | Θ(n log n) | T(n) = T(n/10) + T(9n/10) + n (roughly)
Worst | Pivot always picks smallest or largest | Θ(n²) | T(n) = T(n−1) + n
⚠ EXAM CRITICAL — From your paper Q2

Algorithm Q sorts in descending order using last element as pivot.

Input 1: [5, 4, 3, 2, 1] — already sorted descending
Input 2: [1, 2, 3, 4, 5] — sorted ascending

Partition always performs exactly n−1 comparisons regardless of input values. Both inputs create the same structure of recursive calls (each pivot creates 0-element and n−1-element partitions — worst case for both!). Therefore c1 = c2. Answer: C.

Property | Quick Sort | Merge Sort
Best case | O(n log n) | O(n log n)
Worst case | O(n²) | O(n log n)
Space | O(log n) — in-place! | O(n) extra
Stable? | No | Yes
Preferred when | Average performance, in-place needed | Worst-case guarantee, stable needed
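
A Python sketch using the Lomuto partition scheme with the last element as pivot, matching the setup Q2 assumes (the driver array is illustrative):

def partition(a, low, high):
    """Lomuto partition: pivot = a[high]; exactly high - low comparisons."""
    pivot = a[high]
    i = low - 1
    for j in range(low, high):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[high] = a[high], a[i + 1]
    return i + 1                       # pivot's final position

def quick_sort(a, low, high):
    if low < high:
        p = partition(a, low, high)
        quick_sort(a, low, p - 1)      # left of pivot
        quick_sort(a, p + 1, high)     # right of pivot

xs = [5, 2, 8, 1, 9, 3]
quick_sort(xs, 0, len(xs) - 1)
print(xs)                              # [1, 2, 3, 5, 8, 9]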

2.5 · Maximum Subarray & Peak Element

Maximum Subarray (D&C approach)

Divide array in half. Max subarray is either: entirely in left half, entirely in right half, or crosses the midpoint. The crossing case can be found in O(n). Recurrence: T(n) = 2T(n/2) + n = O(n log n). (Kadane's DP does it in O(n) — covered in Topic 4.)

Peak Element

An element is a peak if it is greater than or equal to both neighbours. Find it using binary search:

1
Check A[mid] vs A[mid+1].
2
If A[mid] < A[mid+1] → peak must be in RIGHT half (there's a larger element there).
3
Else → peak is in LEFT half (or at mid itself).
4
Repeat. Time: O(log n).
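
A Python sketch of this binary search for a peak (the sample array is illustrative):

def find_peak(a):
    """Return the index of a peak element (>= both neighbours). O(log n)."""
    low, high = 0, len(a) - 1
    while low < high:
        mid = (low + high) // 2
        if a[mid] < a[mid + 1]:
            low = mid + 1      # a larger element exists on the right
        else:
            high = mid         # peak is at mid or somewhere to the left
    return low

print(find_peak([1, 3, 20, 4, 1, 0]))   # 2 (value 20)
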
Practice MCQs — Divide & Conquer
Q4. Algorithm Q sorts numbers in descending order using the last element as pivot. c1 and c2 are the comparisons made for inputs [5,4,3,2,1] and [1,2,3,4,5] respectively. Which holds?
A c1 < c2
B c1 > c2
C c1 = c2
D c1 − c2 < c2 − c1
Answer: C · From your previous paper Q2

Every partition call does exactly (high − low) comparisons, regardless of the values in the array. For both inputs, the partition with last element as pivot produces the same worst-case split (pivot ends up at one extreme in both cases when sorting descending). The total number of comparisons across all recursive calls is identical. Therefore c1 = c2.

Q5. Sorted array A[] and threshold x. Complexity of finding a pair a,b in A[] such that a+b > x?
A O(1)
B O(log n)
C O(n)
D O(n log n)
Answer: A · From your previous paper Q3

The question asks whether ANY pair exists with sum > x. The maximum possible sum in a sorted array is A[n−1] + A[n−2] (the two largest elements). If even this maximum sum is ≤ x, no pair exists. If it is > x, that pair itself is our answer. We need exactly ONE check: compare A[n−1]+A[n−2] with x. That's O(1). Students who think O(n) are solving the harder problem of finding ALL such pairs.

Q6. Which sorting algorithm guarantees O(n log n) in the worst case AND is stable?
A Quick Sort
B Merge Sort
C Heap Sort
D Shell Sort
Answer: B

Merge Sort: always O(n log n) regardless of input, and stable (equal elements keep original relative order). Quick Sort: O(n²) worst case. Heap Sort: O(n log n) worst case but NOT stable (heap operations may reorder equal elements). Shell Sort: O(n log²n) typically, not a guaranteed O(n log n) algorithm.


03

Greedy Algorithms

Activity selection, platforms, coin change, Huffman coding, job sequencing, fractional knapsack

3.1 · The Greedy Strategy

A greedy algorithm builds a solution step by step, always making the choice that looks best right now (locally optimal) without reconsidering past choices. It works correctly only when the problem has:

Property 1: Greedy Choice

A globally optimal solution can be constructed by making locally optimal choices. Each greedy choice is safe.

Property 2: Optimal Substructure

An optimal solution to the problem contains optimal solutions to its subproblems.

Greedy does NOT always give the optimal answer. The classic failure case is coin change with non-standard denominations. Always verify the problem has the greedy-choice property before applying greedy.

3.2 · Activity Selection Problem

Given n activities each with a start time sᵢ and finish time fᵢ, select the maximum number of mutually compatible activities (no two overlap).

Greedy Rule

Always pick the activity with the earliest finish time that is compatible with previously selected activities.

Sort activities by finish time. Scan left to right. Accept activity if its start ≥ finish time of last accepted activity.

Worked Example

Activity | Start | Finish | Selected?
A1 | 1 | 4 | ✓ (first)
A2 | 3 | 5 | ✗ (starts at 3, A1 ends at 4 — overlap)
A3 | 0 | 6 | ✗ (starts at 0, A1 ends at 4 — overlap)
A4 | 5 | 7 | ✓ (starts at 5 ≥ 4)
A5 | 8 | 9 | ✓ (starts at 8 ≥ 7)

Time: O(n log n) for sorting + O(n) scan = O(n log n) total.
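
A Python sketch of the greedy rule (sort by finish time, accept if compatible), run on the worked example above:

def activity_selection(activities):
    """activities: list of (start, finish). Returns a max compatible subset."""
    selected, last_finish = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):  # by finish time
        if start >= last_finish:       # compatible with the last accepted activity
            selected.append((start, finish))
            last_finish = finish
    return selected

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (8, 9)]   # A1..A5 from the table
print(activity_selection(acts))   # [(1, 4), (5, 7), (8, 9)] = A1, A4, A5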

From your paper Q7: Activity selection selects the MAXIMUM size subset of MUTUALLY COMPATIBLE activities. Answer: B.

3.3 · Minimum Number of Platforms

Given arrival and departure times of trains, find the minimum number of platforms needed so no train waits.

Algorithm
1
Sort arrival array and departure array separately.
2
Use two pointers i (arrivals) and j (departures), starting at 0.
3
If arrival[i] ≤ departure[j]: a new train arrives before one departs → need one more platform. platforms++. i++.
4
Else: a train departs → free up a platform. platforms--. j++.
5
Track maximum platforms needed at any point.

Time: O(n log n)

Example: Arrivals [900, 940, 950, 1100, 1500, 1800], Departures [910, 1200, 1120, 1130, 1900, 2000]

After sorting departures: [910, 1120, 1130, 1200, 1900, 2000]. Trace: max platforms needed = 3.
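
A Python sketch of the two-pointer sweep on the example above:

def min_platforms(arrivals, departures):
    """Sweep sorted arrival/departure times with two pointers. O(n log n)."""
    arrivals, departures = sorted(arrivals), sorted(departures)
    platforms = best = i = j = 0
    while i < len(arrivals):
        if arrivals[i] <= departures[j]:   # a train arrives before one leaves
            platforms += 1
            best = max(best, platforms)
            i += 1
        else:                              # a train departs: free a platform
            platforms -= 1
            j += 1
    return best

print(min_platforms([900, 940, 950, 1100, 1500, 1800],
                    [910, 1200, 1120, 1130, 1900, 2000]))   # 3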

3.4 · Coin Change Problem

Given coin denominations and a target amount, find the minimum number of coins to make change.

When Greedy Works

Standard denominations like {1, 5, 10, 25}: always pick the largest coin that fits. Greedy gives optimal answer.

Example: Amount = 41, coins = {25, 10, 5, 1}
Greedy: 25+10+5+1 = 4 coins ✓

When Greedy Fails

Non-standard denominations like {1, 3, 4}: greedy fails.

Amount = 6:
Greedy: 4+1+1 = 3 coins ✗
Optimal: 3+3 = 2 coins ✓

Use Dynamic Programming instead!

From Your Paper Q8 — Coins {1, 3, 4}

For which amount does greedy give optimal?

Amount 6: Greedy → 4+1+1=3 coins. Optimal → 3+3=2 coins. ✗ FAILS
Amount 10: Greedy → 4+4+1+1=4 coins. Optimal → 4+3+3=3 coins. ✗ FAILS
Amount 14: Greedy → 4+4+4+1+1=5 coins. Optimal → 4+4+3+3=4 coins. ✗ FAILS
Amount 20: Greedy → 4+4+4+4+4=5 coins. Optimal → 4+4+4+4+4=5 coins. ✓ OPTIMAL
Answer: D (20)
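
A Python sketch comparing greedy against a minimum-coin DP for the {1, 3, 4} system (function names are mine):

def greedy_coins(coins, amount):
    """Largest-coin-first; optimal only for canonical coin systems."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count

def dp_coins(coins, amount):
    """Minimum coins via DP; always optimal. O(amount * len(coins))."""
    INF = float("inf")
    dp = [0] + [INF] * amount
    for x in range(1, amount + 1):
        dp[x] = min((dp[x - c] + 1 for c in coins if c <= x), default=INF)
    return dp[amount]

for amt in (6, 10, 14, 20):
    print(amt, greedy_coins([1, 3, 4], amt), dp_coins([1, 3, 4], amt))
# 6: 3 vs 2 | 10: 4 vs 3 | 14: 5 vs 4 | 20: 5 vs 5 (greedy optimal only at 20)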

3.5 · Huffman Coding

Assigns variable-length binary codes to characters. Higher-frequency characters get shorter codes. This minimises the total encoded length.

Algorithm using Min-Heap
1
Create one leaf node for each character and insert into a min-heap (key = frequency).
2
Extract two nodes with minimum frequencies, say x and y.
3
Create a new internal node with frequency = x.freq + y.freq. Its children are x and y.
4
Insert new node back into heap.
5
Repeat until heap has one node. That's the root of the Huffman tree.

Time: O(n log n) — n−1 merge steps, each performing two extract-mins and one insert, all O(log n).

Example: Frequencies a=5, b=9, c=12, d=13, e=16, f=45

Step 1: Merge a(5) + b(9) = 14
Step 2: Merge c(12) + d(13) = 25
Step 3: Merge 14 + e(16) = 30
Step 4: Merge 25 + 30 = 55
Step 5: Merge f(45) + 55 = 100 (root)

Tree depth:
f → 0         (1 bit)   depth 1
c → 100       (3 bits)  depth 3
d → 101       (3 bits)  depth 3
a → 1100      (4 bits)  depth 4
b → 1101      (4 bits)  depth 4
e → 111       (3 bits)  depth 3
Lowest frequency = deepest in tree = longest code. Highest frequency = shallowest = shortest code. This is always true in Huffman coding.
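
A compact Python sketch using heapq. The tuple encoding and tiebreaker counter are implementation choices of mine; exact bit patterns depend on tie-breaking, but the code lengths agree with the table above.

import heapq

def huffman_codes(freqs):
    """freqs: dict char -> frequency. Returns dict char -> binary code."""
    heap = [(f, i, ch) for i, (ch, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)    # two minimum-frequency nodes
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))  # merged internal node
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, str):          # leaf: a single character
            codes[tree] = prefix or "0"
        else:                              # internal node: (left, right)
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
    walk(heap[0][2], "")
    return codes

print(huffman_codes({"a": 5, "b": 9, "c": 12, "d": 13, "e": 16, "f": 45}))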

3.6 · Job Sequencing with Deadlines

Given jobs with deadlines and profits, find a sequence that maximises total profit. Each job takes 1 unit of time.

Algorithm
1
Sort jobs by profit in descending order.
2
Maintain a time slot array of size = max deadline.
3
For each job (in profit order): find the latest available slot ≤ job's deadline. If found, assign job to that slot.
4
Jobs assigned = maximum profit schedule.

Example

Job | Deadline | Profit | Assigned to slot (processed in profit order: J1, J3, J4, J2)
J1 | 2 | 100 | Slot 2 ✓
J3 | 2 | 27 | Slot 1 ✓ (slot 2 taken, slot 1 free)
J4 | 1 | 25 | No slot ≤ 1 available ✗
J2 | 1 | 19 | No slot ≤ 1 available ✗

Selected: J1 (profit 100) + J3 (profit 27) = 127 total profit.
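
A Python sketch of the greedy schedule run on this example (helper name is mine):

def job_sequencing(jobs):
    """jobs: list of (name, deadline, profit). Greedy by profit, descending."""
    max_deadline = max(d for _, d, _ in jobs)
    slots = [None] * (max_deadline + 1)       # slots 1..max_deadline
    total = 0
    for name, deadline, profit in sorted(jobs, key=lambda j: -j[2]):
        for t in range(deadline, 0, -1):      # latest free slot <= deadline
            if slots[t] is None:
                slots[t] = name
                total += profit
                break
    return slots[1:], total

jobs = [("J1", 2, 100), ("J2", 1, 19), ("J3", 2, 27), ("J4", 1, 25)]
print(job_sequencing(jobs))   # (['J3', 'J1'], 127): J3 in slot 1, J1 in slot 2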

From your paper Q10: A fruit vendor minimising perishing loss → use Job Sequencing (each fruit has a deadline = perish date, profit = selling price). Answer: B.

3.7 · Fractional Knapsack

Given items with weights and values, and a knapsack of capacity W, maximise total value. You CAN take fractions of items.

Algorithm
1
Calculate value/weight ratio for each item.
2
Sort items by ratio in descending order.
3
Take as much as possible of highest-ratio item. If it fits completely, take all. Else take the fraction that fills capacity.
4
Continue until knapsack full.

Time: O(n log n). Greedy is OPTIMAL for fractional knapsack.
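
A Python sketch of the ratio-greedy rule. The capacity-50 instance is the classic textbook example, not from the paper:

def fractional_knapsack(items, capacity):
    """items: list of (weight, value). Sort by value/weight, take greedily."""
    total = 0.0
    for weight, value in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if capacity == 0:
            break
        take = min(weight, capacity)     # whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(10, 60), (20, 100), (30, 120)], 50))   # 240.0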

0/1 Knapsack — Greedy FAILS

In 0/1 knapsack, you must take a whole item or nothing. Greedy fails because taking the highest-ratio item might prevent you from filling the knapsack optimally. Use Dynamic Programming for 0/1 knapsack.

Practice MCQs — Greedy Algorithms
Q7. Activity selection problem selects the ______ size subset of ______.
A Minimum, Mutually Compatible Activities
B Maximum, Mutually Compatible Activities
C Minimum, Mutually Incompatible Activities
D Maximum, Mutually Incompatible Activities
Answer: B · From your previous paper Q7

The goal is to find the MAXIMUM number of activities that can be performed without conflicts. "Mutually compatible" means no two selected activities overlap in time. We want as many as possible — maximum. The word "minimum" would mean we want the fewest activities, which is trivially 0 (just pick nothing). "Incompatible" activities conflict — we never want to select conflicting activities.

Q8. Coins available: {1, 3, 4}. For which sum does the greedy algorithm give an optimal answer?
A 6
B 10
C 14
D 20
Answer: D · From your previous paper Q8

For amount 20: Greedy picks 4+4+4+4+4 = 5 coins. Optimal is also 5 coins (no way to do better — 20/4 = 5 exactly). For 6: Greedy gives 4+1+1=3 coins but optimal is 3+3=2 coins — fails. For 10: Greedy gives 4+4+1+1=4 but optimal is 4+3+3=3 — fails. For 14: Greedy gives 4+4+4+1+1=5 but optimal is 4+4+3+3=4 — fails. Only 20 is a multiple of 4, so greedy works perfectly.

Q9. A fruit vendor wants to minimise loss from perishing fruits. Which approach should be used?
A Activity Selection
B Job Sequencing
C Fractional Knapsack
D Coin Change
Answer: B · From your previous paper Q10

Each fruit is a "job": it has a deadline (when it perishes) and a value/profit (selling price). Job sequencing maximises total profit from selling items before their deadlines. Activity selection maximises number of activities (not profit — wrong objective). Fractional knapsack deals with weight capacity, not time deadlines. Coin change is about making exact change — completely irrelevant.


04

Dynamic Programming

Matrix chain, 0/1 knapsack, LCS, max independent set, Kadane's algorithm

4.1 · General Method

Dynamic Programming (DP) solves problems by breaking them into subproblems, solving each subproblem once, and storing results in a table to avoid repeated computation.

Property 1: Optimal Substructure

An optimal solution to the problem contains optimal solutions to subproblems.

Example: Shortest path A→C through B uses the shortest path A→B and shortest B→C.

Property 2: Overlapping Subproblems

The same subproblems are solved multiple times in a naive recursive solution.

Example: Fibonacci — fib(5) calls fib(4) and fib(3). fib(4) also calls fib(3). fib(3) computed twice!

Approach 1: Memoisation (Top-Down)

Write recursive solution. Add a cache (usually a hash map or array). Before computing, check if result already in cache. If yes, return cached value.

Natural to write. Uses recursion call stack.

Approach 2: Tabulation (Bottom-Up)

Fill a table starting from smallest subproblems up to the full problem. No recursion needed — iterate.

More space-efficient. No stack overflow risk. Usually faster in practice.

Both approaches give the same time complexity. DP trades space (the table) for time (no re-computation). This is the classic space-time trade-off.
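
A minimal Python illustration of both styles on Fibonacci, the classic overlapping-subproblems example from above:

from functools import lru_cache

@lru_cache(maxsize=None)          # top-down: a cache bolted onto the recursion
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n):                   # bottom-up: fill a table smallest-first
    if n < 2:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(fib_memo(30), fib_tab(30))  # 832040 832040, in O(n) instead of O(2^n)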

4.2 · Matrix Chain Multiplication

Given a chain of matrices A₁ × A₂ × ... × Aₙ, find the parenthesisation that minimises the number of scalar multiplications. Matrix multiplication is associative, so different parenthesisations give the same result but very different computation costs.

Cost of Multiplying Two Matrices

Multiplying matrix A (p × q) with matrix B (q × r) requires exactly p × q × r scalar multiplications.

The inner dimensions must match — A's columns = B's rows = q.

Example from your paper (Q9): A(5×10), B(10×4), C(4×20)

Option 1: (A×B)×C
  Step 1: A(5×10) × B(10×4) = 5×10×4 = 200 ops → result: (5×4) matrix
  Step 2: (5×4) × C(4×20) = 5×4×20 = 400 ops
  Total: 200 + 400 = 600 ops ← MINIMUM

Option 2: A×(B×C)
  Step 1: B(10×4) × C(4×20) = 10×4×20 = 800 ops → result: (10×20) matrix
  Step 2: A(5×10) × (10×20) = 5×10×20 = 1000 ops
  Total: 800 + 1000 = 1800 ops ← MAXIMUM
From your paper Q9

The question asks for the maximum number of operations. That is 1800. Answer: C.
The minimum is 600. Knowing both is important.

General DP Formulation

For chain A₁...Aₙ with dimensions p₀×p₁, p₁×p₂, ..., pₙ₋₁×pₙ:

m[i][j] = min over all k from i to j-1 of { m[i][k] + m[k+1][j] + p[i-1]×p[k]×p[j] }

Base case: m[i][i] = 0 (single matrix, no multiplication needed). Fill table diagonally. Time: O(n³), Space: O(n²).
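
A Python sketch of this tabulation, checked against the A, B, C example above:

def matrix_chain(p):
    """p: dimensions; matrix i is p[i-1] x p[i]. Returns min scalar mults."""
    n = len(p) - 1                     # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):     # chain length: fills the table diagonally
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]

print(matrix_chain([5, 10, 4, 20]))   # 600, the minimum for A(5x10) B(10x4) C(4x20)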

4.3 · 0/1 Knapsack Problem

Given n items each with weight wᵢ and value vᵢ, and a knapsack of capacity W. Either take a whole item (1) or skip it (0). Maximise total value without exceeding W.

DP Recurrence

dp[i][w] = maximum value using items 1..i with capacity w.

If wᵢ > w: dp[i][w] = dp[i−1][w] (can't fit item i)
Else: dp[i][w] = max(dp[i−1][w], vᵢ + dp[i−1][w−wᵢ])

Time: O(nW), Space: O(nW). Called "pseudopolynomial" because the running time is polynomial in the numeric value of W but exponential in the number of bits needed to encode W (the true measure of input size).

Worked Example: Capacity W=5, Items: (w=2,v=3), (w=3,v=4), (w=4,v=5)

Item \ W | 0 | 1 | 2 | 3 | 4 | 5
0 (none) | 0 | 0 | 0 | 0 | 0 | 0
1 (w=2,v=3) | 0 | 0 | 3 | 3 | 3 | 3
2 (w=3,v=4) | 0 | 0 | 3 | 4 | 4 | 7
3 (w=4,v=5) | 0 | 0 | 3 | 4 | 5 | 7

Maximum value = 7 (take item 1: value 3 + item 2: value 4, total weight = 2+3 = 5 ≤ W).
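
A Python sketch of the table-filling recurrence, reproducing the worked example:

def knapsack_01(weights, values, W):
    """Classic 0/1 knapsack DP table. O(n*W) time and space."""
    n = len(weights)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            dp[i][w] = dp[i - 1][w]               # skip item i
            if weights[i - 1] <= w:               # or take it, if it fits
                dp[i][w] = max(dp[i][w],
                               values[i - 1] + dp[i - 1][w - weights[i - 1]])
    return dp[n][W]

print(knapsack_01([2, 3, 4], [3, 4, 5], 5))   # 7 (items 1 and 2)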

4.4 · Longest Common Subsequence (LCS)

A subsequence is obtained by deleting zero or more elements from a sequence WITHOUT changing the order. LCS finds the longest subsequence common to two sequences.

Example: LCS of "ABCBDAB" and "BDCAB" is "BCAB" — length 4. Note: a subsequence does NOT need to be contiguous.

DP Recurrence

Let X = x₁x₂...xₘ, Y = y₁y₂...yₙ. Define c[i][j] = LCS length of X[1..i] and Y[1..j].

If i=0 or j=0: c[i][j] = 0
If X[i] == Y[j]: c[i][j] = c[i−1][j−1] + 1
Else: c[i][j] = max(c[i−1][j], c[i][j−1])

Time: O(mn). Answer = c[m][n].

Worked trace: X = "QRSTUQRST", Y = "QSBUQCSRSQT" (from your paper Q6)

Fill the table row by row. When characters match, take diagonal+1. Else take max(left, above). Final answer at bottom-right = 7. Answer: C in your paper.

Simple trace: X = "ABCD", Y = "ACDF"

""ACDF
""00000
A01111
B01111
C01222
D01233

LCS = 3 (ACD). To recover the actual subsequence, backtrack from bottom-right: where c[i][j] = c[i−1][j−1]+1 (diagonal), that character is in LCS.
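
A Python sketch of the recurrence, reproducing both traces above:

def lcs_length(X, Y):
    """Length of the longest common subsequence of X and Y. O(mn)."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1            # match: diagonal + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])  # max(above, left)
    return c[m][n]

print(lcs_length("ABCD", "ACDF"))                # 3 ("ACD")
print(lcs_length("QRSTUQRST", "QSBUQCSRSQT"))    # 7, matching Q6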

4.5 · Maximum Independent Set in a Tree

An independent set is a set of vertices where no two vertices are adjacent (connected by an edge). Find the maximum-size independent set in a tree.

DP on Tree

For each vertex v, compute two values:

dp[v][1] = max independent set size in subtree of v, including v
dp[v][0] = max independent set size in subtree of v, excluding v

If v is included, none of its children can be included:
dp[v][1] = 1 + Σ dp[child][0]

If v is excluded, each child can be included or excluded:
dp[v][0] = Σ max(dp[child][0], dp[child][1])

Answer = max(dp[root][0], dp[root][1]). Time: O(n)
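
A Python sketch of this tree DP (the example tree is hypothetical, chosen only to exercise both cases):

def max_independent_set(tree, root):
    """tree: dict node -> list of children. Returns the MIS size. O(n)."""
    def solve(v):
        include, exclude = 1, 0            # dp[v][1], dp[v][0]
        for child in tree.get(v, []):
            c_inc, c_exc = solve(child)
            include += c_exc               # v taken: children must be excluded
            exclude += max(c_inc, c_exc)   # v skipped: children choose freely
        return include, exclude
    return max(solve(root))

tree = {1: [2, 3], 2: [4, 5]}              # 1 has children 2, 3; 2 has 4, 5
print(max_independent_set(tree, 1))        # 3, e.g. {3, 4, 5}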

4.6 · Maximum Sum Subarray — Kadane's Algorithm

Find the contiguous subarray with the largest sum. This is the DP approach — O(n) time, O(1) space.

Kadane(A, n):
    maxSoFar = A[0]
    maxEndingHere = A[0]

    for i = 1 to n-1:
        // Either extend previous subarray, or start fresh from A[i]
        maxEndingHere = max(A[i], maxEndingHere + A[i])
        maxSoFar = max(maxSoFar, maxEndingHere)

    return maxSoFar

Trace on [-2, 1, -3, 4, -1, 2, 1, -5, 4]

i | A[i] | maxEndingHere | maxSoFar
0 | -2 | -2 | -2
1 | 1 | max(1, -2+1) = 1 | 1
2 | -3 | max(-3, 1-3) = -2 | 1
3 | 4 | max(4, -2+4) = 4 | 4
4 | -1 | max(-1, 4-1) = 3 | 4
5 | 2 | max(2, 3+2) = 5 | 5
6 | 1 | max(1, 5+1) = 6 | 6
7 | -5 | max(-5, 6-5) = 1 | 6
8 | 4 | max(4, 1+4) = 5 | 6

Maximum sum = 6, from subarray [4, -1, 2, 1].

Practice MCQs — Dynamic Programming
Q10. Matrices A(5×10), B(10×4), C(4×20). Maximum number of arithmetic operations needed to compute A×B×C is:
A 400
B 800
C 1800
D 2000
Answer: C · From your previous paper Q9

(A×B)×C: 5×10×4 = 200, then 5×4×20 = 400, total = 600 ops (minimum). A×(B×C): 10×4×20 = 800, then 5×10×20 = 1000, total = 1800 ops (maximum). The question asks for MAXIMUM, which is 1800. D (2000) is a trap — 2000 would require a 5×10×20 multiplication of matrices without the intermediate matrix chain, which is not how matrix multiplication works here.

Q11. LCS of "QRSTUQRST" and "QSBUQCSRSQT" has length:
A 5
B 6
C 7
D 9
Answer: C · From your previous paper Q6

Fill the LCS DP table for the two strings. One valid LCS is "QSUQRST" — 7 characters long. The LCS cannot be 9 (that's the length of the shorter string, impossible unless one is a subsequence of the other). Answer confirmed as 7 by the paper's answer key.

Q12. Which problem CANNOT be solved optimally by a greedy algorithm?
A Fractional Knapsack
B Activity Selection
C 0/1 Knapsack
D Huffman Coding
Answer: C

0/1 Knapsack requires Dynamic Programming. Greedy fails because taking the highest value/weight ratio item might prevent us from filling the knapsack efficiently. Example: capacity=10, items (w=6,v=10), (w=5,v=7), (w=5,v=7). Greedy picks item 1 (highest ratio, 10/6 ≈ 1.67) for total value = 10, and neither remaining item fits. But optimal is items 2+3 — total value = 14. All other options (A, B, D) have proven greedy optimal solutions.


05

Backtracking

N-queens problem, graph coloring, constraint satisfaction puzzles

5.1 · Introduction to Backtracking

"Backtracking is a systematic way to search through all possible configurations of a search space. If a partial solution cannot possibly lead to a complete solution, abandon it (prune) and try a different path."
The Template
Backtrack(state):
    if state is a complete solution:
        record solution
        return
    for each choice in validChoices(state):
        makeChoice(state, choice)
        if promising(state):    // pruning condition
            Backtrack(state)
        undoChoice(state, choice)  // backtrack!
Feature | Backtracking | Brute Force
Approach | Build solution incrementally; abandon early | Try every possible combination
Pruning | Yes — abandons invalid partial solutions early | No — checks all combinations
Efficiency | Much better in practice | Always worst case
State space | Implicit tree (explored selectively) | Entire space explored
5.2 · N-Queens Problem

Place N queens on an N×N chessboard such that no two queens attack each other. Queens attack along rows, columns, and diagonals.

Safety Check for placement at (row, col)

A placement is safe if NO previously placed queen is in:

The same column: col == prev_col
The same diagonal: |row − prev_row| == |col − prev_col|

We place one queen per row (top to bottom), so same-row conflicts are impossible by construction.

4-Queens: All Solutions

Solution 1:         Solution 2:
. Q . .             . . Q .
. . . Q             Q . . .
Q . . .             . . . Q
. . Q .             . Q . .

There are exactly 2 solutions for 4-queens. (8-queens has 92 solutions.)

N | Number of solutions
1 | 1
4 | 2
5 | 10
6 | 4
8 | 92
N=2 and N=3 have ZERO solutions — no arrangement of 2 or 3 queens on a 2×2 or 3×3 board avoids attacks.
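
A Python sketch that counts solutions with the row-by-row backtracking described above (the set-based diagonal bookkeeping is an implementation choice of mine):

def count_n_queens(n):
    """Place one queen per row; backtrack on column/diagonal conflicts."""
    cols, diag1, diag2 = set(), set(), set()   # r - c and r + c name the diagonals
    def place(row):
        if row == n:
            return 1                           # all n queens placed: one solution
        count = 0
        for col in range(n):
            if col in cols or row - col in diag1 or row + col in diag2:
                continue                       # attacked: prune this branch
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            count += place(row + 1)
            cols.discard(col); diag1.discard(row - col); diag2.discard(row + col)
        return count
    return place(0)

for n in (1, 2, 3, 4, 5, 6, 8):
    print(n, count_n_queens(n))   # 1 0 0 2 10 4 92, matching the table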

5.3 · Graph Coloring

Assign colors to vertices of a graph such that no two adjacent vertices share the same color. The minimum number of colors needed is the chromatic number χ(G).

Key Chromatic Number Facts
Complete graph Kₙ: χ = n (each vertex adjacent to all others)
Bipartite graph: χ = 2 (vertices split into 2 groups, edges only between groups)
Cycle of EVEN length (C₄, C₆, ...): χ = 2 (can 2-colour alternately)
Cycle of ODD length (C₃, C₅, C₇, ...): χ = 3 (odd cycles need 3 colours)
Tree: χ = 2 (all trees are bipartite)
Four Color Theorem: Any planar graph can be coloured with ≤ 4 colors

Why odd cycles need 3 colors — C₅ example

Vertices: A-B-C-D-E-A (pentagon)
Try 2 colors:
A=Red, B=Blue, C=Red, D=Blue, E=?
E is adjacent to D(Blue) and A(Red)
→ E needs a THIRD color!
∴ χ(C₅) = 3
Backtracking Algorithm for Graph Coloring
1
Try assigning color 1 to vertex 1.
2
For the next vertex, try the lowest-numbered color not used by any neighbour.
3
If no valid color exists within k colors → backtrack to previous vertex and try next color.
4
If all vertices are colored → solution found.
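
A Python sketch of this backtracking colouring, tested on the C₅ pentagon from above:

def graph_coloring(adj, k):
    """adj: dict vertex -> neighbours. Returns a k-colouring dict, or None."""
    vertices = list(adj)
    colour = {}
    def assign(i):
        if i == len(vertices):
            return True                        # every vertex coloured
        v = vertices[i]
        for c in range(k):                     # try the lowest colour first
            if all(colour.get(u) != c for u in adj[v]):
                colour[v] = c
                if assign(i + 1):
                    return True
                del colour[v]                  # backtrack
        return False                           # no colour fits within k
    return colour if assign(0) else None

c5 = {"A": "BE", "B": "AC", "C": "BD", "D": "CE", "E": "DA"}   # pentagon
print(graph_coloring(c5, 2))   # None: an odd cycle is not 2-colourable
print(graph_coloring(c5, 3))   # a valid 3-colouring
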
Practice MCQs — Backtracking
Q13. How many solutions exist for the 4-queens problem?
A 1
B 2
C 4
D 8
Answer: B

Exactly 2 distinct solutions exist for placing 4 queens on a 4×4 board with no attacks. The two solutions are mirror images of each other across the board's vertical axis, with queens at columns [2,4,1,3] and [3,1,4,2] (1-indexed). A common wrong answer is 8; that comes from imagining extra rotated or reflected variants, but for N=4 the total count is exactly 2.

Q14. A graph G is a cycle with 7 vertices (C₇). What is its chromatic number?
A 2
B 3
C 4
D 7
Answer: B

C₇ is an odd cycle (7 is odd). ALL odd cycles require exactly 3 colors — no fewer, no more. Proof: In a cycle, you alternate colors. With 2 colors: A-B-A-B-A-B-A. But A(vertex 7) is adjacent to B(vertex 6) AND A(vertex 1) — conflict! So 2 colors impossible. With 3: A-B-A-B-A-B-C — works. χ(C₇) = 3. Even cycles (C₄, C₆, C₈) need only 2 colors.

Q15. In N-queens backtracking, a queen placed at (r, c) conflicts with a queen at (r₁, c₁) if:
A c == c₁ OR |r − r₁| == |c − c₁|
B r == r₁ OR c == c₁
C |r − r₁| == |c − c₁| only
D r + c == r₁ + c₁
Answer: A

Since we place one queen per row (rows are all different by construction), we only need to check: (1) Same column: c == c₁, and (2) Same diagonal: the diagonal condition |r−r₁| == |c−c₁| — the row and column differences are equal in magnitude for diagonal attacks. Option B would check same row too, but that's already guaranteed impossible by our placement strategy. Option D (r+c == r₁+c₁) only catches one diagonal direction.


06

Graphs & Algorithms

Euler/Hamiltonian paths, topological sort, SCC, Ford-Fulkerson max flow

6.1 · Graph Basics

A graph G = (V, E) consists of vertices V and edges E. Understanding graph representations is essential.

Adjacency Matrix

Matrix A where A[i][j] = 1 if edge (i,j) exists, else 0.

Degree of vertex i = sum of row i (for undirected graphs).
Space: O(V²). Edge lookup: O(1). Enumerate neighbours: O(V).

Adjacency List

Array of linked lists. List[i] contains all neighbours of vertex i.

Space: O(V+E). Edge lookup: O(degree). Enumerate neighbours: O(degree). Better for sparse graphs.

Reading degree from adjacency matrix (critical for Euler circuits)

In the adjacency matrix, the degree of vertex i = sum of row i (for undirected graphs, where A is symmetric).

6.2 · Euler Graphs — Most Tested in Your Exam

"An Euler Circuit is a closed walk that visits every EDGE of the graph exactly once and returns to the starting vertex."
THE Most Important Condition — Memorise This

Euler Circuit exists if and only if:

The graph is connected (every vertex reachable from every other)
Every vertex has EVEN degree


Euler Path (not circuit) exists if and only if:

Exactly 2 vertices have ODD degree (path starts at one and ends at the other)
The graph is connected

From your paper Q4 — Reading an adjacency matrix

Given matrix for graph with vertices A, B, C, D, E, F:

     A  B  C  D  E  F
A  [ 0  1  0  1  0  0 ]  → degree = 1+1 = 2 (even)
B  [ 1  0  1  0  0  0 ]  → degree = 1+1 = 2 (even)
C  [ 0  1  0  1  0  1 ]  → degree = 1+1+1 = 3 (ODD)
D  [ 1  0  1  0  1  0 ]  → degree = 1+1+1 = 3 (ODD)
E  [ 0  0  0  1  0  1 ]  → degree = 1+1 = 2 (even)
F  [ 0  0  1  0  1  0 ]  → degree = 1+1 = 2 (even)

Vertices C and D have odd degree → No Euler Circuit. Answer: D from your paper.
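
A Python sketch of the degree check on this very matrix (assumes the graph is connected, as the Euler conditions require):

def euler_status(matrix):
    """Classify a connected undirected graph from its adjacency matrix."""
    odd = sum(1 for row in matrix if sum(row) % 2 == 1)   # degree = row sum
    if odd == 0:
        return "Euler circuit exists"
    if odd == 2:
        return "Euler path exists (no circuit)"
    return "Neither"

fig = [[0, 1, 0, 1, 0, 0],    # A: degree 2
       [1, 0, 1, 0, 0, 0],    # B: degree 2
       [0, 1, 0, 1, 0, 1],    # C: degree 3 (odd)
       [1, 0, 1, 0, 1, 0],    # D: degree 3 (odd)
       [0, 0, 0, 1, 0, 1],    # E: degree 2
       [0, 0, 1, 0, 1, 0]]    # F: degree 2
print(euler_status(fig))      # Euler path exists (no circuit)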

Hierholzer's Algorithm — Find Euler Circuit

Algorithm Steps
1
Start at any vertex. Follow edges (deleting used edges) until you return to the start — this forms an initial circuit.
2
Find any vertex on the current circuit that still has unused edges.
3
Start a new sub-circuit from that vertex.
4
Splice the sub-circuit into the main circuit at the shared vertex.
5
Repeat until all edges used.

Critical: the initial circuit chosen MUST be a valid closed walk. A path that doesn't return to its start is INVALID as an initial circuit.

From your paper Q5 — Valid vs Invalid Initial Circuits

For the graph in Fig.2 with vertices R, S, T, U, V, W, X, Y, Z, the circuit that CANNOT be chosen initially by Hierholzer's algorithm is one that is not a valid closed walk over edges that actually exist in the graph. Option B (YXZYUVWUY) fails this test — it does not form a proper closed circuit using only the graph's edges — so it cannot serve as the starting circuit. Answer: B.

6.3 · Hamiltonian Paths & Circuits

"A Hamiltonian Circuit visits every VERTEX exactly once and returns to the start. A Hamiltonian Path visits every vertex exactly once but doesn't need to return."
Key Difference from Euler

Euler = every EDGE once.
Hamiltonian = every VERTEX once.

There is NO simple necessary-and-sufficient condition for Hamiltonian circuits (unlike Euler). Determining if one exists is NP-complete.

Dirac's Sufficient Condition

If every vertex has degree ≥ n/2 (where n = number of vertices, n ≥ 3), a Hamiltonian circuit exists.

This is a SUFFICIENT condition, not necessary — Hamiltonian circuits can exist even if Dirac's condition fails.

6.4 · Topological Sort

A topological ordering of a DAG (Directed Acyclic Graph) is a linear ordering of its vertices such that for every directed edge u → v, vertex u comes before v.

Topological sort is only defined for DAGs. If the graph has a cycle, no topological ordering exists (cycle creates contradiction: A before B before C before A?)
DFS-Based Algorithm (Tarjan)
1
Run DFS on graph.
2
When a vertex FINISHES (all its DFS subtree explored), push it to a stack.
3
Pop all from stack → topological order.

Time: O(V+E)

Kahn's BFS Algorithm
1
Compute in-degree for all vertices.
2
Add all zero in-degree vertices to queue.
3
Dequeue vertex, add to result, reduce in-degree of its neighbours by 1. If a neighbour hits 0 in-degree, enqueue it.
4
Repeat until queue empty.

Time: O(V+E). If result has fewer than V vertices → graph has a cycle.
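
A Python sketch of Kahn's algorithm, including its built-in cycle detection (the sample DAG is illustrative):

from collections import deque

def topological_sort(adj):
    """Kahn's algorithm. adj: dict vertex -> list of successors. O(V+E)."""
    indeg = {v: 0 for v in adj}
    for v in adj:
        for u in adj[v]:
            indeg[u] += 1
    queue = deque(v for v in adj if indeg[v] == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for u in adj[v]:
            indeg[u] -= 1
            if indeg[u] == 0:
                queue.append(u)
    if len(order) < len(adj):                 # fewer than V output: cycle
        raise ValueError("graph has a cycle")
    return order

dag = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(topological_sort(dag))   # ['A', 'B', 'C', 'D']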

Applications of Topological Sort

Build systems: compile files in correct dependency order
Course prerequisites: which course must be taken first
Task scheduling with dependencies

6.5 · Strongly Connected Components (SCC)

"A Strongly Connected Component is a maximal set of vertices such that every vertex in the set is reachable from every other vertex in the set."
Kosaraju's Algorithm — O(V+E)
1
First DFS: Run DFS on original graph. Push vertices to stack in order of finish time.
2
Transpose: Reverse all edge directions → get G^T.
3
Second DFS: While stack is non-empty, pop vertex. If unvisited in G^T, run DFS from it in G^T. All vertices visited in this DFS = one SCC.

Intuition: vertices in the same SCC reach each other in both G and G^T. Processing vertices in decreasing order of finish time ensures each DFS in the second pass stays inside exactly one SCC.

Tarjan's algorithm finds SCCs in a single DFS pass using "low-link" values. Both Kosaraju's and Tarjan's run in O(V+E) time.

6.6 · Max Flow — Ford-Fulkerson Algorithm

In a flow network, each edge has a capacity. Find the maximum flow from source s to sink t.

Key Concepts
Residual graph: After sending flow f along edge (u,v) with capacity c, residual capacity = c−f on forward edge, and f on backward edge (can cancel flow)
Augmenting path: A path from s to t in the residual graph with positive residual capacity on every edge
Bottleneck: The minimum residual capacity along an augmenting path — this is how much flow we can push
Ford-Fulkerson Algorithm
1
Start with zero flow on all edges.
2
Find an augmenting path from s to t in the residual graph (using DFS or BFS).
3
Find the bottleneck = min capacity along this path.
4
Update: add bottleneck to forward edges, subtract from backward edges in residual graph.
5
Repeat until no augmenting path exists.

Time: O(E × max_flow) for DFS-based. O(VE²) for Edmonds-Karp (BFS-based).
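
A Python sketch of the BFS-based variant (Edmonds-Karp). The small network below is hypothetical, chosen so the answer matches Q17's flow of 15:

from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: Ford-Fulkerson with BFS augmenting paths. O(V E^2).
    cap: dict u -> {v: capacity}; missing edges are treated as 0."""
    res = {u: dict(edges) for u, edges in cap.items()}   # residual capacities
    for u in cap:
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)       # zero-capacity back edges
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:                 # BFS for an augmenting path
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow                                  # no augmenting path left
        bottleneck, v = float("inf"), t                  # min residual on the path
        while parent[v] is not None:
            bottleneck = min(bottleneck, res[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:                     # update the residual graph
            res[parent[v]][v] -= bottleneck
            res[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

net = {"s": {"a": 10, "b": 5}, "a": {"b": 4, "t": 8}, "b": {"t": 10}, "t": {}}
print(max_flow(net, "s", "t"))   # 15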

Max-Flow Min-Cut Theorem — Always Tested

The maximum value of flow from s to t equals the minimum cut capacity separating s from t.

A cut is a partition of vertices into two sets S (containing s) and T (containing t). The cut capacity = sum of capacities of edges going from S to T.

This theorem means: if you find max flow = 10, then min cut = 10 as well. Always equal.

Practice MCQs — Graphs & Algorithms
Q16. An undirected graph has vertices with degrees 3, 2, 3, 4, 2, 2. Choose the correct statement about Euler circuits.
A Euler circuit exists since sum of degrees = 16 (even)
B No Euler circuit since two vertices have odd degree
C Euler circuit exists since average degree > 2
D No Euler circuit since the graph is not complete
Answer: B · Same concept as Q4 in your paper

Two vertices have degree 3 (odd). For an Euler circuit, ALL vertices must have even degree — no exceptions. The sum of degrees being even is always true by handshaking lemma (every edge contributes 2 to the sum), so option A's reasoning is wrong. Being complete or having high average degree is irrelevant. Exactly 2 odd-degree vertices means an Euler PATH (not circuit) exists between those two vertices.

Q17. In Ford-Fulkerson, the maximum flow found is 15. What is the minimum cut capacity?
A Less than 15
B Greater than 15
C Exactly 15
D Cannot be determined without knowing the graph
Answer: C

The Max-Flow Min-Cut Theorem states that the maximum flow value equals the capacity of the minimum cut — always, exactly. This is not an approximation. If max flow = 15, then min cut = 15. No further information about the graph is needed. This theorem is one of the most fundamental results in combinatorial optimisation.

Q18. Which statement correctly distinguishes Euler and Hamiltonian paths?
A Euler path visits every vertex once; Hamiltonian visits every edge once
B Euler path visits every edge once; Hamiltonian visits every vertex once
C Both visit every vertex and every edge exactly once
D Hamiltonian circuit existence can be determined in polynomial time
Answer: B

Euler path/circuit: every EDGE visited exactly once (vertices may be revisited). Hamiltonian path/circuit: every VERTEX visited exactly once (some edges may not be used). Option D is false — determining whether a Hamiltonian circuit exists is NP-complete, meaning no known polynomial-time algorithm exists. In contrast, Euler circuit existence is checkable in O(V+E) by checking degree parity.

Q19. Topological sort can be applied to which type of graph?
A Any undirected graph
B Any directed graph
C Directed Acyclic Graph (DAG) only
D Complete graph only
Answer: C

Topological sort requires the graph to be directed (otherwise "before/after" relationships don't exist) AND acyclic (a cycle creates a contradiction: A must come before B, B before C, C before A — impossible). Kahn's algorithm detects cycles: if fewer than V vertices are output, a cycle exists. A directed graph that contains a cycle cannot be topologically sorted.


Last-Hour Cheat Sheet

Everything in one place — review this right before the exam

Topic | Core fact | Common trap
Master Theorem | Compare f(n) with n^(log_b a). Equal → multiply by log n. | Forgetting it doesn't apply to T(n)=T(n−1)+... form
Merge sort | Always Θ(n log n). Stable. O(n) space. | Confusing with quicksort worst case
Quick sort | Avg Θ(n log n), Worst Θ(n²). In-place. Not stable. | Forgetting worst case is sorted input with bad pivot
Sorted-array pair > x | O(1) — just check the two largest elements. | Thinking it's O(n) or O(log n)
Activity selection | Sort by FINISH time. Select earliest-finishing compatible. | Sorting by start time or duration
Coin change | Greedy fails for {1,3,4}. Amount 6: optimal = 2 coins, greedy = 3. | Assuming greedy always works
Huffman coding | Lowest freq → deepest node → longest code. | Confusing which freq gets which code length
Job sequencing | Sort by PROFIT desc. Assign to latest available slot ≤ deadline. | Confusing with activity selection (no profit there)
Matrix chain max | A(5×10)B(10×4)C(4×20): max = 1800 [A×(B×C)], min = 600. | Picking min instead of max (question asked for max)
LCS | If chars match → diagonal+1. Else → max(left, above). | Confusing LCS (non-contiguous) with Longest Common Substring (contiguous)
Kadane's | maxEndHere = max(A[i], maxEndHere + A[i]). O(n). | Forgetting to start fresh when the running sum goes negative
Euler Circuit | ALL vertices must have EVEN degree. Check row sums of adj. matrix. | Thinking sum of degrees even → Euler (that's always true — not the condition)
N-Queens | N=4 → 2 solutions. N=8 → 92. No solution for N=2 or N=3. | Confusing solution counts across different N values
Max-flow min-cut | Max flow = min cut. Always exactly equal. | Thinking it's an approximation or inequality
Odd cycles | C₃, C₅, C₇, ... need 3 colors. Even cycles need 2. | Applying the bipartite rule to odd cycles
Topological sort | Only for DAGs. Not for undirected or cyclic graphs. | Applying it to graphs with cycles