C Notes Library

Data Structures & Algorithms

Clear explanations with practical C snippets and system-level context.

Core Topics: 9 | Language: C | Use Case: Interview + Fundamentals

Comprehensive Concept Notes

This section summarizes the complete unit in one place: what each concept means, why it is used, and what complexity trade-offs to remember for exams and assignments.

Complexity and Performance Basics

  • Big O: Upper bound on growth rate; in practice most often quoted for the worst case
  • Big Theta: Tight bound; growth is exactly this order
  • Big Omega: Lower bound on growth rate; in practice most often quoted for the best case
  • Typical growth ranking: O(1) < O(log n) < O(n) < O(n log n) < O(n^2)
  • In real systems, complexity and constants both matter. For small inputs, a theoretically slower algorithm can still be faster.

Linear vs Non-Linear Structures

Linear structures arrange elements in sequence (arrays, linked lists, stacks, queues). Non-linear structures represent hierarchy or networks (trees, graphs).

  • Linear: Easier traversal logic, often simpler to implement.
  • Non-linear: Better for hierarchical relationships and many-to-many connections.

Arrays

  • Contiguous memory gives constant-time index access: O(1).
  • Insertion/deletion in the middle requires shifting: O(n).
  • Best when size is known and fast random access is needed.

Linked Lists

  • Nodes are not contiguous; each node stores data and next pointer.
  • Insertion/deletion at a known node is efficient: O(1) pointer updates.
  • Search is sequential: O(n).
  • Best when frequent structural updates matter more than direct indexing.

Stacks and Queues

  • Stack: LIFO, push/pop on top, O(1).
  • Use stack for recursion frames, expression evaluation, undo/redo.
  • Queue: FIFO, enqueue at rear, dequeue at front.
  • Use queue for scheduling, buffering, and breadth-first traversal.

Trees and BST

  • Trees model hierarchical data such as folder structures, organization charts, and syntax trees.
  • Traversals: preorder (root-left-right), inorder (left-root-right), postorder (left-right-root).
  • BST rule enables faster search when balanced: average O(log n).
  • Skewed BST behaves like a list, causing worst-case O(n).

Hash Tables

  • Hash function maps key to index. Collisions are unavoidable and must be handled.
  • Chaining stores collided entries in a linked list at the same index.
  • Average insert/search/delete is O(1) with good hash distribution.
  • Poor distribution or high load factor can degrade to O(n).

Graphs, BFS, and DFS

  • Graphs represent entities and relationships (maps, social links, routing).
  • BFS: Level-order exploration using a queue, useful for shortest path in unweighted graphs.
  • DFS: Depth-first exploration using recursion or a stack, useful for connectivity and cycle detection.
  • Traversal complexity is O(V + E) with adjacency lists.

Searching Algorithms

  • Linear Search: Works on unsorted data, O(n).
  • Binary Search: Requires sorted data, O(log n), repeatedly halves the search interval.

Sorting Algorithms

  • Bubble Sort: Easy to understand, many swaps; O(n^2) in general, O(n) best case when an early-exit swap check is added.
  • Selection Sort: Fewer swaps, still O(n^2), not stable in standard form.
  • Insertion Sort: Efficient for nearly sorted and small arrays; best case O(n).
  • Merge Sort: Stable O(n log n), needs extra memory.
  • Quick Sort: Fast in practice; average O(n log n), worst-case O(n^2) with poor pivot choices.

Dynamic Programming vs Greedy

  • Greedy: Makes locally best choices now; may fail to produce global optimum.
  • Dynamic Programming: Solves overlapping subproblems and stores results to guarantee optimal solutions for suitable problems.
  • Coin change is a classic case where greedy can be non-optimal, while DP finds the best answer.

Exam-Focused Selection Guide

  • Need direct indexed access: choose arrays.
  • Need frequent middle insert/delete: choose linked lists.
  • Need undo/backtracking: choose stack.
  • Need fairness/first-come service: choose queue.
  • Need hierarchical representation and ordered lookup: choose BST or balanced tree.
  • Need very fast key-value access: choose hash table (with careful hashing).
  • Need relationship modeling: choose graph.

Data Structure Classification

Data structures are classified based on how data is organized in memory and how elements relate to each other.

1. Linear Data Structures

Linear data structures store data elements sequentially. Each element is connected to its previous and next element.

a) Static Linear Data Structures

Static data structures have a fixed size defined at compile time. Memory allocation cannot be changed during execution.

  • Arrays

Why used:

  • Fast random access using index
  • Efficient memory usage for fixed-size data

Applications:

  • Image processing (pixel matrices)
  • Storing fixed sensor readings
  • Lookup tables

b) Dynamic Linear Data Structures

Dynamic structures can grow or shrink during program execution. Memory is allocated at runtime.

  • Linked Lists
  • Stacks
  • Queues

Why used:

  • Efficient insertion and deletion
  • No memory wasted on unused, pre-allocated capacity

Applications:

  • Undo/Redo systems (Stacks)
  • CPU scheduling (Queues)
  • Dynamic memory management

2. Non-Linear Data Structures

Non-linear data structures store data in hierarchical or network form, allowing one-to-many and many-to-many relationships.

a) Trees

Trees consist of nodes connected in a hierarchical manner. Each tree has a root and child nodes.

Why used:

  • Efficient searching and sorting
  • Represents hierarchical data naturally

Applications:

  • File systems
  • Database indexing (B-Trees)
  • DOM structure in web browsers

b) Graphs

Graphs consist of vertices (nodes) connected by edges. They can be directed or undirected.

Why used:

  • Models real-world relationships
  • Supports complex connections

Applications:

  • Social networks
  • Routing algorithms
  • Maps and navigation systems

Applications of Data Structures and Algorithms

Data structures are chosen based on the problem requirements and the algorithm operating on them.

Arrays + Binary Search

Binary search requires a sorted array to work correctly.

Used in:

  • Databases
  • Search engines
  • Operating system resource lookup

Reason:

  • O(log n) search time
  • Predictable memory layout

Stacks + Recursion

Function calls are managed using stacks.

Used in:

  • Program execution
  • Expression evaluation
  • Undo/Redo features

Reason:

  • Last function call must return first

Queues + Scheduling Algorithms

Queues are used where tasks must be processed in order.

Used in:

  • CPU scheduling
  • Printer queues
  • Network packet handling

Reason:

  • Fairness
  • Predictable processing order

How Data Structures and Algorithms Work Within Systems

Data structures and algorithms form the foundation of all computer systems. They determine system performance, scalability, and reliability.

Operating Systems

  • Queues manage process scheduling
  • Stacks manage function calls and interrupts
  • Linked lists track memory blocks

Databases

  • Trees index large datasets
  • Hash tables provide fast lookups
  • Graphs represent relationships

Networking Systems

  • Graphs determine routing paths
  • Queues manage packet transmission

Software Applications

  • Text editors use stacks for undo operations
  • Games use trees for scene management
  • Web browsers use trees for HTML rendering

Efficient data structures combined with suitable algorithms ensure systems are fast, scalable, and resource-efficient.

Arrays

An array stores elements in contiguous memory locations. This allows constant-time access using an index.

Example


#include <stdio.h>

int main() {
    int arr[4] = {10, 20, 30, 40};

    printf("%d\n", arr[0]); // 10
    printf("%d\n", arr[2]); // 30

    return 0;
}
  

Why insertion is expensive

To insert an element in the middle, all following elements must be shifted.
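The shift can be written out directly. This insertAt sketch is illustrative (not a library function) and assumes the caller guarantees spare capacity beyond the current length:

```c
// Insert value at position pos, shifting later elements one slot right.
// Assumes the array has capacity for at least *len + 1 elements.
void insertAt(int arr[], int *len, int pos, int value) {
    for (int i = *len; i > pos; i--)   // the O(n) shift
        arr[i] = arr[i - 1];
    arr[pos] = value;
    (*len)++;
}
```

The loop is exactly why middle insertion costs O(n): every element after pos moves.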

Time Complexity

Operation         Time
Access            O(1)
Search            O(n)
Insert / Delete   O(n)

Linked Lists

A linked list is made of nodes where each node contains data and a pointer to the next node.

Node Definition


#include <stdlib.h>

struct Node {
    int data;
    struct Node* next;
};
  

Create and Link Nodes


struct Node* head = malloc(sizeof(struct Node));   // first node
head->data = 10;
head->next = malloc(sizeof(struct Node));          // second node, linked from the first
head->next->data = 20;
head->next->next = NULL;                           // NULL terminates the list
  

Linked lists trade fast access for fast insertion and deletion.
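Two operations the snippet above leaves implicit are traversal and cleanup: since every node comes from malloc, each must eventually be freed. A minimal sketch:

```c
#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node* next;
};

// Walk the list once and print each value: O(n).
void printList(struct Node* head) {
    for (struct Node* cur = head; cur != NULL; cur = cur->next)
        printf("%d ", cur->data);
    printf("\n");
}

// Free every node; save next before freeing the current one.
void freeList(struct Node* head) {
    while (head != NULL) {
        struct Node* next = head->next;
        free(head);
        head = next;
    }
}
```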

Stacks

A stack follows the LIFO principle (Last In, First Out).

Array-Based Stack


#define MAX 5
int stack[MAX];
int top = -1;                      // -1 marks an empty stack

void push(int value) {
    if (top == MAX - 1) return;    // overflow: value is silently dropped
    stack[++top] = value;
}

int pop() {
    if (top == -1) return -1;      // underflow sentinel (ambiguous if -1 is stored)
    return stack[top--];
}
  

Used in function calls, recursion, undo/redo operations.

Queues

A queue follows the FIFO principle (First In, First Out).

Simple Queue Implementation


#define MAX 5
int queue[MAX];
int front = 0, rear = -1;

void enqueue(int value) {
    if (rear == MAX - 1) return;   // reports "full" even if dequeues freed earlier slots
    queue[++rear] = value;
}

int dequeue() {
    if (front > rear) return -1;   // empty-queue sentinel
    return queue[front++];
}
  

Queues are essential for BFS and scheduling algorithms.
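One limitation of the array queue above is that slots freed by dequeue are never reused. A circular queue fixes that by wrapping indices with modulo; this is a sketch with illustrative names, using a count field to distinguish full from empty:

```c
#define CMAX 5
int cqueue[CMAX];
int cfront = 0, crear = -1, ccount = 0;

// Wrap rear around so freed slots are reused. Returns 0 if full.
int cenqueue(int value) {
    if (ccount == CMAX) return 0;
    crear = (crear + 1) % CMAX;
    cqueue[crear] = value;
    ccount++;
    return 1;
}

// Writes the front element to *out. Returns 0 if empty.
int cdequeue(int *out) {
    if (ccount == 0) return 0;
    *out = cqueue[cfront];
    cfront = (cfront + 1) % CMAX;
    ccount--;
    return 1;
}
```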

Trees

A tree is a hierarchical data structure made of nodes.

Binary Tree Node


struct TreeNode {
    int data;
    struct TreeNode* left;
    struct TreeNode* right;
};
  

Create a Node


struct TreeNode* newNode(int value) {
    struct TreeNode* node = malloc(sizeof(struct TreeNode));
    node->data = value;
    node->left = NULL;
    node->right = NULL;
    return node;
}
  
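As a usage sketch, newNode can assemble the small tree traversed later in these notes; buildSample is an illustrative helper, with the definitions repeated so the snippet stands alone:

```c
#include <stdlib.h>

struct TreeNode {
    int data;
    struct TreeNode* left;
    struct TreeNode* right;
};

struct TreeNode* newNode(int value) {
    struct TreeNode* node = malloc(sizeof(struct TreeNode));
    node->data = value;
    node->left = NULL;
    node->right = NULL;
    return node;
}

// Build:    50
//          /  \
//        30    70
struct TreeNode* buildSample(void) {
    struct TreeNode* root = newNode(50);
    root->left = newNode(30);
    root->right = newNode(70);
    return root;
}
```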

Algorithms

Linear Search

Checks each element from left to right until the target is found.

Input: arr = [7, 2, 9, 4], target = 9

Output: index = 2


int linearSearch(int arr[], int size, int target) {
    for (int i = 0; i < size; i++) {
        if (arr[i] == target)
            return i;
    }
    return -1;
}
  

Time Complexity: O(n)

Binary Search

Works on sorted arrays by repeatedly halving the search space.

Input: arr = [2, 4, 7, 9, 12, 18], target = 12

Output: index = 4


int binarySearch(int arr[], int size, int target) {
    int left = 0, right = size - 1;

    while (left <= right) {
        int mid = left + (right - left) / 2;

        if (arr[mid] == target)
            return mid;
        else if (arr[mid] < target)
            left = mid + 1;
        else
            right = mid - 1;
    }
    return -1;
}
  

Time Complexity: O(log n)

Inorder Traversal (Binary Tree)

Visits nodes in left, root, right order.

Input Tree:


    50
   /  \
 30    70
  

Output: 30 50 70


void inorder(struct TreeNode* root) {
    if (root) {
        inorder(root->left);
        printf("%d ", root->data);
        inorder(root->right);
    }
}
  

Time Complexity: O(n)

Hash Table Search (Chaining)

Finds a key by checking entries in the computed bucket.

Input Operations: insert(1, 100), insert(11, 200), search(11)

Output: Found key 11 with value 200


#define TABLE_SIZE 10

struct Entry {
    int key;
    int value;
    struct Entry* next;            // chain of entries sharing this bucket
};

struct Entry* table[TABLE_SIZE];   // buckets; globals start out NULL

int hash(int key) {
    return key % TABLE_SIZE;
}

struct Entry* searchHash(int key) {
    int h = hash(key);
    struct Entry* e = table[h];

    while (e) {                    // walk the bucket's chain
        if (e->key == key)
            return e;
        e = e->next;
    }
    return NULL;
}
  

Average Time Complexity: O(1)

Worst Case Time Complexity: O(n)
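Insertion under the same chaining scheme can be sketched as below; the struct and table definitions are repeated so the snippet stands alone, and new entries are prepended to their bucket's chain:

```c
#include <stdlib.h>

#define TABLE_SIZE 10

struct Entry {
    int key;
    int value;
    struct Entry* next;
};

struct Entry* table[TABLE_SIZE];   // buckets start NULL

int hash(int key) {
    return key % TABLE_SIZE;
}

// Prepend the new entry to its bucket's chain: O(1).
void insertHash(int key, int value) {
    struct Entry* e = malloc(sizeof(struct Entry));
    e->key = key;
    e->value = value;
    e->next = table[hash(key)];
    table[hash(key)] = e;
}

struct Entry* searchHash(int key) {
    for (struct Entry* e = table[hash(key)]; e; e = e->next)
        if (e->key == key)
            return e;
    return NULL;
}
```

Keys 1 and 11 both hash to bucket 1, so this reproduces the collision from the worked example above.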

Sorting Algorithms

Sorting arranges elements in ascending or descending order to make later operations (like searching) more efficient.

Bubble Sort

Compares adjacent elements and swaps when they are out of order.

Input: arr = [5, 1, 4, 2]

Output: arr = [1, 2, 4, 5]


void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
            }
        }
    }
}
  

Time Complexity: O(n^2)

Selection Sort

Selects the minimum value from the unsorted part and places it in the next sorted position.

Input: arr = [29, 10, 14, 37, 13]

Output: arr = [10, 13, 14, 29, 37]


void selectionSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int minIndex = i;
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[minIndex])
                minIndex = j;
        }

        int temp = arr[i];
        arr[i] = arr[minIndex];
        arr[minIndex] = temp;
    }
}
  

Time Complexity: O(n^2)

Insertion Sort

Builds a sorted left side by inserting each new element into its correct position.

Input: arr = [9, 5, 1, 4, 3]

Output: arr = [1, 3, 4, 5, 9]


void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;

        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}
  

Time Complexity: O(n^2)
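Merge Sort and Quick Sort are mentioned in the summary notes but have no snippet here. A minimal quick sort sketch using the Lomuto partition (last element as pivot, the choice that degrades to O(n^2) on already-sorted input):

```c
// Lomuto partition: pivot is arr[high]; returns the pivot's final index.
static int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;                     // boundary of the "< pivot" region

    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
        }
    }
    int t = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = t;
    return i + 1;
}

// Sort arr[low..high] in place; average O(n log n).
void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);
        quickSort(arr, low, p - 1);
        quickSort(arr, p + 1, high);
    }
}
```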

Unit Completion Pack

Sorting Comparison Matrix

Algorithm        Best        Average     Worst       Stable   In-place
Bubble Sort      O(n)        O(n^2)      O(n^2)      Yes      Yes
Selection Sort   O(n^2)      O(n^2)      O(n^2)      No       Yes
Insertion Sort   O(n)        O(n^2)      O(n^2)      Yes      Yes
Merge Sort       O(n log n)  O(n log n)  O(n log n)  Yes      No
Quick Sort       O(n log n)  O(n log n)  O(n^2)      No       Yes

BFS and DFS (Worked Graph Example)

Adjacency List:


0: 1, 2
1: 0, 3, 4
2: 0, 5
3: 1
4: 1, 5
5: 2, 4
  

Start Vertex: 0

BFS Output: 0 1 2 3 4 5

DFS Output (one valid order): 0 1 3 4 5 2

Complexity: O(V + E) with adjacency list
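The worked example can be coded directly. This BFS sketch uses an adjacency matrix for brevity, so the traversal itself costs O(V^2); the O(V + E) bound quoted above assumes adjacency lists:

```c
#define V 6

// Adjacency matrix encoding the list above.
int adj[V][V] = {
    {0,1,1,0,0,0},   // 0: 1, 2
    {1,0,0,1,1,0},   // 1: 0, 3, 4
    {1,0,0,0,0,1},   // 2: 0, 5
    {0,1,0,0,0,0},   // 3: 1
    {0,1,0,0,0,1},   // 4: 1, 5
    {0,0,1,0,1,0},   // 5: 2, 4
};

// BFS from start; writes the visit order into out, returns the count.
int bfs(int start, int out[V]) {
    int visited[V] = {0};
    int queue[V], front = 0, rear = 0, n = 0;

    queue[rear++] = start;
    visited[start] = 1;                // mark on enqueue, not on dequeue

    while (front < rear) {
        int u = queue[front++];
        out[n++] = u;
        for (int v = 0; v < V; v++) {
            if (adj[u][v] && !visited[v]) {
                visited[v] = 1;
                queue[rear++] = v;
            }
        }
    }
    return n;
}
```

Starting from vertex 0, this produces the level order 0 1 2 3 4 5 quoted above.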

Binary Search Tree (Insert/Search/Delete)

BST rule: left subtree values are smaller, right subtree values are larger.

Insert Sequence: 50, 30, 70, 20, 40, 60, 80

Search Example: search(60) -> found


struct TreeNode* searchBST(struct TreeNode* root, int key) {
    if (root == NULL || root->data == key)
        return root;
    if (key < root->data)
        return searchBST(root->left, key);
    return searchBST(root->right, key);
}
  

Average Insert/Search/Delete: O(log n)

Worst Case (skewed tree): O(n)
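Insertion follows the same descent rule, attaching a new node at the first NULL slot. A recursive sketch, self-contained and with duplicates silently ignored:

```c
#include <stdlib.h>

struct TreeNode {
    int data;
    struct TreeNode* left;
    struct TreeNode* right;
};

// Descend by the BST rule; attach the new node where the search falls off.
struct TreeNode* insertBST(struct TreeNode* root, int key) {
    if (root == NULL) {
        struct TreeNode* node = malloc(sizeof(struct TreeNode));
        node->data = key;
        node->left = node->right = NULL;
        return node;
    }
    if (key < root->data)
        root->left = insertBST(root->left, key);
    else if (key > root->data)
        root->right = insertBST(root->right, key);
    // key == root->data: duplicate, do nothing
    return root;
}
```

Inserting the sequence 50, 30, 70, 20, 40, 60, 80 in order yields the balanced tree used in this section.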

Dynamic Programming vs Greedy (Coin Change)

Problem: Amount = 6, coins = [1, 3, 4], minimum number of coins.

Greedy Result: 4 + 1 + 1 = 3 coins (not optimal)

DP Optimal Result: 3 + 3 = 2 coins


#define INF 1000000   // sentinel: larger than any achievable coin count

int minCoinsDP(int amount, int coins[], int m) {
    int dp[100];                       // note: assumes amount < 100
    dp[0] = 0;                         // zero coins make amount 0

    for (int i = 1; i <= amount; i++)
        dp[i] = INF;                   // unreachable until proven otherwise

    for (int i = 1; i <= amount; i++) {
        for (int j = 0; j < m; j++) {
            if (coins[j] <= i && dp[i - coins[j]] + 1 < dp[i])
                dp[i] = dp[i - coins[j]] + 1;
        }
    }

    return dp[amount];                 // still INF if the amount is unreachable
}
  

Complexity: O(amount * number_of_coins)
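For contrast, the greedy strategy can be sketched as below; minCoinsGreedy is an illustrative name, and the coins array must be sorted in descending order. For amount 6 with coins [4, 3, 1] it returns 3 coins, matching the non-optimal greedy result above:

```c
// Greedy: repeatedly take the largest coin that still fits.
// coins must be sorted descending; the result may be non-optimal.
int minCoinsGreedy(int amount, int coins[], int m) {
    int count = 0;
    int j = 0;
    while (j < m && amount > 0) {
        if (coins[j] <= amount) {
            amount -= coins[j];   // take this coin again if it still fits
            count++;
        } else {
            j++;                  // coin too large, move to the next smaller one
        }
    }
    return amount == 0 ? count : -1;   // -1 if the amount cannot be made
}
```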