Algorithms Interview Questions

Mixed

Sharpen your algorithm skills with questions on sorting, searching, dynamic programming, greedy algorithms, recursion, and complexity analysis.

15 questions
5 easy
6 medium
4 hard

Time complexity measures how the runtime of an algorithm grows relative to the input size, while space complexity measures how the memory usage grows. Both are expressed using Big O notation. An algorithm can optimize one at the expense of the other; for example, memoization trades additional space for reduced time by caching previously computed results.

complexity, basics
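The time-for-space tradeoff mentioned above can be illustrated with a minimal sketch: two ways to detect a duplicate in a list, one optimizing space and the other optimizing time. Function names are illustrative.

```python
def has_duplicate_quadratic(items):
    """O(n^2) time, O(1) extra space: compare every pair."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_linear(items):
    """O(n) time, O(n) extra space: cache what we have already seen."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

The second version trades O(n) additional memory for a linear scan, the same tradeoff memoization makes.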

Quicksort selects a pivot element, partitions the array into elements less than and greater than the pivot, and recursively sorts each partition. It has an average-case time complexity of O(n log n) and a worst case of O(n^2) when the pivot is consistently the smallest or largest element. Randomized or median-of-three pivot selection effectively avoids the worst case in practice. It sorts in place with O(log n) stack space on average.

sorting, divide-and-conquer
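A minimal sketch of the idea, with a randomized pivot; for clarity this version builds new lists at each level rather than partitioning in place as production quicksorts do.

```python
import random

def quicksort(arr):
    """Return a sorted copy. Randomized pivot choice makes the
    O(n^2) worst case vanishingly unlikely in practice."""
    if len(arr) <= 1:
        return list(arr)
    pivot = random.choice(arr)
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```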

Dynamic programming is a technique for solving problems by breaking them into overlapping subproblems, solving each once, and storing the results to avoid redundant computation. It applies when a problem has optimal substructure (optimal solution is built from optimal solutions to subproblems) and overlapping subproblems. It can be implemented top-down with memoization or bottom-up with tabulation.

dynamic-programming, optimization
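Both properties show up in the classic stair-climbing problem: the count for n stairs is built from the counts for n-1 and n-2 (optimal substructure), and those subproblems recur many times (overlap). A top-down sketch with caching:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ways(n):
    """Number of ways to climb n stairs taking 1 or 2 steps at a time.
    The cache ensures each overlapping subproblem is solved only once."""
    if n <= 1:
        return 1
    return ways(n - 1) + ways(n - 2)
```

Without the cache this recursion is exponential; with it, each value of n is computed once, giving O(n) time.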

Binary search finds a target value in a sorted array by repeatedly comparing the target to the middle element and eliminating half the remaining elements. If the target is less than the middle, search the left half; if greater, search the right half. It requires the input to be sorted and provides O(log n) time complexity. It can be applied to any monotonic function, not just arrays.

searching, divide-and-conquer
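A standard iterative sketch; note how each comparison discards half of the remaining range.

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1   # target must be in the right half
        else:
            hi = mid - 1   # target must be in the left half
    return -1
```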

Greedy algorithms make the locally optimal choice at each step, hoping to reach a globally optimal solution, and never reconsider past choices. Dynamic programming considers all possible choices and builds the optimal solution from subproblem solutions. Greedy algorithms are faster and simpler but only work when the greedy choice property holds (like in Dijkstra's or Huffman coding). Dynamic programming is more general and handles overlapping subproblems.

greedy, dynamic-programming
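Coin change is a compact way to see the difference. With denominations {1, 3, 4} and amount 6, the greedy largest-coin-first choice (4 + 1 + 1 = three coins) is beaten by the DP answer (3 + 3 = two coins), because the greedy choice property does not hold for these denominations. A sketch of both:

```python
def greedy_coins(coins, amount):
    """Largest-coin-first greedy; fast, but not optimal for all denominations."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def dp_coins(coins, amount):
    """Bottom-up DP: minimum coins, optimal for any denominations."""
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] != INF else None
```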

Merge sort divides the input in half, recursively sorts each half, and merges the sorted halves together. It guarantees O(n log n) time in all cases and is stable (preserves the order of equal elements). It is preferred for linked lists because merging linked lists requires no extra space (just pointer manipulation), and linked lists do not benefit from quicksort's cache-friendly random access patterns.

sorting, linked-lists
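A minimal array-based sketch (the linked-list variant replaces the slicing with pointer splicing). The `<=` comparison in the merge step is what makes the sort stable.

```python
def merge_sort(arr):
    """Return a sorted copy; O(n log n) in all cases, stable."""
    if len(arr) <= 1:
        return list(arr)
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps equal elements in order (stability)
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```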

Memoization is a top-down approach that starts with the original problem, recursively breaks it into subproblems, and caches results as they are computed. Tabulation is a bottom-up approach that solves all subproblems iteratively from the smallest, filling a table until the final answer is reached. Memoization only computes needed subproblems, while tabulation avoids recursion overhead and is generally more space-predictable.

dynamic-programming, optimization
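Fibonacci makes the contrast concrete: the same recurrence, written top-down with a cache and bottom-up with a rolling table.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Top-down: recurse from the original problem, cache each result."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n):
    """Bottom-up: iterate from the smallest subproblem; no recursion,
    and only the last two table entries need to be kept."""
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(n - 1):
        prev, curr = curr, prev + curr
    return curr
```

Here tabulation also shows the space-predictability point: the full table collapses to two variables.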

Dijkstra's algorithm finds the shortest path from a source node to all other nodes in a weighted graph with non-negative edge weights. It maintains a priority queue of unvisited nodes ordered by their current shortest distance, repeatedly extracts the minimum, and relaxes its neighbors. With a binary heap, it runs in O((V + E) log V) time. It does not work with negative edge weights; Bellman-Ford is used for those cases.

graphs, shortest-path
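A sketch using Python's heapq as the binary heap; the graph is assumed to be an adjacency dict of `{node: [(neighbor, weight), ...]}` with non-negative weights.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source to every reachable node."""
    dist = {source: 0}
    pq = [(0, source)]            # priority queue of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)  # extract the closest unfinished node
        if d > dist.get(u, float("inf")):
            continue              # stale queue entry; skip it
        for v, w in graph.get(u, []):
            nd = d + w            # relax edge (u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

Rather than decreasing keys in place, this version pushes duplicate entries and skips stale ones on pop, a common simplification with the same asymptotic bound.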

The two-pointer technique uses two indices that move through a data structure (typically a sorted array) to solve problems in O(n) time instead of O(n^2). Common patterns include one pointer at each end moving inward (for pair sum problems), or a slow and fast pointer moving in the same direction (for removing duplicates or detecting cycles). It is effective for problems involving pairs, subarrays, or partitioning.

techniques, arrays
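The ends-inward pattern for the pair-sum case, sketched minimally: because the array is sorted, a sum that is too small can only be fixed by advancing the left pointer, and one that is too large by retreating the right.

```python
def pair_with_sum(sorted_arr, target):
    """Return indices (lo, hi) of a pair summing to target, or None. O(n)."""
    lo, hi = 0, len(sorted_arr) - 1
    while lo < hi:
        s = sorted_arr[lo] + sorted_arr[hi]
        if s == target:
            return lo, hi
        if s < target:
            lo += 1    # need a larger sum
        else:
            hi -= 1    # need a smaller sum
    return None
```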

Backtracking is a systematic method for exploring all potential solutions by building candidates incrementally and abandoning (pruning) a candidate as soon as it is determined to be invalid. It is essentially a depth-first search of the solution space with early termination. Classic applications include N-Queens, Sudoku solvers, permutation generation, and subset sum problems. Effective pruning significantly reduces the search space from brute force.

recursion, backtracking
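N-Queens shows both halves of the technique: candidates are built row by row, and any square attacked by an earlier queen is pruned before recursing. A counting sketch:

```python
def n_queens(n):
    """Count placements of n mutually non-attacking queens on an n x n board."""
    count = 0

    def place(row, cols, diag1, diag2):
        nonlocal count
        if row == n:            # a full valid placement was built
            count += 1
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue        # prune: square attacked, abandon this branch
            place(row + 1,
                  cols | {col},
                  diag1 | {row - col},
                  diag2 | {row + col})

    place(0, frozenset(), frozenset(), frozenset())
    return count
```

Passing the occupied columns and diagonals down as sets means "un-choosing" happens automatically when the recursion returns.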

The sliding window technique maintains a window (subarray or substring) that slides across data to efficiently compute results like maximum sum, longest substring, or minimum covering substring. Instead of recalculating for every possible subarray (O(n^2)), the window expands or contracts from one end while maintaining relevant state, achieving O(n) time. It applies to problems with contiguous sequences where adding and removing elements is incremental.

techniques, arrays
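The longest-substring-without-repeats problem is a typical variable-size window: the right edge expands one character at a time, and the left edge jumps forward only when a repeat enters the window. A sketch:

```python
def longest_unique_substring(s):
    """Length of the longest substring of s with no repeated characters. O(n)."""
    last_seen = {}          # char -> most recent index
    start = best = 0        # start marks the left edge of the window
    for i, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= start:
            start = last_seen[ch] + 1   # contract past the repeated character
        last_seen[ch] = i
        best = max(best, i - start + 1)
    return best
```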

Topological sorting produces a linear ordering of vertices in a directed acyclic graph (DAG) such that for every directed edge (u, v), vertex u appears before v. It can be implemented using DFS (processing nodes in reverse post-order) or Kahn's algorithm (repeatedly removing nodes with zero in-degree using a queue). It is used for task scheduling, build systems, and dependency resolution. It is only possible on DAGs; the presence of a cycle makes topological ordering impossible.

graphs, sorting
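A sketch of Kahn's algorithm; the cycle check falls out for free, since a cycle leaves some vertices with nonzero in-degree that never reach the queue.

```python
from collections import deque

def topo_sort(vertices, edges):
    """Kahn's algorithm. Returns a topological order, or None if a cycle exists."""
    adj = {v: [] for v in vertices}
    indegree = {v: 0 for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        indegree[v] += 1
    queue = deque(v for v in vertices if indegree[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indegree[v] -= 1        # "remove" edge (u, v)
            if indegree[v] == 0:
                queue.append(v)
    return order if len(order) == len(vertices) else None
```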

O(n^2) algorithms like bubble sort, selection sort, and insertion sort compare or swap elements in nested loops, making them impractical for large datasets. O(n log n) algorithms like merge sort, quicksort (average case), and heap sort divide the problem into smaller parts, achieving theoretically optimal comparison-based sorting. For 1 million elements, O(n^2) requires roughly 10^12 operations while O(n log n) requires about 2 x 10^7.

sorting, complexity
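The operation counts above are easy to verify directly; the gap at n = 10^6 is roughly a factor of 50,000.

```python
import math

n = 1_000_000
quadratic = n * n                 # 10^12 operations for an O(n^2) sort
linearithmic = n * math.log2(n)   # ~2 x 10^7 operations for O(n log n)
ratio = quadratic / linearithmic  # roughly 5 x 10^4
```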

Amortized analysis determines the average time per operation over a worst-case sequence of operations, even when some individual operations are expensive. For example, a dynamic array's append is O(1) amortized because costly O(n) resizing operations happen infrequently enough that the total cost over n appends is still O(n). Common methods include the aggregate method, accounting method, and potential method.

complexity, analysis
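The dynamic-array claim can be checked by simulation: with capacity doubling, the total number of element copies over n appends is 1 + 2 + 4 + ... < 2n, so the average cost per append stays constant.

```python
def total_copy_cost(n):
    """Total element copies performed while appending n items
    to a dynamic array that doubles its capacity when full."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size     # resize: copy every existing element
            capacity *= 2
        size += 1
    return copies
```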

Divide and conquer solves a problem by breaking it into smaller, independent subproblems of the same type, solving each recursively, and combining the results. The approach naturally maps to recursive implementations and is analyzed using the Master Theorem. Classic examples include merge sort, quicksort, binary search, Strassen's matrix multiplication, and the closest pair of points algorithm.

divide-and-conquer, recursion
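Fast exponentiation is perhaps the smallest complete example of the pattern: the problem x^n is divided into one half-size subproblem x^(n/2), solved recursively, and combined by squaring, giving O(log n) multiplications instead of O(n).

```python
def power(x, n):
    """Compute x**n for integer n >= 0 using divide and conquer."""
    if n == 0:
        return 1                 # base case
    half = power(x, n // 2)      # divide: one subproblem of half the size
    if n % 2 == 0:
        return half * half       # combine by squaring
    return half * half * x       # odd exponent: one extra factor of x
```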