Linear Algebra for Quant Interviews: What’s Actually Tested
Linear algebra is the core mathematical machinery of modern quantitative finance. Almost every quant model — factor models, PCA, Kalman filters, regression, portfolio optimization, covariance estimation — runs on linear algebra primitives. Quant-research interviews at Two Sigma, D. E. Shaw, Citadel, Renaissance Technologies, Bridgewater, and the Strats groups at Goldman Sachs, JPMorgan, and Morgan Stanley test linear algebra fluency directly and indirectly.
The good news: the tested material is narrower than a full linear-algebra course. The bad news: candidates often think they know it because they took the course, and find out at the interview that “knowing the definition of an eigenvalue” is different from “explaining what eigendecomposition tells you about the covariance matrix in a portfolio context.” This guide covers the linear algebra topics that actually come up, with quant-finance applications.
Topics That Get Tested
Vectors and matrices
Trivial in the sense that everyone knows what they are; non-trivial in the sense that the interviewer wants to see fluent operations: matrix multiplication, transpose, inverse, determinant, trace, basis change. You should be able to multiply small matrices mentally and recognize patterns (diagonal, triangular, symmetric, orthogonal).
Eigenvalues and eigenvectors
The single most-tested linear algebra topic in quant interviews. You should know:
- Definition: Av = λv. Eigenvector v is preserved in direction by A; eigenvalue λ is the scaling factor.
- Computation: solve det(A – λI) = 0 for eigenvalues; substitute back to find eigenvectors.
- Properties: sum of eigenvalues = trace; product of eigenvalues = determinant; symmetric matrices have real eigenvalues and orthogonal eigenvectors.
- Diagonalization: A = PDP^(-1), where D is the diagonal matrix of eigenvalues and P is the matrix of eigenvectors. This requires A to have a full set of linearly independent eigenvectors; symmetric matrices always do. A quick numpy check of these facts follows.
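A minimal numpy sketch verifying these properties; the 2×2 matrix is arbitrary, chosen only for illustration:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])  # eigenvalues are 5 and 2

# Eigendecomposition: columns of P are eigenvectors, entries of D eigenvalues.
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

# Defining property: Av = λv for each eigenpair.
for lam, v in zip(eigvals, P.T):
    assert np.allclose(A @ v, lam * v)

# Sum of eigenvalues = trace; product of eigenvalues = determinant.
assert np.isclose(eigvals.sum(), np.trace(A))
assert np.isclose(eigvals.prod(), np.linalg.det(A))

# Diagonalization: A = P D P^(-1).
assert np.allclose(A, P @ D @ np.linalg.inv(P))
```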
Spectral theorem
For symmetric (or Hermitian) matrices: there exists an orthonormal basis of eigenvectors. In matrix form: A = QΛQ^T where Q is orthogonal and Λ is diagonal. This is the structural backbone of PCA and covariance analysis.
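A quick numerical check of the theorem, assuming nothing beyond numpy (the random symmetric matrix is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B + B.T  # symmetric by construction

# eigh is the right routine for symmetric matrices: real eigenvalues and
# orthonormal eigenvectors, returned in ascending eigenvalue order.
lam, Q = np.linalg.eigh(A)

assert np.allclose(Q.T @ Q, np.eye(4))         # Q is orthogonal
assert np.allclose(A, Q @ np.diag(lam) @ Q.T)  # A = QΛQ^T
```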
Singular Value Decomposition (SVD)
Any matrix A can be written as A = UΣV^T where U and V are orthogonal and Σ is diagonal with non-negative entries (singular values). SVD is more general than eigendecomposition (works on rectangular matrices) and is the foundation of low-rank approximation, regression diagnostics, and PCA on data matrices.
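A short sketch of SVD and its best-rank-k property (Eckart–Young), on a random rectangular matrix chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))  # rectangular: eigendecomposition doesn't apply

U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert np.allclose(A, U @ np.diag(s) @ Vt)     # A = UΣV^T

# Best rank-1 approximation: keep only the top singular triplet.
A1 = s[0] * np.outer(U[:, 0], Vt[0])

# The spectral-norm error of the rank-1 truncation is the next singular value.
assert np.isclose(np.linalg.norm(A - A1, 2), s[1])
```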
Positive (semi-)definite matrices
A symmetric matrix M is positive definite if x^T M x > 0 for all non-zero x; positive semi-definite if x^T M x ≥ 0 for all x. Equivalent to: all eigenvalues > 0 (or ≥ 0). Covariance matrices are positive semi-definite by construction. Knowing this matters because portfolio variance is x^T Σ x where Σ is the covariance matrix, and numerical issues with covariance estimation often produce matrices that aren't quite positive semi-definite; you need to understand when a violation reflects a real data problem versus numerical noise.
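A sketch of checking and crudely repairing a near-PSD matrix. The clipping fix below is a simple illustrative approach, not the full nearest-correlation-matrix algorithm used in production:

```python
import numpy as np

def is_psd(M, tol=1e-10):
    """Symmetric M is PSD iff all eigenvalues are >= 0 (up to tolerance)."""
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

def psd_clip(M):
    """Crude PSD repair: clip negative eigenvalues to zero and rebuild.
    Fine for tiny numerical violations; note it breaks the unit diagonal
    of a correlation matrix, so real fixes are more careful."""
    lam, Q = np.linalg.eigh(M)
    return Q @ np.diag(np.clip(lam, 0.0, None)) @ Q.T

# A matrix that is almost, but not quite, PSD (illustrative numbers):
M = np.array([[1.00, 1.01, 0.90],
              [1.01, 1.00, 0.90],
              [0.90, 0.90, 1.00]])
print(is_psd(M))            # False: eigenvalue -0.01 along [1, -1, 0]
print(is_psd(psd_clip(M)))  # True
```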
Matrix inverse and pseudo-inverse
Solving Ax = b. When A is square and invertible, x = A^(-1) b. When A is rectangular or singular, use the Moore-Penrose pseudo-inverse, which gives the minimum-norm least-squares solution. This shows up directly in regression: β = (X^T X)^(-1) X^T y; when X has full column rank, (X^T X)^(-1) X^T is exactly the pseudo-inverse of X.
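A sketch of this, using the same X and y as the worked regression problem later in this guide:

```python
import numpy as np

X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([2.0, 4.0, 6.0])

# Full column rank: the pseudo-inverse equals (X^T X)^(-1) X^T.
assert np.allclose(np.linalg.pinv(X), np.linalg.inv(X.T @ X) @ X.T)

# In practice, use lstsq (SVD-based) rather than forming an explicit inverse.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # ~[0., 2.]: the data lie exactly on y = 2x
```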
Rank and null space
Rank: dimension of the column space (or row space). Null space: vectors x such that Ax = 0. Rank-deficient matrices have non-trivial null spaces, which causes problems in regression (multicollinearity) and inversion (singular matrices). You should be able to identify rank deficiency from a matrix and reason about its consequences.
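A small numerical illustration; the matrix below is constructed so the third column is the sum of the first two:

```python
import numpy as np

# Third column = first column + second column, so rank is 2, not 3.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 9.0],
              [7.0, 8.0, 15.0]])

print(np.linalg.matrix_rank(A))  # 2

# Null space from SVD: right singular vectors whose singular values are ~0.
U, s, Vt = np.linalg.svd(A)
v = Vt[-1]                                   # direction with near-zero singular value
assert np.allclose(A @ v, 0.0, atol=1e-10)   # v spans the null space
```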
Quadratic forms
Functions of the form f(x) = x^T A x. Portfolio variance is a quadratic form. Quadratic optimization problems (mean-variance portfolio optimization) reduce to setting the gradient to zero, yielding linear systems.
Where Linear Algebra Meets Quant Finance
Covariance and correlation
Portfolio variance is x^T Σ x. The covariance matrix Σ is symmetric and positive semi-definite. Eigendecomposition Σ = QΛQ^T gives PCA: the eigenvectors are the principal directions of variance, and the eigenvalues are the variances along those directions. Risk decomposition uses this directly.
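A sketch of that decomposition with toy numbers; the covariance matrix and weights below are illustrative, not calibrated to anything:

```python
import numpy as np

Sigma = np.array([[0.04, 0.02, 0.01],
                  [0.02, 0.09, 0.03],
                  [0.01, 0.03, 0.16]])  # toy 3-asset covariance matrix
w = np.array([0.5, 0.3, 0.2])           # portfolio weights

lam, Q = np.linalg.eigh(Sigma)          # Sigma = Q diag(lam) Q^T

# Portfolio variance as a quadratic form...
var_direct = w @ Sigma @ w

# ...equals the eigenvalue-weighted variance along the principal directions:
z = Q.T @ w                             # rotate weights into the eigenbasis
assert np.isclose(var_direct, np.sum(lam * z**2))
```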
Linear regression
OLS: β = (X^T X)^(-1) X^T y. This minimizes ||y – Xβ||^2. Geometrically: project y onto the column space of X; β gives the coordinates of that projection in the basis of X's columns.
Issues:
- If X is rank-deficient (multicollinear features), X^T X is not invertible. Use ridge regression: β = (X^T X + λI)^(-1) X^T y.
- Predicted values: ŷ = X β = X(X^T X)^(-1) X^T y = Hy, where H is the “hat matrix” (a projection matrix).
- Residuals: e = y – ŷ = (I – H) y. Both H and (I – H) are projection matrices (symmetric and idempotent), as the sketch below checks.
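A sketch verifying these projection facts on random data (the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 3))
y = rng.standard_normal(50)

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix (fine at this toy scale)
e = y - H @ y                          # residuals

assert np.allclose(H @ H, H)           # idempotent: H is a projection
assert np.allclose(H, H.T)             # symmetric: an orthogonal projection
assert np.allclose(X.T @ e, 0.0)       # residuals orthogonal to columns of X
assert np.isclose(np.trace(H), 3.0)    # trace(H) = number of regressors
```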
PCA (Principal Component Analysis)
Given a data matrix X (rows = observations, columns = features), compute the covariance matrix Σ = X^T X / n (after centering). Eigendecompose Σ. The top-k eigenvectors give the directions of maximum variance; projecting data onto them gives a k-dimensional approximation. Used everywhere in quant: signal extraction, factor models, dimensionality reduction.
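A from-scratch PCA sketch on random data, including the equivalent SVD route; all sizes and names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 10))   # rows = observations, columns = features
Xc = X - X.mean(axis=0)              # center first: PCA assumes centered data

Sigma = Xc.T @ Xc / len(Xc)
lam, Q = np.linalg.eigh(Sigma)
lam, Q = lam[::-1], Q[:, ::-1]       # eigh sorts ascending; flip to descending

k = 3
scores = Xc @ Q[:, :k]               # project onto the top-k principal directions

# Equivalent route: SVD of the centered data matrix itself.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
assert np.allclose(s**2 / len(Xc), lam)  # squared singular values / n = eigenvalues
```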
Kalman filtering
State-space models for time-series prediction. Updates use matrix multiplication and inversion. The Kalman gain involves a matrix inverse that can be ill-conditioned and often needs regularization in practice.
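A minimal sketch of one predict/update cycle, assuming a simple level-plus-trend model with made-up noise parameters:

```python
import numpy as np

# Linear-Gaussian state space: x_t = F x_{t-1} + noise(Q), y_t = H x_t + noise(R)
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # level + trend dynamics
H = np.array([[1.0, 0.0]])            # we observe the level only
Q = 0.01 * np.eye(2)                  # process noise covariance
R = np.array([[0.10]])                # observation noise covariance

x = np.zeros(2)                       # prior state estimate
P = np.eye(2)                         # prior state covariance
y_obs = np.array([1.2])               # one new observation

# Predict
x_pred = F @ x
P_pred = F @ P @ F.T + Q

# Update: the gain inverts S, which can be ill-conditioned; in production
# code prefer solve() or a regularized S over an explicit inverse.
S = H @ P_pred @ H.T + R
K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
x = x_pred + K @ (y_obs - H @ x_pred)
P = (np.eye(2) - K @ H) @ P_pred
```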
Portfolio optimization
Mean-variance: minimize x^T Σ x subject to expected return constraint and budget constraint. Lagrangian gives a linear system; solving it requires matrix inversion. Issues: covariance matrix estimation error compounds; in high-dimensional regimes (many assets), Σ is poorly conditioned and naive optimization gives unstable portfolios. Shrinkage estimators (Ledoit-Wolf) are standard fixes.
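A sketch of the equality-constrained problem solved as one linear (KKT) system; the inputs are toy numbers, and a real pipeline would shrink Σ first:

```python
import numpy as np

Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])  # toy covariance (shrunk in practice)
mu = np.array([0.05, 0.08, 0.12])       # expected returns
r_target = 0.09

n = len(mu)
A = np.vstack([np.ones(n), mu])         # constraints: budget and target return
b = np.array([1.0, r_target])

# Minimize w' Sigma w  s.t.  A w = b.  Setting the Lagrangian's gradient to
# zero yields one linear system in (w, lambda):
KKT = np.block([[2.0 * Sigma, A.T],
                [A, np.zeros((2, 2))]])
rhs = np.concatenate([np.zeros(n), b])

w = np.linalg.solve(KKT, rhs)[:n]
assert np.isclose(w.sum(), 1.0) and np.isclose(mu @ w, r_target)
```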
Common Interview Problems
Compute eigenvalues of a small matrix
“Find the eigenvalues of [[3, 1], [0, 2]].” Triangular matrix; eigenvalues are diagonal entries: 3 and 2. Then: “Find the eigenvectors.” Solve (A – λI)v = 0 for each λ.
Discuss properties of symmetric matrices
“Why does PCA require a covariance matrix?” Because a covariance matrix is symmetric and positive semi-definite: real eigenvalues, orthogonal eigenvectors. The eigenvectors form an orthonormal basis aligned with directions of variance.
Solve regression by hand on a small example
“Compute β = (X^T X)^(-1) X^T y for X = [[1, 1], [1, 2], [1, 3]] and y = [2, 4, 6].” Walking through this by hand demonstrates fluency with matrix inverses, transposes, and the geometric interpretation of OLS.
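Worked solution: X^T X = [[3, 6], [6, 14]] and X^T y = [12, 28]. The determinant of X^T X is 3·14 – 6·6 = 6, so (X^T X)^(-1) = (1/6)[[14, –6], [–6, 3]]. Then β = (1/6)[[14, –6], [–6, 3]][12, 28] = (1/6)[14·12 – 6·28, –6·12 + 3·28] = (1/6)[0, 12] = [0, 2]: intercept 0, slope 2, which fits exactly because the data lie on the line y = 2x.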
Reason about rank deficiency
“You’re running a regression with 100 features and 50 observations. What goes wrong?” The design matrix can’t have rank greater than 50; X^T X is rank-deficient and singular; OLS doesn’t have a unique solution. You’d use ridge regression, lasso, or PCA-based dimensionality reduction.
Explain the geometric interpretation of OLS
“What does it mean that ŷ = X β minimizes ||y – Xβ||^2?” ŷ is the orthogonal projection of y onto the column space of X. Residuals (y – ŷ) are orthogonal to every column of X. This is the geometric reason why X^T (y – Xβ) = 0, which gives the normal equations.
Things That Surprise Candidates
- Numerical considerations matter. Inverting a poorly conditioned matrix gives garbage. Quants use SVD, QR, or Cholesky decomposition rather than computing inverses directly (see the sketch after this list).
- Covariance shrinkage is standard. Empirical covariance matrices in finance are too noisy to use directly. Ledoit-Wolf, Tikhonov regularization, and similar methods are routine.
- Eigenvalues of covariance matrices have economic interpretations. The largest eigenvalue often corresponds to a “market factor”; subsequent eigenvalues to sector / style factors. PCA on returns recovers economically interpretable structures.
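A sketch illustrating the first two points: solving with a Cholesky factorization instead of an explicit inverse, plus a simple linear shrinkage toward a scaled identity. Ledoit-Wolf chooses the shrinkage intensity from the data; here it is a fixed illustrative input:

```python
import numpy as np

def solve_spd(Sigma, b):
    """Solve Sigma x = b for symmetric positive definite Sigma via Cholesky,
    avoiding the less stable explicit inverse."""
    L = np.linalg.cholesky(Sigma)     # Sigma = L L^T
    z = np.linalg.solve(L, b)         # solve L z = b
    return np.linalg.solve(L.T, z)    # solve L^T x = z

def shrink(Sigma_hat, delta):
    """Linear shrinkage toward a scaled identity:
    (1 - delta) * Sigma_hat + delta * target."""
    n = Sigma_hat.shape[0]
    target = (np.trace(Sigma_hat) / n) * np.eye(n)
    return (1.0 - delta) * Sigma_hat + delta * target

rng = np.random.default_rng(4)
returns = rng.standard_normal((60, 10))    # fake panel: 60 days, 10 assets
Sigma_hat = np.cov(returns, rowvar=False)  # noisy empirical covariance

Sigma_reg = shrink(Sigma_hat, delta=0.2)
x = solve_spd(Sigma_reg, np.ones(10))
assert np.allclose(Sigma_reg @ x, np.ones(10))
```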
Frequently Asked Questions
How deep does linear algebra knowledge need to go for quant interviews?
Deeper than a typical undergrad course allows you to fake. You need fluency, not just memorized definitions. Strong candidates can compute eigenvalues of small matrices mentally, derive normal equations from least squares, explain why covariance matrices are positive semi-definite, and reason about numerical issues. Weak candidates can recite the definition of an eigenvalue but can’t connect it to PCA or risk decomposition. The depth bar varies by firm: research-heavy shops (Two Sigma, D. E. Shaw, Renaissance) push hardest; trading-heavy shops (Optiver, Jane Street) push less but still expect basics.
Should I review linear algebra from scratch or just hit the highlights?
If you took a strong proof-based linear algebra course recently and remember it, hit the highlights: eigendecomposition, SVD, positive definiteness, projection matrices, geometric interpretation of OLS. If your background is shakier, work through Gilbert Strang’s Introduction to Linear Algebra or his MIT OCW lectures — he teaches the geometric intuition that interviewers actually probe. Linear Algebra Done Right (Axler) is a good companion for proof depth but less directly applicable to quant interviews.
What’s the difference between eigendecomposition and SVD in practice?
Eigendecomposition requires a square matrix and only “works nicely” for diagonalizable matrices (e.g., symmetric ones). SVD works on any matrix, square or rectangular, full-rank or not. In quant work: eigendecomposition is standard for covariance matrices (square, symmetric, positive semi-definite). SVD is standard for data matrices (rectangular, often rank-deficient) and for numerically stable regression. Many candidates conflate the two; understanding when to use each is a meaningful interview signal.
How does linear algebra come up in actual quant work day-to-day?
Constantly. Covariance matrix estimation and decomposition (risk models, PCA-based factor models). Regression with regularization (signal generation, alpha modeling). Portfolio optimization (mean-variance, Black-Litterman). Time-series state-space models (Kalman filters). Even simple operations like converting a returns matrix to a correlation matrix involve linear algebra. The math you do for an interview is a sanitized version of work you’d do daily.
What’s the most common linear algebra mistake on interviews?
Treating linear algebra as separate from the application. A candidate who can compute eigenvalues but can’t connect that to “the largest eigenvalue of the covariance matrix is roughly the market factor’s variance” looks underprepared. The interviewer wants to see that you understand why the math is in the toolkit. Always connect linear algebra computations to the financial concept they support: variance decomposition, projection, factor structure, dimensionality reduction.
See also: Breaking Into Quant Finance and Wall Street: 2026 Guide • Options Pricing for Quant Interviews • Expected Value and Fair-Game Reasoning