Linear Algebra

  • Determinant

    Derivation of Determinant The determinant may be the most infamous concept in linear algebra, given its odd definition and computation. One may well wonder why there has to be a determinant at all.
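
    As a taste of that odd computation, here is a minimal sketch (not from the post itself) of the cofactor (Laplace) expansion along the first row, with the matrix given as a plain list of lists; it illustrates the recursive definition rather than an efficient algorithm:

    ```python
    def det(A):
        """Determinant via cofactor expansion along the first row."""
        n = len(A)
        if n == 1:
            return A[0][0]
        total = 0.0
        for j in range(n):
            # Minor: delete row 0 and column j, then recurse.
            minor = [row[:j] + row[j + 1:] for row in A[1:]]
            total += (-1) ** j * A[0][j] * det(minor)
        return total

    print(det([[1, 2], [3, 4]]))  # -2.0
    ```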

  • Eigenvectors and Eigenvalues

    Eigenvectors and Eigenvalues An eigenvector of an \(n \times n\) matrix \(A\) is a nonzero vector \(\rm x\) such that \(A\rm x = \lambda \rm x\) for some scalar \(\lambda\); such a scalar \(\lambda\) is called an eigenvalue of \(A\).
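
    A quick numeric check of this definition, assuming NumPy (the example matrix is arbitrary, and the order in which np.linalg.eig returns the pairs is not guaranteed):

    ```python
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    eigvals, eigvecs = np.linalg.eig(A)
    # Each column of eigvecs is an eigenvector for the matching eigenvalue.
    for lam, x in zip(eigvals, eigvecs.T):
        assert np.allclose(A @ x, lam * x)  # A x = lambda x
    print(sorted(eigvals))  # [1.0, 3.0]
    ```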

  • Singular Value Decomposition

    Theorem Any \(m \times n\) matrix \(A\) can be decomposed into \(U\Sigma V^T\), where \[ \begin{gathered} \text{$U$ is $m \times m$, $V$ is $n \times n$} \\ U U^T = U^T U = I_m, \quad V V^T = V^T V = I_n \\ \text{$\Sigma$ is $m \times n$, with the singular values of $A$ on its main diagonal and zeros elsewhere} \end{gathered} \]
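
    A minimal NumPy sketch of the theorem (the example sizes are arbitrary): full_matrices=True yields the square \(m \times m\) and \(n \times n\) factors stated above, so \(\Sigma\) must be padded out to \(m \times n\):

    ```python
    import numpy as np

    m, n = 3, 2
    A = np.arange(6, dtype=float).reshape(m, n)
    U, s, Vt = np.linalg.svd(A, full_matrices=True)
    Sigma = np.zeros((m, n))
    Sigma[:n, :n] = np.diag(s)                # pad singular values to m x n
    assert np.allclose(U @ Sigma @ Vt, A)     # A = U Sigma V^T
    assert np.allclose(U @ U.T, np.eye(m))    # U is orthogonal
    assert np.allclose(Vt @ Vt.T, np.eye(n))  # V is orthogonal
    ```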

  • Real Symmetric Matrix

    Real Symmetric Matrix Let \(A\) be an \(n \times n\) real-valued symmetric matrix. Its properties are as follows. Real-valued Eigenvalues and Eigenvectors Its eigenvalues are all real, and its eigenvectors can accordingly be chosen real-valued.
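
    A sketch of this property, assuming NumPy: symmetrizing a random matrix gives real eigenvalues, and np.linalg.eigh additionally returns an orthonormal set of real eigenvectors:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.standard_normal((4, 4))
    A = (B + B.T) / 2                       # make A symmetric
    eigvals, Q = np.linalg.eigh(A)          # eigh is specialized to symmetric A
    assert np.all(np.isreal(eigvals))       # real eigenvalues
    assert np.allclose(Q @ Q.T, np.eye(4))  # orthonormal eigenvectors
    assert np.allclose(Q @ np.diag(eigvals) @ Q.T, A)
    ```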

  • Difference Equation

    Difference Equation To solve a difference equation like \(x_t = a_{t-1} x_{t-1} + a_{t-2} x_{t-2} + \dots + a_0 x_0\), we first rewrite it in matrix form: \[ \left[ \begin{array}{c} x_t \\ x_{t-1} \\ \vdots \\ x_2 \\ x_1 \end{array} \right] = \underbrace{ \left[ \begin{array}{ccccc} a_{t-1} & a_{t-2} & \cdots & a_1 & a_0 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{array} \right] }_{A} \left[ \begin{array}{c} x_{t-1} \\ x_{t-2} \\ \vdots \\ x_1 \\ x_0 \end{array} \right] \]
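
    A sketch of this companion-matrix idea in the fixed-order special case where only two coefficients are nonzero, i.e. Fibonacci \(x_t = x_{t-1} + x_{t-2}\) (an illustrative choice, assuming NumPy): each multiplication by \(A\) advances the state vector one step.

    ```python
    import numpy as np

    A = np.array([[1, 1],
                  [1, 0]], dtype=object)    # object dtype: exact big integers
    state = np.array([1, 0], dtype=object)  # [x_1, x_0]
    for _ in range(9):
        state = A @ state                   # [x_t, x_{t-1}] -> [x_{t+1}, x_t]
    print(state[0])  # x_10 = 55
    ```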

  • Matrix Identity

    A useful matrix identity: \[ (P^{-1}+B^TR^{-1}B)^{-1}B^TR^{-1} = PB^T(BPB^T+R)^{-1} \] It can be proved via the following chain of equivalences, ending in a trivially true statement: \[ \begin{aligned} (P^{-1}+B^TR^{-1}B)^{-1}B^TR^{-1} &= PB^T(BPB^T+R)^{-1} \\ \iff B^TR^{-1} &= (P^{-1}+B^TR^{-1}B)PB^T(BPB^T+R)^{-1} \\ \iff B^TR^{-1} &= (P^{-1}PB^T+B^TR^{-1}BPB^T)(BPB^T+R)^{-1} \\ \iff B^TR^{-1} &= (B^T+B^TR^{-1}BPB^T)(BPB^T+R)^{-1} \\ \iff B^TR^{-1} &= (B^TR^{-1}R+B^TR^{-1}BPB^T)(BPB^T+R)^{-1} \\ \iff B^TR^{-1} &= B^TR^{-1}(R+BPB^T)(BPB^T+R)^{-1} \\ \iff B^TR^{-1} &= B^TR^{-1} \\ \end{aligned} \] Its reduced form is the push-through identity: \[ (I_N+AB)^{-1}A = A(I_M+BA)^{-1} \] It can be proved likewise: \[ \begin{aligned} (I_N+AB)^{-1}A &= A(I_M+BA)^{-1} \\ \iff A &= (I_N + AB)A(I_M + BA)^{-1} \\ \iff A &= (A + ABA)(I_M + BA)^{-1} \\ \iff A &= A(I_M + BA)(I_M + BA)^{-1} \\ \iff A &= A \end{aligned} \]
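
    A numeric spot check of both identities with random matrices, assuming NumPy; \(P\) and \(R\) are made symmetric positive definite so that every inverse exists:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 4, 3
    B = rng.standard_normal((m, n))
    Xp = rng.standard_normal((n, n)); P = Xp @ Xp.T + np.eye(n)  # SPD
    Xr = rng.standard_normal((m, m)); R = Xr @ Xr.T + np.eye(m)  # SPD
    inv = np.linalg.inv

    lhs = inv(inv(P) + B.T @ inv(R) @ B) @ B.T @ inv(R)
    rhs = P @ B.T @ inv(B @ P @ B.T + R)
    assert np.allclose(lhs, rhs)

    # Reduced form: (I_N + A B)^{-1} A = A (I_M + B A)^{-1}
    N, M = 4, 3
    A2 = rng.standard_normal((N, M))
    B2 = rng.standard_normal((M, N))
    assert np.allclose(inv(np.eye(N) + A2 @ B2) @ A2,
                       A2 @ inv(np.eye(M) + B2 @ A2))
    ```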

  • Quadratic Form

    Quadratic Form The quadratic form involves many concepts, like the real symmetric matrix, positive definiteness and the singular value decomposition, so it can be quite helpful to glue these things together. A quadratic function \(f\) of \(n\) variables, i.e. of a vector \(\x\) of length \(n\), is a sum of second-order terms: \[ f(\x) = \sum_{i=1}^n \sum_{j=1}^n c_{ij} x_i x_j \]
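
    A sketch checking that the double sum above is exactly \(\x^T C \x\), and that only the symmetric part of \(C\) matters, assuming NumPy:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 3
    C = rng.standard_normal((n, n))
    x = rng.standard_normal(n)

    double_sum = sum(C[i, j] * x[i] * x[j]
                     for i in range(n) for j in range(n))
    assert np.isclose(double_sum, x @ C @ x)     # f(x) = x^T C x
    C_sym = (C + C.T) / 2
    assert np.isclose(x @ C @ x, x @ C_sym @ x)  # symmetric part suffices
    ```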

  • Metrics

    Spectral Normalization The spectral norm of an \(M \times N\) matrix \(A\) is defined as \[ ||A||_2 = \max_{\mathrm z \ne 0} \frac{||A\mathrm z||_2}{||\mathrm z||_2} = \sqrt{\lambda_{\max}(A^TA)} = \sigma_{\max}(A) \] where \(\rm z \in \R^N\) and \(\lambda_{\max}(A^TA)\) is the maximum eigenvalue of the matrix \(A^TA\); its square root is exactly \(A\)'s largest singular value \(\sigma_{\max}(A)\).
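
    A sketch checking that the three expressions agree, assuming NumPy (the example matrix is arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))
    spec = np.linalg.norm(A, 2)                        # ||A||_2
    lam_max = np.max(np.linalg.eigvalsh(A.T @ A))      # lambda_max(A^T A)
    sigma_max = np.linalg.svd(A, compute_uv=False)[0]  # largest singular value
    assert np.isclose(spec, np.sqrt(lam_max))
    assert np.isclose(spec, sigma_max)
    ```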

  • Positive Semi-definite Matrix

    Positive Semi-definite Matrix The positive semi-definite matrix involves many concepts, like the quadratic form, the real symmetric matrix and the singular value decomposition, so it can be quite helpful to glue these things together here. Quadratic Form A quadratic function \(f\) of \(n\) variables, i.e. of a vector \(\x\) of length \(n\), is a sum of second-order terms: \[ f(\x) = \sum_{i=1}^n \sum_{j=1}^n c_{ij} x_i x_j \]
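
    A sketch tying these together, assuming NumPy: a Gram matrix \(G = X^T X\) is positive semi-definite by construction, so its quadratic form is nonnegative everywhere and its eigenvalues are all \(\ge 0\):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((5, 3))
    G = X.T @ X                       # Gram matrix: PSD by construction
    for _ in range(100):
        v = rng.standard_normal(3)
        assert v @ G @ v >= 0         # v^T G v = ||X v||^2 >= 0
    assert np.all(np.linalg.eigvalsh(G) >= -1e-12)  # eigenvalues >= 0 (tolerance)
    ```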