Linear Discriminant Analysis

Linear discriminant analysis is another approximation to the Bayes optimal classifier. Instead of assuming independence between each pair of input dimensions given the label, LDA assumes that all classes share a single common covariance matrix over the input dimensions, regardless of the label. Take the Gaussian distribution as an example: \[ \begin{aligned} p(x|y=C_j) &= \mathcal{N}(x;\mu_j, \Sigma) \\ &= \frac{1}{\sqrt{(2\pi)^n|\Sigma|}}e^{-\frac{1}{2}(x-\mu_j)^T\Sigma^{-1}(x-\mu_j)} \end{aligned} \] Then the classification function is \(f_{LDA}(x) = \arg \max\limits_{j}p(x|y=C_j)p(y=C_j)\).
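As a concrete illustration, here is a minimal NumPy sketch of this decision rule, assuming the parameters \(\mu_j\), \(\Sigma\), and \(\varphi_j = p(y=C_j)\) have already been estimated (the function name `lda_predict` and the array layout are our own choices, not a standard API):

```python
import numpy as np

def lda_predict(x, mu, Sigma, phi):
    """Return the index j maximizing p(x | y = C_j) * p(y = C_j).

    x     : (n,)   input vector
    mu    : (c, n) class means mu_j
    Sigma : (n, n) shared covariance matrix
    phi   : (c,)   class priors p(y = C_j)
    """
    n = x.shape[0]
    _, logdet = np.linalg.slogdet(Sigma)   # log|Sigma|
    Sigma_inv = np.linalg.inv(Sigma)
    scores = []
    for j in range(mu.shape[0]):
        d = x - mu[j]
        # log N(x; mu_j, Sigma) + log p(y = C_j), compared in log space
        log_gauss = -0.5 * (d @ Sigma_inv @ d + logdet + n * np.log(2 * np.pi))
        scores.append(log_gauss + np.log(phi[j]))
    return int(np.argmax(scores))
```

Because \(\Sigma\) is shared across classes, the quadratic term \(x^T\Sigma^{-1}x\) cancels when comparing any two classes, so the decision boundaries are linear in \(x\), which is where the "linear" in the name comes from.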

LDA’s parameters \(\theta = (\mu, \Sigma, \varphi)\) are also learned with Maximum Likelihood Estimation: \[ \begin{aligned} L(\theta) &= \prod_{i=1}^mp(x^{(i)}, y^{(i)};\theta) \\ &= \prod_{i=1}^mp(x^{(i)}|y^{(i)};\mu,\Sigma)\,p(y^{(i)};\varphi) \end{aligned} \] The log-likelihood function is \[ \begin{aligned} l(\theta) &= \log L(\theta) \\ &= \sum_{i=1}^m\log p(x^{(i)}|y^{(i)};\mu,\Sigma) + \sum_{i=1}^m\log p(y^{(i)}; \varphi) \\ \end{aligned} \] Since the two summations involve disjoint sets of parameters, we can maximize them separately.
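For reference, a short sketch that evaluates this decomposition numerically under the Gaussian model above, assuming labels are encoded as integers \(0,\dots,c-1\) (the helper name `log_likelihood` is ours):

```python
import numpy as np

def log_likelihood(X, y, mu, Sigma, phi):
    """l(theta) = l1(mu, Sigma) + l2(phi) on a labeled sample.

    X : (m, n) inputs, y : (m,) integer labels,
    mu : (c, n) class means, Sigma : (n, n), phi : (c,) priors.
    """
    m, n = X.shape
    _, logdet = np.linalg.slogdet(Sigma)
    Sigma_inv = np.linalg.inv(Sigma)
    D = X - mu[y]                                       # rows x^(i) - mu_{y^(i)}
    quad = np.einsum('ij,jk,ik->i', D, Sigma_inv, D)    # per-row quadratic forms
    l1 = -0.5 * np.sum(quad + logdet + n * np.log(2 * np.pi))  # sum_i log p(x|y)
    l2 = np.sum(np.log(phi[y]))                                # sum_i log p(y; phi)
    return l1 + l2
```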

Let \[ \begin{aligned} l_1(\mu,\Sigma) &= \sum_{i=1}^m\log p(x^{(i)}|y^{(i)};\mu,\Sigma) \\ &= -\frac{1}{2}\sum_{i=1}^m\left((x^{(i)}-\mu_{y^{(i)}})^T\Sigma^{-1}(x^{(i)}-\mu_{y^{(i)}}) + \log|\Sigma| + n\log2\pi\right) \end{aligned} \] where \(\mu_{y^{(i)}}\) denotes the mean of the class that \(x^{(i)}\) belongs to. Take the derivative w.r.t. \(\mu_j\) and set it to zero: \[ \begin{aligned} \frac{\partial l_1}{\partial \mu_j} &= \sum_{i=1}^m\mathbb{I}(y^{(i)}=C_j)\Sigma^{-1}(x^{(i)} - \mu_j) = 0 \\ \Sigma^{-1}\sum_{i=1}^m\mathbb{I}(y^{(i)}=C_j)(x^{(i)} - \mu_j) &= 0 \end{aligned} \] Since the covariance matrix \(\Sigma\) is invertible, \(\Sigma^{-1}\) has a trivial nullspace, so \[ \begin{gather} \sum_{i=1}^m\mathbb{I}(y^{(i)}=C_j)(x^{(i)} - \mu_j) = 0 \\ \mu_j = \frac{\sum_{i=1}^m\mathbb{I}(y^{(i)}=C_j)\,x^{(i)}}{\sum_{i=1}^m\mathbb{I}(y^{(i)}=C_j)} \end{gather} \] Take the derivative w.r.t. \(\Sigma\) and set it to zero, using \(\frac{\partial \log|\Sigma|}{\partial \Sigma} = \Sigma^{-1}\) and \(\frac{\partial\, a^T\Sigma^{-1}a}{\partial \Sigma} = -\Sigma^{-1}aa^T\Sigma^{-1}\): \[ \begin{aligned} \frac{\partial l_1}{\partial \Sigma} &= -\frac{1}{2}\sum_{i=1}^m\left(\Sigma^{-1} -\Sigma^{-1}(x^{(i)}-\mu_{y^{(i)}})(x^{(i)}-\mu_{y^{(i)}})^T\Sigma^{-1}\right) = 0 \\ \sum_{i=1}^m\left(I - \Sigma^{-1}(x^{(i)}-\mu_{y^{(i)}})(x^{(i)}-\mu_{y^{(i)}})^T\right)\Sigma^{-1} &= 0 \\ \sum_{i=1}^m\left(I - \Sigma^{-1}(x^{(i)}-\mu_{y^{(i)}})(x^{(i)}-\mu_{y^{(i)}})^T\right) &= 0 \\ mI &= \Sigma^{-1}\sum_{i=1}^m(x^{(i)}-\mu_{y^{(i)}})(x^{(i)}-\mu_{y^{(i)}})^T \\ m\Sigma &= \sum_{i=1}^m(x^{(i)}-\mu_{y^{(i)}})(x^{(i)}-\mu_{y^{(i)}})^T \\ \Sigma &= \frac{1}{m}\sum_{i=1}^m(x^{(i)}-\mu_{y^{(i)}})(x^{(i)}-\mu_{y^{(i)}})^T \end{aligned} \] Let \[ \begin{aligned} l_2(\varphi) &= \sum_{i=1}^m\log p(y^{(i)}; \varphi) \\ &= \sum_{i=1}^m\log\prod_{j=1}^c\varphi_j^{\mathbb{I}(y^{(i)}=C_j)} \\ &= \sum_{i=1}^m\sum_{j=1}^c\mathbb{I}(y^{(i)}=C_j)\log \varphi_j \end{aligned} \] Note that \(p(y^{(i)};\varphi) = \prod_{j=1}^c\varphi_j^{\mathbb{I}(y^{(i)}=C_j)} = \sum_{j=1}^c\mathbb{I}(y^{(i)}=C_j)\varphi_j\); we choose the product form because its logarithm is easy to work with.
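These closed-form estimators for \(\mu_j\) and \(\Sigma\) translate directly into code. Below is a small NumPy sketch, again assuming integer labels \(0,\dots,c-1\) (the helper name `fit_mu_sigma` is our own):

```python
import numpy as np

def fit_mu_sigma(X, y, c):
    """MLE of the class means and the shared covariance matrix.

    X : (m, n) inputs, y : (m,) integer labels in {0, ..., c-1}.
    """
    m, n = X.shape
    # mu_j: average of the examples belonging to class j
    mu = np.vstack([X[y == j].mean(axis=0) for j in range(c)])
    # Sigma: average outer product of (x^(i) - mu_{y^(i)}) over all examples
    D = X - mu[y]
    Sigma = (D.T @ D) / m
    return mu, Sigma
```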

There is also a constraint in the maximization of \(l_2\): \(\sum_{j=1}^c\varphi_j = 1\). We can form the corresponding Lagrangian function: \[ J(\varphi, \lambda) = \sum_{i=1}^m\sum_{j=1}^c\mathbb{I}(y^{(i)}=C_j)\log \varphi_j + \lambda\left(\sum_{j=1}^c\varphi_j - 1\right) \] Take the derivative w.r.t. \(\varphi_j\) and set it to zero: \[ \begin{gather} \frac{\sum_{i=1}^m\mathbb{I}(y^{(i)}=C_j)}{\varphi_j} + \lambda = 0 \\ \varphi_j = -\frac{\sum_{i=1}^m\mathbb{I}(y^{(i)}=C_j)}{\lambda} \end{gather} \] Because \(\sum_{j=1}^c\varphi_j = 1\) and \(\sum_{j=1}^c\sum_{i=1}^m\mathbb{I}(y^{(i)}=C_j) = m\), we have \[ \begin{gather} -\frac{\sum_{j=1}^c\sum_{i=1}^m\mathbb{I}(y^{(i)}=C_j)}{\lambda} = 1 \\ \lambda = -m \end{gather} \] Then, \[ \varphi_j = \frac{\sum_{i=1}^m\mathbb{I}(y^{(i)}=C_j)}{m} \] i.e., \(\varphi_j\) is simply the fraction of training examples labeled \(C_j\).
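Since the prior estimate is just the empirical class frequency, it is a one-liner in code; a sketch with our own helper name `fit_phi`, together with how the pieces above fit together:

```python
import numpy as np

def fit_phi(y, c):
    """phi_j = (number of examples labeled C_j) / m."""
    return np.bincount(y, minlength=c) / y.shape[0]

# Putting it together (X, y, c, and x_new are placeholders for your own data):
# mu, Sigma = fit_mu_sigma(X, y, c)
# phi = fit_phi(y, c)
# prediction = lda_predict(x_new, mu, Sigma, phi)
```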
