Probability and Statistics

  • Law of Total Variance

    Conditional Expectation Let \(X\) and \(Y\) be two discrete random variables. The conditional probability function of \(X\) given \(Y=y\) is \[ \Pr(X=x|Y=y) = \frac{\Pr(X=x, Y=y)}{\Pr(Y=y)} \] Thus the conditional expectation of \(X\) given that \(Y=y\) is \[ \E(X|Y=y) \coloneq \sum_x x \Pr(X=x|Y=y) \] Clearly the conditional expectation \(\E(X|Y)\) is a function of \(Y\); put another way, it is a random variable depending on \(Y\), not on \(X\). The law of total variance then decomposes the variance of \(X\) as \[ \operatorname{Var}(X) = \E(\operatorname{Var}(X|Y)) + \operatorname{Var}(\E(X|Y)) \]
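A quick numerical sanity check of the law of total variance, sketched with NumPy under a hypothetical two-stage experiment (a coin flip \(Y\) selecting between two normal distributions for \(X\)); the decomposition holds exactly for the empirical distribution of the sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-stage experiment: Y is a fair coin (0 or 1);
# given Y, X is drawn from a normal distribution whose parameters depend on Y.
n = 1_000_000
Y = rng.integers(0, 2, size=n)
X = np.where(Y == 1, rng.normal(3.0, 1.0, n), rng.normal(0.0, 2.0, n))

# E(X|Y=y) and Var(X|Y=y) estimated within each slice Y = y
cond_mean = np.array([X[Y == y].mean() for y in (0, 1)])
cond_var = np.array([X[Y == y].var() for y in (0, 1)])
p = np.array([(Y == y).mean() for y in (0, 1)])

# Law of total variance: Var(X) = E[Var(X|Y)] + Var[E(X|Y)]
lhs = X.var()
grand_mean = (p * cond_mean).sum()
rhs = (p * cond_var).sum() + (p * (cond_mean - grand_mean) ** 2).sum()
print(lhs, rhs)  # the two values agree (up to floating-point error)
```

Here the decomposition is an exact algebraic identity for the sample itself, which is why the agreement is to machine precision rather than only up to sampling noise.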

  • Gaussian Distribution

    Gaussian Distribution One-dimensional Suppose a one-dimensional random variable \(x \sim N(\mu, \sigma^2)\); then its density function is \[ p(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2}(\frac{x - \mu}{\sigma})^2} \] To verify that it integrates to \(1\), substitute \(z = \frac{x - \mu}{\sigma}\) and use the Gaussian integral \(\int_{-\infty}^{\infty} e^{-z^2/2}\,dz = \sqrt{2\pi}\), so that \[ \int_{-\infty}^{\infty} p(x)\,dx = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-z^2/2}\,dz = 1 \]
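The normalization can also be checked numerically; a minimal sketch with NumPy, using arbitrary example parameters \(\mu = 1\), \(\sigma = 2\) and a Riemann sum over \(\pm 10\sigma\) (outside of which the density is negligible):

```python
import numpy as np

mu, sigma = 1.0, 2.0  # arbitrary example parameters

# Dense grid covering ±10σ around the mean
x = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200_001)
p = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Riemann-sum approximation of the integral of the density
dx = x[1] - x[0]
total = p.sum() * dx
print(total)  # ≈ 1
```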

  • Law of the Unconscious Statistician

    Law of the Unconscious Statistician In probability theory and statistics, the law of the unconscious statistician (LOTUS) is a theorem used to calculate the expected value of a function \(g(X)\) of a random variable \(X\) when one knows the probability distribution of \(X\) but not the distribution of \(g(X)\). In the discrete case, \[ \E(g(X)) = \sum_x g(x) \Pr(X=x) \]
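A small illustration of LOTUS in the discrete case, using a fair six-sided die and \(g(x) = x^2\) as a hypothetical example: the expectation of \(g(X)\) is computed directly from the distribution of \(X\), without ever deriving the distribution of \(X^2\).

```python
import numpy as np

# Fair six-sided die: the distribution of X is known
xs = np.arange(1, 7)
px = np.full(6, 1 / 6)

def g(x):
    return x ** 2

# LOTUS: E[g(X)] = sum_x g(x) * Pr(X = x)
lotus = (g(xs) * px).sum()
print(lotus)  # 91/6 ≈ 15.1667
```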

  • Whitening

    Whitening Data whitening is the process of linearly transforming a random vector \(X\) with a known (non-degenerate) covariance matrix into a new random vector \(Z\) whose covariance matrix is the identity, i.e. whose components are uncorrelated and have unit variance.
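One common construction is ZCA whitening, which builds the whitening matrix \(W = \Sigma^{-1/2}\) from the eigendecomposition of the sample covariance; a sketch with NumPy on hypothetical correlated 2-d data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical correlated 2-d data: X = A * standard normal, so Cov(X) = A A^T
A = np.array([[2.0, 0.0], [1.5, 0.5]])
X = rng.normal(size=(10_000, 2)) @ A.T

# ZCA whitening via eigendecomposition of the sample covariance
Xc = X - X.mean(axis=0)                                # center first
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
W = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T     # W = Sigma^{-1/2}
Z = Xc @ W.T

print(np.cov(Z, rowvar=False))  # ≈ 2x2 identity matrix
```

Since \(W\) is built from the sample covariance itself, \(\operatorname{Cov}(Z) = W \Sigma W^\top = I\) holds exactly for the sample (up to floating-point error), not just in expectation.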

  • Convergence of Random Variables

    Convergence in probability A sequence of random variables \(X_n\) converges in probability to \(X\) if for every \(\varepsilon > 0\), \[ \lim_{n\to\infty} \Pr(|X_n - X| > \varepsilon) = 0 \]
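The definition of convergence in probability can be illustrated with the weak law of large numbers: the sample mean of Uniform(0, 1) draws converges in probability to \(1/2\). A Monte Carlo sketch (with an arbitrary tolerance \(\varepsilon = 0.05\)) estimating \(\Pr(|\bar{X}_n - 1/2| > \varepsilon)\) for growing \(n\):

```python
import numpy as np

rng = np.random.default_rng(2)
eps = 0.05      # tolerance in the definition of convergence in probability
trials = 500    # Monte Carlo repetitions per sample size

# Estimate Pr(|mean of n Uniform(0,1) draws - 1/2| > eps) for growing n
probs = []
for n in (10, 100, 1000, 10000):
    means = rng.random((trials, n)).mean(axis=1)
    probs.append(float((np.abs(means - 0.5) > eps).mean()))
    print(n, probs[-1])  # the estimated probability shrinks toward 0
```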

  • Characteristic Function

    Definition Intuition From the Taylor series we know that two functions \(f(x),