Gaussian random variable

A standard Gaussian random variable $\mathbf{z}$ is a real random variable that is zero-mean, unit-variance, and has a log-quadratic PDF. The standard Gaussian distribution $N(0, 1)$ has the following PDF and characteristic function (CF):

  • PDF: $\phi(x) = (2 \pi)^{-\frac{1}{2}} \exp(-\frac{1}{2} x^2)$
  • CF: $\varphi_{\mathbf{z}}(t) = \exp(- \frac{1}{2} t^2)$

We usually denote the PDF of a standard Gaussian by $\phi(x)$ and its CDF by $\Phi(x)$.
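
As a quick sanity check, here is a minimal numerical sketch (assuming `numpy` and `scipy` are available) that compares the closed-form PDF against `scipy.stats.norm` and estimates the CF by Monte Carlo:

```python
import numpy as np
from scipy import stats

# Closed-form standard Gaussian PDF phi(x)
def phi(x):
    return (2 * np.pi) ** -0.5 * np.exp(-0.5 * x ** 2)

x = np.linspace(-4.0, 4.0, 9)
assert np.allclose(phi(x), stats.norm.pdf(x))  # agrees with scipy's N(0, 1) density

# Monte Carlo check of the CF: E[exp(i t z)] should approach exp(-t^2 / 2)
rng = np.random.default_rng(0)
z = rng.standard_normal(200_000)
t = 1.5
print(np.mean(np.exp(1j * t * z)))  # ≈ exp(-0.5 * 1.5 ** 2) ≈ 0.3247
```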

A Gaussian random variable $\mathbf{x}$ is a real random variable whose PDF is log-quadratic. The Gaussian distribution $N(\mu, \sigma^2)$ with mean $\mu$ and variance $\sigma^2$ has the following PDF and CF:

  • PDF: $f_{\mathbf{x}}(x) = (2 \pi \sigma^2)^{-\frac{1}{2}} \exp(-\frac{1}{2} (x-\mu)^2 \sigma^{-2})$
  • CF: $\varphi_{\mathbf{x}}(t) = \exp(- \frac{1}{2} t^2 \sigma^2 + i \mu t)$
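
The same check works for general parameters; the values of $\mu$, $\sigma$, and $t$ below are arbitrary. The sketch also uses the standardization $x = \mu + \sigma z$ to sample from $N(\mu, \sigma^2)$:

```python
import numpy as np
from scipy import stats

mu, sigma = 2.0, 0.5  # arbitrary example parameters

# Closed-form PDF of N(mu, sigma^2)
def f(x):
    return (2 * np.pi * sigma ** 2) ** -0.5 * np.exp(-0.5 * (x - mu) ** 2 / sigma ** 2)

x = np.linspace(0.0, 4.0, 9)
assert np.allclose(f(x), stats.norm.pdf(x, loc=mu, scale=sigma))

# CF check by Monte Carlo, sampling x = mu + sigma * z
rng = np.random.default_rng(0)
x_mc = mu + sigma * rng.standard_normal(200_000)
t = 0.8
print(np.mean(np.exp(1j * t * x_mc)))                    # Monte Carlo estimate
print(np.exp(-0.5 * t ** 2 * sigma ** 2 + 1j * mu * t))  # closed-form CF
```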

Gaussian random vector

A standard Gaussian random vector $\mathbf{z}$ is a random vector whose components are mutually independent standard Gaussian random variables. The $n$-dimensional standard Gaussian distribution $N_n(0, I_n)$ has the following PDF and CF:

  • PDF: $\phi(\mathbf{x}) = \prod_{i=1}^n (2 \pi)^{-\frac{1}{2}} \exp(-\frac{1}{2} x_i^2) = (2 \pi)^{-\frac{n}{2}} \exp(-\frac{1}{2} \mathbf{x}^T \mathbf{x})$
  • CF: $\varphi_{\mathbf{z}} (\mathbf{w}) = \prod_{i=1}^n \exp(-\frac{1}{2} w_i^2) = \exp(-\frac{1}{2} \mathbf{w}^T \mathbf{w})$
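
A small numerical check (assuming `scipy` is available; the test point is an arbitrary choice) that the product form and the vectorized form agree, and that both match `scipy.stats.multivariate_normal`:

```python
import numpy as np
from scipy import stats

n = 3
x = np.array([0.2, -1.0, 0.5])  # arbitrary test point in R^3

# Product of univariate densities equals the vectorized form
prod_form = np.prod(stats.norm.pdf(x))
vector_form = (2 * np.pi) ** (-n / 2) * np.exp(-0.5 * x @ x)
assert np.isclose(prod_form, vector_form)

# Both agree with scipy's N_3(0, I_3)
assert np.isclose(vector_form,
                  stats.multivariate_normal.pdf(x, mean=np.zeros(n), cov=np.eye(n)))
```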

A Gaussian random vector is a random vector that can be represented as an affine transformation of a standard Gaussian random vector: $\mathbf{x} = A \mathbf{z} + b$, $A \in M_{n,n}$, $b \in \mathbb{R}^n$. Theorem: Affine transformations of Gaussian random vectors are Gaussian random vectors; equivalently, the projection of a Gaussian random vector onto any direction is a Gaussian random variable. The $n$-dimensional Gaussian distribution $N_n(\boldsymbol{\mu}, \Sigma)$ with mean $\boldsymbol{\mu}$ and covariance matrix $\Sigma$ has the following PDF and CF, where $\dagger$ denotes the pseudo-inverse and $\text{det}_+$ the pseudo-determinant (a numerical sketch follows the list):

  • PDF (general): $f_{\mathbf{x}}(\mathbf{x}) = \text{det}_+(2\pi \Sigma)^{-\frac{1}{2}} \exp(-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu})^T \Sigma^\dagger (\mathbf{x} - \boldsymbol{\mu}))$, where $\mathbf{x}$ is in the support $\boldsymbol{\mu} + \text{span}(\Sigma)$.
  • PDF (non-singular covariance): $f_{\mathbf{x}} (\mathbf{x}) = \det(2\pi \Sigma)^{-\frac{1}{2}} \exp(-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu} )^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}))$
  • CF: $\varphi_{\mathbf{x}} (\mathbf{w}) = \exp(-\frac{1}{2} \mathbf{w}^T \Sigma \mathbf{w} + i \boldsymbol{\mu}^T \mathbf{w})$
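
The sketch below (with arbitrarily chosen non-singular $A$ and $b$) illustrates the affine representation numerically: $A \mathbf{z} + b$ has mean $b$ and covariance $A A^T$, and in the non-singular case the closed-form PDF matches `scipy.stats.multivariate_normal`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.0],
              [0.8, 0.6]])  # arbitrary non-singular A
b = np.array([1.0, -2.0])

# x = A z + b has mean b and covariance A A^T
z = rng.standard_normal((500_000, 2))
x = z @ A.T + b
print(x.mean(axis=0))           # ≈ b
print(np.cov(x, rowvar=False))  # ≈ A @ A.T

# Non-singular case: closed-form PDF matches scipy
Sigma = A @ A.T
pt = np.array([0.5, -1.5])
f = np.linalg.det(2 * np.pi * Sigma) ** -0.5 * np.exp(
    -0.5 * (pt - b) @ np.linalg.solve(Sigma, pt - b))
assert np.isclose(f, stats.multivariate_normal.pdf(pt, mean=b, cov=Sigma))
```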

If the covariance matrix is singular, the PDF given above is defined w.r.t. the Lebesgue measure on the affine subspace where the distribution is supported. Let $\text{rank}(\Sigma) = r$, eigen-decompose $\Sigma = Q \Lambda Q^T$ where $\Lambda = \text{diag}(\boldsymbol{\lambda})$ with eigenvalues $\boldsymbol{\lambda} \in \mathbb{R}^n_{+\downarrow}$ in decreasing order, and partition $Q$ into two column blocks $Q = (Q_r, Q_{n-r})$, where $Q_r$ holds the eigenvectors of the $r$ positive eigenvalues. The support can be written as $\boldsymbol{\mu} + \text{span}(Q_r)$, or as the set of points that satisfy $Q^T_{n-r} (\mathbf{x} - \boldsymbol{\mu}) = 0$.
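
A worked example with a rank-1 covariance in $\mathbb{R}^2$ (a minimal sketch assuming `numpy`; the covariance, mean, and test point are arbitrary choices), computing the pseudo-determinant and pseudo-inverse and evaluating the density at a point on the support:

```python
import numpy as np

# Rank-1 covariance in R^2: Sigma = v v^T is singular with r = 1
v = np.array([1.0, 1.0])
Sigma = np.outer(v, v)
mu = np.array([0.0, 1.0])

# Eigen-decompose Sigma = Q Lambda Q^T, eigenvalues sorted in decreasing order
lam, Q = np.linalg.eigh(Sigma)
order = np.argsort(lam)[::-1]
lam, Q = lam[order], Q[:, order]
r = int(np.sum(lam > 1e-12))           # rank(Sigma) = 1
Q_r, Q_rest = Q[:, :r], Q[:, r:]

# Pseudo-determinant of 2 pi Sigma: product of its r positive eigenvalues
pdet = np.prod(2 * np.pi * lam[:r])
Sigma_pinv = np.linalg.pinv(Sigma)

# Evaluate the density at a point x on the support mu + span(Q_r)
x = mu + 2.0 * Q_r[:, 0]
assert np.allclose(Q_rest.T @ (x - mu), 0.0)  # support condition
f = pdet ** -0.5 * np.exp(-0.5 * (x - mu) @ Sigma_pinv @ (x - mu))
print(f)  # equals the 1-D density of N(0, 2) at coordinate 2 along Q_r
```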

Theorem: Two jointly Gaussian random vectors are independent if and only if they are uncorrelated: $\mathbf{x}_1 ∐ \mathbf{x}_2 \Leftrightarrow \text{Cov}(\mathbf{x}_1, \mathbf{x}_2) = 0$. Joint Gaussianity is essential: marginally Gaussian, uncorrelated random variables need not be independent.
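
The standard counterexample below (a Monte Carlo sketch) shows why joint Gaussianity is needed: with $\mathbf{z}$ standard Gaussian and $s$ an independent random sign, $\mathbf{x}_1 = \mathbf{z}$ and $\mathbf{x}_2 = s \mathbf{z}$ are each Gaussian and uncorrelated, yet clearly dependent:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(500_000)
s = rng.choice([-1.0, 1.0], size=z.shape)  # independent random sign

x1, x2 = z, s * z   # both marginally N(0, 1), but (x1, x2) is not jointly Gaussian
print(np.mean(x1 * x2))  # ≈ 0: uncorrelated

# Dependence shows up in higher moments: E[x1^2 x2^2] = E[z^4] = 3, not 1 * 1
print(np.mean(x1 ** 2 * x2 ** 2), np.mean(x1 ** 2) * np.mean(x2 ** 2))
```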

[@Petersen2012, Sec 8] gives a good summary of properties of Gaussian random vectors and distributions. [@Petersen2012, Apx A] gives particular results for Gaussian random variables.



🏷 Category=Probability