## Gaussian Random Variable

Definition: A standard 1d Gaussian random variable $X \sim \mathcal{N}(0,1)$ is a real, mean-zero, unit-variance r.v. with PDF and CF

$$f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}, \qquad \Phi_X(w) = E[e^{iwX}] = e^{-\frac{w^2}{2}}$$

More generally, $X \sim \mathcal{N}(m, \sigma^2)$ has PDF $f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-m)^2}{2\sigma^2}}$ and CF $\Phi_X(w) = e^{iwm - \frac{\sigma^2 w^2}{2}}$.
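As a numerical sanity check of the standard 1d density, the moments can be approximated by a Riemann sum on a wide grid; a minimal sketch with NumPy (grid width and resolution are arbitrary choices):

```python
import numpy as np

# Standard 1d Gaussian density f(x) = exp(-x^2/2) / sqrt(2*pi),
# integrated on a grid wide enough that the tails are negligible
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

total = (f * dx).sum()       # total probability, ~1
mean = (x * f * dx).sum()    # mean, ~0
var = (x**2 * f * dx).sum()  # variance, ~1
print(total, mean, var)
```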

## Gaussian Random Vector

Definition: A standard (or elementary) Gaussian random vector is a random vector whose components are independent, real, mean-zero, unit-variance Gaussian r.v.'s.

According to the definition, the PDF and CF of a standard Gaussian random vector $\mathbf{X}$ are

$$f_{\mathbf{X}} (\mathbf{x}) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi}} e^{-\frac{x_i^2}{2}} = \frac{1}{(2\pi)^{n/2}} e^{-\frac{1}{2} \mathbf{x}^T \mathbf{x}}$$

$$\Phi_{\mathbf{X}} (\mathbf{w}) = \prod_{i=1}^n e^{-\frac{w_i^2}{2}} = e^{-\frac{1}{2} \mathbf{w}^T \mathbf{w}}$$
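The equality between the product form and the vector form of the PDF can be checked numerically; a minimal sketch with NumPy (dimension and evaluation point are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
x = rng.standard_normal(n)  # an arbitrary evaluation point

# Product of n 1d standard-normal densities
pdf_product = np.prod(np.exp(-x**2 / 2) / np.sqrt(2 * np.pi))

# Vector form: (2*pi)^(-n/2) * exp(-x^T x / 2)
pdf_vector = (2 * np.pi) ** (-n / 2) * np.exp(-0.5 * x @ x)

print(np.isclose(pdf_product, pdf_vector))  # True
```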

Definition: Any random vector that can be generated by a linear/affine transformation $\mathbf{Y} = A\mathbf{X} + \mathbf{b}$ of a standard Gaussian random vector $\mathbf{X}$ is called a Gaussian random vector.

Theorem: The CF of a Gaussian random vector $\mathbf{Y}$ with mean $\mathbf{m}_Y$ and covariance matrix $K_Y$ is

$$\Phi_{\mathbf{Y}} (\mathbf{w}) = e^{i \mathbf{w}^T \mathbf{m}_Y -\frac{1}{2} \mathbf{w}^T K_Y \mathbf{w}}$$
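The CF formula can be verified by Monte Carlo, estimating $E[e^{i\mathbf{w}^T\mathbf{Y}}]$ from samples of $\mathbf{Y} = A\mathbf{X} + \mathbf{m}_Y$; a sketch with NumPy (the matrix $A$, mean, frequency vector, and sample size are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 2, 200_000  # dimension and Monte Carlo sample size (illustrative)

# Y = A X + m_Y for a standard Gaussian X, so K_Y = A A^T
A = np.array([[1.0, 0.5], [0.0, 2.0]])
m_Y = np.array([1.0, -1.0])
K_Y = A @ A.T

X = rng.standard_normal((N, n))
Y = X @ A.T + m_Y

w = np.array([0.3, -0.2])  # an arbitrary frequency vector

# Monte Carlo estimate of the CF: E[exp(i w^T Y)]
cf_mc = np.mean(np.exp(1j * Y @ w))

# Closed form: exp(i w^T m_Y - w^T K_Y w / 2)
cf_formula = np.exp(1j * w @ m_Y - 0.5 * w @ K_Y @ w)

print(abs(cf_mc - cf_formula))  # small (Monte Carlo error)
```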

Theorem: Linear/affine transformations of Gaussian random vectors are Gaussian random vectors. (Equivalently, every linear combination of the components of a Gaussian random vector is a Gaussian r.v.)
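The transformed vector's mean and covariance follow the usual rules $\mathbf{m}_Z = C\mathbf{m}_Y + \mathbf{d}$ and $K_Z = C K_Y C^T$, which can be checked by simulation; a sketch with NumPy (all matrices and the sample size are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500_000  # Monte Carlo sample size (illustrative choice)

# Start from a Gaussian vector Y = A X + m_Y with known mean and covariance
A = np.array([[1.0, 0.0], [0.3, 0.8]])
m_Y = np.array([2.0, -1.0])
K_Y = A @ A.T

# Apply a further affine map Z = C Y + d; C need not be square
C = np.array([[1.0, -1.0]])
d = np.array([0.5])

Y = rng.standard_normal((N, 2)) @ A.T + m_Y
Z = Y @ C.T + d

# Z is again Gaussian, with the transformed mean and covariance
m_Z = C @ m_Y + d    # predicted mean of Z
K_Z = C @ K_Y @ C.T  # predicted covariance of Z

print(Z.mean(axis=0) - m_Z)  # close to 0
print(np.cov(Z.T) - K_Z)     # close to 0
```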

Theorem: The PDF of a Gaussian random vector $\mathbf{Y}$ with nonsingular $K_Y$ is

$$f_{\mathbf{Y}} (\mathbf{y}) = \frac{1}{\sqrt{ (2\pi)^n \det(K_Y)}} e^{-\frac{1}{2} ( \mathbf{y} - \mathbf{m}_Y )^T K_Y^{-1} ( \mathbf{y} - \mathbf{m}_Y )}$$
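This PDF is consistent with the change-of-variables formula through $\mathbf{Y} = A\mathbf{X} + \mathbf{m}_Y$, namely $f_{\mathbf{Y}}(\mathbf{y}) = f_{\mathbf{X}}(A^{-1}(\mathbf{y} - \mathbf{m}_Y)) / |\det A|$; a numerical sketch with NumPy (the invertible $A$, mean, and evaluation point are arbitrary choices):

```python
import numpy as np

# Y = A X + m_Y with a fixed invertible A, so K_Y = A A^T is nonsingular;
# all numbers are arbitrary illustrative choices
n = 3
A = np.array([[2.0, 0.3, 0.0],
              [0.1, 1.5, 0.2],
              [0.0, 0.4, 1.8]])
m_Y = np.array([1.0, -1.0, 0.5])
K_Y = A @ A.T

y = np.array([0.7, 0.2, -0.3])  # arbitrary evaluation point
r = y - m_Y

# The stated PDF formula
pdf = np.exp(-0.5 * r @ np.linalg.solve(K_Y, r)) / np.sqrt(
    (2 * np.pi) ** n * np.linalg.det(K_Y))

# Change of variables: f_Y(y) = f_X(A^{-1}(y - m_Y)) / |det A|
x = np.linalg.solve(A, r)
pdf_cv = (2 * np.pi) ** (-n / 2) * np.exp(-0.5 * x @ x) / abs(np.linalg.det(A))

print(np.isclose(pdf, pdf_cv))  # True
```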

Theorem: The PDF of a Gaussian random vector $\mathbf{Y}$ with singular $K_Y$ is

$$f_{\mathbf{Y}} (\mathbf{y}) = f_{\mathbf{Y}_a} (\mathbf{y}_a) \delta( \mathbf{y}_b - [\mathbf{m}_b + BA^{-1}(\mathbf{y}_a - \mathbf{m}_a) ] )$$

Here $\text{rank}(K_Y) = m < n$, and we assume (after reordering coordinates if necessary) that the first $m$ elements of $\mathbf{Y}$, denoted $\mathbf{Y}_a$, have a nonsingular covariance matrix; $\mathbf{Y}_b$ and $\mathbf{m}_b$ denote the remaining $n-m$ elements of $\mathbf{Y}$ and $\mathbf{m}_Y$. Decompose $K_Y = E \Lambda E^T = H H^T$, with $H = ( \sqrt{\lambda_1} \mathbf{e}_1, \cdots, \sqrt{\lambda_m} \mathbf{e}_m)$ collecting the $m$ nonzero eigenvalues, and partition $H = \begin{pmatrix} A \\ B \end{pmatrix}$, where $A$ is $m \times m$ and $B$ is $(n-m) \times m$.
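The delta factor reflects that $\mathbf{Y}_b$ is a deterministic affine function of $\mathbf{Y}_a$: writing $\mathbf{Y} = \mathbf{m}_Y + H\mathbf{Z}$ gives $\mathbf{Y}_a = \mathbf{m}_a + A\mathbf{Z}$ and $\mathbf{Y}_b = \mathbf{m}_b + B\mathbf{Z} = \mathbf{m}_b + BA^{-1}(\mathbf{Y}_a - \mathbf{m}_a)$. A sketch with NumPy (the factor $H$ and mean are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 3, 2  # rank(K_Y) = m < n, so K_Y is singular

# Pick an n-by-m factor H, so K_Y = H H^T has rank m;
# the numbers here are arbitrary illustrative choices
H = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [1.5, 1.0]])
A, B = H[:m, :], H[m:, :]  # the partition H = (A; B) from the text
mean = np.array([1.0, 2.0, 3.0])
m_a, m_b = mean[:m], mean[m:]

# Generate Y = m_Y + H Z with Z a standard m-dimensional Gaussian
Z = rng.standard_normal((1000, m))
Y = mean + Z @ H.T
Y_a, Y_b = Y[:, :m], Y[:, m:]

# Y_b is a deterministic affine function of Y_a: no randomness remains,
# which is why the PDF carries a delta factor
Y_b_pred = m_b + (Y_a - m_a) @ np.linalg.solve(A.T, B.T)
print(np.allclose(Y_b, Y_b_pred))  # True (exact up to floating point)
```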

Theorem: For jointly Gaussian random vectors $\mathbf{X}_1, \mathbf{X}_2$ (i.e., the stacked vector $(\mathbf{X}_1, \mathbf{X}_2)$ is a Gaussian random vector), independence is equivalent to zero cross-covariance:

$$\mathbf{X}_1 \perp\!\!\!\perp \mathbf{X}_2 \Leftrightarrow \text{Cov}(\mathbf{X}_1, \mathbf{X}_2) = 0$$
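The forward direction can be seen in the PDF: zero cross-covariance makes the joint covariance block-diagonal, so the joint density factorizes into the two marginals. A numerical sketch with NumPy (the covariance blocks, mean, and evaluation point are illustrative choices):

```python
import numpy as np

def gauss_pdf(y, mean, K):
    """Nonsingular multivariate Gaussian PDF, as in the theorem above."""
    r = y - mean
    n = len(mean)
    return np.exp(-0.5 * r @ np.linalg.solve(K, r)) / np.sqrt(
        (2 * np.pi) ** n * np.linalg.det(K))

# Jointly Gaussian (X1, X2) with Cov(X1, X2) = 0: the full covariance
# is block-diagonal (all numbers are illustrative choices)
K1 = np.array([[2.0, 0.5], [0.5, 1.0]])
K2 = np.array([[1.5]])
K = np.block([[K1, np.zeros((2, 1))],
              [np.zeros((1, 2)), K2]])
mean = np.array([0.0, 1.0, -1.0])

y = np.array([0.3, -0.4, 0.8])  # arbitrary evaluation point

# Zero cross-covariance makes the joint PDF factorize -- independence
joint = gauss_pdf(y, mean, K)
product = gauss_pdf(y[:2], mean[:2], K1) * gauss_pdf(y[2:], mean[2:], K2)
print(np.isclose(joint, product))  # True
```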