Vector norms

Vector norm $\|\cdot\|: V \to \mathbb{R}_{\ge 0}$ is any non-negative function on a vector space that is positive for non-zero vectors ($\|x\| > 0$ for $x \ne 0$), absolutely homogeneous ($\|\alpha x\| = |\alpha| \|x\|$), and satisfies the triangle inequality ($\|x + y\| \le \|x\| + \|y\|$).

Vector p-norm is the vector norm on the Euclidean n-space defined by $\|x\|_p = (\sum_{i=1}^n |x_i|^p)^{1/p}$, $p \in [1, \infty)$; for $p = \infty$ it is the limit of the p-norms, $\|x\|_\infty = \max_i |x_i|$. Vector 2-norm or Euclidean norm is the norm induced by the Euclidean inner product: $\|x\|_2 = (\sum_{i=1}^n |x_i|^2)^{1/2}$. The Euclidean norm is the default vector norm, and is often simply denoted $\|\cdot\|$ or $|\cdot|$.
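A minimal sketch, assuming NumPy is available, comparing the formula above with `numpy.linalg.norm` for a few values of p (the vector and values are illustrative only):

```python
import numpy as np

x = np.array([3.0, -4.0, 12.0])

for p in (1, 2, np.inf):
    # For p = inf, the p-norm reduces to the maximum absolute entry.
    manual = np.max(np.abs(x)) if p == np.inf else np.sum(np.abs(x) ** p) ** (1 / p)
    assert np.isclose(manual, np.linalg.norm(x, ord=p))
    print(f"p={p}: {np.linalg.norm(x, ord=p):.4f}")
```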

Matrix norms

Matrix norm $\|\cdot\|: M_{m,n} \to \mathbb{R}_{\ge 0}$ is any vector norm on the vector space of matrices that is sub-multiplicative: $\forall A \in M_{m,n}$, $\forall B \in M_{n,l}$, $\|A B\| \le \|A\| \|B\|$.
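A quick numerical check of sub-multiplicativity, assuming NumPy, using the Frobenius norm (a matrix norm) on a pair of random matrices; this illustrates the inequality but is of course not a proof:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5))

# Sub-multiplicativity: ||A B||_F <= ||A||_F ||B||_F.
lhs = np.linalg.norm(A @ B, ord="fro")
rhs = np.linalg.norm(A, ord="fro") * np.linalg.norm(B, ord="fro")
assert lhs <= rhs
print(lhs, "<=", rhs)
```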

Matrix p-norms

Matrix p-norm is the operator norm on a space of matrices induced by the p-norm on Euclidean spaces: $\|A\|_p = \max_{\|x\|_p = 1} \|A x\|_p$. Matrix 2-norm or spectral norm of a matrix is its largest singular value: $\|A\|_2 = \sigma_1(A)$. Matrix 1-norm or maximum column sum norm is the largest column sum: $\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^m |a_{ij}|$. Matrix ∞-norm or maximum row sum norm is the largest row sum: $\|A\|_\infty = \max_{1 \le i \le m} \sum_{j=1}^n |a_{ij}|$.
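A small sketch, assuming NumPy, checking that the column-sum, singular-value, and row-sum characterizations above agree with the induced matrix p-norms as computed by `numpy.linalg.norm` (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 4.0]])

one_norm = np.abs(A).sum(axis=0).max()             # maximum column sum
inf_norm = np.abs(A).sum(axis=1).max()             # maximum row sum
two_norm = np.linalg.svd(A, compute_uv=False)[0]   # largest singular value

assert np.isclose(one_norm, np.linalg.norm(A, ord=1))
assert np.isclose(inf_norm, np.linalg.norm(A, ord=np.inf))
assert np.isclose(two_norm, np.linalg.norm(A, ord=2))
```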

Schatten p-norms

Schatten p-norm of a matrix is the p-norm of the vector of its singular values: $\|A\|_p = \|\sigma(A)\|_p$, $p \in [1, \infty]$, usually written with three vertical lines to distinguish it from the matrix p-norms. Schatten p-norms are special cases of symmetric gauge functions (see e.g. [@Horn1990, Sec 3.4-3.5]). Schatten p-norms are decreasing in p: $\forall 1 \le p < q$, $\|A\|_p \ge \|A\|_q$; in particular, the nuclear norm ≥ the Frobenius norm ≥ the spectral norm. Schatten 1-norm or nuclear norm is the sum of the singular values: $\|A\|_* = \sum_{i=1}^r \sigma_i(A)$. Schatten 2-norm is the 2-norm of the singular values, which equals the Frobenius norm or Euclidean norm, the 2-norm of the matrix's vectorization: $\|A\|_F = (\sum_{i=1}^m \sum_{j=1}^n |a_{ij}|^2)^{1/2} = (\text{tr}(A A^T))^{1/2}$. One can easily show the equality using $\text{tr}(A A^T) = \sum_{i=1}^r \lambda_i(A A^T)$ and $\lambda(A A^T) = \sigma^2(A)$. Schatten ∞-norm equals the matrix 2-norm, aka the spectral norm, which is the largest singular value.
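A minimal sketch, assuming NumPy, of the Schatten p-norms computed from the singular values, together with the ordering nuclear ≥ Frobenius ≥ spectral noted above (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
s = np.linalg.svd(A, compute_uv=False)  # singular values of A

nuclear = s.sum()                  # Schatten 1-norm
frobenius = np.sqrt((s ** 2).sum())  # Schatten 2-norm
spectral = s.max()                 # Schatten inf-norm

assert np.isclose(nuclear, np.linalg.norm(A, ord="nuc"))
assert np.isclose(frobenius, np.linalg.norm(A, ord="fro"))
assert np.isclose(spectral, np.linalg.norm(A, ord=2))
assert nuclear >= frobenius >= spectral  # decreasing in p
```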

Misc

$L_{p,q}$ norms

$L_{p,q}$ norm, $p,q \in [1, \infty]$, is a function defined as: $\|A\|_{p,q} = (\sum_{j=1}^n (\sum_{i=1}^m |a_{ij}|^p)^{q/p})^{1/q}$. An $L_{p,q}$ norm need not be a matrix norm. $L_{p,p}$ norm of a matrix equals the p-norm of its vectorization: $\|A\|_{p,p} = \|\text{vec}(A)\|_p$. $L_{1,1}$ norm of a matrix is the 1-norm of its vectorization, $\|A\|_{1,1} = \sum_{i=1}^m \sum_{j=1}^n |a_{ij}|$, which is a matrix norm. $L_{2,2}$ norm of a matrix is the Frobenius norm, aka the Euclidean norm: $\|A\|_{2,2} = \|A\|_F$. $L_{\infty,\infty}$ norm is not a matrix norm. $L_{2,1}$ norm of a matrix is the sum of the 2-norms of its columns: $\|A\|_{2,1} = \sum_{j=1}^n (\sum_{i=1}^m |a_{ij}|^2)^{1/2}$. The $L_{2,1}$ norm is useful if the matrix represents a data set where each column vector is an observation.
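A small sketch, assuming NumPy, of the $L_{2,1}$ norm as the sum of column 2-norms, alongside the $L_{2,2}$ (Frobenius) norm; the matrix stands in for a toy data set with columns as observations:

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [2.0, 3.0, -1.0]])

l21 = np.linalg.norm(A, axis=0).sum()  # 2-norm of each column, then sum
l22 = np.linalg.norm(A)                # Frobenius norm = ||vec(A)||_2

print("L_{2,1}:", l21, " L_{2,2}:", l22)
```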

Notes:


🏷 Category=Algebra Category=Matrix Theory