Polar Decomposition

Lemma (7.3.1): $A = X \Lambda Y$

Theorem (7.3.2; Polar decomposition): $A = P U$, where $P = (A A^∗)^{1/2}$ is positive semi-definite and $U$ is unitary.

Theorem (7.3.4): Given $A \in M_n$ with polar decomposition $A = P U$, $A$ is normal iff $P U = U P$.
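A quick numerical sketch (NumPy assumed, not part of the notes): the polar factors can be read off the SVD, since $A = V \Sigma W^∗$ gives $P = V \Sigma V^∗$ and $U = V W^∗$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# SVD: A = V @ diag(s) @ Wh, where Wh = W*.
V, s, Wh = np.linalg.svd(A)

# Polar factors: P = V diag(s) V* (positive semi-definite), U = V W* (unitary).
P = V @ np.diag(s) @ V.conj().T
U = V @ Wh

assert np.allclose(P @ U, A)                      # A = P U
assert np.allclose(U @ U.conj().T, np.eye(3))     # U unitary
```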

Singular Value Decomposition

Theorem (7.3.5; Singular Value Decomposition): $A = V \Sigma W^∗$, where $A \in M_{m,n}$, $V$ and $W$ are unitary matrices, and $\Sigma$ is non-negative diagonal. If $A$ is real, then $V, W$ are also real.

Singular values are the "diagonal" entries of $\Sigma$, denoted as $\sigma_1, \dots, \sigma_q$. Left and right singular vectors are the columns of $V$ and $W$, respectively.

The singular values of $A$ are the eigenvalues of the polar factor $P$.
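A sketch of this correspondence (NumPy assumed): the eigenvalues of $P = V \Sigma V^∗$ are exactly the singular values of $A$.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

V, s, Wh = np.linalg.svd(A)        # singular values s, in descending order
P = V @ np.diag(s) @ V.T           # polar factor P (real case, so V* = V^T)

# The eigenvalues of P coincide with the singular values of A.
eig_P = np.sort(np.linalg.eigvalsh(P))[::-1]
assert np.allclose(eig_P, s)
```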

If $A$ is normal, then $A A^∗ = A^∗ A$ implies that $A A^∗$ and $A^∗ A$ have the same eigenvectors. This does not mean that $V = W$ in $A = V \Sigma W^∗$: corresponding left and right singular vectors may differ by a phase factor $e^{i \theta_k}$.

If $A = U \Lambda U^∗$ with $\lambda_k = |\lambda_k| e^{i \theta_k}$ (take $\theta_k = 0$ if $\lambda_k = 0$), let $|\Lambda| = \mathrm{diag}\{|\lambda_1|, \dots, |\lambda_n|\}$ and $D = \mathrm{diag}\{e^{i \theta_1}, \dots, e^{i \theta_n}\}$; then an SVD of $A$ is $A = U |\Lambda| (U D^∗)^∗$.
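A sketch of this construction (NumPy assumed; the random normal matrix is an illustrative assumption): build a normal $A$ from a spectral decomposition, then verify $A = U |\Lambda| (U D^∗)^∗$.

```python
import numpy as np

rng = np.random.default_rng(2)
# Build a normal matrix: unitary conjugation of a complex diagonal.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
lam = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # nonzero a.s.
A = Q @ np.diag(lam) @ Q.conj().T

abs_L = np.diag(np.abs(lam))       # |Lambda|
D = np.diag(lam / np.abs(lam))     # diag of phases e^{i theta_k}
W = Q @ D.conj().T                 # right singular vectors: U D*

# A = U |Lambda| (U D*)*, an SVD up to ordering of the singular values.
assert np.allclose(Q @ abs_L @ W.conj().T, A)
```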

Theorem (7.3.7): Given $\tilde{A} = [0, A; A^∗, 0]$ for $A \in M_{m,n}$ and $q = \min\{m, n\}$, the singular values of $A$ are $\sigma_1, \dots, \sigma_q$ iff the eigenvalues of $\tilde{A}$ are $\sigma_1, \dots, \sigma_q, -\sigma_1, \dots, -\sigma_q$ together with $m + n - 2q$ zeros.
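A numerical sketch of Theorem 7.3.7 (NumPy assumed): the eigenvalues of the Hermitian augmented matrix are $\pm \sigma_i$ padded with zeros.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 4, 2
A = rng.standard_normal((m, n))
s = np.linalg.svd(A, compute_uv=False)   # q = min(m, n) singular values

# Augmented matrix [0, A; A*, 0]; A real, so A* = A^T.
A_tilde = np.block([[np.zeros((m, m)), A],
                    [A.T, np.zeros((n, n))]])
eigs = np.sort(np.linalg.eigvalsh(A_tilde))

# Expected spectrum: {sigma_i, -sigma_i} plus m + n - 2q zeros.
expected = np.sort(np.concatenate([s, -s, np.zeros(m + n - 2 * len(s))]))
assert np.allclose(eigs, expected)
```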

$A^∗, A^T, \bar{A}$ have the same singular values as $A$. If $U$ and $V$ are unitary, then $U A V$ and $A$ have the same singular values. $\forall c \in \mathbb{C}, \Sigma(c A) = |c| \Sigma(A)$.
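These invariances can be checked numerically (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
s = np.linalg.svd(A, compute_uv=False)

# A*, A^T, and conj(A) share the singular values of A.
assert np.allclose(np.linalg.svd(A.conj().T, compute_uv=False), s)
assert np.allclose(np.linalg.svd(A.T, compute_uv=False), s)
assert np.allclose(np.linalg.svd(A.conj(), compute_uv=False), s)

# Unitary multiplication on either side preserves singular values.
U, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
V, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
assert np.allclose(np.linalg.svd(U @ A @ V, compute_uv=False), s)

# Scaling: Sigma(cA) = |c| Sigma(A).
c = -2.0 + 1.0j
assert np.allclose(np.linalg.svd(c * A, compute_uv=False), abs(c) * s)
```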

Theorem (7.3.9; Interlacing property for singular values)

Theorem (7.3.10; Analog of Courant-Fischer Theorem)

Pseudo-inverse

The Moore-Penrose pseudoinverse of $A = V \Sigma W^∗$ is defined as $A^\dagger \equiv W \Sigma^\dagger V^∗$, where $\Sigma^\dagger$ is the transpose of $\Sigma$ with positive singular values replaced by their reciprocals.

Properties:

  • $A A^\dagger$ and $A^\dagger A$ are Hermitian.
  • $A A^\dagger A = A$ and $A^\dagger A A^\dagger = A^\dagger$.
  • If $A$ is an invertible square matrix, then $A^\dagger = A^{-1}$.
  • $A^\dagger$ is unique.
  • The minimum-norm least-squares solution to $A x = b$ is $x = A^\dagger b$.
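A sketch verifying the definition and these properties (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((5, 3))     # tall matrix, full column rank a.s.

# Pseudoinverse from the SVD: A_dagger = W Sigma_dagger V*.
V, s, Wh = np.linalg.svd(A, full_matrices=True)
Sigma_pinv = np.zeros((3, 5))                  # transpose shape of Sigma
Sigma_pinv[:3, :3] = np.diag(1.0 / s)          # reciprocals of positive sigmas
A_pinv = Wh.conj().T @ Sigma_pinv @ V.conj().T

assert np.allclose(A_pinv, np.linalg.pinv(A))  # matches NumPy's pinv

# Penrose conditions.
assert np.allclose(A @ A_pinv @ A, A)
assert np.allclose(A_pinv @ A @ A_pinv, A_pinv)
assert np.allclose((A @ A_pinv).conj().T, A @ A_pinv)
assert np.allclose((A_pinv @ A).conj().T, A_pinv @ A)

# Least-squares solution of A x = b.
b = rng.standard_normal(5)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(A_pinv @ b, x_ls)
```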

Theorem (p. 426 Q26; simultaneous singular value decomposition)

Applications

The effective/numerical rank of a matrix is the number of singular values whose magnitudes exceed the measurement error.

(7.4.1; Nearest rank-$k$ matrix) Given $A \in M_{m,n}$ with $\mathrm{rank}(A) = k$ and SVD $A = V \Sigma W^∗$, then over all $A_1 \in M_{m,n}$ with $\mathrm{rank}(A_1) = k_1 \le k$, $\min \| A - A_1 \|_2 = \| A - V \Sigma_1 W^∗ \|_2 = \sigma_{k_1 + 1}$, where $\Sigma_1 = \mathrm{diag}\{\sigma_1, \dots, \sigma_{k_1}, 0, \dots, 0\}_{m,n}$.
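A sketch of the truncated-SVD approximation (NumPy assumed): the spectral-norm error of the best rank-$k_1$ approximation equals the first dropped singular value.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 4))
V, s, Wh = np.linalg.svd(A, full_matrices=False)

k1 = 2
A1 = V[:, :k1] @ np.diag(s[:k1]) @ Wh[:k1, :]   # truncated SVD, rank k1

# Spectral-norm error equals sigma_{k1+1} (0-indexed: s[k1]).
assert np.allclose(np.linalg.norm(A - A1, 2), s[k1])
```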

Theorem (7.4.10): If $A \in M_{m,n}$, $B \in M_{n,m}$, and $A B$ and $B A$ are positive semi-definite, then $\mathrm{tr}(A B) = \mathrm{tr}(B A) = \sum_{i=1}^q \sigma_i(A) \sigma_{\tau(i)}(B)$ for some permutation $\tau$.


🏷 Category=Algebra Category=Matrix Theory