Introduction to Numerical Analysis
Numerical error: truncation, round-off, error propagation.
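The interplay between truncation and round-off error shows up clearly in a forward-difference derivative: truncation error shrinks with the step size h while round-off error grows as 1/h. A minimal numpy sketch (the test function and step sizes are arbitrary choices):

```python
import numpy as np

# Forward difference f'(x) ~ (f(x+h) - f(x)) / h for f = exp at x = 1.
# Truncation error ~ (h/2) f''(x); round-off error ~ eps |f(x)| / h.
x, exact = 1.0, np.exp(1.0)  # d/dx exp(x) = exp(x)
for h in [1e-1, 1e-4, 1e-8, 1e-12]:
    approx = (np.exp(x + h) - np.exp(x)) / h
    print(f"h = {h:.0e}   error = {abs(approx - exact):.1e}")
# The error first falls as h shrinks (truncation dominates), then rises
# again (round-off dominates); the best h is near sqrt(eps) ~ 1e-8.
```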
Numerical Linear Algebra
Linear equations
Classical methods (both flavors are sketched after this list):
- Direct methods for small and dense matrices:
  - Gaussian elimination: LU decomposition with pivoting ($2/3 n^3$ flops),
    back substitution ($n^2$ flops);
  - Cholesky or LDL decomposition with symmetric pivoting ($1/3 n^3$ flops),
    back substitution;
- Iteration methods for large and sparse matrices:
  Jacobi iteration, Gauss-Seidel iteration, successive over-relaxation (SOR),
  Chebyshev iteration;
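A minimal sketch of both flavors, assuming numpy/scipy (the diagonally dominant test matrix guarantees that Jacobi iteration converges; the size and tolerance are arbitrary choices):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, cho_factor, cho_solve

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
A = A + A.T                                # symmetric
A += np.diag(3.0 * np.abs(A).sum(axis=1)) # strict diagonal dominance =>
                                          # SPD, and Jacobi converges
b = rng.standard_normal(n)

# Direct: LU with partial pivoting (~2/3 n^3 flops), then triangular solves.
lu, piv = lu_factor(A)
x_lu = lu_solve((lu, piv), b)

# Direct: Cholesky (~1/3 n^3 flops), exploiting symmetry.
c, low = cho_factor(A)
x_ch = cho_solve((c, low), b)

# Iterative: Jacobi, x_{k+1} = D^{-1} (b - (A - D) x_k).
d = np.diag(A)
x = np.zeros(n)
for k in range(200):
    x = (b - A @ x + d * x) / d
    if np.linalg.norm(A @ x - b) <= 1e-12 * np.linalg.norm(b):
        break

print(f"Jacobi iterations: {k + 1},",
      f"max deviation from LU solution: {np.abs(x - x_lu).max():.1e}")
```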
Matrix norm, condition number, error estimation.
Krylov subspace methods:
(symmetric Lanczos process + LDLt factorization + clever recursions)
- symmetric positive-definite: conjugate gradient (CG);
- symmetric: MINRES (minimal residual), SYMMLQ;
- square: GMRES (generalized minimal residual), QMR (quasi-minimal residual),
  BiCG (biconjugate gradient), CGS (CG squared), BiCGStab (BiCG stabilized);
- least squares: LSQR, LSMR (least-squares minimal residual);
Krylov subspace methods are successful only if they can be effectively preconditioned.
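A small illustration of that point, assuming scipy: restarted GMRES on a 2D Poisson matrix, with and without an incomplete-LU preconditioner (the grid size and drop tolerance are arbitrary choices):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2D Poisson matrix (5-point stencil) on a 50x50 grid: large, sparse, SPD.
m = 50
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
A = (sp.kron(sp.eye(m), T) + sp.kron(T, sp.eye(m))).tocsc()
b = np.ones(A.shape[0])

def gmres_iters(M=None):
    """Solve A x = b with restarted GMRES; return the iteration count."""
    count = 0
    def cb(pr_norm):          # called once per inner iteration
        nonlocal count
        count += 1
    x, info = spla.gmres(A, b, M=M, callback=cb, callback_type="pr_norm")
    return count

print("no preconditioner: ", gmres_iters(), "iterations")

# Incomplete LU of A, wrapped as a preconditioner M ~ A^{-1}.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, ilu.solve)
print("ILU preconditioner:", gmres_iters(M), "iterations")
```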
Notes:
When solving with Gaussian elimination, compare the main diagonal entries after
elimination with those before: the smaller the ratio, the more accuracy the
solution loses (?).
(Using double-precision floating point) when the condition number of the
coefficient matrix is on the order of 1e10, the solution is accurate to only
about 1e-3 or 1e-4. — 陈璞
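A small experiment consistent with this rule of thumb, assuming scipy (the 8x8 Hilbert matrix is chosen precisely because its condition number is about 1.5e10): the relative error of the computed solution is roughly bounded by cond(A) times machine epsilon, though the observed error is often smaller than the bound.

```python
import numpy as np
from scipy.linalg import hilbert

H = hilbert(8)                 # cond(H) ~ 1.5e10
x_true = np.ones(8)
b = H @ x_true                 # right-hand side with a known exact solution
x = np.linalg.solve(H, b)      # LU with partial pivoting under the hood

eps = np.finfo(float).eps
print(f"cond(H)        = {np.linalg.cond(H):.1e}")
print(f"cond(H) * eps  = {np.linalg.cond(H) * eps:.1e}   (rule-of-thumb bound)")
print(f"relative error = {np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.1e}")
```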
Eigenvalue problems:
- power method: acceleration, orthonormalization;
- inverse iteration;
- QR algorithm (iterated QR decompositions), Jacobi's method;
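A sketch of the first two methods, assuming numpy/scipy (the matrix size, iteration counts, and shift are arbitrary choices):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
n = 100
A = rng.standard_normal((n, n))
A = A + A.T  # symmetric, so the eigenvalues are real

# Power method: repeated multiplication + normalization converges to the
# eigenvector of the eigenvalue with the largest magnitude.
x = rng.standard_normal(n)
for _ in range(500):
    x = A @ x
    x /= np.linalg.norm(x)           # normalize to avoid overflow
lam = x @ A @ x                      # Rayleigh quotient estimate

# Inverse iteration: the power method on (A - s I)^{-1} converges to the
# eigenpair nearest the shift s; factor once, then solve repeatedly.
s = lam + 1e-3
lu, piv = lu_factor(A - s * np.eye(n))
y = rng.standard_normal(n)
for _ in range(20):
    y = lu_solve((lu, piv), y)
    y /= np.linalg.norm(y)

print(abs(lam), np.abs(np.linalg.eigvalsh(A)).max())  # should agree
```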
Root Finding
Solving Nonlinear Equations:
A merit function for solving nonlinear equations is a scalar-valued function
that indicates whether a new iterate is better or worse than the current one,
in the sense of making progress toward a root.
Merit functions are usually obtained by combining the components of the vector
field, e.g. as the sum of squares $1/2 \|F(x)\|^2$;
every such choice has drawbacks, e.g. local minima that are not roots.
Line search and trust-region techniques play equally important roles in optimization,
but one can argue that trust-region algorithms have certain theoretical advantages
in solving nonlinear equations.
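A minimal sketch of a line-search Newton method guarded by the sum-of-squares merit function, assuming numpy (the test system and backtracking constants are arbitrary choices):

```python
import numpy as np

def F(x):   # a small nonlinear system with a root at (1, 1)
    return np.array([x[0]**2 + x[1]**2 - 2.0,
                     np.exp(x[0] - 1.0) - x[1]])

def J(x):   # its Jacobian
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [np.exp(x[0] - 1.0), -1.0]])

def merit(x):   # sum-of-squares merit function 1/2 ||F(x)||^2
    return 0.5 * F(x) @ F(x)

x = np.array([2.0, 0.5])
for _ in range(50):
    p = np.linalg.solve(J(x), -F(x))   # Newton step
    # Along the Newton direction the directional derivative of the merit
    # function is -||F||^2 = -2 * merit(x), so it is a descent direction.
    g = -2.0 * merit(x)
    t = 1.0
    # Backtracking line search with the Armijo condition (c = 1e-4).
    while merit(x + t * p) > merit(x) + 1e-4 * t * g:
        t *= 0.5
    x = x + t * p
    if np.linalg.norm(F(x)) < 1e-12:
        break

print(x, F(x))   # from this start, converges to the root (1, 1)
```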
Optimization
Function Approximation
Least-squares solution of overdetermined systems of equations.
QR decomposition:
- modified Gram-Schmidt (MGS): $2 m n^2$ flops for an m-by-n matrix A;
- Householder orthogonalization: $4 (m - n/3) n^2$ flops to form the Q matrix
  explicitly, but more accurate than MGS when A is rank-deficient
  [@Golub2013, Sec 5.2.9];
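A sketch contrasting the two orthogonalization schemes, assuming numpy/scipy (the Hilbert-matrix columns are just a convenient ill-conditioned test case): MGS loses orthogonality of Q roughly in proportion to cond(A), while Householder QR keeps it near machine precision.

```python
import numpy as np
from scipy.linalg import hilbert, solve_triangular

def mgs(A):
    """Thin QR factorization via modified Gram-Schmidt."""
    A = A.astype(float).copy()
    m, n = A.shape
    Q, R = np.empty((m, n)), np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        for j in range(k + 1, n):
            R[k, j] = Q[:, k] @ A[:, j]
            A[:, j] -= R[k, j] * Q[:, k]
    return Q, R

A = hilbert(12)[:, :8]      # ill-conditioned 12x8 test matrix
b = np.ones(12)

Q1, R1 = mgs(A)
Q2, R2 = np.linalg.qr(A)    # Householder-based QR

# Loss of orthogonality ||Q^T Q - I||: MGS degrades with cond(A),
# Householder stays near machine precision.
I = np.eye(8)
print("MGS:        ", np.linalg.norm(Q1.T @ Q1 - I))
print("Householder:", np.linalg.norm(Q2.T @ Q2 - I))

# Least-squares solution of the overdetermined system via QR: R x = Q^T b.
x = solve_triangular(R2, Q2.T @ b)
print("residual norm:", np.linalg.norm(A @ x - b))
```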
Numerical Calculus
References
- Isaacson and Keller, 1966. Analysis of Numerical Methods.
- Hamming, 1962, 1973. Numerical Methods for Scientists and Engineers.
- Golub and Van Loan, 1983, 2013. Matrix Computations.
- Trefethen and Bau, 1997. Numerical Linear Algebra.
- Dennis and Schnabel, 1983, 1996.
Numerical Methods for Unconstrained Optimization and Nonlinear Equations.
- Nocedal and Wright, 1999, 2006. Numerical Optimization.
- Süli and Mayers, 2003. An Introduction to Numerical Analysis.
- Hogben (ed.), 2006, 2013. Handbook of Linear Algebra.
🏷 Category=Computation