Modern iterative methods

Presentation Transcript


  1. Modern iterative methods • Basic iterative methods converge linearly • Modern iterative methods converge faster • Krylov subspace methods • Steepest descent method • Conjugate gradient (CG) method --- most popular • Preconditioned CG (PCG) method • GMRES for nonsymmetric matrices • Other methods (read yourself) • Chebyshev iterative method • Lanczos methods • Conjugate gradient normal residual (CGNR)

  2. Modern iterative methods • Ideas: • Minimizing the residual • Projecting onto a Krylov subspace • Thm: If A is an n-by-n real symmetric positive definite matrix, then the linear system $Ax = b$ and the minimization problem $\min_x \phi(x) = \tfrac{1}{2}x^T A x - b^T x$ have the same solution • Proof: see details in class
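
The step behind this theorem is standard; a short sketch, using the quadratic functional $\phi$ that the later slides minimize:

```latex
% Quadratic functional associated with the SPD system Ax = b
\phi(x) = \tfrac{1}{2}\, x^T A x - b^T x,
\qquad
\nabla \phi(x) = A x - b .
% A is symmetric positive definite, so \phi is strictly convex and has a unique
% minimizer x^*; the gradient vanishes there, i.e. A x^* = b. Hence the linear
% system and the minimization problem have the same solution.
```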

  3. Steepest descent method • Suppose we have an approximation $x_k$ • Choose the direction as the negative gradient of $\phi$: $p_k = -\nabla\phi(x_k) = b - Ax_k = r_k$ • If $r_k = 0$, stop: $x_k$ solves $Ax = b$ • Else, choose $\alpha_k$ to minimize $\phi(x_k + \alpha r_k)$

  4. Steepest descent method • Computation • Choose $\alpha_k$ as $\alpha_k = \dfrac{r_k^T r_k}{r_k^T A r_k}$ and set $x_{k+1} = x_k + \alpha_k r_k$, $r_{k+1} = b - A x_{k+1}$

  5. Algorithm – Steepest descent method
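
A minimal sketch of the steepest descent iteration in Python; the function name, tolerances, and the small test system are illustrative, not from the slides:

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-8, max_iter=1000):
    """Solve Ax = b for symmetric positive definite A by steepest descent."""
    x = x0.astype(float).copy()
    r = b - A @ x                      # residual = negative gradient of 1/2 x^T A x - b^T x
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:    # stop when the residual is small
            break
        Ar = A @ r
        alpha = (r @ r) / (r @ Ar)     # exact line search along the residual direction
        x += alpha * r
        r -= alpha * Ar                # update the residual without a second A @ x product
    return x

# Example: a small SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = steepest_descent(A, b, np.zeros(2))
print(x, A @ x - b)
```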

  6. Theory • Suppose A is symmetric positive definite. • Define the A-inner product $(x, y)_A = x^T A y$ • Define the A-norm $\|x\|_A = \sqrt{x^T A x}$ • Steepest descent method

  7. Theory • Thm: For the steepest descent method, we have • Proof: Exercise

  8. Theory • Rewrite the steepest descent method • Let the errors be $e_k = x - x_k$ • Lemma: For the method, we have

  9. Theory • Thm: For the steepest descent method, we have • Proof: See details in class (or as an exercise)

  10. Steepest descent method • Performance • Converges globally, for any initial data • If $\kappa_2(A) = O(1)$, it converges very fast • If $\kappa_2(A) \gg 1$, it converges very slowly!!! • Geometric interpretation • Contour plots are flat!! • The local best direction (steepest direction) is not necessarily a globally best direction • Computational experience shows that the method suffers a decreasing convergence rate after a few iteration steps because the search directions become linearly dependent!!!

  11. Conjugate gradient (CG) method • Since A is symmetric positive definite, the A-norm $\|x\|_A = \sqrt{x^T A x}$ is well defined • In the CG method, the direction vectors are chosen to be A-orthogonal (and called conjugate vectors), i.e. $p_i^T A p_j = 0$ for $i \ne j$

  12. CG method • In addition, we take the new direction vector as a linear combination of the old direction vector and the descent direction, as $p_{k+1} = r_{k+1} + \beta_k p_k$ • By the A-orthogonality assumption $p_{k+1}^T A p_k = 0$, we get $\beta_k = -\dfrac{r_{k+1}^T A p_k}{p_k^T A p_k}$

  13. Algorithm – CG method
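
A minimal CG sketch in the same spirit, assuming A is symmetric positive definite; the function name and defaults are illustrative:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-8, max_iter=None):
    """Solve Ax = b for symmetric positive definite A by the CG method."""
    max_iter = max_iter or len(b)       # in exact arithmetic CG stops in at most n steps
    x = x0.astype(float).copy()
    r = b - A @ x                       # residual
    p = r.copy()                        # first search direction is the residual
    rs_old = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs_old) < tol:
            break
        Ap = A @ p                      # the single matrix-vector product per iteration
        alpha = rs_old / (p @ Ap)       # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        # beta = rs_new / rs_old is algebraically equal to -(r^T A p)/(p^T A p) from slide 12
        p = r + (rs_new / rs_old) * p   # new direction, A-orthogonal to the previous ones
        rs_old = rs_new
    return x
```

In exact arithmetic the iteration terminates in at most n steps, matching the cost remark on slide 17; in floating point it is simply run until the residual is small.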

  14. An example • Initial guess • The approximate solutions

  15. CG method • In the CG method, the direction vectors $p_0, p_1, \dots$ are A-orthogonal! • Define the linear space as • Lemma: In the CG method, for m = 0, 1, …, we have • Proof: See details in class or as an exercise

  16. CG method • In the CG method, the new direction vector is A-orthogonal to the previous direction vectors, or equivalently to the subspace they span • Lemma: In the CG method, we have • Proof: See details in class or as an exercise • Thm (error estimate for the CG method): $\|x - x_m\|_A \le 2\left(\dfrac{\sqrt{\kappa_2(A)} - 1}{\sqrt{\kappa_2(A)} + 1}\right)^m \|x - x_0\|_A$

  17. CG method • Computational cost • At each iteration, 2 matrix-vector multiplications; this can be further reduced to 1 matrix-vector multiplication • In at most n steps, we get the exact solution!!! • The convergence rate depends on the condition number $\kappa_2(A)$ • $\kappa_2(A) = O(1)$: converges very fast!! • $\kappa_2(A) \gg 1$: converges slowly, but can be accelerated by preconditioning!!

  18. Preconditioning • Ideas: Replace $Ax = b$ by $C^{-1}Ax = C^{-1}b$, where • C is symmetric positive definite • $C^{-1}A$ is well-conditioned, i.e. $\kappa_2(C^{-1}A) \approx 1$ • systems of the form $Cz = r$ can be easily solved • Conditions for choosing the preconditioning matrix • $\kappa_2(C^{-1}A)$ as small as possible • $C^{-1}r$ is easy to compute • Trade-off

  19. Algorithm – PCG method
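
A PCG sketch; here the preconditioner is the simple diagonal (Jacobi) choice C = diag(A) from the next slide's list, used only as an illustration, and M_solve is a hypothetical helper that applies $C^{-1}$:

```python
import numpy as np

def pcg(A, b, x0, M_solve, tol=1e-8, max_iter=None):
    """Preconditioned CG: M_solve(r) applies C^{-1}, i.e. solves C z = r."""
    max_iter = max_iter or len(b)
    x = x0.astype(float).copy()
    r = b - A @ x
    z = M_solve(r)                      # preconditioned residual
    p = z.copy()
    rz_old = r @ z
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ap = A @ p
        alpha = rz_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz_old) * p   # keep the new direction A-orthogonal to the old ones
        rz_old = rz_new
    return x

# Jacobi preconditioner: C = diag(A), so applying C^{-1} is an elementwise division
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
d = np.diag(A)
x = pcg(A, b, np.zeros(3), lambda r: r / d)
print(np.allclose(A @ x, b))
```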

  20. Preconditioning • Ways to choose the matrix C (read yourself) • Diagonal part of A • Tridiagonal part of A • m-step Jacobi preconditioner • Symmetric Gauss-Seidel preconditioner • SSOR preconditioner • Incomplete Cholesky decomposition • Incomplete block preconditioning • Preconditioning based on domain decomposition • ……

  21. Extension of the CG method to nonsymmetric matrices • Biconjugate gradient (BiCG) method: • Solve $Ax = b$ and $A^T \tilde{x} = \tilde{b}$ simultaneously • Works well when A is positive definite but not symmetric • If A is symmetric, BiCG reduces to CG • Conjugate gradient squared (CGS) method • Useful when A has a special formula for computing Ax but its transpose does not • Multiplication by A is efficient but multiplication by its transpose is not

  22. Krylov subspace methods • Problem I. Linear system: $Ax = b$ • Problem II. Variational formulation: find $x$ such that $(Ax, v) = (b, v)$ for all $v$ • Problem III. Minimization problem: find $x$ minimizing $\phi(x) = \tfrac{1}{2}x^T A x - b^T x$ • Thm 1: Problem I is equivalent to Problem II • Thm 2: If A is symmetric positive definite, all three problems are equivalent

  23. Krylov subspace methods • To reduce the problem size, we replace $\mathbb{R}^n$ by a subspace $V_m = \mathrm{span}\{v_1, \dots, v_m\}$ • Subspace minimization: • Find $x_m \in x_0 + V_m$ • Such that $\phi(x_m) = \min_{x \in x_0 + V_m} \phi(x)$ • Subspace projection: find $x_m \in x_0 + V_m$ such that $(b - Ax_m, v) = 0$ for all $v \in V_m$

  24. Krylov subspace methods • To determine the coefficients, we have the Normal Equations • It is a linear system of dimension m!! • m = 1: line minimization, or line search, or 1D projection • By converting this formula into an iteration, we reduce the original problem to a sequence of line minimizations (successive line minimization).
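
A sketch of where the normal equations come from, using the subspace $V_m = \mathrm{span}\{v_1,\dots,v_m\}$ introduced on the previous slide:

```latex
% Write the subspace approximation as x_m = x_0 + \sum_{j=1}^{m} \alpha_j v_j.
% Minimizing \phi(x_m) over the coefficients (equivalently, requiring the
% residual to be orthogonal to V_m) gives the m-by-m normal equations
\sum_{j=1}^{m} (A v_j, v_i)\,\alpha_j = (r_0, v_i), \qquad i = 1,\dots,m,
\qquad r_0 = b - A x_0 .
% For m = 1 this is a single scalar equation: a line search along v_1.
```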

  25. For symmetric matrices • Positive definite • Steepest descent method • CG method • Preconditioned CG method • Not positive definite • MINRES (minimum residual method)

  26. For nonsymmetric matrices • Normal equations method (or CGNR method) • GMRES (generalized minimum residual method) • Saad & Schultz, 1986 • Ideas: • In the m-th step, minimize the residual $\|b - Ax\|_2$ over the set $x_0 + K_m(A, r_0)$ • Use Arnoldi (full orthogonalization) vectors instead of Lanczos vectors • If A is symmetric, it reduces to the conjugate residual method

  27. Algorithm – GMRES
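
A sketch of one GMRES cycle, built from the Arnoldi process plus a small least-squares solve; the function name, restart length m, and tolerances are illustrative:

```python
import numpy as np

def gmres(A, b, x0, m=50, tol=1e-8):
    """One restart cycle of GMRES: minimize ||b - Ax||_2 over x0 + K_m(A, r0)."""
    n = len(b)
    x0 = x0.astype(float)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta < tol:
        return x0
    m = min(m, n)
    Q = np.zeros((n, m + 1))            # Arnoldi (orthonormal) basis of the Krylov subspace
    H = np.zeros((m + 1, m))            # upper Hessenberg matrix: A Q_m = Q_{m+1} H_m
    Q[:, 0] = r0 / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        lucky = H[j + 1, j] < 1e-14     # "lucky breakdown": the Krylov subspace is invariant
        if not lucky:
            Q[:, j + 1] = w / H[j + 1, j]
        # Small least-squares problem: min_y || beta*e1 - H_j y ||_2
        e1 = np.zeros(j + 2)
        e1[0] = beta
        Hj = H[:j + 2, :j + 1]
        y = np.linalg.lstsq(Hj, e1, rcond=None)[0]
        if lucky or np.linalg.norm(Hj @ y - e1) < tol:
            return x0 + Q[:, :j + 1] @ y
    return x0 + Q[:, :m] @ y            # best approximation found in this cycle
```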

  28. More topics on matrix computations • Eigenvalue & eigenvector computations • If A is symmetric: Power method • If A is a general matrix • Householder matrix (transform) • QR method
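
A minimal power method sketch for the dominant eigenpair; the function name, defaults, and the small test matrix are illustrative:

```python
import numpy as np

def power_method(A, x0, tol=1e-10, max_iter=1000):
    """Estimate the dominant eigenvalue and eigenvector of A by the power method."""
    x = x0 / np.linalg.norm(x0)
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = x @ y                 # Rayleigh-quotient estimate of the eigenvalue
        norm_y = np.linalg.norm(y)
        if norm_y == 0.0:               # x landed in the null space; give up
            break
        x = y / norm_y                  # normalize to avoid overflow/underflow
        if abs(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    return lam, x

# Example: dominant eigenvalue of a small symmetric matrix
A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_method(A, np.ones(2))
print(lam)                              # approx. (5 + sqrt(5)) / 2 = 3.618...
```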

  29. More topics on matrix computations • Singular value decomposition (SVD) • Thm: Let A be an m-by-n real matrix; there exist orthogonal matrices U & V such that $A = U \Sigma V^T$, where $\Sigma = \mathrm{diag}(\sigma_1, \dots, \sigma_p)$ with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_p \ge 0$ and $p = \min(m, n)$ • Proof: Exercise
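
A quick numerical check of the SVD theorem with NumPy; the test matrix is an arbitrary illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))              # any real m-by-n matrix (m=5, n=3 here)
U, s, Vt = np.linalg.svd(A)                  # U: 5x5 orthogonal, Vt: 3x3 orthogonal
Sigma = np.zeros((5, 3))
Sigma[:3, :3] = np.diag(s)                   # singular values on the diagonal, decreasing
print(np.allclose(A, U @ Sigma @ Vt))        # A = U * Sigma * V^T
print(np.allclose(U.T @ U, np.eye(5)), np.allclose(Vt @ Vt.T, np.eye(3)))
```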
