Multigrid Methods


Presentation Transcript


  1. Multigrid Methods Jinchao Xu & James Brannick, Department of Mathematics, Penn State

  2. Sources • Multigrid-related webpages: http://multigrid.org & http://mgnet.org (newsletter, software repository) • Copper Mountain Conference, April 7-11, 2008

  3. Multilevel methods have been developed for... • PDEs, CFD, porous media, elasticity, electromagnetics. • Purely algebraic problems, with no physical grid; for example, network & geodetic survey problems. • Image reconstruction & tomography. • Optimization (e.g., the traveling salesman & long transportation problems). • Statistical mechanics, Ising spin models. • Quantum chromodynamics. • Quadrature & generalized FFTs. • Integral equations. The project will explore the use of MG methods for various applications of scalar anisotropic diffusion equations.

  4. Intro. to MG: outline 1. Model problems: 1-D & 2-D Poisson equations. 2. Basic iterative methods: convergence tests, analysis. 3. Elements of multigrid: relaxation, coarsening, implementation. (Time permitting) Complexity, diagnostics.

  5. Suggested reading (check also the MGNet repository) • A. Brandt, “Multi-level Adaptive Solutions to Boundary Value Problems,” Math. Comp., 31, 1977, pp. 333-390. • A. Brandt, “Multigrid Techniques: 1984 Guide with Applications to Computational Fluid Dynamics,” GMD, 1984. • W. Hackbusch, “Multi-Grid Methods & Applications,” Springer, 1985. • W. Hackbusch & U. Trottenberg, eds., “Multigrid Methods,” Springer-Verlag, 1982. • S. McCormick, ed., “Multigrid Methods,” SIAM Frontiers in Applied Math. III, 1987. • U. Trottenberg, C. Oosterlee, & A. Schüller, “Multigrid,” Academic Press, 2000. • P. Wesseling, “An Introduction to Multigrid Methods,” Wiley, 1992. • J. Xu & L. Zikatanov, “The Method of Alternating Projections and the Method of Subspace Corrections in Hilbert Space,” J. AMS, 2002.

  6. 1. Model problems • 1-D boundary value problem: $-u''(x) + \sigma u(x) = f(x)$, $0 < x < 1$, $\sigma \ge 0$, with $u(0) = u(1) = 0$. • Grid: $x_i = ih$, $h = 1/N$, $i = 0, 1, \dots, N$. • Let $v_i \approx u(x_i)$ & $f_i = f(x_i)$ for $i = 1, \dots, N-1$. This discretizes the variables, but what about the equations?

  7. Approximate $u''(x)$ via Taylor series • Approximate the 2nd derivative using Taylor series: $u(x_{i\pm1}) = u(x_i) \pm h\,u'(x_i) + \tfrac{h^2}{2}u''(x_i) \pm \tfrac{h^3}{6}u'''(x_i) + O(h^4)$. • Summing & solving: $u''(x_i) = \dfrac{u(x_{i-1}) - 2u(x_i) + u(x_{i+1})}{h^2} + O(h^2)$.

  8. Approximate the equation via finite differences • Approximate the BVP $-u''(x) + \sigma u(x) = f(x)$, $0 < x < 1$, by a finite difference scheme: $\dfrac{-v_{i-1} + 2v_i - v_{i+1}}{h^2} + \sigma v_i = f_i$, $1 \le i \le N-1$, with $v_0 = v_N = 0$.

  9. Discrete model problem • Letting $v = (v_1, v_2, \dots, v_{N-1})^T$ & $f = (f_1, f_2, \dots, f_{N-1})^T$, we obtain the matrix equation $Av = f$, where $A$ is $(N-1) \times (N-1)$, symmetric, positive definite, & $A = \dfrac{1}{h^2}\,\mathrm{tridiag}(-1,\; 2 + \sigma h^2,\; -1)$.
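As an aside (not part of the slides), a minimal numpy sketch of assembling this matrix & solving the system directly; the helper name poisson_1d is our own:

    import numpy as np

    def poisson_1d(N, sigma=0.0):
        """Assemble A = tridiag(-1, 2 + sigma*h^2, -1)/h^2 of size (N-1)x(N-1)."""
        h = 1.0 / N
        return (np.diag(np.full(N - 1, 2.0 + sigma * h**2))
                + np.diag(np.full(N - 2, -1.0), 1)
                + np.diag(np.full(N - 2, -1.0), -1)) / h**2

    N = 64
    A = poisson_1d(N)
    x = np.arange(1, N) / N              # interior grid points x_i = i*h
    f = np.sin(np.pi * x)                # a sample right-hand side
    v = np.linalg.solve(A, f)            # direct solve, for later comparison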

  10. Stencil notation • $A = [-1 \;\; 2 \;\; -1]$, dropping $h^{-2}$ & $\sigma$ for convenience.

  11. Basic solution methods • Direct • Gaussian elimination • Factorization • Fast Poisson solvers (FFT-based, reduction-based, …) • Iterative • Richardson, Jacobi, Gauss-Seidel, … • Steepest Descent, Conjugate Gradients, … • Incomplete Factorization, ... • Notes: • This simple 1-D problem can be solved efficiently in many ways. Pretend it can’t & that it’s very hard, because it shares many characteristics with some very hard problems. If we keep things as simple as possible by studying this model, we’ve got a chance to really understand what’s going on. • But, to keep our feet on the ground, let’s go to 2-D anyway…

  12. 2-D model problem • Consider the problem $-u_{xx} - u_{yy} + \sigma u = f(x, y)$, $0 < x < 1$, $0 < y < 1$, $\sigma \ge 0$, with $u = 0$ on the boundary ($x = 0$, $x = 1$, $y = 0$, $y = 1$). • Consider the grid $x_i = ih_x$, $h_x = 1/M$, $i = 0, \dots, M$; $y_j = jh_y$, $h_y = 1/N$, $j = 0, \dots, N$.

  13. Discretizing the 2-D problem • Let $v_{i,j} \approx u(x_i, y_j)$ & $f_{i,j} = f(x_i, y_j)$. Again, using 2nd-order finite differences to approximate $u_{xx}$ & $u_{yy}$, we arrive at the approximate equation for the unknown $v_{i,j}$, for $i = 1, 2, \dots, M-1$ & $j = 1, 2, \dots, N-1$: $\dfrac{-v_{i-1,j} + 2v_{i,j} - v_{i+1,j}}{h_x^2} + \dfrac{-v_{i,j-1} + 2v_{i,j} - v_{i,j+1}}{h_y^2} + \sigma v_{i,j} = f_{i,j}$. • Ordering the unknowns (& also the vector $f$) lexicographically by y-lines: $v = (v_{1,1}, v_{1,2}, \dots, v_{1,N-1},\; v_{2,1}, v_{2,2}, \dots, v_{2,N-1},\; \dots,\; v_{M-1,1}, v_{M-1,2}, \dots, v_{M-1,N-1})^T$.

  14. Resulting linear system • We obtain a block-tridiagonal system $Av = f$: $A = \begin{bmatrix} B & -I_y & & \\ -I_y & B & -I_y & \\ & \ddots & \ddots & \ddots \\ & & -I_y & B \end{bmatrix}$, where $I_y$ is a diagonal matrix with $h_x^{-2}$ on the diagonal & $B = \mathrm{tridiag}\!\left(-h_y^{-2},\; \tfrac{2}{h_x^2} + \tfrac{2}{h_y^2} + \sigma,\; -h_y^{-2}\right)$.
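A sketch (our own, not from the slides) of assembling this block structure with Kronecker products, which reproduces the y-line ordering above:

    import numpy as np

    def tridiag(n, lo, d, hi):
        return (np.diag(np.full(n, d)) + np.diag(np.full(n - 1, lo), -1)
                + np.diag(np.full(n - 1, hi), 1))

    def poisson_2d(M, N, sigma=0.0):
        """Block-tridiagonal 2-D operator; unknowns ordered by y-lines (j fastest)."""
        hx, hy = 1.0 / M, 1.0 / N
        Ax = tridiag(M - 1, -1.0, 2.0, -1.0) / hx**2   # 2nd difference in x
        Ay = tridiag(N - 1, -1.0, 2.0, -1.0) / hy**2   # 2nd difference in y
        Ix, Iy = np.eye(M - 1), np.eye(N - 1)
        return np.kron(Ax, Iy) + np.kron(Ix, Ay) + sigma * np.kron(Ix, Iy)

    A = poisson_2d(8, 8)   # 49 x 49, block tridiagonal with 7 x 7 blocks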

  15. Stencils preferred for grid issues • Stencils are much better for showing the grid picture: again dropping $h^{-2}$ & $\sigma$, $A = \begin{bmatrix} & -1 & \\ -1 & 4 & -1 \\ & -1 & \end{bmatrix}$. • Stencils show local relationships: grid-point interactions.

  16. 2. Basic iterative methods • Consider $Au = f$, where $A$ is $N \times N$, & let $v$ be an approximation to $u$. (Generic $A$ uses $N$, not $N-1$.) • Two important measures: • The error: $e = u - v$, with norms $\|e\|_\infty = \max_i |e_i|$ & $\|e\|_2 = \left(\sum_i e_i^2\right)^{1/2}$. • The residual: $r = f - Av$.

  17. Residual correction • Since $e = u - v$, we can write $Au = f$ as $A(v + e) = f$, which means that $Ae = f - Av \equiv r$. • Residual equation: $Ae = r$, since $r = f - Av = Au - Av = A(u - v) = Ae$. • Residual correction: $u = v + e$.
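A tiny numerical check of the residual-correction identity (our own illustration; any SPD matrix would do):

    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.random((7, 7))
    A = B @ B.T + 7 * np.eye(7)          # some SPD matrix
    u = rng.random(7)                    # pretend exact solution
    f = A @ u
    v = np.zeros(7)                      # a (bad) approximation
    r = f - A @ v                        # residual
    e = np.linalg.solve(A, r)            # residual equation: A e = r
    print(np.allclose(v + e, u))         # True: residual correction recovers u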

  18. Relaxation • Consider the 1-D model problem $-v_{i-1} + 2v_i - v_{i+1} = h^2 f_i$, $1 \le i \le N-1$, $v_0 = v_N = 0$. • Jacobi (simultaneous displacement): solve the $i$th equation for $v_i$, holding all other variables fixed: $v_i^{(new)} = \tfrac12\left(v_{i-1}^{(old)} + v_{i+1}^{(old)} + h^2 f_i\right)$.

  19. Jacobi in matrix form • Let $A = D - L - U$, where $D$ is the diagonal of $A$ & $L$, $U$ are (the negatives of) its strictly lower & upper triangular parts. • Then $Au = f$ becomes $Dv^{(new)} = (L + U)v^{(old)} + f$. • Let $R_J = D^{-1}(L + U) = D^{-1}(D - A) = I - D^{-1}A$. • $R_J$ is called the error propagation or iteration matrix. • Then the iteration is $v^{(new)} = R_J v^{(old)} + D^{-1}f$.

  20. Error propagation matrix & the error • From the derivation, the iteration is $v^{(new)} = R_J v^{(old)} + D^{-1}f$; • the exact solution is a fixed point: $u = R_J u + D^{-1}f$; • subtracting: $e^{(new)} = R_J e^{(old)}$, with $R_J = I - D^{-1}A$. Hence the name: error propagation!

  21. A picture • $R_J = D^{-1}(L + U) = \left[\tfrac12 \;\; 0 \;\; \tfrac12\right]$ in stencil notation, so Jacobi is an error-averaging process: $e_i^{(new)} = \tfrac12\left(e_{i-1}^{(old)} + e_{i+1}^{(old)}\right)$.

  22. Another matrix look at Jacobi • Jacobi: $v^{(new)} = D^{-1}(L + U)v^{(old)} + D^{-1}f$. Since $L + U = D - A$, $v^{(new)} = (I - D^{-1}A)v^{(old)} + D^{-1}f = v^{(old)} - D^{-1}(Av^{(old)} - f) = v^{(old)} + D^{-1}r$. • Exact solution: $u = u - D^{-1}(Au - f)$. • Subtracting: $e^{(new)} = e^{(old)} - D^{-1}Ae^{(old)}$. • General form: $u \leftarrow u - B(Au - f)$ with $B \approx A^{-1}$. • Damped Jacobi: $u \leftarrow u - \omega D^{-1}(Au - f)$ with $0 < \omega < 2$. • Gauss-Seidel: $u \leftarrow u - (D - L)^{-1}(Au - f)$. • Exact: $u \leftarrow u - A^{-1}(Au - f) = A^{-1}f$.

  23. Weighted Jacobi (safer: $0 < \omega \le 1$) • Consider the iteration $v_i^{(new)} = (1 - \omega)v_i^{(old)} + \tfrac{\omega}{2}\left(v_{i-1}^{(old)} + v_{i+1}^{(old)} + h^2 f_i\right)$. • Letting $A = D - L - U$, the matrix form is $v^{(new)} = R_\omega v^{(old)} + \omega D^{-1}f$, where $R_\omega = (1 - \omega)I + \omega R_J$. • Note that $R_\omega = I - \omega D^{-1}A$. • It is easy to see that if $\lambda_k$ is an eigenvalue of $R_J$, then $1 - \omega + \omega\lambda_k$ is an eigenvalue of $R_\omega$.
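A sketch of one weighted Jacobi sweep for the 1-D model problem (our own code, matching the component form above):

    import numpy as np

    def weighted_jacobi(v, f, h, omega=2.0/3.0):
        """One weighted Jacobi sweep for -u'' = f with zero Dirichlet boundaries.
        v holds the N-1 interior values."""
        vb = np.concatenate(([0.0], v, [0.0]))      # pad with boundary zeros
        return (1 - omega) * v + 0.5 * omega * (vb[:-2] + vb[2:] + h**2 * f)

With omega = 2/3 this is the smoother whose eigenvalues are analyzed on slides 32-37 below.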

  24. Gauss-Seidel (1-D) • Solve equation $i$ for $v_i$ & update immediately. • Equivalently: set each component of $r$ to zero in turn. • Component form: for $i = 1, 2, \dots, N-1$, set $v_i \leftarrow \tfrac12\left(v_{i-1} + v_{i+1} + h^2 f_i\right)$. • Matrix form: $(D - L)v^{(new)} = Uv^{(old)} + f$. • Let $R_G = (D - L)^{-1}U$. • Then iterate: $v^{(new)} = R_G v^{(old)} + (D - L)^{-1}f$. • Error propagation: $e^{(new)} = R_G e^{(old)}$.

  25. Red-black Gauss-Seidel • Update the EVEN points: $v_{2i} \leftarrow \tfrac12\left(v_{2i-1} + v_{2i+1} + h^2 f_{2i}\right)$. • Update the ODD points: $v_{2i+1} \leftarrow \tfrac12\left(v_{2i} + v_{2i+2} + h^2 f_{2i+1}\right)$. • 2-D: color the grid points like a checkerboard & update all red points, then all black points; the updates within each color are independent.
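Sketches of both sweeps (our own code; the red-black version is written so each color updates as one vectorized step):

    import numpy as np

    def gauss_seidel(v, f, h):
        """Lexicographic Gauss-Seidel for -u'' = f: update each v_i immediately."""
        n = len(v)
        for i in range(n):
            left = v[i - 1] if i > 0 else 0.0
            right = v[i + 1] if i < n - 1 else 0.0
            v[i] = 0.5 * (left + right + h**2 * f[i])
        return v

    def red_black_gauss_seidel(v, f, h):
        """Same update, one color at a time; within a color all points are independent."""
        vb = np.concatenate(([0.0], v, [0.0]))      # vb[i+1] is interior point i
        for color in (0, 1):
            idx = np.arange(color, len(v), 2)
            vb[idx + 1] = 0.5 * (vb[idx] + vb[idx + 2] + h**2 * f[idx])
        return vb[1:-1]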

  26. Numerical experiments • Solve $Av = 0$ (exact solution $u = 0$), with $N = 64$. • Use Fourier modes as initial iterates: the $k$th mode is $v_i = \sin\left(\tfrac{ik\pi}{N}\right)$, $x_i = i/N$. [Figure: sample Fourier modes on the grid.]

  27. Convergence factors differ for different error components • Error, $\|e\|$, in weighted ($\omega = 2/3$) Jacobi on $Au = 0$ for 100 iterations, using initial guesses $v_1$, $v_3$, & $v_6$. [Figure: error norm vs. iteration for the three modes.]

  28. Stalling convergence: relaxation shoots itself in the foot • Weighted ($\omega = 2/3$) Jacobi on the 1-D problem. • Initial guess: (see figure). • Error plotted against iteration number: [Figure: the error drops quickly at first, then convergence stalls.]

  29. Analysis of stationary linear iteration • Let $v^{(new)} = Rv^{(old)} + g$. The exact solution is unchanged by the iteration: $u = Ru + g$. • Subtracting: $e^{(new)} = Re^{(old)}$. • Let $e^{(0)}$ be the initial error & $e^{(i)}$ the error after the $i$th iteration. After $n$ iterations, we have $e^{(n)} = R^n e^{(0)}$.

  30. Quick review of eigenvectors & eigenvalues • The number $\lambda$ is an eigenvalue of a matrix $B$, & $w \ne 0$ its associated eigenvector, if $Bw = \lambda w$. • The eigenvalues & eigenvectors are characteristics of a given matrix. • Eigenvectors are linearly independent, & if there is a complete set of $N$ distinct eigenvectors for an $N \times N$ matrix, then they form a basis: for any $v$, there exist unique scalars $v_k$ such that $v = \sum_{k=1}^{N} v_k w_k$. • Propagation: $B^n v = \sum_{k=1}^{N} \lambda_k^n v_k w_k$.

  31. “Fundamental Theorem of Iteration” • $R$ is convergent ($R^n \to 0$ as $n \to \infty$) iff $\rho(R) = \max_{1 \le k \le N} |\lambda_k(R)| < 1$. Thus $e^{(n)} = R^n e^{(0)} \to 0$ for any initial vector $v^{(0)}$ iff $\rho(R) < 1$. • $\rho(R) < 1$ assures convergence of the iteration for $R$. • $\rho(R)$ is the spectral convergence factor. • But it doesn't tell you much by itself; it's generally valid only asymptotically. It's useful for the symmetric case in particular, so we'll accept it for now, but first a little background & then a warning…

  32. Convergence analysis: weighted Jacobi, 1-D • For our 1-D model, the eigenvectors of weighted Jacobi $R_\omega$ & the eigenvectors of $A$ are the same! • The eigenvalues are related as well: since $R_\omega = I - \omega D^{-1}A$, $\lambda_k(R_\omega) = 1 - \tfrac{\omega h^2}{2}\lambda_k(A)$.

  33. Eigenpairs of $A = [-1 \;\; 2 \;\; -1]$ • The eigenvectors of $A$ are Fourier modes: $w_{k,j} = \sin\left(\tfrac{jk\pi}{N}\right)$, with eigenvalues $\lambda_k = 4\sin^2\left(\tfrac{k\pi}{2N}\right)$, so $\lambda_{N-1} \approx 4$ & $\lambda_1 \approx \pi^2 h^2$. [Figure: the modes $k = 1, 2, 4, 8, 16$ for $N = 64$.]

  34. Eigenvectors of $R_\omega$ = eigenvectors of $A$ • Expand the initial error in terms of the eigenvectors: $e^{(0)} = \sum_{k=1}^{N-1} c_k w_k$. • After $n$ iterations: $e^{(n)} = R_\omega^n e^{(0)} = \sum_{k=1}^{N-1} c_k \lambda_k^n(R_\omega) w_k$. • The $k$th error mode is reduced by $\lambda_k(R_\omega)$ each iteration.

  35. Relaxation suppresses eigenmodes unevenly • Look carefully at $\lambda_k(R_\omega) = 1 - 2\omega\sin^2\left(\tfrac{k\pi}{2N}\right)$. Note that if $0 < \omega \le 1$, then $|\lambda_k(R_\omega)| < 1$ for $k = 1, \dots, N-1$. [Figure: $\lambda_k$ plotted against $k$ for $0 < \omega \le 1$.]

  36. Low frequencies are “undamped” • Notice that no value of $\omega$ will efficiently damp out the long waves (low frequencies): $\lambda_1(R_\omega) = 1 - 2\omega\sin^2\left(\tfrac{\pi}{2N}\right) \approx 1$. • What value of $\omega$ gives the best damping of the short waves (high frequencies), $N/2 \le k \le N-1$? Choose $\omega$ such that $\lambda_{N/2}(R_\omega) = -\lambda_N(R_\omega)$, which gives $\omega = 2/3$. [Figure: $\lambda_k$ vs. $k$.]

  37. Smoothing factor • The smoothing factor is the largest magnitude of the iteration matrix eigenvalues corresponding to the oscillatory Fourier modes: smoothing factor $= \max |\lambda_k(R)|$ for $N/2 \le k \le N-1$. • Why only the upper spectrum? • For $R_\omega$ with $\omega = 2/3$, the smoothing factor is $1/3$: $|\lambda_{N/2}| = |\lambda_N| = 1/3$ & $|\lambda_k| < 1/3$ for $N/2 < k < N$. • But $|\lambda_k| \approx 1 - \tfrac{\omega k^2\pi^2 h^2}{2}$ for long waves ($k \ll N/2$). The “MG” spectral radius?
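A quick numerical confirmation of these numbers (our own check, using the eigenvalue formula from slide 35):

    import numpy as np

    N, omega = 64, 2.0 / 3.0
    k = np.arange(1, N)
    lam = 1 - 2 * omega * np.sin(k * np.pi / (2 * N))**2   # eigenvalues of R_omega

    print(np.max(np.abs(lam[k >= N // 2])))   # smoothing factor: 1/3
    print(np.max(np.abs(lam)))                # overall rho(R): ~ 1 - O(h^2), nearly 1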

  38. Convergence of Jacobi on $Au = 0$ • Jacobi on $Au = 0$ with $N = 64$. Number of iterations needed to reduce the initial error $\|e\|$ by $0.01$. • Initial guesses: the Fourier modes $w_k$. [Figure: iteration count vs. wavenumber $k$, for unweighted & weighted Jacobi.]

  39. Weighted Jacobi = smoother (error) • Initial error: [figure]. • Error after 35 iteration sweeps: [figure]. • Many relaxation schemes are smoothers: oscillatory error modes are quickly eliminated, but smooth modes are slowly damped.
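A small experiment (our own) showing the smoothing property in action, inlining the weighted Jacobi sweep sketched after slide 23:

    import numpy as np

    N = 64
    i = np.arange(1, N)
    v = (np.sin(2 * i * np.pi / N) + np.sin(40 * i * np.pi / N)) / 2  # smooth + oscillatory
    # Au = 0, so v itself is the error e

    for sweep in range(35):
        vb = np.concatenate(([0.0], v, [0.0]))
        v = v / 3 + (vb[:-2] + vb[2:]) / 3             # weighted Jacobi, omega = 2/3, f = 0

    print(np.max(np.abs(v)))   # the k = 40 wiggle is gone; the smooth k = 2 part remains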

  40. Gauss-Seidel convergence, $Au = 0$ • Eigenvectors of $R_G$ are not the same as those of $A$: Gauss-Seidel mixes the modes of $A$. • Gauss-Seidel on $Au = 0$ with $N = 64$. Number of iterations needed to reduce the initial error $\|e\|$ by $0.01$. Initial guesses: modes of $A$. [Figure: iteration count vs. wavenumber $k$.]

  41. 3. Elements of multigrid: 1st observation toward multigrid • Many relaxation schemes have the smoothing property: oscillatory error modes are quickly eliminated, while smooth modes are often very slow to disappear. • We'll turn this adversity around: the idea is to use coarse grids to take advantage of smoothing. How?

  42. Reason #1 for coarse grids: nested iteration • Coarse grids can be used to compute an improved initial guess for the fine-grid relaxation. This is advantageous because: • Relaxation on the coarse grid is much cheaper: half as many points in 1-D, one-fourth in 2-D, one-eighth in 3-D, … • Relaxation on the coarse grid converges faster: $|\lambda_1(R)| \approx 1 - \tfrac{\omega\pi^2 h^2}{2}$, so on grid $2h$ the factor is roughly $1 - 2\omega\pi^2 h^2$ instead of $1 - \tfrac{\omega\pi^2 h^2}{2}$.

  43. Idea! Nested iteration • Relax on $Au = f$ on grid $4h$ to obtain the initial guess $v^{2h}$. • Relax on $Au = f$ on grid $2h$ to obtain the initial guess $v^h$. • Relax on $Au = f$ on grid $h$ to obtain … the final solution??? • What is $A^{2h}u^{2h} = f^{2h}$? Analogous to $A^h u^h = f^h$, for now. • How do we migrate between grids? Hang on… (a sketch follows below) • What if the error still has large smooth components when we get to the fine grid $h$? Stay tuned for the 2nd observation toward multigrid…
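A self-contained sketch (our own) of nested iteration for the 1-D model problem, using the weighted Jacobi sweep from slide 23 & the linear interpolation defined on slides 47-48:

    import numpy as np

    def relax(v, f, h, sweeps, omega=2.0/3.0):
        for _ in range(sweeps):                        # weighted Jacobi sweeps
            vb = np.concatenate(([0.0], v, [0.0]))
            v = (1 - omega) * v + 0.5 * omega * (vb[:-2] + vb[2:] + h**2 * f)
        return v

    def interpolate(v2h):
        """Linear interpolation from grid 2h to grid h (see slides 47-48)."""
        n = len(v2h)
        vh = np.zeros(2 * n + 1)
        vh[1::2] = v2h                                 # coincident points carry over
        vb = np.concatenate(([0.0], v2h, [0.0]))
        vh[0::2] = 0.5 * (vb[:-1] + vb[1:])            # averages at the new points
        return vh

    def nested_iteration(f_fn, N, levels, sweeps=3):
        """Relax on the coarsest grid first, then interpolate up, level by level."""
        v = None
        for lev in range(levels - 1, -1, -1):
            n = N // 2**lev                            # intervals on this level
            x = np.arange(1, n) / n
            v = np.zeros(n - 1) if v is None else interpolate(v)
            v = relax(v, f_fn(x), 1.0 / n, sweeps)
        return v

    v = nested_iteration(lambda x: np.sin(np.pi * x), N=64, levels=4)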

  44. Reason #2 for coarse grids: smooth error becomes more oscillatory • A smooth function can be represented by linear interpolation from a coarser grid. • On the coarse grid, the smooth error appears to be relatively higher in frequency: in this example it is the 4-mode out of a possible 15 on the fine grid, ~1/4 of the way up the spectrum. On the coarse grid, it is the 4-mode out of a possible 7, ~1/2 of the way up the spectrum. • Relaxation on $2h$ is (cheaper &) faster on this mode!!!

  45. [Figure: the $k = 4$ mode on an $N = 12$ grid & the same mode on the $N = 6$ coarse grid.] • For $k = 1, 2, \dots, N/2$, the $k$th mode is preserved on the coarse grid. Also, note that $w_{k,2j}^h = w_{k,j}^{2h}$ on the coarse grid. • What happens to the modes between $N/2$ & $N$?

  46. For $k > N/2$, $w_k^h$ is disguised on the coarse grid: aliasing!!! • For $k > N/2$, the $k$th mode on the fine grid is aliased & appears as the $(N-k)$th mode on the coarse grid: $w_{k,2j}^h = -w_{N-k,j}^{2h}$. [Figure: the $k = 9$ mode & the $k = 3$ mode on an $N = 12$ grid; at the even points they coincide up to sign.]
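A short numerical check of the aliasing identity (our own, using the slide's $N = 12$, $k = 9$ example):

    import numpy as np

    N, k = 12, 9
    j = np.arange(N // 2 + 1)                          # coarse-grid points
    sampled = np.sin(2 * j * k * np.pi / N)            # fine k-mode at the even points
    alias = np.sin(j * (N - k) * np.pi / (N // 2))     # (N-k)-mode on the coarse grid
    print(np.allclose(sampled, -alias))                # True: w_9 shows up as -w_3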

  47. 1-D interpolation (prolongation) to migrate from coarse to fine grids • Mapping from the coarse grid to the fine grid: let $v^h$, $v^{2h}$ be defined on $\Omega^h$, $\Omega^{2h}$. Then $I_{2h}^h v^{2h} = v^h$, where $v_{2j}^h = v_j^{2h}$ & $v_{2j+1}^h = \tfrac12\left(v_j^{2h} + v_{j+1}^{2h}\right)$ for $0 \le j \le \tfrac{N}{2} - 1$.

  48. 1-D interpolation (prolongation) • Values at points on the coarse grid map unchanged to the fine grid. • Values at fine-grid points NOT on the coarse grid are the averages of their coarse-grid neighbors.

  49. 1-D prolongation operator $P$ • $P = I_{2h}^h$ is a linear operator: $\mathbb{R}^{N/2-1} \to \mathbb{R}^{N-1}$. • $N = 8$: $P = \tfrac12\begin{bmatrix} 1 & & \\ 2 & & \\ 1 & 1 & \\ & 2 & \\ & 1 & 1 \\ & & 2 \\ & & 1 \end{bmatrix}$. • When is $Pv^{2h} = 0$? $P$ has full column rank, so $\mathcal{N}(P) = \{0\}$: only when $v^{2h} = 0$.
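A sketch (our own) that builds this matrix for general even $N$ & confirms the full-rank claim:

    import numpy as np

    def prolongation(N):
        """(N-1) x (N/2-1) linear-interpolation matrix: weight 1 at coincident
        fine points, weight 1/2 at the in-between fine points."""
        P = np.zeros((N - 1, N // 2 - 1))
        for j in range(N // 2 - 1):
            P[2 * j, j] = 0.5
            P[2 * j + 1, j] = 1.0
            P[2 * j + 2, j] = 0.5
        return P

    P = prolongation(8)                        # the 7 x 3 example from the slide
    print(np.linalg.matrix_rank(P))            # 3: full column rank, N(P) = {0}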

  50. “Give to” stencil for $P$ • $P = \;]\tfrac12 \;\; 1 \;\; \tfrac12[$ : each coarse-grid point (x) gives its full value to the coincident fine-grid point & half its value to each fine-grid neighbor (o), on the fine grid o x o x o.
