
Threshold partitioning for iterative aggregation – disaggregation method



1. Threshold partitioning for iterative aggregation-disaggregation method. Ivana Pultarova, Czech Technical University in Prague, Czech Republic. ILAS 2004.

2. We consider a column stochastic irreducible matrix B of type N × N. The Problem is to find the stationary probability vector x_p, ||x_p|| = 1, B x_p = x_p. We explore the iterative aggregation-disaggregation (IAD) method. Notation:
• Spectral decomposition of B: B = P + Z, P^2 = P, ZP = PZ = 0, r(Z) < 1 (spectral radius).
• Number of aggregation groups n, n < N.
• Restriction matrix R of type n × N; its elements are 0 or 1 and all column sums are 1.
• Prolongation matrix S(x) of type N × n for any positive vector x: (S(x))_ij := x_i iff (R)_ji = 1; then divide all elements in each column by the sum of that column.
• Projection matrix P(x) = S(x) R of type N × N.
• || . || denotes the 1-norm.
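The restriction, prolongation and projection matrices defined above can be sketched in a few lines of NumPy (the grouping and the vector x below are illustrative, not from the talk):

```python
import numpy as np

def restriction(groups, N):
    """Build the n x N restriction matrix R: (R)_ji = 1 iff state i
    belongs to aggregation group j (0/1 entries, unit column sums)."""
    n = max(groups) + 1
    R = np.zeros((n, N))
    for i, g in enumerate(groups):
        R[g, i] = 1.0
    return R

def prolongation(R, x):
    """Build the N x n prolongation S(x): copy x into the sparsity
    pattern of R^T, then normalize each column to unit sum."""
    S = R.T * x[:, None]
    return S / S.sum(axis=0)

# Example with N = 4 states in n = 2 groups (grouping is illustrative)
groups = [0, 0, 1, 1]
R = restriction(groups, 4)
x = np.array([0.1, 0.3, 0.2, 0.4])
S = prolongation(R, x)
P = S @ R                      # projection P(x) = S(x) R
assert np.allclose(P @ x, x)   # P(x) reproduces x itself
```

Note that R S(x) = I_n, so P(x) = S(x) R is indeed a projection.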

3. Iterative aggregation-disaggregation (IAD) algorithm:
step 1. Take a first approximation x_0 ∈ R^N, x_0 > 0, and set k = 0.
step 2. Solve R B^s S(x_k) z_{k+1} = z_{k+1}, z_{k+1} ∈ R^n, ||z_{k+1}|| = 1, for an appropriate integer s (solution on the coarse level).
step 3. Disaggregate: x_{k+1,1} = S(x_k) z_{k+1}.
step 4. Compute x_{k+1} = B^t x_{k+1,1} for an appropriate integer t (smoothing on the fine level).
step 5. Test whether ||x_{k+1} - x_k|| is less than a prescribed tolerance. If not, increase k and go to step 2. If yes, consider x_{k+1} to be the solution of the Problem.
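The five steps can be sketched as follows. This is a dense-matrix illustration under assumed names (iad, groups), not the authors' code; the coarse problem in step 2 is solved here by a full eigendecomposition of the small aggregated matrix:

```python
import numpy as np

def iad(B, groups, s=1, t=1, tol=1e-10, maxit=1000):
    """Sketch of the IAD iteration above for a column stochastic B.
    groups[i] is the aggregation group of state i (hypothetical input)."""
    N = B.shape[0]
    n = max(groups) + 1
    R = np.zeros((n, N))
    R[groups, np.arange(N)] = 1.0          # restriction matrix R
    Bs = np.linalg.matrix_power(B, s)
    x = np.full(N, 1.0 / N)                # step 1: positive x_0
    for _ in range(maxit):
        S = R.T * x[:, None]
        S /= S.sum(axis=0)                 # prolongation S(x_k)
        A = R @ Bs @ S                     # n x n aggregated matrix
        w, V = np.linalg.eig(A)
        z = V[:, np.argmax(w.real)].real
        z = np.abs(z) / np.abs(z).sum()    # step 2: coarse solution, ||z|| = 1
        x_new = S @ z                      # step 3: disaggregation
        for _ in range(t):
            x_new = B @ x_new              # step 4: smoothing
        if np.abs(x_new - x).sum() < tol:  # step 5: stopping test
            return x_new
        x = x_new
    return x

# Small illustrative chain: states {0, 1} and {2} as aggregation groups
B = np.array([[0.9, 0.05, 0.0],
              [0.1, 0.9,  0.1],
              [0.0, 0.05, 0.9]])
x = iad(B, [0, 0, 1])
```

The returned x then satisfies B x ≈ x with ||x||_1 = 1.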

4. Proposition 1. If s = t, then the computed approximations x_k, k = 1, 2, …, follow the formulae
• B^s P(x_k) x_{k+1} = x_{k+1},
• x_{k+1} = (I - Z^s P(x_k))^{-1} x_p,
• x_{k+1} - x_p = J(x_k)(x_k - x_p),
where J(x) = B^s (I - P(x) Z^s)^{-1} (I - P(x)) and also J(x) = B^s (I - P(x) + P(x) J(x)).
Proposition 2. Let V be a global core matrix associated with B^s. Then
J(x) = V (I - P(x) V)^{-1} (I - P(x)) and J(x) = V (I - P(x) + P(x) J(x)).

5. Note. The global core matrix V is here ηP + Z^s. Using Z^k → 0 for k → ∞, we have V = ηP + Z^s ≥ 0 for a given η and a sufficiently large s. This is equivalent to B^s = P + Z^s ≥ (1 - η) P.

6. Local convergence. It is known that for arbitrary integers t ≥ 1 and s ≥ 1 there exists a neighborhood Ω of x_p such that if x_k ∈ Ω then x_r ∈ Ω, r = k+1, k+2, …, and that ||x_{k+1} - x_p|| ≤ c α^k ||x_k - x_p||, where c ∈ R and α ≤ min{ ||V_loc||_μ, ||(I - P(x_p)) Z (I - P(x_p))||_μ }, with ||.||_μ some special norm in R^N. Here V_loc is a local core matrix associated with B. Thus the local convergence rate of the IAD algorithm is the same as or better than that of the Jacobi iteration for the original matrix B.

7. Global convergence. From Proposition 2 we have ||J(x_k)|| ≤ ||V|| ||I - P(x_k)|| + ||V|| ||P(x_k)|| ||J(x_k)||. Since ||V|| = η, ||P(x_k)|| = 1 and ||I - P(x_k)|| ≤ 2 in the 1-norm, this gives ||J(x_k)|| (1 - η) ≤ 2η. Hence η < 1/3, i.e. the relation B^s > (2/3) P, is a sufficient condition for the global convergence of the IAD method. (It also means r(Z^s) ≤ 1/3: B^s ≥ (2/3) P is equivalent to P/3 + Z^s ≥ 0; then P + 3Z^s ≥ 0 is a spectral decomposition of an irreducible column stochastic matrix, and then r(Z^s) ≤ 1/3.)
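Written out, the estimate uses ||P(x_k)||_1 = 1, ||I - P(x_k)||_1 ≤ 2 and ||V||_1 = η (V = ηP + Z^s ≥ 0 has column sums η, because 1^T P = 1^T and 1^T Z = 0):

```latex
\|J(x_k)\| \le \|V\| \, \|I - P(x_k)\| + \|V\| \, \|P(x_k)\| \, \|J(x_k)\|
           \le 2\eta + \eta \, \|J(x_k)\| ,
\qquad\text{so}\qquad
(1 - \eta)\,\|J(x_k)\| \le 2\eta ,
\qquad
\|J(x_k)\| \le \frac{2\eta}{1 - \eta} < 1
\;\Longleftrightarrow\;
\eta < \frac{1}{3} .
```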

8. In practical computations with large problems we cannot verify the validity of the relation B^s ≥ ηP > 0 to estimate the value of s. But we can predict a constant k for which B^k > 0. This value is known to be less than or equal to N^2 - 2N + 2.
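The bound N^2 - 2N + 2 is attained by Wielandt's classical example, a cycle with one extra edge; a quick NumPy check (illustrative construction, not from the talk):

```python
import numpy as np

N = 5
B = np.zeros((N, N))
for i in range(N):
    B[(i + 1) % N, i] = 1.0    # column form of the cycle 0 -> 1 -> ... -> N-1 -> 0
B[1, N - 1] = 1.0              # one extra edge N-1 -> 1 makes B primitive
B /= B.sum(axis=0)             # normalize columns: B is column stochastic

k, Bk = 1, B.copy()
while not np.all(Bk > 0):      # smallest k with B^k > 0
    Bk = Bk @ B
    k += 1
# for this matrix k equals N**2 - 2*N + 2, i.e. Wielandt's bound is sharp
```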

9. We propose a new method for achieving B^s ≥ ηP > 0 with some η > 0. Let I - B = M - W be a regular splitting, M^{-1} ≥ 0, W ≥ 0. Then the solution of the Problem is identical with the solution of (M - W) x = 0. Denoting M x = y and setting y := y/||y||, we have (I - W M^{-1}) y = 0, where W M^{-1} is a column stochastic matrix. Thus the solution of the Problem is transformed into the solution of W M^{-1} y = y, ||y|| = 1, for any regular splitting M, W of the matrix I - B.
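The transformation can be checked numerically. The sketch below uses the simplest regular splitting M = cI with c ≥ 1 (an illustrative choice, not the block diagonal M proposed on the following slides):

```python
import numpy as np

B = np.array([[0.9, 0.05, 0.0],
              [0.1, 0.9,  0.1],
              [0.0, 0.05, 0.9]])   # illustrative column stochastic B
I = np.eye(3)
M = 1.5 * I                        # regular splitting of I - B: M^{-1} >= 0 ...
W = M - (I - B)                    # ... and W = 0.5*I + B >= 0

T = W @ np.linalg.inv(M)           # T = W M^{-1}
# column sums: 1^T W = 1^T M - 1^T (I - B) = 1^T M, hence 1^T T = 1^T

x = np.array([0.25, 0.5, 0.25])    # stationary vector of B (B x = x)
y = M @ x                          # T y = W x = M x - (I - B) x = y
```

So the fixed point y of W M^{-1} recovers the stationary vector via x = M^{-1} y.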

10. A good choice of M, W.
• Following the IAD algorithm, we use a block diagonal matrix M composed of blocks M_1, …, M_n, each of them invertible.
• To achieve (W M^{-1})^s > 0 for a low s, we need
M_i^{-1} > 0, i = 1, …, n, and
nnz(W M^{-1}) >> nnz(B) (nnz = number of nonzeros).

11. Algorithm of a good partitioning.
step 1. For an appropriate threshold τ, 0 < τ < 1, use Tarjan's parametrized algorithm to find the irreducible diagonal blocks B_i, i = 1, …, n, of the permuted matrix B (we now suppose B := permuted B).
step 2. Compose the block diagonal matrix B_Tar from the blocks B_i, i = 1, …, n, and set M = I - B_Tar/2 and W = M - (I - B).
Properties of W M^{-1}:
• W M^{-1} is irreducible.
• The diagonal blocks of W M^{-1} are positive.
• (W M^{-1})^s is positive for s ≤ n^2 - 2n + 3, where n is the number of aggregation groups (n = 3 → s = 2).
• The second largest eigenvalue of the aggregated n × n matrix is approximately the same as that of W M^{-1}.
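Step 2 above can be sketched directly. The block partition is passed in by hand here, standing in for the output of Tarjan's threshold algorithm, and two of the claimed properties (M^{-1} ≥ 0 and W M^{-1} column stochastic) are checked:

```python
import numpy as np

def splitting_from_blocks(B, blocks):
    """Form B_Tar from the diagonal blocks, then M = I - B_Tar/2 and
    W = M - (I - B), as in step 2.  `blocks` is a list of index lists."""
    N = B.shape[0]
    B_tar = np.zeros_like(B)
    for idx in blocks:
        ix = np.ix_(idx, idx)
        B_tar[ix] = B[ix]              # keep only the diagonal blocks
    M = np.eye(N) - B_tar / 2.0        # r(B_Tar/2) <= 1/2, so M^{-1} >= 0
    W = M - (np.eye(N) - B)            # W = B - B_Tar/2 >= 0
    return M, W

B = np.array([[0.9, 0.05, 0.0],
              [0.1, 0.9,  0.1],
              [0.0, 0.05, 0.9]])       # illustrative column stochastic B
M, W = splitting_from_blocks(B, [[0, 1], [2]])
T = W @ np.linalg.inv(M)               # T = W M^{-1} is column stochastic
```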

12. Example 1. Matrix B is composed of n × n blocks of size m. We set ε = 0.01, δ = 0.01. Then B is normalized.

13. Example 1.
a) IAD method for W M^{-1} and the threshold Tarjan block matrix M, s = 1, r(Z_WM) = 0.9996. (Exact solution in red, the last approximation as black circles.)
b) Power iterations for W M^{-1} and the same M as in a), s = 1, r(Z_WM) = 0.9996. (Exact solution in red, the last of 500 approximations as black circles. No local convergence effect.)
c) Rates of convergence of a) and b).

14. Example 2. Matrix B is composed of n × n blocks of size m. We set ε = 0.01, δ = 0.01. Then B := B + C (10% of the entries of C equal 0.1) and B is normalized.

15. Example 2. IAD for B and W M^{-1}. Power method for B and W M^{-1}. Convergence rates for IAD and the power method.

16. Example 2, with different random entries.
a) IAD for B and W M^{-1}.
b) Power method for B and W M^{-1}.
c) Convergence rates for IAD and the power method.

17. I. Marek and P. Mayer, Convergence analysis of an aggregation/disaggregation iterative method for computation of stationary probability vectors, Numerical Linear Algebra with Applications, 5, pp. 253-274, 1998.
I. Marek and P. Mayer, Convergence theory of some classes of iterative aggregation-disaggregation methods for computing stationary probability vectors of stochastic matrices, Linear Algebra and Its Applications, 363, pp. 177-200, 2003.
G. W. Stewart, Introduction to the Numerical Solution of Markov Chains, 1994.
A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, 1979.
G. H. Golub and C. F. Van Loan, Matrix Computations, 1996.
Etc.
