
Introduction to Model Order Reduction II.2 The Projection Framework Methods


Presentation Transcript


  1. Introduction to Model Order Reduction, II.2: The Projection Framework Methods. Luca Daniel, Massachusetts Institute of Technology, with contributions from: Alessandra Nardi, Joel Phillips, Jacob White

  2. Projection Framework: Non-invertible Change of Coordinates • The original state (dimension N) is approximated through a tall N x q change-of-coordinates matrix applied to the reduced state • Note: q << N

  3. Projection Framework • Original system • Substitute the change of coordinates • Note: now only a few variables (q << N) in the state, but still thousands of equations (N)

  4. Projection Framework (cont.) • Reduction of the number of equations: test by multiplying by VqT • If Vq and Uq are chosen biorthogonal (VqT Uq = I), the result is a system of q equations in q unknowns
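
A minimal Python sketch of these two steps (change of coordinates plus equation testing), assuming a descriptor form E dx/dt = A x + b u, y = cT x; the matrix names are illustrative, not from the slides:

    import numpy as np

    def project_system(E, A, b, c, Uq, Vq):
        """Reduce an N-dimensional descriptor system to q dimensions by projection."""
        Eq = Vq.T @ E @ Uq    # q x q: test E dx/dt with Vq^T after x ~= Uq x_hat
        Aq = Vq.T @ A @ Uq    # q x q
        bq = Vq.T @ b         # reduced input vector
        cq = Uq.T @ c         # reduced output vector (y ~= c^T Uq x_hat)
        return Eq, Aq, bq, cq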

  5. Projection Framework (graphically): the reduced q x q matrices are formed as (q x n) times (n x n) times (n x q) products

  6. Projection Framework • Equation testing (projection) • Non-invertible change of coordinates (projection)

  7. Approaches for picking V and U • Use Eigenvectors of the system matrix (modal analysis) • Use Frequency-Domain Data (point matching, see II.2.b) • Compute solutions at sample frequencies • Use the SVD to pick q < k important vectors • Use Time-Series Data • Compute state snapshots • Use the SVD to pick q < k important vectors • This data-driven choice is known as POD (Proper Orthogonal Decomposition), SVD (Singular Value Decomposition), KLD (Karhunen-Loève Decomposition), or PCA (Principal Component Analysis)
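
A sketch of the data-driven (POD/SVD) choice of U in Python: collect k state solutions as columns of a snapshot matrix and keep the q dominant left singular vectors. The snapshot matrix X is a stand-in for whatever frequency- or time-domain solves produced the data:

    import numpy as np

    def pod_basis(X, q):
        """X: N x k matrix of state snapshots; returns an N x q orthonormal basis."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U[:, :q]   # the q most important directions in the data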

  8. Approaches for picking V and U • Use Eigenvectors of the system matrix • POD or SVD or KLD or PCA • Use Krylov Subspace Vectors (Moment Matching) • Use Singular Vectors of the System Gramians Product (Truncated Balanced Realization)

  9. A canonical form for model order reduction • Assuming A is non-singular, we can cast the dynamical linear system into a canonical form for moment-matching model order reduction • Note: this step is not necessary; it just simplifies the notation for educational purposes
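
A sketch of this step, assuming the original system is written E dx/dt = A x + b u with A non-singular (the exact sign convention on the slide may differ): the canonical form amounts to pre-multiplying by the inverse of A.

    import numpy as np

    def to_canonical(E, A, b):
        """Return (Ehat, bhat) with Ehat = A^{-1} E and bhat = A^{-1} b, so that
        the transfer function becomes c^T (I - s*Ehat)^{-1} bhat (up to sign)."""
        Ehat = np.linalg.solve(A, E)
        bhat = np.linalg.solve(A, b)
        return Ehat, bhat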

  10. Intuitive view of Krylov subspace choice for the change-of-basis projection matrix U • Taylor series expansion • Change basis and use only the first few vectors of the Taylor series expansion: equivalent to matching the first derivatives (moments) around the expansion point

  11. Aside on Krylov Subspaces - Definition • The order-k Krylov subspace generated from matrix A and vector b is defined as Kk(A, b) = span{b, Ab, A2b, ..., A(k-1)b}
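
A direct (unorthonormalized) construction of such a basis in Python, purely to make the definition concrete; slides 20-22 explain why these raw vectors must be orthonormalized in practice:

    import numpy as np

    def krylov_basis_raw(A, b, k):
        """Columns b, A b, ..., A^(k-1) b spanning the order-k Krylov subspace."""
        V = np.zeros((len(b), k))
        v = b.astype(float)
        for j in range(k):
            V[:, j] = v
            v = A @ v
        return V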

  12. Moment matching around non-zero frequencies • Instead of expanding around only s=0, we can expand around other points • For each expansion point the problem can then be put again in the canonical form

  13. Projection Framework: Moment Matching Theorem (E. Grimme '97) • If the input and output Krylov subspaces are contained in span(Uq) and span(Vq), then a total of 2q moments of the transfer function will match
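
A hedged restatement of the single-point (s = 0) version of the theorem, in the canonical-form notation of slide 9 where the moments are cT Ek b; the multipoint result in Grimme '97 repeats the same statement at each expansion point:

    \mathcal{K}_q(E, b) \subseteq \operatorname{span}(U_q)
    \quad\text{and}\quad
    \mathcal{K}_q(E^{T}, c) \subseteq \operatorname{span}(V_q)
    \;\Longrightarrow\;
    c^{T} E^{k} b \;=\; c_q^{T} E_q^{k} b_q ,
    \qquad k = 0, 1, \dots, 2q-1

Here Eq = VqT E Uq, bq = VqT b, and cq = UqT c are the projected quantities.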

  14. Combine point and moment matching: multipoint moment matching • Multiple expansion points give a larger frequency band • Moment (derivative) matching gives more accurate behavior in between expansion points

  15. Compare Padé approximations and the Krylov subspace projection framework • Padé approximations: moment matching at a single DC point; numerically very ill-conditioned! • Krylov subspace projection framework: multipoint moment matching AND numerically very stable!

  16. Approaches for picking V and U • Use Eigenvectors of the system matrix • POD or SVD or KLD or PCA • Use Krylov Subspace Vectors (Moment Matching) • general Krylov subspace methods • case 1: Arnoldi • case 2: PVL • case 3: multipoint moment matching • moment matching preserving passivity: PRIMA • Use Singular Vectors of the System Gramians Product (Truncated Balanced Realization)

  17. Special simple case #1: expansion at s=0, V=U, orthonormal UTU=I • If U and V are chosen so that their columns span the order-q Krylov subspace of the canonical-form matrix and input vector, then the first q moments (derivatives) of the reduced system match

  18. Algebraic proof of case #1: expansion at s=0, V=U, orthonormal UTU=I • Apply k times the lemma in the next slide

  19. Lemma (equation on slide) • Note: in general U UT is not the identity, BUT the needed products collapse once we substitute UT U = Iq, since U is orthonormal

  20. Need for Orthonormalization of U • The vectors {b, Eb, ..., E(k-1)b} cannot be computed directly: they will quickly line up with the dominant eigenspace!

  21. Need for Orthonormalization of U (cont.) • In the "change of basis" matrix U that transforms to the new reduced state space, we can use ANY set of columns that spans the reduced state space • In particular, we can ORTHONORMALIZE the Krylov subspace vectors

  22. Orthonormalization of U: The Arnoldi Algorithm (with computational complexity)
      Normalize first vector: O(n)
      For i = 1 to q
        Generate new Krylov subspace vector: O(n) if sparse, O(n^2) if dense
        For j = 1 to i
          Orthogonalize new vector: O(q^2 n) in total
        Normalize new vector: O(n)
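
A compact Python version of the loop above (modified Gram-Schmidt Arnoldi); E and b stand for the canonical-form matrix and starting vector, as assumed earlier:

    import numpy as np

    def arnoldi(E, b, q):
        """Orthonormal basis U (n x q) of the Krylov subspace K_q(E, b)."""
        n = len(b)
        U = np.zeros((n, q))
        U[:, 0] = b / np.linalg.norm(b)        # normalize first vector: O(n)
        for i in range(1, q):
            w = E @ U[:, i - 1]                # new Krylov vector: O(n) if E is sparse
            for j in range(i):
                w -= (U[:, j] @ w) * U[:, j]   # orthogonalize: O(q^2 n) over all i
            U[:, i] = w / np.linalg.norm(w)    # normalize: O(n)
        return U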

  23. Generating vectors for the Krylov subspace • Most of the computation cost is spent in calculating the new vectors: • Set up and solve a linear system using GCR • If we have a good preconditioner and a fast matrix-vector product, each new vector is calculated in O(n) • The total complexity for calculating the projection matrix Uq is O(qn)
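
A sketch of this step using SciPy. SciPy has no GCR solver, so GMRES with an incomplete-LU preconditioner stands in for the preconditioned GCR mentioned on the slide; A and E are assumed to be SciPy sparse matrices, and the resulting vectors still need Arnoldi-style orthonormalization:

    import numpy as np
    from scipy.sparse.linalg import gmres, spilu, LinearOperator

    def krylov_vectors_iterative(A, E, b, q):
        """q vectors of the form (A^{-1}E)^j A^{-1} b, without ever forming A^{-1}."""
        ilu = spilu(A.tocsc())                      # factor the preconditioner once
        M = LinearOperator(A.shape, ilu.solve)
        cols, rhs = [], b
        for _ in range(q):
            w, _ = gmres(A, rhs, M=M)               # solve A w = rhs, roughly O(n)
            cols.append(w)
            rhs = E @ w                             # right-hand side for the next vector
        return np.column_stack(cols)                # columns still to be orthonormalized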

  24. What about computing the reduced matrix? • Orthonormalization of the i-th column of Uq • Orthonormalization of all columns of Uq • So we don’t need to compute the reduced matrix separately: the coefficients produced while orthonormalizing Uq already give it to us

  25. Approaches for picking V and U • Use Eigenvectors of the system matrix • POD or SVD or KLD or PCA • Use Krylov Subspace Vectors (Moment Matching) • general Krylov subspace methods • case 1: Arnoldi • case 2: PVL • case 3: multipoint moment matching • moment matching preserving passivity: PRIMA • Use Singular Vectors of the System Gramians Product (Truncated Balanced Realization)

  26. Special case #2: expansion at s=0, biorthogonal VTU=I • If U and V span the input and output Krylov subspaces and are biorthogonal, then the first 2q moments of the reduced system match

  27. Proof of special case #2: expansion at s=0, biorthogonal VTU=UTV=Iq (cont.) • Apply k times the lemma in the next slide

  28. Lemma (equations on slide) • Substitute the biorthonormality relation VTU = Iq (twice) to collapse the products

  29. PVL: Padé Via Lanczos [P. Feldmann, R. W. Freund, TCAD 1995] • PVL is an implementation of the biorthogonal case #2: use the Lanczos process to biorthonormalize the columns of U and V, which gives very good numerical stability
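
A bare-bones sketch of the biorthogonalization that Lanczos performs, written as full two-sided Gram-Schmidt for clarity; the real PVL algorithm uses short recurrences and look-ahead to handle breakdowns, which this toy version omits:

    import numpy as np

    def biorthogonal_bases(A, b, c, q):
        """U, V (n x q each) with V^T U = I_q, spanning K_q(A, b) and K_q(A^T, c)."""
        n = len(b)
        U, V = np.zeros((n, q)), np.zeros((n, q))
        u, v = b.astype(float), c.astype(float)
        for j in range(q):
            for i in range(j):                     # biorthogonalize against previous pairs
                u = u - (V[:, i] @ u) * U[:, i]
                v = v - (U[:, i] @ v) * V[:, i]
            d = v @ u                              # breakdown if d == 0 (not handled here)
            U[:, j] = u / np.sqrt(abs(d))
            V[:, j] = np.sign(d) * v / np.sqrt(abs(d))
            u, v = A @ U[:, j], A.T @ V[:, j]      # next candidate vectors
        return U, V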

  30. Example: simulation of the voltage gain of a filter with PVL (Padé Via Lanczos)

  31. Compare to Padé via AWE (Asymptotic Waveform Evaluation)

  32. Approaches for picking V and U • Use Eigenvectors of the system matrix • POD or SVD or KLD or PCA • Use Krylov Subspace Vectors (Moment Matching) • general Krylov subspace methods • case 1: Arnoldi • case 2: PVL • case 3: multipoint moment matching • moment matching preserving passivity: PRIMA • Use Singular Vectors of the System Gramians Product (Truncated Balanced Realization)

  33. Case #3: Intuitive view of subspace choice for general expansion points • Instead of expanding around only s=0, we can expand around other points • For each expansion point the problem can then be put again in the canonical form

  34. Case #3: Intuitive view of Krylov subspace choice for general expansion points (cont.) • Hence, choosing the union of the Krylov subspaces at the expansion points s1, s2, s3, ... matches the first kj moments of the transfer function around each expansion point sj (e.g. s1 = 0)
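
A sketch of assembling the multipoint projection matrix, assuming the descriptor form E dx/dt = A x + b u, real expansion points, the same order k at every point, and dense solves for brevity:

    import numpy as np

    def multipoint_basis(E, A, b, points, k):
        """Union of order-k Krylov subspaces of (s_j*E - A)^{-1} E at each point s_j."""
        cols = []
        for sj in points:
            M = sj * E - A                          # shifted matrix at expansion point s_j
            v = np.linalg.solve(M, b)               # starting vector at s_j
            for _ in range(k):
                cols.append(v)
                v = np.linalg.solve(M, E @ v)
        Uq, _ = np.linalg.qr(np.column_stack(cols)) # orthonormalize the stacked vectors
        return Uq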

  35. Generating vectors for the Krylov subspace • Most of the computation cost is spent in calculating the new vectors: • Set up and solve a linear system using GCR • If we have a good preconditioner and a fast matrix-vector product, each new vector is calculated in O(n) • The total complexity for calculating the projection matrix Uq is O(qn)

  36. Approaches for picking V and U • Use Eigenvectors of the system matrix • POD or SVD or KLD or PCA • Use Krylov Subspace Vectors (Moment Matching) • general Krylov subspace methods • case 1: Arnoldi • case 2: PVL • case 3: multipoint moment matching • moment matching preserving passivity: PRIMA • Use Singular Vectors of the System Gramians Product (Truncated Balanced Realization)

  37. Sufficient conditions for passivity • Sufficient condition: A is negative semidefinite • Note that this is NOT a necessary condition (a common misconception)

  38. Example: finite-difference system from the Poisson equation (heat problem, heat flowing in at one end) • We already know the finite-difference matrix is positive semidefinite • Hence A, or E = A^-1, is negative semidefinite

  39. Sufficient conditions for passivity • Sufficient condition: E is negative semidefinite • Note that this is NOT a necessary condition (a common misconception)

  40. Congruence transformations preserve negative (or positive) semidefiniteness • Def.: a congruence transformation applies the same matrix U on both sides, E -> UT E U • Note: case #1 in the projection framework (V = U) produces congruence transformations • Lemma: a congruence transformation preserves the negative (or positive) semidefiniteness of the matrix • Proof: xT (UT E U) x = (Ux)T E (Ux); just rename y = Ux
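
A quick numerical check of the lemma in Python (toy sizes, random data, names illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    n, q = 8, 3
    X = rng.standard_normal((n, n))
    E = -X @ X.T                            # negative semidefinite by construction
    U = rng.standard_normal((n, q))         # any n x q matrix (need not be orthonormal)
    Eq = U.T @ E @ U                        # congruence transformation
    print(np.all(np.linalg.eigvalsh(Eq) <= 1e-12))   # True: Eq is still negative semidefinite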

  41. Congruence transformation preserves negative semidefiniteness of E (hence passivity and stability) • If we use V = U • Then we lose half of the degrees of freedom, i.e. we match only q moments instead of 2q • But if the original matrix E is negative semidefinite, so is the reduced one; hence the reduced system is passive and stable

  42. Sufficient conditions for passivity • Sufficient conditions: E is positive semidefinite and A is negative semidefinite • Note that these are NOT necessary conditions (a common misconception)

  43. Example: state-space model from MNA of R, L, C circuits • Lemma (when using MNA): A is negative semidefinite if and only if the condition on the slide holds • For immittance systems in MNA form, A is negative semidefinite and E is positive semidefinite

  44. PRIMA (for preserving passivity) (Odabasioglu, Celik, Pileggi, TCAD 1998) • A different implementation of case #1: V = U, UTU = I, Arnoldi Krylov projection framework • Use Arnoldi: numerically very stable

  45. PRIMA preserves passivity • The main difference between case #1 and PRIMA: • case #1 applies the projection framework to the canonical-form matrices • PRIMA applies the projection framework to the original MNA matrices • PRIMA preserves passivity because • it uses Arnoldi, so that U = V and the projection becomes a congruence transformation • E and -A produced by electromagnetic analysis are typically positive semidefinite • the input matrix must be equal to the output matrix
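
A compact sketch of a PRIMA-style reduction, assuming the MNA form C dx/dt = -G x + B u with a single input, dense solves, and QR in place of Arnoldi for brevity (the matrix names are assumptions; C and G being positive semidefinite is what makes the congruence step passivity-preserving):

    import numpy as np

    def prima_reduce(C, G, B, q, s0=0.0):
        """Reduce C dx/dt = -G x + B u by congruence with an orthonormal Krylov basis."""
        M = G + s0 * C
        v = np.linalg.solve(M, B[:, 0])             # starting vector (single input assumed)
        cols = [v]
        for _ in range(q - 1):
            v = np.linalg.solve(M, C @ v)           # Krylov vectors of (G + s0*C)^{-1} C
            cols.append(v)
        U, _ = np.linalg.qr(np.column_stack(cols))  # orthonormalize (Arnoldi in practice)
        Cq = U.T @ C @ U                            # congruence: preserves semidefiniteness
        Gq = U.T @ G @ U
        Bq = U.T @ B
        return Cq, Gq, Bq, U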

  46. Algebraic proof of moment matching for PRIMA, expansion at s=0, V=U, orthonormal UTU=I • Lemma used: if U is orthonormal (UTU=I) and b is a vector lying in the span of the columns of U, then U UT b = b

  47. Proof of the lemma

  48. Compare methods

  49. Conclusions • Reduction via eigenmodes • expensive and inefficient • Reduction via rational function fitting (point matching) • inaccurate in between points, numerically ill-conditioned • Reduction via quasi-convex optimization • quite efficient and accurate • Reduction via moment matching: Padé approximations • better behavior, but covers a small frequency band • numerically very ill-conditioned • Reduction via moment matching: Krylov subspace projection framework • allows multipoint moment matching (wider frequency band) • numerically very robust and computationally very efficient • use PVL, which is more efficient, for models in the frequency domain • use PRIMA to preserve passivity if the model is for a time-domain simulator

  50. Case study: passive reduced models from an electromagnetic field solver (long coplanar T-line over a dielectric layer, shorted on the other side)
