
Engineering Optimization



  1. Engineering Optimization: Concepts and Applications • Fred van Keulen • Matthijs Langelaar • CLA H21.1 • A.vanKeulen@tudelft.nl

  2. Contents • Constrained Optimization: Optimality conditions recap • Constrained Optimization: Algorithms • Linear programming

  3. Inequality constrained problems • Consider a problem with only inequality constraints (see the formulation below) • At the optimum, only the active constraints matter • Optimality conditions are similar to the equality constrained case [Figure: contours of f with constraints g1, g2, g3 in the (x1, x2) plane]
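The problem statement referenced above, restored in standard form (the slide's equation image was not preserved):

\min_{\mathbf{x}} f(\mathbf{x}) \quad \text{s.t.} \quad g_j(\mathbf{x}) \le 0, \quad j = 1, \dots, m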

  4. Inequality constraints • First order optimality: consider a feasible local variation around the optimum. Since the constraint is active (boundary optimum), a feasible perturbation cannot increase g, and the objective cannot decrease along it (see below).
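The argument, with the equations (lost in extraction) restored in standard notation: for an active constraint g_j at a boundary optimum x*,

g_j(\mathbf{x}^*) = 0 \;\; \text{(boundary optimum)}, \qquad \delta g_j = \nabla g_j^T \delta\mathbf{x} \le 0 \;\; \text{(feasible perturbation)},

and first order optimality requires \delta f = \nabla f^T \delta\mathbf{x} \ge 0 for every such feasible perturbation.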

  5. g2 x2 g1 f -f x1 • Interpretation: negative gradient (descent direction) lies in cone spanned by positive constraint gradients -f Optimality condition • Multipliers must be non-negative:

  6. Optimality condition (2) • Descent direction: −∇f • Feasible direction: any direction within the feasible cone • Equivalent interpretation: no descent direction exists within the cone of feasible directions [Figure: feasible cone at the optimum formed by g1 and g2, with −∇f outside it, in the (x1, x2) plane]

  7. Karush-Kuhn-Tucker conditions • First order optimality conditions for the constrained problem, stated in terms of the Lagrangian (see below):
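The Lagrangian and the KKT conditions in standard form, for min f(x) subject to h_k(x) = 0 and g_j(x) ≤ 0 (restored; the slide images were lost):

L(\mathbf{x}, \boldsymbol{\lambda}, \boldsymbol{\mu}) = f(\mathbf{x}) + \sum_k \lambda_k h_k(\mathbf{x}) + \sum_j \mu_j g_j(\mathbf{x})

\nabla f + \sum_k \lambda_k \nabla h_k + \sum_j \mu_j \nabla g_j = \mathbf{0}, \qquad h_k(\mathbf{x}) = 0, \qquad g_j(\mathbf{x}) \le 0,

\mu_j \ge 0, \qquad \mu_j \, g_j(\mathbf{x}) = 0 \;\; \text{(complementarity)}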

  8. Sufficiency • KKT conditions are necessary conditions for local constrained minima • For sufficiency, consider second order conditions based on the active constraints: positive curvature of the Lagrangian on the tangent subspace of h and the active g • Special case: convex objective & convex feasible region: KKT conditions are sufficient for global optimality

  9. Significance of multipliers • Consider the case where the optimization problem depends on a parameter a • Form the Lagrangian and write down the KKT conditions; we are looking for the sensitivity of the optimal objective value to a (worked out below)
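A compact version of the derivation, restored under one common sign convention (the slide's own equations were not preserved): perturb an active constraint to g_j(x) ≤ a_j and write L = f + Σ_j μ_j (g_j − a_j). At the optimum,

\frac{d f^*}{d a_j} = \frac{\partial L}{\partial a_j} = -\mu_j \le 0,

so raising a_j (relaxing the constraint) lowers the attainable optimum.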

  10. Significance of multipliers (3) • Lagrange multipliers describe the sensitivity of the objective to changes in the constraints • Multipliers give the “price of raising the constraint” • Note, this makes it logical that at an optimum, multipliers of inequality constraints must be positive! • Similar equations can be derived for multiple constraints and inequalities

  11. Contents • Constrained Optimization: Optimality Criteria • Constrained Optimization: Algorithms • Linear Programming

  12. Constrained optimization methods • Approaches: • Transformation methods (penalty / barrier functions) plus unconstrained optimization algorithms • Random methods / Simplex-like methods • Feasible direction methods • Reduced gradient methods • Approximation methods (SLP, SQP) • Note, constrained problems can also have interior optima!

  13. Augmented Lagrangian method • Recall the penalty method (written out below): • Disadvantages: • High penalty factor needed for accurate results • High penalty factor causes ill-conditioning, slow convergence
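The penalty formulation being recalled, restored in a common form for equality constraints h_k(x) = 0 (the slide image was lost):

\phi(\mathbf{x}, p) = f(\mathbf{x}) + p \sum_k h_k(\mathbf{x})^2,

where the exact constrained optimum is only recovered as the penalty factor p → ∞, which is the source of the ill-conditioning listed above.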

  14. Augmented Lagrangian method • Basic idea: • Add a penalty term to the Lagrangian • Use estimates and updates of the multipliers (see the sketch below) • Also possible for inequality constraints • Multiplier update rules determine convergence • Exact convergence for moderate values of p
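In a common form for equality constraints (the slide's own equations were not preserved), the augmented Lagrangian and the first order multiplier update read

L_A(\mathbf{x}, \boldsymbol{\lambda}, p) = f(\mathbf{x}) + \sum_k \lambda_k h_k(\mathbf{x}) + p \sum_k h_k(\mathbf{x})^2, \qquad \lambda_k \leftarrow \lambda_k + 2 p \, h_k(\mathbf{x}).

A minimal runnable sketch of this loop; the test problem (f, h), the starting point, and the value of p are illustrative assumptions, not the lecture's example:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0]**2 + 2.0 * x[1]**2          # illustrative objective

def h(x):
    return np.array([x[0] + x[1] - 1.0])    # illustrative equality constraint h(x) = 0

def aug_lagrangian(x, lam, p):
    hx = h(x)
    return f(x) + lam @ hx + p * (hx @ hx)  # f + lam^T h + p ||h||^2

x, lam, p = np.zeros(2), np.zeros(1), 10.0
for _ in range(10):
    # Inner loop: unconstrained minimization of the augmented Lagrangian
    x = minimize(aug_lagrangian, x, args=(lam, p)).x
    # First order multiplier update: lam <- lam + 2 p h(x)
    lam = lam + 2.0 * p * h(x)
    if np.linalg.norm(h(x)) < 1e-8:
        break

print(x, lam)  # converges to x = (2/3, 1/3), lam = -4/3 for this test problem
```

Because the multipliers carry the constraint information, p can stay moderate, avoiding the ill-conditioning of the pure penalty approach.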

  15. Contents • Constrained Optimization: Optimality Criteria • Constrained Optimization: Algorithms • Augmented Lagrangian • Feasible directions methods • Reduced gradient methods • Approximation methods • SQP • Linear Programming

  16. Feasible direction methods • Moving along the boundary • Rosen’s gradient projection method • Zoutendijk’s method of feasible directions • Basic idea: • move along steepest descent direction until constraints are encountered • step direction obtained by projecting steepest descent direction on tangent plane • repeat until KKT point is found

  17. Gradient projection method • For simplicity, consider a linear equality constrained problem • Iterations follow the constraint boundary h = 0 • For nonlinear constraints, mapping back to the constraint surface is needed, in the normal space [Figure: iterates following the surface h = 0 in (x1, x2, x3) space]

  18. Gradient projection method (2) • Recall, for linear constraints h(x) = Ax − b: • Tangent space: directions s with As = 0 • Normal space: directions spanned by the rows of A • Projection: decompose a vector into its tangent and normal components

  19. Gradient projection method (3) • Search direction in the tangent space, obtained with the projection matrix (see below) • Nonlinear case: A is replaced by the constraint Jacobian at the current point • Correction in the normal space
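The formulas referenced above, restored in a standard form for linear constraints (the slide images were lost):

P = I - A^T (A A^T)^{-1} A, \qquad \mathbf{s} = -P \, \nabla f,

so s is the steepest descent direction projected onto the tangent space of the constraints.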

  20. Correction to constraint boundary • Correction in the normal subspace, e.g. using Newton iterations based on a first order Taylor approximation of h (see below) [Figure: step from xk along sk to x'k+1, then correction back to the boundary at xk+1]
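The correction step in standard form (restored): linearizing the constraints around the predicted point gives

h(\mathbf{x} + \delta\mathbf{x}) \approx h(\mathbf{x}) + A \, \delta\mathbf{x} = \mathbf{0} \quad \Rightarrow \quad \delta\mathbf{x} = -A^T (A A^T)^{-1} h(\mathbf{x}),

iterated until ‖h‖ is sufficiently small. The correction lies in the normal space, so the tangent-space progress is preserved to first order.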

  21. Practical aspects • How to deal with inequality constraints? • Use an active set strategy: • Keep a set of active inequality constraints • Treat these as equality constraints • Update the set regularly (heuristic rules) • In the gradient projection method, if s = 0: • Check the multipliers: this could be a KKT point • If any μi < 0, that constraint is inactive and can be removed from the active set

  22. Slack variables • Alternative way of dealing with inequality constraints: using slack variables (see below) • Disadvantages: all constraints are considered all the time, and the number of design variables increases
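One common slack-variable form (the slide's exact formulation was not preserved); squared slacks turn each inequality into an equality without needing sign constraints on s_j:

g_j(\mathbf{x}) \le 0 \quad \longrightarrow \quad g_j(\mathbf{x}) + s_j^2 = 0, \quad j = 1, \dots, m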

  23. Zoutendijk’s feasible directions • Basic idea: • move along the steepest descent direction until constraints are encountered • at the constraint surface, solve a subproblem to find a direction that is both descending and feasible (see below) • repeat until a KKT point is found
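The subproblem, restored in a standard form consistent with the next slide (where a = 0 signals a KKT point); the normalization −1 ≤ s_i ≤ 1 is one common choice:

\max_{\mathbf{s},\, a} \; a \quad \text{s.t.} \quad \nabla f^T \mathbf{s} + a \le 0 \;\; \text{(descending)}, \qquad \nabla g_j^T \mathbf{s} + a \le 0 \;\; \text{for active } j \;\; \text{(feasible)}, \qquad -1 \le s_i \le 1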

  24. Zoutendijk’s method • The subproblem is linear: efficiently solved • Determine the active set before solving the subproblem! • When a = 0: a KKT point has been found • The method needs a feasible starting point.

  25. Contents • Constrained Optimization: Optimality Criteria • Constrained Optimization: Algorithms • Augmented Lagrangian • Feasible directions methods • Reduced gradient methods • Approximation methods • SQP • Linear Programming

  26. Reduced gradient methods • Basic idea: • Choose a set of n − m decision variables d • Use the reduced gradient in an unconstrained gradient-based method • Recall the reduced gradient (written out below) • The state variables s can be determined from the constraints (iteratively for nonlinear constraints)
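The reduced gradient in standard notation (restored; the slide images were lost): partition x into decision variables d and state variables s, with the m constraints h(d, s) = 0 eliminating s. Then

\frac{df}{d\mathbf{d}} = \frac{\partial f}{\partial \mathbf{d}} - \frac{\partial f}{\partial \mathbf{s}} \left( \frac{\partial \mathbf{h}}{\partial \mathbf{s}} \right)^{-1} \frac{\partial \mathbf{h}}{\partial \mathbf{d}}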

  27. Reduced gradient method • Nonlinear constraints: Newton iterations, repeated until convergence, to return to the constraint surface (determine s; see below) • Variants using 2nd order information also exist • Drawback: selection of the decision variables (but some procedures exist)
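The Newton iteration for the state variables, in the same notation (restored):

\mathbf{s} \leftarrow \mathbf{s} - \left( \frac{\partial \mathbf{h}}{\partial \mathbf{s}} \right)^{-1} \mathbf{h}(\mathbf{d}, \mathbf{s}), \quad \text{until } \|\mathbf{h}\| \text{ is sufficiently small}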

  28. Contents • Constrained Optimization: Optimality Criteria • Constrained Optimization: Algorithms • Augmented Lagrangian • Feasible directions methods • Reduced gradient methods • Approximation methods • SQP • Linear Programming

  29. Approximation methods • SLP: Sequential Linear Programming • Solving a series of linear approximate problems (see below) • Efficient methods for linearly constrained problems are available
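The linear subproblem at iterate x_k, restored in a standard form (move limits are discussed two slides ahead):

\min_{\Delta\mathbf{x}} \; f(\mathbf{x}_k) + \nabla f^T \Delta\mathbf{x} \quad \text{s.t.} \quad g_j(\mathbf{x}_k) + \nabla g_j^T \Delta\mathbf{x} \le 0, \qquad h_i(\mathbf{x}_k) + \nabla h_i^T \Delta\mathbf{x} = 0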

  30. SLP 1-D illustration • SLP iterations approach the convex feasible domain from the outside [Figure: 1-D example with objective f and constraint g; successive iterates x = 0.8 and x = 0.988 approach the constraint boundary]

  31. SLP points of attention • Solves an LP problem in every cycle: efficient only when the analysis cost is relatively high • Tendency to diverge • Solution: trust region (move limits) [Figure: diverging SLP iterates in the (x1, x2) plane]

  32. SLP points of attention (2) • An infeasible starting point can result in an unsolvable LP problem • Solution: relax the constraints in the first cycles, with a weight k sufficiently large to force the solution into the feasible region

  33. SLP points of attention (3) • Cycling can occur when the optimum lies on a curved constraint • Solution: move limit reduction strategy [Figure: iterates cycling around an optimum of f on a curved constraint in the (x1, x2) plane]

  34. Method of Moving Asymptotes • First order method, by Svanberg (1987) • Builds a convex approximate problem, approximating the responses using moving asymptotes (see below) • Approximate problem solved efficiently • Popular method in topology optimization
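The MMA approximation of each response, in its usual form; the coefficients p_i and q_i are fitted to the value and gradient of the response at the current iterate:

\tilde{f}(\mathbf{x}) = r + \sum_i \left( \frac{p_i}{U_i - x_i} + \frac{q_i}{x_i - L_i} \right),

with moving asymptotes L_i < x_i < U_i updated between iterations. The approximation is convex and separable, which is what makes the approximate problem cheap to solve.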

  35. Sequential Approximate Optimization • Zeroth order method: • 1. Determine initial trust region • 2. Generate sampling points (design of experiments) • 3. Build response surface (e.g. Least Squares, Kriging, …) • 4. Optimize approximate problem • 5. Check convergence, update trust region, repeat from 2 • Many variants! • See also Lecture 4

  36. Sequential Approximate Optimization • Good approach for expensive models • The response surface dampens noise • Versatile [Figure: design domain with trust region, response surface, a sub-optimal point and the optimum]

  37. Contents • Constrained Optimization: Optimality Criteria • Constrained Optimization: Algorithms • Augmented Lagrangian • Feasible directions methods • Reduced gradient methods • Approximation methods • SQP • Linear Programming

  38. SQP • SQP: Sequential Quadratic Programming • Newton method used to solve the KKT conditions

  39. SQP (2) • Newton iteration for the KKT system (written out below):
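For the equality constrained case, the Newton step has the standard form (restored; the slide images were lost): with Lagrangian L = f + λᵀh, each iteration solves

\begin{bmatrix} \nabla_{xx}^2 L & \nabla \mathbf{h} \\ \nabla \mathbf{h}^T & \mathbf{0} \end{bmatrix} \begin{bmatrix} \mathbf{s}_k \\ \Delta\boldsymbol{\lambda} \end{bmatrix} = - \begin{bmatrix} \nabla_x L \\ \mathbf{h} \end{bmatrix}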

  40. SQP (3) • Note: the Newton step above is exactly the KKT system of a quadratic subproblem for finding the search direction sk (see below)
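The quadratic subproblem in standard SQP form (restored):

\min_{\mathbf{s}} \; \tfrac{1}{2} \mathbf{s}^T \nabla_{xx}^2 L \, \mathbf{s} + \nabla f^T \mathbf{s} \quad \text{s.t.} \quad \mathbf{h} + \nabla \mathbf{h}^T \mathbf{s} = \mathbf{0},

with inequalities handled analogously through their linearizations g_j + ∇g_jᵀ s ≤ 0.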

  41. Quadratic subproblem • A quadratic subproblem with linear constraints can be solved efficiently • Equality constrained case: writing the KKT condition of the subproblem gives a linear system whose solution yields sk and the new multipliers • General case: with linearized inequality constraints, a QP solver with an active set strategy is used

  42. Basic SQP algorithm • 1. Choose an initial point x0 and initial multiplier estimates λ0 • 2. Set up the matrices for the QP subproblem • 3. Solve the QP subproblem → sk, λk+1 • 4. Set xk+1 = xk + sk • 5. Check convergence criteria: finished if satisfied, otherwise repeat from 2 (a runnable sketch follows below)
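A minimal runnable sketch of this loop for an equality constrained problem. The test problem (f, h), the starting point, and the use of the exact Hessian of the Lagrangian are illustrative assumptions; a practical implementation would use the quasi-Newton updates and line search of the next slide:

```python
import numpy as np

def f(x):        return x[0]**2 + 2.0 * x[1]**2        # illustrative objective
def grad_f(x):   return np.array([2.0 * x[0], 4.0 * x[1]])
def h(x):        return np.array([x[0] + x[1] - 1.0])  # illustrative constraint h(x) = 0
def jac_h(x):    return np.array([[1.0, 1.0]])
def hess_L(x, lam):
    return np.diag([2.0, 4.0])   # Hessian of the Lagrangian (h is linear here)

x, lam = np.array([5.0, -3.0]), np.zeros(1)
for k in range(20):
    A, J = hess_L(x, lam), jac_h(x)
    n, m = len(x), len(lam)
    # KKT system of the QP subproblem: [A J^T; J 0] [s; lam_new] = [-grad_f; -h]
    K = np.block([[A, J.T], [J, np.zeros((m, m))]])
    rhs = np.concatenate([-grad_f(x), -h(x)])
    sol = np.linalg.solve(K, rhs)
    s, lam = sol[:n], sol[n:]
    x = x + s                        # step 4: x_{k+1} = x_k + s_k
    if np.linalg.norm(s) < 1e-10:    # step 5: convergence check
        break

print(x, lam)  # for this test problem: x = (2/3, 1/3), lam = -4/3
```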

  43. SQP refinements • For convergence of the Newton method, the Hessian of the Lagrangian must be positive definite • To avoid computing Hessian information, quasi-Newton approaches (DFP, BFGS) can be used (these also ensure positive definiteness) • Line search along sk improves robustness • For dealing with inequality constraints, various active set strategies exist

  44. Comparison

  Method                     AugLag   Zoutendijk   GRG   SQP
  Feasible starting point?   No       Yes          Yes   No
  Nonlinear constraints?     Yes      Yes          Yes   Yes
  Equality constraints?      Yes      Hard         Yes   Yes
  Uses active set?           Yes      Yes          No    Yes
  Iterates feasible?         No       Yes          No    No
  Derivatives needed?        Yes      Yes          Yes   Yes

  SQP is generally seen as the best general-purpose method for constrained problems

  45. Contents • Constrained Optimization: Optimality Criteria • Constrained Optimization: Algorithms • Linear programming

  46. Linear programming problem • Linear objective and constraint functions:
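A common statement of the problem (the slide's equation image was lost):

\min_{\mathbf{x}} \; \mathbf{c}^T \mathbf{x} \quad \text{s.t.} \quad A\mathbf{x} \le \mathbf{b}, \qquad \mathbf{x} \ge \mathbf{0}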

  47. Feasible domain • Each linear constraint divides the design space into two convex half-spaces • Feasible domain = intersection of convex half-spaces • Result: X is a convex polyhedron [Figure: intersecting half-planes forming a polyhedral feasible region in the (x1, x2) plane]

  48. Global optimality • Convex (here: linear) objective function on a convex feasible domain: a KKT point is a global optimum • KKT conditions (see below):
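The KKT conditions for the LP form given on slide 46, restored in standard form (with multipliers μ for Ax ≤ b and ν for −x ≤ 0):

\mathbf{c} + A^T \boldsymbol{\mu} - \boldsymbol{\nu} = \mathbf{0}, \qquad \boldsymbol{\mu} \ge \mathbf{0}, \;\; \boldsymbol{\nu} \ge \mathbf{0}, \qquad \boldsymbol{\mu}^T (A\mathbf{x} - \mathbf{b}) = 0, \;\; \boldsymbol{\nu}^T \mathbf{x} = 0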
