Engineering Optimization: Concepts and Applications • Fred van Keulen • Matthijs Langelaar • CLA H21.1 • A.vanKeulen@tudelft.nl
Contents • Constrained Optimization: Optimality conditions recap • Constrained Optimization: Algorithms • Linear programming
Inequality constrained problems
• Consider a problem with only inequality constraints: $\min_{\mathbf{x}} f(\mathbf{x})$ s.t. $g_j(\mathbf{x}) \le 0,\ j = 1, \dots, m$
• At the optimum, only the active constraints matter (figure: contours of $f$ with constraints $g_1$, $g_2$, $g_3$ in the $(x_1, x_2)$-plane)
• Optimality conditions are similar to the equality constrained case
Inequality constraints
• First order optimality: consider a feasible local variation $\delta\mathbf{x}$ around the optimum
• Since $g_j(\mathbf{x}^*) = 0$ for each active constraint (boundary optimum) and $\nabla g_j^T \delta\mathbf{x} \le 0$ (feasible perturbation), optimality requires $\nabla f^T \delta\mathbf{x} \ge 0$ for all feasible $\delta\mathbf{x}$
Optimality condition
• At a boundary optimum: $\nabla f + \sum_j \mu_j \nabla g_j = \mathbf{0}$, where the multipliers must be non-negative: $\mu_j \ge 0$
• Interpretation: the negative gradient $-\nabla f$ (descent direction) lies in the cone spanned by positive multiples of the active constraint gradients (figure: $-\nabla f$ between $\nabla g_1$ and $\nabla g_2$ in the $(x_1, x_2)$-plane)
Optimality condition (2)
• Descent direction: $\mathbf{s}^T \nabla f < 0$
• Feasible direction: $\mathbf{s}^T \nabla g_j \le 0$ for all active constraints $g_j$
• Equivalent interpretation: no descent direction exists within the cone of feasible directions (figure: descent directions around $-\nabla f$ and the feasible cone do not overlap)
Karush-Kuhn-Tucker conditions
• First order optimality conditions for the constrained problem, with Lagrangian $L(\mathbf{x}, \boldsymbol{\lambda}, \boldsymbol{\mu}) = f + \sum_i \lambda_i h_i + \sum_j \mu_j g_j$:
• Stationarity: $\nabla f + \sum_i \lambda_i \nabla h_i + \sum_j \mu_j \nabla g_j = \mathbf{0}$
• Feasibility: $h_i = 0$, $g_j \le 0$
• Complementary slackness: $\mu_j g_j = 0$, with $\mu_j \ge 0$
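As an illustration, the KKT conditions can be checked numerically at a candidate point. A minimal sketch (the toy problem, the least-squares multiplier fit, and the tolerances are illustrative assumptions, not from the slides):

```python
import numpy as np

# Toy problem: min f(x) = x1^2 + x2^2  s.t.  g(x) = 1 - x1 - x2 <= 0
# Candidate optimum: x* = (0.5, 0.5), where g is active: g(x*) = 0.

def grad_f(x):
    return np.array([2 * x[0], 2 * x[1]])

def grad_g(x):
    return np.array([-1.0, -1.0])

x_star = np.array([0.5, 0.5])

# Stationarity: grad_f + mu * grad_g = 0  ->  solve for mu by least squares
mu, *_ = np.linalg.lstsq(grad_g(x_star).reshape(-1, 1),
                         -grad_f(x_star), rcond=None)
residual = grad_f(x_star) + mu[0] * grad_g(x_star)

print("mu =", mu[0])                       # must be >= 0 (KKT)
print("stationarity residual:", residual)  # must be ~0
```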
Sufficiency
• KKT conditions are necessary conditions for local constrained minima
• For sufficiency, additionally require the second order condition based on the active constraints: positive definiteness of the Hessian of the Lagrangian on the tangent subspace of $h$ and the active $g$
• Special case: convex objective & convex feasible region: KKT conditions are sufficient for global optimality
Significance of multipliers
• Consider the case where the optimization problem depends on a parameter $a$: $\min_{\mathbf{x}} f(\mathbf{x})$ s.t. $h(\mathbf{x}) = a$
• Lagrangian: $L = f + \lambda (h - a)$
• KKT: $\nabla f + \lambda \nabla h = \mathbf{0}$
• Looking for: the sensitivity $\mathrm{d}f^*/\mathrm{d}a$ of the optimal objective to $a$
Significance of multipliers (3)
• Lagrange multipliers describe the sensitivity of the objective to changes in the constraints: $\mathrm{d}f^*/\mathrm{d}a = -\lambda$
• Multipliers give the “price of raising the constraint”
• Note: this makes it logical that at an optimum, multipliers of inequality constraints must be non-negative!
• Similar equations can be derived for multiple constraints and inequalities
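A small worked example (not from the slides) illustrates the sensitivity relation:

```latex
\min_{\mathbf{x}} \; f = x_1^2 + x_2^2 \quad \text{s.t.} \quad h(\mathbf{x}) = x_1 + x_2 = a,
\qquad L = f + \lambda (h - a)
\begin{align}
  \nabla_{\mathbf{x}} L = \mathbf{0} &\;\Rightarrow\; 2x_1 + \lambda = 0,\;\; 2x_2 + \lambda = 0
                  \;\Rightarrow\; x_1 = x_2 = -\tfrac{\lambda}{2},\\
  h = a           &\;\Rightarrow\; -\lambda = a \;\Rightarrow\; \lambda = -a,\\
  f^* &= 2\left(\tfrac{a}{2}\right)^2 = \tfrac{a^2}{2}
        \;\Rightarrow\; \frac{\mathrm{d}f^*}{\mathrm{d}a} = a = -\lambda .
\end{align}
```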
Contents • Constrained Optimization: Optimality Criteria • Constrained Optimization: Algorithms • Linear Programming
Constrained optimization methods
• Approaches:
• Transformation methods (penalty / barrier functions) plus unconstrained optimization algorithms
• Random methods / Simplex-like methods
• Feasible direction methods
• Reduced gradient methods
• Approximation methods (SLP, SQP)
• Note: constrained problems can also have interior optima!
Augmented Lagrangian method
• Recall the penalty method: $\phi(\mathbf{x}, p) = f(\mathbf{x}) + p \sum_i h_i(\mathbf{x})^2$
• Disadvantages:
• High penalty factor needed for accurate results
• High penalty factor causes ill-conditioning, slow convergence
Augmented Lagrangian method
• Basic idea:
• Add a penalty term to the Lagrangian: $A(\mathbf{x}, \boldsymbol{\lambda}, p) = f + \sum_i \lambda_i h_i + p \sum_i h_i^2$
• Use estimates and updates of the multipliers, e.g. $\lambda_i \leftarrow \lambda_i + 2 p \, h_i(\mathbf{x})$
• Also possible for inequality constraints
• Multiplier update rules determine convergence
• Exact convergence for moderate values of $p$
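A minimal sketch of the augmented Lagrangian loop on a toy equality-constrained problem (the problem, the fixed penalty factor, and the use of scipy's BFGS minimizer for the subproblems are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: min f(x) = x1^2 + 2*x2^2  s.t.  h(x) = x1 + x2 - 1 = 0
f = lambda x: x[0]**2 + 2 * x[1]**2
h = lambda x: x[0] + x[1] - 1

p, lam = 1.0, 0.0        # moderate penalty factor, multiplier estimate
x = np.zeros(2)

for k in range(20):
    # Augmented Lagrangian: A(x) = f + lam*h + p*h^2
    A = lambda x: f(x) + lam * h(x) + p * h(x)**2
    x = minimize(A, x, method="BFGS").x   # unconstrained subproblem
    lam += 2 * p * h(x)                   # multiplier update
    if abs(h(x)) < 1e-8:
        break

print(x, lam)   # expect x ~ (2/3, 1/3), lam ~ -4/3
```

Note that $p$ stays moderate throughout; the multiplier updates, not an ever-growing penalty, drive the constraint error to zero.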
Contents • Constrained Optimization: Optimality Criteria • Constrained Optimization: Algorithms • Augmented Lagrangian • Feasible directions methods • Reduced gradient methods • Approximation methods • SQP • Linear Programming
Feasible direction methods • Moving along the boundary • Rosen’s gradient projection method • Zoutendijk’s method of feasible directions • Basic idea: • move along steepest descent direction until constraints are encountered • step direction obtained by projecting steepest descent direction on tangent plane • repeat until KKT point is found
1. Gradient projection method
• For simplicity, consider a linear equality constrained problem: $\min_{\mathbf{x}} f(\mathbf{x})$ s.t. $\mathbf{A}\mathbf{x} = \mathbf{b}$
• Iterations follow the constraint boundary $h = 0$ (figure: iterates on the constraint surface in $(x_1, x_2, x_3)$-space)
• For nonlinear constraints, mapping back to the constraint surface is needed, in the normal space
Gradient projection method (2)
• Recall:
• Tangent space: $\{\mathbf{s} : \mathbf{A}\mathbf{s} = \mathbf{0}\}$ (null space of $\mathbf{A}$)
• Normal space: spanned by the rows of $\mathbf{A}$ (the constraint gradients)
• Projection: decompose $-\nabla f$ into a tangent and a normal component
Gradient projection method (3)
• Search direction in tangent space: $\mathbf{s} = -\mathbf{P} \nabla f$
• Projection matrix: $\mathbf{P} = \mathbf{I} - \mathbf{A}^T (\mathbf{A}\mathbf{A}^T)^{-1} \mathbf{A}$
• Nonlinear case: use the constraint Jacobian $\mathbf{A} = \nabla \mathbf{h}^T$ evaluated at $\mathbf{x}_k$
• Correction in normal space needed to return to the constraint surface
Correction to constraint boundary
• Correction in the normal subspace, e.g. using Newton iterations
• First order Taylor approximation: $\mathbf{h}(\mathbf{x} + \delta\mathbf{x}) \approx \mathbf{h}(\mathbf{x}) + \mathbf{A}\,\delta\mathbf{x} = \mathbf{0}$, with $\delta\mathbf{x}$ in the normal space
• Iterations: $\mathbf{x} \leftarrow \mathbf{x} - \mathbf{A}^T (\mathbf{A}\mathbf{A}^T)^{-1} \mathbf{h}(\mathbf{x})$ until $\mathbf{h} \approx \mathbf{0}$ (figure: step $\mathbf{s}_k$ from $\mathbf{x}_k$ to $\mathbf{x}'_{k+1}$, corrected to $\mathbf{x}_{k+1}$ on the boundary)
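A minimal sketch of gradient projection with the Newton correction above (the toy constraint, the fixed step size, and the tolerances are illustrative assumptions):

```python
import numpy as np

# Toy problem: min f(x) = x1^2 + x2^2 + x3^2  s.t.  h(x) = x1 + x2 + x3 - 1 = 0
grad_f = lambda x: 2 * x
h      = lambda x: np.array([x.sum() - 1])
A      = lambda x: np.ones((1, 3))          # constraint Jacobian dh/dx

x = np.array([1.0, 0.0, 0.0])               # feasible starting point
for k in range(50):
    Ak = A(x)
    P  = np.eye(3) - Ak.T @ np.linalg.solve(Ak @ Ak.T, Ak)  # projection matrix
    s  = -P @ grad_f(x)                     # search direction in tangent space
    if np.linalg.norm(s) < 1e-10:
        break                               # possible KKT point: check multipliers
    x = x + 0.1 * s                         # fixed step (line search in practice)
    # Newton correction back to the constraint surface (trivial here since h
    # is linear; essential for nonlinear constraints):
    while np.linalg.norm(h(x)) > 1e-12:
        x = x - A(x).T @ np.linalg.solve(A(x) @ A(x).T, h(x))

print(x)   # expect x ~ (1/3, 1/3, 1/3)
```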
Practical aspects
• How to deal with inequality constraints?
• Use an active set strategy:
• Keep a set of active inequality constraints
• Treat these as equality constraints
• Update the set regularly (heuristic rules)
• In the gradient projection method, if $\mathbf{s} = \mathbf{0}$:
• Check multipliers: could be a KKT point
• If any $\mu_i < 0$, this constraint is inactive and can be removed from the active set
Slack variables
• Alternative way of dealing with inequality constraints: add slack variables $s_j$ and replace $g_j(\mathbf{x}) \le 0$ by the equality constraint $g_j(\mathbf{x}) + s_j^2 = 0$
• Disadvantages: all constraints are considered all the time, and the number of design variables increases
2. Zoutendijk’s feasible directions
• Basic idea:
• Move along the steepest descent direction until constraints are encountered
• At the constraint surface, solve a subproblem to find a descending feasible direction
• Repeat until a KKT point is found
• Subproblem: $\max_{\mathbf{s}, \alpha} \alpha$, with descending: $\mathbf{s}^T \nabla f \le -\alpha$, and feasible: $\mathbf{s}^T \nabla g_j \le -\alpha$ for active $g_j$, with $\mathbf{s}$ normalized (e.g. $-1 \le s_i \le 1$)
Zoutendijk’s method
• Subproblem is linear: efficiently solved (see the sketch below)
• Determine the active set before solving the subproblem!
• When $\alpha = 0$: KKT point found
• Method needs a feasible starting point
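A minimal sketch of the direction-finding subproblem as an LP, using scipy's linprog (the toy gradients and the box normalization are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

# Find (s, alpha) maximizing alpha s.t.
#   s^T grad_f <= -alpha   (descending)
#   s^T grad_g <= -alpha   (feasible, for each active constraint)
#   -1 <= s_i <= 1         (normalization)
grad_f = np.array([1.0, 0.0])     # toy objective gradient at current point
grad_g = np.array([-1.0, -1.0])   # toy active constraint gradient

# Variables z = (s1, s2, alpha); linprog minimizes c^T z, so c = (0, 0, -1)
c = np.array([0.0, 0.0, -1.0])
A_ub = np.array([
    [grad_f[0], grad_f[1], 1.0],  # s^T grad_f + alpha <= 0
    [grad_g[0], grad_g[1], 1.0],  # s^T grad_g + alpha <= 0
])
b_ub = np.zeros(2)
bounds = [(-1, 1), (-1, 1), (0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
s, alpha = res.x[:2], res.x[2]
print(s, alpha)   # alpha > 0: usable feasible direction; alpha = 0: KKT point
```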
Contents • Constrained Optimization: Optimality Criteria • Constrained Optimization: Algorithms • Augmented Lagrangian • Feasible directions methods • Reduced gradient methods • Approximation methods • SQP • Linear Programming
Reduced gradient methods
• Basic idea:
• Choose a set of $n - m$ decision variables $\mathbf{d}$; the remaining $m$ state variables $\mathbf{s}$ follow from the constraints
• Use the reduced gradient in an unconstrained gradient-based method
• Recall the reduced gradient: $\dfrac{\mathrm{d}f}{\mathrm{d}\mathbf{d}} = \dfrac{\partial f}{\partial \mathbf{d}} - \dfrac{\partial f}{\partial \mathbf{s}} \left(\dfrac{\partial \mathbf{h}}{\partial \mathbf{s}}\right)^{-1} \dfrac{\partial \mathbf{h}}{\partial \mathbf{d}}$
• State variables $\mathbf{s}$ can be determined from $\mathbf{h}(\mathbf{d}, \mathbf{s}) = \mathbf{0}$ (iteratively for nonlinear constraints)
Reduced gradient method
• Nonlinear constraints: Newton iterations to return to the constraint surface (determine $\mathbf{s}$): $\mathbf{s} \leftarrow \mathbf{s} - \left(\dfrac{\partial \mathbf{h}}{\partial \mathbf{s}}\right)^{-1} \mathbf{h}(\mathbf{d}, \mathbf{s})$ until convergence
• Variants using 2nd order information also exist
• Drawback: selection of the decision variables (but some procedures exist)
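A minimal sketch of the reduced gradient iteration (the toy problem, the variable split, and the step size are illustrative assumptions):

```python
# Toy problem: min f = d^2 + s^2  s.t.  h(d, s) = d + s - 1 = 0
# d: decision variable, s: state variable (n - m = 1, m = 1)
h      = lambda d, s: d + s - 1
dh_ds  = lambda d, s: 1.0     # dh/ds
dh_dd  = lambda d, s: 1.0     # dh/dd
df_dd  = lambda d, s: 2 * d
df_ds  = lambda d, s: 2 * s

d, s = 0.0, 0.0
for k in range(100):
    # Newton iterations to satisfy h(d, s) = 0 for fixed d
    # (one step suffices here because h is linear):
    while abs(h(d, s)) > 1e-12:
        s = s - h(d, s) / dh_ds(d, s)
    # Reduced gradient: df/dd - df/ds * (dh/ds)^-1 * dh/dd
    rg = df_dd(d, s) - df_ds(d, s) / dh_ds(d, s) * dh_dd(d, s)
    if abs(rg) < 1e-10:
        break
    d = d - 0.1 * rg          # steepest descent step on the decision variable

print(d, s)   # expect d ~ 0.5, s ~ 0.5
```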
Contents • Constrained Optimization: Optimality Criteria • Constrained Optimization: Algorithms • Augmented Lagrangian • Feasible directions methods • Reduced gradient methods • Approximation methods • SQP • Linear Programming
Approximation methods
• SLP: Sequential Linear Programming
• Solving a series of linear approximate problems
• Efficient methods for linearly constrained problems available
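In each cycle the objective and constraints are linearized at the current iterate $\mathbf{x}_k$; a standard form of the resulting LP subproblem, with move limits $\Delta_l$ (notation reconstructed, not verbatim from the slides):

```latex
\min_{\mathbf{s}} \; f(\mathbf{x}_k) + \nabla f(\mathbf{x}_k)^T \mathbf{s}
\quad \text{s.t.} \quad
g_j(\mathbf{x}_k) + \nabla g_j(\mathbf{x}_k)^T \mathbf{s} \le 0, \quad
h_i(\mathbf{x}_k) + \nabla h_i(\mathbf{x}_k)^T \mathbf{s} = 0, \quad
-\Delta_l \le s_l \le \Delta_l
```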
SLP 1-D illustration
• SLP iterations approach a convex feasible domain from the outside (figure: objective $f$ and constraint $g$ in 1-D, iterates $x = 0.8$, $x = 0.988$)
SLP points of attention
• Solves an LP problem in every cycle: efficient only when analysis cost is relatively high
• Tendency to diverge
• Solution: trust region (move limits)
SLP points of attention (2)
• An infeasible starting point can result in an unsolvable LP problem
• Solution: relaxing the constraints in the first cycles, with a penalty factor $k$ sufficiently large to force the solution into the feasible region
SLP points of attention (3)
• Cycling can occur when the optimum lies on a curved constraint
• Solution: move limit reduction strategy
Method of Moving Asymptotes
• First order method, by Svanberg (1987)
• Builds a convex approximate problem, approximating responses using: $f(\mathbf{x}) \approx r_0 + \sum_i \left( \dfrac{p_i}{U_i - x_i} + \dfrac{q_i}{x_i - L_i} \right)$ with moving asymptotes $L_i < x_i < U_i$
• Approximate problem solved efficiently
• Popular method in topology optimization
Sequential Approximate Optimization
• Zeroth order method (see the sketch after this slide):
1. Determine initial trust region
2. Generate sampling points (design of experiments)
3. Build response surface (e.g. Least Squares, Kriging, …)
4. Optimize approximate problem
5. Check convergence, update trust region, repeat from 2
• Many variants!
• See also Lecture 4
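A minimal sketch of this loop in 1-D, with a least-squares quadratic response surface (the stand-in model, sampling plan, and trust region update rule are illustrative assumptions):

```python
import numpy as np

# "Expensive" model (here a cheap noisy stand-in)
f = lambda x: (x - 0.7)**2 + 0.1 * np.sin(8 * x)

center, radius = 0.0, 0.5          # 1. initial trust region
for cycle in range(10):
    # 2. Sample within the trust region (design of experiments)
    xs = np.linspace(center - radius, center + radius, 5)
    ys = f(xs)
    # 3. Least-squares quadratic response surface: y ~ c[0]*x^2 + c[1]*x + c[2]
    c = np.polyfit(xs, ys, 2)
    # 4. Optimize the approximation within the trust region
    x_new = -c[1] / (2 * c[0]) if c[0] > 0 else center + radius
    x_new = np.clip(x_new, center - radius, center + radius)
    # 5. Update trust region: shrink when the step stalls, recenter
    radius *= 0.7 if abs(x_new - center) < 0.1 * radius else 1.0
    center = x_new
    if radius < 1e-4:
        break

print(center)   # approximate minimizer of the noisy model
```

Note how the quadratic fit smooths the $\sin$ noise: the surrogate, not the raw model, is optimized in each cycle.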
Sequential Approximate Optimization
• Good approach for expensive models
• Response surface dampens noise
• Versatile (figure: response surface over the design domain, with trust region, sub-optimal point and optimum)
Contents • Constrained Optimization: Optimality Criteria • Constrained Optimization: Algorithms • Augmented Lagrangian • Feasible directions methods • Reduced gradient methods • Approximation methods • SQP • Linear Programming
SQP
• SQP: Sequential Quadratic Programming
• Newton method to solve the KKT conditions
• KKT points: $\nabla L(\mathbf{x}, \boldsymbol{\lambda}) = \mathbf{0}$, i.e. $\nabla f + \boldsymbol{\lambda}^T \nabla \mathbf{h} = \mathbf{0}$ and $\mathbf{h} = \mathbf{0}$
• Newton: iteratively solve this system of equations for $(\mathbf{x}, \boldsymbol{\lambda})$
SQP (2)
• Newton step on the KKT system, evaluated at $(\mathbf{x}_k, \boldsymbol{\lambda}_k)$: $\begin{bmatrix} \nabla^2 L & \nabla \mathbf{h} \\ \nabla \mathbf{h}^T & \mathbf{0} \end{bmatrix} \begin{bmatrix} \mathbf{s}_k \\ \Delta\boldsymbol{\lambda} \end{bmatrix} = - \begin{bmatrix} \nabla L \\ \mathbf{h} \end{bmatrix}$
SQP (3)
• Quadratic subproblem for finding the search direction $\mathbf{s}_k$: $\min_{\mathbf{s}} \; \nabla f^T \mathbf{s} + \tfrac{1}{2} \mathbf{s}^T \nabla^2 L \, \mathbf{s}$ s.t. $\mathbf{h} + \nabla \mathbf{h}^T \mathbf{s} = \mathbf{0}$
• Note: the Newton equations above are exactly the KKT conditions of this subproblem
Quadratic subproblem
• A quadratic subproblem with linear constraints can be solved efficiently
• General case: $\min_{\mathbf{s}} \; \mathbf{c}^T \mathbf{s} + \tfrac{1}{2} \mathbf{s}^T \mathbf{Q} \mathbf{s}$ s.t. $\mathbf{A}\mathbf{s} = \mathbf{b}$
• KKT condition: $\begin{bmatrix} \mathbf{Q} & \mathbf{A}^T \\ \mathbf{A} & \mathbf{0} \end{bmatrix} \begin{bmatrix} \mathbf{s} \\ \boldsymbol{\lambda} \end{bmatrix} = \begin{bmatrix} -\mathbf{c} \\ \mathbf{b} \end{bmatrix}$
• Solution: solve this linear system
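A minimal sketch of solving the equality-constrained QP via its KKT system (the toy data are chosen for illustration):

```python
import numpy as np

# QP: min c^T s + 0.5 s^T Q s  s.t.  A s = b
Q = np.array([[2.0, 0.0],
              [0.0, 4.0]])        # positive definite Hessian
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])        # one linear equality constraint
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
# Assemble and solve the KKT system [Q A^T; A 0] [s; lam] = [-c; b]
K   = np.block([[Q, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(K, rhs)
s, lam = sol[:n], sol[n:]

print("s =", s, "lambda =", lam)  # expect s = (2/3, 1/3), lambda = -1/3
```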
Basic SQP algorithm (a sketch of the loop follows below):
1. Choose initial point $\mathbf{x}_0$ and initial multiplier estimates $\boldsymbol{\lambda}_0$
2. Set up matrices for the QP subproblem
3. Solve the QP subproblem $\rightarrow \mathbf{s}_k$, $\boldsymbol{\lambda}_{k+1}$
4. Set $\mathbf{x}_{k+1} = \mathbf{x}_k + \mathbf{s}_k$
5. Check convergence criteria: finished if converged, otherwise repeat from 2
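A minimal sketch of this loop for an equality-constrained toy problem, using the exact Hessian of the Lagrangian (the problem, the starting point close enough for plain Newton, and the absence of line search or quasi-Newton updates are illustrative simplifications):

```python
import numpy as np

# Toy problem: min f = x1^2 + x2^2  s.t.  h = x1*x2 - 1 = 0
grad_f = lambda x: 2 * x
hess_f = 2 * np.eye(2)
h      = lambda x: np.array([x[0] * x[1] - 1])
grad_h = lambda x: np.array([[x[1], x[0]]])   # Jacobian of h (1 x 2)
hess_h = np.array([[0.0, 1.0], [1.0, 0.0]])

x, lam = np.array([1.2, 0.9]), np.zeros(1)
for k in range(30):
    W = hess_f + lam[0] * hess_h              # Hessian of the Lagrangian
    A = grad_h(x)
    # KKT system of the QP subproblem: [W A^T; A 0] [s; lam_new] = [-grad_f; -h]
    K   = np.block([[W, A.T], [A, np.zeros((1, 1))]])
    rhs = np.concatenate([-grad_f(x), -h(x)])
    sol = np.linalg.solve(K, rhs)
    s, lam = sol[:2], sol[2:]                 # step and new multiplier estimate
    x = x + s
    if np.linalg.norm(s) < 1e-10:
        break

print(x, lam)   # expect x ~ (1, 1), lam ~ -2
```

Note that the subproblem multiplier is used directly as the next estimate $\boldsymbol{\lambda}_{k+1}$, matching step 3 of the algorithm.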
SQP refinements
• For convergence of the Newton method, $\nabla^2 L$ must be positive definite
• To avoid computing Hessian information for $\nabla^2 L$, quasi-Newton approaches (DFP, BFGS) can be used (these also ensure positive definiteness)
• Line search along $\mathbf{s}_k$ improves robustness
• For dealing with inequality constraints, various active set strategies exist
Comparison

| Method                   | AugLag | Zoutendijk | GRG | SQP |
|--------------------------|--------|------------|-----|-----|
| Feasible starting point? | No     | Yes        | Yes | No  |
| Nonlinear constraints?   | Yes    | Yes        | Yes | Yes |
| Equality constraints?    | Yes    | Hard       | Yes | Yes |
| Uses active set?         | Yes    | Yes        | No  | Yes |
| Iterates feasible?       | No     | Yes        | No  | No  |
| Derivatives needed?      | Yes    | Yes        | Yes | Yes |

SQP is generally seen as the best general-purpose method for constrained problems.
Contents • Constrained Optimization: Optimality Criteria • Constrained Optimization: Algorithms • Linear programming
Linear programming problem
• Linear objective and constraint functions: $\min_{\mathbf{x}} \; \mathbf{c}^T \mathbf{x}$ s.t. $\mathbf{A}\mathbf{x} \le \mathbf{b}$, $\mathbf{x} \ge \mathbf{0}$
Feasible domain
• Each linear constraint divides the design space into two half-spaces, of which the feasible one is convex
• Feasible domain = intersection of convex half-spaces
• Result: the feasible domain $X$ is a convex polyhedron
Global optimality
• KKT conditions for the LP above: $\mathbf{c} + \mathbf{A}^T \boldsymbol{\mu} - \boldsymbol{\nu} = \mathbf{0}$, with $\boldsymbol{\mu}, \boldsymbol{\nu} \ge \mathbf{0}$, $\boldsymbol{\mu}^T (\mathbf{A}\mathbf{x} - \mathbf{b}) = 0$, $\boldsymbol{\nu}^T \mathbf{x} = 0$
• Convex objective function on a convex feasible domain: KKT point = global optimum