
Introduction to Operations Research


Presentation Transcript


  1. Nonlinear Programming Introduction to Operations Research

  2. Nonlinear Programming • A nonlinear program (NLP) is similar to a linear program in that it is composed of an objective function, general constraints, and variable bounds. • A nonlinear program includes at least one nonlinear function, which could be the objective function or some or all of the constraints, e.g. Z = x₁² + 1/x₂. • Many real systems are inherently nonlinear. • Unfortunately, nonlinear models are much more difficult to optimize.

  3. Nonlinear Programming • General form of a nonlinear program: Maximize Z = f(x), subject to gᵢ(x) ≤ bᵢ for i = 1, …, m, and x ≥ 0. • No single algorithm will solve every specific problem. • Different algorithms are used for different types of problems.

  4. Example • Wyndor Glass example with a nonlinear constraint: Maximize Z = 3x₁ + 5x₂, subject to x₁ ≤ 4, 9x₁² + 5x₂² ≤ 216, x₁, x₂ ≥ 0. • The optimal solution is no longer a CPF solution, but it still lies on the boundary of the feasible region. • We no longer have the tremendous simplification used in LP of limiting the search for an optimal solution to just the CPF solutions.
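
As a companion to this slide, here is a minimal numerical sketch (mine, not the deck's) that solves this variant with SciPy's general-purpose minimize; the starting point x0 and the solver defaults are my assumptions.

```python
# Sketch: solve the nonlinear-constraint Wyndor variant numerically.
from scipy.optimize import minimize

objective = lambda x: -(3 * x[0] + 5 * x[1])   # maximize Z -> minimize -Z
constraints = [
    {"type": "ineq", "fun": lambda x: 4 - x[0]},                        # x1 <= 4
    {"type": "ineq", "fun": lambda x: 216 - 9 * x[0]**2 - 5 * x[1]**2}, # nonlinear
]
res = minimize(objective, x0=[1.0, 1.0],
               bounds=[(0, None), (0, None)], constraints=constraints)
print(res.x, -res.fun)   # boundary optimum near (2, 6) with Z = 36
```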

  5. Example • Wyndor Glass example with a nonlinear objective function: Maximize Z = 126x₁ − 9x₁² + 182x₂ − 13x₂², subject to x₁ ≤ 4, x₂ ≤ 12, 3x₁ + 2x₂ ≤ 18, x₁, x₂ ≥ 0. • The optimal solution is no longer a CPF solution, but it still lies on the boundary of the feasible region.

  6. Example • Wyndor Glass example with a nonlinear objective function: Maximize Z = 54x₁ − 9x₁² + 78x₂ − 13x₂², subject to x₁ ≤ 4, x₂ ≤ 12, 3x₁ + 2x₂ ≤ 18, x₁, x₂ ≥ 0. • The optimal solution lies inside the feasible region. • That means we need to examine the entire feasible region, not just its boundary.
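
To see why the optimum is interior here, note that the unconstrained maximizer of Z already satisfies every constraint. A quick check (my sketch, not part of the slides):

```python
# Set the gradient of Z to zero and verify the resulting point is feasible.
x1 = 54 / 18          # from dZ/dx1 = 54 - 18*x1 = 0
x2 = 78 / 26          # from dZ/dx2 = 78 - 26*x2 = 0
assert x1 <= 4 and x2 <= 12 and 3 * x1 + 2 * x2 <= 18   # (3, 3) is feasible
print((x1, x2), 54 * x1 - 9 * x1**2 + 78 * x2 - 13 * x2**2)  # Z = 198
```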

  7. Characteristics • Unlike linear programming, the solution is often not on the boundary of the feasible solution space. • We cannot simply look at points on the boundary of the solution space, but must also consider other points on the surface of the objective function. • This greatly complicates solution approaches. • Solution techniques can be very complex.

  8. Types of Nonlinear Problems • Nonlinear programming problems come in many different shapes and forms: • Unconstrained Optimization • Linearly Constrained Optimization • Quadratic Programming • Convex Programming • Separable Programming • Nonconvex Programming • Geometric Programming • Fractional Programming

  9. One-Variable Unconstrained Optimization • These problems have no constraints, so the objective is simply to maximize the objective function. • Basic function types: • Concave: the entire function is concave down • Convex: the entire function is concave up

  10. One-Variable Unconstrained Optimization • Basic calculus: find the critical points where f′(x) = 0. • Unfortunately, this may be difficult for many functions. • Alternatively, estimate the maximum numerically: • Bisection method • Newton's method

  11. Bisection Method
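
The slide itself is graphical; as a companion, here is a minimal sketch of the bisection method in Python, bisecting on the sign of f′. The test function f(x) = 12x − 3x⁴ − 2x⁶ and the interval [0, 2] are my own illustrative choices, not from the slide.

```python
def bisection_max(fprime, lo, hi, tol=1e-6):
    """Maximizer of a concave f on [lo, hi], assuming f'(lo) > 0 > f'(hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if fprime(mid) > 0:      # still climbing: peak lies to the right
            lo = mid
        else:                    # past the peak: keep the left half
            hi = mid
    return (lo + hi) / 2

fprime = lambda x: 12 - 12 * x**3 - 12 * x**5   # f(x) = 12x - 3x^4 - 2x^6
print(bisection_max(fprime, 0.0, 2.0))          # ~0.836
```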

  12. Newton's Method
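
Likewise, a minimal sketch of Newton's method for one-variable maximization, iterating x ← x − f′(x)/f′′(x) until the step is tiny; the same illustrative function, the starting point x = 1, and the tolerance are my assumptions.

```python
def newton_max(fprime, fprime2, x, tol=1e-8, max_iter=50):
    for _ in range(max_iter):
        step = fprime(x) / fprime2(x)   # Newton step on the equation f'(x) = 0
        x -= step
        if abs(step) < tol:
            break
    return x

fprime = lambda x: 12 - 12 * x**3 - 12 * x**5    # f(x) = 12x - 3x^4 - 2x^6
fprime2 = lambda x: -36 * x**2 - 60 * x**4
print(newton_max(fprime, fprime2, x=1.0))        # converges to ~0.836
```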

  13. Multivariable Unconstrained Optimization

  14. Gradient Search Procedure • From the current trial solution x*, move in the direction of the gradient ∇f(x*), choosing the step size t* that maximizes f along that direction; repeat until the gradient is (approximately) zero.

  15. Gradient Search Example • Gradient search on Z = f(x₁, x₂) = 2x₁x₂ + 2x₂ − x₁² − 2x₂² • ∂f/∂x₁ = 2x₂ − 2x₁ • ∂f/∂x₂ = 2x₁ + 2 − 4x₂ • Initialization: set x₁* = x₂* = 0. • Set x₁ = x₁* + t(∂f/∂x₁) = 0 + t(2·0 − 2·0) = 0. • Set x₂ = x₂* + t(∂f/∂x₂) = 0 + t(2·0 + 2 − 4·0) = 2t. • f(0, 2t) = (2)(0)(2t) + (2)(2t) − 0² − 2(2t)² = 4t − 8t². • df/dt = 4 − 16t; setting 4 − 16t = 0 gives t* = ¼. • Reset x₁* = 0 + ¼(0) = 0 and x₂* = 0 + ¼(2) = ½. • Stopping rule: at (0, ½), ∂f/∂x₁ = 1 and ∂f/∂x₂ = 0, so the gradient is not yet zero and we continue.

  16. Gradient Search Example • Z = 2x₁x₂ + 2x₂ − x₁² − 2x₂², ∂f/∂x₁ = 2x₂ − 2x₁, ∂f/∂x₂ = 2x₁ + 2 − 4x₂ • Iteration 2: x₁* = 0, x₂* = ½. • Set x = (0, ½) + t(1, 0) = (t, ½). • f(t, ½) = (2)(t)(½) + (2)(½) − t² − 2(½)² = t − t² + ½. • df/dt = 1 − 2t; setting 1 − 2t = 0 gives t* = ½. • Reset x* = (0, ½) + ½(1, 0) = (½, ½). • Stopping rule: at (½, ½), ∂f/∂x₁ = 0 and ∂f/∂x₂ = 1, so continue.

  17. Gradient Search Example • Z = 2x₁x₂ + 2x₂ − x₁² − 2x₂² • Continuing from x* = (0, 0) for a few more iterations (a code sketch follows this slide): • Iteration 1: x* = (0, ½) • Iteration 2: x* = (½, ½) • Iteration 3: x* = (½, ¾) • Iteration 4: x* = (¾, ¾) • Iteration 5: x* = (¾, ⅞) • Iteration 6: x* = (⅞, ⅞) • Notice the trial solutions converge toward x* = (1, 1). • This is the optimal solution, since the gradient there is zero: ∇f(1, 1) = (0, 0).
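
Putting the three example slides together, here is a sketch (my code, not the deck's) of the gradient search procedure with an exact line search; the tolerance is my choice. It reproduces the sequence above and converges to (1, 1).

```python
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda x: 2 * x[0] * x[1] + 2 * x[1] - x[0] ** 2 - 2 * x[1] ** 2
grad = lambda x: np.array([2 * x[1] - 2 * x[0], 2 * x[0] + 2 - 4 * x[1]])

x = np.zeros(2)                                 # initialization: x* = (0, 0)
while np.abs(grad(x)).max() > 1e-6:             # stopping rule: gradient ~ 0
    d = grad(x)
    # exact line search: pick t maximizing f(x + t*d) (negate to minimize)
    t = minimize_scalar(lambda s: -f(x + s * d)).x
    x = x + t * d
    print(x)    # (0, 0.5), (0.5, 0.5), (0.5, 0.75), (0.75, 0.75), ...
print("optimum near", x)                        # converges to (1, 1)
```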

  18. Constrained Optimization • Nonlinear objective function with linear constraints • Karush-Kuhn-Tucker (KKT) conditions
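
Since this closing slide is a bare heading, here for reference is a standard statement of the KKT conditions for the general form on slide 3 (maximize f(x) subject to gᵢ(x) ≤ bᵢ and x ≥ 0); the multiplier symbols uᵢ are my notation, and this is a sketch for reference, not the slide's own content.

```latex
% KKT conditions for: maximize f(x) s.t. g_i(x) <= b_i (i = 1..m), x >= 0.
% Partial derivatives are evaluated at the candidate point x*.
\[
\begin{aligned}
&\frac{\partial f}{\partial x_j} - \sum_{i=1}^{m} u_i \frac{\partial g_i}{\partial x_j} \le 0,
\qquad
x_j^{*}\left(\frac{\partial f}{\partial x_j} - \sum_{i=1}^{m} u_i \frac{\partial g_i}{\partial x_j}\right) = 0
\qquad (j = 1, \dots, n), \\
&g_i(\mathbf{x}^{*}) \le b_i,
\qquad
u_i\left(g_i(\mathbf{x}^{*}) - b_i\right) = 0
\qquad (i = 1, \dots, m), \\
&x_j^{*} \ge 0 \ \ (j = 1, \dots, n),
\qquad
u_i \ge 0 \ \ (i = 1, \dots, m).
\end{aligned}
\]
```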
