
ECONOMIC PLANTWIDE CONTROL: Control structure design for complete processing plants


Presentation Transcript


  1. ECONOMIC PLANTWIDE CONTROL: Control structure design for complete processing plants Sigurd Skogestad Department of Chemical Engineering Norwegian University of Science and Technology (NTNU) Trondheim, Norway Mexico, April 2012

  2. Trondheim, Norway Mexico

  3. (Map of Northern Europe: Trondheim and Oslo in Norway, with Sweden, Denmark, Germany, the UK, the North Sea and the Arctic circle marked.)

  4. NTNU, Trondheim

  5. Outline: 1. Introduction to plantwide control (control structure design). 2. Plantwide control procedure. I Top Down: Step 1: Define optimal operation; Step 2: Optimize for expected disturbances; Step 3: Select primary controlled variables c = y1 (CVs); Step 4: Where to set the production rate? (inventory control). II Bottom Up: Step 5: Regulatory / stabilizing control (PID layer), what more to control (y2)? Pairing of inputs and outputs; Step 6: Supervisory control (MPC layer); Step 7: Real-time optimization (do we need it?)

  6. Idealized view of control (“PhD control”)

  7. Practice: Tennessee Eastman challenge problem (Downs, 1991) (“PID control”). (Flowsheet with TC, PC, LC, AC, x and SRC controllers.) Where to place them?

  8. How do we get from PID to PhD control? How do we design a control system for a complete chemical plant? Where do we start? What should we control? And why? Etc.

  9. Alan Foss (“Critique of chemical process control theory”, AIChE Journal, 1973): The central issue to be resolved ... is the determination of control system structure. Which variables should be measured, which inputs should be manipulated and which links should be made between the two sets? There is more than a suspicion that the work of a genius is needed here, for without it the control configuration problem will likely remain in a primitive, hazily stated and wholly unmanageable form. The gap is present indeed, but contrary to the views of many, it is the theoretician who must close it. • Previous work on plantwide control: • Page Buckley (1964) - Chapter on “Overall process control” (still industrial practice) • Greg Shinskey (1967) – process control systems • Alan Foss (1973) - control system structure • Bill Luyben et al. (1975- ) – case studies; “snowball effect” • George Stephanopoulos and Manfred Morari (1980) – synthesis of control structures for chemical processes • Ruel Shinnar (1981- ) - “dominant variables” • Jim Downs (1991) - Tennessee Eastman challenge problem • Larsson and Skogestad (2000): Review of plantwide control

  10. Theory: Optimal operation. (Figure: a CENTRALIZED OPTIMIZER that knows the objectives, the present state, a model of the physical system, and all degrees of freedom. Almighty god? Ideal communism? Process control?) • Theory: • Model of overall system • Estimate present state • Optimize all degrees of freedom • Problems: • Model not available • Optimization complex • Not robust (difficult to handle uncertainty) • Slow response time • Process control: • Excellent candidate for centralized control

  11. Practice: Process control • Practice: • Hierarchical structure

  12. Process control: Hierarchical structure. (Hierarchy diagram: director → process engineer → operator → logic / selectors / operator → PID control → u = valves.)

  13. Dealing with complexity. Plantwide control objectives: the controlled variables (CVs) interconnect the layers. (Hierarchy diagram: RTO minimizes J (economics) and sets cs = y1s; MPC follows the path (and looks after other variables) and sets y2s; the PID layer stabilizes and avoids drift by manipulating u (valves).)

  14. Translate optimal operation into simple control objectives: What should we control? y1 = c? (economics) y2 = ? (stabilization)

  15. Example: Bicycle riding y1 = distance to curb (1 m) y2 = bike tilt (stabilization) u = muscles

  16. Control structure design procedure I Top Down • Step 1: Define operational objectives (optimal operation) • Cost function J (to be minimized) • Operational constraints • Step 2: Identify degrees of freedom (MVs) and optimize for expected disturbances • Step 3: Select primary controlled variables c = y1 (CVs) • Step 4: Where to set the production rate? (Inventory control) II Bottom Up • Step 5: Regulatory / stabilizing control (PID layer) • What more to control (y2; local CVs)? • Pairing of inputs and outputs • Step 6: Supervisory control (MPC layer) • Step 7: Real-time optimization (do we need it?)

  17. Step 1. Define optimal operation (economics) • What are we going to use our degrees of freedom u (MVs) for? • Define scalar cost function J(u,x,d) • u: degrees of freedom (usually steady-state) • d: disturbances • x: states (internal variables) • Typical cost function: J = cost feed + cost energy – value of products • Optimize operation with respect to u for given d (usually steady-state): minu J(u,x,d) subject to: Model equations: f(u,x,d) = 0; Operational constraints: g(u,x,d) ≤ 0
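
To make Steps 1-2 concrete, here is a minimal Python/SciPy sketch of posing minu J(u,d) and re-optimizing over a small grid of expected disturbances to see which constraints become active. The toy steady-state model, the prices, the bounds and the disturbance interpretation are illustrative assumptions, not the plant or numbers from the slides.

```python
# Minimal sketch of Steps 1-2: define J, optimize for expected disturbances,
# and note which constraints are active. All numbers are illustrative.
import numpy as np
from scipy.optimize import minimize

p_feed, p_energy, p_product = 1.0, 0.5, 3.0      # illustrative prices

def J(u, d):
    """J = cost feed + cost energy - value of products (toy steady-state model)."""
    F, V = u                                     # degrees of freedom: feed rate, boilup
    recovery = 1.0 - np.exp(-V / F)              # toy conversion/recovery model
    return p_feed * F + p_energy * V - p_product * d * recovery * F

bounds = [(0.1, 10.0), (0.1, 30.0)]              # operational constraints (capacity limits)

for d in (0.9, 1.0, 1.1):                        # expected disturbances (e.g. feed quality)
    res = minimize(J, x0=[5.0, 5.0], args=(d,), bounds=bounds)
    active = [name for name, x, (_, hi) in
              zip(("F at max", "V at max"), res.x, bounds) if hi - x < 1e-3]
    print(f"d={d}: u_opt={np.round(res.x, 2)}, J_opt={res.fun:.2f}, "
          f"active constraints: {active or 'none'}")
```

In this toy case the feed capacity stays active over the whole disturbance range, leaving one unconstrained degree of freedom (the boilup), which is the situation Step 3 addresses.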

  18. Step 2. Optimize • Identify degrees of freedom (u) • Optimize for expected disturbances (d) • Identify regions of active constraints • Need model of system • Time consuming, but it is offline

  19. Example: Regions of active constraints (Magnus G. Jacobsen). Two distillation columns in series; 4 degrees of freedom + feedrate. Control: 2 active constraints (xA, xB) + 2 “self-optimizing” variables. (Figure: disturbance space divided into regions with different splits of active constraints and self-optimizing variables, e.g. 3+1, 4+0, 1+3.)

  20. Step 3: Implementation of optimal operation • Optimal operation for given d*: minu J(u,x,d) subject to: Model equations: f(u,x,d) = 0; Operational constraints: g(u,x,d) ≤ 0 → uopt(d*) Problem: Usually we cannot keep uopt constant because the disturbances d change. How should we adjust the degrees of freedom (u)?

  21. Implementation (in practice): Local feedback control! “Self-optimizing control”: Constant setpoints for c give acceptable loss. Main issue: What should we control? (Diagram: three alternatives for handling a disturbance d — local feedback control of c (CV), optimizing control, and feedforward.)

  22. Optimal operation - Runner Example: Optimal operation of runner • Cost to be minimized, J=T • One degree of freedom (u=power) • What should we control?

  23. Optimal operation - Runner Sprinter (100m) • 1. Optimal operation of Sprinter, J=T • Active constraint control: • Maximum speed (”no thinking required”)

  24. Optimal operation - Runner Marathon (40 km) • 2. Optimal operation of Marathon runner, J=T • Unconstrained optimum! • Any ”self-optimizing” variable c (to control at constant setpoint)? • c1 = distance to leader of race • c2 = speed • c3 = heart rate • c4 = level of lactate in muscles

  25. Optimal operation - Runner. Conclusion Marathon runner: select one measurement, c = heart rate • Simple and robust implementation • Disturbances are indirectly handled by keeping a constant heart rate • May have infrequent adjustment of the setpoint (heart rate)

  26. Further examples • Central bank. J = welfare. c = inflation rate (2.5%) • Cake baking. J = nice taste. c = temperature (200°C) • Business. J = profit. c = “key performance indicator (KPI)”, e.g. • Response time to order • Energy consumption per kg or unit • Number of employees • Research spending. Optimal values obtained by “benchmarking” • Investment (portfolio management). J = profit. c = fraction of investment in shares (50%) • Biological systems: • “Self-optimizing” controlled variables c have been found by natural selection • Need to do “reverse engineering”: • Find the controlled variables used in nature • From this, identify what overall objective J the biological system has been attempting to optimize

  27. Step 3. What should we control (c)? (primary controlled variables y1 = c) Selection of controlled variables c • Control active constraints! • Unconstrained variables: Control self-optimizing variables!

  28. Example: active constraint. Optimal operation = maximum throughput (active constraint). Back-off = lost production, so we want tight bottleneck control to reduce the back-off! Rule for control of hard output constraints: “Squeeze and shift”! Reduce the variance (“squeeze”) and shift the setpoint cs closer to the constraint to reduce the back-off. (Figure: time plot showing the back-off between the setpoint and the constraint.)
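
As a small illustration of “squeeze and shift” (the capacity, variances and the 2-sigma margin below are made-up assumptions, not from the slides), the sketch computes the back-off needed to respect a hard throughput constraint for two levels of control variance: tighter bottleneck control lets the setpoint be shifted closer to the constraint, reducing lost production.

```python
# Illustrative "squeeze and shift" calculation; all numbers are made up.
def required_backoff(sigma, z=2.0):
    """Back-off from a hard constraint, here taken as a z-sigma margin."""
    return z * sigma

F_max = 100.0                      # hard bottleneck capacity [t/h]
for label, sigma in (("loose control", 3.0), ("tight bottleneck control", 0.5)):
    backoff = required_backoff(sigma)
    setpoint = F_max - backoff     # "shift" the setpoint toward the constraint
    print(f"{label}: sigma={sigma} t/h -> setpoint={setpoint:.1f} t/h, "
          f"lost production={backoff:.1f} t/h")
```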

  29. Unconstrained degrees of freedom: Control “self-optimizing” variables • Old idea (Morari et al., 1980): “We want to find a function c of the process variables which when held constant, leads automatically to the optimal adjustments of the manipulated variables, and with it, the optimal operating conditions.” • The ideal self-optimizing variable c is the gradient (c = ∂J/∂u = Ju) • Keep the gradient at zero for all disturbances (c = Ju = 0) • Problem: no measurement of the gradient. (Figure: cost J versus u, with Ju = 0 at the optimum.)

  30. Ideal: c = Ju. In practice: c = Hy. Task: Determine H!

  31. Systematic approach: What to control? • Define optimal operation: Minimize cost function J • For each candidate c = Hy (“brute force approach”): with constant setpoints cs, compute the loss L for the expected disturbances d and implementation errors n • Select the variable c with the smallest loss. Acceptable loss ⇒ self-optimizing control
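
A self-contained sketch of this “brute force” loss evaluation on a toy one-degree-of-freedom example (the separation model, prices, disturbance range and candidate CVs are illustrative assumptions): hold each candidate at its nominally optimal setpoint, step through the expected disturbances, and compare the resulting cost with the truly re-optimized cost.

```python
# Hedged "brute force" loss evaluation on a toy model: one unconstrained DOF
# u = V (boilup), disturbance d = F (feed rate). Prices and model are made up.
import numpy as np
from scipy.optimize import minimize_scalar

p_energy, p_product = 0.5, 3.0

def J(V, F):
    recovery = 1.0 - np.exp(-V / F)            # toy separation model
    return p_energy * V - p_product * recovery * F

def V_opt(F):
    return minimize_scalar(lambda V: J(V, F), bounds=(0.1, 50.0), method="bounded").x

F_nom = 10.0
# Each candidate CV: (c as a function of (V, F), V required to hold c at c_sp)
candidates = {
    "c = V (constant boilup)":  (lambda V, F: V,     lambda c_sp, F: c_sp),
    "c = V/F (boilup-to-feed)": (lambda V, F: V / F, lambda c_sp, F: c_sp * F),
}
for name, (cv, implement) in candidates.items():
    c_sp = cv(V_opt(F_nom), F_nom)             # constant setpoint from nominal optimum
    loss = max(J(implement(c_sp, F), F) - J(V_opt(F), F) for F in (8.0, 10.0, 12.0))
    print(f"{name}: worst-case loss = {max(loss, 0.0):.4f}")
```

In this toy case the ratio V/F gives essentially zero loss while a constant V does not, which is exactly the kind of difference the loss evaluation is meant to expose.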

  32. Example: recycle plant with 3 steady-state degrees of freedom (u). Minimize J = V (energy). Active constraints: Mr = Mrmax and xB = xBmin. Self-optimizing variable: L/F constant (easier than “two-point” control).

  33. Unconstrained optimum: optimal operation. (Figure: cost J versus controlled variable c, with minimum Jopt at c = copt.)

  34. Unconstrained optimum: optimal operation. (Figure: the same cost curve J(c), now shifted by a disturbance d and offset by an implementation error n.) • Two problems: • 1. The optimum moves because of disturbances d: copt(d) • 2. Implementation error, c = copt + n

  35. Good candidate controlled variables c (for self-optimizing control) • The optimal value of c should be insensitive to disturbances • Small Fc = dcopt/dd • c should be easy to measure and control • The value of c should be sensitive to changes in the degrees of freedom (“large gain”, which corresponds to a “flat” optimum) • Large G = dc/du = HGy • (Figure: sketches of good (flat) and bad (sharp) optimum shapes.)

  36. “Flat optimum is the same as large gain” ⇒ want c sensitive to u (“large gain”). (Block diagram: the plant output y (with measurement noise ny, disturbance d) passes through H to give c; noise n gives cm = c + n; a controller adjusts u to keep cm = cs, with cs = copt.)

  37. Optimal measurement combination c = Hy • Candidate measurements (y): also include the inputs u

  38. Optimal measurement combination: Nullspace method • Want the optimal value of c to be independent of disturbances ⇒ Δcopt = 0 · Δd • Find the optimal solution as a function of d: uopt(d), yopt(d) • Linearize this relationship: Δyopt = F Δd • F – optimal sensitivity matrix • Want: Δcopt = H Δyopt = HF Δd = 0 • To achieve this for all values of Δd (nullspace method): select H such that HF = 0 • Always possible if there are enough independent measurements (#y ≥ #u + #d) • Comment: The nullspace method is equivalent to Ju = 0

  39. Example: Nullspace method for the marathon runner. u = power, d = slope [degrees], y1 = hr [beats/min], y2 = v [m/s]. F = dyopt/dd = [0.25 -0.2]’, H = [h1 h2]. HF = 0 → h1 f1 + h2 f2 = 0.25 h1 – 0.2 h2 = 0. Choose h1 = 1 → h2 = 0.25/0.2 = 1.25. Conclusion: c = hr + 1.25 v. Control c = constant → hr increases when v decreases (OK uphill!)
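
The slide's arithmetic can be reproduced with a generic null-space computation; a small sketch (only F = [0.25, -0.2]' comes from the slide):

```python
# Numeric check of the marathon nullspace example: find H such that H F = 0.
import numpy as np
from scipy.linalg import null_space

F = np.array([[0.25],        # d(hr_opt)/d(slope)
              [-0.20]])      # d(v_opt)/d(slope)

H = null_space(F.T).T        # rows of H span the left null space of F
H = H / H[0, 0]              # rescale so that h1 = 1, as on the slide
print(H)                     # ~ [[1.   1.25]]  ->  c = hr + 1.25 v
print(H @ F)                 # ~ [[0.]]  (optimal value of c unaffected by slope)
```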

  40. Refrigeration cycle: Proposed control structure. Control c = “temperature-corrected high pressure”, with k = -8.5 bar/K

  41. With measurement noise: optimal measurement combination c = Hy. (Block diagram: controller K keeps c = Hy at the constant setpoint cs in the presence of measurement noise.) • “= 0” in the nullspace method (no noise): HF = 0 • “Minimize” in the maximum gain rule: maximize the scaled gain S1 G Juu^(-1/2), with G = HGy and scaling S1
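
A possible sketch of ranking candidate CV sets by this maximum gain rule: compute the minimum singular value of the scaled gain S1 G Juu^(-1/2), with G = HGy, and prefer the set where it is largest. All numbers (Gy, Juu, the spans) and the candidate selections below are illustrative assumptions, not data from the case studies.

```python
# Hedged maximum-gain-rule ranking of candidate CV sets c = H y.
# Gy, Juu, the spans, and the candidate H matrices are illustrative assumptions.
import numpy as np
from scipy.linalg import svdvals

Gy = np.array([[1.0, 0.2],        # steady-state gains from 2 inputs u
               [0.5, 1.0],        # to 3 candidate measurements y
               [0.1, 0.8]])
Juu = np.array([[2.0, 0.3],
                [0.3, 1.0]])      # Hessian of the cost w.r.t. u (positive definite)
span = np.array([1.0, 0.5, 2.0])  # expected optimal variation + noise for each y

w, V = np.linalg.eigh(Juu)        # Juu^(-1/2) via eigendecomposition
Juu_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T

candidates = {
    "c = (y1, y2)": np.array([[1, 0, 0], [0, 1, 0]]),
    "c = (y1, y3)": np.array([[1, 0, 0], [0, 0, 1]]),
    "c = (y2, y3)": np.array([[0, 1, 0], [0, 0, 1]]),
}
for name, H in candidates.items():
    S1 = np.diag(1.0 / (H @ span))            # scaling by the span of each selected CV
    scaled_gain = S1 @ (H @ Gy) @ Juu_inv_sqrt
    print(f"{name}: sigma_min = {svdvals(scaled_gain).min():.3f}  (larger is better)")
```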

  42. Case study: Control structure design using self-optimizing control for economically optimal CO2 recovery* Step S1. Objective function: J = energy cost + cost (tax) of CO2 released to air • 4 equality and 2 inequality constraints: • stripper top pressure • condenser temperature • pump pressure of recycle amine • cooler temperature • CO2 recovery ≥ 80% • reboiler duty < 1393 kW (nominal +20%) Step S2. (a) 10 degrees of freedom: 8 valves + 2 pumps; 4 levels without steady-state effect: absorber (1), stripper (2), make-up tank (1). Disturbances: flue gas flowrate, CO2 composition in flue gas + active constraints. (b) Optimization using the Unisim steady-state simulator. Region I (nominal feedrate): no inequality constraints active; 2 unconstrained degrees of freedom = 10 - 4 - 4. Step S3 (Identify CVs). 1. Control the 4 equality constraints. 2. Identify 2 self-optimizing CVs: use the exact local method and select the CV set with minimum loss. *M. Panahi and S. Skogestad, ``Economically efficient operation of CO2 capturing process, part I: Self-optimizing procedure for selecting the best controlled variables'', Chemical Engineering and Processing, 50, 247-253 (2011).

  43. Proposed control structure with given feed

  44. Step 4. Where to set the production rate? • Where to locate the TPM (throughput manipulator)? • The ”gas pedal” of the process • Very important! • Determines the structure of the remaining inventory (level) control system • Set the production rate at the (dynamic) bottleneck • Link between the Top-down and Bottom-up parts

  45. Production rate set at inlet (TPM at the feed): Inventory control in the direction of flow* *Required to get “local-consistent” inventory control

  46. Production rate set at outlet (TPM at the product): Inventory control opposite to the direction of flow

  47. Production rate set inside the process: Radiating inventory control around the TPM (Georgakis et al.)

  48. LOCATE TPM? • Conventional choice: Feedrate • Consider moving if there is an important active constraint that could otherwise not be well controlled • Good choice: Locate at bottleneck

  49. Step 5. Regulatory control layer. Purpose: “Stabilize” the plant using a simple control configuration (usually local SISO PID controllers + simple cascades) and enable manual operation (by operators). Main structural decisions: What more should we control? (secondary CVs y2, use of extra measurements) Pairing with manipulated variables (MVs u2). y1 = c, y2 = ?
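
One common tool for the pairing decision (not named on this slide, so take it as an optional aside) is the relative gain array; a minimal sketch with an illustrative 2x2 gain matrix:

```python
# Relative gain array (RGA) as one possible pairing aid for the regulatory layer.
import numpy as np

def rga(G):
    """RGA: elementwise product of G and the transpose of its inverse."""
    return G * np.linalg.inv(G).T

G = np.array([[0.9, 0.3],          # illustrative 2x2 steady-state gain matrix
              [0.4, 1.1]])
print(rga(G))                      # pair on elements close to 1 (here the diagonal)
```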

  50. (Control hierarchy diagram: the optimizer (RTO) sets CV1s for the always-active constraints and optimally constant variables, CV1 = H y; the supervisory controller (MPC) sets CV2s, CV2 = H2 y; the regulatory controller (PID) moves the physical inputs (valves) of the process, which is subject to disturbances d and measurement noise ny.) • Degrees of freedom for optimization (usually steady-state DOFs): MVopt = CV1s • Degrees of freedom for supervisory control: MV1 = CV2s + unused valves • Physical degrees of freedom for stabilizing control: MV2 = valves (dynamic process inputs)
