Advanced Layout Algorithms Chapter 8
Layout Algorithms • Optimal • Heuristic
Optimal Algorithms • Branch and bound • Decomposition • Benders’ decomposition • Cutting Plane Algorithms
Branch and bound Algorithm • Step 1: Compute the lower bound (LB*) by solving a linear assignment problem (LAP) with matrix [wij]. Each entry of [wij] is obtained by taking the dot product of the two vectors [fi] and [dj]. Vectors [fi] and [dj] are obtained by removing fii and djj and arranging the remaining flow and distance values in non-increasing and non-decreasing order, respectively • Step 2: Compute lower bounds for the other nodes
Branch and bound • Assignment: Matching a department with a specific location and vice‑versa • Partial assignment: An assignment in which a subset of n departments is matched with an equal-sized subset of locations and vice‑versa • Complete assignment: All the n departments are matched with n locations and vice‑versa • A complete assignment obtained from a partial assignment must not disturb the partial assignment but only grow from it
Lower bound calculation for partial assignment • Given a partial assignment in which a subset S={1,2,...,q} of the n departments is assigned to a subset L={s1,s2,...,sq} of the n locations, the objective function value of a complete assignment is equal to the sum of the flow-distance products computed for these three categories of department pairs: • Pairs of departments i, j such that both i and j belong to S; • Pairs of departments i, j such that i belongs to S and j does not; and • Pairs of departments i, j such that neither i nor j belongs to S
Branch and bound Algorithm • Step 2a: Calculate the cost of the partial assignment • Step 2b: Compute the lower bound (LB*ij) for lower-level nodes by solving a LAP with matrix [wij] • Matrix [wij] is obtained by adding the two matrices [w’ij] and [w’’ij] • Each entry of [w’’ij] is obtained by taking half the dot product of the two vectors [fi] and [dj]. Vectors [fi] and [dj] are obtained by arranging flow and distance values in non-increasing and non-decreasing order, respectively, considering only the ‘available’ (unassigned) departments and locations • Matrix [w’ij] is obtained as follows: where
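The root-node bound of Step 1 can be illustrated with a short Python sketch. It assumes square numpy flow and distance matrices and uses scipy's linear_sum_assignment as the LAP solver; the function name and inputs are illustrative, not from the source.

import numpy as np
from scipy.optimize import linear_sum_assignment

def root_lower_bound(F, D):
    # F, D: n x n numpy arrays of flows and distances (diagonals ignored)
    n = len(F)
    # [f_i]: off-diagonal flows of department i, sorted non-increasing
    f = [np.sort(np.delete(F[i], i))[::-1] for i in range(n)]
    # [d_j]: off-diagonal distances of location j, sorted non-decreasing
    d = [np.sort(np.delete(D[j], j)) for j in range(n)]
    # w_ij = dot product of the two ordered vectors: the cheapest possible
    # interaction cost if department i is placed at location j
    W = np.array([[f[i] @ d[j] for j in range(n)] for i in range(n)])
    rows, cols = linear_sum_assignment(W)   # solve the LAP over [w_ij]
    return W[rows, cols].sum()              # LB* for the root node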
Explain Branch and Bound Algorithm with Example 1. Figure 7.2: Flow and distance matrices for the LonBank layout problem (office site).
Branch and bound Algorithm • Why is the specialized B&B algorithm more efficient than a general-purpose B&B? • How to terminate the algorithm for large problems: • Terminate after a preset CPU time limit has been exceeded • Terminate after a preset number of nodes has been examined
Benders’ decomposition algorithm
Consider this MIP:
  Minimize cx
  Subject to Ax + By ≥ b
             x ≥ 0
             y = 0 or 1
Now consider a feasible y solution vector to MIP, say yi. Then MIP becomes the following linear model:
LP i
  Minimize cx
  Subject to Ax ≥ b - Byi
             x ≥ 0
Dual of Linear Program
LP i
  Minimize cx
  Subject to Ax ≥ b - Byi
             x ≥ 0
The dual of LP i is the following model:
DLP i
  Maximize u(b - Byi)
  Subject to uA ≤ c
             u ≥ 0
Dual of Linear Program • Let ui be the optimal solution to DLP i • From duality theory, ui(b - Byi) is equal to the optimal OFV of LP i (because LP i and DLP i are both feasible) • Hence, ui(b - Byi) is equal to the OFV of some feasible solution to MIP (the one in which y = yi). Because each variable yij in the vector y can take on a value of 0 or 1 only and because the number of such variables is finite, the number of y vectors is also finite • In fact, if there are n yij variables, then the number of y vectors is equal to 2^n. Of course, not all of these may be feasible to MIP. We assume that there are s feasible y solution vectors to MIP - {y1, y2, ..., yi, ..., ys}, arranged in any order
Dual of Linear Program • Let DLP 1, DLP 2, ..., DLP i, ..., DLP s be the duals obtained by substituting y1, y2, ..., yi, ..., ys for yi in DLP i. Let u1, u2, ..., ui, ..., us be the optimal solution vectors to DLP 1, DLP 2, ..., DLP i, ..., DLP s, respectively • The optimal OFV of each corresponds to the OFV of some feasible y solution vector to MIP • Because we have considered all feasible y solution vectors, the dual with the least OFV among DLP 1, DLP 2, ..., DLP s provides the optimal OFV of MIP • Thus, the original problem MIP may be reduced to the following problem: Minimize max{ui(b - By): 1 ≤ i ≤ s} Subject to y = 0 or 1 and feasible to MIP
Master Problem
  Minimize max{ui(b - By): 1 ≤ i ≤ s}
  Subject to y = 0 or 1 and feasible to MIP
The above model can be restated as:
MP
  Minimize z
  Subject to z ≥ ui(b - By),  i = 1, 2, ..., s
             y = 0 or 1 and feasible to MIP
             z unrestricted in sign
MP requires us to generate all the feasible y solution vectors and the corresponding s dual problems - DLP 1, DLP 2, ..., DLP s. This is not computationally feasible because the number of dual problems, though finite, may be very large, and the dual associated with each has to be solved - a time-consuming task
Solving the Master Problem • However, we can overcome the computational problem by generating only a subset of the constraints in MP and solving a restricted problem • Because we are solving MP with only a small subset of constraints, its optimal solution provides a lower bound on MIP • Thus, beginning with few or no constraints, we solve MP, obtain a new y vector, set up the DLP i corresponding to this y vector, and obtain an upper bound • Using the optimal solution to DLP i, we add the corresponding constraint [z ≥ ui(b - By)] to the master problem MP and solve it again • If the resulting lower bound is greater than or equal to the upper bound, we stop because the last solution to MP provides the optimal solution to MIP. Otherwise, we repeat the procedure until the termination criterion is met
Benders’ decomposition algorithm Step 0: Set i=1, yi = {0,0,...,0}, lower bound LB=0 and upper bound UB=infinity. Step 1: Solve DLP i. Let ui be the optimal solution to DLP i. If ui(b - Byi) < UB, set UB = ui(b - Byi). Step 2: Update MP by adding the constraint z ≥ ui(b - By). Solve MP. Let y* be the optimal solution and z be the optimal OFV of MP. Set LB = z. If LB ≥ UB, stop. Otherwise, set i = i+1, yi = y* and return to step 1.
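A minimal Python sketch of Steps 0-2, assuming the MIP data are given as dense numpy arrays and that LP i is feasible and bounded for every y tried. The dual subproblem DLP i is solved with scipy's linprog; the restricted master problem is handled by brute-force enumeration over the binary y vectors, a stand-in for a proper integer-programming solver that is only workable when y has very few components. All names are illustrative.

import itertools
import numpy as np
from scipy.optimize import linprog

def solve_dlp(A, B, b, c, y):
    # DLP i:  max u(b - B y)  s.t.  u A <= c,  u >= 0
    rhs = b - B @ y
    res = linprog(-rhs, A_ub=A.T, b_ub=c, bounds=(0, None), method="highs")
    return -res.fun, res.x                      # optimal OFV and dual vector u^i

def benders(A, B, b, c, max_iter=50):
    m = B.shape[1]
    y = np.zeros(m)                             # Step 0: y^1 = (0,...,0), UB = infinity
    UB, y_best, cuts = np.inf, y, []
    for _ in range(max_iter):
        obj, u = solve_dlp(A, B, b, c, y)       # Step 1
        if obj < UB:
            UB, y_best = obj, y
        cuts.append(u)                          # Step 2: add cut z >= u^i (b - B y)
        LB, y = np.inf, None
        for cand in itertools.product([0, 1], repeat=m):
            cand = np.asarray(cand, dtype=float)
            z = max(ui @ (b - B @ cand) for ui in cuts)   # restricted MP objective
            if z < LB:
                LB, y = z, cand
        if LB >= UB:                            # bounds have met: stop
            return UB, y_best
    return UB, y_best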
Explain Benders’ decomposition algorithm with Example 2. Figure 7.6: Flow and clearance matrices and dimensions for four machines (machine dimensions, horizontal clearance matrix, flow matrix).
LMIP 1 for Example 2 Minimize Subject to
Example 2 MIP
MIN 25 XP12 + 35 XP13 + 50 XP14 + 10 XP23 + 15 XP24 + 50 XP34
  + 25 XN12 + 35 XN13 + 50 XN14 + 10 XN23 + 15 XN24 + 50 XN34
SUBJECT TO
  C1)  999 Y12 + X1 - X2 >=  33.5
  C2)  999 Y12 + X1 - X2 <= 965.5
  C3)  999 Y13 + X1 - X3 >=  32.5
  C4)  999 Y13 + X1 - X3 <= 966.5
  C5)  999 Y14 + X1 - X4 >=  37.5
  C6)  999 Y14 + X1 - X4 <= 961.5
  C7)  999 Y23 + X2 - X3 >=  37.5
  C8)  999 Y23 + X2 - X3 <= 961.5
  C9)  999 Y24 + X2 - X4 >=  40.5
  C10) 999 Y24 + X2 - X4 <= 958.5
  C11) 999 Y34 + X3 - X4 >=  40
  C12) 999 Y34 + X3 - X4 <= 959
  C13) - XP12 + XN12 + X1 - X2 = 0
  C14) - XP13 + XN13 + X1 - X3 = 0
  C15) - XP14 + XN14 + X1 - X4 = 0
  C16) - XP23 + XN23 + X2 - X3 = 0
  C17) - XP24 + XN24 + X2 - X4 = 0
  C18) - XP34 + XN34 + X3 - X4 = 0
END
INTE 6
LP 1 (Example 2, cont.)
MIN 25 XP12 + 35 XP13 + 50 XP14 + 10 XP23 + 15 XP24 + 50 XP34
  + 25 XN12 + 35 XN13 + 50 XN14 + 10 XN23 + 15 XN24 + 50 XN34
SUBJECT TO
  C1)  X1 - X2 + 999 Y12 >=  33.5
  C2)  X1 - X2 + 999 Y12 <= 965.5
  C3)  X1 - X3 + 999 Y13 >=  32.5
  C4)  X1 - X3 + 999 Y13 <= 966.5
  C5)  X1 - X4 + 999 Y14 >=  37.5
  C6)  X1 - X4 + 999 Y14 <= 961.5
  C7)  X2 - X3 + 999 Y23 >=  37.5
  C8)  X2 - X3 + 999 Y23 <= 961.5
  C9)  X2 - X4 + 999 Y24 >=  40.5
  C10) X2 - X4 + 999 Y24 <= 958.5
  C11) X3 - X4 + 999 Y34 >=  40
  C12) X3 - X4 + 999 Y34 <= 959
  C13) - XP12 + XN12 + X1 - X2 = 0
  C14) - XP13 + XN13 + X1 - X3 = 0
  C15) - XP14 + XN14 + X1 - X4 = 0
  C16) - XP23 + XN23 + X2 - X3 = 0
  C17) - XP24 + XN24 + X2 - X4 = 0
  C18) - XP34 + XN34 + X3 - X4 = 0
  C19) Y12 = 0
  C20) Y13 = 0
  C21) Y14 = 0
  C22) Y23 = 0
  C23) Y24 = 0
  C24) Y34 = 0
END
TITLE (MIN)
OBJECTIVE FUNCTION VALUE
  1)  12410.000
VARIABLE    VALUE         REDUCED COST
  XP12       33.500000    .000000
  XP13       71.000000    .000000
  XP14      111.000000    .000000
  XP23       37.500000    .000000
  XP24       77.500000    .000000
  XP34       40.000000    .000000
  X1        111.000000    .000000
  X2         77.500000    .000000
  X3         40.000000    .000000
DLP 1 (Example 2, cont.)
MAX 33.5 U12 - 965.5 V12 + 32.5 U13 - 966.5 V13 + 37.5 U14 - 961.5 V14
  + 37.5 U23 - 961.5 V23 + 40.5 U24 - 958.5 V24 + 40 U34 - 959 V34
SUBJECT TO
  C1)  - WP12 + WN12 <= 25
  C2)    WP12 - WN12 <= 25
  C3)  - WP13 + WN13 <= 35
  C4)    WP13 - WN13 <= 35
  C5)  - WP14 + WN14 <= 50
  C6)    WP14 - WN14 <= 50
  C7)  - WP23 + WN23 <= 10
  C10)   WP23 - WN23 <= 10
  C11) - WP24 + WN24 <= 15
  C12)   WP24 - WN24 <= 15
  C13) - WP34 + WN34 <= 50
  C14)   WP34 - WN34 <= 50
  C15)   U12 - V12 + U13 - V13 + U14 - V14 + WP12 - WN12 + WP13 - WN13 + WP14 - WN14 <= 0
  C16) - U12 + V12 + U23 - V23 + U24 - V24 - WP12 + WN12 + WP23 - WN23 + WP24 - WN24 <= 0
  C17) - U13 + V13 - U23 + V23 + U34 - V34 - WP13 + WN13 - WP23 + WN23 + WP34 - WN34 <= 0
  C18) - U14 + V14 - U24 + V24 - U34 + V34 - WP14 + WN14 - WP24 + WN24 - WP34 + WN34 <= 0
END
TITLE (MAX)
LP OPTIMUM FOUND AT STEP 10
OBJECTIVE FUNCTION VALUE
  1)  12410.000
VARIABLE    VALUE         REDUCED COST
  U12       110.000000    .000000
  U23       110.000000    .000000
  U34       115.000000    .000000
  WN12       25.000000    .000000
  WN13       35.000000    .000000
  WN14       50.000000    .000000
  WN23       10.000000    .000000
  WN24       15.000000    .000000
  WN34       50.000000    .000000
Feasibility Constraints • If an upper bound on z is U, then we can write U as • 1 ≥ yij + yjk - yik ≥ 0,  1 ≤ i < j < k ≤ n
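For illustration, a small hypothetical helper that checks this condition on a candidate set of binary yij values; the dict keying and function name are not from the source.

from itertools import combinations

def y_is_feasible(y, n):
    # y: dict mapping (i, j) with i < j to a 0/1 value
    for i, j, k in combinations(range(1, n + 1), 3):   # all triples i < j < k
        s = y[(i, j)] + y[(j, k)] - y[(i, k)]
        if not 0 <= s <= 1:
            return False
    return True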
MP 1 (Example 2, cont.)
MIN Z1 + 2 Z2 + 4 Z3 + 8 Z4 + 16 Z5 + 32 Z6 + 64 Z7 + 128 Z8 + 256 Z9
  + 512 Z10 + 1024 Z11 + 2048 Z12 + 4096 Z13 + 8192 Z14 + 16384 Z15
SUBJECT TO
  C1) Y12 - Y13 + Y23 >= 0
  C2) Y12 - Y13 + Y23 <= 1
  C3) Y12 - Y14 + Y24 >= 0
  C4) Y12 - Y14 + Y24 <= 1
  C5) Y23 - Y24 + Y34 >= 0
  C6) Y23 - Y24 + Y34 <= 1
  C7) Z1 + 2 Z2 + 4 Z3 + 8 Z4 + 16 Z5 + 32 Z6 + 64 Z7 + 128 Z8 + 256 Z9
      + 512 Z10 + 1024 Z11 + 2048 Z12 + 4096 Z13 + 8192 Z14 + 16384 Z15
      + 109890 Y12 + 109890 Y23 + 114885 Y34 >= 12410
END
INTE 21
NEW INTEGER SOLUTION OF .000000000 AT BRANCH 1 PIVOT 3
OBJECTIVE FUNCTION VALUE
  1)  .00000000
VARIABLE    VALUE        REDUCED COST
  Y34       1.000000     .000000
LAST INTEGER SOLUTION IS THE BEST FOUND
Modified Benders’ decomposition algorithm Step 0: Set i=1, yi = {0,0,...,0} and upper bound UB=infinity. Step 1: Because DLP i has a unique solution, find it using the technique discussed above. Let ui be the solution to DLP i. If ui(b - Byi) < UB, set UB = ui(b - Byi). Step 2: Update MP by adding the constraints z ≥ ui(b - By) and z ≤ UB - ε. Solve MP. If MP is infeasible, we have found an ε-optimal solution to MIP. Otherwise, let y* be the feasible solution. Set i=i+1, yi=y* and return to step 1.
Simulated Annealing Algorithm
n: number of departments in the layout problem
T: initial temperature
r: cooling factor
ITEMP: number of times temperature T is decreased
NOVER: maximum number of solutions evaluated at each temperature
NLIMIT: maximum number of new solutions to be accepted at each temperature
δ: difference in OFVs of the previous (best) and current solutions
Simulated Annealing Algorithm Step 0: Set S = initial feasible solution; z = corresponding OFV; T=999.0; r=0.9; ITEMP=0; NLIMIT=10n; NOVER=100n; p, q = maximum number of departments permitted in any row and column, respectively. Step 1: Repeat step 2 NOVER times or until the number of successful new solutions is equal to NLIMIT. Step 2: Pick a pair of departments randomly and exchange their positions. If the exchange results in the overlapping of some other pair(s) of departments, modify the coordinates of the centers of the affected departments to ensure there is no overlapping. If the resulting solution S* has an OFV < z, set S=S* and z = the corresponding OFV. Otherwise, compute δ = difference between z and the OFV of solution S*, and set S=S* with probability e^(-δ/T). Step 3: Set T=rT and ITEMP=ITEMP+1. If ITEMP < 100, go to Step 1; otherwise STOP.
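A minimal Python sketch of the annealing loop, under a simplifying assumption: departments are exchanged between fixed locations in a QAP-style layout, so the overlap-repair of Step 2 is not needed and z tracks the current solution's OFV. The flow matrix F, distance matrix D and the ofv() helper are illustrative.

import math
import random

def ofv(assign, F, D):
    # total flow*distance cost when department i occupies location assign[i]
    n = len(assign)
    return sum(F[i][j] * D[assign[i]][assign[j]] for i in range(n) for j in range(n))

def simulated_annealing(F, D, T=999.0, r=0.9, max_temp_steps=100):
    n = len(F)
    S = list(range(n))                          # Step 0: initial feasible solution
    z = ofv(S, F, D)
    NOVER, NLIMIT = 100 * n, 10 * n
    for _ in range(max_temp_steps):             # Step 3 limits ITEMP to 100
        accepted = 0
        for _ in range(NOVER):                  # Step 1
            i, j = random.sample(range(n), 2)   # Step 2: random pairwise exchange
            S_new = S[:]
            S_new[i], S_new[j] = S_new[j], S_new[i]
            z_new = ofv(S_new, F, D)
            delta = z_new - z
            if delta < 0 or random.random() < math.exp(-delta / T):
                S, z = S_new, z_new             # accept: always if better,
                accepted += 1                   # with probability e^(-delta/T) if worse
                if accepted >= NLIMIT:
                    break
        T *= r                                  # Step 3: cool the temperature
    return S, z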
Modified Penalty Algorithm
  Minimize c11x11 + c12x12 + ... + c3nx3n
  Subject to a11x11 + a12x12 + ... + a1nx1n ≥ b1
             a21x21 + a22x22 + ... + a2nx2n ≤ b2
             a31x31 + a32x32 + ... + a3nx3n = b3
             x21, x22, ..., x3n ≥ 0
Hybrid Simulated Annealing Algorithm Step 0: Set S = initial feasible solution; z = corresponding OFV; T=999.0; r=0.9; ITEMP=0; NOVER=100n; NLIMIT=10n; and p, q = maximum number of departments permitted in any row and column, respectively. Step 2: Apply the MP algorithm to the initial feasible layout. If the departments overlap, modify their coordinates to eliminate overlapping. If z* (the OFV of the resulting solution S*) is < z, set z=z*; S=S*. Set i=1; j=i+1. Step 3: If i ≤ n-1, exchange the positions of departments i and j; otherwise go to step 4. If the exchange of the positions of departments i, j results in the overlapping of some other pair(s) of departments, modify the coordinates of the centers of the affected departments to ensure there is no overlapping. If the resulting solution has an OFV z* < z, set S=S*; z=z*; i=1; j=i+1 and repeat step 3. Otherwise, set j=j+1. If j > n, set i=i+1, j=i+1 and repeat step 3. Step 4: Repeat step 5 NOVER times or until the number of successful new solutions is equal to NLIMIT. Step 5: Pick a pair of departments randomly and exchange their positions. If the exchange results in the overlapping of some other pair(s) of departments, modify the coordinates of the centers of the affected departments to ensure there is no overlapping. If the resulting solution S* has an OFV < z, set S=S* and z = the corresponding OFV. Otherwise, compute δ = difference between z and the OFV of solution S*, and set S=S* with probability e^(-δ/T). Step 6: Set T=rT and ITEMP=ITEMP+1. If ITEMP < 100, go to Step 4; otherwise STOP.
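A minimal sketch of the deterministic improvement pass in Step 3 of the hybrid algorithm, reusing the hypothetical ofv() helper from the SA sketch above and again ignoring overlap repair: every pairwise exchange is tried in order, and the scan restarts from the first pair whenever an exchange improves the OFV.

def pairwise_descent(S, F, D, ofv):
    z = ofv(S, F, D)
    n = len(S)
    i, j = 0, 1                      # 0-based counterpart of i=1, j=i+1
    while i < n - 1:
        S_new = S[:]
        S_new[i], S_new[j] = S_new[j], S_new[i]
        z_new = ofv(S_new, F, D)
        if z_new < z:
            S, z = S_new, z_new
            i, j = 0, 1              # improvement found: restart the scan
        else:
            j += 1
            if j >= n:               # exhausted partners for i: advance i
                i += 1
                j = i + 1
    return S, z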
SA and HSA Algorithms • Do Examples 3, 4, and 5 using SINROW and MULROW
Tabu Search Algorithm Step 1: Read the flow (F) and distance (D) matrices. Construct the zero long-term memory (LTM) matrix of size n×n, where n is the number of departments in the problem. Step 2: Construct an initial solution using any construction algorithm. Obtain values for the following two short-term memory parameters - size of the tabu list (t = 0.33n to 0.6n) and maximum number of iterations (v = 7n to 10n). Construct the zero tabu list (TL) vector and set the iteration counter k=1. Step 3: For iteration k, examine all possible pairwise exchanges of the current solution and make the exchange {i,j} that leads to the greatest reduction in the OFV and satisfies one of the following two conditions: (i) exchange {i,j} is not contained in the tabu list; (ii) if exchange {i,j} is in the tabu list, it satisfies the aspiration criterion. Update the tabu list vector TL by including the pair {i,j} as the first element in TL. If the number of elements in TL is greater than t, drop the last element. Update the LTM matrix by setting LTMij = LTMij + 1. Step 4: Set k=k+1. If k ≤ v, go to step 3. If k > v and the long-term memory has not yet been invoked, invoke it by replacing the original distance matrix D with D+LTM and go to step 2. Otherwise, STOP.
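A minimal Python sketch of the short-term memory loop (Steps 2-3), again for a QAP-style assignment and reusing the hypothetical ofv() helper from the SA sketch. The tabu-list size and iteration limit are picked from the ranges quoted above, and the aspiration criterion used here (a tabu move is allowed if it beats the best OFV found so far) is one common choice rather than necessarily the source's; the long-term memory restart of Step 4 is left to the caller.

import itertools

def tabu_search(S, F, D, ofv):
    n = len(S)
    t = max(2, int(0.45 * n))                # tabu-list size, within 0.33n to 0.6n
    v = 8 * n                                # iteration limit, within 7n to 10n
    tabu = []
    best_S, best_z = S[:], ofv(S, F, D)
    for k in range(v):                       # Step 3, repeated v times
        best_move, best_move_z = None, float("inf")
        for i, j in itertools.combinations(range(n), 2):
            S_new = S[:]
            S_new[i], S_new[j] = S_new[j], S_new[i]
            z_new = ofv(S_new, F, D)
            allowed = {i, j} not in tabu or z_new < best_z   # aspiration criterion
            if allowed and z_new < best_move_z:
                best_move, best_move_z = (i, j), z_new
        if best_move is None:                # every move is tabu and none aspirates
            break
        i, j = best_move
        S[i], S[j] = S[j], S[i]              # make the best admissible exchange
        tabu.insert(0, {i, j})               # newest exchange goes to the front of TL
        tabu = tabu[:t]                      # drop the oldest entry beyond size t
        if best_move_z < best_z:
            best_S, best_z = S[:], best_move_z
    return best_S, best_z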
Genetic Algorithm Step 0: Obtain the maximum number of individuals in the population (N) and the maximum number of generations (G) from the user, randomly generate N solutions for the first generation's population, and represent each solution as a string. Set the generation counter Ngen=1. Step 1: Determine the fitness of each solution in the current generation's population and record the string with the best fitness. Step 2: Generate solutions for the next generation's population as follows: (i) retain 0.1N of the solutions with the best fitness in the previous population; (ii) generate 0.89N solutions via mating; (iii) select 0.01N solutions from the previous population randomly and mutate them. Step 3: Update Ngen = Ngen + 1. If Ngen < G, go to step 1. Otherwise, STOP.
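A minimal Python sketch of the generational loop in Steps 0-3, assuming permutation-string solutions and caller-supplied fitness, crossover and mutate functions (concrete sketches of the last two follow the crossover slides below); the 10% / 89% / 1% split comes directly from Step 2. Names are illustrative.

import random

def genetic_algorithm(n, N, G, fitness, crossover, mutate):
    # Step 0: N random permutation strings for the first generation
    population = [random.sample(range(n), n) for _ in range(N)]
    best = max(population, key=fitness)
    for _ in range(G):                                          # Step 3 counts generations
        ranked = sorted(population, key=fitness, reverse=True)  # Step 1
        if fitness(ranked[0]) > fitness(best):
            best = ranked[0][:]
        nxt = [s[:] for s in ranked[:max(1, int(0.10 * N))]]    # Step 2(i): keep the best 0.1N
        while len(nxt) < int(0.99 * N):                         # Step 2(ii): 0.89N via mating
            p1, p2 = random.sample(population, 2)
            nxt.extend(crossover(p1, p2))
        for _ in range(max(1, int(0.01 * N))):                  # Step 2(iii): mutate 0.01N
            nxt.append(mutate(random.choice(population)[:]))
        population = nxt[:N]
    return best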
Fitness functions & Population Generation • Population Generation: • Mating (70-90%), • Retaining a small percentage (10-30%) of individuals from the previous generation, and • Mutation, i.e., randomly altering a randomly selected chromosome (or individual) from the previous population (0.1 to 1%)
Population Generation • Mating • Two-point crossover method • Partially matched crossover method • In the two-point crossover method, given two parent chromosomes {x1, x2, …, xn} and {y1, y2, …, yn}, two integers r, s such that 1 ≤ r < s ≤ n are randomly selected and the genes in positions r to s of one parent are swapped (as one complete substring, without disturbing the order) with those of the other to get two offspring as follows: {x1, x2, …, xr-1, yr, yr+1, …, ys, xs+1, xs+2, …, xn} {y1, y2, …, yr-1, xr, xr+1, …, xs, ys+1, ys+2, …, yn}
Population Generation • Mating • Partially matched crossover method: just like the two-point method, but genes are exchanged only if they lead to a feasible solution • Mutation: take a solution and simply swap two of its genes
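A minimal sketch of the operators just described: a plain two-point crossover that swaps the substring between two random cut points, a feasibility check that a partially matched variant could apply before accepting an exchange, and a two-gene swap mutation. Names are illustrative; note that for permutation-coded chromosomes the plain two-point crossover can produce infeasible (duplicate-gene) offspring, which is exactly why the partially matched method checks feasibility.

import random

def two_point_crossover(x, y):
    n = len(x)
    r, s = sorted(random.sample(range(n), 2))     # cut points with r < s
    child1 = x[:r] + y[r:s + 1] + x[s + 1:]       # swap positions r..s as one substring
    child2 = y[:r] + x[r:s + 1] + y[s + 1:]
    return child1, child2

def is_feasible(chromosome):
    # a permutation chromosome is feasible if no gene (department) repeats
    return len(set(chromosome)) == len(chromosome)

def swap_mutation(chromosome):
    i, j = random.sample(range(len(chromosome)), 2)
    chromosome[i], chromosome[j] = chromosome[j], chromosome[i]
    return chromosome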
Population Generation • Retaining individuals from the previous generation: • Reproduction: a method in which a prespecified percentage of individuals is retained based on probabilities that are inversely proportional to their OFVs • Clonal propagation: a method in which the xN individuals with the best fitness are retained (x is the prespecified proportion of individuals to be retained from the previous generation and N is the population size)
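A minimal sketch of the two retention schemes, for a minimization problem where each individual's OFV is known: reproduction draws individuals with probabilities inversely proportional to their OFVs (with replacement, for simplicity), while clonal propagation keeps the xN best. Function names and arguments are illustrative.

import random

def reproduction(population, ofvs, x):
    k = max(1, int(x * len(population)))
    weights = [1.0 / v for v in ofvs]            # smaller OFV -> larger selection weight
    return random.choices(population, weights=weights, k=k)

def clonal_propagation(population, ofvs, x):
    k = max(1, int(x * len(population)))
    ranked = sorted(zip(ofvs, population), key=lambda p: p[0])
    return [ind for _, ind in ranked[:k]]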
Multicriteria Layout
  Minimize w1C - w2R
  Subject to the above constraints
Model for CMS Design
Parameters:
i, j, k: part, machine, and cell indices, respectively
ci: intercellular movement cost per unit for part i
vi: number of units of part i
uij: cost of part i not utilizing machine j
oij: number of times part i requires an operation on machine j
Mmax: maximum number of machines permitted in a cell
Mmin: minimum number of machines permitted in a cell
Cu: maximum number of cells permitted
S1: set of machine pairs that cannot be located in the same cell
S2: set of machine pairs that must be located in the same cell
np: total number of part types
nm: total number of machines
Model for CMS Design • Decision Variables
Model 1 for CMS Design Minimize Subject to
Model 2 for CMS Design Minimize Subject to
Model P (Primal problem) Minimize Subject to
Model D (Dual problem) Minimize Subject to