An inexact bundle approach to cutting-stock problems

We show that the LP relaxation of the cutting-stock problem can be solved efficiently by the recently proposed inexact bundle method. This method saves work by allowing inaccurate solutions to knapsack subproblems. With suitable rounding heuristics, our method solves almost all the cutting-stock instances from the literature.

Alternatively, the highly efficient hybrid approach of Degraeve and Peeters (2003) generates additional columns by applying subgradient optimization to its Lagrangian relaxation.
In this paper we show that its LP relaxation can also be solved efficiently by the inexact bundle method of Kiwiel (2006). This QP-based method saves work by allowing inaccurate solutions to Lagrangian subproblems. For the CSP, each subproblem is a knapsack problem (KP). We give a simple test for inexact KP solutions (see §2.2 below) that works well in practice for a standard branch-and-bound KP solver of Martello and Toth (1990). Further, to avoid the difficulties arising when a bounded KP is transformed into a 0-1 KP (Vanderbeck, 2002), we use relaxed bounds. Next, by adapting the ideas of (Belov and Scheithauer, 2002; Holthaus, 2002; Stadtler, 1990; Wäscher and Gau, 1996) to our inexact framework, we give rounding heuristics that solve almost all the CSP instances from the literature; in particular, they perform better than the best heuristics of Wäscher and Gau (1996). In effect, our inexact KP solutions, bound relaxation and rounding heuristics should be of interest also for other, more traditional CG-based approaches to the CSP.
We now provide a historical perspective for our contributions. Our work was inspired by Briant et al. (2005), where (together with four other applications) the LP relaxation of the CSP was solved by several variants of CG and a standard bundle method. On some CSP instances, bundle was much slower than CG, mostly because its subproblems were more difficult for the KP solver of Vanderbeck (2002). Hence Claude Lemaréchal suggested the CSP as a testing example for our inexact bundle (Kiwiel, 2006). For technical reasons, instead of the KP solver of Vanderbeck (2002), we used the MT1R procedure of Martello and Toth (1990). Our initial, quite disappointing results improved greatly once we used relaxed KP bounds and inexact solutions: our method became much faster in practice than all the algorithms tested in (Briant et al., 2005, §2.2) (see §5.6.2). Next, we collected more test instances and adapted some rounding heuristics from the literature. The main aim was to appraise our inexact bundle solutions: they are deemed accurate enough if the heuristics solve almost all instances.
We now summarize our findings on admissible inexactness. The relative accuracy in dual function evaluations is controlled by the tolerance ε_r of our KP solver (cf. §2.2). First, for ε_r = 0 (i.e., exact bundle), the average computing times are much greater than those for ε_r = 10^-5 (usually by factors of 30 or more), although the iteration numbers and the heuristic performance are almost the same. Second, the iteration numbers and timings are close for ε_r = 10^-3, 10^-4 and 10^-5; however, relative to ε_r = 10^-5, our heuristics perform much worse for ε_r = 10^-3, and just marginally worse for ε_r = 10^-4. Third, further experiments (not reported here for brevity) gave very close results for ε_r = 10^-5, 10^-6, 10^-7 and 10^-8. To sum up, ε_r = 10^-5 seems to be a good borderline choice. On the other hand, since in the CSP the gap between the primal value and the relaxed dual value is usually less than 1, and either rounding heuristics or branch-and-bound should "close" this gap, it may seem more appropriate to ensure a given absolute accuracy ε_a < 1 in dual function evaluations (see §5.7.3). Quite surprisingly, our results for a fairly large ε_a = 0.01 are very close to those for ε_r = 10^-5, whereas for ε_a = 0.05 our heuristics perform slightly worse.
We thus present the first successful application of our inexact bundle method.
The paper is organized as follows. In §2 we recall the classic CSP model of Gilmore and Gomory (1961) and introduce inexact KP solutions for its Lagrangian relaxation. Our rounding heuristics are given in §3 in a general form suitable for other CSP solvers. The

Lagrangian relaxation of the CSP
The one-dimensional cutting-stock problem (CSP) is to minimize the number of stock pieces of width W used to meet the demands d_i for items to be cut at their widths w_i ∈ (0, W], for i = 1, ..., m. The bin-packing problem (BPP) is a special case of the CSP with unit demands.
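To make the definition concrete, a cutting plan assigns a multiplicity to each pattern (a vector of item counts whose total width fits in W), and its cost is the number of stock pieces used. The following minimal sketch checks such a plan; the function name and instance data are ours, chosen only for illustration.

```python
def pieces_used(W, widths, demands, patterns, counts):
    """Verify that each pattern fits the stock width W and that the patterns,
    used with the given multiplicities, meet all demands; return the number
    of stock pieces the plan consumes."""
    for p in patterns:
        assert sum(a * w for a, w in zip(p, widths)) <= W, "pattern too wide"
    for i, d in enumerate(demands):
        assert sum(p[i] * c for p, c in zip(patterns, counts)) >= d, "demand unmet"
    return sum(counts)

# Hypothetical instance: W = 10, items of widths 6 and 4 with demands 2 and 3.
n = pieces_used(10, [6, 4], [2, 3], [[1, 1], [0, 2]], [2, 1])
print(n)  # -> 3
```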

Inexact KP solutions
To strengthen our relaxation, we may consider only proper patterns p such that p ≤ b. Indeed, adding the bound p ≤ b to (1) and (4) does not change v*, but it may raise v_LP (Nitsche et al., 1999). Then the CG subproblem (4) becomes a bounded KP, which can be turned into a 0-1 KP via the transformation of (Martello and Toth, 1990, §3.2). However, this transformation may duplicate solution representations, thus creating difficulties for 0-1 KP solvers (Vanderbeck, 2002). To avoid duplicates, we may use the relaxed bound which corresponds to replacing d_i in (8) by the smallest number of the form 2^j − 1 with j ≥ 1 such that 2^j − 1 ≥ d_i (2d_i − 1 in the worst case); the number of transformed variables is the same. We solve the transformed KP by a double precision version of the branch-and-bound procedure MT1R of Martello and Toth (1990). To reduce its work, we allow MT1R to find an approximate solution for a given relative accuracy tolerance ε_r. Namely, the backtracking step exits if ζ̄ ≥ (1 − ε_r)ζ̂, where ζ̄ is the value of the incumbent p̄ and ζ̂ is MT1R's upper bound on the optimal value. Hence, by (5), we have the accuracy estimates (10). For a normal exit with an optimal p̄ = p(u), we may replace ζ̂ by ζ̄ and ε_r by 0 in (10).
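The relaxed bound and the early-exit test above can be sketched as follows. This is a minimal illustration of the two rules as stated in the text, not the MT1R code itself; the function names are ours.

```python
def relaxed_bound(d):
    """Smallest number of the form 2**j - 1 (j >= 1) with 2**j - 1 >= d.
    It is at most 2*d - 1 and needs the same number of binary-expansion
    variables as the original bound d."""
    b = 1
    while b < d:
        b = 2 * b + 1
    return b

def accept_incumbent(value, upper_bound, eps_r):
    """MT1R-style early exit: accept the incumbent once its value is within
    relative tolerance eps_r of the current upper bound."""
    return value >= (1.0 - eps_r) * upper_bound
```

For example, demand 5 is relaxed to 7 = 2^3 − 1, and with eps_r = 10^-3 a branch-and-bound run may stop as soon as the incumbent reaches 99.9% of the current upper bound.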
As for our choice of MT1R, we add that Valerio de Carvalho (2005) used MT1R as well, Belov and Scheithauer (2006) employed a similar branch-and-bound solver, whereas Vanderbeck (1999) and Briant et al. (2005) used the more specialized branch-and-bound solver of Vanderbeck (2002). On the other hand, Degraeve and Peeters (2003) employed a similar branch-and-bound solver but with prices multiplied by 10,000 and rounded to integers, without discussing the effects of inexact KP solutions. Further, more recent KP solvers (Kellerer et al., 2004) accept integer data only; hence their use with suitable price roundings is left open for a future study. To sum up, MT1R is outdated, but we could not find anything better, and we believe that the current results will serve as a useful yardstick for future work with modern KP solvers.

Heuristic rounding of relaxed solutions
Typical rounding heuristics for the CSP proceed as follows; cf. (Belov and Scheithauer, 2002, 2006; Degraeve and Peeters, 2003; Holthaus, 2002; Scheithauer et al., 2001; Stadtler, 1990; Wäscher and Gau, 1996). A solution ẑ of the LP relaxation is rounded down into an integer solution z := ⌊ẑ⌋. Next, a sequential heuristic applied to the residual problem (2) with d replaced by d' := d − Σ_p p z_p delivers a residual solution z̄. Then the sum z + z̄ serves as a possibly inexact solution of (2) (which is exact if its value is equal to a lower bound on v*; e.g., ⌈v_LP⌉). Since for simple rounding down (z = ⌊ẑ⌋) the residual problem may be too large to be solved optimally by a heuristic, some components of z may be increased (Holthaus, 2002; Scheithauer et al., 2001); however, if the residual problem becomes too small to produce a solution to the original problem, some components of z may be decreased (Belov and Scheithauer, 2002).
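The round-down step and the residual demands d' = d − Σ_p p z_p can be sketched as follows; this is a minimal illustration with hypothetical data, not our actual implementation.

```python
import math

def round_down_residual(d, patterns, z_frac):
    """Round a relaxed solution down (z := floor(z_hat)) and return it
    together with the residual demands d' = d - sum_p p * z_p."""
    z = [math.floor(zp) for zp in z_frac]
    d_res = list(d)
    for p, zp in zip(patterns, z):
        for i, a in enumerate(p):
            d_res[i] -= a * zp
    return z, d_res

# Hypothetical data: two items, three patterns, fractional LP solution.
z, d_res = round_down_residual([4, 3], [[2, 0], [0, 1], [1, 1]], [1.6, 2.2, 0.7])
print(z, d_res)  # -> [1, 2, 0] [2, 1]
```

A sequential heuristic would then be run on the residual demands d_res, and its patterns added on top of z.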
As for sequential heuristics, in §3.2 we describe minor (but useful) modifications of the first-fit-decreasing (FFD) heuristic of Chvátal (1983) and the heuristics of Belov and Scheithauer (2004) and Holthaus (2002). Since it pays to call lighter heuristics first, useful combinations of rounding and sequential heuristics are detailed in §3.3.
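For reference, plain FFD (without the modifications of §3.2) can be sketched as follows: expand the demands into individual items, sort them by width in decreasing order, and place each item into the first open stock piece where it fits.

```python
def first_fit_decreasing(W, widths, demands):
    """Plain first-fit decreasing: sort items by width descending and put
    each into the first bin with enough remaining capacity; return the
    resulting patterns (one list of item widths per stock piece)."""
    items = sorted((w for w, d in zip(widths, demands) for _ in range(d)),
                   reverse=True)
    caps = []       # remaining capacity of each open stock piece
    patterns = []
    for w in items:
        for j, cap in enumerate(caps):
            if w <= cap:
                caps[j] -= w
                patterns[j].append(w)
                break
        else:  # no open piece fits: start a new one
            caps.append(W - w)
            patterns.append([w])
    return patterns
```

For instance, with W = 10, widths (6, 4, 3) and demands (2, 2, 2), FFD pairs each 6 with a 4 and puts the two 3s together, using three stock pieces.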
We add that the rounding procedures of (Vanderbeck, 1999, §3.7) and (Wäscher and Gau, 1996, RSUC) would be difficult to implement in our context. As for sequential heuristics, we also tried the best-fit-decreasing of Chvátal (1983) and the fill bin heuristics of Vanderbeck (1999), but they did not perform significantly better than FFD in our trials.

A general rounding procedure
Numbering the patterns so that P = {p^j}_{j=1}^n, given an incumbent solution z* of (11) (e.g., found by FFD) and a point ẑ ∈ ℝ^n_+ (e.g., found by LP relaxation), the following procedure attempts to improve z* by calling a heuristic on residual problems derived from rounded variants of ẑ. Let e := (1, ..., 1) ∈ ℝ^n.
One of our heuristics uses the following modification of Step 3, based on the ideas in (Holthaus, 2002, §3.2).

Combinations of rounding and sequential heuristics
We now give more details on the five heuristics used in our experiments. The heuristics are described as if being called by a general solver for the LP relaxation of (11), which could be any variant of the CG procedure or the bundle method given in §4.
Our initial heuristic H0 calls FFD with d' = d (i.e., on the original problem) to initialize the incumbent z* := z̄, the upper bound N := ez*, and the lower bound θ_1 := −∞. Suppose that at iteration k ≥ 1 of the solver the following quantities are available: z* is an incumbent solution of (11), z^k ∈ ℝ^n_+ and u^k ∈ ℝ^m_+ are tentative primal and dual solutions of the LP relaxation, and θ_k is a lower bound on θ* = v_LP (cf. (6)). If ez* = ⌈θ_k⌉, the solver may stop (since z* is optimal). Otherwise, for iterations k specified below, the remaining heuristics consist in calling an extension of Procedure 1 with a copy of Step 4 inserted after Step 1; the sequential heuristics employed at these steps are listed below.
Our periodic heuristic H1 is called by the solver every twentieth iteration, starting from iteration k = m + 1 (i.e., for k = m+1, m+21, ...), with the current relaxed solution ẑ := z^k and the lower bound θ_k ≤ θ*. H1 employs FFD in Procedure 1, exiting if ez* = ⌈θ_k⌉. Our final heuristics H2, H3 and H4 are called successively upon termination of the solver, using the final ẑ := z^k, u := u^k and θ_k. H2 employs both FFD and SHP, H3 just SHP and the modified Step 3', whereas H4 uses SVC. Of course, H3 and H4 (or just H4) are not called if H2 (or H3) exits with ez* = ⌈θ_k⌉, whereas SVC exits when ez + ez̄ = ⌈θ_k⌉. The impact of the various heuristics will be discussed in §5.8.

The inexact proximal bundle method
We now sketch the main features of the inexact bundle method of Kiwiel (2006).

Data sets
In our computational experiments, for the CSP we use the 28 industrial instances of Vance (1998), the 10 industrial instances of Vanderbeck (1999), and the 20 industrial instances of Degraeve and Schrage (1999). In addition, we use the following randomly generated instances: the 4000 instances of Wäscher and Gau (1996), the 3360 instances of Degraeve and Peeters (2003) and the 120 instances of Vanderbeck (1999). For the BPP, we use the 540 randomly generated instances of Degraeve and Peeters (2003), and the 160 instances from the BINPACK collection of the OR-Library (Beasley, 1990).
Combining the different values of m, c and d̄ results in 40 classes; in each class, 100 instances are generated.
The uniform category has the capacity W = 150, m weights uniformly distributed in the interval [20, 100], and 20 instances generated for each value of m = 120, 250, 500, 1000. (The classes with m = 500, 1000 also appear in the BPP category of Degraeve and Peeters (2003), but with different instances.) In the triplet category, each bin of capacity W = 1000 is filled with exactly three items (the first item w' is picked in [380, 490], the second item w'' in [250, (W − w')/2), and the third item equals W − w' − w''). There are 20 instances for each value of m = 60, 120, 249, 501.
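The triplet recipe above can be sketched as a generator. This is a hypothetical reconstruction of the recipe as stated, not the original generator: the function name, the seeding, and the integer rounding of the half-open interval [250, (W − w')/2) are our assumptions.

```python
import random

def triplet_instance(m, W=1000, seed=0):
    """Generate a triplet-category BPP instance: each bin of capacity W is
    filled by exactly three items w1 in [380, 490], w2 in [250, (W - w1)/2),
    and w3 = W - w1 - w2.  m must be a multiple of 3."""
    rng = random.Random(seed)
    items = []
    for _ in range(m // 3):
        w1 = rng.randint(380, 490)
        w2 = rng.randint(250, (W - w1) // 2 - 1)  # strictly below (W - w1)/2
        items.extend([w1, w2, W - w1 - w2])
    return items
```

By construction w3 > w2, each triple sums exactly to W, and the optimal packing uses m/3 bins, which is what makes these instances hard for rounding heuristics.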

Implemented variants
Our codes were programmed in Fortran 77 and run on a notebook PC (Pentium M 755, 2 GHz, 1.5 GB RAM) under MS Windows XP.
For solving the dual problem (6), we used a general-purpose bundle code that treats subgradients as dense vectors in double precision. A faster code could exploit the fact that [...] (20) with the routine of Kiwiel (1994). We used M = m + 3 to test how "minimal" bundle performs.
The bounded KPs arising in column generation and SHP were solved by the modified version of MT1R (cf. §2.2) with the accuracy tolerance ε_r = 10^-5 (other choices are discussed in §5.7.2); MT1R's tolerance ε was set to 10^-12. For column generation, we used the relaxed bounds of (9), because the tighter bounds of (8) produced longer computing times. In contrast, SHP employed in (14) the natural bounds given by (8) with d replaced by d''.
Our implementation of the rounding procedure of §3.1 is slower than necessary, because the patterns are recovered from the stored subgradients instead of being stored separately.

Results for the cutting-stock problem
To ease comparisons, we follow closely the presentation of Degraeve and Peeters (2003).
Every data class is identified by three parameters: the number of items m, the interval int in which the widths are distributed, and the average demand d̄. An indicator "all" for any of these parameters means that the reported results are aggregated over all relevant values of that particular parameter. If a parameter is constant for all instances represented in a table, its value is indicated in the table heading.
Our results for the small-item-size instances of Degraeve and Peeters (2003) with int = all, d̄ = all are reported in Table 1; full details are given in Tables 17-19 in the Online Supplement to this paper on the journal's website. The columns m_av and m'_av give the average numbers of items and variables in the associated 0-1 knapsack subproblems. The columns i_av and i_mx give the average and maximum numbers of iterations of the bundle code. From the entries for n_e, H1 through H4 and n_g in Table 1, we see that early termination occurred on between 47% and 69% of problems, H0 and H1 solved between 70% and 85% of problems, H2 solved almost all the remaining problems, H3 and H4 helped in solving 2 problems, and just one out of the 1680 problems was not solved. Note that the best method LR of Degraeve and Peeters (2003) also could not solve one instance within 15 minutes (two instances within 6 minutes), and its FFD-based rounding heuristic solved 91.6% of problems, whereas our "lighter" heuristics H0 through H2 solved 99.8% of problems.
Our results for the medium-item-size instances of Degraeve and Peeters (2003) are presented in Table 2, where each row gives statistics over the 240 instances used for each value of m (see Tables 20 and 21 for more details). Early termination occurred on between 22% and 35% of problems, H0 and H1 solved between 49% and 56% of problems, H2 solved almost all the remaining problems, H3 solved one problem, H4 solved 7 problems, and just two out of the 1680 problems were not solved. The rounding heuristic of Degraeve and Peeters (2003) [...]

Table 3: CSP instances of Wäscher and Gau (1996)

Comparing Tables 1 and 2, we see that the average and maximum solution times are quite similar in the small- and medium-item-size cases for problem sizes m up to 50. However, for m = 75 and 100, in the medium-item-size case the average solution times grow significantly, and the maximum solution times jump up, most spectacularly on the instances with width interval [1500, 2500]; see Table 21. This is due to the poor performance of our knapsack solver on these instances. Similar slowdowns on this interval were reported in (Degraeve and Peeters, 2003, Tab. 4a) already for m = 20, i.e., even for smaller problems.
To save space, Table 3 presents only aggregate results on the instances of Wäscher and Gau (1996), with each row giving statistics over the 800 instances used for each value of m. Here our main point is that only three out of 4000 (0.075%) problems were not solved.
Our "lighter" heuristics H0 through H2 solved 99.7% of problems, whereas the two best (and more complicated) heuristics RSUC and CSTAOPT of Wäscher and Gau (1996) solved 98.0% and 92.7% of problems, respectively (99.6% if they had been applied together). The fairly large maximum solution time in Tab. 3 stemmed from a single knapsack subproblem.
Table 4 gives our results for the 6 data classes of Vanderbeck (1999) with m = 50 and 20 instances per row. Since we used the original instances, the results are not identical to those of Degraeve and Peeters (2003) in Tabs. 17 and 21, but the performance of H0 through H2 is similar; in fact H0 through H2 suffice for solving all the CSP instances used by Vanderbeck (1999).
Quite surprisingly, all the industrial instances we could find in the literature turned out to be easy for our method: they were solved in a fraction of a second (see Tables 22-24).

Results for the bin-packing problem
Following Degraeve and Peeters (2003), in the next three tables we present our results for the BPP. Table 5 gives our results for the BPP instances of Degraeve and Peeters (2003) (20 instances per row). All the 360 instances were solved (H4 helped once).
Table 6 reports results for the BINPACK instances from the OR-Library (Beasley, 1990) (20 instances per row). The first four uniform classes were solved by calling H4 just once.
However, only 19 out of the 80 triplet instances were solved (with H4 helping on one instance).
The remaining instances had unit gaps; the "gap" column gives averages of relative gaps (ez* − ⌈θ_k⌉)/⌈θ_k⌉. We add that for the CSP instances of §5.3, the running times of H4 were not excessive, and H4 was called quite infrequently anyway. In contrast, on the triplet classes t249 and t501, the use of H4 increased the running times substantially, as illustrated in Table 7 (the influence of H3 could be ignored). Note that the triplet classes are quite difficult for traditional LP relaxation (Degraeve and Peeters, 2003, Tab. 12).

Impact of tighter knapsack bounds
The results of §5.3 were obtained for the relaxed bounds of (9). Using the tighter bounds of (8) allowed us to solve just two more instances at the expense of longer running times. To save space, the following tables and remarks list only data classes on which the tightening of KP bounds mattered most, giving more details for larger problem sizes.
Concerning Tables 9-10, the good news is that tighter bounds allowed us to solve all the small-item-size instances of Degraeve and Peeters (2003), and all but one of the medium-item-size instances of Degraeve and Peeters (2003). Unfortunately the running times grew substantially relative to Tabs. 1-2. On the small-item-size instances, for m ≥ 40 the average running times grew by about 150%; on the medium-item-size instances, the average running times grew by 200%, 217%, 303% and 446% for m = 40, 50, 75 and 100 (see Tabs. 25-26 for more details). The iteration numbers were about the same. The increase in running times can be attributed to the knapsack solver (which made more than two million backtrackings on some subproblems).
For the instances of Wäscher and Gau (1996), the same 3997 out of 4000 instances were solved, but relative to Tab. 3, for m = 40 and 50 the average running times grew by 100% and 143%. For the instances of Vanderbeck (1999), relative to Tab. 4, the average running times grew by between 67% and 205%; their sum increased by 175%.

Comparison with Degraeve and Peeters (2003)
In Table 11 we compare the average running times of our bundle relaxation code BR with the two best procedures HR and LR of Degraeve and Peeters (2003) on the instances used for Tabs. 17, 18, 20 and 21. The times for HR and LR, obtained on a Pentium Pro 200 MHz, were extracted from (Degraeve and Peeters, 2003, Tabs. 1-4b). Two points should be noted.
First, both HR and LR employed an industrial LP solver (much more sophisticated than our dense QP solver), and LR additionally used subgradient optimization. Second, lacking exact benchmark data, let us assume that the machine of Degraeve and Peeters (2003) was ten times slower than ours. Then Table 11 suggests that on the small-item-size instances BR was comparable in speed with HR (about twice slower than LR), while on the medium-item-size instances BR could perform better than LR. Similarly, in view of Tab. 3 and (Degraeve and Peeters, 2003, Tab. 10), on the instances of Wäscher and Gau (1996) BR was as fast as HR (twice slower than LR), whereas Tab. 4 and (Degraeve and Peeters, 2003, Tab. 5a) indicate that on the instances of Vanderbeck (1999) BR was comparable with HR, and sometimes faster than LR. On the industrial instances of Degraeve and Schrage (1999) (cf. Tab. 24 and (Degraeve and Peeters, 2003, Tab. 9)), BR behaved like HR (sometimes better than LR).

Comparison with Briant et al. (2005)

We now compare our running times with those in (Briant et al., 2005, §2.2), where the task was just to produce sufficiently accurate primal and dual solutions z^k and u^k that satisfy the accuracy criterion (26). Results for ε_r = 10^-5 are given in Table 13, and for ε_r = 10^-3 and 10^-4 in Tables 27-28. The accuracy obtained was quite poor for ε_r = 10^-3, a bit too weak for ε_r = 10^-4, but very good for ε_r = 10^-5 (the results for smaller ε_r were similar). These values of ε_r are also "representative" when our code is run [...] Our running times were shorter at least 7.5 times for ind_9, 22 times for 50b100c4, 43 times for u120, 56 times for u250, 237 times for t120, and 197 times for t249.

Comparison with exact bundle
When the dual objective evaluations happen to be exact, our BR code runs essentially like the standard bundle method used in (Feltenmark and Kiwiel, 2000). Therefore, Tables 14-16 summarize our results for exact KP solutions (ε_r = 0) relative to Tabs. 1-3 (where ε_r = 10^-5); similar features were observed on other instances. First, the iteration numbers and the performance of our heuristics did not change significantly. (In other words, the errors occurring in the inexact case were small enough to be accommodated gracefully by our code.) Performance profiles (Dolan and Moré, 2002) are given in Figs. 1-3 in the supplement.

Other choices of the relative error tolerance
In the initial version of this paper we used the accuracy tolerance ε_r = 10^-8; the results were very close to those in Tabs. 1-10 (where ε_r = 10^-5). In parallel with Tabs. 14-16, Tables 29-34 give results for ε_r = 10^-3 and 10^-4. The average iteration numbers and computing times were similar for ε_r = 10^-3, 10^-4 and 10^-5. However, ε_r = 10^-3 was too large, causing our heuristics to fail more frequently. On the other hand, ε_r = 10^-4 did not improve on our standard choice of ε_r = 10^-5 (giving one more gap in Tab. 32).
Thus we may expect failures when the absolute errors get close to Nε_r > 1. Now, in Tables 29-31 the average values of u and N grow linearly with m, reaching order 5000, 2875 and 1250 for the final classes, where Nε_r > 1 for ε_r = 10^-3; thus the small percentage of failures suggests that the actual errors tended to be smaller than their upper bounds.

Absolute error tolerances
In view of the discussion in §5.7.2, we also considered choosing ε_r so that the evaluation errors did not exceed a given absolute error tolerance ε_a < 1 (with SHP using ε_r = 10^-5 as in §5.3). Specifically, for evaluating θ we used ε_r := ε_a/N. [...] Relative to Tabs. 1-3, where bkmin = ∞, for bkmin = 0 the average iteration numbers grew by 59-114% on the largest instances, the solution times decreased fairly mildly, and two more gaps occurred. In contrast, for bkmin = 1000 the average iteration numbers grew by only 5-13% on the largest instances, the solution times decreased quite mildly (although the decreases by 32% and 37% for m = 75 and 100 in Tab. 45 are noticeable), and three gaps disappeared. On the other hand, the maximum iteration numbers increased substantially on the larger instances, giving some cause for concern.

A discussion of error tolerances
Although in general one may expect tradeoffs between the accuracy of subproblem solutions and the speed of convergence, for the CSP such tradeoffs may have little practical impact, since Tables 12-34 exhibit fairly small variations in iteration numbers and computing times for "reasonable" accuracy tolerances. Therefore, we would not expect much gain from dynamic tolerance adjustment: loose at the beginning and progressively decreasing.
We add that dynamic handling of the accuracy may be important in general, especially if the oracle's work depends "continuously" on the accuracy required. However, this need not be the case for our MT1R, which seems to have the following properties: (1) its work explodes on some subproblems when the accuracy required is "too high"; and (2) its work does not vary much otherwise. Thus the main point is to avoid accuracies that are "too high", or "too low" for the dual solver to succeed, whereas for all "intermediate" accuracies, the solution time should not vary significantly (unless smaller accuracies affect the iteration numbers "more than proportionally"). We conjecture that similar effects are likely to hold for other integer-programming applications with branch-and-bound oracles that deliver relatively good incumbents quickly.
We now consider the case where the heuristics H1 through H4 are replaced by the heuristic named H5, which consists in calling, upon bundle termination, Procedure 1 with Steps 2, 3 and 5 omitted, and Step 4 using FFD; in other words, the relaxed primal solution is rounded down and the residual problem is solved by FFD. The results for H5 (with ε_r = 10^-5) given in Tables 47-49 show that H5 performs quite poorly relative to Tabs. 1-3 (and that H1 reduces the iteration numbers, and usually the computing times as well). On the other hand, we note that H5 solved 91.5% and 68.8% of problems in Tabs. 47-48, whereas the FFD-based heuristic of Degraeve and Peeters (2003) solved 91.6% and 69.9%; further, H5 solved 92.8% of problems in Tab. 49, whereas the corresponding heuristic RFFD of Wäscher and Gau (1996) solved 92.5%. Thus our bundle results with H5 are very similar to those obtained with other CG solvers.

Table 2: Instances of Degraeve and Peeters (2003), int = all, d̄ = 50

The columns i_av and i_mx give the average and maximum numbers of iterations of the bundle code. The columns t_av and t_mx give the average and maximum running times in wall-clock seconds. The column n_e lists the numbers of "early" terminations due to discovering that ez* = ⌈θ_k⌉ for the incumbent z* delivered by H0 or H1 before bundle terminated on its own. Recall that H1 is called after H0, H2 after H1, etc., unless ez* = ⌈θ_k⌉ occurs earlier. The columns labelled H1 through H4 give the numbers of instances in which the corresponding heuristic found the best primal value ez* first (for the remaining instances ez* was found by H0); a zero entry means that heuristic was not called or did not contribute usefully. The final column n_g reports the numbers of instances with a nonzero final gap g := ez* − ⌈θ_k⌉; we stress that the final gaps never exceeded one unit in all of our instances. The averages, maxima and sums in Table 1 are taken over the 240 instances used for each value of m.

Table 5: BPP instances of Degraeve and Peeters (2003)

Table 8: Modified BPP classes of Degraeve and Peeters (2003)

Table 8 presents our results for the modified BPP classes of Degraeve and Peeters (2003) (20 instances per row as described in §5.1). Just one out of the 180 problems was not solved (H4 helped on one problem). The transformation into a CSP reduced the number of items by at most 5% on average. For almost 500 variables, the large iteration numbers and running times are not too surprising.

Table 12 was obtained for ε_r = 0, i.e., exact KP solutions. Results for ε_r = 10^-5 are given in Table 13.

Table 15: Instances of Degraeve and Peeters (2003), ε_r = 0

In view of the excellent accuracy in Tab. 13, we may compare our timings in Tab. 13 with the best ones of (Briant et al., 2005, Tabs. 1, 2 and 5) for various CG and bundle variants, where the machine used was about twice slower than ours, and the CG variants could stop before the first part of (26) held. Since quoting the tables of Briant et al. (2005) would take too much space, we just state the conclusion:

Table 25: Small-item-size instances with tight KP bounds, d̄ = all