Tuesday, July 15, 2025

What is the Big-M Method, and Why Should I Care?

The Big-M method is a common modeling technique used in Mixed Integer Programming (MIP) to represent logical conditions or implications using linear constraints. It introduces a large constant, typically denoted as M, to "turn on" or "turn off" constraints based on the value of binary variables. I usually call these “switch constraints”.

How It Works

Suppose you have a binary variable y ∈ {0,1} and a constraint that should only be active when y = 1. You can use the Big-M method to model this:

Example:

You want the constraint:

x ≤ b must hold whenever y = 1

Using Big-M, you rewrite it as:

x ≤ b + M(1−y)

  • When y = 1: the constraint becomes x ≤ b
  • When y = 0: the constraint becomes x ≤ b + M, which is effectively non-binding if M is large enough
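
For readers who want to try this outside of Woodstock, here is a minimal sketch of the same switch using the open-source PuLP package in Python (the bound, the Big-M value and the variable names are made up for illustration):

# A Big-M "switch" constraint: x <= b is enforced only when the binary y = 1.
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary

b, M = 10, 1000                      # bound and a Big-M constant
prob = LpProblem("big_m_switch", LpMaximize)
x = LpVariable("x", lowBound=0, upBound=500)
y = LpVariable("y", cat=LpBinary)

prob += x                            # maximize x just to see the switch work
prob += x <= b + M * (1 - y)         # active (x <= b) only when y = 1
prob += y == 1                       # flip this to y == 0 and the bound effectively disappears

prob.solve()
print(x.value(), y.value())          # 10.0 with y = 1; 500.0 if you force y = 0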

Common Use Cases

  1. Indicator constraints: Linking binary decisions to continuous variables.
  2. Conditional constraints: Enforcing constraints only when certain binary variables are active.
  3. Modeling piecewise linear functions.
  4. Disjunctive constraints: Representing "either-or" logic (see the sketch below).
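
Use case 4 deserves a quick illustration. A standard trick for "either x ≤ 10 or x ≥ 50" uses one binary and two Big-M constraints; here is a hedged PuLP sketch with arbitrary numbers (not taken from any real model):

# Either-or logic with Big-M: exactly one of the two branches is enforced.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary

M = 1000
prob = LpProblem("either_or", LpMinimize)
x = LpVariable("x", lowBound=0, upBound=100)
z = LpVariable("z", cat=LpBinary)     # z = 0 -> first branch holds, z = 1 -> second branch holds

prob += x                             # minimize x, subject to the disjunction
prob += x <= 10 + M * z               # enforced when z = 0, relaxed when z = 1
prob += x >= 50 - M * (1 - z)         # enforced when z = 1, relaxed when z = 0

prob.solve()
print(x.value(), z.value())           # 0.0 with z = 0: the cheaper branch wins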

Choosing the Right M

Choosing M is critical:

  • Too small: The model may become infeasible.
  • Too large: It can cause numerical instability and slow down the solver.

A good practice is to estimate the smallest possible M that still ensures correctness, based on the problem's data bounds.

Example:

Suppose we want to know whether any of stand 234 was harvested. If we have a theme (TH3) in the LANDSCAPE section that tracks standID, we can do this easily. We need to know the area of stand 234, a theme-based output that tracks harvest area by stand (oaHarv(_TH3)), and a binary variable (vb234). The setup is straightforward:

OUTPUTS
*OUTPUT oaHarv(_TH3) Harvest area
*SOURCE .MASK() aHARV _AREA

OPTIMIZE
*VARIABLE
vb234 _BINARYARRAY ; valid for any planning period

*CONSTRAINTS
oaHarv(234) - M * vb234 <= 0 1.._LENGTH

So, what should be the value of M? Suppose that the AREAS section lists the area of stand 234 as 8.23483834 acres. If you use 8.2 as the value of M in the constraint, it will be too small to completely offset the harvest area if the entire stand is harvested. If you use M = 1E6, the constraint will be arithmetically correct, but the value of M is way too large and may cause problems with the solver. Remember, the value of M should be just big enough to offset the maximum value of your continuous variable (oaHarv). That means we should use the same value used in the AREAS section: 8.23483834.
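
To see why 8.2 is too small, here is a quick back-of-the-envelope check in Python (the stand area is the one from the example above; everything else is illustrative):

# Check the switch constraint oaHarv(234) - M * vb234 <= 0 when the whole stand is cut.
area_234 = 8.23483834          # stand area from the AREAS section, in acres
harvest  = area_234            # entire stand harvested this period
vb234    = 1                   # the binary must be 1 because harvest > 0

for M in (8.2, 8.23483834, 1e6):
    lhs = harvest - M * vb234
    print(f"M = {M:<12} lhs = {lhs:+.8f}  satisfied: {lhs <= 1e-9}")

With M = 8.2 the left-hand side is +0.03483834, so the constraint cannot be satisfied even with vb234 = 1, and the model becomes infeasible whenever the full stand is harvested. M = 8.23483834 is just big enough.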

Tip 1: Remember that stands may be composed of different polygons, so make sure that your total stand area includes all polygons (area records).

Tip 2: Consider carrying at least 8 decimal places for area records. Representing real numbers in computers is not always exact, so the more significant figures you carry, the less likely you’ll introduce numerical difficulties.

Tip 3: In Woodstock models, almost everything is a function of land area. If you are setting up a lot of constraints involving binary variables, consider using thematic indexing to represent stand area. In this construct, all area records have a value of 1 and the true area of the stand is introduced in the LANDSCAPE section. Doing so will simplify the writing of switch constraints because the value of M is always an integer (usually 1, but it may be larger depending on the number of area records represented). In turn, the value of M will necessarily be of small magnitude.

Wrap-Up

If you’re interested in learning more about mixed integer programming or how to deploy thematic indexing in your models, give me a shout! If you have questions, don’t hesitate to ask in the comments.


Monday, June 30, 2025

Tackling Numerical Difficulties in Linear Programming: What Every Modeler Should Know

Introduction

Linear programming (LP) is a go-to method for solving optimization problems in everything from supply chains to finance. But even the most elegant LP models can run into trouble—not because of logic errors, but due to numerical difficulties that creep in during computation.

Let’s explore what these issues are, why they happen, and how you can avoid them.


What Are Numerical Difficulties?

Numerical difficulties arise when the math behind your LP model becomes unstable or inaccurate due to the way computers handle numbers—especially floating-point arithmetic. These issues can lead to:

  • Slow performance
  • Incorrect solutions
  • Solver crashes
  • Misleading feasibility or optimality reports

Common Culprits (with Real-World Examples)

1. Ill-Conditioned Matrices

When your constraint matrix has values that vary wildly in magnitude, solvers can struggle.

Example: A harvest scheduling model with volumes in board-feet and individual areas smaller than 1/10th acre might include coefficients ranging from 10⁻⁶ to 10⁶ (or worse! I've seen it). This can cause the solver to miscalculate dual values or even fail to converge.

Harvest scheduling models typically represent large areas of land (thousands or even millions of acres). Stand volumes are often hundreds of units per acre, and prices are also usually hundreds of dollars per unit. Multiplying these things together, it is possible to have objective function values in the hundreds of millions of dollars, greatly exceeding the 10⁻⁴ to 10⁶ range that many solvers recommend. If your unscaled objective function is in the range of 100 million to 1 billion, you can always divide the outputs used in the objective function by 1,000,000 to get coefficients within the desired range.

2. Floating-Point Precision Errors

Computers can’t represent every number exactly, especially decimals. This leads to rounding errors.

Example: A constraint like x + y = 1 might yield x = 0.999999999 and y = 0.000000001. Technically correct, but practically problematic—especially if your application requires clean, interpretable results. As a result, you should typically carry as many decimal places as possible, particularly in coefficients like area. For example, instead of an area record with 504.1, use 504.09775354.
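
To make the point concrete, here is the classic demonstration in Python (any language using IEEE 754 doubles behaves the same way):

# Floating-point arithmetic is inexact, which is why solvers test constraints against tolerances.
x = 0.1 + 0.2
print(x)                      # 0.30000000000000004
print(x == 0.3)               # False
print(abs(x - 0.3) <= 1e-6)   # True: roughly how a solver's feasibility tolerance "sees" it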


3. Degeneracy

Degeneracy occurs when multiple basic feasible solutions exist at the same vertex. This can slow down or confuse the solver.

Example: In a network flow model (i.e., roads), several paths might have the same cost. The simplex method could cycle or take many iterations to find the optimal solution. Reprocess the network to eliminate redundancies by applying a shortest path algorithm.


4. Scaling Issues

When variables or constraints differ by several orders of magnitude, solvers may make poor pivoting decisions.

Example: One constraint is 0.0001x + 0.0002y ≤ 1, another is 1000x + 2000y ≤ 5000. The solver might misinterpret the importance of each constraint due to the scale difference. A simple fix is to manually scale the constraints to be within the desired range (10⁻⁴ to 10⁶) by multiplying or dividing both sides of the inequality: 0.0001x + 0.0002y ≤ 1 becomes x + 2y ≤ 10,000, and 1000x + 2000y ≤ 5000 becomes x + 2y ≤ 5.
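
As a sanity check, here is a small sketch with SciPy's linprog (HiGHS under the hood) showing that the scaled and unscaled rows give the same optimum; the objective (maximize x + y) is my own addition for illustration:

# Row scaling changes the numbers the solver sees, not the feasible region or the optimum.
from scipy.optimize import linprog

c = [-1.0, -1.0]                      # maximize x + y  ->  minimize -(x + y)

A_raw = [[0.0001, 0.0002],            # coefficients spanning roughly seven orders of magnitude
         [1000.0, 2000.0]]
b_raw = [1.0, 5000.0]

A_scaled = [[1.0, 2.0],               # row 1 multiplied by 10,000
            [1.0, 2.0]]               # row 2 divided by 1,000
b_scaled = [10000.0, 5.0]

for name, A, b in (("unscaled", A_raw, b_raw), ("scaled", A_scaled, b_scaled)):
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)], method="highs")
    print(name, res.x, -res.fun)      # same solution either way; the scaled model is kinder to the solver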


5. Infeasibility and Unboundedness Confusion

Sometimes, numerical noise makes it hard to tell if a model is truly infeasible or unbounded.

Example: Nearly parallel constraints might appear to intersect due to rounding, leading the solver to falsely report feasibility. These may be difficult to spot. As always, if you start experiencing infeasibility or difficulties solving, consider backtracking by removing constraints and then adding them back one at a time to see which one(s) cause(s) problems.


How to Fix or Avoid These Issues

Here are some practical tips:

  • Normalize Your Model: Keep coefficients within a similar range. Avoid extreme values.
  • Validate Results: After solving, double-check constraint satisfaction and objective values—especially in critical applications.
  • Use Solver Preprocessing: Most solvers offer automatic scaling and presolve routines. However, they can't necessarily fix every scaling problem.
  • Adjust Tolerances: Unless you know what you're doing, you are better off leaving default tolerances alone. However, sometimes fine-tuning the feasibility and optimality tolerances can reduce the impact of rounding errors. Consult your solver's online help and documentation.

Final Thoughts

Numerical difficulties in LP models are often overlooked but can have serious consequences. By understanding the root causes and applying smart modeling practices, you can build more robust, reliable optimization models.

Have you run into any of these issues in your own work? Let’s talk about it in the comments!


Monday, February 17, 2025

Constraints and Feasibility

All Constraints Are Equal...
       But Some Are More Equal Than Others

Apologies to George Orwell, but the way some people view constraints puzzles me. To an LP solver, all constraints are equally important in that a feasible solution must respect all of them simultaneously. However, as modelers we must recognize that in terms of real-world significance, all constraints are NOT equally important. 

Woodstock Constraints Are More Complicated Than Meets the Eye

Consider a simple Woodstock constraint with a 50-period planning horizon:

oaHABITAT >= 5000 1.._LENGTH

This statement represents not a single constraint but 50 of them with 50 accounting variables:

oaHABITAT[1] >= 5000
oaHABITAT[2] >= 5000
oaHABITAT[3] >= 5000

         ...

oaHABITAT[48] >= 5000
oaHABITAT[49] >= 5000
oaHABITAT[50] >= 5000

What we see as oaHABITAT in Woodstock is really an array of accounting variables, indexed by planning period. For each accounting variable, there is a target value to achieve and a planning period in which to do it. Two questions come to mind for just this single output: how important is the target value, and are later planning periods as important as earlier ones? You should also be considering the other constraints in your model in the same fashion.

How To Determine Relative Importance

Relative importance should be fairly easy to discern:

  1. Legal requirements
  2. Contractual requirements
  3. Policy requirements
  4. Financial requirements
  5. Desirable outcomes

If you are required by law to maintain some condition and you can reasonably model it, these are the constraints that you should verify first. Next, you may have some contractual requirements (say, a volume of wood products) that necessarily must be met. Organizational policy requirements should come next, followed by any financial obligations your organization may have. Finally, constraints that represent desirable rather than necessary outcomes should come last. If you formulate a model with all the constraints in place and it solves, great! But if it doesn't solve, then you need to determine where the problem is, by going back to your hierarchy and running scenarios! Don't just slap goals on everything and call it good!

Mitigating Infeasibility

Last time, I suggested that a goal formulation could be used to determine the minimum shortfalls in potential habitat. For example, in this case we could use:

*OBJECTIVE
_MIN _PENALTY(_ALL) 1.._LENGTH
*CONSTRAINTS
oaHABITAT >= 5000 1.._LENGTH _GOAL(G1,1)

If the solution meets or exceeds 5000, the goal penalty is zero; otherwise, the penalty is 1 per hectare of shortfall. If we graph oaHABITAT over the planning horizon, we see this outcome:

Significant Shortfalls in Early Periods with Reversal

We know there is no solution where we can reduce the total shortfall area across the planning horizon. However, this is a situation where the timing of the shortfall matters as much as the amount. By Period 10, the shortfall is gone, but it returns in Period 16 for four periods. Perhaps it would be better to have more shortfall early on and then no shortfalls at all. How could we effect that outcome?

We use discounting in financial models to reflect the time value of money: revenues in the future are worth less to us now than current revenues. If we turn that logic on its head, we could use lower penalties early on and higher penalties later to show that shortfalls in the future are more problematic. Unfortunately, Woodstock doesn't offer an easy way to automatically increase goal weights, but we can write constraints directly using a FOREACH loop:

FOREACH xx in (1..50)
oaHABITAT >= 5000 xx _GOAL(Gxx,xx)
ENDFOR

We have to write explicit planning periods, goal_IDs and weights, but luckily, we can just use an increasing series of numbers from 1 to 50. Now, every subsequent period increases the penalty on shortfalls. While the total amount of shortfall may increase, the period of shortfall should be as short as possible:

Higher Shortfalls Early, No Reversals

The second scenario (Series 2) increases the total shortfall over the planning horizon from 12,845 to 12,857, but it does so without reversals:

A Tradeoff Between Total Shortfalls and Reversals

Supposing that habitat is a legal requirement, and we know of a strategy to eliminate shortfalls within 10 planning periods, we should adjust the constraints to reflect these achievable habitat values and remove the goal formulation. Any additional constraints will have to maintain these habitat levels.

Summary

In this post we considered the relative importance of different kinds of constraints as well as the importance of timing of outcomes. Rather than relying on blanket goal formulations and default weights, we applied some logic to the situation to devise an improved solution. Unfortunately, not every situation will work to your favor. It may turn out that increasing weights over time has no effect on shortfalls, but at least by trying you know this to be true.

Contact Me!

If you'd like some help with a tricky formulation, or you'd like to engage in a custom training for your team to improve their analytical skills, give me a shout. And again, if there's something you'd like me to cover in a future blog post, let me know!

Thursday, February 6, 2025

Let's Talk About Infeasibility, Shall We?

I Think Many of You Are Dealing with It Improperly

In my work as a consultant, I'm often presented with models that aren't working as the client intended. That means I get to see how the model has been formulated and how the analysis has been done. And frankly, it's a bit shocking sometimes what passes for a preferred alternative among a bunch of scenarios. Clearly, some of you don't really understand infeasibility and the proper way to deal with it.

What Do Feasibility and Infeasibility Mean?

In every constrained linear optimization model (i.e., LP, MIP), there must be a set of constraints in place that define a convex feasible region. Convexity implies that there are no separate local optima: any local optimum is also the global optimum. In the following example, the blue lines represent a convex feasible region for a minimization problem because there is only one minimum point (global optimum). The orange region is not convex because there are two local optima visible:
Feasible Regions for 2 Minimization Problems

If we have a convex set of constraints (red lines and direction of inequality) and an objective function (dashed line), we can search the corner points of the feasible region (pale green) and locate the minimum corner point, which is the global optimum. We have a feasible and optimal solution.

Locating the optimal solution

Suppose we add another constraint. Now, there is no feasible region because there is no overlapping area where the constraints can be met simultaneously. The example is patently infeasible:

Patently Infeasible Problem

If you look at the graphic, you might be wondering why someone would propose a constraint that so obviously cannot be met. Good question. However, some of you are doing this all the time. Years ago, I worked on a project for the US Forest Service. They had 9 structural stage conditions that they wanted constrained to be in fixed proportions of the forest. Think about that - how do you maintain a static structure in a dynamic forest? The youngest stands grow out of earlier structural stages into later ones. The only way to replace them is through harvesting. Alas, the earliest structural stage lasts 20 years, but it takes more than 120 years to reach the latest structural stage. You don't need an LP to determine that the desired outcome is patently infeasible.

Ah, But What About Goals?

Yes, goal formulations are very useful when trying to mitigate infeasibility, but you have to use them correctly. Many of you are not.

How Do Goals Work?

Let's say you have a constraint requiring 5000 acres of some type of habitat:

oaCaribou >= 5000 1.._LENGTH

If you know the characteristics of the habitat, you should be able to determine if at least 5000 acres currently exists in the forest. If it doesn't, your constraint is patently infeasible. The more interesting question is, can we improve the situation over time? If so, we need to know by how much and how long will it take? Finally, once we achieve the desired acreage, can we sustain it? 

From the models I have reviewed over the years, I know at least some of you are addressing this problem by applying goals. A goal applied to a constraint simply introduces slack and/or surplus variables to eliminate the infeasibility. For our example, the habitat constraint becomes this:

oaCaribou + Slack = 5000 1.._LENGTH

If oaCaribou is less than 5000 in any period, the variable Slack takes on the value of the shortfall. But since Slack is a free variable (not tied to land or activities), it could take on any value between zero and 5000. In other words, the model could harvest qualifying habitat and simply make Slack larger. To avoid this outcome, we need to penalize the use of Slack so that it is used only when there is no alternative. Said another way, we want the solver to enforce the constraint unless it is impossible, and then only to the degree required. If you think about it, this is a minimization problem. The correct way to deal with the goal is to minimize Slack, subject to the constraint.

*OBJECTIVE
_MIN Slack 1.._LENGTH
*CONSTRAINTS
oaCaribou + Slack = 5000 1.._LENGTH

If we solve this LP, we will know the total acres of shortfall across the entire planning horizon (objective function), as well as the shortfalls in each planning period (Slack). Now that you know where the shortfalls are, you can adjust the RHS values for each period down to where the constraint is just feasible. You know you can't improve the objective function which means the sum of the shortfalls cannot be improved. How do we know this? If there were a way to reduce the sum of shortfalls, then the solver would have found it. Now, there may well be alternative optimal solutions where the shortfalls are different in different planning periods, but the sum across all periods will never be less. We will return to this idea in a future blog post.
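
For readers who want to see the mechanics outside of Woodstock, here is a rough PuLP sketch of the same minimize-the-slack idea. The per-period habitat limits are invented stand-ins for what a forest could actually supply; only the structure matters:

# Goal formulation: minimize total shortfall against a 5000-acre habitat target.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum

periods = range(1, 11)
target = 5000
max_habitat = {t: 3000 + 400 * t for t in periods}   # hypothetical achievable habitat by period

prob = LpProblem("min_shortfall", LpMinimize)
habitat = {t: LpVariable(f"habitat_{t}", lowBound=0, upBound=max_habitat[t]) for t in periods}
slack   = {t: LpVariable(f"slack_{t}", lowBound=0) for t in periods}

prob += lpSum(slack[t] for t in periods)              # _MIN Slack 1.._LENGTH
for t in periods:
    prob += habitat[t] + slack[t] == target           # oaCaribou + Slack = 5000

prob.solve()
for t in periods:
    print(t, habitat[t].value(), slack[t].value())    # shortfalls show up in the early periods

The period-by-period slack values are exactly the numbers you would use to reset the RHS to where the constraint is just feasible.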

So Why Do So Many of You Leave Goals in Place?

I've reviewed models that are supposed to be preferred alternatives from a bunch of scenarios that were run, and they look like this:

*OBJECTIVE
_MAX oqTotal - _PENALTY(_ALL) 1.._LENGTH
*CONSTRAINTS
oaPL_OG - 0.045 * oaPL >= 0 1.._LENGTH _GOAL(G1,9999)
oaSB_OG - 0.045 * oaSB >= 0 1.._LENGTH _GOAL(G2,9999)
oaSW_OG - 0.045 * oaSW >= 0 1.._LENGTH _GOAL(G3,9999)
oaMX_OG - 0.045 * oaMX >= 0 1.._LENGTH _GOAL(G4,9999)
oaOC_OG - 0.045 * oaOC >= 0 1.._LENGTH _GOAL(G5,9999)
oaHW_OG - 0.045 * oaHW >= 0 1.._LENGTH _GOAL(G6,9999)

First off, there's our shining example of a patently infeasible problem, made that much worse by embedding the penalty function for the goals in the objective function, and by using arbitrarily large goal weights. The use of goals implies that the target is generally achievable, but sometimes you miss. How is that working out? Let's look at the goal summary that Woodstock produces as part of the SCHEDULE section:

; Constraint: oaMX_OG - 0.045 * oaMX >= 0 1.._LENGTH
; Period    Period    Above    Below    
     5         5               1687.7        ; C1R11
     6         6               1687.7        ; C1R12
     7         7               1687.7        ; C1R13
     8         8               1684.3        ; C1R14
     9         9               1463.2        ; C1R15
    10        10                501.6        ; C1R16
    11        11                463.8        ; C1R17
    12        12               1687.7        ; C1R18
    13        13               1687.7        ; C1R19
    14        14               1687.7        ; C1R20
; Constraint: oaSW_OG - 0.045 * oaSW >= 0 1.._LENGTH
; Period    Period    Above    Below    
     6         6               8412.8        ; C9R6
     .         .                   .         .   .
     .         .                   .         .   .
     .         .                   .         .   .
    20        20               8412.8        ; C9R20   

Clearly, the constraints are only working for the first few periods before shortfalls appear. In the case of oaSW_OG, you are always 8412.8 ha short after period 5. So why not adjust the RHS values and accept that your policy objective is set too high?

And what about the objective function value, which is supposed to be the maximized total harvested volume? Well, that works out to be -3.38583928E11, or negative 338.5 billion. Is that in cubic meters? No, because it includes the sum of penalties across all the constraints, some of which are measured in hectares and scaled by an arbitrary weight of 9999. The objective function is meaningless other than to show that the sum of penalties completely swamps any actual harvest volume.

Please Stop Abusing _GOAL

If you work for a public lands agency and your policy directives are never achievable, why is your agency keeping them in place? Did you not do any analysis ahead of time to determine if such policies were feasible? If you are approving management plans based on model solutions that still incorporate goals, why are you doing that? If you work for a consultant or timber investment management organization, you probably have fewer policy directives to consider. Still, you need to be honest about your assumptions. If you apply goals to constraints on minimum product harvest volumes or cash-flow positivity constraints, you're simply masking the fact that you're violating these constraints. And if every constraint in the model has goals applied to it, do you really know if any of them would be feasible absent the goals? If you're answering "I don't know" or simply "no" to any of these questions, I'd suggest you're not doing your job as well as you should. If constraints truly are infeasible, you should be honest about it and your model should reflect the true state of things. Otherwise, someone like me will call BS and do the "emperor has no clothes" thing. We will continue this discussion about constraints and infeasibility next time.

Contact Me! 

If you would be interested in a training session on how to properly do harvest scheduling analysis using goals and other techniques, let me know. And if you have any comments about this blog post, or ideas about future blog posts, add them to the comments section.

Thursday, December 5, 2024

Why are MIP models difficult to solve (or not)?

Introduction

I recently joined a conversation about why a mixed-integer programming (MIP) problem is so much harder to solve than a regular linear programming (LP) problem. Clearly, it is more problematic than just rounding the fractional quantities up or down to the nearest integer. However, it isn't just the variability in solution time that many find confounding - the solutions themselves are often counter-intuitive. To be honest, the most valuable solutions from modeling are the counter-intuitive ones because it means you've learned something new and unexpected from your model. And to me, that's the whole point of modeling.

MIP models do not have to be large to be difficult to solve. In fact, some of the most difficult MIP problems known are trivially small. For this discussion, however, let's stick to forestry and concepts we are familiar with.

Districting

8 Districts

Suppose I have 8 districts in my forest, and I want to know if activity is occurring in each district. I want my model to report that using a binary variable, where 0 = no activity, and 1 = activity. The problem is fairly easy to formulate as a MIP: set up binary variables that each represent harvest in a district. 
*VARIABLE
  vbCut(D1) _BINARYARRAY
  vbCut(D2) _BINARYARRAY
           .
           .
           .
  vbCut(D8) _BINARYARRAY

To make the binary variable useful, I need to relate it to an output in the model (harvest area). I need a constraint that will force the binary variable to equal 1 if I harvest any amount of area in the district. In the general case, I write the constraint as:

oaHARV(xx) - bigM * vbCut(xx) <= 0 1.._LENGTH

In this case, I use a theme-based output where district is one of the themes. The bigM constant has to be greater than any amount of harvest area in a planning period. I don't know the amount of harvest in any period, but I do know that if I harvested every acre in a district, bigM would have to be at least that large. In reality, since the districts are different sizes, I can set bigM slightly bigger than the largest district:

FOREACH xx IN (_TH11)
oaHARV(xx) - 15000 * vbCut(xx) <= 0 1.._LENGTH
ENDFOR

By itself, this doesn't do very much. The binary variables could take on values of 0 or 1 and it wouldn't matter. I need an additional constraint to ensure that binary variables only take on values of 1 when harvesting occurs in the district.

vbCut(*) <= 8 1.._LENGTH

Now, the model will correctly report the districts being operated in each period. There is no effect on the solution because the same choices exist as in the LP relaxation (binary requirements relaxed). Interestingly, the solver took longer to prove optimality for the LP than the MIP.

; LP relaxation
; Elapsed time for solver 0.36s
; GNU reported :
;
; Status:     OPTIMAL
; Objective:  OBJ1MIN = -309600006.1 (MINimum)
; Elapsed time for solver 0.23 s
; GNU reported :
;
; Status:     INTEGER OPTIMAL
; Objective:  OBJ1MIN = -309600006.1 (MINimum)
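
For anyone who wants to poke at this structure outside Woodstock, here is a toy PuLP version of the linking and counting constraints. The district areas, the Big-M value and the objective are invented purely for illustration:

# Toy districting model: a binary "open" flag per district, linked to harvested acres.
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary

acres = dict(zip((f"D{i}" for i in range(1, 9)),
                 (12000, 9500, 14000, 7200, 11000, 8300, 13500, 6400)))
big_m = 15000                                   # just larger than the largest district
max_open = 4                                    # the RHS we change between scenarios

prob = LpProblem("districts", LpMaximize)
harv = {d: LpVariable(f"harv_{d}", lowBound=0, upBound=acres[d]) for d in acres}
cut  = {d: LpVariable(f"cut_{d}", cat=LpBinary) for d in acres}

prob += lpSum(harv.values())                    # stand-in objective: maximize harvested acres
for d in acres:
    prob += harv[d] - big_m * cut[d] <= 0       # oaHARV(xx) - bigM * vbCut(xx) <= 0
prob += lpSum(cut.values()) <= max_open         # vbCut(*) <= RHS

prob.solve()
print({d: (harv[d].value(), cut[d].value()) for d in acres})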

No Additional Constraints, Just Change RHS

When you change the RHS values in a regular LP model, there usually isn't a big difference in solution time. For example, if you limit the total acres of commercial thinning to 500 acres and then solve it with a limit of 450, you don't see a big difference in solution time. In a MIP formulation, such a change can effect a huge change in solution times. In the following examples, I reduce the number of open districts in each period from (vbCut(*) <= 8) to 7, 6, 4, 2 and finally 1 district per period, just by changing the RHS value. All were solved with the HiGHS solver to a 0.1% gap:



Notice that the constraint on open districts really only starts to bite when 3 or more districts become unavailable. Let's look at which districts are open in the first 10 planning periods for the 6, 4 and 2 districts scenarios:


Not every district was accessed in the 4-district scenario. Presumably, enough harvest volume was available in period 3 that only 3 districts were opened. So, what do you think happened in the 1-district scenario?


Were you anticipating that some districts would never be opened? Clearly, the even-flow constraint became limiting when only 1 district could be opened at a time. Opening the magenta or red districts would make the objective function value worse than leaving them unharvested. Even though the harvest choices within districts are continuous variables, it isn't as simple as choosing the largest or smallest districts to balance the harvest flows. (Neither of the districts left unharvested was the smallest or the largest district.)

Additional Constraints

Suppose that in addition to limiting the number of open districts, there was also a policy requirement that you cannot enter the same district in the next period. This type of requirement is common in places like Canada (caribou ranges), but I also used it in a project in California. Basically, I need to prevent temporally adjacent binary variables from summing to more than 1:

FOREACH xx IN (_TH11)
vbCut(xx) + vbCut(xx)[-1] <= 1 2.._LENGTH
ENDFOR

If vbCut(xx) = 1 in period 2, then vbCut(xx) must be 0 in periods 1 and 3; it can be 1 again in period 4 (or remain 0). This pattern continues through the last planning period. Thus, we have our policy requirement.
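
Here is the same no-consecutive-entry idea as a tiny PuLP sketch for a single district. The objective (enter as often as allowed) is a stand-in, purely for illustration:

# No entries in consecutive periods: adjacent binaries cannot both be 1.
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary

T = 10
prob = LpProblem("sequential_entry", LpMaximize)
cut = {t: LpVariable(f"cut_{t}", cat=LpBinary) for t in range(1, T + 1)}

prob += lpSum(cut.values())                     # stand-in objective: enter as often as allowed
for t in range(2, T + 1):
    prob += cut[t] + cut[t - 1] <= 1            # vbCut(xx) + vbCut(xx)[-1] <= 1

prob.solve()
print([int(cut[t].value()) for t in range(1, T + 1)])   # e.g. 1,0,1,0,... never two in a row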

If we apply this constraint to the 8, 4 and 2 districts scenarios, we get the following results:

Adding the sequential access constraints to the scenarios dramatically increased solution times, but on a percentage basis, the biggest impact was on the easiest scenario to solve and the smallest on the most difficult. In terms of the sequence of entries into districts, we observe these results:


8-District Scenario


4-District Scenario


2-District Scenario

You'll note that the constraints would have no impact on the single district scenario, since it never exhibited any sequential access to districts originally. Also, note that in the original 2-district scenario, there was only one instance where a district was entered sequentially. However, when you compare district entries from that scenario with the no-sequential entry scenario, there are multiple changes to assigned planning periods:


> Sequential OK    < Sequential Not OK

Discussion

I hope this post has been successful in dispelling any myths about MIP vs LP. With MIP, we see that:
  • It isn't just "LP with a few integer/binary variables". It is a different beast.
  • A problem can be made much more difficult to solve by just changing a RHS value.
  • Adding constraints can increase solution time by a great deal without greatly affecting the objective function value.
  • Even small problems can be hard.
  • Integer solutions can be quite counter-intuitive.

Contact Me!

If you'd like to explore all-or-nothing decisions in your planning models, give me a shout so we can discuss it further. It may require that you invest in a better solver, but it will certainly test your understanding of outputs and constraints.
