Tuesday, August 13, 2024

Benchmarking - A Method for Determining Appropriate Constraint Bounds

Introduction

Whenever I train new analysts in harvest scheduling and optimization, I make a point of covering the basic assumptions of linear programming. The beauty of LP is that if a feasible solution exists (and the problem is bounded), the solver will return an optimal solution. One of the key points I make is that constraints will always cost you in terms of objective function value, except in those rare situations where the cost happens to be zero. In other words, you can't constrain your way to an improved objective function value. So, you would think that people confronted with difficult policy constraints would spend the time to determine reasonable bounds before plowing ahead with analysis or, dog forbid, employing goal formulations to avoid infeasibility. A recent project has taught me that I have more to teach.
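
To make that point concrete, here is a minimal sketch using a toy LP of my own (nothing to do with Woodstock): adding a constraint either leaves the optimal objective value unchanged or makes it worse, never better.

 # Toy illustration: an extra constraint cannot improve an LP optimum.
 # scipy.optimize.linprog minimizes, so the objective is negated to maximize.
 from scipy.optimize import linprog

 c = [-3.0, -2.0]                  # maximize 3x + 2y
 A_base = [[1.0, 1.0]]             # x + y <= 10
 b_base = [10.0]

 unconstrained = linprog(c, A_ub=A_base, b_ub=b_base, bounds=[(0, None)] * 2)

 # Same model with one extra "policy" constraint: x <= 4
 A_policy = A_base + [[1.0, 0.0]]
 b_policy = b_base + [4.0]
 constrained = linprog(c, A_ub=A_policy, b_ub=b_policy, bounds=[(0, None)] * 2)

 print(-unconstrained.fun)  # 30.0
 print(-constrained.fun)    # 24.0 -- the constraint "costs" 6 units of objective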

What Is Possible?

Generally, allocations to different management emphases are more typical of public land management than private, but I have seen allocations to carbon projects or conservation easements handled similarly. My point, however, is that you should know what your forest is capable of. For example, if you have identified 3000 acres of land suitable for old-growth designation, you should expect an infeasible solution if you constrain the model to allocate 4000 acres. That is patently (i.e., obviously) infeasible, yet I've had that situation arise more than once in my career. But what if you can't explicitly identify the qualifying stands, and you have more than one metric to consider (e.g., critical species, hydrological maturity, and so forth)? That is when you need to consider benchmarking.

Know Your Bounds

As a graduate student, I was very fortunate to attend a seminar given by Larry Davis from UC Berkeley where he introduced us to the concept of benchmarking. His opinion was that models can only go so far and that the final determination for a management plan was political rather than analytical. Models can only provide the boundaries within which a viable solution must exist. For example, suppose we have to consider timber production, late seral birds, and bird hunting opportunities. Co-production of these metrics is possible to a degree, but how much of each is possible? Davis suggested that three model runs, each maximizing one of the metrics, provide critical information. Clearly, production of each metric can't be increased beyond its respective optimized objective function value. However, the lowest value of each metric in the runs where it was not the objective provides a useful floor as well. Taking the highest and lowest values of the three metrics gives you a bounded space that can guide constraints in your analysis. For example, metric DIV never exceeds 4.5 and, at worst, never falls below 1.75.
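
A payoff table makes this concrete. Below is a toy sketch (the metric names follow the example above and the numbers are invented purely for illustration) of how the ceilings and floors fall out of the three benchmark runs:

 # Toy payoff table: rows = which metric was maximized in that run,
 # values = what each metric achieved in that run (made-up numbers).
 payoff = {
     "max TIMBER": {"TIMBER": 120.0, "BIRDS": 2.1,  "HUNT": 0.8},
     "max BIRDS":  {"TIMBER":  65.0, "BIRDS": 4.5,  "HUNT": 1.2},
     "max HUNT":   {"TIMBER":  80.0, "BIRDS": 1.75, "HUNT": 3.0},
 }

 for metric in ["TIMBER", "BIRDS", "HUNT"]:
     values = [run[metric] for run in payoff.values()]
     ceiling = max(values)   # comes from the run where this metric was the objective
     floor = min(values)     # worst outcome even in runs that ignored it
     print(f"{metric}: feasible range roughly [{floor}, {ceiling}]")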

[Charts: DIV1 - MAXMIN Objective | DEN2 - MAXMIN Objective | WFH3 - MAXMIN Objective]

Depending on the number of metrics you need to consider, a benchmark exercise may require a lot of matrix generations and solution time. Luckily, there is a way to speed things up by reducing the number of matrix generations.

Smart Benchmarking

In a benchmarking exercise, the idea is to find the biological capacity of the forest to produce the output of interest. If constraints are necessary (e.g., budgetary or strict policy constraints), they are imposed regardless of the objective function. And if the constraints are always the same, you can declare multiple objective functions and generate the matrix exactly once.

In the following example, I have 9 biodiversity indices to consider. I used a MAXMIN formulation because I thought it important to avoid solutions where the worst outcomes for each metric are crammed into one or two periods so that the rest of the periods appear better. (A more technical discussion of this phenomenon is presented here.) The only constraints are the need to cover any management costs from timber revenues (cash-flow positivity) and a requirement to reforest any backlog. 

*OBJECTIVE
 _MAXMIN oidxDIV 1.._LENGTH
 _MAXMIN oidxDEN 1.._LENGTH
 _MAXMIN oidxWFH 1.._LENGTH
 _MAXMIN oidxBEE 1.._LENGTH
 _MAXMIN oidxESB 1.._LENGTH
 _MAXMIN oidxLSB 1.._LENGTH
 _MAXMIN oidxRTV 1.._LENGTH
 _MAXMIN oidxAMP 1.._LENGTH
 _MAXMIN oidxUNG 1.._LENGTH
*CONSTRAINTS
 orHARV - ocHARV - ocSILV - ocFIXD >= 0 1.._LENGTH ;cash-flow +ve
 oaNULL <= 0 1 ;plant backlog
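
For anyone less familiar with the _MAXMIN keyword: each of those objective lines asks the solver to maximize the smallest periodic value of the named output across the planning horizon. In rough algebraic form (my notation, not Woodstock syntax), the first line is equivalent to

\[
\max \; Z \quad \text{subject to} \quad Z \le \mathrm{oidxDIV}_t, \qquad t = 1, \dots, \mathrm{LENGTH}.
\]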

If you use the MOSEK solver, Woodstock will present a dialog after matrix generation where you can choose which objective to optimize. Once the solution has been written, simply create a scenario from the base model to store the results. Go back to your base model, select from the menu Run | Run Model Bat file, and when the dialog appears, choose a different objective. Continue creating scenarios and rerunning the batch file until all the benchmarks have been run. Usually, I wait until all the solutions are obtained before executing the schedules because I can do that automatically using the Run | Run Scenarios menu option. Once your scenarios have all been executed, use the Scenario Compare feature to determine highest and lowest values for each metric.
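
If you would rather summarize the results outside of Woodstock, here is a rough pandas sketch. It assumes you have exported each scenario's output values to a single CSV with Scenario, Output, Period, and Value columns; that layout (and the file name) is my assumption for illustration, not a standard Woodstock export.

 # Summarize benchmark scenarios: for each output, the worst periodic value in
 # each run, then the floor and ceiling of that quantity across all runs.
 import pandas as pd

 df = pd.read_csv("benchmark_outputs.csv")   # hypothetical export file

 worst_period = df.groupby(["Scenario", "Output"])["Value"].min().reset_index()
 bounds = worst_period.groupby("Output")["Value"].agg(["min", "max"])
 print(bounds)   # 'max' = what each index can reach; 'min' = its floor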

Other solvers may not support multiple objective functions directly. In that case, you can edit the MPS file that Woodstock generated so that the objective you want is declared first in the ROWS section; for example, swap the IDs on the first two rows so that OBJ2MIN, rather than OBJ1MIN, appears on the first line:

NAME    RSPS 2023.2.1 (64 Bit)                              
ROWS
  N OBJ2MIN
  N OBJ1MIN
  N OBJ3MIN
  N OBJ4MIN
  N OBJ5MIN
  N OBJ6MIN
  N OBJ7MIN
  N OBJ8MIN
  N OBJ9MIN

Save the change but don't close the file. Use Run | Run Model Bat file as before and, when it completes, save the scenario. You will need to change the objective function between solutions, but the edit is fast and certainly quicker than generating a new matrix each time. Also, note that Woodstock limits you to 9 objective functions due to the naming convention of the MPS file.
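
If you find yourself repeating that edit many times, it can be scripted. Here is a minimal Python sketch (my own, not a Woodstock or MOSEK utility) that rewrites the ROWS declarations so that whichever objective you name is declared first; the file name and objective ID in the example are placeholders.

 # Move a chosen objective row to the top of the MPS ROWS section so it becomes
 # the active objective on the next solve, mirroring the manual edit above.
 def set_active_objective(mps_path: str, objective: str) -> None:
     with open(mps_path, "r") as f:
         lines = f.readlines()

     # Indices of the free (N) rows -- the OBJ1MIN..OBJ9MIN declarations.
     obj_idx = [i for i, line in enumerate(lines) if line.split()[:1] == ["N"]]
     if not obj_idx:
         raise ValueError("no objective (N) rows found in " + mps_path)

     rows = [lines[i] for i in obj_idx]
     chosen = [r for r in rows if r.split()[1] == objective]
     if not chosen:
         raise ValueError(objective + " is not one of the declared objectives")
     reordered = chosen + [r for r in rows if r.split()[1] != objective]

     # Write the reordered declarations back into their original positions.
     for i, row in zip(obj_idx, reordered):
         lines[i] = row
     with open(mps_path, "w") as f:
         f.writelines(lines)

 # Example with placeholder names: make benchmark 3 the next objective solved.
 set_active_objective("benchmarks.mps", "OBJ3MIN")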

Summary

This approach is useful for any analyst tasked with benchmarking, but it is particularly important for public agencies issuing policy requirements. Don't impose constraints without knowing whether they are technically feasible! You should be able to demonstrate that it can be done using real-world data.

Looking For More Efficiency Ideas? Contact Me!

Give me a shout if you'd like to learn more about benchmarking analyses or to discuss any other modeling difficulties you may be having. No one has more real-world experience with Woodstock than me.
