Monday, November 11, 2024

Improve Model Efficiency - Part 5

Introduction

Previously, one of my blog posts was about how the outputs you create can cause performance issues in a Woodstock model. Specifically, the use of inventory-based outputs can seriously degrade matrix generation time because of the overhead they incur. Often, these outputs can be avoided through the use of REGIMES. 

I also recently wrote a blog post on conceptual models. This time, I want to continue that thread by going even more basic: let's talk about conceptual models and the questions a model is supposed to answer. A poorly conceived model is doomed to be slow from the outset.

What Kind of Model is it?

In the hierarchical planning framework, forest planning models can be strategic, tactical or operational. Strategic planning models are focused on the concepts of sustainability, capital planning, silviculture investment, volume allocation and return on investment. The questions they answer are of the how, what and when variety, such as "when should we harvest?", "what stand should we harvest?" and "how should we manage these stands (thin or not)?", etc. Because it is strategic, many details of management are omitted or unavailable. And yet, clients often make choices that pretend to provide detail or certainty where none is warranted. By focusing on the questions at hand, you can pare back the conceptual model and in turn, improve model performance.

A Bad Conceptual Model

Years ago, a client approached me and said that he was told by a university professor (who shall remain nameless) that his stand-based Woodstock model was too large to solve because it relied on a Model II formulation. The professor said that a Model I formulation would be a better choice because it would not suffer the excessive size problem. Of course, this was nonsense! I explained that a properly formulated model with the same yield tables, actions and timing choices would produce the same optimal solution, regardless of the formulation. Depending on the solver, a Model II formulation could be slower due to the higher number of transfer rows relative to a Model I formulation. Regardless, the problem had to be due to the way the client defined his actions.
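To see why formulation alone doesn't make a model small, here is a toy count for a single stand. All of the numbers are illustrative assumptions (40 five-year periods, an assumed 40-year minimum rotation), not the client's model, and the counting rules are simplified versions of the Model I / Model II definitions from forest planning theory:

```python
def model1_columns(T, r):
    """Count Model I columns for one stand: each column is a full-horizon
    prescription, i.e. a set of harvest periods in 1..T with consecutive
    harvests at least r periods apart (the no-harvest schedule included)."""
    g = [1] * (T + 1)                      # g[0] = 1: only the empty schedule
    for t in range(1, T + 1):
        # either period t is unharvested, or it is harvested and any
        # earlier harvest falls in 1..t-r
        g[t] = g[t - 1] + (g[t - r] if t >= r else 1)
    return g[T]

def model2_variables(T, r):
    """Count Model II harvest variables for the same stand: one per
    (regeneration period i, harvest period j) pair with j - i >= r."""
    return sum(1 for i in range(T) for j in range(i + r, T + 1))

T, r = 40, 8   # 40 five-year periods, 40-year minimum rotation (assumed)
print(model1_columns(T, r))    # prescription count grows combinatorially
print(model2_variables(T, r))  # transfer-variable count grows polynomially
```

The point of the sketch: Model I columns per stand grow combinatorially with the number of timing choices, while Model II variables grow only polynomially, so "switch to Model I" is certainly not a size cure. What Model II adds instead is transfer rows, which is the trade-off mentioned above.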

Timing Choices

It didn't take very long to find the culprits. First, I checked the ACTIONS section:

ACTIONS
*ACTION aCC Y Clearcut
*OPERABLE aCC
.MASK() _AGE >= 40

Next, I checked the CONTROL and LIFESPAN sections:

CONTROL
*LENGTH 40 ; 5-year periods

LIFESPAN
.MASK() 500

Nothing ever died in this model, because the oldest permitted age class would be 2,500 years (500 periods of 5 years each)! Every existing development type (DevType) older than 40 years generated a decision variable in every planning period, even though the existing yield tables stopped at age 150. The result was a mass of decision variables representing very old stands, all with identical volume coefficients. Even if the model could be solved, these variables represent poor choices and would never be chosen.

Transitions

Because this was a stand-based model, I checked the TRANSITIONS section. Sure enough, they retained the StandID after the clearcut. 

TRANSITIONS
*CASE aCC
*SOURCE .MASK(_TH10(EX))
*TARGET .MASK(_TH10(RG)) 100

This prevented Woodstock from pooling acres into one of the regen DevTypes represented by a total of 12 (yes, twelve!) yield tables. Instead of ending up with 12 DevTypes, each containing multiple age classes, they ended up with thousands upon thousands of DevTypes that overwhelmed the matrix generator. The client asked about *COMPRESSTIME. I said it could eliminate some decision variables for each DevType, but the real problem was the excessive number of development types. By replacing the StandID theme with a generic ID (000000) associated with the regen yield tables, the combinatorial explosion of DevTypes was averted.

ACTIONS 
*ACTION aCC Y Clearcut
*OPERABLE aCC
.MASK() _AGE >= 40 AND _AGE <= 150

TRANSITIONS
*CASE aCC
*SOURCE .MASK(_TH10(EX))
*TARGET .MASK(_TH10(RG),_TH12(000000)) 100

The revised model ran in minutes and was trivial to solve. There was no need for *COMPRESSTIME.
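The pooling effect is easy to see with a toy inventory. The stand count below (5,000) is a hypothetical figure for illustration; only the 12 regen yield tables come from the story above:

```python
# Hypothetical inventory: 5,000 stands, each mapped to one of 12 regen yield tables.
stands = [(f"S{i:05d}", f"RG{i % 12:02d}") for i in range(5000)]

# Transition target retains the StandID theme: one regen DevType per stand.
retained = {(stand_id, table) for stand_id, table in stands}

# Transition target uses the generic ID 000000: DevTypes collapse onto the tables.
pooled = {("000000", table) for _, table in stands}

print(len(retained), len(pooled))   # one DevType per stand vs one per yield table
```

Same acres, same yields; the only difference is whether the transition target carries a theme value that makes every stand unique after regeneration.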

What About the Model I Issue?

The reason the professor's Model I model ran and the Woodstock model didn't had nothing to do with model formulation. The professor's model excluded many choices because its prescriptions were enumerated by hand. So even though the yield tables and DevTypes were the same in both models, the choices represented in the two models were different. Once the changes were implemented and the Woodstock model ran, it produced a slightly better solution: not because Model II is a better formulation, but because it contained choices that the Model I model lacked. That is yet another reason to avoid (when possible) software switches like *COMPRESSTIME that blindly omit every 2nd, 3rd, 4th,... choice.
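A toy example (illustrative numbers only, and a deliberate simplification of what *COMPRESSTIME actually does) shows how blindly thinning timing choices can discard the best one:

```python
# Illustrative per-period harvest values for one development type.
values = {1: 10, 2: 21, 3: 14, 4: 18, 5: 15}

best_full = max(values, key=values.get)                     # all timing choices kept
thinned = {p: v for p, v in values.items() if p % 2 == 1}   # drop every 2nd choice
best_thinned = max(thinned, key=thinned.get)

print(best_full, values[best_full])       # optimum with the full choice set
print(best_thinned, thinned[best_thinned])  # optimum after thinning: strictly worse here
```

If the true optimum happens to fall on a dropped period (period 2 here), the compressed model can only pick a worse timing. Fixing the conceptual model, so every choice is a meaningful one, beats mechanically deleting choices.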

Model Results Got You Stumped? Contact Me!

If you have a slow, cumbersome planning model, give me a call. I can review your model and make some suggestions for improvement. If the changes are extensive, we can design a training session around them. No one has more years of Woodstock experience.

