Correct Modeling Technique for Capacity Expansion #775
You're on the right track. Here's how I would write your current model:
using SDDP, HiGHS
T = 5  # Number of stages [years]
c_initial_capacity = 80
c_demand = [80, 120, 150, 180, 200]
c_expansion_cost = 50
c_operational_cost = 3
c_max_additional_capacity = 50
c_penalty = 60  # Cost of unmet demand
model = SDDP.LinearPolicyGraph(;
    stages = T,
    sense = :Min,
    lower_bound = 0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variables(sp, begin
        0 <= x_capacity <= 500, SDDP.State, (initial_value = c_initial_capacity)
        0 <= u_expansion <= c_max_additional_capacity
        u_production >= 0
        u_unmet >= 0
    end)
    @constraints(sp, begin
        x_capacity.out == x_capacity.in + u_expansion
        u_production + u_unmet == c_demand[t]
        u_production <= x_capacity.in
    end)
    @stageobjective(
        sp,
        c_operational_cost * u_production +
        c_expansion_cost * u_expansion +
        u_unmet * c_penalty,
    )
end
SDDP.train(model; iteration_limit = 100)
println("Optimal Cost: ", SDDP.calculate_bound(model))
simulations = SDDP.simulate(model, 1, [:x_capacity, :u_production, :u_expansion])
for (t, sp) in enumerate(simulations[1])
    println(
        "Year $t: Capacity_in = ",
        value(sp[:x_capacity].in),
        ", Expansion = ",
        value(sp[:u_expansion]),
        ", Capacity_out = ",
        value(sp[:x_capacity].out),
        ", Production = ",
        value(sp[:u_production]),
    )
end
Note that Andy is on sabbatical this year.
Who are you working with? I think you're going to need to give up some things:
I have some high level questions that you should consider before spending too much time on this problem. (These questions aren't meant to lead you to a particular conclusion. Some I am wary of; others I don't know the answer to.) Considering a deterministic model of operations for a single year:
Question 6 is very important. As a rule of thumb, you should expect the SDDP model with uncertainty to take 10^3-10^5 times longer to solve than the deterministic model. If the deterministic model solves in less than a second, the SDDP model will still likely take hours to solve. If it takes longer, stop. Now consider a stochastic model of operations for a single year:
Now consider how the investments are chained:
For (3), you will need https://sddp.dev/stable/guides/access_previous_variables/#Access-a-decision-from-N-stages-ago, which would be something like this:
using SDDP, HiGHS
c_T = 5              # Years
c_T_subperiods = 12  # Monthly subperiods per year
c_initial_capacity = 80
c_demand = [80, 120, 150, 180, 200]  # Annual demand
c_expansion_cost = 50
c_operational_cost = 3
c_max_additional_capacity = 50
c_penalty = 60  # Cost of unmet demand
model = SDDP.LinearPolicyGraph(;
    stages = c_T_subperiods * c_T,
    sense = :Min,
    lower_bound = 0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variables(sp, begin
        # Capacity of the battery
        0 <= x_capacity <= 500, SDDP.State, (initial_value = c_initial_capacity)
        # The pipeline of future investments. We can use x_expansion[1].in in
        # the current stage, and new investments get put in `x_expansion[end].out`.
        0 <= x_expansion[1:c_T_subperiods] <= c_max_additional_capacity, SDDP.State, (initial_value = 0)
        # Monthly production
        u_production >= 0
        # Monthly unmet demand
        u_unmet >= 0
    end)
    if mod(t, c_T_subperiods) != 0
        # New expansion can be committed only once a year, in the stages
        # where mod(t, c_T_subperiods) == 0.
        fix(x_expansion[end].out, 0; force = true)
    end
    @constraints(sp, begin
        x_capacity.out == x_capacity.in + x_expansion[1].in
        [i in 2:c_T_subperiods], x_expansion[i-1].out == x_expansion[i].in
        # Index the annual demand by the year that month t belongs to.
        u_production + u_unmet == c_demand[cld(t, c_T_subperiods)]
        u_production <= x_capacity.in
    end)
    @stageobjective(
        sp,
        c_operational_cost * u_production +
        c_expansion_cost * x_expansion[end].out +
        u_unmet * c_penalty,
    )
end
SDDP.train(model; iteration_limit = 100)
println("Optimal Cost: ", SDDP.calculate_bound(model))
simulations = SDDP.simulate(model, 1, [:x_capacity, :u_production, :x_expansion])
for (t, sp) in enumerate(simulations[1])
    println(
        "Month $t: Capacity_in = ",
        value(sp[:x_capacity].in),
        ", Expansion = ",
        value(sp[:x_expansion][end].out),
        ", Capacity_out = ",
        value(sp[:x_capacity].out),
        ", Production = ",
        value(sp[:u_production]),
    )
end
Dear @odow, I appreciate your taking the time to answer all my questions in detail!
Thanks for the warning. I copied it from an example but planned to use Gurobi for the extended model anyway.
Thanks for fixing the model notation. Since it was my first try, I didn't stick to the naming conventions, but I have adapted them now.
Great, thanks! I am now implementing the problem in hourly resolution. I will share the code below.
Thanks for the remark. The research stay seems complicated then, but I was considering it for spring or summer. I will contact Andy after making more progress with the model and being sure that I will proceed with it.
I am doing my PhD at the Energy Economics Group at the Technical University of Vienna, and my supervisors are Lukas Kranzl and Reinhard Haas. As the name suggests, we are not an OR group, which is why I am considering seeking external supervision through a research stay.
DH systems consist of three main parts: heat generation (supply), heat distribution (grid/network), and the buildings (demand). I wanted to keep the hourly resolution for the operational problem, as we do in LP/MILP models, since my focus here is how to decarbonise the DH supply of a city (especially the investment decisions). For instance, one of the heat supply options, large-scale heat pumps, has an efficiency that varies with the hourly temperature of the heat source it uses. However, I will discuss with my supervisor how critical it is to have hourly modelling for the whole year.
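For illustration, a toy sketch of how a temperature-dependent COP could enter a stage subproblem (all numbers are made up; only the constraint linking electricity input to heat output is the point):
using SDDP, HiGHS
c_cop = [2.8, 3.1, 3.5, 3.3, 2.9]               # assumed per-stage COP from source temperature
c_heat_demand = [40.0, 45.0, 50.0, 48.0, 42.0]  # assumed heat demand
model = SDDP.LinearPolicyGraph(;
    stages = 5,
    sense = :Min,
    lower_bound = 0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variables(sp, begin
        0 <= x_capacity <= 200, SDDP.State, (initial_value = 100)
        u_electricity >= 0  # electricity drawn by the heat pump
        u_heat >= 0         # heat delivered
        u_unmet >= 0
    end)
    @constraints(sp, begin
        x_capacity.out == x_capacity.in
        u_heat == c_cop[t] * u_electricity  # heat out = COP(t) * electricity in
        u_heat <= x_capacity.in
        u_heat + u_unmet == c_heat_demand[t]
    end)
    @stageobjective(sp, 30 * u_electricity + 60 * u_unmet)
end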
using SDDP, Gurobi
# Problem Data
T = 8761  # Total number of stages: 1 investment node + 8760 hourly nodes
c_initial_capacity = 80
c_demand = [80, 120, 150, 180, 200]
c_repeated_demand = repeat(c_demand, 1752)  # dummy demand vector of length 8760
c_expansion_cost = 50
c_operational_cost = 3
c_max_additional_capacity = 50
# Cost of unmet demand
c_penalty = 60
# SDDP Model
model = SDDP.LinearPolicyGraph(;
    stages = T,
    sense = :Min,
    lower_bound = 0,
    optimizer = Gurobi.Optimizer,
) do sp, t
    # Variables
    @variables(sp, begin
        # State variables
        0 <= x_capacity <= 500, SDDP.State, (initial_value = c_initial_capacity)
        # Control variables
        0 <= u_expansion <= c_max_additional_capacity
        u_production >= 0
        u_unmet >= 0
    end)
    # Constraints
    if t == 1
        @constraint(sp, x_capacity.out == x_capacity.in + u_expansion)
        @stageobjective(sp, c_expansion_cost * u_expansion)
    else
        @constraints(sp, begin
            x_capacity.out == x_capacity.in + u_expansion
            # Allow unmet demand with a penalty to ensure relatively complete recourse
            u_production + u_unmet == c_repeated_demand[t-1]
            u_production <= x_capacity.in
        end)
        # Stage-objective
        @stageobjective(
            sp,
            c_operational_cost * u_production +
            c_expansion_cost * u_expansion +
            u_unmet * c_penalty,
        )
    end
end
# Solve the model
SDDP.train(model; iteration_limit=100)
# Retrieve results
println("Optimal Cost: ", SDDP.calculate_bound(model))
# Simulation
simulations = SDDP.simulate(model, 1, [:x_capacity, :u_production, :u_expansion, :u_unmet])
for t in 1:T
    sp = simulations[1][t]
    println(
        "Stage $t: Capacity_in = ", value(sp[:x_capacity].in),
        ", Expansion = ", value(sp[:u_expansion]),
        ", Capacity_out = ", value(sp[:x_capacity].out),
        ", Production = ", value(sp[:u_production]),
    )
end
# Check whether there is any unmet demand
res = simulations[1]
sum(value(res[t][:u_unmet]) for t in 1:T)
If you mean the number of nodes, it's 8761 (1 investment, 8760 operational). Regarding decision variables and constraints: a full-scale model would have 5-6 heat generators, each requiring its own state and control variables plus expansion and capacity constraints. There would also be constraints defining the operation of specific technologies, such as the minimum power at which they can run (as a ratio of their installed capacity).
If you mean data for specifying future uncertainties, no, of course we do not have that. However, it is already common in the literature to define deterministic energy-price scenarios for the future and run them separately, so we can also justify our own assumptions.
I have never worked with representative days for DH, but I will check the literature. One advantage is that, as I explained above, I will only model the heat generation, so, unlike power system or hydropower models, we do not need to model connected reservoirs or transmission lines between zones. But I will definitely consider simplified approaches.
The only storage I will consider is thermal energy storage. My plan is to keep the electricity markets outside the system boundary and take the electricity prices as exogenous inputs to reduce complexity.
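A toy sketch of how I picture that (illustrative numbers; `c_price` stands in for the exogenous electricity price series, and the storage is just one more state variable carried between stages):
using SDDP, HiGHS
c_price = [30.0, 45.0, 60.0, 40.0, 35.0]  # assumed exogenous electricity prices
model = SDDP.LinearPolicyGraph(;
    stages = 5,
    sense = :Min,
    lower_bound = 0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variables(sp, begin
        # Thermal energy storage level carried between stages
        0 <= x_storage <= 100, SDDP.State, (initial_value = 0)
        u_charge >= 0     # heat charged into storage (from electricity)
        u_discharge >= 0  # heat discharged to meet demand
        u_unmet >= 0
    end)
    @constraints(sp, begin
        # Storage balance with an assumed 90% charging efficiency
        x_storage.out == x_storage.in + 0.9 * u_charge - u_discharge
        u_discharge + u_unmet == 50  # placeholder heat demand
    end)
    @stageobjective(sp, c_price[t] * u_charge + 60 * u_unmet)
end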
Normally, yes. I think we can compromise on that, but I need to think about it in detail.
It took around 9 mins with Gurobi and 10 mins with HiGHS to complete 100 iterations for the code I shared above. Here is the full output:
Does this mean 6 mins is already too long? In that case, should I focus directly on reducing the number of nodes through simplifications such as representative days/hours? Or should I think about switching to an LP model? Again, I appreciate your effort in answering my questions. I will answer the remaining questions about the uncertainties after thinking about them in more detail and trying to formulate the conceptual stochastic model. Thanks and best regards!
Which? I have been trying to remove all traces of GLPK because people keep using it!
No. I mean write a single JuMP model and measure the number of variables and constraints.
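For example, something like this (a toy single-year dispatch model with placeholder data; `num_variables` and `num_constraints` report the size):
using JuMP, HiGHS
H = 8760
c_demand = rand(80:200, H)  # placeholder hourly demand
model = Model(HiGHS.Optimizer)
@variable(model, 0 <= u_production[1:H] <= 500)
@variable(model, u_unmet[1:H] >= 0)
@constraint(model, [h in 1:H], u_production[h] + u_unmet[h] == c_demand[h])
@objective(model, Min, sum(3 * u_production[h] + 60 * u_unmet[h] for h in 1:H))
optimize!(model)
println("Variables:   ", num_variables(model))
println("Constraints: ", num_constraints(model; count_variable_in_set_constraints = false))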
I guess my point is: can a stochastic model of hourly resolution over 25 years tell you much if you are using deterministic energy prices?
I think you've run out of memory. It shouldn't take that long. Your computer has built 8761 different optimization problems! My suggestion would be to reduce the number of SDDP stages, but keep the hourly resolution within each stage. So model each stage as a week, and have the 168 hourly blocks within each stage. It means that the agent will have perfect foresight of the upcoming week, but that's not the end of the world.
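A minimal sketch of that structure (placeholder data; one SDDP stage per week, with the 168 hourly dispatch decisions as ordinary within-stage variables):
using SDDP, HiGHS
c_weeks = 52                      # one SDDP stage per week
c_hours = 168                     # hourly blocks within each stage
c_demand = rand(80:200, c_hours)  # placeholder hourly demand profile
model = SDDP.LinearPolicyGraph(;
    stages = c_weeks,
    sense = :Min,
    lower_bound = 0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variables(sp, begin
        0 <= x_capacity <= 500, SDDP.State, (initial_value = 80)
        # One dispatch decision per hour of the week, all inside this stage
        u_production[1:c_hours] >= 0
        u_unmet[1:c_hours] >= 0
    end)
    @constraints(sp, begin
        x_capacity.out == x_capacity.in
        [h in 1:c_hours], u_production[h] + u_unmet[h] == c_demand[h]
        [h in 1:c_hours], u_production[h] <= x_capacity.in
    end)
    @stageobjective(sp, sum(3 * u_production[h] + 60 * u_unmet[h] for h in 1:c_hours))
end
Only the state (here, capacity) links the weeks; the hourly decisions stay local to each stage, so the number of nodes stays small.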
Hey Oscar, Many thanks for your swift response!
I do not remember exactly which one, as I went through many examples, but I will let you know if I see it again.
Oh, ok, got it. This really depends on how much I can model with SDDP. If I do not have a problem with the computational time, I can model the heat generators in detail, probably also using integer/binary variables. But if computation time becomes a problem, I can simplify the modelling.
I fully agree with this. I just wanted to share that this is done in the literature; it is not what I want to do. I would like to have stochastic demand and stochastic (selected) energy carrier prices. After writing the previous answer, I had the idea of introducing demand uncertainty at the annual level as a starting point.
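As a starting point, I imagine something like this (a toy sketch with made-up numbers; `SDDP.parameterize` is what I found in the docs for stagewise-independent noise):
using SDDP, HiGHS
model = SDDP.LinearPolicyGraph(;
    stages = 5,
    sense = :Min,
    lower_bound = 0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variables(sp, begin
        0 <= x_capacity <= 500, SDDP.State, (initial_value = 80)
        u_production >= 0
        u_unmet >= 0
    end)
    @constraint(sp, x_capacity.out == x_capacity.in)
    @constraint(sp, u_production <= x_capacity.in)
    # The demand balance; its right-hand side is set by the sampled noise
    @constraint(sp, c_balance, u_production + u_unmet == 0)
    # Three annual demand realizations with assumed probabilities
    SDDP.parameterize(sp, [100.0, 150.0, 200.0], [0.3, 0.4, 0.3]) do ω
        set_normalized_rhs(c_balance, ω)
    end
    @stageobjective(sp, 3 * u_production + 60 * u_unmet)
end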
I did not check memory usage, but I ran the model on our Linux server with 250 GB of memory (200 GB is usually free). I will rerun it while monitoring memory usage and get back to you. But yes, I can imagine that; I always ran into memory problems when trying to solve inventory control problems with long lead times using the value iteration algorithm.
This sounds awesome! I just didn't see any examples of doing this. Can you please share one if you have any? Based on the examples I worked on, I thought every time step had to be a node. In my problem, we do not need uncertainty at the hourly level at all; I just wanted to solve the operational problem at an hourly resolution, not with uncertainty at that level. I can't thank you enough, especially for this last point! With all the help you provided, you can be a coauthor on my conference presentation if you wish. Best,
So, I have just rerun the last model code I shared above (the one with 8761 nodes and no stochasticity), and it again took around 9 mins to train with 100 iterations. This time, I tracked the memory usage, and there was always almost 200 GB of free memory on the server. I will look into the computational performance improvement part tomorrow to see if I can do anything.
Note that the bounds had converged after 9 iterations. So this took 30 seconds to solve. But I just don't think it is possible to meaningfully solve a problem with such a large number of stages. We could perhaps solve weekly stages over 25 years = 1300 nodes.
Hey @odow, Sorry for my late response. I am on vacation this week but still want to develop a basic stochastic model to present the concept at the conference, which will take place next week.
I fully agree with this. As I mentioned above, even yearly stages would work for now. I just couldn't find an example of how to implement it this way. I would appreciate it if you could share one if you are aware of any.
Regarding this point from your previous response: I already checked this paper implementing "load blocks": https://www.epoc.org.nz/papers/HolePhilpottDowsonv2.pdf Best,
For the yearly stages with a lag on investment, see: https://sddp.dev/stable/guides/access_previous_variables/#Access-a-decision-from-N-stages-ago
using SDDP, Gurobi
T = 25 # Time horizon [years]
lag = 5 # Time it takes to build new capacity [years].
model = SDDP.LinearPolicyGraph(;
    stages = T,
    sense = :Min,
    lower_bound = 0,
    optimizer = Gurobi.Optimizer,
) do sp, t
    @variables(sp, begin
        x_capacity >= 0, SDDP.State, (initial_value = 100)
        # x_pipeline[end].in is new capacity that comes online at the start of
        # this stage.
        # x_pipeline[1].out is new capacity that we decide to build during
        # this stage.
        0 <= x_pipeline[1:lag], SDDP.State, (initial_value = 0)
    end)
    @constraints(sp, begin
        x_capacity.out == x_capacity.in + x_pipeline[end].in
        [i in 2:lag], x_pipeline[i].out == x_pipeline[i-1].in
    end)
    if mod1(t, 5) != 1
        # New build decisions can be made only every 5 years
        fix(x_pipeline[1].out, 0; force = true)
    end
end
The JADE code is https://github.com/EPOC-NZ/JADE.jl, but it is probably too complicated to be useful for you.
Closing because this discussion seems to have come to a conclusion. Please comment if you have other questions and I can re-open.
Dear @odow,
I got to know about SDDP.jl thanks to a course by Prof. Andy Philpott. I am experimenting with SDDP.jl to create a stochastic capacity expansion model for district heating (DH), which I will use for my next PhD thesis paper. The idea is to model investments and operational optimization for multiple years using SDDP.jl.
The concept is to model investment decisions in 5-year time steps, with each capacity investment becoming available in the next period (5 years later). At the same time, I want to optimize the dispatch of the available capacities at an hourly level for the year being modelled.
I went through several examples in the documentation, Prof. Philpott's slides, and also papers such as:
I understood that they all model the investments by adding an investment node at the beginning with only investment-related constraints, and then a finite- or infinite-horizon model of the operation. For instance, the policy graph below from the slides (the same as in the first paper) assumes an investment node at the beginning followed by infinite-horizon operational optimization.
What I would like to achieve is to start with an existing capacity and keep building on it. In this case, should I start the loop from the investment node? Or should I manually define a policy graph with the first node being the investment node, the following 8760 nodes being dispatch subproblems, and then repeat this for the number of 5-year time steps I would like to model (see the sketch below)? Do you have any recommendations on this? I would also appreciate it if you know of any similar examples. In the next step, I will introduce stochastic parameters (probably energy prices and demand). In the future, I would also like to implement different investment lead times for various types of DH generation technologies, but that's a topic for another question :)
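A toy sketch of the layout I have in mind (node counts shrunk drastically for illustration; all cost numbers are placeholders):
using SDDP, HiGHS
n_ops = 3  # operational nodes per investment period (8760 in the real model)
model = SDDP.LinearPolicyGraph(;
    stages = 2 * (1 + n_ops),  # 2 investment periods for illustration
    sense = :Min,
    lower_bound = 0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variables(sp, begin
        0 <= x_capacity <= 500, SDDP.State, (initial_value = 80)
        u_expansion >= 0
        u_production >= 0
        u_unmet >= 0
    end)
    if mod1(t, 1 + n_ops) == 1
        # Investment node: expansion is allowed, no dispatch
        @constraint(sp, x_capacity.out == x_capacity.in + u_expansion)
        @stageobjective(sp, 50 * u_expansion)
    else
        # Operational node: dispatch against a placeholder demand
        @constraints(sp, begin
            x_capacity.out == x_capacity.in
            u_production + u_unmet == 100
            u_production <= x_capacity.in
        end)
        @stageobjective(sp, 3 * u_production + 60 * u_unmet)
    end
end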
I will also contact Prof. Philpott for collaboration on this research and a possible research stay. We may also have a chance to collaborate with you if you are interested. Now, I am just trying to develop a representative model for a conference to present a stochastic capacity expansion model for DH, which is lacking in the literature.
I am also sharing below the very simple model I've developed over the last few days. This is my first time working with Julia and SDDP, so please pardon any trivial errors I made. I would also appreciate comments on using a penalty cost for unmet demand to avoid violating the relatively complete recourse requirement (is it good practice, or do you recommend modelling it another way)?
Thanks and best regards!