User question: stochastic lead times #590
I'd use this trick:

using SDDP
import HiGHS
import Distributions

T = 10
model = SDDP.LinearPolicyGraph(
    stages = 20,
    sense = :Max,
    upper_bound = 1000,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variables(sp, begin
        # x_inventory is stock on hand; x_orders[i].in holds orders that reach
        # the inventory in i - 1 more stages (x_orders[1].in arrives this stage).
        x_inventory >= 0, SDDP.State, (initial_value = 0)
        x_orders[1:T+1], SDDP.State, (initial_value = 0)
        0 <= u_buy <= 10
        u_sell >= 0
    end)
    fix(x_orders[T+1].out, 0)
    @stageobjective(sp, u_sell)
    @constraints(sp, begin
        c_orders[i=1:T], x_orders[i+1].in + 1 * u_buy == x_orders[i].out
        x_inventory.out == x_inventory.in - u_sell + x_orders[1].in
    end)
    # Truncate the Geometric lead-time distribution to the support 1:T and
    # renormalize the probabilities.
    Ω = 1:T
    P = Distributions.pdf.(Distributions.Geometric(1 / 5), 0:T-1)
    P ./= sum(P)
    SDDP.parameterize(sp, Ω, P) do ω
        # Rewrite the constraint c_orders[i=1:T] to
        #   x_orders[i+1].in + 1 * u_buy == x_orders[i].out   if ω == i, and
        #   x_orders[i+1].in + 0 * u_buy == x_orders[i].out   if ω != i.
        for i in Ω
            set_normalized_coefficient(c_orders[i], u_buy, ω == i ? 1 : 0)
        end
    end
end
SDDP.train(model; iteration_limit = 10)
Very slick! Thank you @odow, I didn't know about the set_normalized_coefficient trick.
See https://odow.github.io/SDDP.jl/stable/guides/add_noise_in_the_constraint_matrix. But yeah. One problem with SDDP.jl is that there's a bit of art in knowing how to represent something, and it isn't always obvious to new users. And it's hard to teach because each situation is subtly different.
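In isolation, the pattern from that guide is: build the constraint once with a placeholder coefficient, then let SDDP.parameterize overwrite that coefficient for each realization. Here is a minimal sketch of that pattern; the single-constraint model, the sample space [0.2, 0.5, 1.0], and the bounds are illustrative assumptions, not from this thread:

using SDDP
import HiGHS

model = SDDP.LinearPolicyGraph(
    stages = 3,
    lower_bound = -50.0,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variable(sp, x >= 0, SDDP.State, initial_value = 0.0)
    # Build the constraint with a placeholder coefficient of 1.0 on x.out ...
    @constraint(sp, c_noise, 1.0 * x.out <= 1.0)
    @stageobjective(sp, -x.out)
    # ... then overwrite that coefficient with the sampled ω each time the
    # stage subproblem is solved.
    SDDP.parameterize(sp, [0.2, 0.5, 1.0]) do ω
        set_normalized_coefficient(c_noise, x.out, ω)
    end
end
SDDP.train(model; iteration_limit = 5)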
Indeed, thanks for taking the time to show me this.
Closing because this seems resolved. Please re-open if you have further questions.
Very nice trick @odow!
PRs accepted 😄
Hi,
I'm trying to figure out how to adapt the example in the docs here: https://odow.github.io/SDDP.jl/stable/guides/access_previous_variables/#Access-a-decision-from-N-stages-ago to the situation where the lead time is variable, which is important for modeling my problem. I'm not sure of the best way to go about it. I tried the code below, which also implements a "conveyor belt" style of advancement from a lead time at pipeline[i] to the "ready to go" point at pipeline[1]. In this example I consider the lead time to follow a truncated Geometric distribution, but because I can't use one JuMP variable to subset another, I'm unsure what a good alternative is. I'm pasting what I have below (i.e. what I wish I could run). Does anyone have any suggestions? Thanks! I'm on SDDP v1.1.4.
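The question's own code is not reproduced above. For orientation, a fixed-lead-time version of the "conveyor belt" it describes, in the style of the linked guide, might look like the sketch below; the lead time of 4 stages, the bounds, and the u_sell objective are illustrative assumptions (only pipeline and the shifting structure come from the question), and demand and costs are omitted:

using SDDP
import HiGHS

lead_time = 4  # assumed fixed lead time, purely for illustration
model = SDDP.LinearPolicyGraph(
    stages = 20,
    sense = :Max,
    upper_bound = 1000,
    optimizer = HiGHS.Optimizer,
) do sp, t
    @variables(sp, begin
        x_inventory >= 0, SDDP.State, (initial_value = 0)
        # pipeline[i].in holds orders that reach inventory in i - 1 more stages;
        # pipeline[1] is the "ready to go" point.
        pipeline[1:lead_time], SDDP.State, (initial_value = 0)
        0 <= u_buy <= 10
        u_sell >= 0
    end)
    @constraints(sp, begin
        # New orders enter at the back of the conveyor belt ...
        pipeline[lead_time].out == u_buy
        # ... everything else advances one slot per stage ...
        [i = 1:lead_time-1], pipeline[i].out == pipeline[i+1].in
        # ... and the front of the belt arrives into inventory.
        x_inventory.out == x_inventory.in - u_sell + pipeline[1].in
    end)
    @stageobjective(sp, u_sell)
end

The accepted answer above then makes the lead time random by keeping a balance constraint for every possible slot and letting SDDP.parameterize move the u_buy coefficient to the sampled slot.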