Domain of incoming states in the Lagrangian dual #818
Okay. Here's the example that clarified my understanding of this:

```julia
julia> using JuMP, HiGHS
julia> function solve_Lagrangian(L, z)
model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x)
@variable(model, y)
@constraint(model, y >= -x)
@constraint(model, y >= 1 + 2(x - 1))
@objective(model, Min, y - L * (x - z))
optimize!(model)
@assert is_solved_and_feasible(model)
return objective_value(model), value(x), value(y)
end
solve_Lagrangian (generic function with 1 method)
julia> function solve_Lagrangian_bounds(L, z)
model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, 0 <= x <= 1)
@variable(model, y)
@constraint(model, y >= -x)
@constraint(model, y >= 1 + 2(x - 1))
@objective(model, Min, y - L * (x - z))
optimize!(model)
@assert is_solved_and_feasible(model)
return objective_value(model), value(x), value(y)
end
solve_Lagrangian_bounds (generic function with 1 method)
julia> function solve_Lagrangian_bounds_and_int(L, z)
model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, 0 <= x <= 1, Bin)
@variable(model, y)
@constraint(model, y >= -x)
@constraint(model, y >= 1 + 2(x - 1))
@objective(model, Min, y - L * (x - z))
optimize!(model)
@assert is_solved_and_feasible(model)
return objective_value(model), value(x), value(y)
end
solve_Lagrangian_bounds_and_int (generic function with 1 method)
julia> solve_Lagrangian(-1, 0.0)
(0.0, -0.0, 0.0)
julia> solve_Lagrangian(0, 0.0)
(-0.3333333333333333, 0.3333333333333333, -0.3333333333333333)
julia> solve_Lagrangian(2, 0.0)
(-1.0, 0.5, 0.0)
julia> solve_Lagrangian_bounds(-1, 0.0)
(0.0, 0.0, -0.0)
julia> solve_Lagrangian_bounds(0, 0.0)
(-0.33333333333333337, 0.3333333333333333, -0.33333333333333337)
julia> solve_Lagrangian_bounds(2, 0.0)
(-1.0, 1.0, 1.0)
julia> solve_Lagrangian_bounds_and_int(-1, 0.0)
(0.0, 0.0, -0.0)
julia> solve_Lagrangian_bounds_and_int(0, 0.0)
(0.0, 0.0, -0.0)
julia> solve_Lagrangian_bounds_and_int(1, 0.0)
(0.0, 0.0, 0.0)
julia> solve_Lagrangian_bounds_and_int(2, 0.0)
(-1.0, 1.0, 1.0)
```

The key result is:

```julia
julia> solve_Lagrangian_bounds_and_int(1, 0.0)
(0.0, 0.0, 0.0)
```

This cuts off part of the LP relaxation and is tight between the two integer points.
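To spell out why (my own arithmetic for the toy problem above, not anything quoted from the paper): the subproblem's value function is $V(x) = \max(-x,\ 2x - 1)$, so $V(0) = 0$ and $V(1) = 1$, while the LP relaxation dips to $-1/3$ at $x = 1/3$. With incoming state $z = 0$ and multiplier $L = 1$, the integer-restricted Lagrangian is

$$
\min_{x \in \{0, 1\}} V(x) - 1 \cdot (x - 0) = \min(0 - 0,\; 1 - 1) = 0,
$$

so the corresponding cut $\theta \ge 0 + 1 \cdot (x - 0) = x$ passes through both $(0, V(0))$ and $(1, V(1))$, and it cuts off the LP-relaxation point $(1/3, -1/3)$.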
I was reading @ChrisFuelOR's paper: https://optimization-online.org/wp-content/uploads/2024/08/Manuscript_Fuellner_Optimization_Online.pdf
Here's the most interesting section: [excerpt from the paper not reproduced here]
In SDDP.jl, when we dualize the incoming state bounds, we do not add new bounds; see `SDDP.jl/src/plugins/duality_handlers.jl`, lines 178 to 183 at commit aec55e2.
Our notation differs, but this is equivalent to the last choice of $\mathbb{R}$. I hadn't really thought about it before, but we can do much better if we restrict the domain to the union of the variable bounds in the previous nodes.
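For concreteness, here is a minimal sketch of what that could look like on a single JuMP subproblem. This is not the code in `duality_handlers.jl`: the function and its `lower`/`upper` keyword arguments are hypothetical stand-ins for however the union of the previous nodes' variable bounds would actually be computed.

```julia
# A minimal sketch, not SDDP.jl's implementation: evaluate the Lagrangian
# relaxation of a subproblem whose incoming state `x_in` is currently fixed
# to `z`, optionally restricting `x_in` to [lower, upper] instead of leaving
# it free over all of R (the current behaviour). `lower` and `upper` are
# hypothetical stand-ins for the union of the previous nodes' bounds.
using JuMP

function lagrangian_value(model, x_in, z, L; lower = -Inf, upper = Inf)
    f = objective_function(model)
    # Relax the fixing constraint x_in == z ...
    unfix(x_in)
    # ... and restrict the incoming state to the proposed domain.
    isfinite(lower) && set_lower_bound(x_in, lower)
    isfinite(upper) && set_upper_bound(x_in, upper)
    # Penalize the relaxed constraint in the objective: f - L * (x_in - z).
    @objective(model, Min, f - L * (x_in - z))
    optimize!(model)
    @assert is_solved_and_feasible(model)
    dual_value = objective_value(model)
    # Restore the original subproblem.
    @objective(model, Min, f)
    has_lower_bound(x_in) && delete_lower_bound(x_in)
    has_upper_bound(x_in) && delete_upper_bound(x_in)
    fix(x_in, z)
    return dual_value
end
```

Called on the toy model above with `lower = 0.0, upper = 1.0`, this should reproduce `solve_Lagrangian_bounds`, and with the defaults it should reproduce `solve_Lagrangian`.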
There are two subtleties that we need to account for: