Add a test function that accepts a sampling scheme. Defaults to in-sample Monte Carlo.
I've called this simulate. There is an interesting idea for a train-validation-test stopping rule: train the policy and every so often simulate that policy on a validation dataset. Stop once the policy fails to improve on the validation dataset. Discussion in Todo List #3.
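As a rough illustration of that stopping rule, here is a minimal sketch. It assumes a hypothetical train_iteration!(policy) that performs one forward/backward pass and a simulate(policy, scenarios) that returns the mean cost over the supplied validation scenarios; none of these names are part of the package.

```julia
# Hypothetical sketch: stop training once the policy stops improving on a
# held-out validation set. `train_iteration!` and `simulate` stand in for
# whatever the package ends up exposing; `patience` is illustrative.
function train_with_validation!(policy, validation_scenarios; patience = 5)
    best, stall = Inf, 0
    while stall < patience
        train_iteration!(policy)                        # one forward/backward pass
        score = simulate(policy, validation_scenarios)  # mean cost on validation set
        if score < best - 1e-6
            best, stall = score, 0                      # improved: reset the counter
        else
            stall += 1                                  # no improvement this round
        end
    end
    return policy
end
```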
Only add the cut if it improves by more than some ϵ
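A sketch of that ϵ test, assuming cuts are stored as an intercept plus a coefficient vector and evaluated at the current state; height and the field names are illustrative, not the package's internal representation.

```julia
using LinearAlgebra

# Evaluate a cut θ ≥ intercept + coefficients' * x at the state x.
height(cut, x) = cut.intercept + dot(cut.coefficients, x)

# Keep the new cut only if it improves the approximation at x by more than ϵ.
function should_add_cut(existing_cuts, new_cut, x; ϵ = 1e-6)
    isempty(existing_cuts) && return true
    return height(new_cut, x) > maximum(height(c, x) for c in existing_cuts) + ϵ
end
```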
Add cuts to other nodes in the graph with the same children
We can pre-process the initial graph object to obtain this list. Instead of only nodes with identical children, we could also include nodes that share a subset of the children. It also needs a train option to enable/disable.
Be smarter about adding cuts to other nodes in the graph. We could detect that we've solved the majority of subproblems and we only need to solve a few more to be able to add a cut. That's an improvement for complicated graphs though.
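A possible pre-processing sketch for the list of nodes mentioned above, assuming the graph is available as a mapping from each node to its children (this is not the package's internal representation):

```julia
# Group nodes that share an identical set of children, so a cut generated at
# one node in a group can be copied to the others.
function group_nodes_by_children(children::Dict{Symbol, Vector{Symbol}})
    groups = Dict{Set{Symbol}, Vector{Symbol}}()
    for (node, kids) in children
        push!(get!(groups, Set(kids), Symbol[]), node)
    end
    # Only groups with more than one member allow cut sharing.
    return [g for g in values(groups) if length(g) > 1]
end
```

For example, group_nodes_by_children(Dict(:a => [:c, :d], :b => [:d, :c])) returns the single group [:a, :b] (up to ordering).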
Numerical stability improvements:
Project state variables to their bounds on the forward pass.
Analyze coefficients for user warnings (a sketch follows this list). Things to check:
Large coefficients (> 1e7): log10(maximum(abs.(coefficients))) > 7
Small coefficients (< 1e-2): -6 < log10(minimum(abs.(coefficients))) < -2
High dynamic range: log10(maximum(abs.(coefficients))) - log10(minimum(abs.(coefficients))) > 8
Throw away cuts that are numerically almost identical.
What about forcing the state iterates on the forward pass to be some minimum distance away from the previous iterate? Unlike two-stage Benders, we don't necessarily want our iterates to converge. We want to explore the state space, or at the very least add cuts that are not near each other, to prevent numerical issues.
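A minimal sketch of the coefficient checks listed above, assuming zero coefficients have already been filtered out; the function name and warning messages are illustrative.

```julia
# Warn about numerically problematic coefficient magnitudes, using the
# thresholds from the list above.
function warn_on_coefficients(coefficients::AbstractVector{<:Real})
    lo, hi = extrema(log10.(abs.(coefficients)))
    hi > 7       && @warn("Large coefficients detected (> 1e7).")
    -6 < lo < -2 && @warn("Small coefficients detected (< 1e-2).")
    hi - lo > 8  && @warn("High dynamic range between coefficients (> 1e8).")
    return nothing
end
```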
Any updates on the Asamov & Powell quadratic regularization implementation?
Nope. We could discuss off-line perhaps. I tend to think that regularization is dumb. It doesn't make sense to regularize to a previous sample path if the uncertainty is different. It only makes sense if the state variables have very low variance.
SDDP.jl is just not designed for high-dimensional problems.
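For reference, the quadratic regularization discussed here amounts to adding a penalty ρ·‖x − x̄‖² to the forward-pass stage objective, pulling the iterate towards the previous one. A hedged JuMP sketch follows; the function, ρ, and x̄ are illustrative and not anything implemented in the package.

```julia
using JuMP

# Add ρ * ||x - x̄||² to the stage objective, where x̄ is the previous iterate's
# state. This only illustrates the idea, not an actual implementation.
function add_regularization!(sp::Model, x::Vector{VariableRef}, x̄::Vector{Float64}, ρ::Float64)
    penalty = ρ * sum((x[i] - x̄[i])^2 for i in eachindex(x))
    set_objective_function(sp, objective_function(sp) + penalty)
    return sp
end
```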
Features
Implement iteration schemes: Dynamic sequencing protocol, https://d-nb.info/1046905090/34 (Wolf's thesis actually has a lot of great stuff aimed at L-shaped nested Benders). Covered by Proposal: Iteration Schemes #116.
@state macro: I had a lot of issues with this. There seems to be some local scoping/macro hygiene issue passing local variables in Kokako scope through to JuMP. Related issues: Fix hygiene jump-dev/JuMP.jl#1497, Fix hygiene in objective macro jump-dev/JuMP.jl#1517. I went the JuMPExtension route. I owe Benoit big-time.
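As a generic illustration of the hygiene pitfall (not the actual Kokako/SDDP.jl code), a macro that builds JuMP expressions from arguments supplied by the caller has to escape them so they resolve in the caller's scope:

```julia
using JuMP

# Without `esc`, `model`, `var`, and `bound` would resolve in this module
# rather than in the caller's scope, which is the kind of hygiene problem
# referenced in the JuMP issues above. `@add_limit` is purely illustrative.
macro add_limit(model, var, bound)
    return esc(:( @constraint($model, $var <= $bound) ))
end
```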
Performance improvements
train option to enable/disable.
Public facing
Logging.