MadDiff implements forward- and reverse-mode implicit differentiation for MadSuite solvers. It leverages MadNLP's modular KKT and linear-solver infrastructure, supporting LPs, QPs, and NLPs with the KKT systems provided by MadNLP, MadIPM, MadNCL, and HybridKKT.
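For background, both modes follow from implicit differentiation of the solver's KKT conditions (a standard recap; the notation below is generic, not MadDiff-specific). Writing the primal-dual solution as w(p) = (x, y, zl, zu) and the KKT residual as F(w, p) = 0, differentiating with respect to the parameters p gives

\frac{\partial F}{\partial w}\,\frac{\partial w}{\partial p} = -\frac{\partial F}{\partial p}

Forward mode solves this linear system, reusing the KKT matrix already assembled by the solver, for a chosen parameter direction; reverse mode solves the transposed system seeded with the loss sensitivities (dL_dx, dL_dy, dL_dzl, dL_dzu) and accumulates the parameter gradient by applying -(\partial F/\partial p)^T to that solution.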
Warning
MadDiff is a work-in-progress and requires installing forks of several dependencies. Proceed with caution and verify correctness before use.
Note
MadDiff requires that your AbstractNLPModel implementation also supports the ParametricNLPModels API. Currently this is handled automatically only when using MadNLP through JuMP; support for ExaModels, ADNLPModels, and NLPModelsJuMP is planned.
using MadNLP, MadDiff

nlp = ... # must implement the ParametricNLPModels API
solver = MadNLP.MadNLPSolver(nlp)
solution = MadNLP.solve!(solver)     # solve the problem first
diff = MadDiff.MadDiffSolver(solver) # build the differentiation solver from the converged solver
dL_dx, dL_dy, dL_dzl, dL_dzu = ... # loss sensitivities w.r.t. primals, duals, and bound multipliers
rev = MadDiff.reverse_differentiate!(diff; dL_dx, dL_dy, dL_dzl, dL_dzu)
rev.grad_p # gradient of the loss with respect to the parameters

MadDiff aims to be a drop-in replacement for DiffOpt with MadNLP. Simply switch DiffOpt.diff_optimizer(MadNLP.Optimizer) for MadDiff.diff_optimizer(MadNLP.Optimizer) and enjoy the speedup!
using JuMP, DiffOpt
using MadDiff, MadNLP

model = Model(MadDiff.diff_optimizer(MadNLP.Optimizer)) # use MadDiff.diff_optimizer
@variable(model, x)
@variable(model, p in MOI.Parameter(1.0))
@constraint(model, x >= 2p)
@objective(model, Min, x^2)
optimize!(model)

# Forward mode: seed a parameter perturbation and read the primal sensitivity.
DiffOpt.empty_input_sensitivities!(model)
MOI.set(model, DiffOpt.ForwardConstraintSet(), ParameterRef(p), MOI.Parameter(1.0))
DiffOpt.forward_differentiate!(model)
dx = MOI.get(model, DiffOpt.ForwardVariablePrimal(), x)

# Reverse mode: seed a primal sensitivity and read the parameter gradient.
DiffOpt.empty_input_sensitivities!(model)
MOI.set(model, DiffOpt.ReverseVariablePrimal(), x, 1.0)
DiffOpt.reverse_differentiate!(model)
dp = MOI.get(model, DiffOpt.ReverseConstraintSet(), ParameterRef(p)).value
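As an illustration of how the reverse-mode result can be used downstream, here is a minimal sketch continuing from the model above; the loss L(x) = x, the step size, and the iteration count are illustrative choices, not part of MadDiff. Each iteration re-solves the problem, differentiates in reverse mode, and takes a plain gradient step on the parameter.

lr = 0.1
for _ in 1:5
    optimize!(model)
    DiffOpt.empty_input_sensitivities!(model)
    # Seed reverse mode with dL/dx = 1 for the illustrative loss L(x) = x
    MOI.set(model, DiffOpt.ReverseVariablePrimal(), x, 1.0)
    DiffOpt.reverse_differentiate!(model)
    dLdp = MOI.get(model, DiffOpt.ReverseConstraintSet(), ParameterRef(p)).value
    # Plain gradient step on the parameter value
    set_parameter_value(p, parameter_value(p) - lr * dLdp)
end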