drake::solvers Namespace Reference

Namespaces

 test
 

Classes

struct  AddRotationMatrixBoxSphereIntersectionReturn
 Some of the newly added variables in the function AddRotationMatrixBoxSphereIntersectionMilpConstraints. More...
 
class  AugmentedLagrangianNonsmooth
 Compute the augmented Lagrangian (AL) of a given mathematical program. More...
 
class  AugmentedLagrangianSmooth
 Compute the augmented Lagrangian (AL) of a given mathematical program. More...
 
class  Binding
 A binding on constraint type C is a mapping of the decision variables onto the inputs of C. More...
 
struct  Bound
 Stores the lower and upper bound of a variable. More...
 
class  BoundingBoxConstraint
 Implements a constraint of the form \( lb <= x <= ub \). More...
 
class  ClarabelSolver
 An interface to wrap Clarabel https://github.com/oxfordcontrol/Clarabel.cpp. More...
 
struct  ClarabelSolverDetails
 The Clarabel solver details after calling the Solve() function. More...
 
class  ClpSolver
 A wrapper to call CLP using Drake's MathematicalProgram. More...
 
struct  ClpSolverDetails
 The CLP solver details after calling the Solve() function. More...
 
class  Constraint
 A constraint is a function + lower and upper bounds. More...
 
class  Cost
 Provides an abstract base for all costs. More...
 
class  CsdpSolver
 Wrap CSDP solver such that it can solve a drake::solvers::MathematicalProgram. More...
 
struct  CsdpSolverDetails
 The CSDP solver details after calling the Solve() function. More...
 
class  EqualityConstrainedQPSolver
 Solves a quadratic program with equality constraints. More...
 
class  EvaluatorBase
 Provides an abstract interface to represent an expression, mapping a fixed or dynamic number of inputs to a fixed number of outputs, that may be evaluated on a scalar type of double or AutoDiffXd. More...
 
class  EvaluatorConstraint
 A constraint that may be specified using another (potentially nonlinear) evaluator. More...
 
class  EvaluatorCost
 A cost that may be specified using another (potentially nonlinear) evaluator. More...
 
class  ExponentialConeConstraint
 An exponential cone constraint is a special type of convex cone constraint. More...
 
class  ExpressionConstraint
 Impose a generic (potentially nonlinear) constraint represented as a vector of symbolic Expression. More...
 
class  ExpressionCost
 Impose a generic (potentially nonlinear) cost represented as a symbolic Expression. More...
 
class  FunctionEvaluator
 An evaluator that may be specified using a callable object. More...
 
class  GurobiSolver
 An implementation of SolverInterface for the commercially-licensed Gurobi solver (https://www.gurobi.com/). More...
 
struct  GurobiSolverDetails
 The Gurobi solver details after calling the Solve() function. More...
 
class  IpoptSolver
 A wrapper to call Ipopt using Drake's MathematicalProgram. More...
 
struct  IpoptSolverDetails
 The Ipopt solver details after calling the Solve() function. More...
 
class  L1NormCost
 Implements a cost of the form ‖Ax + b‖₁. More...
 
class  L2NormCost
 Implements a cost of the form ‖Ax + b‖₂. More...
 
class  LinearComplementarityConstraint
 Implements a constraint of the form: More...
 
class  LinearConstraint
 Implements a constraint of the form \( lb <= Ax <= ub \). More...
 
class  LinearCost
 Implements a cost of the form \( a'x + b \). More...
 
class  LinearEqualityConstraint
 Implements a constraint of the form \( Ax = b \). More...
 
class  LinearMatrixInequalityConstraint
 Impose the matrix inequality constraint on variable x that \( F_0 + x_1 F_1 + ... + x_n F_n \) is p.s.d., where p.s.d. stands for positive semidefinite. More...
 
class  LinearSystemSolver
 Finds the least-squares solution to the linear system A * x = b. More...
 
class  LInfNormCost
 Implements a cost of the form ‖Ax + b‖∞. More...
 
struct  LogarithmicSos2NewBinaryVariables
 The size of the new binary variables, known at compile time, for the Special Ordered Set of type 2 (SOS2) constraint. More...
 
struct  LogarithmicSos2NewBinaryVariables< Eigen::Dynamic >
 
class  LorentzConeConstraint
 Constrains that the linear expression \( z=Ax+b \) lies within the Lorentz cone. More...
 
class  MathematicalProgram
 MathematicalProgram stores the decision variables, the constraints and costs of an optimization problem. More...
 
class  MathematicalProgramResult
 The result returned by MathematicalProgram::Solve(). More...
 
class  MinimumValueLowerBoundConstraint
 Constrain min(v) >= lb where v=f(x). More...
 
class  MinimumValueUpperBoundConstraint
 Constrain min(v) <= ub where v=f(x). More...
 
class  MixedIntegerBranchAndBound
 Given a mixed-integer optimization problem (MIP) (or more accurately, a mixed-binary problem), solve this problem through a branch-and-bound process. More...
 
class  MixedIntegerBranchAndBoundNode
 A node in the branch-and-bound (bnb) tree. More...
 
class  MixedIntegerRotationConstraintGenerator
 We relax the non-convex SO(3) constraint on rotation matrix R to mixed-integer linear constraints. More...
 
class  MobyLCPSolver
 A class for solving Linear Complementarity Problems (LCPs). More...
 
class  MobyLcpSolverId
 Non-template class for MobyLcpSolver<T> constants. More...
 
class  MosekSolver
 An implementation of SolverInterface for the commercially-licensed MOSEK (TM) solver (https://www.mosek.com/). More...
 
struct  MosekSolverDetails
 The MOSEK solver details after calling the Solve() function. More...
 
struct  NewSymmetricVariableNames
 
struct  NewVariableNames
 
struct  NewVariableNames< Eigen::Dynamic >
 
struct  NewVariableNames< Rows, Cols >
 
struct  NewVariableNames< Size >
 The type of the names for the newly added variables. More...
 
class  NloptSolver
 
struct  NloptSolverDetails
 The NLopt solver details after calling the Solve() function. More...
 
class  OsqpSolver
 A wrapper to call OSQP using Drake's MathematicalProgram. More...
 
struct  OsqpSolverDetails
 The OSQP solver details after calling the Solve() function. More...
 
class  PerspectiveQuadraticCost
 If \( z = Ax + b \), implements a cost of the form \( (z_1^2 + z_2^2 + ... + z_{n-1}^2) / z_0 \). Note that this cost is convex when we additionally constrain z_0 > 0. More...
 
class  PolynomialConstraint
 A constraint on the values of multivariate polynomials. More...
 
class  PolynomialCost
 Implements a cost of the form P(x, y...) where P is a multivariate polynomial in x, y, ... More...
 
class  PolynomialEvaluator
 Implements an evaluator of the form P(x, y...) where P is a multivariate polynomial in x, y, ... More...
 
class  PositiveSemidefiniteConstraint
 Implements a positive semidefinite constraint on a symmetric matrix S, i.e., S is p.s.d.; namely, all eigenvalues of S are non-negative. More...
 
class  QuadraticConstraint
 Implements a constraint of the form lb ≤ .5 xᵀQx + bᵀx ≤ ub. Without loss of generality, the class stores a symmetric matrix Q. More...
 
class  QuadraticCost
 Implements a cost of the form \( .5 x'Qx + b'x + c \). More...
 
class  RotatedLorentzConeConstraint
 Constrains that the linear expression \( z=Ax+b \) lies within the rotated Lorentz cone. More...
 
class  ScsSolver
 
struct  ScsSolverDetails
 The SCS solver details after calling the Solve() function. More...
 
struct  SemidefiniteRelaxationOptions
 Configuration options for the MakeSemidefiniteRelaxation. More...
 
class  SnoptSolver
 An implementation of SolverInterface for the commercially-licensed SNOPT solver (https://ccom.ucsd.edu/~optimizers/solvers/snopt/). More...
 
struct  SnoptSolverDetails
 The SNOPT solver details after calling the Solve() function. More...
 
class  SolverBase
 Abstract base class used by implementations of individual solvers. More...
 
class  SolverId
 Identifies a SolverInterface implementation. More...
 
class  SolverInterface
 Interface used by implementations of individual solvers. More...
 
class  SolverOptions
 Stores options for multiple solvers. More...
 
class  SolverTypeConverter
 Converts between SolverType and SolverId. More...
 
class  UnrevisedLemkeSolver
 A class for the unrevised implementation of Lemke's Algorithm for solving Linear Complementarity Problems (LCPs). More...
 
class  UnrevisedLemkeSolverId
 Non-template class for UnrevisedLemkeSolver<T> constants. More...
 
class  VisualizationCallback
 Defines a simple evaluator with no outputs that takes a callback function pointer. More...
 

Typedefs

using DecisionVariable = symbolic::Variable
 
template<int rows, int cols>
using MatrixDecisionVariable = Eigen::Matrix< symbolic::Variable, rows, cols >
 
template<int rows>
using VectorDecisionVariable = MatrixDecisionVariable< rows, 1 >
 
using MatrixXDecisionVariable = MatrixDecisionVariable< Eigen::Dynamic, Eigen::Dynamic >
 
using VectorXDecisionVariable = VectorDecisionVariable< Eigen::Dynamic >
 
using VariableRefList = std::list< Eigen::Ref< const VectorXDecisionVariable > >
 
template<int rows, int cols>
using MatrixIndeterminate = Eigen::Matrix< symbolic::Variable, rows, cols >
 MatrixIndeterminate<int, int> is used as an alias for Eigen::Matrix<symbolic::Variable, int, int>. More...
 
template<int rows>
using VectorIndeterminate = MatrixIndeterminate< rows, 1 >
 VectorIndeterminate<int> is used as an alias for Eigen::Matrix<symbolic::Variable, int, 1>. More...
 
using MatrixXIndeterminate = MatrixIndeterminate< Eigen::Dynamic, Eigen::Dynamic >
 MatrixXIndeterminate is used as an alias for Eigen::Matrix<symbolic::Variable, Eigen::Dynamic, Eigen::Dynamic>. More...
 
using VectorXIndeterminate = VectorIndeterminate< Eigen::Dynamic >
 VectorXIndeterminate is used as an alias for Eigen::Matrix<symbolic::Variable, Eigen::Dynamic, 1>. More...
 
using IndeterminatesRefList = std::list< Eigen::Ref< const VectorXIndeterminate > >
 
using MinimumValuePenaltyFunction = std::function< void(double x, double *penalty, double *dpenalty_dx)>
 Computes the penalty function φ(x) and its derivatives dφ(x)/dx. More...
 
using ProgramAttributes = std::unordered_set< ProgramAttribute, DefaultHash >
 
typedef uint32_t RollPitchYawLimits
 

Enumerations

enum  CommonSolverOption { kPrintFileName, kPrintToConsole, kStandaloneReproductionFileName, kMaxThreads }
 Some options can be applied not just to one solver, but to many solvers (for example, many solvers support printing out the progress in each iteration). More...
 
enum  IntervalBinning { kLogarithmic, kLinear }
 For a continuous variable whose range is cut into small intervals, we will use binary variables to represent which interval the continuous variable is in. More...
 
enum  ProgramAttribute {
  kGenericCost, kGenericConstraint, kQuadraticCost, kQuadraticConstraint,
  kLinearCost, kLinearConstraint, kLinearEqualityConstraint, kLinearComplementarityConstraint,
  kLorentzConeConstraint, kRotatedLorentzConeConstraint, kPositiveSemidefiniteConstraint, kExponentialConeConstraint,
  kL2NormCost, kBinaryVariable, kCallback
}
 
enum  ProgramType {
  kLP, kQP, kSOCP, kSDP,
  kGP, kCGP, kMILP, kMIQP,
  kMISOCP, kMISDP, kQuadraticCostConicConstraint, kNLP,
  kLCP, kUnknown
}
 A coarse categorization of the optimization problem based on the type of constraints/costs/variables. More...
 
enum  RollPitchYawLimitOptions {
  kNoLimits = 0, kRPYError = 1 << 0, kRoll_NegPI_2_to_PI_2 = 1 << 1, kRoll_0_to_PI = 1 << 2,
  kPitch_NegPI_2_to_PI_2 = 1 << 3, kPitch_0_to_PI = 1 << 4, kYaw_NegPI_2_to_PI_2 = 1 << 5, kYaw_0_to_PI = 1 << 6,
  kRoll_0_to_PI_2 = (1 << 1) | (1 << 2), kPitch_0_to_PI_2 = (1 << 3) | (1 << 4), kYaw_0_to_PI_2 = (1 << 5) | (1 << 6)
}
 
enum  RemoveFreeVariableMethod { kTwoSlackVariables = 1, kNullspace = 2, kLorentzConeSlack = 3 }
 The SDPA format doesn't accept free variables; namely, the problem it solves is of the form P1. More...
 
enum  SolutionResult {
  kSolutionFound = 0, kInvalidInput = -1, kInfeasibleConstraints = -2, kUnbounded = -3,
  kSolverSpecificError = -4, kInfeasibleOrUnbounded = -5, kIterationLimit = -6, kDualInfeasible = -7,
  kSolutionResultNotSet = -8
}
 
enum  SolverType {
  kClp, kCsdp, kEqualityConstrainedQP, kGurobi,
  kIpopt, kLinearSystem, kMobyLCP, kMosek,
  kNlopt, kOsqp, kSnopt, kScs,
  kUnrevisedLemke
}
 This type only exists for backwards compatibility, and should not be used in new code. More...
 

Functions

void AggregateLinearCosts (const std::vector< Binding< LinearCost >> &linear_costs, Eigen::SparseVector< double > *linear_coeff, VectorX< symbolic::Variable > *vars, double *constant_cost)
 Given many linear costs, aggregate them into a single linear cost aᵀ*x + b. More...
 
void AggregateQuadraticAndLinearCosts (const std::vector< Binding< QuadraticCost >> &quadratic_costs, const std::vector< Binding< LinearCost >> &linear_costs, Eigen::SparseMatrix< double > *Q_lower, VectorX< symbolic::Variable > *quadratic_vars, Eigen::SparseVector< double > *linear_coeff, VectorX< symbolic::Variable > *linear_vars, double *constant_cost)
 Given many linear and quadratic costs, aggregate them into 0.5*x₁ᵀQx₁ + bᵀx₂ + c. More...
 
std::unordered_map< symbolic::Variable, Bound > AggregateBoundingBoxConstraints (const std::vector< Binding< BoundingBoxConstraint >> &bounding_box_constraints)
 Aggregates many bounding box constraints, returns the intersection (the tightest bounds) of these constraints. More...
 
void AggregateBoundingBoxConstraints (const MathematicalProgram &prog, Eigen::VectorXd *lower, Eigen::VectorXd *upper)
 Aggregates all the BoundingBoxConstraints inside prog, returns the intersection of the bounding box constraints as the lower and upper bound for each variable in prog. More...
 
void AggregateBoundingBoxConstraints (const MathematicalProgram &prog, std::vector< double > *lower, std::vector< double > *upper)
 Overloads AggregateBoundingBoxConstraints, but the type of lower and upper are std::vector<double>. More...
 
void AggregateDuplicateVariables (const Eigen::SparseMatrix< double > &A, const VectorX< symbolic::Variable > &vars, Eigen::SparseMatrix< double > *A_new, VectorX< symbolic::Variable > *vars_new)
 For linear expression A * vars where vars might contain duplicated entries, rewrite this linear expression as A_new * vars_new where vars_new doesn't contain duplicated entries. More...
 
template<typename C >
std::ostream & operator<< (std::ostream &os, const Binding< C > &binding)
 Print out the Binding. More...
 
SolverId ChooseBestSolver (const MathematicalProgram &prog)
 Choose the best solver given the formulation in the optimization program and the availability of the solvers. More...
 
const std::set< SolverId > & GetKnownSolvers ()
 Returns the set of solvers known to ChooseBestSolver. More...
 
std::unique_ptr< SolverInterface > MakeSolver (const SolverId &id)
 Given the solver ID, create the solver with the matching ID. More...
 
std::unique_ptr< SolverInterface > MakeFirstAvailableSolver (const std::vector< SolverId > &solver_ids)
 Makes the first available and enabled solver. More...
 
std::vector< SolverId > GetAvailableSolvers (ProgramType prog_type)
 Returns the list of available and enabled solvers that definitely accept all programs of the given program type. More...
 
std::string_view to_string (CommonSolverOption)
 Returns the short, unadorned name of the option, e.g., kPrintFileName. More...
 
std::ostream & operator<< (std::ostream &, CommonSolverOption)
 (Deprecated.) More...
 
std::shared_ptr< QuadraticCost > MakeQuadraticErrorCost (const Eigen::Ref< const Eigen::MatrixXd > &Q, const Eigen::Ref< const Eigen::VectorXd > &x_desired)
 Creates a cost term of the form (x-x_desired)'Q(x-x_desired). More...
 
std::shared_ptr< QuadraticCost > Make2NormSquaredCost (const Eigen::Ref< const Eigen::MatrixXd > &A, const Eigen::Ref< const Eigen::VectorXd > &b)
 Creates a quadratic cost of the form |Ax-b|²=(Ax-b)ᵀ(Ax-b) More...
 
template<typename FF >
std::shared_ptr< Cost > MakeFunctionCost (FF &&f)
 Converts an input of type F to a nonlinear cost. More...
 
VectorXDecisionVariable ConcatenateVariableRefList (const VariableRefList &var_list)
 Concatenates each element in var_list into a single Eigen vector of decision variables, returns this concatenated vector. More...
 
std::ostream & operator<< (std::ostream &os, const EvaluatorBase &e)
 Print out the evaluator. More...
 
ProgramType GetProgramType (const MathematicalProgram &prog)
 Returns the type of the optimization program (LP, QP, etc), based on the properties of its cost/constraints/variables. More...
 
VectorXIndeterminate ConcatenateIndeterminatesRefList (const IndeterminatesRefList &var_list)
 Concatenates each element in var_list into a single Eigen vector of indeterminates, returns this concatenated vector. More...
 
Eigen::Matrix< int, -1, -1, Eigen::RowMajor > EnumerateIntegerSolutions (const Eigen::Ref< const Eigen::MatrixXi > &A, const Eigen::Ref< const Eigen::VectorXi > &b, const Eigen::Ref< const Eigen::VectorXi > &lower_bound, const Eigen::Ref< const Eigen::VectorXi > &upper_bound)
 Finds all integer solutions x to the linear inequalities. More...
 
Binding< LinearConstraint > CreateLogicalAndConstraint (const symbolic::Expression &b1, const symbolic::Expression &b2, const symbolic::Expression &b1_and_b2)
 Adds linear constraints, such that when b1, b2, b1_and_b2 satisfy the constraints, and b1, b2 take binary values, it is guaranteed that b1_and_b2 = b1 ∧ b2 (b1 and b2). More...
 
Binding< LinearConstraint > CreateLogicalOrConstraint (const symbolic::Expression &b1, const symbolic::Expression &b2, const symbolic::Expression &b1_or_b2)
 Adds linear constraints, such that when b1, b2, b1_or_b2 satisfy the constraints, and b1, b2 take binary values, it is guaranteed that b1_or_b2 = b1 ∨ b2 (b1 or b2). More...
 
Binding< LinearConstraint > CreateLogicalXorConstraint (const symbolic::Expression &b1, const symbolic::Expression &b2, const symbolic::Expression &b1_xor_b2)
 Add linear constraints, such that when b1, b2, b1_xor_b2 satisfy the constraints, and b1, b2 take binary values, it is guaranteed that b1_xor_b2 = b1 ⊕ b2 (b1 exclusive xor b2). More...
 
Binding< LinearConstraint > CreateBinaryCodeMatchConstraint (const VectorX< symbolic::Expression > &code, const Eigen::Ref< const Eigen::VectorXi > &expected, const symbolic::Expression &match)
 Create linear constraints such that, when these constraints are satisfied, match = 1 if and only if code == expected, otherwise match = 0. More...
 
std::ostream & operator<< (std::ostream &os, const MathematicalProgram &prog)
 
double GetVariableValue (const symbolic::Variable &var, const std::optional< std::unordered_map< symbolic::Variable::Id, int >> &variable_index, const Eigen::Ref< const Eigen::VectorXd > &variable_values)
 Retrieve the value of a single variable var from variable_values. More...
 
template<typename Derived >
std::enable_if_t< std::is_same_v< typename Derived::Scalar, symbolic::Variable >, MatrixLikewise< double, Derived > > GetVariableValue (const Eigen::MatrixBase< Derived > &var, const std::optional< std::unordered_map< symbolic::Variable::Id, int >> &variable_index, const Eigen::Ref< const Eigen::VectorXd > &variable_values)
 Overload GetVariableValue() function, but for an Eigen matrix of decision variables. More...
 
void ExponentiallySmoothedHingeLoss (double x, double *penalty, double *dpenalty_dx)
 A hinge loss function smoothed by exponential function. More...
 
void QuadraticallySmoothedHingeLoss (double x, double *penalty, double *dpenalty_dx)
 A linear hinge loss, smoothed with a quadratic loss near the origin. More...
 
constexpr int CeilLog2 (int n)
 Return ⌈log₂(n)⌉, namely the minimal integer no smaller than log₂(n), with base 2. More...
 
template<typename Derived >
std::enable_if_t< drake::is_eigen_vector_of< Derived, symbolic::Expression >::value, typename LogarithmicSos2NewBinaryVariables< Derived::RowsAtCompileTime >::type > AddLogarithmicSos2Constraint (MathematicalProgram *prog, const Eigen::MatrixBase< Derived > &lambda, const std::string &binary_variable_name="y")
 Adds the special ordered set 2 (SOS2) constraint. More...
 
void AddLogarithmicSos2Constraint (MathematicalProgram *prog, const Eigen::Ref< const VectorX< symbolic::Expression >> &lambda, const Eigen::Ref< const VectorX< symbolic::Expression >> &y)
 Adds the special ordered set 2 (SOS2) constraint. More...
 
void AddSos2Constraint (MathematicalProgram *prog, const Eigen::Ref< const VectorX< symbolic::Expression >> &lambda, const Eigen::Ref< const VectorX< symbolic::Expression >> &y)
 Adds the special ordered set 2 (SOS2) constraint. More...
 
void AddLogarithmicSos1Constraint (MathematicalProgram *prog, const Eigen::Ref< const VectorX< symbolic::Expression >> &lambda, const Eigen::Ref< const VectorXDecisionVariable > &y, const Eigen::Ref< const Eigen::MatrixXi > &binary_encoding)
 Adds the special ordered set of type 1 (SOS1) constraint. More...
 
std::pair< VectorX< symbolic::Variable >, VectorX< symbolic::Variable > > AddLogarithmicSos1Constraint (MathematicalProgram *prog, int num_lambda)
 Adds the special ordered set of type 1 (SOS1) constraint. More...
 
std::string to_string (IntervalBinning interval_binning)
 
std::ostream & operator<< (std::ostream &os, const IntervalBinning &binning)
 
template<typename DerivedPhiX , typename DerivedPhiY , typename DerivedBx , typename DerivedBy >
std::enable_if_t< is_eigen_vector_of< DerivedPhiX, double >::value &&is_eigen_vector_of< DerivedPhiY, double >::value &&is_eigen_vector_of< DerivedBx, symbolic::Expression >::value &&is_eigen_vector_of< DerivedBy, symbolic::Expression >::value, MatrixDecisionVariable< DerivedPhiX::RowsAtCompileTime, DerivedPhiY::RowsAtCompileTime > > AddBilinearProductMcCormickEnvelopeSos2 (MathematicalProgram *prog, const symbolic::Variable &x, const symbolic::Variable &y, const symbolic::Expression &w, const DerivedPhiX &phi_x, const DerivedPhiY &phi_y, const DerivedBx &Bx, const DerivedBy &By, IntervalBinning binning)
 Add constraints to the optimization program, such that the bilinear product x * y is approximated by w, using Special Ordered Set of Type 2 (sos2) constraint. More...
 
void AddBilinearProductMcCormickEnvelopeMultipleChoice (MathematicalProgram *prog, const symbolic::Variable &x, const symbolic::Variable &y, const symbolic::Expression &w, const Eigen::Ref< const Eigen::VectorXd > &phi_x, const Eigen::Ref< const Eigen::VectorXd > &phi_y, const Eigen::Ref< const VectorX< symbolic::Expression >> &Bx, const Eigen::Ref< const VectorX< symbolic::Expression >> &By)
 Add constraints to the optimization program, such that the bilinear product x * y is approximated by w, using Mixed Integer constraint with "Multiple Choice" model. More...
 
std::string to_string (MixedIntegerRotationConstraintGenerator::Approach type)
 
std::ostream & operator<< (std::ostream &os, const MixedIntegerRotationConstraintGenerator::Approach &type)
 
AddRotationMatrixBoxSphereIntersectionReturn AddRotationMatrixBoxSphereIntersectionMilpConstraints (const Eigen::Ref< const MatrixDecisionVariable< 3, 3 >> &R, int num_intervals_per_half_axis, MathematicalProgram *prog)
 Adds binary variables that constrain the value of the column and row vectors of R, in order to add the following (in some cases non-convex) constraints as an MILP. More...
 
std::pair< Eigen::MatrixXd, Eigen::MatrixXd > DecomposeNonConvexQuadraticForm (const Eigen::Ref< const Eigen::MatrixXd > &Q)
 For a non-convex homogeneous quadratic form xᵀQx, where Q is not necessarily a positive semidefinite matrix, we decompose it as a difference between two convex homogeneous quadratic forms xᵀQx = xᵀQ₁x - xᵀQ₂x, Q₁, Q₂ are positive semidefinite. More...
 
std::tuple< Binding< LinearConstraint >, std::vector< Binding< RotatedLorentzConeConstraint > >, VectorXDecisionVariable > AddRelaxNonConvexQuadraticConstraintInTrustRegion (MathematicalProgram *prog, const Eigen::Ref< const VectorXDecisionVariable > &x, const Eigen::Ref< const Eigen::MatrixXd > &Q1, const Eigen::Ref< const Eigen::MatrixXd > &Q2, const Eigen::Ref< const VectorXDecisionVariable > &y, const Eigen::Ref< const Eigen::VectorXd > &p, double lower_bound, double upper_bound, const Eigen::Ref< const Eigen::VectorXd > &linearization_point, double trust_region_gap)
 For a non-convex quadratic constraint lb ≤ xᵀQ₁x - xᵀQ₂x + pᵀy ≤ ub where Q₁, Q₂ are both positive semidefinite matrices. More...
 
bool AreRequiredAttributesSupported (const ProgramAttributes &required, const ProgramAttributes &supported, std::string *unsupported_message=nullptr)
 Returns true iff required is a subset of supported. More...
 
std::string to_string (const ProgramAttribute &)
 
std::ostream & operator<< (std::ostream &, const ProgramAttribute &)
 
std::string to_string (const ProgramAttributes &)
 
std::ostream & operator<< (std::ostream &, const ProgramAttributes &)
 
std::string to_string (const ProgramType &)
 
std::ostream & operator<< (std::ostream &, const ProgramType &)
 
MatrixDecisionVariable< 3, 3 > NewRotationMatrixVars (MathematicalProgram *prog, const std::string &name="R")
 Allocates a 3x3 matrix of decision variables with the trivial bounding box constraint ensuring all elements are [-1,1], and the linear constraint imposing -1 <= trace(R) <= 3. More...
 
void AddBoundingBoxConstraintsImpliedByRollPitchYawLimits (MathematicalProgram *prog, const Eigen::Ref< const MatrixDecisionVariable< 3, 3 >> &R, RollPitchYawLimits limits=kNoLimits)
 Applies very conservative limits on the entries of R for the cases when rotations can be limited (for instance, if you want to search over rotations, but there is an obvious symmetry in the problem so that e.g. More...
 
void AddRotationMatrixSpectrahedralSdpConstraint (MathematicalProgram *prog, const Eigen::Ref< const MatrixDecisionVariable< 3, 3 >> &R)
 Adds constraint (10) from https://arxiv.org/pdf/1403.4914.pdf , which exactly represents the convex hull of all rotation matrices in 3D. More...
 
void AddRotationMatrixOrthonormalSocpConstraint (MathematicalProgram *prog, const Eigen::Ref< const MatrixDecisionVariable< 3, 3 >> &R)
 Adds a set of convex constraints which approximate the set of orthogonal matrices, O(3). More...
 
std::string to_string (const RemoveFreeVariableMethod &)
 
bool GenerateSDPA (const MathematicalProgram &prog, const std::string &file_name, RemoveFreeVariableMethod method=RemoveFreeVariableMethod::kNullspace)
 SDPA is a format to record an SDP problem. More...
 
std::unique_ptr< MathematicalProgram > MakeSemidefiniteRelaxation (const MathematicalProgram &prog, const SemidefiniteRelaxationOptions &options={})
 Constructs a new MathematicalProgram which represents the semidefinite programming convex relaxation of the (likely nonconvex) program prog. More...
 
std::unique_ptr< MathematicalProgram > MakeSemidefiniteRelaxation (const MathematicalProgram &prog, const std::vector< symbolic::Variables > &variable_groups, const SemidefiniteRelaxationOptions &options={})
 A version of MakeSemidefiniteRelaxation that allows for specifying the sparsity of the relaxation. More...
 
std::string to_string (SolutionResult solution_result)
 
std::ostream & operator<< (std::ostream &os, SolutionResult solution_result)
 
MathematicalProgramResult Solve (const MathematicalProgram &prog, const std::optional< Eigen::VectorXd > &initial_guess, const std::optional< SolverOptions > &solver_options)
 Solves an optimization program, with optional initial guess and solver options. More...
 
MathematicalProgramResult Solve (const MathematicalProgram &prog, const Eigen::Ref< const Eigen::VectorXd > &initial_guess)
 Solves an optimization program with a given initial guess. More...
 
MathematicalProgramResult Solve (const MathematicalProgram &prog)
 
std::vector< MathematicalProgramResult > SolveInParallel (const std::vector< const MathematicalProgram * > &progs, const std::vector< const Eigen::VectorXd * > *initial_guesses, const std::vector< const SolverOptions * > *solver_options, const std::vector< std::optional< SolverId >> *solver_ids, Parallelism parallelism=Parallelism::Max(), bool dynamic_schedule=false)
 Solves progs[i] into result[i], optionally using initial_guess[i] and solver_options[i] if given, by invoking the solver at solver_ids[i] if provided. More...
 
std::vector< MathematicalProgramResult > SolveInParallel (const std::vector< const MathematicalProgram * > &progs, const std::vector< const Eigen::VectorXd * > *initial_guesses=nullptr, const SolverOptions *solver_options=nullptr, const std::optional< SolverId > &solver_id=std::nullopt, Parallelism parallelism=Parallelism::Max(), bool dynamic_schedule=false)
 Provides the same functionality as SolveInParallel, but allows for specifying a single solver id and solver option that is used when solving all programs. More...
 
bool operator== (const SolverId &, const SolverId &)
 
bool operator!= (const SolverId &, const SolverId &)
 
std::ostream & operator<< (std::ostream &, const SolverId &)
 
std::string to_string (const SolverOptions &)
 
std::ostream & operator<< (std::ostream &, const SolverOptions &)
 
drake::VectorX< symbolic::Monomial > ConstructMonomialBasis (const drake::symbolic::Polynomial &p)
 Given input polynomial p, outputs a set M of monomials with the following guarantee: if p = f1*f1 + f2*f2 + ... More...
 

Typedef Documentation

◆ DecisionVariable

using DecisionVariable = symbolic::Variable

◆ IndeterminatesRefList

using IndeterminatesRefList = std::list<Eigen::Ref<const VectorXIndeterminate> >

◆ MatrixDecisionVariable

using MatrixDecisionVariable = Eigen::Matrix<symbolic::Variable, rows, cols>

◆ MatrixIndeterminate

using MatrixIndeterminate = Eigen::Matrix<symbolic::Variable, rows, cols>

MatrixIndeterminate<int, int> is used as an alias for Eigen::Matrix<symbolic::Variable, int, int>.

After resolving aliases, a compiler does not distinguish between these two. All indeterminates are variables of type symbolic::Variable::Type::CONTINUOUS (by default).

Template Parameters
rows The number of rows in the new matrix containing indeterminates.
cols The number of columns in the new matrix containing indeterminates.

◆ MatrixXDecisionVariable

using MatrixXDecisionVariable = MatrixDecisionVariable<Eigen::Dynamic, Eigen::Dynamic>

◆ MatrixXIndeterminate

using MatrixXIndeterminate = MatrixIndeterminate<Eigen::Dynamic, Eigen::Dynamic>

MatrixXIndeterminate is used as an alias for Eigen::Matrix<symbolic::Variable, Eigen::Dynamic, Eigen::Dynamic>.

See also
MatrixIndeterminate<int, int>

◆ MinimumValuePenaltyFunction

using MinimumValuePenaltyFunction = std::function<void(double x, double* penalty, double* dpenalty_dx)>

Computes the penalty function φ(x) and its derivatives dφ(x)/dx.

Valid penalty functions must meet the following criteria:

  1. φ(x) ≥ 0 ∀ x ∈ ℝ.
  2. dφ(x)/dx ≤ 0 ∀ x ∈ ℝ.
  3. φ(x) = 0 ∀ x ≥ 0.
  4. dφ(x)/dx < 0 ∀ x < 0.

If dpenalty_dx is nullptr, the function should only compute φ(x).
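
For illustration only, a minimal sketch of a function satisfying the criteria above (Drake provides ExponentiallySmoothedHingeLoss and QuadraticallySmoothedHingeLoss; the quadratic penalty below is just an assumed example):

  // Sketch: φ(x) = x² for x < 0 and φ(x) = 0 for x ≥ 0, matching the
  // MinimumValuePenaltyFunction signature.
  void ExampleQuadraticPenalty(double x, double* penalty, double* dpenalty_dx) {
    if (x >= 0) {
      *penalty = 0.0;
      if (dpenalty_dx != nullptr) *dpenalty_dx = 0.0;
    } else {
      *penalty = x * x;
      if (dpenalty_dx != nullptr) *dpenalty_dx = 2 * x;
    }
  }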

◆ ProgramAttributes

using ProgramAttributes = std::unordered_set<ProgramAttribute, DefaultHash>

◆ RollPitchYawLimits

typedef uint32_t RollPitchYawLimits

◆ VariableRefList

using VariableRefList = std::list<Eigen::Ref<const VectorXDecisionVariable> >

◆ VectorDecisionVariable

using VectorDecisionVariable = MatrixDecisionVariable<rows, 1>

◆ VectorIndeterminate

using VectorIndeterminate = MatrixIndeterminate<rows, 1>

VectorIndeterminate<int> is used as an alias for Eigen::Matrix<symbolic::Variable, int, 1>.

Template Parameters
rows The number of rows in the new matrix containing indeterminates.
See also
MatrixIndeterminate<int, int>

◆ VectorXDecisionVariable

using VectorXDecisionVariable = VectorDecisionVariable<Eigen::Dynamic>

◆ VectorXIndeterminate

using VectorXIndeterminate = VectorIndeterminate<Eigen::Dynamic>

VectorXIndeterminate is used as an alias for Eigen::Matrix<symbolic::Variable, Eigen::Dynamic, 1>.

See also
MatrixIndeterminate<int, int>

Enumeration Type Documentation

◆ CommonSolverOption

enum CommonSolverOption

Some options can be applied not just to one solver, but to many solvers (for example, many solvers support printing out the progress in each iteration).

CommonSolverOption contains the names of these supported options. The user can use these options as the "key" in SolverOptions::SetOption(). If a solver doesn't support the option, the option is ignored.

Enumerator
kPrintFileName 

Many solvers support printing the progress of each iteration to a file.

The user can call SolverOptions::SetOption(kPrintFileName, "filename.log") to enable this. To disable, set the option to the empty string "", which indicates that no file should be written.

kPrintToConsole 

Many solvers support printing the progress of each iteration to the console.

The user can call SolverOptions::SetOption(kPrintToConsole, 1) to enable this, or use 0 to turn off printing to the console.

kStandaloneReproductionFileName 

Some solvers support writing a standalone (e.g., it does not depend on Drake) minimal reproduction of the problem to a file.

This is especially useful for sending bug reports upstream to the developers of the solver. The user can call SolverOptions::SetOption(kStandaloneReproductionFileName, "filename.txt") to enable this. To disable, set the option to the empty string "", which indicates that no file should be written.

kMaxThreads 

Some solvers are multi-threaded.

The user can request the maximum number of threads used by the solver with this int option. When not set, the value defaults to Parallelism::Max().num_threads(), which can be controlled via the DRAKE_NUM_THREADS environment variable.

Precondition
The number of threads must be greater than 0.
Note
Setting this value higher than the actual hardware concurrency may result in degraded performance. It is recommended to set this value lower than or equal to Parallelism::Max().num_threads().
A solver may choose to use fewer threads than the value specified.
This option does NOT disable multi-threading in BLAS/LAPACK, which is used by many solvers under the hood. Therefore, some internal operations of the solvers may still be multi-threaded.
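
A brief usage sketch of these common options with SolverOptions (prog is an assumed existing MathematicalProgram; the option values are illustrative):

  SolverOptions options;
  // Print the solver progress to the console.
  options.SetOption(CommonSolverOption::kPrintToConsole, 1);
  // Also log the progress to a file; "" would disable the file output.
  options.SetOption(CommonSolverOption::kPrintFileName, "solver_progress.log");
  // Cap multi-threaded solvers at two threads.
  options.SetOption(CommonSolverOption::kMaxThreads, 2);
  // Options that a particular solver does not support are ignored.
  const MathematicalProgramResult result = Solve(prog, std::nullopt, options);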

◆ IntervalBinning

enum IntervalBinning

For a continuous variable whose range is cut into small intervals, we will use binary variables to represent which interval the continuous variable is in.

We support two representations, either using logarithmic number of binary variables, or linear number of binary variables. For more details,

See also
AddLogarithmicSos2Constraint and AddSos2Constraint
Enumerator
kLogarithmic 
kLinear 

◆ ProgramAttribute

enum ProgramAttribute
Enumerator
kGenericCost 

A generic cost, doesn't belong to any specific cost type below.

kGenericConstraint 

A generic constraint, doesn't belong to any specific constraint type below.

kQuadraticCost 

A quadratic function as the cost.

kQuadraticConstraint 

A constraint on a quadratic function.

kLinearCost 

A linear function as the cost.

kLinearConstraint 

A constraint on a linear function.

kLinearEqualityConstraint 

An equality constraint on a linear function.

kLinearComplementarityConstraint 

A linear complementarity constraint in the form 0 ≤ z ⊥ Mz+q ≥ 0.

kLorentzConeConstraint 

A Lorentz cone constraint.

kRotatedLorentzConeConstraint 

A rotated Lorentz cone constraint.

kPositiveSemidefiniteConstraint 

A positive semidefinite constraint.

kExponentialConeConstraint 

An exponential cone constraint.

kL2NormCost 

An L2 norm cost ‖Ax+b‖₂.

kBinaryVariable 

A variable taking binary values {0, 1}.

kCallback 

Supports a callback while solving the problem.

◆ ProgramType

enum ProgramType

A coarse categorization of the optimization problem based on the type of constraints/costs/variables.

Notice that Drake chooses the solver based on a finer category; for example we have a specific solver for equality-constrained convex QP.

Enumerator
kLP 

Linear Programming, with a linear cost and linear constraints.

kQP 

Quadratic Programming, with a convex quadratic cost and linear constraints.

kSOCP 

Second-order Cone Programming, with a linear cost and second-order cone constraints.

kSDP 

Semidefinite Programming, with a linear cost and positive semidefinite matrix constraints.

kGP 

Geometric Programming, with a linear cost and exponential cone constraints.

kCGP 

Conic Geometric Programming; this is a superset that unifies GP and SDP.

Refer to http://people.lids.mit.edu/pari/cgp_preprint.pdf for more details.

kMILP 

Mixed-integer Linear Programming.

LP with some variables taking binary values.

kMIQP 

Mixed-integer Quadratic Programming.

QP with some variables taking binary values.

kMISOCP 

Mixed-integer Second-order Cone Programming.

SOCP with some variables taking binary values.

kMISDP 

Mixed-integer Semidefinite Programming.

SDP with some variables taking binary values.

kQuadraticCostConicConstraint 

Convex quadratic cost with nonlinear conic constraints.

kNLP 

Nonlinear Programming.

Programs with generic costs or constraints.

kLCP 

Linear Complementarity Programs.

Programs with linear complementarity constraints and no cost.

kUnknown 

Does not fall into any of the types above.

◆ RemoveFreeVariableMethod

The SDPA format doesn't accept free variables; namely, the problem it solves is of the form P1:

max tr(C * X)
s.t tr(Aᵢ*X) = aᵢ
    X ≽ 0.

Notice that the decision variable X has to be in the proper cone X ≽ 0, and the format doesn't accept free variables (variables without the conic constraint). On the other hand, most real-world applications require free variables, namely problems of the form P2:

max tr(C * X) + dᵀs
s.t tr(Aᵢ*X) + bᵢᵀs = aᵢ
    X ≽ 0
    s is free.

In order to remove the free variables, we consider three approaches.

  1. Replace a free variable s with two variables s = p - q, p ≥ 0, q ≥ 0.
  2. First write the dual of the problem P2 as D2

    min aᵀy
    s.t ∑ᵢ yᵢAᵢ - C = Z
        Z ≽ 0
        Bᵀ * y = d,
    

    where bᵢᵀ is the i'th row of B. The last constraint Bᵀ * y = d means y = ŷ + Nt, where Bᵀ * ŷ = d, and N is the null space of Bᵀ. Hence, D2 is equivalent to the following problem, D3

    min aᵀNt + aᵀŷ
    s.t ∑ᵢ tᵢFᵢ - (C -∑ᵢ ŷᵢAᵢ) = Z
        Z ≽ 0,
    

    where Fᵢ = ∑ⱼ NⱼᵢAⱼ. D3 is the dual of the following primal problem P3 without free variables

    max tr((C-∑ᵢ ŷᵢAᵢ)*X̂) + aᵀŷ
    s.t tr(FᵢX̂) = (Nᵀa)(i)
        X̂ ≽ 0.
    

    Then (X, s) = (X̂, B⁻¹(a - tr(Aᵢ X̂))) is the solution to the original problem P2.

  3. Add a slack variable t, with the Lorentz cone constraint t ≥ sqrt(sᵀs).
Enumerator
kTwoSlackVariables 

Approach 1, replace a free variable s as s = y⁺ - y⁻, y⁺ ≥ 0, y⁻ ≥ 0.

kNullspace 

Approach 2, reformulate the dual problem by considering the nullspace of the linear constraint in the dual.

kLorentzConeSlack 

Approach 3, add a slack variable t with the Lorentz cone constraint t ≥ sqrt(sᵀs).
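
A brief usage sketch with GenerateSDPA (see GenerateSDPA in the function list above), where prog is an assumed existing MathematicalProgram and the file name is illustrative:

  // Write prog in SDPA format, removing free variables via Approach 2
  // (the nullspace reformulation).
  const bool ok = GenerateSDPA(prog, "my_sdp",
                               RemoveFreeVariableMethod::kNullspace);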

◆ RollPitchYawLimitOptions

Enumerator
kNoLimits 
kRPYError 

Do not use, to avoid & vs. && typos.

kRoll_NegPI_2_to_PI_2 
kRoll_0_to_PI 
kPitch_NegPI_2_to_PI_2 
kPitch_0_to_PI 
kYaw_NegPI_2_to_PI_2 
kYaw_0_to_PI 
kRoll_0_to_PI_2 
kPitch_0_to_PI_2 
kYaw_0_to_PI_2 

◆ SolutionResult

Enumerator
kSolutionFound 

Found the optimal solution.

kInvalidInput 

Invalid input.

kInfeasibleConstraints 

The primal is infeasible.

kUnbounded 

The primal is unbounded.

kSolverSpecificError 

Solver-specific error.

(Try MathematicalProgramResult::get_solver_details() or enabling verbose solver output.)

kInfeasibleOrUnbounded 

The primal is either infeasible or unbounded.

kIterationLimit 

Reached the iteration limit.

kDualInfeasible 

Dual problem is infeasible.

In this case we cannot infer the status of the primal problem.

kSolutionResultNotSet 

The initial (invalid) solution result.

This value should be overwritten by the solver during Solve().
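
A typical way to inspect this value after solving, where prog is an assumed existing MathematicalProgram and x a vector of its decision variables:

  const MathematicalProgramResult result = Solve(prog);
  if (result.get_solution_result() == SolutionResult::kSolutionFound) {
    // Query the optimal values of the decision variables.
    const Eigen::VectorXd x_value = result.GetSolution(x);
  } else {
    std::cout << "Solve failed: " << to_string(result.get_solution_result())
              << std::endl;
  }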

◆ SolverType

enum SolverType

This type only exists for backwards compatibility, and should not be used in new code.

Enumerator
kClp 
kCsdp 
kEqualityConstrainedQP 
kGurobi 
kIpopt 
kLinearSystem 
kMobyLCP 
kMosek 
kNlopt 
kOsqp 
kSnopt 
kScs 
kUnrevisedLemke 

Function Documentation

◆ AddBilinearProductMcCormickEnvelopeMultipleChoice()

void drake::solvers::AddBilinearProductMcCormickEnvelopeMultipleChoice ( MathematicalProgram *  prog,
const symbolic::Variable &  x,
const symbolic::Variable &  y,
const symbolic::Expression &  w,
const Eigen::Ref< const Eigen::VectorXd > &  phi_x,
const Eigen::Ref< const Eigen::VectorXd > &  phi_y,
const Eigen::Ref< const VectorX< symbolic::Expression >> &  Bx,
const Eigen::Ref< const VectorX< symbolic::Expression >> &  By 
)

Add constraints to the optimization program, such that the bilinear product x * y is approximated by w, using Mixed Integer constraint with "Multiple Choice" model.

To do so, we assume that the range of x is [x_min, x_max], and the range of y is [y_min, y_max]. We first consider two arrays φˣ, φʸ, satisfying

x_min = φˣ₀ < φˣ₁ < ... < φˣₘ = x_max
y_min = φʸ₀ < φʸ₁ < ... < φʸₙ = y_max

, and divide the range of x into intervals [φˣ₀, φˣ₁], [φˣ₁, φˣ₂], ... , [φˣₘ₋₁, φˣₘ] and the range of y into intervals [φʸ₀, φʸ₁], [φʸ₁, φʸ₂], ... , [φʸₙ₋₁, φʸₙ]. The xy plane is thus cut into rectangles, with each rectangle as [φˣᵢ, φˣᵢ₊₁] x [φʸⱼ, φʸⱼ₊₁]. The convex hull of the surface z = x * y for x, y in each rectangle is a tetrahedron. We then approximate the bilinear product x * y with w, such that (x, y, w) is in one of the tetrahedrons.

Parameters
prog The optimization problem to which the constraints will be added.
x A variable in the bilinear product.
y A variable in the bilinear product.
w The expression that approximates the bilinear product x * y.
phi_x φˣ in the documentation above. Will be used to cut the range of x into small intervals.
phi_y φʸ in the documentation above. Will be used to cut the range of y into small intervals.
Bx The binary-valued expression indicating which interval x is in. Bx(i) = 1 => φˣᵢ ≤ x ≤ φˣᵢ₊₁.
By The binary-valued expression indicating which interval y is in. By(j) = 1 => φʸⱼ ≤ y ≤ φʸⱼ₊₁.

One formulation of the constraint is

x = ∑ᵢⱼ x̂ᵢⱼ
y = ∑ᵢⱼ ŷᵢⱼ
Bˣʸᵢⱼ = Bˣᵢ ∧ Bʸⱼ
∑ᵢⱼ Bˣʸᵢⱼ = 1
φˣᵢ Bˣʸᵢⱼ ≤ x̂ᵢⱼ ≤ φˣᵢ₊₁ Bˣʸᵢⱼ
φʸⱼ Bˣʸᵢⱼ ≤ ŷᵢⱼ ≤ φʸⱼ₊₁ Bˣʸᵢⱼ
w ≥ ∑ᵢⱼ (x̂ᵢⱼ φʸⱼ + φˣᵢ ŷᵢⱼ - φˣᵢ φʸⱼ Bˣʸᵢⱼ)
w ≥ ∑ᵢⱼ (x̂ᵢⱼ φʸⱼ₊₁ + φˣᵢ₊₁ ŷᵢⱼ - φˣᵢ₊₁ φʸⱼ₊₁ Bˣʸᵢⱼ)
w ≤ ∑ᵢⱼ (x̂ᵢⱼ φʸⱼ + φˣᵢ₊₁ ŷᵢⱼ - φˣᵢ₊₁ φʸⱼ Bˣʸᵢⱼ)
w ≤ ∑ᵢⱼ (x̂ᵢⱼ φʸⱼ₊₁ + φˣᵢ ŷᵢⱼ - φˣᵢ φʸⱼ₊₁ Bˣʸᵢⱼ)

The "logical and" constraint Bˣʸᵢⱼ = Bˣᵢ ∧ Bʸⱼ can be imposed as

Bˣʸᵢⱼ ≥ Bˣᵢ + Bʸⱼ - 1
Bˣʸᵢⱼ ≤ Bˣᵢ
Bˣʸᵢⱼ ≤ Bʸⱼ
0 ≤ Bˣʸᵢⱼ ≤ 1

This formulation will introduce slack variables x̂, ŷ and Bˣʸ, in total 3 * m * n variables.

In order to reduce the number of slack variables, we can further simplify these constraints, by defining two vectors x̅ ∈ ℝⁿ, y̅ ∈ ℝᵐ as

x̅ⱼ = ∑ᵢ x̂ᵢⱼ
y̅ᵢ = ∑ⱼ ŷᵢⱼ

and the constraints above can be re-formulated using x̅ and y̅ as

x = ∑ⱼ x̅ⱼ
y = ∑ᵢ y̅ᵢ
Bˣʸᵢⱼ = Bˣᵢ ∧ Bʸⱼ
∑ᵢⱼ Bˣʸᵢⱼ = 1
∑ᵢ φˣᵢ Bˣʸᵢⱼ ≤ x̅ⱼ ≤ ∑ᵢ φˣᵢ₊₁ Bˣʸᵢⱼ
∑ⱼ φʸⱼ Bˣʸᵢⱼ ≤ y̅ᵢ ≤ ∑ⱼ φʸⱼ₊₁ Bˣʸᵢⱼ
w ≥ ∑ⱼ( x̅ⱼ φʸⱼ ) + ∑ᵢ( φˣᵢ y̅ᵢ ) - ∑ᵢⱼ( φˣᵢ φʸⱼ Bˣʸᵢⱼ )
w ≥ ∑ⱼ( x̅ⱼ φʸⱼ₊₁ ) + ∑ᵢ( φˣᵢ₊₁ y̅ᵢ ) - ∑ᵢⱼ( φˣᵢ₊₁ φʸⱼ₊₁ Bˣʸᵢⱼ )
w ≤ ∑ⱼ( x̅ⱼ φʸⱼ ) + ∑ᵢ( φˣᵢ₊₁ y̅ⱼ ) - ∑ᵢⱼ( φˣᵢ₊₁ φʸⱼ Bˣʸᵢⱼ )
w ≤ ∑ⱼ( x̅ⱼ φʸⱼ₊₁ ) + ∑ᵢ( φˣᵢ y̅ᵢ ) - ∑ᵢⱼ( φˣᵢ φʸⱼ₊₁ Bˣʸᵢⱼ ).

In this formulation, we introduce new continuous variables x̅, y̅, Bˣʸ. The total number of new variables is m + n + m * n.

In section 3.3 of Mixed-Integer Models for Nonseparable Piecewise Linear Optimization: Unifying Framework and Extensions by Juan P Vielma, Shabbir Ahmed and George Nemhauser, this formulation is called "Multiple Choice Model".

Note
We DO NOT add the constraint Bx(i) ∈ {0, 1}, By(j) ∈ {0, 1} in this function. It is the user's responsibility to ensure that these binary constraints are enforced. The users can also add cutting planes ∑ᵢBx(i) = 1, ∑ⱼBy(j) = 1. Without these two cutting planes, (x, y, w) is still in the McCormick envelope of z = x * y, but these two cutting planes "might" improve the computation speed in the mixed-integer solver.
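
A usage sketch (the ranges, interval counts, and variable names are illustrative assumptions):

  MathematicalProgram prog;
  const symbolic::Variable x = prog.NewContinuousVariables<1>("x")(0);
  const symbolic::Variable y = prog.NewContinuousVariables<1>("y")(0);
  const symbolic::Variable w = prog.NewContinuousVariables<1>("w")(0);
  // Two intervals per variable, assuming x, y ∈ [0, 1]: φˣ = φʸ = (0, 0.5, 1).
  const Eigen::Vector3d phi(0.0, 0.5, 1.0);
  // One indicator per interval. As noted above, this function does not declare
  // Bx, By binary itself, so we create them as binary variables here.
  const auto Bx = prog.NewBinaryVariables(2, "Bx");
  const auto By = prog.NewBinaryVariables(2, "By");
  AddBilinearProductMcCormickEnvelopeMultipleChoice(
      &prog, x, y, w, phi, phi,
      Bx.cast<symbolic::Expression>(), By.cast<symbolic::Expression>());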

◆ AddBilinearProductMcCormickEnvelopeSos2()

std::enable_if_t< is_eigen_vector_of<DerivedPhiX, double>::value && is_eigen_vector_of<DerivedPhiY, double>::value && is_eigen_vector_of<DerivedBx, symbolic::Expression>::value && is_eigen_vector_of<DerivedBy, symbolic::Expression>::value, MatrixDecisionVariable<DerivedPhiX::RowsAtCompileTime, DerivedPhiY::RowsAtCompileTime> > drake::solvers::AddBilinearProductMcCormickEnvelopeSos2 ( MathematicalProgram *  prog,
const symbolic::Variable &  x,
const symbolic::Variable &  y,
const symbolic::Expression &  w,
const DerivedPhiX &  phi_x,
const DerivedPhiY &  phi_y,
const DerivedBx &  Bx,
const DerivedBy &  By,
IntervalBinning  binning 
)

Add constraints to the optimization program, such that the bilinear product x * y is approximated by w, using Special Ordered Set of Type 2 (sos2) constraint.

To do so, we assume that the range of x is [x_min, x_max], and the range of y is [y_min, y_max]. We first consider two arrays φˣ, φʸ, satisfying

x_min = φˣ₀ < φˣ₁ < ... < φˣₘ = x_max
y_min = φʸ₀ < φʸ₁ < ... < φʸₙ = y_max

, and divide the range of x into intervals [φˣ₀, φˣ₁], [φˣ₁, φˣ₂], ... , [φˣₘ₋₁, φˣₘ] and the range of y into intervals [φʸ₀, φʸ₁], [φʸ₁, φʸ₂], ... , [φʸₙ₋₁, φʸₙ]. The xy plane is thus cut into rectangles, with each rectangle as [φˣᵢ, φˣᵢ₊₁] x [φʸⱼ, φʸⱼ₊₁]. The convex hull of the surface z = x * y for x, y in each rectangle is a tetrahedron. We then approximate the bilinear product x * y with w, such that (x, y, w) is in one of the tetrahedrons.

We use two different encoding schemes on the binary variables, to determine which interval is active. We can choose either linear or logarithmic binning. When using linear binning, for a variable with N intervals, we use N binary variables, and B(i) = 1 indicates the variable is in the i'th interval. When using logarithmic binning, we use ⌈log₂(N)⌉ binary variables. If these binary variables represent integer M in the reflected Gray code, then the continuous variable is in the M'th interval.

Parameters
prog The program to which the bilinear product constraint is added.
x The decision variable.
y The decision variable.
w The expression to approximate x * y.
phi_x The end points of the intervals for x.
phi_y The end points of the intervals for y.
Bx The binary variables for the interval in which x stays, encoded as described above.
By The binary variables for the interval in which y stays, encoded as described above.
binning Determines whether to use linear binning or logarithmic binning.
Returns
lambda The auxiliary continuous variables.

The constraints we impose are

x = (φˣ)ᵀ * ∑ⱼ λᵢⱼ
y = (φʸ)ᵀ * ∑ᵢ λᵢⱼ
w = ∑ᵢⱼ φˣᵢ * φʸⱼ * λᵢⱼ
Both ∑ⱼ λᵢⱼ = λ.rowwise().sum() and ∑ᵢ λᵢⱼ = λ.colwise().sum() satisfy SOS2
constraint.

If x ∈ [φx(M), φx(M+1)] and y ∈ [φy(N), φy(N+1)], then only λ(M, N), λ(M + 1, N), λ(M, N + 1) and λ(M+1, N+1) can be strictly positive, all other λ(i, j) are zero.

Note
We DO NOT add the constraint Bx(i) ∈ {0, 1}, By(j) ∈ {0, 1} in this function. It is the user's responsibility to ensure that these constraints are enforced.

◆ AddBoundingBoxConstraintsImpliedByRollPitchYawLimits()

void drake::solvers::AddBoundingBoxConstraintsImpliedByRollPitchYawLimits ( MathematicalProgram *  prog,
const Eigen::Ref< const MatrixDecisionVariable< 3, 3 >> &  R,
RollPitchYawLimits  limits = kNoLimits 
)

Applies very conservative limits on the entries of R for the cases when rotations can be limited (for instance, if you want to search over rotations, but there is an obvious symmetry in the problem so that e.g.

0 < pitch < PI need not be considered). A matrix so constrained may still contain rotations outside of this envelope. Note: For simple rotational symmetry over PI, prefer kPitch_NegPI_2_to_PI_2 (over 0_to_PI) because it adds one more constraint (when combined with constraints on roll and yaw). Note: The Roll-Pitch-Yaw angles follow the convention in RollPitchYaw, namely extrinsic rotations about Space-fixed x-y-z axes, respectively.

◆ AddLogarithmicSos1Constraint() [1/2]

void drake::solvers::AddLogarithmicSos1Constraint ( MathematicalProgram *  prog,
const Eigen::Ref< const VectorX< symbolic::Expression >> &  lambda,
const Eigen::Ref< const VectorXDecisionVariable > &  y,
const Eigen::Ref< const Eigen::MatrixXi > &  binary_encoding 
)

Adds the special ordered set of type 1 (SOS1) constraint.

Namely

  λ(0) + ... + λ(n-1) = 1
  λ(i) ≥ 0 ∀i
  ∃ j ∈ {0, 1, ..., n-1}, s.t λ(j) = 1

where one and only one of λ(i) is 1, all other λ(j) are 0. We will need to add ⌈log₂(n)⌉ binary variables, where n is the number of rows in λ. For more information, please refer to Modeling Disjunctive Constraints with a Logarithmic Number of Binary Variables and Constraints by J. Vielma and G. Nemhauser, 2011.

Parameters
prog The program to which the SOS1 constraint is added.
lambda lambda is in SOS1.
y The binary variables indicating which λ is positive. For a given assignment of the binary variables y, if (y(0), ..., y(⌈log₂(n)⌉-1)) represents integer M in binary_encoding, then only λ(M) is positive. Namely, if (y(0), ..., y(⌈log₂(n)⌉-1)) equals binary_encoding.row(M), then λ(M) = 1.
binary_encoding An n x ⌈log₂(n)⌉ matrix. binary_encoding.row(i) represents integer i. No two rows of binary_encoding can be the same.
Exceptions
std::exception if binary_encoding has a non-binary entry (an entry other than 0 or 1).

◆ AddLogarithmicSos1Constraint() [2/2]

std::pair<VectorX<symbolic::Variable>, VectorX<symbolic::Variable> > drake::solvers::AddLogarithmicSos1Constraint ( MathematicalProgram *  prog,
int  num_lambda 
)

Adds the special ordered set of type 1 (SOS1) constraint.

Namely

  λ(0) + ... + λ(n-1) = 1
  λ(i) ≥ 0 ∀i
  ∃ j ∈ {0, 1, ..., n-1}, s.t λ(j) = 1

where one and only one of λ(i) is 1, all other λ(j) are 0. We will need to add ⌈log₂(n)⌉ binary variables, where n is the number of rows in λ. For more information, please refer to Modeling Disjunctive Constraints with a Logarithmic Number of Binary Variables and Constraints by J. Vielma and G. Nemhauser, 2011.

Parameters
prog The program to which the SOS1 constraint is added.
num_lambda n in the documentation above.
Returns
(lambda, y) lambda is λ in the documentation above. Notice that λ are declared as continuous variables, but they only admit binary solutions. y are binary variables of size ⌈log₂(n)⌉. When this SOS1 constraint is satisfied, suppose that λ(i)=1 and λ(j)=0 ∀ j≠i; then y is the reflected Gray code of i. For example, suppose n = 8 and i = 5; then y is a vector of size ⌈log₂(n)⌉ = 3, and the value of y is (1, 1, 0), which equals 5 according to the reflected Gray code.
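
A brief sketch of this overload:

  MathematicalProgram prog;
  // λ has 8 entries, so y gets ⌈log₂(8)⌉ = 3 binary variables.
  const auto [lambda, y] = AddLogarithmicSos1Constraint(&prog, 8);
  // In any feasible mixed-integer solution exactly one λ(i) equals 1, and y
  // then holds the reflected Gray code of that index i.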

◆ AddLogarithmicSos2Constraint() [1/2]

std::enable_if_t< drake::is_eigen_vector_of<Derived, symbolic::Expression>::value, typename LogarithmicSos2NewBinaryVariables< Derived::RowsAtCompileTime>::type> drake::solvers::AddLogarithmicSos2Constraint ( MathematicalProgram *  prog,
const Eigen::MatrixBase< Derived > &  lambda,
const std::string &  binary_variable_name = "y" 
)

Adds the special ordered set 2 (SOS2) constraint:

  λ(0) + ... + λ(n) = 1
  ∀i. λ(i) ≥ 0
  ∃ j ∈ {0, 1, ..., n-1}, s.t λ(i) = 0 if i ≠ j and i ≠ j + 1

Namely at most two entries in λ can be strictly positive, and these two entries have to be adjacent. All other λ should be zero. Moreover, the non-zero λ satisfies λ(j) + λ(j + 1) = 1. We will need to add ⌈log₂(n - 1)⌉ binary variables, where n is the number of rows in λ. For more information, please refer to Modeling Disjunctive Constraints with a Logarithmic Number of Binary Variables and Constraints by J. Vielma and G. Nemhauser, 2011.

Parameters
prog Add the SOS2 constraint to this mathematical program.
lambda At most two entries in λ can be strictly positive, and these two entries have to be adjacent. All other entries are zero.
Returns
y The newly added binary variables. The assignment of the binary variables y implies which two λ can be strictly positive. With a binary assignment on y, suppose the integer M corresponds to (y(0), y(1), ..., y(⌈log₂(n - 1)⌉-1)) in Gray code; then only λ(M) and λ(M + 1) can be non-zero. For example, if the assignment of y = (1, 1), in Gray code, (1, 1) represents integer 2, so only λ(2) and λ(3) can be strictly positive.
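
A brief sketch, assuming λ has four entries (so ⌈log₂(4 - 1)⌉ = 2 binary variables are added):

  MathematicalProgram prog;
  const auto lambda = prog.NewContinuousVariables<4>("lambda");
  // Adds the SOS2 constraint on λ and returns the newly added binary variables.
  const auto y = AddLogarithmicSos2Constraint(
      &prog, lambda.cast<symbolic::Expression>(), "y");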

◆ AddLogarithmicSos2Constraint() [2/2]

void drake::solvers::AddLogarithmicSos2Constraint ( MathematicalProgram *  prog,
const Eigen::Ref< const VectorX< symbolic::Expression >> &  lambda,
const Eigen::Ref< const VectorX< symbolic::Expression >> &  y 
)

Adds the special ordered set 2 (SOS2) constraint.

See also
AddLogarithmicSos2Constraint.

◆ AddRotationMatrixBoxSphereIntersectionMilpConstraints()

AddRotationMatrixBoxSphereIntersectionReturn drake::solvers::AddRotationMatrixBoxSphereIntersectionMilpConstraints ( const Eigen::Ref< const MatrixDecisionVariable< 3, 3 >> &  R,
int  num_intervals_per_half_axis,
MathematicalProgram *  prog 
)

Adds binary variables that constrain the value of the column and row vectors of R, in order to add the following (in some cases non-convex) constraints as an MILP.

Specifically, for column vectors Ri, we constrain:

  • forall i, |Ri| = 1 ± envelope,
  • forall i,j. i ≠ j, Ri.dot(Rj) = 0 ± envelope,
  • R2 = R0.cross(R1) ± envelope, and again for R0=R1.cross(R2), and R1=R2.cross(R0).

Then all of the same constraints are also added to R^T. The size of the envelope decreases quickly as num_intervals_per_half_axis is increased.

Note
Creates 9*2*num_binary_variables_per_half_axis binary variables named "BRpos*(*,*)" and "BRneg*(*,*)", and the same number of continuous variables named "CRpos*(*,*)" and "CRneg*(*,*)".
The particular representation/algorithm here was developed in an attempt:
  • to enable efficient reuse of the variables between the constraints between multiple rows/columns (e.g. the constraints on Rᵀ use the same variables as the constraints on R), and
  • to facilitate branch-and-bound solution techniques – binary regions are layered so that constraining one region establishes constraints on large portions of SO(3), and hopefully confers "useful" constraints on the other binary variables.
Parameters
R The rotation matrix.
num_intervals_per_half_axis Number of intervals for a half axis.
prog The mathematical program to which the constraints are added.
Note
This method uses the same approach as MixedIntegerRotationConstraintGenerator with kBoxSphereIntersection, namely the feasible sets of both relaxations are the same. But they use different sets of binary variables, and thus the computation speed can differ inside optimization solvers.
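
A brief usage sketch (two intervals per half axis is an arbitrary choice; more intervals tighten the relaxation):

  MathematicalProgram prog;
  // 3x3 rotation-matrix decision variables (see NewRotationMatrixVars in the
  // function list above).
  const auto R = NewRotationMatrixVars(&prog);
  const auto milp_vars =
      AddRotationMatrixBoxSphereIntersectionMilpConstraints(R, 2, &prog);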

◆ AddRotationMatrixOrthonormalSocpConstraint()

void drake::solvers::AddRotationMatrixOrthonormalSocpConstraint ( MathematicalProgram *  prog,
const Eigen::Ref< const MatrixDecisionVariable< 3, 3 >> &  R 
)

Adds a set of convex constraints which approximate the set of orthogonal matrices, O(3).

Adds the bilinear constraints that each column Ri has length <= 1 and that Ri'Rj approx 0 via -2 + |Ri|^2 + |Rj|^2 <= 2Ri'Rj <= 2 - |Ri|^2 - |Rj|^2 (for all i!=j), using a second-order-cone relaxation. Additionally, the same constraints are applied to all of the rows.

◆ AddRotationMatrixSpectrahedralSdpConstraint()

void drake::solvers::AddRotationMatrixSpectrahedralSdpConstraint ( MathematicalProgram *  prog,
const Eigen::Ref< const MatrixDecisionVariable< 3, 3 >> &  R 
)

Adds constraint (10) from https://arxiv.org/pdf/1403.4914.pdf , which exactly represents the convex hull of all rotation matrices in 3D.

◆ AddSos2Constraint()

void drake::solvers::AddSos2Constraint ( MathematicalProgram *  prog,
const Eigen::Ref< const VectorX< symbolic::Expression >> &  lambda,
const Eigen::Ref< const VectorX< symbolic::Expression >> &  y 
)

Adds the special ordered set 2 (SOS2) constraint.

y(i) takes binary values (either 0 or 1).

  y(i) = 1 => λ(i) + λ(i + 1) = 1.
See also
AddLogarithmicSos2Constraint for a complete explanation on SOS2 constraint.
Parameters
prog The optimization program to which the SOS2 constraint is added.
lambda At most two entries in λ can be strictly positive, and these two entries have to be adjacent. All other entries are zero. Moreover, these two entries should sum up to 1.
y y(i) takes a binary value, and determines which two entries in λ can be strictly positive. Throws a runtime error if y.rows() != lambda.rows() - 1.

◆ AggregateBoundingBoxConstraints() [1/3]

std::unordered_map<symbolic::Variable, Bound> drake::solvers::AggregateBoundingBoxConstraints ( const std::vector< Binding< BoundingBoxConstraint >> &  bounding_box_constraints)

Aggregates many bounding box constraints, returns the intersection (the tightest bounds) of these constraints.

Parameters
bounding_box_constraints The constraints to be aggregated.
Return values
aggregated_bounds aggregated_bounds[var.get_id()] returns the (lower, upper) bounds of that variable as the tightest bounds of bounding_box_constraints.

◆ AggregateBoundingBoxConstraints() [2/3]

void drake::solvers::AggregateBoundingBoxConstraints ( const MathematicalProgram &  prog,
Eigen::VectorXd *  lower,
Eigen::VectorXd *  upper 
)

Aggregates all the BoundingBoxConstraints inside prog, returns the intersection of the bounding box constraints as the lower and upper bound for each variable in prog.

Parameters
prog The optimization program containing decision variables and BoundingBoxConstraints.
[out] lower (*lower)[i] is the lower bound of prog.decision_variable(i).
[out] upper (*upper)[i] is the upper bound of prog.decision_variable(i).

◆ AggregateBoundingBoxConstraints() [3/3]

void drake::solvers::AggregateBoundingBoxConstraints ( const MathematicalProgram prog,
std::vector< double > *  lower,
std::vector< double > *  upper 
)

Overloads AggregateBoundingBoxConstraints, but the types of lower and upper are std::vector<double>.

◆ AggregateDuplicateVariables()

void drake::solvers::AggregateDuplicateVariables ( const Eigen::SparseMatrix< double > &  A,
const VectorX< symbolic::Variable > &  vars,
Eigen::SparseMatrix< double > *  A_new,
VectorX< symbolic::Variable > *  vars_new 
)

For linear expression A * vars where vars might contain duplicated entries, rewrite this linear expression as A_new * vars_new where vars_new doesn't contain duplicated entries.
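
For illustration only, a small sketch with made-up values: the expression 1*x + 2*y + 3*x is rewritten as 4*x + 2*y.

  Eigen::SparseMatrix<double> A(1, 3);
  A.insert(0, 0) = 1;
  A.insert(0, 1) = 2;
  A.insert(0, 2) = 3;
  const drake::symbolic::Variable x("x"), y("y");
  drake::VectorX<drake::symbolic::Variable> vars(3);
  vars << x, y, x;  // Duplicated entry x.
  Eigen::SparseMatrix<double> A_new;
  drake::VectorX<drake::symbolic::Variable> vars_new;
  drake::solvers::AggregateDuplicateVariables(A, vars, &A_new, &vars_new);
  // vars_new contains x and y once each; the coefficient of x in A_new is 1 + 3 = 4.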

◆ AggregateLinearCosts()

void drake::solvers::AggregateLinearCosts ( const std::vector< Binding< LinearCost >> &  linear_costs,
Eigen::SparseVector< double > *  linear_coeff,
VectorX< symbolic::Variable > *  vars,
double *  constant_cost 
)

Given many linear costs, aggregate them into

aᵀ*x + b,

Parameters
linear_costs    The linear costs to be aggregated.
[out] linear_coeff    a in the documentation above.
[out] vars    x in the documentation above.
[out] constant_cost    b in the documentation above.
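
For illustration only, a sketch that aggregates two linear costs previously added to a program:

  drake::solvers::MathematicalProgram prog;
  const auto x = prog.NewContinuousVariables<2>("x");
  prog.AddLinearCost(2 * x(0) + 3 * x(1) + 1);
  prog.AddLinearCost(4 * x(1) - 2);
  Eigen::SparseVector<double> linear_coeff;
  drake::VectorX<drake::symbolic::Variable> vars;
  double constant_cost{0};
  drake::solvers::AggregateLinearCosts(prog.linear_costs(), &linear_coeff,
                                       &vars, &constant_cost);
  // constant_cost == 1 - 2 == -1, and the aggregated coefficient of x(1) is 3 + 4 = 7.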

◆ AggregateQuadraticAndLinearCosts()

void drake::solvers::AggregateQuadraticAndLinearCosts ( const std::vector< Binding< QuadraticCost >> &  quadratic_costs,
const std::vector< Binding< LinearCost >> &  linear_costs,
Eigen::SparseMatrix< double > *  Q_lower,
VectorX< symbolic::Variable > *  quadratic_vars,
Eigen::SparseVector< double > *  linear_coeff,
VectorX< symbolic::Variable > *  linear_vars,
double *  constant_cost 
)

Given many linear and quadratic costs, aggregate them into

0.5*x₁ᵀQx₁ + bᵀx₂ + c, where x₁ and x₂ don't need to be the same.

Parameters
quadratic_costs    The quadratic costs to be aggregated.
linear_costs    The linear costs to be aggregated.
[out] Q_lower    The lower triangular part of the matrix Q.
[out] quadratic_vars    x₁ in the documentation above.
[out] linear_coeff    b in the documentation above.
[out] linear_vars    x₂ in the documentation above.
[out] constant_cost    c in the documentation above.

◆ AreRequiredAttributesSupported()

bool drake::solvers::AreRequiredAttributesSupported ( const ProgramAttributes required,
const ProgramAttributes supported,
std::string *  unsupported_message = nullptr 
)

Returns true iff required is a subset of supported.

Parameters
[out] unsupported_message    (Optional) When provided, if this function returns false, the message will be set to a phrase describing the unsupported attributes; or if this function returns true, the message will be set to the empty string.

◆ CeilLog2()

constexpr int drake::solvers::CeilLog2 ( int  n)

Returns ⌈log₂(n)⌉, namely the smallest integer that is no smaller than log₂(n).

Parameters
n    A positive integer.
Returns
The minimal integer no smaller than log₂(n).
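
For instance, CeilLog2(8) returns 3 and CeilLog2(9) returns 4, since 2³ = 8 < 9 ≤ 16 = 2⁴.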

◆ ChooseBestSolver()

SolverId drake::solvers::ChooseBestSolver ( const MathematicalProgram prog)

Choose the best solver given the formulation in the optimization program and the availability of the solvers.

Exceptions
std::exception    if there is no available solver for prog.
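
For illustration only, a sketch that chooses a solver for a small LP and then invokes it through MakeSolver() (documented below):

  drake::solvers::MathematicalProgram prog;
  const auto x = prog.NewContinuousVariables<2>("x");
  prog.AddLinearCost(x(0) + x(1));
  prog.AddLinearConstraint(x(0) + 2 * x(1) >= 1);
  prog.AddBoundingBoxConstraint(0, 10, x);
  const drake::solvers::SolverId solver_id = drake::solvers::ChooseBestSolver(prog);
  const auto solver = drake::solvers::MakeSolver(solver_id);
  drake::solvers::MathematicalProgramResult result;
  solver->Solve(prog, std::nullopt, std::nullopt, &result);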

◆ ConcatenateIndeterminatesRefList()

VectorXIndeterminate drake::solvers::ConcatenateIndeterminatesRefList ( const IndeterminatesRefList var_list)

Concatenates each element in var_list into a single Eigen vector of indeterminates, returns this concatenated vector.

◆ ConcatenateVariableRefList()

VectorXDecisionVariable drake::solvers::ConcatenateVariableRefList ( const VariableRefList var_list)

Concatenates each element in var_list into a single Eigen vector of decision variables, returns this concatenated vector.

◆ ConstructMonomialBasis()

drake::VectorX<symbolic::Monomial> drake::solvers::ConstructMonomialBasis ( const drake::symbolic::Polynomial p)

Given input polynomial p, outputs a set M of monomials with the following guarantee: if p = f1*f1 + f2*f2 + ... + fn*fn for some (unknown) polynomials f1, f2, ..., fn, then the span of M contains f1, f2, ..., fn.

Given M, one can then find the polynomials fi using semidefinite programming; see, e.g., Chapter 3 of Semidefinite Optimization and Convex Algebraic Geometry by G. Blekherman, P. Parrilo, R. Thomas.
Parameters
p    A polynomial.
Returns
A vector whose entries are the elements of M.

◆ CreateBinaryCodeMatchConstraint()

Binding<LinearConstraint> drake::solvers::CreateBinaryCodeMatchConstraint ( const VectorX< symbolic::Expression > &  code,
const Eigen::Ref< const Eigen::VectorXi > &  expected,
const symbolic::Expression match 
)

Create linear constraints such that, when these constraints are satisfied, match = 1 if and only if code == expected, otherwise match = 0.

Parameters
code    code(i) should only take binary values.
expected    The expected matched value for code.
match    An expression that takes binary values, representing whether code == expected.
Returns
the linear constraints.

This function is useful in integer optimization. For example, if we have a constraint match = ((b1 == 0) && (b2 == 1) && (b3 == 1)), we can call CreateBinaryCodeMatchConstraint({b1, b2, b3}, {0, 1, 1}, match) to create the constraint.

◆ CreateLogicalAndConstraint()

Binding<LinearConstraint> drake::solvers::CreateLogicalAndConstraint ( const symbolic::Expression b1,
const symbolic::Expression b2,
const symbolic::Expression b1_and_b2 
)

Adds linear constraints, such that when b1, b2, b1_and_b2 satisfy the constraints, and b1, b2 take binary values, it is guaranteed that b1_and_b2 = b1 ∧ b2 (b1 and b2).

The constraints are

  b1_and_b2 >= b1 + b2 - 1
  b1_and_b2 <= b1
  b1_and_b2 <= b2
  0 <= b1_and_b2 <= 1
Parameters
b1    An expression that should only take a binary value.
b2    An expression that should only take a binary value.
b1_and_b2    Should be the logical and between b1 and b2.
Returns
The newly added constraints, such that when b1, b2, b1_and_b2 satisfy the constraints, it is guaranteed that b1_and_b2 = b1 ∧ b2.
Precondition
b1, b2, b1_and_b2 are all linear expressions.
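
For illustration only, a minimal sketch; it assumes the returned binding is added back to the program via MathematicalProgram::AddConstraint:

  drake::solvers::MathematicalProgram prog;
  const auto b = prog.NewBinaryVariables<3>("b");
  // Enforce b(2) == b(0) ∧ b(1).
  prog.AddConstraint(
      drake::solvers::CreateLogicalAndConstraint(b(0), b(1), b(2)));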

◆ CreateLogicalOrConstraint()

Binding<LinearConstraint> drake::solvers::CreateLogicalOrConstraint ( const symbolic::Expression b1,
const symbolic::Expression b2,
const symbolic::Expression b1_or_b2 
)

Adds linear constraints, such that when b1, b2, b1_or_b2 satisfy the constraints, and b1, b2 take binary values, it is guaranteed that b1_or_b2 = b1 ∨ b2 (b1 or b2).

The constraints are

  b1_or_b2 <= b1 + b2
  b1_or_b2 >= b1
  b1_or_b2 >= b2
  0 <= b1_or_b2 <= 1
Parameters
b1    An expression that should only take a binary value.
b2    An expression that should only take a binary value.
b1_or_b2    Should be the logical or between b1 and b2.
Returns
The newly added constraints, such that when b1, b2, b1_or_b2 satisfy the constraints, it is guaranteed that b1_or_b2 = b1 ∨ b2.
Precondition
b1, b2, b1_or_b2 are all linear expressions.

◆ CreateLogicalXorConstraint()

Binding<LinearConstraint> drake::solvers::CreateLogicalXorConstraint ( const symbolic::Expression b1,
const symbolic::Expression b2,
const symbolic::Expression b1_xor_b2 
)

Adds linear constraints, such that when b1, b2, b1_xor_b2 satisfy the constraints, and b1, b2 take binary values, it is guaranteed that b1_xor_b2 = b1 ⊕ b2 (b1 exclusive-or b2).

The constraints are

  b1_xor_b2 <= b1 + b2
  b1_xor_b2 >= b1 - b2
  b1_xor_b2 >= b2 - b1
  b1_xor_b2 <= 2 - b1 - b2
  0 <= b1_xor_b2 <= 1
Parameters
b1    An expression that should only take a binary value.
b2    An expression that should only take a binary value.
b1_xor_b2    Should be the logical exclusive or between b1 and b2.
Returns
The newly added constraints, such that when b1, b2, b1_xor_b2 satisfy the constraints, it is guaranteed that b1_xor_b2 = b1 ⊕ b2.
Precondition
b1, b2, b1_xor_b2 are all linear expressions.

◆ DecomposeNonConvexQuadraticForm()

std::pair<Eigen::MatrixXd, Eigen::MatrixXd> drake::solvers::DecomposeNonConvexQuadraticForm ( const Eigen::Ref< const Eigen::MatrixXd > &  Q)

For a non-convex homogeneous quadratic form xᵀQx, where Q is not necessarily a positive semidefinite matrix, we decompose it as a difference between two convex homogeneous quadratic forms xᵀQx = xᵀQ₁x - xᵀQ₂x, where Q₁ and Q₂ are positive semidefinite.

To find the optimal Q₁ and Q₂, we solve the following semidefinite programming problem

  min  s
  s.t. s >= trace(Q₁)
       s >= trace(Q₂)
       Q₁ - Q₂ = (Q + Qᵀ) / 2
       Q₁, Q₂ are positive semidefinite

The decomposition Q = Q₁ - Q₂ can be used later, to solve the non-convex optimization problem involving a quadratic form xᵀQx. For more information, please refer to the papers on difference-of-convex decomposition, for example:

  • "Undominated d.c. Decompositions of Quadratic Functions and Applications to Branch-and-Bound Approaches", I. M. Bomze and M. Locatelli, Computational Optimization and Applications, 2004.
  • "DC Decomposition of Nonconvex Polynomials with Algebraic Techniques", A. A. Ahmadi and G. Hall, Mathematical Programming, 2015.

Parameters
Q    A square matrix.
Exceptions
std::exception    if Q is not square.
Returns
The optimal decomposition (Q₁, Q₂)
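
For illustration only, a sketch with a made-up indefinite Q; note that the decomposition itself is computed by solving the semidefinite program above, so an SDP-capable solver must be available:

  Eigen::MatrixXd Q(2, 2);
  Q << 1, 0,
       0, -2;  // Indefinite, so xᵀQx is non-convex.
  const auto [Q1, Q2] = drake::solvers::DecomposeNonConvexQuadraticForm(Q);
  // Q1 and Q2 are positive semidefinite with Q1 - Q2 == (Q + Qᵀ) / 2.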

◆ EnumerateIntegerSolutions()

Eigen::Matrix<int, -1, -1, Eigen::RowMajor> drake::solvers::EnumerateIntegerSolutions ( const Eigen::Ref< const Eigen::MatrixXi > &  A,
const Eigen::Ref< const Eigen::VectorXi > &  b,
const Eigen::Ref< const Eigen::VectorXi > &  lower_bound,
const Eigen::Ref< const Eigen::VectorXi > &  upper_bound 
)

Finds all integer solutions x to the linear inequalities.

                   Ax <= b,
                   x <= upper_bound,
                   x >= lower_bound.
Parameters
A    An (m x n) integer matrix.
b    An (m x 1) integer vector.
upper_bound    An (n x 1) integer vector.
lower_bound    An (n x 1) integer vector.
Returns
A (p x n) matrix whose rows are the solutions.
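
For illustration only, a sketch with made-up data that enumerates the integer points satisfying x(0) + x(1) <= 2 with 0 <= x <= 2 componentwise:

  Eigen::MatrixXi A(1, 2);
  A << 1, 1;
  Eigen::VectorXi b(1);
  b << 2;
  const Eigen::VectorXi lower_bound = Eigen::VectorXi::Zero(2);
  const Eigen::VectorXi upper_bound = Eigen::VectorXi::Constant(2, 2);
  const auto solutions =
      drake::solvers::EnumerateIntegerSolutions(A, b, lower_bound, upper_bound);
  // Each row of `solutions` is one feasible point, e.g. (0, 0), (1, 1), (2, 0), ...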

◆ ExponentiallySmoothedHingeLoss()

void drake::solvers::ExponentiallySmoothedHingeLoss ( double  x,
double *  penalty,
double *  dpenalty_dx 
)

A hinge loss function smoothed by exponential function.

This loss function is differentiable everywhere. The formulation is described in section II.C of [2]. The penalty is

       ⎧ 0            if x ≥ 0
φ(x) = ⎨
       ⎩ -x exp(1/x)  if x < 0.

[2] "Whole-body Motion Planning with Centroidal Dynamics and Full Kinematics" by Hongkai Dai, Andres Valenzuela and Russ Tedrake, IEEE-RAS International Conference on Humanoid Robots, 2014.

◆ GenerateSDPA()

bool drake::solvers::GenerateSDPA ( const MathematicalProgram prog,
const std::string &  file_name,
RemoveFreeVariableMethod  method = RemoveFreeVariableMethod::kNullspace 
)

SDPA is a format to record an SDP problem.

max tr(C*X)
s.t tr(Aᵢ*X) = gᵢ
    X ≽ 0

or the dual of the problem

min gᵀy
s.t ∑ᵢ yᵢAᵢ - C ≽ 0

where X is a symmetric block diagonal matrix. The format is described in http://plato.asu.edu/ftp/sdpa_format.txt. Many solvers, such as CSDP, DSDP, SDPA, sedumi and SDPT3, accept an SDPA format file as the input. This function reads a MathematicalProgram that can be formulated as above, and writes an SDPA file.

Parameters
prog    A program that contains the optimization problem.
file_name    The name of the file; note that the extension will be added automatically.
method    If prog contains free variables (i.e., variables without bounds), then we need to remove these free variables to write the program in the SDPA format. Please refer to RemoveFreeVariableMethod for details on how to remove the free variables.
Default: RemoveFreeVariableMethod::kNullspace.
Return values
is_success    Returns true if we can generate the SDPA file. The failure could be that
  1. prog cannot be captured by the formulation above, or
  2. a file with the given name cannot be created.

◆ GetAvailableSolvers()

std::vector<SolverId> drake::solvers::GetAvailableSolvers ( ProgramType  prog_type)

Returns the list of available and enabled solvers that definitely accept all programs of the given program type.

The order of the returned SolverIds reflects an approximate order of preference, from most preferred (front) to least preferred (back). Because we are analyzing only the program type rather than a specific program, it is possible that solvers later in the list would perform better in certain situations. To obtain the truly best solver, use ChooseBestSolver() instead.

Note
If a solver only accepts a subset of the program type, then that solver is not included in the returned results. For example EqualityConstrainedQPSolver doesn't accept programs with inequality linear constraints, so it doesn't show up in the return of GetAvailableSolvers(ProgramType::kQP).
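
For illustration only, a sketch that prints the preference-ordered solvers for quadratic programs (assuming <iostream> is included):

  for (const drake::solvers::SolverId& solver_id :
       drake::solvers::GetAvailableSolvers(drake::solvers::ProgramType::kQP)) {
    std::cout << solver_id.name() << "\n";
  }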

◆ GetKnownSolvers()

const std::set<SolverId>& drake::solvers::GetKnownSolvers ( )

Returns the set of solvers known to ChooseBestSolver.

◆ GetProgramType()

ProgramType drake::solvers::GetProgramType ( const MathematicalProgram prog)

Returns the type of the optimization program (LP, QP, etc), based on the properties of its cost/constraints/variables.

Each mathematical program should be characterized by a unique type. If a program can be characterized as either type A or type B (for example, a program with linear constraints and linear costs can be characterized as either an LP or an SDP), then we choose the type corresponding to the smaller set of programs (LP in this case).

◆ GetVariableValue() [1/2]

double drake::solvers::GetVariableValue ( const symbolic::Variable var,
const std::optional< std::unordered_map< symbolic::Variable::Id, int >> &  variable_index,
const Eigen::Ref< const Eigen::VectorXd > &  variable_values 
)

Retrieve the value of a single variable var from variable_values.

Parameters
var    The variable whose value is going to be retrieved. var.get_id() must be a key in variable_index.
variable_index    Maps the variable ID to its index in variable_values.
variable_values    The values of all variables.
Returns
variable_values(variable_index[var.get_id()]) if var.get_id() is a valid key of variable_index.
Exceptions
std::exception    if var.get_id() is not a valid key of variable_index.
Precondition
All the mapped values in variable_index are in the range [0, variable_values.rows()).

◆ GetVariableValue() [2/2]

std::enable_if_t< std::is_same_v<typename Derived::Scalar, symbolic::Variable>, MatrixLikewise<double, Derived> > drake::solvers::GetVariableValue ( const Eigen::MatrixBase< Derived > &  var,
const std::optional< std::unordered_map< symbolic::Variable::Id, int >> &  variable_index,
const Eigen::Ref< const Eigen::VectorXd > &  variable_values 
)

Overload GetVariableValue() function, but for an Eigen matrix of decision variables.

◆ MakeFirstAvailableSolver()

std::unique_ptr<SolverInterface> drake::solvers::MakeFirstAvailableSolver ( const std::vector< SolverId > &  solver_ids)

Makes the first available and enabled solver.

If no solvers are available, throws a std::exception.

◆ MakeSemidefiniteRelaxation() [1/2]

std::unique_ptr<MathematicalProgram> drake::solvers::MakeSemidefiniteRelaxation ( const MathematicalProgram prog,
const SemidefiniteRelaxationOptions options = {} 
)

Constructs a new MathematicalProgram which represents the semidefinite programming convex relaxation of the (likely nonconvex) program prog.

This method currently supports only linear and quadratic costs and constraints, but may be extended in the future with broader support.

See https://underactuated.mit.edu/optimization.html#sdp_relaxation for references and examples.

Note: Currently, programs using LinearEqualityConstraint will give tighter relaxations than programs using LinearConstraint or BoundingBoxConstraint, even if lower_bound == upper_bound. Prefer LinearEqualityConstraint.

Exceptions
std::exception    if prog has costs or constraints that are neither linear nor quadratic.
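
For illustration only, a sketch that relaxes a small non-convex quadratic program; solving the relaxation requires an SDP-capable solver to be available:

  drake::solvers::MathematicalProgram prog;
  const auto x = prog.NewContinuousVariables<2>("x");
  prog.AddQuadraticCost(x(0) * x(0) - 2 * x(0) * x(1));  // Indefinite, non-convex.
  prog.AddLinearConstraint(x(0) + x(1) <= 1);
  prog.AddBoundingBoxConstraint(-1, 1, x);
  const auto relaxation = drake::solvers::MakeSemidefiniteRelaxation(prog);
  const auto result = drake::solvers::Solve(*relaxation);
  // The relaxation's optimal value is a lower bound on the original minimum.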

◆ MakeSemidefiniteRelaxation() [2/2]

std::unique_ptr<MathematicalProgram> drake::solvers::MakeSemidefiniteRelaxation ( const MathematicalProgram prog,
const std::vector< symbolic::Variables > &  variable_groups,
const SemidefiniteRelaxationOptions options = {} 
)

A version of MakeSemidefiniteRelaxation that allows for specifying the sparsity of the relaxation.

For each group in variable_groups, the costs and constraints whose variables are a subset of the group will be jointly relaxed into a single, dense semidefinite program in the same manner as MakeSemidefiniteRelaxation(prog).

Each of these semidefinite relaxations are aggregated into a single program, and their semidefinite variables are made to agree where the variable groups overlap.

The returned program will always have the same number of PSD variables as variable groups.

Costs and constraints whose variables are not a subset of any of the groups are not relaxed and are simply added to the aggregated program. If these costs and constraints are non-convex, then this method will throw.

As an example, consider the following program:

  min x₂ᵀ * Q * x₂
  subject to x₁ + x₂ ≤ 1
             x₂ + x₃ ≤ 2
             x₁ + x₃ ≤ 3

And suppose we call MakeSemidefiniteRelaxation(prog, std::vector<Variables>{{x₁, x₂}, {x₂,x₃}}).

The resulting relaxation would have two semidefinite variables, namely

  [U₁,  U₂,  x₁]       [W₁,  W₂,  x₂]
  [U₂,  U₃,  x₂]  and  [W₂,  W₃,  x₃]
  [x₁ᵀ, x₂ᵀ, 1 ]       [x₂ᵀ, x₃ᵀ, 1 ]

The first semidefinite variable would be associated to the semidefinite relaxation of the subprogram:

  min x₁ᵀ * Q * x₁
  subject to x₁ + x₂ ≤ 1

and the implied constraints from x₁ + x₂ ≤ 1 would be added to the first semidefinite variable. These implied constraints are additional constraints that can be placed on the matrix

  [U₁,  U₂,  x₁]
  [U₂,  U₃,  x₂]
  [x₁ᵀ, x₂ᵀ, 1 ]

which are redundant in the non-convex program, but are not redundant in the semidefinite relaxation. See https://underactuated.mit.edu/optimization.html#sdp_relaxation for references and examples.

The second semidefinite variable would be associated to the semidefinite relaxation of the subprogram:

  min x₂ᵀ * Q * x₂
  subject to x₂ + x₃ ≤ 2

and the implied constraints from x₂ + x₃ ≤ 2 would be added to the second semidefinite variable.

Since the constraint x₁ + x₃ ≤ 3 is not a subset of any of the variable groups, it will be added to the overall relaxation, but will not be used to generate implied constraints on any semidefinite variable.

The total relaxation would also include an equality constraint that U₃ == W₁ so that the quadratic relaxation of x₂ is consistent between the two semidefinite variables.

Note:
  1) Costs are only associated to a single variable group, so that the resulting aggregated program has a relaxed cost with the same scaling.
  2) The homogenization variable "1" is re-used in every semidefinite variable.

Exceptions
std::exception    if there is a non-convex cost or constraint whose variables do not intersect with any of the variable groups.

◆ MakeSolver()

std::unique_ptr<SolverInterface> drake::solvers::MakeSolver ( const SolverId id)

Given the solver ID, create the solver with the matching ID.

Exceptions
std::exception    if there is no matching solver.

◆ NewRotationMatrixVars()

MatrixDecisionVariable<3, 3> drake::solvers::NewRotationMatrixVars ( MathematicalProgram *  prog,
const std::string &  name = "R" 
)

Allocates a 3x3 matrix of decision variables with the trivial bounding box constraint ensuring all elements are in [-1, 1], and the linear constraint imposing -1 <= trace(R) <= 3.

◆ operator!=()

bool drake::solvers::operator!= ( const SolverId ,
const SolverId  
)

◆ operator<<() [1/12]

std::ostream& drake::solvers::operator<< ( std::ostream &  os,
SolutionResult  solution_result 
)

◆ operator<<() [2/12]

std::ostream& drake::solvers::operator<< ( std::ostream &  ,
const ProgramAttribute  
)

◆ operator<<() [3/12]

std::ostream& drake::solvers::operator<< ( std::ostream &  ,
const SolverId  
)

◆ operator<<() [4/12]

std::ostream& drake::solvers::operator<< ( std::ostream &  ,
const ProgramAttributes  
)

◆ operator<<() [5/12]

std::ostream& drake::solvers::operator<< ( std::ostream &  ,
CommonSolverOption   
)

(Deprecated.)

Deprecated:
Use to_string(), instead.
This will be removed from Drake on or after 2025-05-01.

◆ operator<<() [6/12]

std::ostream& drake::solvers::operator<< ( std::ostream &  ,
const ProgramType  
)

◆ operator<<() [7/12]

std::ostream& drake::solvers::operator<< ( std::ostream &  os,
const Binding< C > &  binding 
)

Print out the Binding.

◆ operator<<() [8/12]

std::ostream& drake::solvers::operator<< ( std::ostream &  os,
const MixedIntegerRotationConstraintGenerator::Approach type 
)

◆ operator<<() [9/12]

std::ostream& drake::solvers::operator<< ( std::ostream &  os,
const IntervalBinning binning 
)

◆ operator<<() [10/12]

std::ostream& drake::solvers::operator<< ( std::ostream &  ,
const SolverOptions  
)

◆ operator<<() [11/12]

std::ostream& drake::solvers::operator<< ( std::ostream &  os,
const EvaluatorBase e 
)

Print out the evaluator.

◆ operator<<() [12/12]

std::ostream& drake::solvers::operator<< ( std::ostream &  os,
const MathematicalProgram prog 
)

◆ operator==()

bool drake::solvers::operator== ( const SolverId ,
const SolverId  
)

◆ QuadraticallySmoothedHingeLoss()

void drake::solvers::QuadraticallySmoothedHingeLoss ( double  x,
double *  penalty,
double *  dpenalty_dx 
)

A linear hinge loss, smoothed with a quadratic loss near the origin.

The formulation is in equation (6) of [1]. The penalty is

       ⎧  0        if x ≥ 0
φ(x) = ⎨  x²/2     if -1 < x < 0
       ⎩  -0.5 - x if x ≤ -1.

[1] "Loss Functions for Preference Levels: Regression with Discrete Ordered Labels." by Jason Rennie and Nathan Srebro, Proceedings of IJCAI multidisciplinary workshop on Advances in Preference Handling.

◆ Solve() [1/3]

MathematicalProgramResult drake::solvers::Solve ( const MathematicalProgram prog,
const std::optional< Eigen::VectorXd > &  initial_guess,
const std::optional< SolverOptions > &  solver_options 
)

Solves an optimization program, with optional initial guess and solver options.

This function first chooses the best solver depending on the availability of the solver and the program formulation; it then constructs that solver and calls its Solve function. The optimization result is stored in the return argument.

Parameters
prog    Contains the formulation of the program, and possibly solver options.
initial_guess    The initial guess for the decision variables.
solver_options    The options in addition to those stored in prog. For each option entry (like print out), there are 4 ways to set that option, and the priority given to the solver options is as follows (from lowest / least, to highest / most):
  1. common option set on the MathematicalProgram itself
  2. common option passed as an argument to Solve
  3. solver-specific option set on the MathematicalProgram itself
  4. solver-specific option passed as an argument to Solve
Returns
result The result of solving the program through the solver.
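
For illustration only, a sketch that passes both an initial guess and a common solver option (the QP and its data are made up):

  drake::solvers::MathematicalProgram prog;
  const auto x = prog.NewContinuousVariables<2>("x");
  prog.AddQuadraticCost((x(0) - 1) * (x(0) - 1) + (x(1) + 2) * (x(1) + 2));
  prog.AddLinearConstraint(x(0) + x(1) == 1);
  Eigen::VectorXd initial_guess(2);
  initial_guess << 0, 0;
  drake::solvers::SolverOptions solver_options;
  solver_options.SetOption(drake::solvers::CommonSolverOption::kPrintToConsole, 1);
  const auto result = drake::solvers::Solve(prog, initial_guess, solver_options);
  if (result.is_success()) {
    const Eigen::VectorXd x_sol = result.GetSolution(x);  // Approximately (2, -1).
  }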

◆ Solve() [2/3]

MathematicalProgramResult drake::solvers::Solve ( const MathematicalProgram prog,
const Eigen::Ref< const Eigen::VectorXd > &  initial_guess 
)

Solves an optimization program with a given initial guess.

◆ Solve() [3/3]

MathematicalProgramResult drake::solvers::Solve ( const MathematicalProgram prog)

◆ SolveInParallel() [1/2]

std::vector<MathematicalProgramResult> drake::solvers::SolveInParallel ( const std::vector< const MathematicalProgram * > &  progs,
const std::vector< const Eigen::VectorXd * > *  initial_guesses,
const std::vector< const SolverOptions * > *  solver_options,
const std::vector< std::optional< SolverId >> *  solver_ids,
Parallelism  parallelism = Parallelism::Max(),
bool  dynamic_schedule = false 
)

Solves progs[i] into result[i], optionally using initial_guesses[i] and solver_options[i] if given, by invoking the solver at solver_ids[i] if provided.

If solver_ids[i] is nullopt then the best available solver is selected for progs[i] depending on the availability of the solver and the problem formulation. If solver_ids == nullptr then this is done for every progs[i].

Uses at most parallelism cores, with static scheduling by default.

Parameters
dynamic_schedule    If dynamic_schedule is false then static scheduling is used and so each core will solve approximately 1/parallelism of the programs. This is most efficient when all the programs take approximately the same amount of time to solve. If dynamic_schedule is true, then dynamic scheduling is used and all the programs are queued into a single pool and each core will take the next program off the queue when it becomes available. This is best when each program takes a dramatically different amount of time to solve.
Note
When using a proprietary solver (e.g. Mosek) your organization may have limited license seats. It is recommended that the number of parallel solves does not exceed the total number of license seats.
Only programs which are thread safe are solved concurrently. Programs that are not thread safe will be solved sequentially in a thread safe manner.
Exceptions
std::exception    if initial_guesses and solver_options are provided and not the same size as progs.
std::exception    if any of the progs are nullptr.
std::exception    if any of the programs cannot be solved.

◆ SolveInParallel() [2/2]

std::vector<MathematicalProgramResult> drake::solvers::SolveInParallel ( const std::vector< const MathematicalProgram * > &  progs,
const std::vector< const Eigen::VectorXd * > *  initial_guesses = nullptr,
const SolverOptions *  solver_options = nullptr,
const std::optional< SolverId > &  solver_id = std::nullopt,
Parallelism  parallelism = Parallelism::Max(),
bool  dynamic_schedule = false 
)

Provides the same functionality as SolveInParallel, but allows for specifying a single solver id and a single set of solver options that are used when solving all the programs.

Exceptions
std::exception    if the provided solver cannot solve all of progs.
std::exception    if initial_guesses are provided and not the same size as progs.
std::exception    if any of the progs are nullptr.
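
For illustration only, a sketch that solves two unrelated programs with the defaults of this overload:

  drake::solvers::MathematicalProgram prog1, prog2;
  const auto x = prog1.NewContinuousVariables<1>("x");
  prog1.AddLinearCost(x(0));
  prog1.AddBoundingBoxConstraint(0, 1, x(0));
  const auto y = prog2.NewContinuousVariables<1>("y");
  prog2.AddQuadraticCost(y(0) * y(0));
  const std::vector<const drake::solvers::MathematicalProgram*> progs{&prog1, &prog2};
  const std::vector<drake::solvers::MathematicalProgramResult> results =
      drake::solvers::SolveInParallel(progs);  // results[i] corresponds to progs[i].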

◆ to_string() [1/9]

std::string drake::solvers::to_string ( SolutionResult  solution_result)

◆ to_string() [2/9]

std::string drake::solvers::to_string ( const ProgramAttribute )

◆ to_string() [3/9]

std::string drake::solvers::to_string ( const ProgramAttributes )

◆ to_string() [4/9]

std::string_view drake::solvers::to_string ( CommonSolverOption  )

Returns the short, unadorned name of the option, e.g., kPrintFileName.

◆ to_string() [5/9]

std::string drake::solvers::to_string ( const ProgramType )

◆ to_string() [6/9]

std::string drake::solvers::to_string ( MixedIntegerRotationConstraintGenerator::Approach  type)

◆ to_string() [7/9]

std::string drake::solvers::to_string ( IntervalBinning  interval_binning)

◆ to_string() [8/9]

std::string drake::solvers::to_string ( const SolverOptions )

◆ to_string() [9/9]

std::string drake::solvers::to_string ( const RemoveFreeVariableMethod )