Drake
drake::systems::controllers Namespace Reference

## Namespaces

plan_eval

qp_inverse_dynamics

rbt

## Classes

class  CartesianSetpoint
This is used to compute target spatial acceleration, which is the input to the inverse dynamics controller. More...

struct  DynamicProgrammingOptions
Consolidates the many possible options to be passed to the dynamic programming algorithms. More...

class  InverseDynamics
Solves inverse dynamics with no consideration for joint actuator force limits. More...

class  InverseDynamicsController
A state feedback controller that uses a PidController to generate desired accelerations, which are then converted into torques using InverseDynamics. More...

class  LinearModelPredictiveController
Implements a basic Model Predictive Controller that linearizes the system about an equilibrium condition and regulates to the same point by solving an optimal control problem over a finite time horizon. More...

class  PidControlledSystem
A system that encapsulates a PidController and a controlled System (a.k.a. the "plant"). More...

class  PidController
Implements the PID controller. More...

class  StateFeedbackControllerInterface
Interface for state feedback controllers. More...

class  VectorSetpoint

class  ZMPPlanner
Given a desired two dimensional (X and Y) zero-moment point (ZMP) trajectory parameterized as a piecewise polynomial, an optimal center of mass (CoM) trajectory is planned using a linear inverted pendulum model (LIPM). More...

struct  ZMPTestTraj
A structure for storing trajectories from simulating a linear inverted pendulum model (LIPM) using the policy from a ZMPPlanner. More...

## Functions

std::pair< std::unique_ptr< BarycentricMeshSystem< double > >, Eigen::RowVectorXd > FittedValueIteration (Simulator< double > *simulator, const std::function< double(const Context< double > &context)> &cost_function, const math::BarycentricMesh< double >::MeshGrid &state_grid, const math::BarycentricMesh< double >::MeshGrid &input_grid, double timestep, const DynamicProgrammingOptions &options=DynamicProgrammingOptions())
Implements Fitted Value Iteration on a (triangulated) Barycentric Mesh, which designs a state-feedback policy to minimize the infinite-horizon cost ∑ γⁿ g(x[n],u[n]), where γ is the discount factor in options. More...

Eigen::VectorXd LinearProgrammingApproximateDynamicProgramming (Simulator< double > *simulator, const std::function< double(const Context< double > &context)> &cost_function, const std::function< symbolic::Expression( const Eigen::Ref< const Eigen::VectorXd > &state, const VectorX< symbolic::Variable > &parameters)> &linearly_parameterized_cost_to_go_function, int num_parameters, const Eigen::Ref< const Eigen::MatrixXd > &state_samples, const Eigen::Ref< const Eigen::MatrixXd > &input_samples, double timestep, const DynamicProgrammingOptions &options=DynamicProgrammingOptions())
Implements the Linear Programming approach to approximate dynamic programming. More...

LinearQuadraticRegulatorResult LinearQuadraticRegulator (const Eigen::Ref< const Eigen::MatrixXd > &A, const Eigen::Ref< const Eigen::MatrixXd > &B, const Eigen::Ref< const Eigen::MatrixXd > &Q, const Eigen::Ref< const Eigen::MatrixXd > &R, const Eigen::Ref< const Eigen::MatrixXd > &N=Eigen::Matrix< double, 0, 0 >::Zero())
Computes the optimal feedback controller, u=-Kx, and the optimal cost-to-go J = x'Sx for the problem: More...

LinearQuadraticRegulatorResult DiscreteTimeLinearQuadraticRegulator (const Eigen::Ref< const Eigen::MatrixXd > &A, const Eigen::Ref< const Eigen::MatrixXd > &B, const Eigen::Ref< const Eigen::MatrixXd > &Q, const Eigen::Ref< const Eigen::MatrixXd > &R)
Computes the optimal feedback controller, u=-Kx, and the optimal cost-to-go J = x'Sx for the problem: More...

std::unique_ptr< systems::LinearSystem< double > > LinearQuadraticRegulator (const LinearSystem< double > &system, const Eigen::Ref< const Eigen::MatrixXd > &Q, const Eigen::Ref< const Eigen::MatrixXd > &R, const Eigen::Ref< const Eigen::MatrixXd > &N=Eigen::Matrix< double, 0, 0 >::Zero())
Creates a system that implements the optimal time-invariant linear quadratic regulator (LQR). More...

std::unique_ptr< systems::AffineSystem< double > > LinearQuadraticRegulator (const System< double > &system, const Context< double > &context, const Eigen::Ref< const Eigen::MatrixXd > &Q, const Eigen::Ref< const Eigen::MatrixXd > &R, const Eigen::Ref< const Eigen::MatrixXd > &N=Eigen::Matrix< double, 0, 0 >::Zero())
Linearizes the System around the specified Context, computes the optimal time-invariant linear quadratic regulator (LQR), and returns a System which implements that regulator in the original System's coordinates. More...

template<typename Scalar >
std::ostream & operator<< (std::ostream &out, const CartesianSetpoint< Scalar > &setpoint)

template<typename Scalar >
std::ostream & operator<< (std::ostream &out, const VectorSetpoint< Scalar > &setpoint)

GTEST_TEST (testQPInverseDynamicsController, testPoseSetpoint)

ZMPTestTraj SimulateZMPPolicy (const ZMPPlanner &zmp_planner, const Eigen::Vector4d &x0, double dt, double extra_time_at_the_end)
Forward-simulates the linear policy from zmp_planner, starting from the initial condition x0, using explicit Euler integration. More...

std::vector< PiecewisePolynomial< double > > GenerateDesiredZMPTrajs (const std::vector< Eigen::Vector2d > &footsteps, double double_support_duration, double single_support_duration)
Generates desired ZMP trajectories as piecewise polynomials given the desired footsteps. More...

## ◆ FittedValueIteration()

 std::pair< std::unique_ptr< BarycentricMeshSystem< double > >, Eigen::RowVectorXd > FittedValueIteration ( Simulator< double > * simulator, const std::function< double(const Context< double > &context)> & cost_function, const math::BarycentricMesh< double >::MeshGrid & state_grid, const math::BarycentricMesh< double >::MeshGrid & input_grid, double timestep, const DynamicProgrammingOptions & options = DynamicProgrammingOptions() )

Implements Fitted Value Iteration on a (triangulated) Barycentric Mesh, which designs a state-feedback policy to minimize the infinite-horizon cost ∑ γⁿ g(x[n],u[n]), where γ is the discount factor in options.

For background and a description of this algorithm, see http://underactuated.csail.mit.edu/underactuated.html?chapter=dp . It currently requires that the system being optimized have only continuous state; the system is assumed to be time-invariant. This code makes a discrete-time approximation (using timestep) for the value iteration update.

Parameters
- `simulator` contains the reference to the System being optimized and to a Context for that system, which may contain non-default Parameters, etc. The simulator is run for timestep seconds from every point on the mesh in order to approximate the dynamics; all of the simulation parameters (integrator, etc.) are relevant during that evaluation.
- `cost_function` is the continuous-time instantaneous cost. This implementation of the discrete-time formulation above uses the approximation g(x,u) = timestep*cost_function(x,u).
- `state_grid` defines the mesh on the state space used to represent the cost-to-go function and the resulting policy.
- `input_grid` defines the discrete action space used in the value iteration update.
- `timestep` a time in seconds used for the discrete-time approximation.
- `options` optional DynamicProgrammingOptions structure.
Returns
a std::pair containing the resulting policy, implemented as a BarycentricMeshSystem, and the RowVectorXd J that defines the expected cost-to-go on a BarycentricMesh using state_grid. The policy has a single vector input (which is the continuous state of the system passed in through simulator) and a single vector output (which is the input of the system passed in through simulator).
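The value iteration update can be illustrated with a tiny standalone sketch (pure Python, no Drake). The scalar dynamics, grids, cost, and helper names below are all made up for illustration; 1-D linear interpolation stands in for barycentric interpolation on the mesh:

```python
import bisect

def interp(grid, J, x):
    """Linearly interpolate J over a sorted 1-D grid (the 1-D analogue of
    barycentric interpolation), clamping x to the ends of the grid."""
    if x <= grid[0]:
        return J[0]
    if x >= grid[-1]:
        return J[-1]
    i = bisect.bisect_right(grid, x) - 1
    t = (x - grid[i]) / (grid[i + 1] - grid[i])
    return (1 - t) * J[i] + t * J[i + 1]

def fitted_value_iteration(f, g, state_grid, input_grid, gamma=0.9, iters=500):
    """Minimize sum_n gamma**n * g(x[n], u[n]) over a discrete action set,
    representing the cost-to-go J only at the mesh points."""
    J = [0.0] * len(state_grid)
    policy = [0.0] * len(state_grid)
    for _ in range(iters):
        J_new = []
        for k, x in enumerate(state_grid):
            best = None
            for u in input_grid:
                q = g(x, u) + gamma * interp(state_grid, J, f(x, u))
                if best is None or q < best:
                    best, policy[k] = q, u
            J_new.append(best)
        J = J_new
    return J, policy

dt = 0.1
f = lambda x, u: x + dt * u              # toy scalar dynamics (one timestep)
g = lambda x, u: dt * (x * x + u * u)    # g(x,u) = timestep * running cost
states = [i * 0.25 - 1.0 for i in range(9)]  # mesh on [-1, 1]
inputs = [-1.0, -0.5, 0.0, 0.5, 1.0]         # discrete action set
J, policy = fitted_value_iteration(f, g, states, inputs)
```

The returned `policy` plays the role of the BarycentricMeshSystem output: it drives the state toward the origin, where the cost-to-go is smallest.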

## ◆ GenerateDesiredZMPTrajs()

 std::vector< trajectories::PiecewisePolynomial< double > > GenerateDesiredZMPTrajs ( const std::vector< Eigen::Vector2d > & footsteps, double double_support_duration, double single_support_duration )

Generates desired ZMP trajectories as piecewise polynomials given the desired footsteps.

The knot points are generated as follows:

T: 0, knots: fs[0]
T: ss, knots: fs[0]
T: ss + ds, knots: fs[1]
T: 2 * ss + ds, knots: fs[1]
T: 2 * ss + 2 * ds, knots: fs[2]
T: 3 * ss + 2 * ds, knots: fs[2]
...


ss stands for single_support_duration, and ds for double_support_duration.

Parameters
- `footsteps` X-Y position pairs for the desired footsteps.
- `double_support_duration` duration of the double support phase.
- `single_support_duration` duration of the single support phase.
Returns
Three trajectories over the same knot points: element 0 uses zero-order-hold interpolation, element 1 linear, and element 2 cubic.
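The knot schedule above can be reproduced with a short standalone sketch (pure Python; `desired_zmp_knots` is a hypothetical helper, not part of Drake, and footsteps are taken as plain tuples rather than Eigen vectors):

```python
def desired_zmp_knots(footsteps, double_support_duration, single_support_duration):
    """Return (time, zmp) knot pairs: each footstep is held for one
    single-support phase, and consecutive footsteps are bridged by a
    double-support phase, matching the T/knots table above."""
    ss, ds = single_support_duration, double_support_duration
    knots = [(0.0, footsteps[0])]
    t = 0.0
    for i, fs in enumerate(footsteps):
        t += ss                          # end of single support on fs[i]
        knots.append((t, fs))
        if i + 1 < len(footsteps):
            t += ds                      # end of double support, now on fs[i+1]
            knots.append((t, footsteps[i + 1]))
    return knots

knots = desired_zmp_knots([(0, 0), (1, 0), (2, 0)],
                          double_support_duration=0.5,
                          single_support_duration=1.0)
```

With ss = 1.0 and ds = 0.5 this yields knot times 0, 1.0, 1.5, 2.5, 3.0, 4.0, i.e. the pattern 0, ss, ss+ds, 2ss+ds, 2ss+2ds, 3ss+2ds from the table.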

## ◆ GTEST_TEST()

 drake::systems::controllers::GTEST_TEST ( testQPInverseDynamicsController , testPoseSetpoint )

## ◆ LinearProgrammingApproximateDynamicProgramming()

 Eigen::VectorXd LinearProgrammingApproximateDynamicProgramming ( Simulator< double > * simulator, const std::function< double(const Context< double > &context)> & cost_function, const std::function< symbolic::Expression(const Eigen::Ref< const Eigen::VectorXd > &state, const VectorX< symbolic::Variable > &parameters)> & linearly_parameterized_cost_to_go_function, int num_parameters, const Eigen::Ref< const Eigen::MatrixXd > & state_samples, const Eigen::Ref< const Eigen::MatrixXd > & input_samples, double timestep, const DynamicProgrammingOptions & options = DynamicProgrammingOptions() )

Implements the Linear Programming approach to approximate dynamic programming.

It optimizes the linear program

maximize   ∑ Jₚ(x)
subject to ∀x, ∀u: Jₚ(x) ≤ g(x,u) + γ Jₚ(f(x,u)),

where g(x,u) is the one-step cost, Jₚ(x) is a (linearly) parameterized cost-to-go function with parameter vector p, and γ is the discount factor in options.

For background and a description of this algorithm, see http://underactuated.csail.mit.edu/underactuated.html?chapter=dp . It currently requires that the system being optimized have only continuous state; the system is assumed to be time-invariant. This code makes a discrete-time approximation (using timestep) for the value iteration update.

Parameters
- `simulator` contains the reference to the System being optimized and to a Context for that system, which may contain non-default Parameters, etc. The simulator is run for timestep seconds from every pair of input/state sample points in order to approximate the dynamics; all of the simulation parameters (integrator, etc.) are relevant during that evaluation.
- `cost_function` is the continuous-time instantaneous cost. This implementation of the discrete-time formulation above uses the approximation g(x,u) = timestep*cost_function(x,u).
- `linearly_parameterized_cost_to_go_function` must define a function to approximate the cost-to-go, which takes the state vector as the first input and the parameter vector as the second input. This can be any function of the form Jₚ(x) = ∑ pᵢ φᵢ(x). This algorithm will pass in a VectorX of symbolic::Variable in order to set up the linear program.
- `state_samples` is a list of sample states (one per column) at which to apply the optimization constraints and the objective.
- `input_samples` is a list of inputs (one per column) which are evaluated at every sample point.
- `timestep` a time in seconds used for the discrete-time approximation.
- `options` optional DynamicProgrammingOptions structure.
Returns
params the VectorXd of parameters that optimizes the supplied cost-to-go function.
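For intuition, the linear program above can be solved by hand in a scalar special case (a standalone sketch, pure Python, no Drake; the dynamics, cost, and helper name are invented for illustration). With dynamics x[n+1] = a·x[n], one-step cost g(x) = x², and the one-parameter family Jₚ(x) = p·x², every constraint reduces to p·(1 − γa²) ≤ 1, so maximizing ∑ Jₚ(x) drives p to 1/(1 − γa²), which is exactly the true cost-to-go coefficient ∑ₙ γⁿ a²ⁿ:

```python
def lp_adp_scalar(a, gamma, state_samples):
    """LP-ADP specialized to Jp(x) = p * x**2 for x[n+1] = a*x[n], g(x) = x**2.
    Each sample x gives the constraint p*x**2 <= x**2 + gamma*p*(a*x)**2;
    maximizing sum_x p*x**2 pushes p to the tightest upper bound."""
    coeff = 1.0 - gamma * a * a
    assert coeff > 0, "discounted dynamics must be contracting here"
    bound = float("inf")
    for x in state_samples:
        if x == 0.0:
            continue  # the constraint at x = 0 is vacuous (0 <= 0)
        bound = min(bound, 1.0 / coeff)  # divide the constraint by x**2 > 0
    return bound  # optimal parameter p

a, gamma = 0.9, 0.95
p_star = lp_adp_scalar(a, gamma, [0.5, 1.0, 2.0])

# The true cost-to-go is x**2 * sum_n (gamma * a**2)**n, a geometric series.
true_p = 1.0 / (1.0 - gamma * a * a)
```

In this special case the approximation is exact because the true cost-to-go lies inside the parameterized family; in general the LP returns the tightest under-approximation the basis functions can express.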

## ◆ operator<<() [1/2]

 std::ostream& drake::systems::controllers::operator<< ( std::ostream & out, const CartesianSetpoint< Scalar > & setpoint ) [inline]

## ◆ operator<<() [2/2]

 std::ostream& drake::systems::controllers::operator<< ( std::ostream & out, const VectorSetpoint< Scalar > & setpoint )

## ◆ SimulateZMPPolicy()

 ZMPTestTraj SimulateZMPPolicy ( const ZMPPlanner & zmp_planner, const Eigen::Vector4d & x0, double dt, double extra_time_at_the_end )

Forward-simulates the linear policy from zmp_planner, starting from the initial condition x0, using explicit Euler integration.

Parameters
- `zmp_planner` contains the planned center of mass trajectory, value function, and linear policy.
- `x0` initial condition.
- `dt` time step.
- `extra_time_at_the_end` simulate this many extra seconds past the end of the trajectories to allow convergence.
Returns
ZMPTestTraj that contains all the information.
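Explicit Euler simulation under a linear feedback policy can be sketched standalone (pure Python, not Drake's SimulateZMPPolicy; the double-integrator dynamics and the gains k1, k2 below are made-up stand-ins for the LIPM plus ZMP policy):

```python
def simulate_euler(f, x0, dt, t_end):
    """Explicit (forward) Euler: x[k+1] = x[k] + dt * f(x[k])."""
    xs = [list(x0)]
    steps = int(round(t_end / dt))
    for _ in range(steps):
        x = xs[-1]
        dx = f(x)
        xs.append([xi + dt * di for xi, di in zip(x, dx)])
    return xs

# Toy closed-loop system: double integrator [pos, vel] with a stabilizing
# linear policy u = -k1*pos - k2*vel (critically damped for k1=4, k2=4).
k1, k2 = 4.0, 4.0

def closed_loop(x):
    pos, vel = x
    u = -k1 * pos - k2 * vel
    return [vel, u]

traj = simulate_euler(closed_loop, [1.0, 0.0], dt=0.01, t_end=5.0)
final = traj[-1]
```

Simulating a bit past the end of the planned trajectories (the `extra_time_at_the_end` argument) serves the same purpose as the longer `t_end` here: it gives the closed-loop state time to settle.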