Implementations of controllers that operate as Systems in a block diagram.
Algorithms that synthesize controllers are located in Feedback Control Design.
Classes

| Class | Description |
| DifferentialInverseKinematicsController | Differential Inverse Kinematics controller that tracks desired poses / velocities for multiple operational points. |
| DifferentialInverseKinematicsSystem | Takes as input desired Cartesian poses (or Cartesian velocities) for an arbitrary number of "goal" frames on the robot, and produces a generalized velocity command as output to move the goal frames toward the desired state. |
| InverseDynamics<T> | Solves inverse dynamics with no consideration for joint actuator force limits. |
| InverseDynamicsController<T> | A state feedback controller that uses a PidController to generate desired accelerations, which are then converted into MultibodyPlant actuation inputs using InverseDynamics (with mode = InverseDynamics::kInverseDynamics). |
| JointStiffnessController<T> | Implements a joint-space stiffness controller. |
| JointImpedanceController | Drake does not yet offer a joint impedance controller, which would use feedback to shape the stiffness, damping, and inertia of the closed-loop system. |
| PidControlledSystem<T> | A system that encapsulates a PidController and a controlled System (a.k.a. the "plant"). |
| PidController<T> | Implements the PID controller. |
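As a usage illustration for the classes above (not part of the reference text), here is a minimal sketch that constructs a PidController inside a DiagramBuilder. The three-vector (kp, ki, kd) constructor, the port-accessor names, and the include paths are assumptions drawn from the class's own documentation and should be verified there.

```cpp
#include <Eigen/Dense>

#include "drake/systems/controllers/pid_controller.h"
#include "drake/systems/framework/diagram_builder.h"

int main() {
  drake::systems::DiagramBuilder<double> builder;

  // One gain per position; the controller expects state = [q; v].
  const Eigen::VectorXd kp = Eigen::VectorXd::Constant(2, 10.0);
  const Eigen::VectorXd ki = Eigen::VectorXd::Constant(2, 1.0);
  const Eigen::VectorXd kd = Eigen::VectorXd::Constant(2, 2.0);

  auto* pid =
      builder.AddSystem<drake::systems::controllers::PidController<double>>(
          kp, ki, kd);

  // Expose the controller's ports: estimated and desired state in, control out.
  builder.ExportInput(pid->get_input_port_estimated_state(), "estimated_state");
  builder.ExportInput(pid->get_input_port_desired_state(), "desired_state");
  builder.ExportOutput(pid->get_output_port_control(), "control");

  auto diagram = builder.Build();
  return 0;
}
```

In a full application the exported ports would instead be connected to a plant (for example via PidControlledSystem, which wraps exactly this wiring).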
Functions

| Function | Description |
| Eigen::VectorXd LinearProgrammingApproximateDynamicProgramming(Simulator<double>* simulator, const std::function<double(const Context<double>& context)>& cost_function, const std::function<symbolic::Expression(const Eigen::Ref<const Eigen::VectorXd>& state, const VectorX<symbolic::Variable>& parameters)>& linearly_parameterized_cost_to_go_function, int num_parameters, const Eigen::Ref<const Eigen::MatrixXd>& state_samples, const Eigen::Ref<const Eigen::MatrixXd>& input_samples, double time_step, const DynamicProgrammingOptions& options = DynamicProgrammingOptions()) | Implements the Linear Programming approach to approximate dynamic programming. |
| std::unique_ptr<System<double>> MakeFiniteHorizonLinearQuadraticRegulator(const System<double>& system, const Context<double>& context, double t0, double tf, const Eigen::Ref<const Eigen::MatrixXd>& Q, const Eigen::Ref<const Eigen::MatrixXd>& R, const FiniteHorizonLinearQuadraticRegulatorOptions& options = FiniteHorizonLinearQuadraticRegulatorOptions()) | Variant of FiniteHorizonLinearQuadraticRegulator that returns the regulator (controller) as a System, with a single "plant_state" input for the estimated plant state and a single "control" output for the regulator control output. |
| std::unique_ptr<LinearSystem<double>> LinearQuadraticRegulator(const LinearSystem<double>& system, const Eigen::Ref<const Eigen::MatrixXd>& Q, const Eigen::Ref<const Eigen::MatrixXd>& R, const Eigen::Ref<const Eigen::MatrixXd>& N = Eigen::Matrix<double, 0, 0>::Zero()) | Creates a system that implements the optimal time-invariant linear quadratic regulator (LQR). |
| std::unique_ptr<AffineSystem<double>> LinearQuadraticRegulator(const System<double>& system, const Context<double>& context, const Eigen::Ref<const Eigen::MatrixXd>& Q, const Eigen::Ref<const Eigen::MatrixXd>& R, const Eigen::Ref<const Eigen::MatrixXd>& N = Eigen::Matrix<double, 0, 0>::Zero(), int input_port_index = 0) | Linearizes the System around the specified Context, computes the optimal time-invariant linear quadratic regulator (LQR), and returns a System which implements that regulator in the original System's coordinates. |
| Eigen::MatrixXd ControllabilityMatrix(const LinearSystem<double>& sys) | Returns the controllability matrix: R = [B, AB, ..., A^{n-1}B]. |
| bool IsControllable(const LinearSystem<double>& sys, std::optional<double> threshold = std::nullopt) | Returns true iff the controllability matrix is full row rank. |
| bool IsStabilizable(const LinearSystem<double>& sys, std::optional<double> threshold = std::nullopt) | Returns true iff the system is stabilizable. |
Eigen::MatrixXd drake::systems::ControllabilityMatrix(const LinearSystem<double>& sys)

Returns the controllability matrix: R = [B, AB, ..., A^{n-1}B].

bool drake::systems::IsControllable(const LinearSystem<double>& sys, std::optional<double> threshold = std::nullopt)

Returns true iff the controllability matrix is full row rank.

bool drake::systems::IsStabilizable(const LinearSystem<double>& sys, std::optional<double> threshold = std::nullopt)

Returns true iff the system is stabilizable.
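For reference, here is a minimal sketch (not taken from the reference text) that runs these analysis helpers on a hand-built double-integrator LinearSystem. The (A, B, C, D) constructor and the include path follow Drake's linear_system.h as the author understands it and should be double-checked.

```cpp
#include <iostream>

#include <Eigen/Dense>
#include "drake/systems/primitives/linear_system.h"

int main() {
  // Double integrator: qddot = u, with full-state output.
  Eigen::Matrix2d A;
  A << 0, 1,
       0, 0;
  const Eigen::Vector2d B(0, 1);
  const Eigen::Matrix2d C = Eigen::Matrix2d::Identity();
  const Eigen::Vector2d D = Eigen::Vector2d::Zero();
  const drake::systems::LinearSystem<double> sys(A, B, C, D);

  // R = [B, AB] for this two-state system.
  const Eigen::MatrixXd R = drake::systems::ControllabilityMatrix(sys);
  std::cout << "controllability matrix:\n" << R << "\n";
  std::cout << "controllable?  " << drake::systems::IsControllable(sys) << "\n";
  std::cout << "stabilizable?  " << drake::systems::IsStabilizable(sys) << "\n";
  return 0;
}
```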
Eigen::VectorXd drake::systems::controllers::LinearProgrammingApproximateDynamicProgramming(Simulator<double>* simulator, const std::function<double(const Context<double>& context)>& cost_function, const std::function<symbolic::Expression(const Eigen::Ref<const Eigen::VectorXd>& state, const VectorX<symbolic::Variable>& parameters)>& linearly_parameterized_cost_to_go_function, int num_parameters, const Eigen::Ref<const Eigen::MatrixXd>& state_samples, const Eigen::Ref<const Eigen::MatrixXd>& input_samples, double time_step, const DynamicProgrammingOptions& options = DynamicProgrammingOptions())
Implements the Linear Programming approach to approximate dynamic programming.
It optimizes the linear program

    maximize ∑ Jₚ(x)
    subject to ∀x, ∀u: Jₚ(x) ≤ g(x,u) + γ Jₚ(f(x,u)),

where g(x,u) is the one-step cost, Jₚ(x) is a (linearly) parameterized cost-to-go function with parameter vector p, and γ is the discount factor in options.

For background and a description of this algorithm, see http://underactuated.csail.mit.edu/underactuated.html?chapter=dp . The system to be optimized must currently have only continuous state, and it is assumed to be time-invariant. This code makes a discrete-time approximation (using time_step) for the value iteration update.
Parameters

| Parameter | Description |
| simulator | Contains the reference to the System being optimized and to a Context for that system, which may contain non-default Parameters, etc. The simulator is run for time_step seconds from every pair of input/state sample points in order to approximate the dynamics; all of the simulation parameters (integrator, etc.) are relevant during that evaluation. |
| cost_function | The continuous-time instantaneous cost. This implementation of the discrete-time formulation above uses the approximation g(x,u) = time_step * cost_function(x,u). |
| linearly_parameterized_cost_to_go_function | Must define a function to approximate the cost-to-go, which takes the state vector as the first input and the parameter vector as the second input. This can be any function of the form Jₚ(x) = ∑ pᵢ φᵢ(x). This algorithm will pass in a VectorX of symbolic::Variable in order to set up the linear program. |
| state_samples | A list of sample states (one per column) at which to apply the optimization constraints and the objective. |
| input_samples | A list of inputs (one per column) which are evaluated at every sample point. |
| time_step | A time in seconds used for the discrete-time approximation. |
| options | Optional DynamicProgrammingOptions structure. |
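As an illustration only (not from the reference text), the sketch below applies this function to a scalar single-integrator plant with a quadratic one-step cost and a single quadratic basis function Jₚ(x) = p₀x². The include paths, the DynamicProgrammingOptions::discount_factor field, and the sample grids are the author's assumptions and should be checked against the headers.

```cpp
#include <Eigen/Dense>

#include "drake/systems/analysis/simulator.h"
#include "drake/systems/controllers/dynamic_programming.h"
#include "drake/systems/primitives/linear_system.h"

int main() {
  using drake::symbolic::Expression;
  using drake::symbolic::Variable;

  // Scalar single integrator: xdot = u.
  Eigen::Matrix<double, 1, 1> A, B, C, D;
  A << 0.0;  B << 1.0;  C << 1.0;  D << 0.0;
  drake::systems::LinearSystem<double> plant(A, B, C, D);
  drake::systems::Simulator<double> simulator(plant);

  // Continuous-time instantaneous cost: g = x² (input cost omitted for brevity).
  const auto cost = [](const drake::systems::Context<double>& context) {
    const double x = context.get_continuous_state_vector()[0];
    return x * x;
  };

  // Linearly parameterized cost-to-go: Jₚ(x) = p₀ x².
  const auto cost_to_go = [](const Eigen::Ref<const Eigen::VectorXd>& state,
                             const drake::VectorX<Variable>& p) -> Expression {
    return p[0] * state[0] * state[0];
  };

  // Sample grids over state and input (one sample per column).
  const Eigen::RowVectorXd state_samples =
      Eigen::RowVectorXd::LinSpaced(21, -1.0, 1.0);
  const Eigen::RowVectorXd input_samples =
      Eigen::RowVectorXd::LinSpaced(9, -1.0, 1.0);

  drake::systems::controllers::DynamicProgrammingOptions options;
  options.discount_factor = 0.99;  // Assumed field name; see the header.

  // p holds the optimized cost-to-go parameters (here, the single coefficient p₀).
  const Eigen::VectorXd p =
      drake::systems::controllers::LinearProgrammingApproximateDynamicProgramming(
          &simulator, cost, cost_to_go, /* num_parameters = */ 1,
          state_samples, input_samples, /* time_step = */ 0.01, options);
  return 0;
}
```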
std::unique_ptr<LinearSystem<double>> drake::systems::controllers::LinearQuadraticRegulator(const LinearSystem<double>& system, const Eigen::Ref<const Eigen::MatrixXd>& Q, const Eigen::Ref<const Eigen::MatrixXd>& R, const Eigen::Ref<const Eigen::MatrixXd>& N = Eigen::Matrix<double, 0, 0>::Zero())
Creates a system that implements the optimal time-invariant linear quadratic regulator (LQR).
If system is a continuous-time system, then solves the continuous-time LQR problem:
\[ \min_u \int_0^\infty x^T(t)Qx(t) + u^T(t)Ru(t) + 2x^T(t)Nu(t) \, dt. \]
If system is a discrete-time system, then solves the discrete-time LQR problem:
\[ \min_u \sum_0^\infty x^T[n]Qx[n] + u^T[n]Ru[n] + 2x^T[n]Nu[n]. \]
Parameters

| Parameter | Description |
| system | The System to be controlled. |
| Q | A symmetric positive semi-definite cost matrix of size num_states x num_states. |
| R | A symmetric positive definite cost matrix of size num_inputs x num_inputs. |
| N | A cost matrix of size num_states x num_inputs. |

Throws std::exception if R is not positive definite.
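For context, here is a minimal hedged sketch (not part of the reference text) that builds a continuous-time double integrator and requests its LQR; the include paths are assumed to follow Drake's usual layout.

```cpp
#include <memory>

#include <Eigen/Dense>
#include "drake/systems/controllers/linear_quadratic_regulator.h"
#include "drake/systems/primitives/linear_system.h"

int main() {
  // Continuous-time double integrator: qddot = u.
  Eigen::Matrix2d A;
  A << 0, 1,
       0, 0;
  const Eigen::Vector2d B(0, 1);
  const Eigen::Matrix2d C = Eigen::Matrix2d::Identity();
  const Eigen::Vector2d D = Eigen::Vector2d::Zero();
  const drake::systems::LinearSystem<double> plant(A, B, C, D);

  // Quadratic state and input costs.
  const Eigen::Matrix2d Q = Eigen::Matrix2d::Identity();
  const Eigen::Matrix<double, 1, 1> R = Eigen::Matrix<double, 1, 1>::Identity();

  // The returned LinearSystem maps the plant state x to the control u = -Kx.
  std::unique_ptr<drake::systems::LinearSystem<double>> regulator =
      drake::systems::controllers::LinearQuadraticRegulator(plant, Q, R);
  return 0;
}
```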
std::unique_ptr<AffineSystem<double>> drake::systems::controllers::LinearQuadraticRegulator(const System<double>& system, const Context<double>& context, const Eigen::Ref<const Eigen::MatrixXd>& Q, const Eigen::Ref<const Eigen::MatrixXd>& R, const Eigen::Ref<const Eigen::MatrixXd>& N = Eigen::Matrix<double, 0, 0>::Zero(), int input_port_index = 0)
Linearizes the System around the specified Context, computes the optimal time-invariant linear quadratic regulator (LQR), and returns a System which implements that regulator in the original System's coordinates.
If system is a continuous-time system, then solves the continuous-time LQR problem:
\[ \min_u \int_0^\infty (x-x_0)^TQ(x-x_0) + (u-u_0)^TR(u-u_0) + 2 (x-x_0)^TN(u-u_0) dt. \]
If system is a discrete-time system, then solves the discrete-time LQR problem:
\[ \min_u \sum_0^\infty (x-x_0)^TQ(x-x_0) + (u-u_0)^TR(u-u_0) + 2(x-x_0)^TN(u-u_0), \]
where \( x_0 \) is the nominal state and \( u_0 \) is the nominal input. The system is considered discrete if it has a single discrete state vector and a single unique periodic update event declared.
Parameters

| Parameter | Description |
| system | The System to be controlled. |
| context | Defines the desired state and control input to regulate the system to. Note that this state/input must be an equilibrium point of the system. See drake::systems::Linearize for more details. |
| Q | A symmetric positive semi-definite cost matrix of size num_states x num_states. |
| R | A symmetric positive definite cost matrix of size num_inputs x num_inputs. |
| N | A cost matrix of size num_states x num_inputs. If the matrix is zero-sized, N will be treated as a num_states x num_inputs zero matrix. |
| input_port_index | The index of the input port to linearize around. |

Throws std::exception if R is not positive definite.
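To make the equilibrium requirement concrete, here is a hedged sketch (not from the reference text) that defines a toy pendulum as a LeafSystem, places its context at the upright fixed point with zero torque, and asks for the stabilizing LQR. The framework helpers used here (DeclareVectorInputPort with a size, DeclareContinuousState, SetContinuousState, FixValue) reflect a recent Drake API as the author understands it and should be verified against the framework documentation.

```cpp
#include <cmath>

#include <Eigen/Dense>
#include "drake/systems/controllers/linear_quadratic_regulator.h"
#include "drake/systems/framework/leaf_system.h"

namespace {

// Toy pendulum (unit mass and length, measured from straight down):
//   thetaddot = tau - g*sin(theta) - b*thetadot.
class Pendulum final : public drake::systems::LeafSystem<double> {
 public:
  Pendulum() {
    DeclareVectorInputPort("tau", 1);
    DeclareContinuousState(2);  // State is [theta, thetadot].
  }

 private:
  void DoCalcTimeDerivatives(
      const drake::systems::Context<double>& context,
      drake::systems::ContinuousState<double>* derivatives) const override {
    const Eigen::VectorXd x =
        context.get_continuous_state_vector().CopyToVector();
    const double tau = get_input_port().Eval(context)[0];
    Eigen::Vector2d xdot;
    xdot << x[1], tau - 9.81 * std::sin(x[0]) - 0.1 * x[1];
    derivatives->SetFromVector(xdot);
  }
};

}  // namespace

int main() {
  const Pendulum pendulum;
  auto context = pendulum.CreateDefaultContext();

  // The upright configuration with zero torque is an equilibrium point.
  context->SetContinuousState(Eigen::Vector2d(M_PI, 0.0));
  pendulum.get_input_port().FixValue(context.get(), 0.0);

  const Eigen::Matrix2d Q = 10.0 * Eigen::Matrix2d::Identity();
  const Eigen::Matrix<double, 1, 1> R = Eigen::Matrix<double, 1, 1>::Identity();

  // Returns a std::unique_ptr<AffineSystem<double>> regulator expressed in the
  // pendulum's own coordinates.
  auto regulator = drake::systems::controllers::LinearQuadraticRegulator(
      pendulum, *context, Q, R);
  return 0;
}
```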
std::unique_ptr<System<double>> drake::systems::controllers::MakeFiniteHorizonLinearQuadraticRegulator(const System<double>& system, const Context<double>& context, double t0, double tf, const Eigen::Ref<const Eigen::MatrixXd>& Q, const Eigen::Ref<const Eigen::MatrixXd>& R, const FiniteHorizonLinearQuadraticRegulatorOptions& options = FiniteHorizonLinearQuadraticRegulatorOptions())
Variant of FiniteHorizonLinearQuadraticRegulator that returns the regulator (controller) as a System, with a single "plant_state" input for the estimated plant state and a single "control" output for the regulator control output.
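As a final hedged illustration (not part of the reference text), the sketch below builds this regulator for a double integrator over a five-second horizon and looks up its ports by the documented names. Fixing the plant's input in the context to define the nominal input is the author's assumption about how the finite-horizon API expects the nominal operating point to be supplied.

```cpp
#include <memory>

#include <Eigen/Dense>
#include "drake/systems/controllers/finite_horizon_linear_quadratic_regulator.h"
#include "drake/systems/primitives/linear_system.h"

int main() {
  // Continuous-time double integrator: qddot = u.
  Eigen::Matrix2d A;
  A << 0, 1,
       0, 0;
  const Eigen::Vector2d B(0, 1);
  const Eigen::Matrix2d C = Eigen::Matrix2d::Identity();
  const Eigen::Vector2d D = Eigen::Vector2d::Zero();
  const drake::systems::LinearSystem<double> plant(A, B, C, D);

  // Nominal operating point: the default (zero) state with zero input.
  auto context = plant.CreateDefaultContext();
  plant.get_input_port().FixValue(context.get(), 0.0);

  const Eigen::Matrix2d Q = Eigen::Matrix2d::Identity();
  const Eigen::Matrix<double, 1, 1> R = Eigen::Matrix<double, 1, 1>::Identity();

  std::unique_ptr<drake::systems::System<double>> regulator =
      drake::systems::controllers::MakeFiniteHorizonLinearQuadraticRegulator(
          plant, *context, /* t0 = */ 0.0, /* tf = */ 5.0, Q, R);

  // Ports named in the documentation above; these would be connected to the
  // plant's state output and actuation input in a DiagramBuilder.
  const auto& state_in = regulator->GetInputPort("plant_state");
  const auto& control_out = regulator->GetOutputPort("control");
  (void)state_in;
  (void)control_out;
  return 0;
}
```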