A collection of algorithms for synthesizing feedback control.
A number of control design algorithms can also be found in the Controllers module (when the design produces a systems::System).
Functions

std::pair<std::unique_ptr<BarycentricMeshSystem<double>>, Eigen::RowVectorXd>
FittedValueIteration(Simulator<double>* simulator, const std::function<double(const Context<double>& context)>& cost_function, const math::BarycentricMesh<double>::MeshGrid& state_grid, const math::BarycentricMesh<double>::MeshGrid& input_grid, double time_step, const DynamicProgrammingOptions& options = DynamicProgrammingOptions())
    Implements Fitted Value Iteration on a (triangulated) Barycentric Mesh, which designs a state-feedback policy to minimize the infinite-horizon cost ∑ γⁿ g(x[n],u[n]), where γ is the discount factor in options.

FiniteHorizonLinearQuadraticRegulatorResult
FiniteHorizonLinearQuadraticRegulator(const System<double>& system, const Context<double>& context, double t0, double tf, const Eigen::Ref<const Eigen::MatrixXd>& Q, const Eigen::Ref<const Eigen::MatrixXd>& R, const FiniteHorizonLinearQuadraticRegulatorOptions& options = FiniteHorizonLinearQuadraticRegulatorOptions())
    Solves the differential Riccati equation to compute the optimal controller and optimal cost-to-go for the finite-horizon linear quadratic regulator.

LinearQuadraticRegulatorResult
LinearQuadraticRegulator(const Eigen::Ref<const Eigen::MatrixXd>& A, const Eigen::Ref<const Eigen::MatrixXd>& B, const Eigen::Ref<const Eigen::MatrixXd>& Q, const Eigen::Ref<const Eigen::MatrixXd>& R, const Eigen::Ref<const Eigen::MatrixXd>& N = Eigen::Matrix<double, 0, 0>::Zero(), const Eigen::Ref<const Eigen::MatrixXd>& F = Eigen::Matrix<double, 0, 0>::Zero())
    Computes the optimal feedback controller, u = -Kx, and the optimal cost-to-go J = x'Sx for the continuous-time problem.

LinearQuadraticRegulatorResult
DiscreteTimeLinearQuadraticRegulator(const Eigen::Ref<const Eigen::MatrixXd>& A, const Eigen::Ref<const Eigen::MatrixXd>& B, const Eigen::Ref<const Eigen::MatrixXd>& Q, const Eigen::Ref<const Eigen::MatrixXd>& R)
    Computes the optimal feedback controller, u = -Kx, and the optimal cost-to-go J = x'Sx for the discrete-time problem.
LinearQuadraticRegulatorResult drake::systems::controllers::DiscreteTimeLinearQuadraticRegulator(
    const Eigen::Ref<const Eigen::MatrixXd>& A,
    const Eigen::Ref<const Eigen::MatrixXd>& B,
    const Eigen::Ref<const Eigen::MatrixXd>& Q,
    const Eigen::Ref<const Eigen::MatrixXd>& R)
Computes the optimal feedback controller, u=-Kx, and the optimal cost-to-go J = x'Sx for the problem:
\[ x[n+1] = Ax[n] + Bu[n] \]
\[ \min_u \sum_{n=0}^\infty x[n]'Qx[n] + u[n]'Ru[n] \]
Parameters:
    A    The state-space dynamics matrix of size num_states x num_states.
    B    The state-space input matrix of size num_states x num_inputs.
    Q    A symmetric positive semi-definite cost matrix of size num_states x num_states.
    R    A symmetric positive definite cost matrix of size num_inputs x num_inputs.

Throws:
    std::exception if R is not positive definite.
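For illustration, a minimal usage sketch follows. It assumes the function is declared in drake/systems/controllers/linear_quadratic_regulator.h and that LinearQuadraticRegulatorResult exposes K and S fields; the plant and cost values are illustrative only.

#include <Eigen/Dense>
#include "drake/systems/controllers/linear_quadratic_regulator.h"

int main() {
  const double h = 0.1;  // discretization time step, in seconds
  // Discrete-time double integrator: x[n+1] = A x[n] + B u[n].
  Eigen::MatrixXd A(2, 2), B(2, 1);
  A << 1, h,
       0, 1;
  B << 0.5 * h * h,
       h;
  const Eigen::MatrixXd Q = Eigen::MatrixXd::Identity(2, 2);  // state cost
  Eigen::MatrixXd R(1, 1);
  R << 1.0;  // input cost
  const auto result =
      drake::systems::controllers::DiscreteTimeLinearQuadraticRegulator(
          A, B, Q, R);
  // result.K is the optimal gain in u = -K x; result.S defines J = x'Sx.
  return 0;
}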
FiniteHorizonLinearQuadraticRegulatorResult drake::systems::controllers::FiniteHorizonLinearQuadraticRegulator(
    const System<double>& system,
    const Context<double>& context,
    double t0,
    double tf,
    const Eigen::Ref<const Eigen::MatrixXd>& Q,
    const Eigen::Ref<const Eigen::MatrixXd>& R,
    const FiniteHorizonLinearQuadraticRegulatorOptions& options = FiniteHorizonLinearQuadraticRegulatorOptions())
Solves the differential Riccati equation to compute the optimal controller and optimal cost-to-go for the finite-horizon linear quadratic regulator:

\[ \min_u \; (x(t_f)-x_d(t_f))'Q_f(x(t_f)-x_d(t_f)) + \int_{t_0}^{t_f} \left[ (x(t)-x_d(t))'Q(x(t)-x_d(t)) + (u(t)-u_d(t))'R(u(t)-u_d(t)) + 2(x(t)-x_d(t))'N(u(t)-u_d(t)) \right] dt \]
\[ \text{s.t. } \dot{x} - \dot{x}_0(t) = A(t)(x(t) - x_0(t)) + B(t)(u(t) - u_0(t)) + c(t) \]

where A(t), B(t), and c(t) are taken from the gradients of the continuous-time dynamics ẋ = f(t,x,u): A(t) = dfdx(t, x0(t), u0(t)), B(t) = dfdu(t, x0(t), u0(t)), and c(t) = f(t, x0(t), u0(t)) - ẋ0(t). The nominal trajectories x0(t) and u0(t) can be specified in options; otherwise they are taken to be constant trajectories with values given by context.
Parameters:
    system    a System<double> representing the plant.
    context   a Context<double> used to pass the default input, state, and parameters. Note: Use options to specify time-varying nominal state and/or input trajectories.
    t0        is the initial time.
    tf        is the final time (with tf > t0).
    Q         is n x n positive semi-definite.
    R         is m x m positive definite.
    options   is the optional FiniteHorizonLinearQuadraticRegulatorOptions.
Precondition: system must be a System<double> with (only) n continuous state variables and m inputs. It must be convertible to System<AutoDiffXd>.
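A rough usage sketch follows; it assumes a LinearSystem<double> plant from drake/systems/primitives/linear_system.h and that fixing the plant's input port on the context is an acceptable way to supply the constant nominal input. The plant, horizon, and cost values are illustrative only.

#include <Eigen/Dense>
#include "drake/systems/controllers/finite_horizon_linear_quadratic_regulator.h"
#include "drake/systems/primitives/linear_system.h"

int main() {
  using namespace drake;
  // Continuous-time double integrator: xdot = A x + B u, y = x.
  Eigen::MatrixXd A(2, 2), B(2, 1), C(2, 2), D(2, 1);
  A << 0, 1,
       0, 0;
  B << 0,
       1;
  C.setIdentity();
  D.setZero();
  systems::LinearSystem<double> plant(A, B, C, D);

  // The context supplies the (constant) nominal state and input.
  auto context = plant.CreateDefaultContext();
  plant.get_input_port().FixValue(context.get(), Eigen::VectorXd::Zero(1));

  const Eigen::MatrixXd Q = Eigen::MatrixXd::Identity(2, 2);
  const Eigen::MatrixXd R = Eigen::MatrixXd::Identity(1, 1);
  const double t0 = 0.0, tf = 2.0;
  const auto result =
      systems::controllers::FiniteHorizonLinearQuadraticRegulator(
          plant, *context, t0, tf, Q, R);
  // result holds the time-varying feedback and cost-to-go over [t0, tf].
  return 0;
}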
std::pair<std::unique_ptr<BarycentricMeshSystem<double>>, Eigen::RowVectorXd> drake::systems::controllers::FittedValueIteration(
    Simulator<double>* simulator,
    const std::function<double(const Context<double>& context)>& cost_function,
    const math::BarycentricMesh<double>::MeshGrid& state_grid,
    const math::BarycentricMesh<double>::MeshGrid& input_grid,
    double time_step,
    const DynamicProgrammingOptions& options = DynamicProgrammingOptions())
Implements Fitted Value Iteration on a (triangulated) Barycentric Mesh, which designs a state-feedback policy to minimize the infinite-horizon cost ∑ γⁿ g(x[n],u[n]), where γ is the discount factor in options.

For background and a description of this algorithm, see http://underactuated.csail.mit.edu/underactuated.html?chapter=dp . It currently requires that the system being optimized have only continuous state, and the system is assumed to be time invariant. This code makes a discrete-time approximation (using time_step) for the value iteration update.
Parameters:
    simulator      contains the reference to the System being optimized and to a Context for that system, which may contain non-default Parameters, etc. The simulator is run for time_step seconds from every point on the mesh in order to approximate the dynamics; all of the simulation parameters (integrator, etc.) are relevant during that evaluation.
    cost_function  is the continuous-time instantaneous cost. This implementation of the discrete-time formulation above uses the approximation g(x,u) = time_step * cost_function(x,u).
    state_grid     defines the mesh on the state space used to represent the cost-to-go function and the resulting policy.
    input_grid     defines the discrete action space used in the value iteration update.
    time_step      a time in seconds used for the discrete-time approximation.
    options        optional DynamicProgrammingOptions structure.
Returns a pair containing the resulting policy, implemented as a BarycentricMeshSystem, and the RowVectorXd of expected cost-to-go values on the mesh defined by state_grid. The policy has a single vector input (which is the continuous state of the system passed in through simulator) and a single vector output (which is the input of the system passed in through simulator).
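A rough sketch of a call is given below. It assumes that MeshGrid is a vector of per-dimension std::set<double> grids, that the Integrator primitive from drake/systems/primitives/integrator.h can serve as the plant, and that DynamicProgrammingOptions has a discount_factor member; the grid spacings and quadratic cost are illustrative only.

#include <set>
#include <vector>

#include "drake/systems/analysis/simulator.h"
#include "drake/systems/controllers/dynamic_programming.h"
#include "drake/systems/primitives/integrator.h"

int main() {
  using namespace drake;
  // Single-integrator plant: xdot = u (one continuous state, one input).
  systems::Integrator<double> plant(1);
  systems::Simulator<double> simulator(plant);

  // One set of grid points per state / input dimension.
  math::BarycentricMesh<double>::MeshGrid state_grid(1), input_grid(1);
  for (double x = -2.0; x <= 2.0; x += 0.25) state_grid[0].insert(x);
  for (double u = -1.0; u <= 1.0; u += 0.5) input_grid[0].insert(u);

  // Instantaneous (continuous-time) cost g(x) = x^2, read from the context.
  const auto cost = [](const systems::Context<double>& context) -> double {
    const double x = context.get_continuous_state_vector()[0];
    return x * x;
  };

  const double time_step = 0.05;
  systems::controllers::DynamicProgrammingOptions options;
  options.discount_factor = 0.999;

  auto result = systems::controllers::FittedValueIteration(
      &simulator, cost, state_grid, input_grid, time_step, options);
  // result.first is the policy (a BarycentricMeshSystem); result.second is
  // the row vector of cost-to-go values at the state_grid points.
  return 0;
}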
LinearQuadraticRegulatorResult drake::systems::controllers::LinearQuadraticRegulator(
    const Eigen::Ref<const Eigen::MatrixXd>& A,
    const Eigen::Ref<const Eigen::MatrixXd>& B,
    const Eigen::Ref<const Eigen::MatrixXd>& Q,
    const Eigen::Ref<const Eigen::MatrixXd>& R,
    const Eigen::Ref<const Eigen::MatrixXd>& N = Eigen::Matrix<double, 0, 0>::Zero(),
    const Eigen::Ref<const Eigen::MatrixXd>& F = Eigen::Matrix<double, 0, 0>::Zero())
Computes the optimal feedback controller, u=-Kx, and the optimal cost-to-go J = x'Sx for the problem:
\[ \min_u \int_0^\infty (x'Qx + u'Ru + 2x'Nu)\, dt \]
\[ \text{s.t. } \dot{x} = Ax + Bu, \qquad Fx = 0 \]
Parameters:
    A    The state-space dynamics matrix of size num_states x num_states.
    B    The state-space input matrix of size num_states x num_inputs.
    Q    A symmetric positive semi-definite cost matrix of size num_states x num_states.
    R    A symmetric positive definite cost matrix of size num_inputs x num_inputs.
    N    A cost matrix of size num_states x num_inputs. If N.rows() == 0, N is treated as a num_states x num_inputs zero matrix.
    F    A constraint matrix of size num_constraints x num_states. rank(F) must be < num_states. If F.rows() == 0, F is treated as a 0 x num_states zero matrix.

Throws:
    std::exception if R is not positive definite.
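For comparison with the discrete-time variant above, a minimal continuous-time sketch follows, under the same assumptions about the header location and the K and S fields of the result; the plant and costs are illustrative only.

#include <Eigen/Dense>
#include "drake/systems/controllers/linear_quadratic_regulator.h"

int main() {
  // Continuous-time double integrator: xdot = A x + B u.
  Eigen::MatrixXd A(2, 2), B(2, 1);
  A << 0, 1,
       0, 0;
  B << 0,
       1;
  const Eigen::MatrixXd Q = Eigen::MatrixXd::Identity(2, 2);  // state cost
  Eigen::MatrixXd R(1, 1);
  R << 1.0;  // input cost
  const auto result =
      drake::systems::controllers::LinearQuadraticRegulator(A, B, Q, R);
  // result.K is the optimal gain in u = -K x; result.S defines J = x'Sx.
  return 0;
}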