pydrake.autodiffutils

Bindings for Eigen AutoDiff Scalars

class pydrake.autodiffutils.AutoDiffXd
__init__(*args, **kwargs)

Overloaded function.

  1. __init__(self: pydrake.autodiffutils.AutoDiffXd, value: float) -> None

Constructs a value with empty derivatives.

  2. __init__(self: pydrake.autodiffutils.AutoDiffXd, value: float, derivatives: numpy.ndarray[numpy.float64[m, 1]]) -> None

Constructs a value with the given derivatives.

  3. __init__(self: pydrake.autodiffutils.AutoDiffXd, value: float, size: int, offset: int) -> None

Constructs a value with a single partial derivative of 1.0 at the given offset in a vector of size otherwise-zero derivatives.
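
A minimal sketch of the three constructor overloads (the printed derivative vectors follow from the overload semantics described above):

    import numpy as np
    from pydrake.autodiffutils import AutoDiffXd

    a = AutoDiffXd(1.5)                        # overload 1: empty derivatives
    b = AutoDiffXd(2.0, np.array([1.0, 0.0]))  # overload 2: explicit derivatives
    c = AutoDiffXd(3.0, 2, 1)                  # overload 3: size 2, offset 1

    print(a.derivatives())  # []
    print(b.derivatives())  # [1. 0.]
    print(c.derivatives())  # [0. 1.]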

abs(self: pydrake.autodiffutils.AutoDiffXd) → pydrake.autodiffutils.AutoDiffXd
acos(self: pydrake.autodiffutils.AutoDiffXd) → pydrake.autodiffutils.AutoDiffXd
arccos()

acos(self: pydrake.autodiffutils.AutoDiffXd) -> pydrake.autodiffutils.AutoDiffXd

arcsin()

asin(self: pydrake.autodiffutils.AutoDiffXd) -> pydrake.autodiffutils.AutoDiffXd

arctan(self: pydrake.autodiffutils.AutoDiffXd) → pydrake.autodiffutils.AutoDiffXd
arctan2()

atan2(self: pydrake.autodiffutils.AutoDiffXd, x: pydrake.autodiffutils.AutoDiffXd) -> pydrake.autodiffutils.AutoDiffXd

Uses self for y in atan2(y, x).

asin(self: pydrake.autodiffutils.AutoDiffXd) → pydrake.autodiffutils.AutoDiffXd
atan(self: pydrake.autodiffutils.AutoDiffXd) → pydrake.autodiffutils.AutoDiffXd
atan2(self: pydrake.autodiffutils.AutoDiffXd, x: pydrake.autodiffutils.AutoDiffXd) → pydrake.autodiffutils.AutoDiffXd

Uses self for y in atan2(y, x).

ceil(self: pydrake.autodiffutils.AutoDiffXd) → float
cos(self: pydrake.autodiffutils.AutoDiffXd) → pydrake.autodiffutils.AutoDiffXd
cosh(self: pydrake.autodiffutils.AutoDiffXd) → pydrake.autodiffutils.AutoDiffXd
derivatives(self: pydrake.autodiffutils.AutoDiffXd) → numpy.ndarray[numpy.float64[m, 1]]
exp(self: pydrake.autodiffutils.AutoDiffXd) → pydrake.autodiffutils.AutoDiffXd
floor(self: pydrake.autodiffutils.AutoDiffXd) → float
log(self: pydrake.autodiffutils.AutoDiffXd) → pydrake.autodiffutils.AutoDiffXd
max(self: pydrake.autodiffutils.AutoDiffXd, arg0: pydrake.autodiffutils.AutoDiffXd) → pydrake.autodiffutils.AutoDiffXd
min(self: pydrake.autodiffutils.AutoDiffXd, arg0: pydrake.autodiffutils.AutoDiffXd) → pydrake.autodiffutils.AutoDiffXd
pow(self: pydrake.autodiffutils.AutoDiffXd, arg0: float) → pydrake.autodiffutils.AutoDiffXd
sin(self: pydrake.autodiffutils.AutoDiffXd) → pydrake.autodiffutils.AutoDiffXd
sinh(self: pydrake.autodiffutils.AutoDiffXd) → pydrake.autodiffutils.AutoDiffXd
sqrt(self: pydrake.autodiffutils.AutoDiffXd) → pydrake.autodiffutils.AutoDiffXd
tan(self: pydrake.autodiffutils.AutoDiffXd) → pydrake.autodiffutils.AutoDiffXd
tanh(self: pydrake.autodiffutils.AutoDiffXd) → pydrake.autodiffutils.AutoDiffXd
value(self: pydrake.autodiffutils.AutoDiffXd) → float
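
A short sketch of propagating derivatives through these methods; the commented results follow from the chain rule:

    import numpy as np
    from pydrake.autodiffutils import AutoDiffXd

    # One independent variable: x = 0.5 with dx/dx = 1.
    x = AutoDiffXd(0.5, np.array([1.0]))

    # Derivatives propagate through arithmetic and the methods above.
    y = x.sin() * x.exp()
    print(y.value())        # sin(0.5) * exp(0.5)
    print(y.derivatives())  # [(cos(0.5) + sin(0.5)) * exp(0.5)]

    # atan2 uses self as y: theta = atan2(y, x).
    theta = x.atan2(AutoDiffXd(1.0, np.array([0.0])))
    print(theta.value())    # atan2(0.5, 1.0)
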
pydrake.autodiffutils.ExtractGradient(auto_diff_matrix: numpy.ndarray[object[m, n]]) → numpy.ndarray[numpy.float64[m, n]]

Extracts the derivatives() portion from a matrix of AutoDiffScalar entries. (Each entry contains a value and derivatives.)

Parameter auto_diff_matrix:

An object whose Eigen type represents a matrix of AutoDiffScalar entries.

Parameter num_derivatives:

(Optional) The number of derivatives to return in case the input matrix has none, which we interpret as num_derivatives zeroes. If num_derivatives is supplied and the input matrix has derivatives, the sizes must match.

Returns gradient_matrix:

An Eigen::Matrix with number of rows equal to the total size (rows x cols) of the input matrix and number of columns equal to the number of derivatives. Each output row corresponds to one entry of the input matrix, using the input matrix storage order. For example, in the typical case of a ColMajor auto_diff_matrix, we have auto_diff_matrix(r, c).derivatives() == gradient_matrix.row(r + c * auto_diff_matrix.rows()).

Template parameter Derived:

An Eigen type representing a matrix with AutoDiffScalar entries. The type will be inferred from the type of the auto_diff_matrix parameter at the call site.

Raises
  • RuntimeError if the input matrix has elements with inconsistent, non-zero numbers of derivatives.

  • RuntimeError if num_derivatives is specified but the input matrix has a different, non-zero number of derivatives.
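
A minimal sketch of extracting the gradient from a hand-built AutoDiff vector; the 2x1 input yields a 2x2 gradient with one row per entry:

    import numpy as np
    from pydrake.autodiffutils import AutoDiffXd, ExtractGradient

    x = np.array([[AutoDiffXd(1.0, np.array([1.0, 0.0]))],
                  [AutoDiffXd(2.0, np.array([0.0, 1.0]))]])

    print(ExtractGradient(x))
    # [[1. 0.]
    #  [0. 1.]]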

pydrake.autodiffutils.ExtractValue(auto_diff_matrix: numpy.ndarray[object[m, n]]) → numpy.ndarray[numpy.float64[m, n]]

Extracts the value() portion from a matrix of AutoDiffScalar entries. (Each entry contains a value and some derivatives.)

Parameter auto_diff_matrix:

An object whose Eigen type represents a matrix of AutoDiffScalar entries.

Returns value:

An Eigen::Matrix of the same dimensions as the input matrix, but containing only the value portion of each entry, without the derivatives.

Template parameter Derived:

An Eigen type representing a matrix with AutoDiffScalar entries. The type will be inferred from the type of the auto_diff_matrix parameter at the call site.

Note

Drake provides several similar functions: DiscardGradient() is specialized so that it can be applied to a Matrix<T> where T could be an AutoDiffScalar or an ordinary double, in which case it returns the original matrix at no cost. DiscardZeroGradient() is similar but requires that the discarded gradient was zero. drake::ExtractDoubleOrThrow() has many specializations, including one for Matrix<AutoDiffScalar> that behaves identically to ExtractValue().

See also

DiscardGradient(), drake::ExtractDoubleOrThrow()
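
A companion sketch for ExtractValue, built in the same style as the ExtractGradient example above:

    import numpy as np
    from pydrake.autodiffutils import AutoDiffXd, ExtractValue

    x = np.array([[AutoDiffXd(1.0, np.array([1.0, 0.0])),
                   AutoDiffXd(2.0, np.array([0.0, 1.0]))]])

    print(ExtractValue(x))  # [[1. 2.]] -- same 1x2 shape, derivatives discarded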

pydrake.autodiffutils.InitializeAutoDiff(*args, **kwargs)

Overloaded function.

  1. InitializeAutoDiff(value: numpy.ndarray[numpy.float64[m, n]], num_derivatives: Optional[int] = None, deriv_num_start: Optional[int] = None) -> numpy.ndarray[object[m, n]]

Initializes a single AutoDiff matrix given the corresponding value matrix.

Creates an AutoDiff matrix that matches value in size, with a derivative vector of runtime size num_derivatives. Sets its values equal to value and, for each element i of auto_diff_matrix, sets derivative number deriv_num_start + i to one (all other derivatives are set to zero).

When value is a matrix (rather than just a vector), note in particular that the returned matrix uses the same storage order (ColMajor vs. RowMajor) and that the derivative numbers count up following the storage order of the elements of value.

Parameter value:

‘regular’ matrix of values

Parameter num_derivatives:

(Optional) size of the derivatives vector

Default: total size of the value matrix

Parameter deriv_num_start:

(Optional) starting index into derivative vector (i.e. element deriv_num_start in derivative vector corresponds to matrix(0, 0)).

Default: 0

Returns auto_diff_matrix:

The result as described above; a usage sketch follows the second overload below.

  2. InitializeAutoDiff(value: numpy.ndarray[numpy.float64[m, n]], gradient: numpy.ndarray[numpy.float64[m, n]]) -> numpy.ndarray[object[m, n]]

Returns an AutoDiff matrix given a matrix of values and a gradient matrix.

Parameter value:

The value matrix. Will be accessed with a single index.

Parameter gradient:

The gradient matrix. The number of rows must match the total size (nrow x ncol) of the value matrix. Derivatives of value(j) should be stored in row j of the gradient matrix.

Returns auto_diff_matrix:

The matrix of AutoDiffScalars. Will have the same dimensions and storage order as the value matrix.
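
A usage sketch covering both overloads; the keyword names follow the signatures above, and the commented results follow from the initialization rules described:

    import numpy as np
    from pydrake.autodiffutils import (
        ExtractGradient, ExtractValue, InitializeAutoDiff)

    # Overload 1: one unit derivative per element of value.
    x = InitializeAutoDiff(np.array([[1.0], [2.0]]))
    print(ExtractGradient(x))     # 2x2 identity

    # Overload 1 with an explicit derivative-vector size and starting offset.
    y = InitializeAutoDiff(np.array([[3.0]]), num_derivatives=3, deriv_num_start=2)
    print(y[0, 0].derivatives())  # [0. 0. 1.]

    # Overload 2: row j of the gradient holds the derivatives of value(j).
    z = InitializeAutoDiff(np.array([[1.0], [2.0]]),
                           np.array([[1.0, 0.0], [0.0, 2.0]]))
    print(ExtractValue(z))        # recovers the original 2x1 value matrix
    print(ExtractGradient(z))     # recovers the given 2x2 gradient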

pydrake.autodiffutils.InitializeAutoDiffTuple(*args)

Given a series of array_like input arguments, create a tuple of corresponding AutoDiff matrices with values equal to the input matrices and properly initialized derivative vectors.

The size of the derivative vector of each element of the matrices in the output tuple will be the same, and will equal the sum of the number of elements of the matrices in args.

The 0th element of the derivative vectors will correspond to the derivative with respect to the 0th element of the first argument. Subsequent derivative vector elements correspond first to subsequent elements of the first input argument (traversed first by row, then by column), and so on for subsequent arguments.

This is a pythonic implementation of drake::math::InitializeAutoDiffTuple in C++.
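
A short sketch of the derivative-vector layout: each output element carries 3 derivatives (2 from q followed by 1 from v). np.ravel is used only to read elements without assuming the output arrays' shapes.

    import numpy as np
    from pydrake.autodiffutils import InitializeAutoDiffTuple

    q = np.array([1.0, 2.0])
    v = np.array([3.0])

    q_ad, v_ad = InitializeAutoDiffTuple(q, v)

    print(np.ravel(q_ad)[0].derivatives())  # [1. 0. 0.]
    print(np.ravel(v_ad)[0].derivatives())  # [0. 0. 1.]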