Linear Algebra

General convenience routines to solve linear problems.

starry.linalg.lnlike(design_matrix, data, *, C=None, cho_C=None, mu=0.0, L=None, cho_L=None, N=None, woodbury=True, lazy=None)

Compute the log marginal likelihood of the data given a design matrix.

Parameters
  • design_matrix (matrix) – The design matrix that transforms a vector from coefficient space to data space.

  • data (vector) – The observed dataset.

  • C (scalar, vector, or matrix) – The data covariance. This may be a scalar, in which case the noise is assumed to be homoscedastic; a vector, in which case the covariance is assumed to be diagonal; or a matrix specifying the full covariance of the dataset. Default is None. Either C or cho_C must be provided.

  • cho_C (matrix) – The lower Cholesky factorization of the data covariance matrix. Defaults to None. Either C or cho_C must be provided.

  • mu (scalar or vector) – The prior mean on the regression coefficients. Default is zero.

  • L (scalar, vector, or matrix) – The prior covariance. This may be a scalar, in which case the covariance is assumed to be homoscedastic; a vector, in which case the covariance is assumed to be diagonal; or a matrix specifying the full prior covariance. Default is None. Either L or cho_L must be provided.

  • cho_L (matrix) – The lower Cholesky factorization of the prior covariance matrix. Defaults to None. Either L or cho_L must be provided.

  • N (int, optional) – The number of regression coefficients. This is necessary only if both mu and L are provided as scalars.

  • woodbury (bool, optional) – Solve the linear problem using the Woodbury identity? Default is True. The Woodbury identity speeds up the matrix operations when the number of data points is much larger than the number of regression coefficients; in this limit it can speed up the computation by more than an order of magnitude. However, the Woodbury identity can be numerically unstable, so if you are getting strange results, try disabling it. Disabling it is also a good idea when there are few data points and a large number of regressors.

Returns

The log marginal likelihood, a scalar.
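The quantity computed here is the standard Gaussian log marginal likelihood: marginalizing over the regression coefficients gives data ~ N(M @ mu, C + M @ L @ M.T). The NumPy sketch below illustrates this math directly, including the Woodbury-identity form toggled by the woodbury flag; the function names lnlike_dense and lnlike_woodbury are illustrative only and are not part of the starry API.

```python
import numpy as np

def lnlike_dense(M, y, C, mu, L):
    """Log marginal likelihood of data y given design matrix M,
    data covariance C, and coefficient prior N(mu, L).
    Marginalizing over the coefficients, y ~ N(M @ mu, S)
    with S = C + M @ L @ M.T."""
    r = y - M @ mu
    S = C + M @ L @ M.T
    cho = np.linalg.cholesky(S)
    z = np.linalg.solve(cho, r)  # z @ z == r.T @ inv(S) @ r
    logdet = 2.0 * np.sum(np.log(np.diag(cho)))
    return -0.5 * (z @ z + logdet + len(y) * np.log(2 * np.pi))

def lnlike_woodbury(M, y, C, mu, L):
    """Same quantity, but inverting only an ncoeff x ncoeff matrix
    via the Woodbury identity and the matrix determinant lemma:
    inv(S) = Cinv - Cinv M inv(Linv + M.T Cinv M) M.T Cinv,
    ln|S|  = ln|Linv + M.T Cinv M| + ln|L| + ln|C|."""
    r = y - M @ mu
    Cinv = np.linalg.inv(C)
    Linv = np.linalg.inv(L)
    A = Linv + M.T @ Cinv @ M  # small: ncoeff x ncoeff
    Sinv_r = Cinv @ r - Cinv @ M @ np.linalg.solve(A, M.T @ (Cinv @ r))
    logdet = (np.linalg.slogdet(A)[1]
              + np.linalg.slogdet(L)[1]
              + np.linalg.slogdet(C)[1])
    return -0.5 * (r @ Sinv_r + logdet + len(y) * np.log(2 * np.pi))

# Tiny example: 5 data points, 2 regression coefficients
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 2))
y = rng.standard_normal(5)
C = 0.1 * np.eye(5)   # homoscedastic data covariance
mu = np.zeros(2)      # prior mean
L = np.eye(2)         # prior covariance
ll = lnlike_dense(M, y, C, mu, L)
```

The two forms agree; the Woodbury version pays off when the data covariance is cheap to invert (e.g. diagonal) and there are far more data points than coefficients.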

starry.linalg.solve(design_matrix, data, *, C=None, cho_C=None, mu=0.0, L=None, cho_L=None, N=None, lazy=None)

Solve the generalized least squares (GLS) problem.

Parameters
  • design_matrix (matrix) – The design matrix that transforms a vector from coefficient space to data space.

  • data (vector) – The observed dataset.

  • C (scalar, vector, or matrix) – The data covariance. This may be a scalar, in which case the noise is assumed to be homoscedastic; a vector, in which case the covariance is assumed to be diagonal; or a matrix specifying the full covariance of the dataset. Default is None. Either C or cho_C must be provided.

  • cho_C (matrix) – The lower Cholesky factorization of the data covariance matrix. Defaults to None. Either C or cho_C must be provided.

  • mu (scalar or vector) – The prior mean on the regression coefficients. Default is zero.

  • L (scalar, vector, or matrix) – The prior covariance. This may be a scalar, in which case the covariance is assumed to be homoscedastic; a vector, in which case the covariance is assumed to be diagonal; or a matrix specifying the full prior covariance. Default is None. Either L or cho_L must be provided.

  • cho_L (matrix) – The lower Cholesky factorization of the prior covariance matrix. Defaults to None. Either L or cho_L must be provided.

  • N (int, optional) – The number of regression coefficients. This is necessary only if both mu and L are provided as scalars.

Returns

A tuple containing the posterior mean for the regression coefficients (a vector) and the Cholesky factorization of the posterior covariance (a lower triangular matrix).
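For a Gaussian prior N(mu, L) on the coefficients and Gaussian noise with covariance C, the GLS posterior has covariance inv(M.T @ inv(C) @ M + inv(L)) and mean cov @ (M.T @ inv(C) @ y + inv(L) @ mu). The NumPy sketch below illustrates this math (it is not starry's actual implementation, and solve_gls is an illustrative name, not part of the starry API):

```python
import numpy as np

def solve_gls(M, y, C, mu, L):
    """Posterior mean and lower Cholesky factor of the posterior
    covariance for y = M @ x + noise, noise ~ N(0, C), with a
    Gaussian prior x ~ N(mu, L) on the coefficients x."""
    Cinv = np.linalg.inv(C)
    Linv = np.linalg.inv(L)
    cov = np.linalg.inv(M.T @ Cinv @ M + Linv)  # posterior covariance
    mean = cov @ (M.T @ Cinv @ y + Linv @ mu)   # posterior mean
    return mean, np.linalg.cholesky(cov)        # lower triangular factor

# Recover known coefficients from noisy data
rng = np.random.default_rng(42)
x_true = np.array([1.0, -0.5])
M = rng.standard_normal((100, 2))
y = M @ x_true + 0.01 * rng.standard_normal(100)
C = 1e-4 * np.eye(100)    # data covariance (sigma = 0.01)
mu = np.zeros(2)          # prior mean
L = 10.0 * np.eye(2)      # broad prior covariance
x_hat, cho_cov = solve_gls(M, y, C, mu, L)
```

With many data points and a broad prior, the posterior mean approaches the ordinary least-squares solution; the returned Cholesky factor can be used to draw posterior samples of the coefficients via mean + cho_cov @ rng.standard_normal(ncoeff).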