Note

This tutorial was generated from a Jupyter notebook that can be downloaded here.

# FAQs

A non-exhaustive list of frequently asked questions about `starry`.

## 1. I get weird output when I call `starry` functions.

If you call a `starry.Map` method (or any other `starry` function) and get something like

```
Elemwise{mul,no_inplace}.0
```

or

```
TensorConstant{(1,) of 1.0}
```

or

```
Subtensor{int64}.0
```

that’s because you’re running `starry` in lazy mode. To obtain the numerical value of a variable or expression in lazy mode, simply call its `.eval()` method. Or, alternatively, switch to greedy mode by adding

```
starry.config.lazy = False
```

at the very **top** of your script, before instantiating any `starry` objects. All methods will automagically return numerical values in greedy mode.

## 2. I get a `ValueError` when instantiating a `pymc3` model.

If you get an error like

```
ValueError: setting an array element with a sequence.
```

inside of a `pymc3` model, it could be that you’re in greedy mode. If you have the line

```
starry.config.lazy = False
```

at the top of your script, simply remove it. To sample using `pymc3`, `starry` needs to be in lazy mode.

## 3. How do I evaluate a variable inside a `pymc3` model?

If you’re in a `pymc3` model context, running the `.eval()` method of a `pymc3` variable usually results in an error similar to the following:

```
MissingInputError: Input 0 of the graph (indices start from 0), used to compute [something], was not provided and not given a value. Use the Theano flag exception_verbosity='high', for more information on this error.
```

That’s because in a `pymc3` model context, none of the `theano` variables ever have values (“inputs”), so you get an error when you try to evaluate them the usual way. `pymc3` stores things differently (using test values), so what you want to do instead is

```
import pymc3_ext as pmx
pmx.eval_in_model(expression)
```

where `expression` is the `pymc3` expression whose value you want. By default, this will evaluate the expression at the **test point** of each of the inputs. If you’ve already run the sampler and have access to a `trace`, you can evaluate the expression at index `i` of the `trace` by running

```
pmx.eval_in_model(expression, point=trace.point(i))
```

Check out the `pymc3-ext` docs for more information.

## 4. The new version of `starry` seems much slower than the previous version.

If you’re using `starry` in `greedy` mode, you may have noticed a performance hit when doing certain operations. Since there’s a lot of trickery in the backend to get `starry` to work with `pymc3` and `exoplanet`, there’s quite a bit of overhead in the new version. There are several ways around this; here are some things to consider:

### Don’t instantiate `starry` models repeatedly.

Initializing a `starry.Map` or `starry.System` instance is computationally expensive. This is because `starry` pre-computes as many things as possible when these classes are instantiated, with the goal of making flux evaluations efficient. Instantiate a `starry.Map` object only a single time in your script!

### Use `pymc3`, not `emcee`, to do inference.

The new version of `starry` was specifically designed for efficient gradient-based inference with `pymc3`. Consider switching from `emcee` to `pymc3` to do inference: `starry` handles all the gradient evaluations internally, so it’s fairly easy to get a model up and running. See the other tutorials for more information.

### If you use `emcee`, manually compile your `starry` functions.

The `greedy` version of `starry` compiles things automagically for you, but you can get much better performance if you compile things yourself (it’s not hard!). Consider the following example. We create a function `slow_flux` that accepts some spherical harmonic coefficients `coeffs` and a vector of rotational phases `theta` and returns the flux seen by the observer. Note that we’re re-instantiating the map every time we call this function (which, as we mentioned above, is a bad idea!):

```
import starry
import numpy as np
import time

# Greedy mode
starry.config.lazy = False
starry.config.quiet = True


# Instantiate the map, set the coeffs, compute the flux
def slow_flux(coeffs, theta):
    map = starry.Map(10)
    map[1:, :] = coeffs
    return map.flux(theta=theta)


# Function args
coeffs = np.random.randn(120)
theta = np.linspace(0, 360, 1000)

# Time the function
nruns = 25
tstart = time.time()
for k in range(nruns):
    slow_flux(coeffs, theta)
elapsed = (time.time() - tstart) / nruns
print("Call took {:.3f} seconds.".format(elapsed))
```

If we run this snippet, we find that a single call to the `slow_flux` function takes about a tenth of a second:

```
>>> Call took 0.128 seconds.
```

If we’re evaluating this function repeatedly (say) in an MCMC sampler, things are going to take a long time to run.

A **much better** approach is to run `starry` in `lazy` mode and use `theano` to compile this function for us. You can read more about `theano` functions in the `theano` documentation.

```
import starry
import numpy as np
import theano
import theano.tensor as tt
import time

# Lazy mode
starry.config.lazy = True
starry.config.quiet = True


# Instantiate the map, set the coeffs, compute the flux
def slow_flux(coeffs, theta):
    map = starry.Map(10)
    map[1:, :] = coeffs
    return map.flux(theta=theta)


# Compile `slow_flux` into a theano function
arg1 = tt.dvector()
arg2 = tt.dvector()
fast_flux = theano.function([arg1, arg2], slow_flux(arg1, arg2))

# Function args
coeffs = np.random.randn(120)
theta = np.linspace(0, 360, 1000)

# Time the function
nruns = 25
tstart = time.time()
for k in range(nruns):
    fast_flux(coeffs, theta)
elapsed = (time.time() - tstart) / nruns
print("Call took {:.3f} seconds.".format(elapsed))
```

Running this snippet prints the following:

```
>>> Call took 0.002 seconds.
```

Compiling our function gave us a speed boost of nearly two orders of magnitude!

Warning

While compiled `starry` functions are fast, users may experience a serious performance hit when using them in conjunction with multiprocessing. Specifically, if the parallelization relies on frequent pickling/unpickling of the `starry` object, things will likely get very slow. While `starry` models **can** be pickled, there can be an extreme amount of overhead when unpickling them. We therefore recommend caution when trying to use `starry` with codes like `emcee` with parallelization enabled. Note that this is **not** an issue for parallelized `pymc3` models, since the parallelization occurs across *chains*: the model is only pickled/unpickled a single time per chain, so the overhead incurred is negligible.

## 5. I’m running into numerical instabilities.

While `starry` is designed to be numerically stable at low spherical harmonic degree, increasing `ydeg` beyond `20` or so can lead to severe numerical instabilities (see here for an example). The instabilities typically arise in occultations; phase curves can also go unstable, but only above `ydeg=35` or so. If you absolutely need to compute the model at a degree that leads to instabilities, you can try compiling the code from source with the `STARRY_KL_NUMERICAL=1` flag:

```
git clone https://github.com/rodluger/starry
cd starry
rm -rf build && STARRY_KL_NUMERICAL=1 python setup.py develop
```

This will switch to numerically evaluating some of the unstable integrals above `ydeg=15`. It should improve the stability of the code, but you’ll likely incur a significant speed hit.

An older version of the code (`0.3.0`) allowed users to compile `starry` in multiprecision mode, which also helps with these instabilities. Support for this was removed in version `1.0.0`, but we could consider re-implementing it. If that’s something you’re interested in, please let us know.

## 6. I can’t find what I’m looking for.

Please search all of the issues on GitHub, or open a new one.