MCMC Sampling#

Overview#

Stan’s MCMC sampler implements the Hamiltonian Monte Carlo (HMC) algorithm and its adaptive variant, the no-U-turn sampler (NUTS). It produces a set of draws from the posterior distribution of the model conditioned on the data, allowing for full Bayesian inference of the model parameters. Each draw consists of the values of all parameter, transformed parameter, and generated quantities variables, reported on the constrained scale.

The CmdStanModel sample method wraps the CmdStan sample method. Under the hood, CmdStan writes its outputs to a set of per-chain Stan CSV files. In addition to the resulting sample, reported as one row per draw, the Stan CSV files record the inference engine configuration and the sampler state. The NUTS-HMC adaptive sampler also outputs the per-chain HMC tuning parameters step_size and metric.

The sample method returns a CmdStanMCMC object, which provides access to all of the information in the Stan CSV files. Accessor methods let the user retrieve the sample in whatever format is needed for further analysis, either as tabular data (i.e., in terms of the per-chain CSV file rows and columns) or as structured objects that correspond to the variables in the Stan model and the individual diagnostics produced by the inference method; a combined sketch follows the lists below.

  • The stan_variable method returns a numpy.ndarray containing all draws of a single Stan variable, where the structure of each draw corresponds to the structure of the variable; the stan_variables method returns a Python dict mapping each variable name to its corresponding array.

  • The draws method returns the sample as either a 2-D or 3-D numpy.ndarray.

  • The draws_pd method returns the entire sample or selected variables as a pandas.DataFrame.

  • The draws_xr method returns a structured Xarray dataset over the Stan model variables.

  • The method_variables method returns a Python dict over all sampler method variables.

In addition, the CmdStanMCMC object has accessor methods for

  • The per-chain HMC tuning parameters step_size and metric

  • The CmdStan run configuration and console outputs

  • The mapping between the Stan model variables and the corresponding CSV file columns
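A minimal sketch of these accessors, assuming a fitted CmdStanMCMC object named fit (all methods shown are part of the CmdStanMCMC API; the variable name theta comes from the bernoulli example below):

theta = fit.stan_variable('theta')   # ndarray shaped like the Stan variable
all_vars = fit.stan_variables()      # dict: variable name -> ndarray
arr = fit.draws()                    # 3-D ndarray (draws, chains, columns)
df = fit.draws_pd()                  # pandas.DataFrame, one row per draw
ds = fit.draws_xr()                  # xarray.Dataset over Stan model variables
diag = fit.method_variables()        # dict of sampler method variable arrays
print(fit.step_size, fit.metric)     # per-chain HMC tuning parameters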

Notebook prerequisites#

CmdStanPy displays progress bars during sampling via the tqdm package. In order for these to display properly in a Jupyter notebook, you must have the ipywidgets package installed and, depending on your version of Jupyter or JupyterLab, enable it with the following command:

[1]:
!jupyter nbextension enable --py widgetsnbextension
Enabling notebook extension jupyter-js-widgets/extension...
      - Validating: OK

For more information, see the installation instructions and this tqdm GitHub issue.
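If ipywidgets is not already installed, a typical installation (one common approach; adjust for your environment) is:

!pip install ipywidgets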

Fitting the model and data#

In this example we use the CmdStan example model bernoulli.stan and data file bernoulli.data.json.

We instantiate a CmdStanModel from the Stan program file:

[2]:
import os
from cmdstanpy import CmdStanModel

# instantiate, compile bernoulli model
model = CmdStanModel(stan_file='bernoulli.stan')

By default, the model is compiled during instantiation, and the compiled executable is created in the same directory as the program file. If the directory already contains an executable with a timestamp newer than that of the Stan program file, the model is not recompiled.
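The compile argument controls this behavior. A short sketch of two alternatives:

# defer compilation at construction time, then compile explicitly
model = CmdStanModel(stan_file='bernoulli.stan', compile=False)
model.compile()

# force a rebuild even when an up-to-date executable already exists
model = CmdStanModel(stan_file='bernoulli.stan', compile='force')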

We run the sampler on the data using all default settings: 4 chains, each of which runs 1000 warmup iterations and 1000 sampling iterations.

[3]:
# run CmdStan's sample method, returns object `CmdStanMCMC`
fit = model.sample(data='bernoulli.data.json')
14:29:56 - cmdstanpy - INFO - CmdStan start processing

14:29:56 - cmdstanpy - INFO - CmdStan done processing.

The CmdStanMCMC object records the command, the return code, and the paths to the sampler output CSV and console files. The sample itself is instantiated lazily, on first access of either the draws or the HMC tuning parameters, i.e., the step size and metric.

The string representation of this object displays the CmdStan commands and the locations of the output files. Output filenames are composed of the model name, a timestamp in the form YYYYMMDDhhmmss, and the chain id, plus the corresponding filetype suffix, either ‘.csv’ for the CmdStan output or ‘.txt’ for the console messages, e.g., bernoulli-20220617170100_1.csv.

[4]:
fit
[4]:
CmdStanMCMC: model=bernoulli chains=4['method=sample', 'algorithm=hmc', 'adapt', 'engaged=1']
 csv_files:
        /tmp/tmp576tsk6g/bernoullikolo7cku/bernoulli-20220822142956_1.csv
        /tmp/tmp576tsk6g/bernoullikolo7cku/bernoulli-20220822142956_2.csv
        /tmp/tmp576tsk6g/bernoullikolo7cku/bernoulli-20220822142956_3.csv
        /tmp/tmp576tsk6g/bernoullikolo7cku/bernoulli-20220822142956_4.csv
 output_files:
        /tmp/tmp576tsk6g/bernoullikolo7cku/bernoulli-20220822142956_0-stdout.txt
        /tmp/tmp576tsk6g/bernoullikolo7cku/bernoulli-20220822142956_1-stdout.txt
        /tmp/tmp576tsk6g/bernoullikolo7cku/bernoulli-20220822142956_2-stdout.txt
        /tmp/tmp576tsk6g/bernoullikolo7cku/bernoulli-20220822142956_3-stdout.txt
[5]:
print(f'draws as array:  {fit.draws().shape}')
print(f'draws as structured object:\n\t{fit.stan_variables().keys()}')
print(f'sampler diagnostics:\n\t{fit.method_variables().keys()}')
draws as array:  (1000, 4, 8)
draws as structured object:
        dict_keys(['theta'])
sampler diagnostics:
        dict_keys(['lp__', 'accept_stat__', 'stepsize__', 'treedepth__', 'n_leapfrog__', 'divergent__', 'energy__'])

Sampler Progress#

Your model may take a long time to fit. The sample method provides two arguments for monitoring its progress:

  • visual progress bar: show_progress=True

  • stream CmdStan output to the console: show_console=True

By default, CmdStanPy displays a progress bar during sampling, as seen above. Since the progress bars are only visible while the sampler is running and the bernoulli example model takes almost no time to fit, we run this model for 200K iterations in order to see the progress bars in action.

[6]:
fit = model.sample(data='bernoulli.data.json', iter_warmup=100000, iter_sampling=100000, show_progress=True)

14:29:56 - cmdstanpy - INFO - CmdStan start processing

14:29:58 - cmdstanpy - INFO - CmdStan done processing.

To see the CmdStan console outputs instead of progress bars, specify show_console=True. This will stream all CmdStan messages to the terminal while the sampler is running, which allows you to debug a Stan program using the Stan language print statement.

[7]:
fit = model.sample(data='bernoulli.data.json', chains=2, parallel_chains=1, show_console=True)


14:30:00 - cmdstanpy - INFO - Chain [1] start processing
14:30:00 - cmdstanpy - INFO - Chain [1] done processing
14:30:00 - cmdstanpy - INFO - Chain [2] start processing
14:30:00 - cmdstanpy - INFO - Chain [2] done processing
Chain [1] method = sample (Default)
Chain [1] sample
Chain [1] num_samples = 1000 (Default)
Chain [1] num_warmup = 1000 (Default)
Chain [1] save_warmup = 0 (Default)
Chain [1] thin = 1 (Default)
Chain [1] adapt
Chain [1] engaged = 1 (Default)
Chain [1] gamma = 0.050000000000000003 (Default)
Chain [1] delta = 0.80000000000000004 (Default)
Chain [1] kappa = 0.75 (Default)
Chain [1] t0 = 10 (Default)
Chain [1] init_buffer = 75 (Default)
Chain [1] term_buffer = 50 (Default)
Chain [1] window = 25 (Default)
Chain [1] algorithm = hmc (Default)
Chain [1] hmc
Chain [1] engine = nuts (Default)
Chain [1] nuts
Chain [1] max_depth = 10 (Default)
Chain [1] metric = diag_e (Default)
Chain [1] metric_file =  (Default)
Chain [1] stepsize = 1 (Default)
Chain [1] stepsize_jitter = 0 (Default)
Chain [1] id = 1
Chain [1] data
Chain [1] file = bernoulli.data.json
Chain [1] init = 2 (Default)
Chain [1] random
Chain [1] seed = 70257
Chain [1] output
Chain [1] file = /tmp/tmp576tsk6g/bernoullitkllicz1/bernoulli-20220822143000_1.csv
Chain [1] diagnostic_file =  (Default)
Chain [1] refresh = 100 (Default)
Chain [1] sig_figs = -1 (Default)
Chain [1] profile_file = profile.csv (Default)
Chain [1] num_threads = 1
Chain [1]
Chain [1]
Chain [1] Gradient evaluation took 2e-06 seconds
Chain [1] 1000 transitions using 10 leapfrog steps per transition would take 0.02 seconds.
Chain [1] Adjust your expectations accordingly!
Chain [1]
Chain [1]
Chain [1] Iteration:    1 / 2000 [  0%]  (Warmup)
Chain [1] Iteration:  100 / 2000 [  5%]  (Warmup)
Chain [1] Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain [1] Iteration:  300 / 2000 [ 15%]  (Warmup)
Chain [1] Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain [1] Iteration:  500 / 2000 [ 25%]  (Warmup)
Chain [1] Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain [1] Iteration:  700 / 2000 [ 35%]  (Warmup)
Chain [1] Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain [1] Iteration:  900 / 2000 [ 45%]  (Warmup)
Chain [1] Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain [1] Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain [1] Iteration: 1100 / 2000 [ 55%]  (Sampling)
Chain [1] Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain [1] Iteration: 1300 / 2000 [ 65%]  (Sampling)
Chain [1] Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain [1] Iteration: 1500 / 2000 [ 75%]  (Sampling)
Chain [1] Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain [1] Iteration: 1700 / 2000 [ 85%]  (Sampling)
Chain [1] Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain [1] Iteration: 1900 / 2000 [ 95%]  (Sampling)
Chain [1] Iteration: 2000 / 2000 [100%]  (Sampling)
Chain [1]
Chain [1] Elapsed Time: 0.005 seconds (Warm-up)
Chain [1] 0.009 seconds (Sampling)
Chain [1] 0.014 seconds (Total)
Chain [1]
Chain [2] method = sample (Default)
Chain [2] sample
Chain [2] num_samples = 1000 (Default)
Chain [2] num_warmup = 1000 (Default)
Chain [2] save_warmup = 0 (Default)
Chain [2] thin = 1 (Default)
Chain [2] adapt
Chain [2] engaged = 1 (Default)
Chain [2] gamma = 0.050000000000000003 (Default)
Chain [2] delta = 0.80000000000000004 (Default)
Chain [2] kappa = 0.75 (Default)
Chain [2] t0 = 10 (Default)
Chain [2] init_buffer = 75 (Default)
Chain [2] term_buffer = 50 (Default)
Chain [2] window = 25 (Default)
Chain [2] algorithm = hmc (Default)
Chain [2] hmc
Chain [2] engine = nuts (Default)
Chain [2] nuts
Chain [2] max_depth = 10 (Default)
Chain [2] metric = diag_e (Default)
Chain [2] metric_file =  (Default)
Chain [2] stepsize = 1 (Default)
Chain [2] stepsize_jitter = 0 (Default)
Chain [2] id = 2
Chain [2] data
Chain [2] file = bernoulli.data.json
Chain [2] init = 2 (Default)
Chain [2] random
Chain [2] seed = 70257
Chain [2] output
Chain [2] file = /tmp/tmp576tsk6g/bernoullitkllicz1/bernoulli-20220822143000_2.csv
Chain [2] diagnostic_file =  (Default)
Chain [2] refresh = 100 (Default)
Chain [2] sig_figs = -1 (Default)
Chain [2] profile_file = profile.csv (Default)
Chain [2] num_threads = 1
Chain [2]
Chain [2]
Chain [2] Gradient evaluation took 4e-06 seconds
Chain [2] 1000 transitions using 10 leapfrog steps per transition would take 0.04 seconds.
Chain [2] Adjust your expectations accordingly!
Chain [2]
Chain [2]
Chain [2] Iteration:    1 / 2000 [  0%]  (Warmup)
Chain [2] Iteration:  100 / 2000 [  5%]  (Warmup)
Chain [2] Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain [2] Iteration:  300 / 2000 [ 15%]  (Warmup)
Chain [2] Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain [2] Iteration:  500 / 2000 [ 25%]  (Warmup)
Chain [2] Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain [2] Iteration:  700 / 2000 [ 35%]  (Warmup)
Chain [2] Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain [2] Iteration:  900 / 2000 [ 45%]  (Warmup)
Chain [2] Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain [2] Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain [2] Iteration: 1100 / 2000 [ 55%]  (Sampling)
Chain [2] Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain [2] Iteration: 1300 / 2000 [ 65%]  (Sampling)
Chain [2] Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain [2] Iteration: 1500 / 2000 [ 75%]  (Sampling)
Chain [2] Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain [2] Iteration: 1700 / 2000 [ 85%]  (Sampling)
Chain [2] Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain [2] Iteration: 1900 / 2000 [ 95%]  (Sampling)
Chain [2] Iteration: 2000 / 2000 [100%]  (Sampling)
Chain [2]
Chain [2] Elapsed Time: 0.008 seconds (Warm-up)
Chain [2] 0.014 seconds (Sampling)
Chain [2] 0.022 seconds (Total)
Chain [2]
Chain [2]

Checking the fit#

The first question to ask of the CmdStanMCMC object is: is this a valid sample from the posterior?

It is important to check whether the sampler was able to fit the model given the data. Often this is not possible, for any number of reasons. To demonstrate the sampler diagnostics, we use a hierarchical model which, given a small amount of data, encounters difficulty: the centered parameterization of the “8-schools” model (Rubin, 1981). The “8-schools” model is a simple hierarchical model, first developed on a dataset from an experiment conducted in 8 schools, with only treatment effects and their standard errors reported.

The Stan model and the original dataset are in files eight_schools.stan and eight_schools.data.json.

eight_schools.stan

[8]:
with open('eight_schools.stan', 'r') as fd:
    print(fd.read())
data {
  int<lower=0> J; // number of schools
  array[J] real y; // estimated treatment effect (school j)
  array[J] real<lower=0> sigma; // std err of effect estimate (school j)
}
parameters {
  real mu;
  array[J] real theta;
  real<lower=0> tau;
}
model {
  theta ~ normal(mu, tau);
  y ~ normal(theta, sigma);
}


eight_schools.data.json

[9]:
with open('eight_schools.data.json', 'r') as fd:
    print(fd.read())
{
    "J" : 8,
    "y" : [28,8,-3,7,-1,1,18,12],
    "sigma" : [15,10,16,11,9,11,10,18],
    "tau" : 25
}

Because there is not much data, the geometry of the posterior distribution is highly curved, and the sampler may encounter difficulty in fitting the model. By specifying the initial seed for the pseudo-random number generator, we ensure that the sampler will have difficulty fitting this model. In particular, some post-warmup iterations diverge, resulting in a biased sample. In addition, some post-warmup iterations hit the maximum allowed treedepth before the trajectory meets the “U-turn” condition of the NUTS algorithm, in which case the sampler may fail to properly explore the entire posterior.

These diagnostics are checked for automatically at the end of each run; if problems are detected, a WARNING message is logged.

[10]:
eight_schools_model = CmdStanModel(stan_file='eight_schools.stan')
eight_schools_fit = eight_schools_model.sample(data='eight_schools.data.json', seed=55157)
14:30:00 - cmdstanpy - INFO - CmdStan start processing

14:30:00 - cmdstanpy - INFO - CmdStan done processing.
14:30:00 - cmdstanpy - WARNING - Some chains may have failed to converge.
        Chain 1 had 29 divergent transitions (2.9%)
        Chain 2 had 208 divergent transitions (20.8%)
        Chain 3 had 17 divergent transitions (1.7%)
        Chain 4 had 31 divergent transitions (3.1%)
        Use function "diagnose()" to see further information.

More information on how to address convergence problems can be found at https://mc-stan.org/misc/warnings

The number of post-warmup divergences and iterations which hit the maximum treedepth can be inspected directly via properties divergences and max_treedepths.

[11]:
print(f'divergences:\n{eight_schools_fit.divergences}\niterations at max_treedepth:\n{eight_schools_fit.max_treedepths}')
divergences:
[ 29 208  17  31]
iterations at max_treedepth:
[0 0 0 0]

Summarizing the sample#

The summary method reports estimates of the posterior mean, Monte Carlo standard error, standard deviation, and quantiles for each variable, along with the effective sample size and the R-hat statistic, a measure of how well the sampler chains have converged.

[12]:
eight_schools_fit.summary()
[12]:
               Mean      MCSE   StdDev          5%        50%       95%      N_Eff     N_Eff/s    R_hat
lp__      -17.80420  1.297600  5.69607  -26.306500  -18.54890  -8.18723   19.26950    86.41040  1.18588
mu          7.98088  0.196141  5.17030   -0.847043    8.46135  16.43540  694.85500  3115.94000  1.00830
theta[1]   11.69530  0.357324  8.65978   -0.384251   10.32760  28.16590  587.33900  2633.81000  1.00867
theta[2]    7.76656  0.201329  6.38375   -2.518540    7.25419  18.44390 1005.40000  4508.53000  1.00249
theta[3]    5.96852  0.239867  8.27095   -8.756270    7.21487  18.39070 1188.96000  5331.68000  1.01065
theta[4]    7.71660  0.201524  6.85139   -3.644710    8.26866  18.80780 1155.86000  5183.21000  1.00593
theta[5]    4.97621  0.447597  6.65889   -6.701880    5.63811  14.88990  221.32500   992.48700  1.03708
theta[6]    5.88040  0.212933  6.89437   -6.316140    6.83414  16.41370 1048.34000  4701.09000  1.01177
theta[7]   10.95250  0.243737  6.90153    0.384918    9.93285  23.58810  801.76400  3595.35000  1.00408
theta[8]    8.47301  0.218012  8.06921   -4.222510    8.14475  22.19720 1369.94000  6143.23000  1.00222
tau         7.01515  0.772870  5.53896    1.022930    5.78582  17.32530   51.36157   230.32095  1.07508

Sampler Diagnostics#

The diagnose() method provides more information about the sample.

[13]:
print(eight_schools_fit.diagnose())
Processing csv files: /tmp/tmp576tsk6g/eight_schoolsk2xullo4/eight_schools-20220822143000_1.csv, /tmp/tmp576tsk6g/eight_schoolsk2xullo4/eight_schools-20220822143000_2.csv, /tmp/tmp576tsk6g/eight_schoolsk2xullo4/eight_schools-20220822143000_3.csv, /tmp/tmp576tsk6g/eight_schoolsk2xullo4/eight_schools-20220822143000_4.csv

Checking sampler transitions treedepth.
Treedepth satisfactory for all transitions.

Checking sampler transitions for divergences.
285 of 4000 (7.12%) transitions ended with a divergence.
These divergent transitions indicate that HMC is not fully able to explore the posterior distribution.
Try increasing adapt delta closer to 1.
If this doesn't remove all divergences, try to reparameterize the model.

Checking E-BFMI - sampler transitions HMC potential energy.
The E-BFMI, 0.28, is below the nominal threshold of 0.30 which suggests that HMC may have trouble exploring the target distribution.
If possible, try to reparameterize the model.

Effective sample size satisfactory.

The following parameters had split R-hat greater than 1.05:
  tau
Such high values indicate incomplete mixing and biased estimation.
You should consider regularizating your model with additional prior information or a more effective parameterization.

Processing complete.
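Following the diagnostic advice, one way to reduce divergences is to re-run the sampler with a larger adapt_delta, which targets a higher acceptance rate and hence a smaller step size (a sketch; the value 0.95 is an arbitrary choice, and for this model a non-centered parameterization is the more robust fix):

eight_schools_fit_2 = eight_schools_model.sample(data='eight_schools.data.json',
                                                 seed=55157,
                                                 adapt_delta=0.95)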

Accessing the sampler outputs#

[14]:
fit = model.sample(data='bernoulli.data.json')
14:30:01 - cmdstanpy - INFO - CmdStan start processing

14:30:01 - cmdstanpy - INFO - CmdStan done processing.

Extracting the draws as structured Stan program variables#

Per-variable draws can be accessed as either a numpy.ndarray object via method stan_variable or as an xarray.Dataset object via draws_xr.

[15]:
print(fit.stan_variable('theta'))
[0.319739 0.384567 0.298657 ... 0.15686  0.251358 0.306462]

The stan_variables method returns a Python dict over all Stan variables in the output.

[16]:
for k, v in fit.stan_variables().items():
    print(f'name: {k}, shape: {v.shape}')
name: theta, shape: (4000,)
[17]:
print(fit.draws_xr('theta'))
<xarray.Dataset>
Dimensions:  (chain: 4, draw: 1000)
Coordinates:
  * chain    (chain) int64 1 2 3 4
  * draw     (draw) int64 0 1 2 3 4 5 6 7 8 ... 992 993 994 995 996 997 998 999
Data variables:
    theta    (chain, draw) float64 0.3197 0.3846 0.2987 ... 0.1569 0.2514 0.3065
Attributes:
    stan_version:        2.27.0
    model:               bernoulli_model
    num_draws_sampling:  1000

Extracting the draws in tabular format#

The sample can be accessed either as a numpy array or a pandas DataFrame:

[18]:
print(f'sample as ndarray: {fit.draws().shape}\nfirst 2 draws, chain 1:\n{fit.draws()[:2, 0, :]}')
sample as ndarray: (1000, 4, 8)
first 2 draws, chain 1:
[[-6.88826   0.971817  0.998143  1.        1.        0.        6.89162
   0.319739]
 [-7.23577   0.893005  0.998143  1.        1.        0.        7.24696
   0.384567]]
[19]:
fit.draws_pd().head()
[19]:
lp__ accept_stat__ stepsize__ treedepth__ n_leapfrog__ divergent__ energy__ theta
0 -6.88826 0.971817 0.998143 1.0 1.0 0.0 6.89162 0.319739
1 -7.23577 0.893005 0.998143 1.0 1.0 0.0 7.24696 0.384567
2 -6.81820 1.000000 0.998143 2.0 3.0 0.0 7.10348 0.298657
3 -6.79992 1.000000 0.998143 1.0 1.0 0.0 6.82455 0.291636
4 -6.80004 0.999965 0.998143 1.0 1.0 0.0 6.81503 0.291685

Extracting sampler method diagnostics#

[20]:
for k, v in fit.method_variables().items():
    print(f'name: {k}, shape: {v.shape}')
name: lp__, shape: (1000, 4)
name: accept_stat__, shape: (1000, 4)
name: stepsize__, shape: (1000, 4)
name: treedepth__, shape: (1000, 4)
name: n_leapfrog__, shape: (1000, 4)
name: divergent__, shape: (1000, 4)
name: energy__, shape: (1000, 4)

Extracting the per-chain HMC tuning parameters#

[21]:
print(f'adapted step_size per chain\n{fit.step_size}\nmetric_type: {fit.metric_type}\nmetric:\n{fit.metric}')
adapted step_size per chain
[0.998143 0.991067 0.894233 0.896342]
metric_type: diag_e
metric:
[[0.482997]
 [0.485668]
 [0.444746]
 [0.535956]]

Extracting the sample meta-data#

[22]:
print('sample method variables:\n{}\n'.format(fit.metadata.method_vars_cols.keys()))
print('stan model variables:\n{}'.format(fit.metadata.stan_vars_cols.keys()))
sample method variables:
dict_keys(['lp__', 'accept_stat__', 'stepsize__', 'treedepth__', 'n_leapfrog__', 'divergent__', 'energy__'])

stan model variables:
dict_keys(['theta'])

Saving the sampler output files#

The sampler output files are written to a temporary directory which is deleted upon session exit, unless the output_dir argument is specified. The save_csvfiles method moves the CmdStan CSV output files to a specified directory without re-running the sampler. The console output files are not saved; they are treated as ephemeral, since, if the sample is valid, all relevant information is recorded in the CSV files.
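A minimal sketch of both options, using a hypothetical local directory outputs:

import os
os.makedirs('outputs', exist_ok=True)

# move the CSV files of an existing fit out of the temporary directory
fit.save_csvfiles(dir='outputs')

# or write the CSV files to a permanent location in the first place
fit = model.sample(data='bernoulli.data.json', output_dir='outputs')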

Parallelization via multi-threaded processing#

Stan’s multi-threaded processing is based on the Intel Threading Building Blocks (TBB) library, which the C++ compiler must link against. To take advantage of this option, you must compile (or recompile) the program with the C++ compiler option STAN_THREADS. The CmdStanModel object constructor and its compile method both have an argument cpp_options which takes as its value a dictionary of compiler flags.

We compile the example model bernoulli.stan, this time specifying the cpp_options and compile arguments, and use the exe_info() method to check that the model has been compiled for multi-threading.

[23]:
model = CmdStanModel(stan_file='bernoulli.stan',
                     cpp_options={'STAN_THREADS': 'TRUE'},
                     compile='force')
model.exe_info()
14:30:01 - cmdstanpy - INFO - compiling stan file /home/brian/Dev/py/cmdstanpy/docsrc/users-guide/examples/bernoulli.stan to exe file /home/brian/Dev/py/cmdstanpy/docsrc/users-guide/examples/bernoulli
14:30:18 - cmdstanpy - INFO - compiled model executable: /home/brian/Dev/py/cmdstanpy/docsrc/users-guide/examples/bernoulli
[23]:
{'stan_version_major': '2',
 'stan_version_minor': '29',
 'stan_version_patch': '0',
 'STAN_THREADS': 'true',
 'STAN_MPI': 'false',
 'STAN_OPENCL': 'false',
 'STAN_NO_RANGE_CHECKS': 'false',
 'STAN_CPP_OPTIMS': 'false'}

Cross-chain multi-threading#

As of CmdStan 2.28, it is possible to run the NUTS-HMC sampler on multiple chains from within a single executable using threads. This has the potential to speed up sampling. It also reduces the overall memory footprint required for sampling, as all chains share the same copy of the input data. When within-chain parallelization is also used, all chains started within a single executable can share all the available threads, and once a chain finishes, its threads are reused.

The sample program argument parallel_chains takes an integer value which specifies how many chains to run in parallel. For models which have been compiled with option STAN_THREADS set, all chains are run from within a single process and the value of the parallel_chains argument specifies the total number of threads.

[24]:
fit = model.sample(data='bernoulli.data.json', parallel_chains=4)
14:30:18 - cmdstanpy - INFO - CmdStan start processing

14:30:18 - cmdstanpy - INFO - CmdStan done processing.

Within-chain multi-threading#

The Stan language reduce_sum function provides within-chain parallelization. For models which require computing the sum of a number of independent function evaluations, e.g., the conditionally independent terms of a log-likelihood, the reduce_sum function can be used to parallelize this computation.

To see how this works, we run the “redcard” model, used in the reduce_sum minimal example case study. The Stan model and the original dataset are in files redcard_reduce_sum.stan and redcard.json.

[25]:
with open('redcard_reduce_sum.stan', 'r') as fd:
    print(fd.read())
functions {
  real partial_sum(array[] int slice_n_redcards, int start, int end,
                   array[] int n_games, vector rating, vector beta) {
    return binomial_logit_lpmf(slice_n_redcards | n_games[start : end], beta[1]
                                                                    + beta[2]
                                                                    * rating[start : end]);
  }
}
data {
  int<lower=0> N;
  array[N] int<lower=0> n_redcards;
  array[N] int<lower=0> n_games;
  vector[N] rating;
  int<lower=1> grainsize;
}
parameters {
  vector[2] beta;
}
model {
  beta[1] ~ normal(0, 10);
  beta[2] ~ normal(0, 1);

  target += reduce_sum(partial_sum, n_redcards, grainsize, n_games, rating,
                       beta);
}


As before, we compile the model specifying argument cpp_options.

[26]:
redcard_model = CmdStanModel(stan_file='redcard_reduce_sum.stan',
                     cpp_options={'STAN_THREADS': 'TRUE'},
                     compile='force')
redcard_model.exe_info()
14:30:18 - cmdstanpy - INFO - compiling stan file /home/brian/Dev/py/cmdstanpy/docsrc/users-guide/examples/redcard_reduce_sum.stan to exe file /home/brian/Dev/py/cmdstanpy/docsrc/users-guide/examples/redcard_reduce_sum
14:30:40 - cmdstanpy - INFO - compiled model executable: /home/brian/Dev/py/cmdstanpy/docsrc/users-guide/examples/redcard_reduce_sum
[26]:
{'stan_version_major': '2',
 'stan_version_minor': '29',
 'stan_version_patch': '0',
 'STAN_THREADS': 'true',
 'STAN_MPI': 'false',
 'STAN_OPENCL': 'false',
 'STAN_NO_RANGE_CHECKS': 'false',
 'STAN_CPP_OPTIMS': 'false'}

The sample method argument threads_per_chain specifies the number of threads allotted to each chain; this corresponds to CmdStan’s num_threads argument.

[27]:
redcard_fit = redcard_model.sample(data='redcard.json', threads_per_chain=4)
14:30:40 - cmdstanpy - INFO - CmdStan start processing

14:31:48 - cmdstanpy - INFO - CmdStan done processing.

The number of threads to use is passed to the model exe file by means of the shell environment variable STAN_NUM_THREADS.

On my machine, which has 4 cores, all 4 chains are run in parallel from within a single process. Therefore, the total number of threads used by this process will be threads_per_chain * chains. To check this, we examine the shell environment variable STAN_NUM_THREADS.

[28]:
os.environ['STAN_NUM_THREADS']
[28]:
'16'
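For example (a hypothetical configuration), running 2 chains with 2 threads each from within a single process should set STAN_NUM_THREADS to 4:

fit = redcard_model.sample(data='redcard.json', chains=2, threads_per_chain=2)
print(os.environ['STAN_NUM_THREADS'])  # expected: '4' (2 chains * 2 threads each)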