Adapted from a brms workshop by Paul-Christian Bürkner

1 Preamble

1.1 Install Libraries

#install.packages("remotes")
#remotes::install_github("DevPsyLab/petersenlab")

1.2 Load Libraries

library("lme4")
library("rstan")
library("brms")
library("bayestestR")
library("mice")

1.3 Simulate Data

set.seed(52242)

sampleSize <- 1000

id <- rep(1:100, each = 10)
X <- rnorm(sampleSize)
M <- 0.5*X + rnorm(sampleSize)
Y <- 0.7*M + rnorm(sampleSize)

X[sample(1:length(X), size = 10)] <- NA
M[sample(1:length(M), size = 10)] <- NA
Y[sample(1:length(Y), size = 10)] <- NA

mydata <- data.frame(
  id = id,
  X = X,
  Y = Y,
  M = M)
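
As a quick sanity check (not part of the original workshop), the generating paths can be recovered with ordinary complete-case regressions. The block below re-simulates the data so it runs on its own:

```r
# Re-simulate the data from above and confirm that simple regressions
# recover the generating coefficients (a-path ~0.5, b-path ~0.7).
set.seed(52242)
n <- 1000
X <- rnorm(n)
M <- 0.5 * X + rnorm(n)
Y <- 0.7 * M + rnorm(n)

aPath <- unname(coef(lm(M ~ X))["X"])  # should be near 0.5
bPath <- unname(coef(lm(Y ~ M))["M"])  # should be near 0.7
round(c(a = aPath, b = bPath), 2)
```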

1.4 Load Data

data("sleepstudy", package = "lme4")

1.5 Prepare Data

conditions <- make_conditions(sleepstudy, "Subject")

2 brms

2.1 Post-Processing Methods

methods(class = "brmsfit")
  [1] add_criterion           add_ic                  as_draws_array         
  [4] as_draws_df             as_draws_list           as_draws_matrix        
  [7] as_draws_rvars          as_draws                as.array               
 [10] as.data.frame           as.matrix               as.mcmc                
 [13] autocor                 bayes_factor            bayes_R2               
 [16] bayesfactor_models      bayesfactor_parameters  bayesfactor_restricted 
 [19] bci                     bridge_sampler          check_prior            
 [22] ci                      coef                    conditional_effects    
 [25] conditional_smooths     control_params          default_prior          
 [28] describe_posterior      describe_prior          diagnostic_draws       
 [31] diagnostic_posterior    effective_sample        equivalence_test       
 [34] estimate_density        eti                     expose_functions       
 [37] family                  fitted                  fixef                  
 [40] formula                 getCall                 hdi                    
 [43] hypothesis              kfold                   log_lik                
 [46] log_posterior           logLik                  loo_compare            
 [49] loo_epred               loo_linpred             loo_model_weights      
 [52] loo_moment_match        loo_predict             loo_predictive_interval
 [55] loo_R2                  loo_subsample           loo                    
 [58] LOO                     map_estimate            marginal_effects       
 [61] marginal_smooths        mcmc_plot               mcse                   
 [64] mediation               model_to_priors         model_weights          
 [67] model.frame             nchains                 ndraws                 
 [70] neff_ratio              ngrps                   niterations            
 [73] nobs                    nsamples                nuts_params            
 [76] nvariables              p_direction             p_map                  
 [79] p_rope                  p_significance          pairs                  
 [82] parnames                plot                    point_estimate         
 [85] post_prob               posterior_average       posterior_epred        
 [88] posterior_interval      posterior_linpred       posterior_predict      
 [91] posterior_samples       posterior_smooths       posterior_summary      
 [94] pp_average              pp_check                pp_mixture             
 [97] predict                 predictive_error        predictive_interval    
[100] prepare_predictions     print                   prior_draws            
[103] prior_summary           psis                    ranef                  
[106] reloo                   residuals               restructure            
[109] rhat                    rope                    sexit_thresholds       
[112] si                      simulate_prior          spi                    
[115] stancode                standata                stanplot               
[118] summary                 unupdate                update                 
[121] VarCorr                 variables               vcov                   
[124] waic                    WAIC                    weighted_posteriors    
see '?methods' for accessing help and source code

3 Multilevel Models

3.1 Fit the Models

3.1.1 Complete Pooling

fit_sleep1 <- brm(
  Reaction ~ 1 + Days,
  data = sleepstudy,
  seed  = 52242)
Compiling Stan program...
Start sampling

3.1.2 Random Intercepts

fit_sleep2 <- brm(
  Reaction ~ 1 + Days + (1 | Subject), 
  data = sleepstudy,
  seed  = 52242)
Compiling Stan program...
Start sampling

3.1.3 Random Intercepts and Slopes

fit_sleep3 <- brm(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy,
  seed  = 52242
)
Compiling Stan program...
Start sampling

3.2 Summarize Results

For convergence, Rhat values should be close to 1.00; values above about 1.01 suggest the chains have not converged.

summary(fit_sleep3)
 Family: gaussian 
  Links: mu = identity; sigma = identity 
Formula: Reaction ~ 1 + Days + (1 + Days | Subject) 
   Data: sleepstudy (Number of observations: 180) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Multilevel Hyperparameters:
~Subject (Number of levels: 18) 
                    Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)          26.76      6.70    15.60    41.96 1.00     1836     2290
sd(Days)                6.58      1.55     4.17    10.23 1.00     1265     1938
cor(Intercept,Days)     0.08      0.30    -0.49     0.66 1.00     1041     1736

Regression Coefficients:
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept   251.38      7.39   236.80   266.10 1.00     1912     2086
Days         10.43      1.76     7.03    13.97 1.00     1294     1865

Further Distributional Parameters:
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma    25.96      1.55    23.13    29.23 1.00     3535     2807

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
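
The Rhat reported above is the potential scale reduction factor on split chains. As a minimal base-R sketch of that computation (for a single parameter, not the full rank-normalized version used by Stan):

```r
# Split-chain Rhat for a single parameter, sketched in base R.
# `draws`: matrix with one MCMC chain per column.
split_rhat <- function(draws) {
  half <- floor(nrow(draws) / 2)
  # Split each chain into two halves, doubling the number of chains.
  split <- cbind(draws[1:half, , drop = FALSE],
                 draws[(half + 1):(2 * half), , drop = FALSE])
  n <- nrow(split)
  W <- mean(apply(split, 2, var))      # mean within-chain variance
  B <- n * var(colMeans(split))        # between-chain variance
  var_hat <- (n - 1) / n * W + B / n   # pooled variance estimate
  sqrt(var_hat / W)
}

# Four well-mixed chains drawn from the same distribution give Rhat near 1.
set.seed(1)
chains <- matrix(rnorm(4000), nrow = 1000, ncol = 4)
split_rhat(chains)
```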

3.3 Model Priors

prior_summary(fit_sleep3)

3.4 Model Parameters

variables(fit_sleep3)
 [1] "b_Intercept"                  "b_Days"                      
 [3] "sd_Subject__Intercept"        "sd_Subject__Days"            
 [5] "cor_Subject__Intercept__Days" "sigma"                       
 [7] "Intercept"                    "r_Subject[308,Intercept]"    
 [9] "r_Subject[309,Intercept]"     "r_Subject[310,Intercept]"    
[11] "r_Subject[330,Intercept]"     "r_Subject[331,Intercept]"    
[13] "r_Subject[332,Intercept]"     "r_Subject[333,Intercept]"    
[15] "r_Subject[334,Intercept]"     "r_Subject[335,Intercept]"    
[17] "r_Subject[337,Intercept]"     "r_Subject[349,Intercept]"    
[19] "r_Subject[350,Intercept]"     "r_Subject[351,Intercept]"    
[21] "r_Subject[352,Intercept]"     "r_Subject[369,Intercept]"    
[23] "r_Subject[370,Intercept]"     "r_Subject[371,Intercept]"    
[25] "r_Subject[372,Intercept]"     "r_Subject[308,Days]"         
[27] "r_Subject[309,Days]"          "r_Subject[310,Days]"         
[29] "r_Subject[330,Days]"          "r_Subject[331,Days]"         
[31] "r_Subject[332,Days]"          "r_Subject[333,Days]"         
[33] "r_Subject[334,Days]"          "r_Subject[335,Days]"         
[35] "r_Subject[337,Days]"          "r_Subject[349,Days]"         
[37] "r_Subject[350,Days]"          "r_Subject[351,Days]"         
[39] "r_Subject[352,Days]"          "r_Subject[369,Days]"         
[41] "r_Subject[370,Days]"          "r_Subject[371,Days]"         
[43] "r_Subject[372,Days]"          "lprior"                      
[45] "lp__"                        

3.5 Model Coefficients

coef(fit_sleep3)
$Subject
, , Intercept

    Estimate Est.Error     Q2.5    Q97.5
308 253.8036  13.42340 227.2528 279.4458
309 211.5798  13.53907 184.5701 237.5476
310 212.9776  13.44786 186.3989 238.6100
330 274.5239  13.43628 248.1815 301.9403
331 273.0398  13.23918 248.1694 299.4477
332 260.2840  12.23798 236.3152 284.9809
333 267.8093  12.62690 244.2342 293.6989
334 244.3311  12.62822 219.1096 269.0711
335 250.7611  13.16495 224.1951 276.3869
337 286.1682  13.38914 259.9519 312.2581
349 226.5570  12.82480 200.4294 251.4542
350 238.6786  13.13686 212.3307 263.6997
351 255.7790  12.53802 230.2905 280.3204
352 272.2132  12.54778 247.9377 297.7906
369 254.4293  12.50311 230.2014 278.3378
370 226.7179  13.30644 200.0374 251.3793
371 252.3647  12.07777 229.1029 275.8134
372 263.5458  12.38755 239.3714 287.7161

, , Days

      Estimate Est.Error        Q2.5     Q97.5
308 19.6319154  2.542400 14.91127989 24.921229
309  1.7458810  2.561539 -3.16997162  6.728358
310  4.9887767  2.536991  0.03313305 10.034349
330  5.7537881  2.555568  0.62257977 10.503648
331  7.5170261  2.469967  2.42232385 12.196072
332 10.2271380  2.325028  5.69008473 14.750818
333 10.2869556  2.337880  5.51782984 14.730549
334 11.5339766  2.405073  6.85791261 16.290102
335 -0.2252846  2.582433 -5.24670583  4.873086
337 19.1100826  2.537751 14.24044379 24.060945
349 11.5705306  2.403886  7.01645714 16.487190
350 17.0331885  2.543700 12.24508630 22.034614
351  7.4875668  2.395344  2.89592486 12.176265
352 14.0213129  2.421787  9.20693205 18.780621
369 11.3270352  2.379922  6.54155107 15.929727
370 15.1243321  2.534860 10.24501333 20.291155
371  9.4030963  2.306262  4.87040862 14.017797
372 11.7617904  2.361364  7.07438198 16.475036

3.6 Plots

3.6.1 Trace Plots

plot(fit_sleep3, ask = FALSE)

3.6.2 Visualize Predictions

3.6.2.1 Sample-Level

plot(conditional_effects(fit_sleep1), points = TRUE)

3.6.2.2 Person-Level

# re_formula = NULL ensures that group-level effects are included
ce2 <- conditional_effects(
  fit_sleep3,
  conditions = conditions,
  re_formula = NULL)

plot(ce2, ncol = 6, points = TRUE)

3.6.3 Check Model Fit

3.6.3.1 Posterior Predictive Check

Evaluate how closely the posterior predictions match the observed values. If the predictions do not capture the general pattern of the observed data, a different response distribution may be necessary.

pp_check(fit_sleep3)
Using 10 posterior draws for ppc type 'dens_overlay' by default.

pp_check(fit_sleep3, type = "dens_overlay")
Using 10 posterior draws for ppc type 'dens_overlay' by default.

pp_check(fit_sleep3, "error_scatter_avg")
Using all posterior draws for ppc type 'error_scatter_avg' by default.

3.7 Fitted Values

fitted(fit_sleep3)
       Estimate Est.Error     Q2.5    Q97.5
  [1,] 253.8036 13.423405 227.2528 279.4458
  [2,] 273.4355 11.520427 251.1066 295.5886
  [3,] 293.0674  9.908596 273.5416 312.1998
  [4,] 312.6994  8.750308 295.6254 329.6635
  [5,] 332.3313  8.239121 316.1146 348.5817
  [6,] 351.9632  8.492698 335.3675 368.3632
  [7,] 371.5951  9.449673 352.8905 389.8877
  [8,] 391.2270 10.926771 369.8542 412.5090
  [9,] 410.8589 12.744407 385.4375 435.8825
 [10,] 430.4909 14.777452 401.3990 459.9801
 [11,] 211.5798 13.539067 184.5701 237.5476
 [12,] 213.3257 11.594215 190.0978 235.9280
 [13,] 215.0716  9.933190 195.1221 234.5327
 [14,] 216.8174  8.719730 199.3095 234.0045
 [15,] 218.5633  8.156106 203.0775 234.6862
 [16,] 220.3092  8.374570 204.7937 236.6225
 [17,] 222.0551  9.320287 204.3521 240.2189
 [18,] 223.8010 10.803936 202.8558 244.8068
 [19,] 225.5468 12.637456 200.9486 249.8845
 [20,] 227.2927 14.690423 198.2139 255.6742
 [21,] 212.9776 13.447858 186.3989 238.6100
 [22,] 217.9664 11.512588 195.5189 240.0293
 [23,] 222.9552  9.854294 203.7593 241.8930
 [24,] 227.9439  8.634072 211.1470 244.6109
 [25,] 232.9327  8.053566 217.1679 248.8342
 [26,] 237.9215  8.248957 221.8949 254.3320
 [27,] 242.9103  9.170786 225.2566 260.7676
 [28,] 247.8990 10.631744 227.3514 268.4746
 [29,] 252.8878 12.443363 228.9194 276.8797
 [30,] 257.8766 14.474573 229.8574 285.8047
 [31,] 274.5239 13.436278 248.1815 301.9403
 [32,] 280.2777 11.472291 257.6589 303.5975
 [33,] 286.0315  9.785459 266.5387 305.5182
 [34,] 291.7853  8.541593 274.8651 308.8530
 [35,] 297.5391  7.951370 281.6089 313.1176
 [36,] 303.2929  8.157916 287.2294 319.3005
 [37,] 309.0467  9.107182 291.3656 327.0262
 [38,] 314.8004 10.601500 294.3065 335.9972
 [39,] 320.5542 12.446071 296.7421 345.2748
 [40,] 326.3080 14.507909 298.5109 355.0400
 [41,] 273.0398 13.239183 248.1694 299.4477
 [42,] 280.5569 11.353368 258.9471 302.8042
 [43,] 288.0739  9.732595 269.2880 307.5558
 [44,] 295.5909  8.529322 279.0331 312.7310
 [45,] 303.1080  7.935787 287.8983 318.7171
 [46,] 310.6250  8.087372 295.1102 326.6103
 [47,] 318.1420  8.946279 300.3617 335.9235
 [48,] 325.6590 10.337684 304.9808 346.4615
 [49,] 333.1761 12.078947 308.6069 357.2433
 [50,] 340.6931 14.040502 311.9999 368.6084
 [51,] 260.2840 12.237982 236.3152 284.9809
 [52,] 270.5112 10.596908 249.4092 291.4664
 [53,] 280.7383  9.253769 262.2199 298.9000
 [54,] 290.9654  8.353534 274.5185 306.8564
 [55,] 301.1926  8.046262 285.4025 316.5144
 [56,] 311.4197  8.397301 294.8811 327.6733
 [57,] 321.6468  9.332657 303.7138 339.8676
 [58,] 331.8740 10.700180 311.4909 352.7807
 [59,] 342.1011 12.357213 318.5939 365.9429
 [60,] 352.3283 14.202784 325.0814 379.7613
 [61,] 267.8093 12.626904 244.2342 293.6989
 [62,] 278.0962 10.899411 257.6470 300.7191
 [63,] 288.3832  9.438589 270.4678 307.7862
 [64,] 298.6701  8.384995 282.4267 315.2729
 [65,] 308.9571  7.903206 293.5251 324.2920
 [66,] 319.2440  8.095960 303.3288 334.8122
 [67,] 329.5310  8.919632 311.9167 346.4874
 [68,] 339.8179 10.222841 319.7585 358.8223
 [69,] 350.1049 11.848396 326.8274 372.1452
 [70,] 360.3919 13.681880 333.3820 386.2789
 [71,] 244.3311 12.628215 219.1096 269.0711
 [72,] 255.8651 10.843243 234.8299 277.5367
 [73,] 267.3991  9.340705 249.6269 286.1411
 [74,] 278.9331  8.275892 262.7474 294.9786
 [75,] 290.4670  7.829478 275.3135 305.1521
 [76,] 302.0010  8.104309 286.2364 317.3663
 [77,] 313.5350  9.034803 296.2280 330.8457
 [78,] 325.0690 10.447213 304.2386 345.2764
 [79,] 336.6030 12.174958 312.7848 360.3547
 [80,] 348.1369 14.102613 320.6016 376.0334
 [81,] 250.7611 13.164951 224.1951 276.3869
 [82,] 250.5358 11.184087 228.1236 272.3726
 [83,] 250.3105  9.496820 231.2769 269.0982
 [84,] 250.0852  8.284523 233.2194 266.2534
 [85,] 249.8599  7.772707 234.3995 264.8948
 [86,] 249.6346  8.095341 233.6417 265.8702
 [87,] 249.4093  9.164717 231.5820 267.5963
 [88,] 249.1841 10.760458 228.4438 270.7762
 [89,] 248.9588 12.685456 224.6143 274.6212
 [90,] 248.7335 14.811890 220.4098 278.3679
 [91,] 286.1682 13.389144 259.9519 312.2581
 [92,] 305.2783 11.475844 282.8305 327.6996
 [93,] 324.3884  9.848919 305.2016 343.8880
 [94,] 343.4985  8.671087 326.7256 360.5936
 [95,] 362.6085  8.139696 346.8772 378.4415
 [96,] 381.7186  8.378659 365.1972 397.5078
 [97,] 400.8287  9.328963 382.4937 418.4049
 [98,] 419.9388 10.804515 398.5223 440.7333
 [99,] 439.0489 12.622436 413.9451 463.7273
[100,] 458.1590 14.655872 429.1228 486.4694
[101,] 226.5570 12.824803 200.4294 251.4542
[102,] 238.1276 11.032094 215.9740 259.2405
[103,] 249.6981  9.512936 230.3887 267.9422
[104,] 261.2686  8.416777 244.4129 277.6187
[105,] 272.8392  7.921215 257.1578 288.3382
[106,] 284.4097  8.136738 268.2892 300.1678
[107,] 295.9802  9.012474 278.6256 313.8360
[108,] 307.5508 10.382689 287.7080 327.8511
[109,] 319.1213 12.080277 295.9433 342.9261
[110,] 330.6918 13.986540 303.8621 358.5722
[111,] 238.6786 13.136865 212.3307 263.6997
[112,] 255.7118 11.212769 233.3976 276.9796
[113,] 272.7450  9.582065 253.7956 291.4491
[114,] 289.7782  8.417041 273.2937 306.1856
[115,] 306.8113  7.925781 291.4421 322.7084
[116,] 323.8445  8.229839 307.6099 340.5938
[117,] 340.8777  9.251125 322.8653 359.5570
[118,] 357.9109 10.787827 336.8251 379.3801
[119,] 374.9441 12.653534 350.5318 399.9437
[120,] 391.9773 14.723703 363.9737 420.5865
[121,] 255.7790 12.538023 230.2905 280.3204
[122,] 263.2665 10.819083 241.2003 284.4985
[123,] 270.7541  9.400980 251.5079 289.1507
[124,] 278.2417  8.436803 261.1905 294.5546
[125,] 285.7292  8.090501 269.3469 301.2611
[126,] 293.2168  8.438489 276.5729 309.7038
[127,] 300.7044  9.404007 282.1637 319.0133
[128,] 308.1919 10.823028 286.8555 329.7530
[129,] 315.6795 12.542562 291.0335 340.4165
[130,] 323.1671 14.455765 294.8664 351.5085
[131,] 272.2132 12.547779 247.9377 297.7906
[132,] 286.2346 10.786103 265.4433 307.7685
[133,] 300.2559  9.325415 282.4138 318.7572
[134,] 314.2772  8.325673 298.3476 330.6147
[135,] 328.2985  7.962436 313.0277 344.1861
[136,] 342.3198  8.319499 325.8859 358.6065
[137,] 356.3411  9.314389 338.3377 374.7585
[138,] 370.3624 10.771802 349.3693 391.7183
[139,] 384.3838 12.531389 359.3285 409.3472
[140,] 398.4051 14.483432 369.4604 426.8266
[141,] 254.4293 12.503112 230.2014 278.3378
[142,] 265.7563 10.773693 244.8015 286.6302
[143,] 277.0834  9.335157 259.1086 295.2843
[144,] 288.4104  8.339419 272.0860 304.6397
[145,] 299.7374  7.954541 284.2035 314.8833
[146,] 311.0645  8.266292 294.9191 327.2119
[147,] 322.3915  9.204157 304.1501 340.6066
[148,] 333.7185 10.603277 312.6269 354.5394
[149,] 345.0456 12.307335 320.8454 369.4530
[150,] 356.3726 14.207026 328.3684 384.8325
[151,] 226.7179 13.306439 200.0374 251.3793
[152,] 241.8423 11.394588 219.0042 263.3217
[153,] 256.9666  9.770516 237.4683 275.6759
[154,] 272.0909  8.598859 254.9541 288.7202
[155,] 287.2153  8.078910 271.2685 302.8427
[156,] 302.3396  8.333561 286.0381 318.3356
[157,] 317.4639  9.299394 299.2168 335.7071
[158,] 332.5883 10.787041 311.4962 354.1105
[159,] 347.7126 12.613200 323.3014 372.7489
[160,] 362.8369 14.651839 334.4412 391.6530
[161,] 252.3647 12.077768 229.1029 275.8134
[162,] 261.7678 10.423329 241.2570 281.8510
[163,] 271.1709  9.058519 253.5544 288.7477
[164,] 280.5740  8.130526 264.9742 296.7426
[165,] 289.9771  7.796910 274.9358 305.3184
[166,] 299.3802  8.131166 283.7186 315.9440
[167,] 308.7833  9.059669 291.1535 326.7062
[168,] 318.1864 10.424828 298.0406 338.9303
[169,] 327.5895 12.079493 304.4257 351.3125
[170,] 336.9925 13.920810 310.2639 364.4948
[171,] 263.5458 12.387551 239.3714 287.7161
[172,] 275.3076 10.657490 254.9976 296.0264
[173,] 287.0694  9.212211 269.2020 305.0324
[174,] 298.8312  8.203637 282.4993 314.8825
[175,] 310.5930  7.802984 295.2651 325.9087
[176,] 322.3548  8.100958 306.6305 338.2362
[177,] 334.1166  9.028653 316.6219 351.7410
[178,] 345.8784 10.419199 325.6658 366.2850
[179,] 357.6402 12.114244 334.1252 380.9293
[180,] 369.4019 14.003649 342.1829 396.2329

3.8 Residuals

residuals(fit_sleep3)
           Estimate Est.Error        Q2.5      Q97.5
  [1,]   -4.2244673  29.28706  -63.002265  53.128273
  [2,]  -15.2258734  28.02991  -70.891336  39.747922
  [3,]  -42.7137771  27.79352  -96.701875  12.326794
  [4,]    9.0896556  27.38952  -45.543725  63.650947
  [5,]   24.0863878  27.11413  -29.988776  77.641748
  [6,]   62.8630668  27.10382    9.918941 115.616064
  [7,]   10.7967209  28.18829  -43.506810  67.776789
  [8,] -101.0242464  27.95897 -156.796686 -47.135290
  [9,]   19.0289570  29.02182  -37.947663  75.968144
 [10,]   35.9679828  30.05818  -22.619714  95.283838
 [11,]   10.8079797  28.88379  -46.164272  65.434946
 [12,]   -7.5680787  29.01514  -64.114191  48.280094
 [13,]  -12.3264170  27.91637  -67.023547  42.299361
 [14,]  -12.0506382  28.02378  -68.333068  42.724023
 [15,]  -10.2508166  27.13701  -63.741910  41.896208
 [16,]   -4.1352820  27.20537  -58.260765  48.432132
 [17,]   -7.9827051  27.85208  -64.137881  44.781207
 [18,]   -6.9447045  27.78456  -60.836789  46.859561
 [19,]   -1.4023237  28.82651  -58.466019  54.085573
 [20,]    9.9275054  29.53783  -47.990517  68.778058
 [21,]  -13.7318990  28.66353  -68.839142  42.144727
 [22,]  -23.4081843  28.49820  -77.953568  32.985362
 [23,]   11.7845766  27.91344  -42.374344  67.332793
 [24,]    5.0480395  27.67880  -49.935871  59.923446
 [25,]   -3.3787437  27.21590  -56.685213  48.561840
 [26,]  -17.6586327  27.97261  -72.606340  37.791079
 [27,]   -7.7210286  27.36769  -60.081905  47.976011
 [28,]    7.2544990  28.09069  -47.343929  61.584794
 [29,]    7.9954282  28.75324  -49.895246  63.742358
 [30,]  -10.0755777  29.70183  -66.939758  48.042172
 [31,]   47.1350729  29.48763  -11.565006 104.661624
 [32,]   20.2182528  28.48794  -35.113909  75.782897
 [33,]   -1.8660173  27.46365  -55.344197  52.649728
 [34,]   -6.9236967  27.02505  -59.299748  47.098047
 [35,]  -12.2506201  26.89874  -64.312813  39.945039
 [36,]   -5.3127676  27.15819  -59.464006  47.637983
 [37,]  -28.4792129  27.63301  -82.248162  25.985264
 [38,]    3.6543684  27.83417  -50.353751  57.904114
 [39,]  -15.1484514  29.26645  -72.252116  41.788958
 [40,]   28.0658928  29.35092  -30.583225  84.633817
 [41,]   14.8717128  29.08505  -42.214918  70.484262
 [42,]    4.3242094  28.50086  -51.341599  61.153497
 [43,]   13.8504402  28.05522  -40.913331  68.398581
 [44,]   24.3362194  27.28591  -30.788101  78.914085
 [45,]   13.1889431  27.54427  -40.715730  67.631888
 [46,]  -17.7137627  27.80068  -72.569896  36.414923
 [47,]  -28.3472950  27.29476  -81.678244  25.608079
 [48,]    9.6170948  28.06588  -45.830459  65.058308
 [49,]  -39.4598317  28.66948  -94.986989  17.393661
 [50,]   30.9086055  29.55650  -26.572501  88.329012
 [51,]  -25.8076366  28.94518  -83.383272  30.483731
 [52,]  -27.9344955  28.28366  -83.500392  27.486197
 [53,]   -8.0502355  27.42836  -61.613145  45.863473
 [54,]   19.2579856  27.03447  -33.502671  70.986982
 [55,]   16.2909217  27.66439  -38.052394  69.143371
 [56,]   -1.3228837  27.23727  -55.037646  53.096391
 [57,]  132.6726978  27.93106   77.206137 187.357331
 [58,]   15.3854189  27.44064  -38.977512  68.151968
 [59,]  -11.1021018  28.82826  -68.121326  47.100381
 [60,]  -98.4821229  29.81761 -157.161360 -39.718893
 [61,]   15.9789884  29.07316  -42.057333  72.926161
 [62,]   11.1301601  27.74470  -41.371425  65.286299
 [63,]  -11.2582348  27.79820  -65.845164  42.115609
 [64,]    0.7878905  27.34122  -52.032957  55.202097
 [65,]  -11.2076660  27.25256  -64.511382  41.487374
 [66,]   19.3969010  27.21057  -34.020029  74.136133
 [67,]    2.7413390  27.37918  -51.426456  55.531692
 [68,]    8.2943051  27.90299  -45.690878  62.797717
 [69,]  -17.1992905  28.29974  -72.534877  38.919021
 [70,]    1.5071670  29.70570  -57.437486  59.401577
 [71,]   20.8939676  28.92899  -35.987943  77.979178
 [72,]   20.7384532  28.22225  -33.222491  77.164287
 [73,]  -23.7536273  27.31639  -79.600621  30.224124
 [74,]  -24.3197072  27.21669  -78.018696  29.064264
 [75,]  -10.9198895  26.94082  -64.727842  42.597232
 [76,]  -18.3519567  27.38319  -71.581832  36.963853
 [77,]   -7.0780084  27.72428  -62.094219  45.695043
 [78,]    6.1788854  27.86481  -47.264481  61.252139
 [79,]   -0.7823376  29.00939  -58.323713  56.949851
 [80,]   29.4033174  30.11908  -30.421139  91.266906
 [81,]   -8.2632425  29.26668  -66.610496  48.867494
 [82,]   23.8145000  27.58684  -30.942961  77.722409
 [83,]    4.8575626  27.66325  -48.351131  59.644099
 [84,]   20.3904477  27.07922  -32.030167  72.499148
 [85,]    1.7451560  26.68396  -50.377078  53.120733
 [86,]    5.0312298  27.48604  -49.511514  57.622505
 [87,]   -3.5468474  27.34797  -55.915356  51.679646
 [88,]  -14.2663646  28.29239  -67.847265  42.089247
 [89,]  -12.4371538  28.87968  -69.821446  43.482297
 [90,]  -11.8790136  29.81158  -71.208520  45.480586
 [91,]   26.0171585  29.20766  -30.038091  85.375120
 [92,]    7.9445807  28.47354  -47.037698  65.102815
 [93,]  -33.6100632  27.38287  -88.355109  19.776385
 [94,]    2.1611300  27.13725  -51.694605  56.458443
 [95,]    3.0344026  26.94435  -49.562906  56.141762
 [96,]   10.2042187  27.03990  -41.849247  62.926880
 [97,]    3.2323238  27.99663  -51.220682  58.432583
 [98,]   -3.1560171  28.43919  -57.916122  52.620312
 [99,]   16.8474105  29.12410  -40.608396  74.864152
[100,]    1.0091687  30.05855  -58.433140  58.681566
[101,]    9.7313939  29.08582  -47.555087  66.634698
[102,]   -8.1613084  27.98578  -62.902932  46.795870
[103,]  -11.0084064  27.63072  -65.334627  43.825662
[104,]   -6.3686903  27.45658  -60.350031  48.049654
[105,]  -21.9359022  27.54636  -76.421955  33.382814
[106,]  -14.3629928  27.16031  -65.278820  38.392060
[107,]  -14.5449835  27.69595  -68.122893  40.408575
[108,]    0.6573357  28.24338  -55.091787  56.585500
[109,]   17.1863718  28.80868  -38.919359  71.919224
[110,]   20.5621092  29.24914  -37.219896  78.646516
[111,]   17.9965429  29.47544  -41.464296  74.103829
[112,]  -11.7668505  28.50034  -68.575339  42.664311
[113,]  -16.4576884  28.04687  -72.912191  37.544355
[114,]  -34.5885129  27.58706  -88.300127  18.468037
[115,]  -37.7808870  27.24016  -91.964538  15.866419
[116,]    5.8793799  27.11760  -47.287881  59.747935
[117,]   38.5737262  27.15999  -15.540225  91.749533
[118,]    5.5372407  28.20525  -50.153684  60.059953
[119,]   19.6188928  28.44534  -37.454506  76.496327
[120,]   -3.3657784  30.05201  -63.089191  54.815028
[121,]   -5.7588610  28.26386  -60.117322  47.793813
[122,]   36.3992500  28.41437  -20.941171  93.045153
[123,]   -0.7592382  27.57961  -53.859534  52.501323
[124,]    2.6689136  27.10773  -49.976338  57.604008
[125,]  -13.7773247  27.10203  -67.322177  39.475178
[126,]   11.2033790  27.21120  -41.010565  65.306322
[127,]  -13.1617072  27.39834  -67.210756  40.749569
[128,]  -41.7794969  27.96592  -95.664816  13.047176
[129,]    6.1565976  28.43068  -50.436851  63.030017
[130,]   24.2307695  29.33433  -34.562678  81.750961
[131,]  -50.5287655  28.75460 -107.766298   5.502855
[132,]   11.8878467  28.00367  -41.800030  67.161674
[133,]   26.3445173  27.88222  -28.062456  81.011281
[134,]   32.2314300  27.63113  -21.515342  85.469826
[135,]   19.8834068  27.58353  -34.795368  74.287228
[136,]   10.3634070  28.00825  -43.891951  64.427959
[137,]   -2.1253800  27.61106  -56.317242  52.471969
[138,]   -9.9925668  28.56169  -65.409750  47.065316
[139,]   -8.9106062  28.45719  -65.225630  46.066217
[140,]   -9.3813569  29.96843  -68.884786  49.232889
[141,]   17.3575607  28.92836  -40.066329  73.778632
[142,]    3.4131616  28.26150  -51.218879  58.355724
[143,]  -19.1739905  27.84646  -73.553085  35.209401
[144,]   -9.4444957  27.44893  -62.862543  44.960354
[145,]   15.3734319  26.88666  -36.844027  69.230582
[146,]    6.0053845  27.56009  -49.185780  59.829004
[147,]  -25.2752645  27.89850  -79.656786  30.357560
[148,]   14.7457663  27.94262  -38.924833  70.236543
[149,]   -4.3719206  28.74177  -60.030737  49.742929
[150,]   10.7447807  29.98415  -48.151526  70.239442
[151,]   -2.1547111  28.84685  -59.700674  54.220042
[152,]   -7.5099338  28.44010  -63.560652  48.292218
[153,]  -17.5507261  27.98482  -73.264030  36.060693
[154,]  -30.9786194  27.79869  -86.288731  23.161655
[155,]  -19.0839475  27.32837  -73.305653  34.583350
[156,]   41.7623717  27.66888  -12.092815  94.875495
[157,]  -36.4474394  27.40064  -89.271537  17.970059
[158,]   14.9040041  27.71451  -41.485350  69.539959
[159,]   17.6758065  28.72776  -37.630394  75.476323
[160,]    9.3395327  30.20087  -50.037040  68.902599
[161,]   17.6074905  28.96742  -40.245647  74.093538
[162,]   11.1078854  28.11767  -43.567214  64.468572
[163,]    6.6902202  27.88459  -47.499881  60.814755
[164,]    0.8499879  27.08400  -53.106702  54.184408
[165,]  -11.1672150  27.11328  -64.701485  42.438264
[166,]  -14.5162212  27.23590  -67.592466  40.089936
[167,]  -49.5763850  27.32816 -102.762919   3.495567
[168,]  -13.0714634  28.48988  -68.945979  43.583314
[169,]   22.8183510  28.52828  -32.677928  77.757969
[170,]   32.5603981  28.99731  -24.633516  89.478661
[171,]    6.4484739  28.52375  -50.038578  63.126532
[172,]   -2.0101903  27.94278  -56.639697  51.203558
[173,]   10.0258693  27.51549  -43.561998  63.677171
[174,]   11.8882142  26.55038  -38.738214  64.488492
[175,]  -23.5377678  27.08766  -76.857261  30.636991
[176,]    7.6821194  27.16770  -44.069064  62.063792
[177,]    0.7254750  26.93533  -51.140271  55.049178
[178,]   -2.3682298  27.88377  -58.034041  52.294497
[179,]   12.1459358  29.01541  -44.404463  69.565984
[180,]   -5.3515491  29.73252  -62.975318  52.234002

3.9 Compare Models

elpd values: higher is better.
looic values: lower is better.

An elpd_diff whose magnitude is greater than about 2 times its standard error (i.e., |elpd_diff| > 2 × se_diff) indicates a meaningfully better-fitting model.
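
As a worked check of this rule of thumb (the values here are copied from the loo() comparison printed below):

```r
# Rule of thumb: a model is credibly worse than the best model when
# |elpd_diff| > 2 * se_diff. Values copied from the loo() comparison,
# where fit_sleep3 is the reference (best) model.
comparison <- data.frame(
  model     = c("fit_sleep2", "fit_sleep1"),
  elpd_diff = c(-23.5, -91.9),
  se_diff   = c(11.4, 20.7)
)
comparison$credibly_worse <-
  abs(comparison$elpd_diff) > 2 * comparison$se_diff
comparison  # both models are credibly worse than fit_sleep3
```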

loo(fit_sleep1, fit_sleep2, fit_sleep3)
Warning: Found 3 observations with a pareto_k > 0.7 in model 'fit_sleep3'. We
recommend to set 'moment_match = TRUE' in order to perform moment matching for
problematic observations.
Output of model 'fit_sleep1':

Computed from 4000 by 180 log-likelihood matrix.

         Estimate   SE
elpd_loo   -953.1 10.5
p_loo         3.0  0.5
looic      1906.3 21.0
------
MCSE of elpd_loo is 0.0.
MCSE and ESS estimates assume MCMC draws (r_eff in [0.9, 1.2]).

All Pareto k estimates are good (k < 0.7).
See help('pareto-k-diagnostic') for details.

Output of model 'fit_sleep2':

Computed from 4000 by 180 log-likelihood matrix.

         Estimate   SE
elpd_loo   -884.7 14.3
p_loo        19.2  3.3
looic      1769.4 28.7
------
MCSE of elpd_loo is 0.1.
MCSE and ESS estimates assume MCMC draws (r_eff in [0.4, 1.8]).

All Pareto k estimates are good (k < 0.7).
See help('pareto-k-diagnostic') for details.

Output of model 'fit_sleep3':

Computed from 4000 by 180 log-likelihood matrix.

         Estimate   SE
elpd_loo   -861.2 22.1
p_loo        34.0  8.2
looic      1722.4 44.1
------
MCSE of elpd_loo is NA.
MCSE and ESS estimates assume MCMC draws (r_eff in [0.5, 1.2]).

Pareto k diagnostic values:
                         Count Pct.    Min. ESS
(-Inf, 0.7]   (good)     177   98.3%   49      
   (0.7, 1]   (bad)        3    1.7%   <NA>    
   (1, Inf)   (very bad)   0    0.0%   <NA>    
See help('pareto-k-diagnostic') for details.

Model comparisons:
           elpd_diff se_diff
fit_sleep3   0.0       0.0  
fit_sleep2 -23.5      11.4  
fit_sleep1 -91.9      20.7  
print(loo(fit_sleep1, fit_sleep2, fit_sleep3), simplify = FALSE)
Warning: Found 3 observations with a pareto_k > 0.7 in model 'fit_sleep3'. We
recommend to set 'moment_match = TRUE' in order to perform moment matching for
problematic observations.
Output of model 'fit_sleep1':

Computed from 4000 by 180 log-likelihood matrix.

         Estimate   SE
elpd_loo   -953.1 10.5
p_loo         3.0  0.5
looic      1906.3 21.0
------
MCSE of elpd_loo is 0.0.
MCSE and ESS estimates assume MCMC draws (r_eff in [0.9, 1.2]).

All Pareto k estimates are good (k < 0.7).
See help('pareto-k-diagnostic') for details.

Output of model 'fit_sleep2':

Computed from 4000 by 180 log-likelihood matrix.

         Estimate   SE
elpd_loo   -884.7 14.3
p_loo        19.2  3.3
looic      1769.4 28.7
------
MCSE of elpd_loo is 0.1.
MCSE and ESS estimates assume MCMC draws (r_eff in [0.4, 1.8]).

All Pareto k estimates are good (k < 0.7).
See help('pareto-k-diagnostic') for details.

Output of model 'fit_sleep3':

Computed from 4000 by 180 log-likelihood matrix.

         Estimate   SE
elpd_loo   -861.2 22.1
p_loo        34.0  8.2
looic      1722.4 44.1
------
MCSE of elpd_loo is NA.
MCSE and ESS estimates assume MCMC draws (r_eff in [0.5, 1.2]).

Pareto k diagnostic values:
                         Count Pct.    Min. ESS
(-Inf, 0.7]   (good)     177   98.3%   49      
   (0.7, 1]   (bad)        3    1.7%   <NA>    
   (1, Inf)   (very bad)   0    0.0%   <NA>    
See help('pareto-k-diagnostic') for details.

Model comparisons:
           elpd_diff se_diff elpd_loo se_elpd_loo p_loo se_p_loo  looic se_looic
fit_sleep3    0.0       0.0   -861.2     22.1      34.0    8.2   1722.4    44.1
fit_sleep2  -23.5      11.4   -884.7     14.3      19.2    3.3   1769.4    28.7
fit_sleep1  -91.9      20.7   -953.1     10.5       3.0    0.5   1906.3    21.0

3.9.1 Compute Model Weights

model_weights(fit_sleep1, fit_sleep2, fit_sleep3, weights = "loo")
  fit_sleep1   fit_sleep2   fit_sleep3 
1.194774e-40 6.125789e-11 1.000000e+00 
round(model_weights(fit_sleep1, fit_sleep2, fit_sleep3, weights = "loo"))
fit_sleep1 fit_sleep2 fit_sleep3 
         0          0          1 
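The `"loo"` weights above can be reproduced by hand: they are proportional to `exp(elpd_loo)` for each model (equivalently, `exp(-0.5 * looic)`). A minimal sketch using the `elpd_loo` estimates reported earlier:

```r
# "loo" model weights are proportional to exp(elpd_loo);
# subtracting the maximum before exponentiating avoids numerical underflow
elpd <- c(fit_sleep1 = -953.1, fit_sleep2 = -884.7, fit_sleep3 = -861.2)
w <- exp(elpd - max(elpd))
w <- w / sum(w)
w
```

The hand-computed weights match the `model_weights()` output above up to rounding of the `elpd_loo` estimates.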

4 Multilevel Mediation

The syntax below estimates random intercepts (allowing each participant to have their own intercept) to account for the nesting of repeated observations within participants.

bayesianMediationSyntax <-
  bf(M ~ X + (1 |i| id)) +
  bf(Y ~ X + M + (1 |i| id)) +
  set_rescor(FALSE) # don't add a residual correlation between M and Y

bayesianMediationModel <- brm(
  bayesianMediationSyntax,
  data  = mydata,
  seed  = 52242
)
Warning: Rows containing NAs were excluded from the model.
Compiling Stan program...
Start sampling

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.000142 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.42 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 5.345 seconds (Warm-up)
Chain 1:                3.543 seconds (Sampling)
Chain 1:                8.888 seconds (Total)
Chain 1: 

summary(bayesianMediationModel)
 Family: MV(gaussian, gaussian) 
  Links: mu = identity; sigma = identity
         mu = identity; sigma = identity 
Formula: M ~ X + (1 | i | id) 
         Y ~ X + M + (1 | i | id) 
   Data: mydata (Number of observations: 970) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Multilevel Hyperparameters:
~id (Number of levels: 100) 
                             Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
sd(M_Intercept)                  0.07      0.05     0.00     0.17 1.00     1697
sd(Y_Intercept)                  0.11      0.06     0.01     0.23 1.00     1061
cor(M_Intercept,Y_Intercept)     0.04      0.55    -0.93     0.95 1.00     1275
                             Tail_ESS
sd(M_Intercept)                  2298
sd(Y_Intercept)                  1675
cor(M_Intercept,Y_Intercept)     1906

Regression Coefficients:
            Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
M_Intercept     0.00      0.03    -0.06     0.07 1.00     6932     3140
Y_Intercept     0.06      0.03    -0.01     0.13 1.00     6151     2914
M_X             0.51      0.03     0.44     0.58 1.00     7856     2892
Y_X             0.03      0.04    -0.04     0.10 1.00     6309     3423
Y_M             0.68      0.03     0.62     0.75 1.00     6429     3024

Further Distributional Parameters:
        Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma_M     1.01      0.02     0.97     1.06 1.00     6987     1902
sigma_Y     1.00      0.02     0.96     1.05 1.00     4782     2966

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
hypothesis(
  bayesianMediationModel,
  "b_M_X * b_Y_M = 0", # indirect effect = a path * b path
  class = NULL,
  seed  =  52242
)
Hypothesis Tests for class :
         Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
1 (b_M_X*b_Y_M) = 0     0.35      0.03     0.29      0.4         NA        NA
  Star
1    *
---
'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
'*': For one-sided hypotheses, the posterior probability exceeds 95%;
for two-sided hypotheses, the value tested against lies outside the 95%-CI.
Posterior probabilities of point hypotheses assume equal prior probabilities.
mediation(bayesianMediationModel)
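The mediation quantities can also be computed directly from the posterior draws. A sketch, assuming the fitted `bayesianMediationModel` object above and the default `brms` parameter names (`b_M_X`, `b_Y_X`, `b_Y_M`):

```r
# extract posterior draws as a data frame
draws <- as_draws_df(bayesianMediationModel)

indirect <- draws$b_M_X * draws$b_Y_M # a path * b path
direct   <- draws$b_Y_X               # c' path
total    <- indirect + direct

# posterior median and 95% credible interval for each effect
quantile(indirect, probs = c(.025, .5, .975))
quantile(direct,   probs = c(.025, .5, .975))
quantile(total,    probs = c(.025, .5, .975))
```

Working with the draws directly makes it easy to summarize any derived quantity (e.g., the proportion of the total effect that is mediated).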

5 Setting Priors

5.1 Parameters for Which to Set Priors

get_prior(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy)

5.2 Define Priors

bprior <- c(
  set_prior("normal(5, 5)", coef = "Days"),
  set_prior("cauchy(0, 10)", class = "sd"),
  set_prior("lkj(2)", class = "cor"))

bprior

5.3 Fit the Model

Fit the model with these priors, and sample from these priors:

fit_sleep4 <- brm(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy,
  prior = bprior, 
  sample_prior = TRUE,
  seed  = 52242
)
Compiling Stan program...
Start sampling
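To see what data these priors imply before conditioning on the observations, one can also sample from the prior alone. A sketch using `sample_prior = "only"` (which ignores the likelihood, so `pp_check()` then shows prior predictive draws):

```r
# Sketch: prior predictive check for the priors defined above
fit_prior_only <- brm(
  Reaction ~ 1 + Days + (1 + Days | Subject),
  data = sleepstudy,
  prior = bprior,
  sample_prior = "only", # draw from the priors, ignoring the data
  seed  = 52242
)

pp_check(fit_prior_only)
```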

6 Evaluate a Hypothesis

# Evid.Ratio is the ratio of P(Days < 7) / P(Days >= 7)
(hyp1 <- hypothesis(fit_sleep4, "Days < 7"))
Hypothesis Tests for class b:
      Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob Star
1 (Days)-(7) < 0     2.84      1.59     0.18      5.4       0.04      0.04     
---
'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
'*': For one-sided hypotheses, the posterior probability exceeds 95%;
for two-sided hypotheses, the value tested against lies outside the 95%-CI.
Posterior probabilities of point hypotheses assume equal prior probabilities.
plot(hyp1)

# Evid.Ratio is the Bayes factor of the posterior versus the prior
# density at Days = 10 (the Savage-Dickey density ratio)
(hyp2 <- hypothesis(fit_sleep4, "Days = 10"))
Hypothesis Tests for class b:
       Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
1 (Days)-(10) = 0    -0.16      1.59    -3.32     2.75       5.17      0.84
  Star
1     
---
'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
'*': For one-sided hypotheses, the posterior probability exceeds 95%;
for two-sided hypotheses, the value tested against lies outside the 95%-CI.
Posterior probabilities of point hypotheses assume equal prior probabilities.
plot(hyp2)
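The one-sided evidence ratio can be reproduced by hand from the posterior draws. A sketch, assuming the fitted `fit_sleep4` object above:

```r
# Sketch: compute the evidence ratio for "Days < 7" from the draws
draws <- as_draws_df(fit_sleep4)

p <- mean(draws$b_Days < 7) # posterior P(Days < 7)
p / (1 - p)                 # Evid.Ratio = P(Days < 7) / P(Days >= 7)
```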

7 Parallel Processing

7.1 Between-Chain Parallelization

fit_sleep4 <- brm(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy,
  prior = bprior, 
  sample_prior = TRUE,
  cores = 4,
  seed  = 52242
)
Compiling Stan program...
Start sampling
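7.2 Within-Chain Parallelization

In addition to running chains in parallel, `brms` can parallelize the likelihood evaluation within each chain via `threads = threading()`. A sketch, assuming the `cmdstanr` backend is installed (within-chain threading is not available with the default `rstan` backend):

```r
# Sketch: within-chain threading on top of between-chain parallelization
fit_sleep4threads <- brm(
  Reaction ~ 1 + Days + (1 + Days | Subject),
  data = sleepstudy,
  prior = bprior,
  cores = 4,              # 4 chains in parallel
  threads = threading(2), # 2 threads per chain
  backend = "cmdstanr",   # required for threading
  seed  = 52242
)
```

Within-chain threading mainly helps for models whose per-iteration likelihood cost is large; for small models the threading overhead can outweigh the gain.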

8 Missing Data Handling

8.1 Multiply Imputed Datasets from mice

?brm_multiple
imp <- mice::mice(
  mydata,
  m = 5,
  print = FALSE)

fit_imp <- brm_multiple(
  bayesianMediationSyntax,
  data = imp,
  chains = 2)
Compiling the C++ model

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.000236 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 2.36 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 4.764 seconds (Warm-up)
Chain 1:                2.485 seconds (Sampling)
Chain 1:                7.249 seconds (Total)
Chain 1: 

Fitting imputed model 1
Start sampling
Fitting imputed model 2
Start sampling
Fitting imputed model 3
Start sampling
Fitting imputed model 4
Start sampling
Fitting imputed model 5
Start sampling
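Because chains from different imputed datasets are pooled, the usual pooled `Rhat` values are expected to differ from 1 even when each fit converged; `brm_multiple()` therefore stores per-imputation diagnostics. A sketch, assuming the `fit_imp` object above:

```r
# Rhat values computed separately within each imputed dataset
round(fit_imp$rhats, 2)

# posterior summaries pooled across all imputed datasets
summary(fit_imp)
```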

8.2 Imputation on the Fly During Model Fitting

https://paul-buerkner.github.io/brms/articles/brms_missings.html (archived at https://perma.cc/4Y9L-USQR)

?mi
bayesianRegressionImputationSyntax <-
  bf(X | mi() ~ (1 |i| id)) +
  bf(M | mi() ~ mi(X) + (1 |i| id)) +
  bf(Y | mi() ~ mi(X) + mi(M) + (1 |i| id)) +
  set_rescor(FALSE) # don't add residual correlations among X, M, and Y

bayesianRegressionModel <- brm(
  bayesianRegressionImputationSyntax,
  data = mydata,
  seed = 52242
)
Compiling Stan program...
Start sampling

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.000484 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 4.84 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 21.223 seconds (Warm-up)
Chain 1:                13.498 seconds (Sampling)
Chain 1:                34.721 seconds (Total)
Chain 1: 

summary(bayesianRegressionModel)
 Family: MV(gaussian, gaussian, gaussian) 
  Links: mu = identity; sigma = identity
         mu = identity; sigma = identity
         mu = identity; sigma = identity 
Formula: X | mi() ~ (1 | i | id) 
         M | mi() ~ mi(X) + (1 | i | id) 
         Y | mi() ~ mi(X) + mi(M) + (1 | i | id) 
   Data: mydata (Number of observations: 1000) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Multilevel Hyperparameters:
~id (Number of levels: 100) 
                             Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
sd(X_Intercept)                  0.09      0.06     0.00     0.21 1.00     1007
sd(M_Intercept)                  0.07      0.05     0.00     0.18 1.00     1534
sd(Y_Intercept)                  0.10      0.06     0.01     0.22 1.00     1310
cor(X_Intercept,M_Intercept)     0.08      0.49    -0.84     0.90 1.00     3556
cor(X_Intercept,Y_Intercept)     0.06      0.47    -0.82     0.88 1.00     2320
cor(M_Intercept,Y_Intercept)     0.00      0.48    -0.88     0.86 1.00     2474
                             Tail_ESS
sd(X_Intercept)                  1428
sd(M_Intercept)                  2177
sd(Y_Intercept)                  2087
cor(X_Intercept,M_Intercept)     2922
cor(X_Intercept,Y_Intercept)     2480
cor(M_Intercept,Y_Intercept)     2693

Regression Coefficients:
            Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
X_Intercept    -0.02      0.03    -0.08     0.05 1.00     7564     3001
M_Intercept     0.01      0.03    -0.06     0.07 1.00     7728     3049
Y_Intercept     0.06      0.03    -0.01     0.12 1.00     6781     2915
M_miX           0.51      0.03     0.45     0.58 1.00     8400     2801
Y_miX           0.04      0.04    -0.04     0.11 1.00     5567     2993
Y_miM           0.68      0.03     0.62     0.74 1.00     5954     3270

Further Distributional Parameters:
        Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma_X     0.98      0.02     0.94     1.03 1.00     6184     2817
sigma_M     1.01      0.02     0.96     1.05 1.00     6661     2907
sigma_Y     1.01      0.02     0.96     1.05 1.00     5554     2605

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
hypothesis(
  bayesianRegressionModel,
  "bsp_M_miX * bsp_Y_miM = 0", # indirect effect = a path * b path
  class = NULL,
  seed = 52242
)
Hypothesis Tests for class :
                Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
1 (bsp_M_miX*bsp_Y_... = 0     0.35      0.03      0.3      0.4         NA
  Post.Prob Star
1        NA    *
---
'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
'*': For one-sided hypotheses, the posterior probability exceeds 95%;
for two-sided hypotheses, the value tested against lies outside the 95%-CI.
Posterior probabilities of point hypotheses assume equal prior probabilities.
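With `mi()`, the missing values are estimated as parameters, so their posterior distributions can be inspected directly. A sketch, assuming the fitted `bayesianRegressionModel` object above; the parameter names are assumed to follow the `Ymi_<response>[index]` pattern, which can be verified with `variables()`:

```r
# check the exact names of the imputed-value parameters first
head(grep("^Ymi", variables(bayesianRegressionModel), value = TRUE))

# posterior summaries of the imputed values themselves
posterior_summary(bayesianRegressionModel, variable = "^Ymi", regex = TRUE)
```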

9 Session Info

sessionInfo()
R version 4.4.2 (2024-10-31)
Platform: x86_64-pc-linux-gnu
Running under: Ubuntu 24.04.1 LTS

Matrix products: default
BLAS:   /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3 
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.26.so;  LAPACK version 3.12.0

locale:
 [1] LC_CTYPE=C.UTF-8       LC_NUMERIC=C           LC_TIME=C.UTF-8       
 [4] LC_COLLATE=C.UTF-8     LC_MONETARY=C.UTF-8    LC_MESSAGES=C.UTF-8   
 [7] LC_PAPER=C.UTF-8       LC_NAME=C              LC_ADDRESS=C          
[10] LC_TELEPHONE=C         LC_MEASUREMENT=C.UTF-8 LC_IDENTIFICATION=C   

time zone: UTC
tzcode source: system (glibc)

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] mice_3.17.0         bayestestR_0.15.0   brms_2.22.0        
[4] Rcpp_1.0.13-1       rstan_2.32.6        StanHeaders_2.32.10
[7] lme4_1.1-36         Matrix_1.7-1       

loaded via a namespace (and not attached):
 [1] tidyselect_1.2.1     farver_2.1.2         dplyr_1.1.4         
 [4] loo_2.8.0            fastmap_1.2.0        tensorA_0.36.2.1    
 [7] digest_0.6.37        rpart_4.1.23         lifecycle_1.0.4     
[10] survival_3.7-0       processx_3.8.5       magrittr_2.0.3      
[13] posterior_1.6.0      compiler_4.4.2       rlang_1.1.4         
[16] sass_0.4.9           tools_4.4.2          yaml_2.3.10         
[19] knitr_1.49           labeling_0.4.3       bridgesampling_1.1-2
[22] pkgbuild_1.4.5       plyr_1.8.9           abind_1.4-8         
[25] withr_3.0.2          purrr_1.0.2          nnet_7.3-19         
[28] grid_4.4.2           stats4_4.4.2         jomo_2.7-6          
[31] future_1.34.0        colorspace_2.1-1     inline_0.3.21       
[34] ggplot2_3.5.1        globals_0.16.3       scales_1.3.0        
[37] iterators_1.0.14     MASS_7.3-61          insight_1.0.1       
[40] cli_3.6.3            mvtnorm_1.3-3        rmarkdown_2.29      
[43] reformulas_0.4.0     generics_0.1.3       RcppParallel_5.1.9  
[46] future.apply_1.11.3  reshape2_1.4.4       minqa_1.2.8         
[49] cachem_1.1.0         stringr_1.5.1        splines_4.4.2       
[52] bayesplot_1.11.1     parallel_4.4.2       matrixStats_1.5.0   
[55] vctrs_0.6.5          boot_1.3-31          glmnet_4.1-8        
[58] jsonlite_1.8.9       callr_3.7.6          mitml_0.4-5         
[61] listenv_0.9.1        foreach_1.5.2        jquerylib_0.1.4     
[64] tidyr_1.3.1          parallelly_1.41.0    glue_1.8.0          
[67] nloptr_2.1.1         pan_1.9              codetools_0.2-20    
[70] ps_1.8.1             distributional_0.5.0 stringi_1.8.4       
[73] gtable_0.3.6         shape_1.4.6.1        QuickJSR_1.5.1      
[76] munsell_0.5.1        tibble_3.2.1         pillar_1.10.1       
[79] htmltools_0.5.8.1    Brobdingnag_1.2-9    R6_2.5.1            
[82] Rdpack_2.6.2         evaluate_1.0.3       lattice_0.22-6      
[85] rbibutils_2.3        backports_1.5.0      broom_1.0.7         
[88] bslib_0.8.0          rstantools_2.4.0     coda_0.19-4.1       
[91] gridExtra_2.3        nlme_3.1-166         checkmate_2.3.2     
[94] xfun_0.50            pkgconfig_2.0.3     
---
title: "Bayesian Analysis"
output:
  html_document:
    code_folding: show
---

```{r setup, include = FALSE}
knitr::opts_chunk$set(
  echo = TRUE,
  error = TRUE,
  comment = "",
  class.source = "fold-show")
```

Adapted from `brms` workshop by Paul-Christian Bürkner

# Preamble

## Install Libraries

```{r}
#install.packages("remotes")
#remotes::install_github("DevPsyLab/petersenlab")
```

## Load Libraries

```{r, message = FALSE, warning = FALSE}
library("lme4")
library("rstan")
library("brms")
library("bayestestR")
library("mice")
```

## Simulate Data

```{r, class.source = "fold-hide"}
set.seed(52242)

sampleSize <- 1000

id <- rep(1:100, each = 10)
X <- rnorm(sampleSize)
M <- 0.5*X + rnorm(sampleSize)
Y <- 0.7*M + rnorm(sampleSize)

X[sample(1:length(X), size = 10)] <- NA
M[sample(1:length(M), size = 10)] <- NA
Y[sample(1:length(Y), size = 10)] <- NA

mydata <- data.frame(
  id = id,
  X = X,
  Y = Y,
  M = M)
```

## Load Data

```{r}
data("sleepstudy", package = "lme4")
```

## Prepare Data

```{r}
conditions <- make_conditions(sleepstudy, "Subject")
```

# brms

## Post-Processing Methods

```{r, class.source = "fold-hide"}
methods(class = "brmsfit")
```

# Multilevel Models

## Fit the Models

### Complete Pooling

```{r, results = "hide"}
fit_sleep1 <- brm(
  Reaction ~ 1 + Days,
  data = sleepstudy,
  seed  = 52242)
```

### Random Intercepts

```{r, results = "hide"}
fit_sleep2 <- brm(
  Reaction ~ 1 + Days + (1 | Subject), 
  data = sleepstudy,
  seed  = 52242)
```

### Random Intercepts and Slopes

```{r, results = "hide"}
fit_sleep3 <- brm(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy,
  seed  = 52242
)
```

## Summarize Results

For convergence, `Rhat` values should be close to 1.00; values substantially greater than 1.00 (e.g., above 1.01–1.05) suggest that the chains have not converged.
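
`summary()` below reports `Rhat` for every parameter automatically; as a sketch of what the diagnostic measures, the split-$\hat{R}$ computation can be written in a few lines of base R (simulated well-mixed chains stand in for real MCMC draws):

```{r, class.source = "fold-hide"}
# Sketch of split-Rhat: split each chain in half, then compare
# between- and within-half variance; values near 1.00 indicate mixing.
set.seed(1)
chains <- matrix(rnorm(4000), nrow = 1000, ncol = 4) # 4 simulated chains

split_rhat <- function(chains) {
  half <- nrow(chains) %/% 2
  # split each chain into two halves -> 2 * ncol(chains) sub-chains
  sub   <- cbind(chains[1:half, ], chains[(half + 1):(2 * half), ])
  means <- colMeans(sub)
  W <- mean(apply(sub, 2, var))  # within-chain variance
  B <- half * var(means)         # between-chain variance
  var_plus <- (half - 1) / half * W + B / half
  sqrt(var_plus / W)
}

split_rhat(chains) # ~1.00 for converged chains
```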

```{r}
summary(fit_sleep3)
```

## Model Priors

```{r}
prior_summary(fit_sleep3)
```

## Model Parameters

```{r}
variables(fit_sleep3)
```

## Model Coefficients

```{r}
coef(fit_sleep3)
```

## Plots

### Trace Plots

```{r}
plot(fit_sleep3, ask = FALSE)
```

### Visualize Predictions

#### Sample-Level

```{r}
plot(conditional_effects(fit_sleep1), points = TRUE)
```

#### Person-Level

```{r}
# re_formula = NULL ensures that group-level effects are included
ce2 <- conditional_effects(
  fit_sleep3,
  conditions = conditions,
  re_formula = NULL)

plot(ce2, ncol = 6, points = TRUE)
```

### Check Model Fit

#### Posterior Predictive Check

Evaluate how closely the posterior predictions match the observed values.
If they do not match the general pattern of the observed values, a different response distribution may be necessary.

```{r}
pp_check(fit_sleep3)
pp_check(fit_sleep3, type = "dens_overlay")
pp_check(fit_sleep3, "error_scatter_avg")
```

## Fitted Values

```{r}
fitted(fit_sleep3)
```

## Residuals

```{r}
residuals(fit_sleep3)
```

## Compare Models

- `elpd` values: higher is better
- `looic` values: lower is better

An `elpd_diff` value that is greater than about two times its standard error (`se_diff`) indicates that the model with the higher `elpd` fits meaningfully better.
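
As a sketch of that decision rule, the check can be applied by hand to `loo_compare()`-style output (the `elpd_diff` and `se_diff` values below are made up for illustration):

```{r, class.source = "fold-hide"}
# Hypothetical elpd_diff and se_diff values relative to the best model
elpd_diff <- c(fit_sleep3 = 0, fit_sleep2 = -22.1, fit_sleep1 = -177.0)
se_diff   <- c(fit_sleep3 = 0, fit_sleep2 = 10.5,  fit_sleep1 = 22.6)

# A model is meaningfully worse when |elpd_diff| exceeds ~2 * se_diff
meaningfullyWorse <- abs(elpd_diff) > 2 * se_diff
meaningfullyWorse
```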

```{r}
loo(fit_sleep1, fit_sleep2, fit_sleep3)
print(loo(fit_sleep1, fit_sleep2, fit_sleep3), simplify = FALSE)
```

### Compute Model Weights

```{r}
model_weights(fit_sleep1, fit_sleep2, fit_sleep3, weights = "loo")

round(model_weights(fit_sleep1, fit_sleep2, fit_sleep3, weights = "loo"), digits = 2)
```
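
`model_weights()` handles the computation for you; conceptually, simple (non-stabilized) pseudo-BMA weights are proportional to the exponentiated `elpd` values. A sketch with made-up `elpd_loo` values:

```{r, class.source = "fold-hide"}
# Hypothetical elpd_loo values for three models
elpd <- c(fit1 = -950.3, fit2 = -871.5, fit3 = -860.2)

w <- exp(elpd - max(elpd)) # subtract the max for numerical stability
w <- w / sum(w)            # normalize so the weights sum to 1
round(w, 3)
```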

# Multilevel Mediation {#multilevelMediation}

The syntax below estimates random intercepts (which allow each participant to have a different intercept) to account for the nesting of repeated observations within participants. The `|i|` term correlates the group-level effects across the two submodels.

```{r}
bayesianMediationSyntax <-
  bf(M ~ X + (1 |i| id)) +
  bf(Y ~ X + M + (1 |i| id)) +
  set_rescor(FALSE) # don't add a residual correlation between M and Y

bayesianMediationModel <- brm(
  bayesianMediationSyntax,
  data  = mydata,
  seed  = 52242
)

summary(bayesianMediationModel)

hypothesis(
  bayesianMediationModel,
  "b_M_X * b_Y_M = 0", # indirect effect = a path * b path
  class = NULL,
  seed  =  52242
)

mediation(bayesianMediationModel)
```
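
`hypothesis()` above computes the indirect effect from the posterior draws of the `a` and `b` paths; the same quantity can be computed by hand as the per-draw product of the two paths. A sketch in which simulated draws stand in for `as_draws_df(bayesianMediationModel)$b_M_X` and `$b_Y_M`:

```{r, class.source = "fold-hide"}
# The indirect effect is the per-draw product of the a path (X -> M)
# and the b path (M -> Y); simulated draws stand in for real ones.
set.seed(52242)
b_M_X <- rnorm(4000, mean = 0.5, sd = 0.03) # a path draws
b_Y_M <- rnorm(4000, mean = 0.7, sd = 0.03) # b path draws

indirect <- b_M_X * b_Y_M
c(estimate = mean(indirect),
  quantile(indirect, c(.025, .975))) # posterior mean and 95% CI
```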

# Setting Priors

## Parameters for Which to Set Priors

```{r}
get_prior(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy)
```

## Define Priors

```{r}
bprior <- c(
  set_prior("normal(5, 5)", coef = "Days"),
  set_prior("cauchy(0, 10)", class = "sd"),
  set_prior("lkj(2)", class = "cor"))

bprior
```

## Fit the Model

Fit the model with these priors, and sample from these priors:

```{r, results = "hide"}
fit_sleep4 <- brm(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy,
  prior = bprior, 
  sample_prior = TRUE,
  seed  = 52242
)
```

# Evaluate a Hypothesis

```{r}
# Evid.Ratio is the ratio of P(Days < 7) / P(Days >= 7)
(hyp1 <- hypothesis(fit_sleep4, "Days < 7"))
plot(hyp1)
```

```{r}
# Evid.Ratio is the Bayes factor of the posterior vs. the prior
# density at Days = 10 (Savage-Dickey density ratio)
(hyp2 <- hypothesis(fit_sleep4, "Days = 10"))
plot(hyp2)
```

# Parallel Processing

## Between-Chain Parallelization

```{r, results = "hide"}
fit_sleep4 <- brm(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy,
  prior = bprior, 
  sample_prior = TRUE,
  cores = 4,
  seed  = 52242
)
```

## Within-Chain Parallelization

https://paul-buerkner.github.io/brms/articles/brms_threading.html (archived at https://perma.cc/NCG3-KV4G)
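
As a sketch (not evaluated here), within-chain parallelization is requested with the `threads = threading()` argument and requires the `cmdstanr` backend; two threads per chain are assumed below:

```{r, eval = FALSE}
# Within-chain parallelization splits the likelihood evaluation across
# threads within each chain; requires the cmdstanr backend.
fit_sleep4_threaded <- brm(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy,
  prior = bprior, 
  sample_prior = TRUE,
  cores = 4,
  backend = "cmdstanr",
  threads = threading(2),
  seed  = 52242
)
```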

# Missing Data Handling

## Multiply Imputed Datasets from `mice`

```{r}
?brm_multiple
```

```{r}
imp <- mice::mice(
  mydata,
  m = 5,
  print = FALSE)

fit_imp <- brm_multiple(
  bayesianMediationSyntax,
  data = imp,
  chains = 2)
```

## Imputation on the Fly During Model Fitting

https://paul-buerkner.github.io/brms/articles/brms_missings.html (archived at https://perma.cc/4Y9L-USQR)

```{r}
?mi
```

```{r}
bayesianRegressionImputationSyntax <-
  bf(X | mi() ~ (1 |i| id)) +
  bf(M | mi() ~ mi(X) + (1 |i| id)) +
  bf(Y | mi() ~ mi(X) + mi(M) + (1 |i| id)) +
  set_rescor(FALSE) # don't add a residual correlation between X, M, and Y

bayesianRegressionModel <- brm(
  bayesianRegressionImputationSyntax,
  data = mydata,
  seed = 52242
)

summary(bayesianRegressionModel)

hypothesis(
  bayesianRegressionModel,
  "bsp_M_miX * bsp_Y_miM = 0", # indirect effect = a path * b path
  class = NULL,
  seed = 52242
)
```

# Session Info

```{r, class.source = "fold-hide"}
sessionInfo()
```




Developmental Psychopathology Lab