Adapted from brms workshop by Paul-Christian Bürkner

1 Preamble

1.1 Install Libraries

#install.packages("remotes")
#remotes::install_github("DevPsyLab/petersenlab")

1.2 Load Libraries

library("lme4")
library("rstan")
library("brms")
library("bayestestR")
library("mice")

1.3 Simulate Data

set.seed(52242)

sampleSize <- 1000

id <- rep(1:100, each = 10) # 100 participants, 10 observations each
X <- rnorm(sampleSize)
M <- 0.5*X + rnorm(sampleSize) # a path: X -> M
Y <- 0.7*M + rnorm(sampleSize) # b path: M -> Y

# set 10 values of each variable to missing at random
X[sample(1:length(X), size = 10)] <- NA
M[sample(1:length(M), size = 10)] <- NA
Y[sample(1:length(Y), size = 10)] <- NA

mydata <- data.frame(
  id = id,
  X = X,
  Y = Y,
  M = M)

1.4 Load Data

data("sleepstudy", package = "lme4")

1.5 Prepare Data

# create one plotting condition per subject, for use with conditional_effects() below
conditions <- make_conditions(sleepstudy, "Subject")

2 brms

2.1 Post-Processing Methods

methods(class = "brmsfit")
  [1] add_criterion           add_ic                  as_draws_array         
  [4] as_draws_df             as_draws_list           as_draws_matrix        
  [7] as_draws_rvars          as_draws                as.array               
 [10] as.data.frame           as.matrix               as.mcmc                
 [13] autocor                 bayes_factor            bayes_R2               
 [16] bayesfactor_models      bayesfactor_parameters  bayesfactor_restricted 
 [19] bci                     bridge_sampler          check_prior            
 [22] ci                      coef                    conditional_effects    
 [25] conditional_smooths     control_params          default_prior          
 [28] describe_posterior      describe_prior          diagnostic_draws       
 [31] diagnostic_posterior    effective_sample        equivalence_test       
 [34] estimate_density        eti                     expose_functions       
 [37] family                  fitted                  fixef                  
 [40] formula                 getCall                 hdi                    
 [43] hypothesis              kfold                   log_lik                
 [46] log_posterior           logLik                  loo_compare            
 [49] loo_linpred             loo_model_weights       loo_moment_match       
 [52] loo_predict             loo_predictive_interval loo_R2                 
 [55] loo_subsample           loo                     LOO                    
 [58] map_estimate            marginal_effects        marginal_smooths       
 [61] mcmc_plot               mcse                    mediation              
 [64] model_to_priors         model_weights           model.frame            
 [67] nchains                 ndraws                  neff_ratio             
 [70] ngrps                   niterations             nobs                   
 [73] nsamples                nuts_params             nvariables             
 [76] p_direction             p_map                   p_rope                 
 [79] p_significance          pairs                   parnames               
 [82] plot                    point_estimate          post_prob              
 [85] posterior_average       posterior_epred         posterior_interval     
 [88] posterior_linpred       posterior_predict       posterior_samples      
 [91] posterior_smooths       posterior_summary       pp_average             
 [94] pp_check                pp_mixture              predict                
 [97] predictive_error        predictive_interval     prepare_predictions    
[100] print                   prior_draws             prior_summary          
[103] psis                    ranef                   reloo                  
[106] residuals               restructure             rhat                   
[109] rope                    sexit_thresholds        si                     
[112] simulate_prior          spi                     stancode               
[115] standata                stanplot                summary                
[118] unupdate                update                  VarCorr                
[121] variables               vcov                    waic                   
[124] WAIC                    weighted_posteriors    
see '?methods' for accessing help and source code

3 Multilevel Models

3.1 Fit the Models

3.1.1 Complete Pooling

fit_sleep1 <- brm(
  Reaction ~ 1 + Days,
  data = sleepstudy,
  seed  = 52242)
Compiling Stan program...
Start sampling

3.1.2 Random Intercepts

fit_sleep2 <- brm(
  Reaction ~ 1 + Days + (1 | Subject), 
  data = sleepstudy,
  seed  = 52242)
Compiling Stan program...
Start sampling

3.1.3 Random Intercepts and Slopes

fit_sleep3 <- brm(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy,
  seed  = 52242
)
Compiling Stan program...
Start sampling
Warning: There were 1 divergent transitions after warmup. See
https://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
to find out why this is a problem and how to eliminate them.
Warning: Examine the pairs() plot to diagnose sampling problems

3.2 Summarize Results

For convergence, Rhat values should be close to 1.00; values above about 1.01 suggest that the chains have not converged.

summary(fit_sleep3)
Warning: There were 1 divergent transitions after warmup. Increasing
adapt_delta above 0.8 may help. See
http://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
 Family: gaussian 
  Links: mu = identity; sigma = identity 
Formula: Reaction ~ 1 + Days + (1 + Days | Subject) 
   Data: sleepstudy (Number of observations: 180) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Multilevel Hyperparameters:
~Subject (Number of levels: 18) 
                    Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)          26.73      6.85    15.35    42.67 1.00     1795     2510
sd(Days)                6.51      1.52     4.09    10.09 1.00     1468     1809
cor(Intercept,Days)     0.09      0.30    -0.48     0.69 1.00     1052     1442

Regression Coefficients:
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept   251.51      7.63   236.39   266.74 1.00     1693     2026
Days         10.44      1.72     7.05    13.83 1.00     1151     1568

Further Distributional Parameters:
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma    25.93      1.59    23.04    29.27 1.00     3810     2691

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
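
As a quick programmatic check, the Rhat values can be extracted with the rhat() method (a minimal sketch; the 1.01 cutoff follows common guidance):

# Rhat for every parameter; flag any above 1.01
fitRhats <- rhat(fit_sleep3)
fitRhats[fitRhats > 1.01]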

3.3 Model Priors

prior_summary(fit_sleep3)

3.4 Model Parameters

variables(fit_sleep3)
 [1] "b_Intercept"                  "b_Days"                      
 [3] "sd_Subject__Intercept"        "sd_Subject__Days"            
 [5] "cor_Subject__Intercept__Days" "sigma"                       
 [7] "Intercept"                    "r_Subject[308,Intercept]"    
 [9] "r_Subject[309,Intercept]"     "r_Subject[310,Intercept]"    
[11] "r_Subject[330,Intercept]"     "r_Subject[331,Intercept]"    
[13] "r_Subject[332,Intercept]"     "r_Subject[333,Intercept]"    
[15] "r_Subject[334,Intercept]"     "r_Subject[335,Intercept]"    
[17] "r_Subject[337,Intercept]"     "r_Subject[349,Intercept]"    
[19] "r_Subject[350,Intercept]"     "r_Subject[351,Intercept]"    
[21] "r_Subject[352,Intercept]"     "r_Subject[369,Intercept]"    
[23] "r_Subject[370,Intercept]"     "r_Subject[371,Intercept]"    
[25] "r_Subject[372,Intercept]"     "r_Subject[308,Days]"         
[27] "r_Subject[309,Days]"          "r_Subject[310,Days]"         
[29] "r_Subject[330,Days]"          "r_Subject[331,Days]"         
[31] "r_Subject[332,Days]"          "r_Subject[333,Days]"         
[33] "r_Subject[334,Days]"          "r_Subject[335,Days]"         
[35] "r_Subject[337,Days]"          "r_Subject[349,Days]"         
[37] "r_Subject[350,Days]"          "r_Subject[351,Days]"         
[39] "r_Subject[352,Days]"          "r_Subject[369,Days]"         
[41] "r_Subject[370,Days]"          "r_Subject[371,Days]"         
[43] "r_Subject[372,Days]"          "lprior"                      
[45] "lp__"                        

3.5 Model Coefficients

coef(fit_sleep3)
$Subject
, , Intercept

    Estimate Est.Error     Q2.5    Q97.5
308 254.0551  13.01284 227.8884 278.5841
309 211.3558  13.22181 184.4714 236.2753
310 213.0988  13.16381 186.4747 238.4973
330 274.6631  13.47012 249.0079 301.7730
331 272.7292  13.08615 248.4475 299.1193
332 260.2286  12.48478 236.3837 285.6190
333 267.7372  12.46235 244.6130 292.4716
334 244.1334  12.15931 219.7139 268.0252
335 250.5754  13.64444 224.0874 277.3162
337 285.9261  13.36021 259.9145 312.8108
349 226.5263  13.17249 199.0440 251.3439
350 239.2886  13.53475 211.8912 264.5031
351 256.0125  12.48011 232.1178 281.1492
352 272.0472  12.60619 248.0108 297.2377
369 254.5540  12.22773 230.0027 277.9011
370 226.9908  13.58638 199.6275 252.5526
371 251.8782  12.04853 228.2643 275.9140
372 263.7822  12.23356 240.8359 288.3172

, , Days

      Estimate Est.Error       Q2.5     Q97.5
308 19.6074451  2.514940 14.7072608 24.705811
309  1.8177425  2.499286 -2.8285966  6.818598
310  4.8922010  2.481539  0.1064874  9.852368
330  5.7488917  2.562112  0.5850291 10.579395
331  7.5622754  2.462179  2.6457945 12.227480
332 10.1993887  2.374254  5.5000127 14.720567
333 10.3639734  2.383374  5.5851281 15.076151
334 11.5364194  2.359143  6.9781400 16.236849
335 -0.1677765  2.643660 -5.4487757  5.040595
337 19.1484855  2.495961 14.3233721 24.008153
349 11.5813121  2.505751  6.8147401 16.720071
350 16.9635793  2.551501 12.0656223 22.035578
351  7.4819371  2.385631  2.7193498 12.038451
352 14.0305526  2.357995  9.2909766 18.642498
369 11.3662234  2.342286  6.8789298 15.927235
370 15.1160226  2.527380 10.1317020 20.075526
371  9.5284324  2.315761  4.9194399 14.012695
372 11.7299988  2.325239  7.0906223 16.233601

3.6 Plots

3.6.1 Trace Plots

plot(fit_sleep3, ask = FALSE)
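
Individual diagnostic plots can also be requested through mcmc_plot() (a minimal sketch; the variable and regex arguments are those of recent brms versions):

# trace plots for the population-level coefficients only
mcmc_plot(fit_sleep3, type = "trace", variable = "^b_", regex = TRUE)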

3.6.2 Visualize Predictions

3.6.2.1 Sample-Level

plot(conditional_effects(fit_sleep1), points = TRUE)

3.6.2.2 Person-Level

# re_formula = NULL ensures that group-level effects are included
ce2 <- conditional_effects(
  fit_sleep3,
  conditions = conditions,
  re_formula = NULL)

plot(ce2, ncol = 6, points = TRUE)

3.6.3 Check Model Fit

3.6.3.1 Posterior Predictive Check

Evaluate how closely the posterior predictions match the observed data. If they fail to capture the general pattern of the observed values, a different response distribution may be necessary.

pp_check(fit_sleep3)
Using 10 posterior draws for ppc type 'dens_overlay' by default.

pp_check(fit_sleep3, type = "dens_overlay")
Using 10 posterior draws for ppc type 'dens_overlay' by default.

pp_check(fit_sleep3, "error_scatter_avg")
Using all posterior draws for ppc type 'error_scatter_avg' by default.
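
The number of posterior draws used in the plot can be increased via the ndraws argument (a minimal sketch; ndraws replaced the older nsamples argument in recent brms versions):

pp_check(fit_sleep3, type = "dens_overlay", ndraws = 100)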

3.7 Fitted Values

fitted(fit_sleep3)
       Estimate Est.Error     Q2.5    Q97.5
  [1,] 254.0551 13.012843 227.8884 278.5841
  [2,] 273.6626 11.113986 251.3713 294.8805
  [3,] 293.2700  9.505637 274.2675 311.8816
  [4,] 312.8775  8.357237 296.3990 329.4955
  [5,] 332.4849  7.872708 317.0295 348.1170
  [6,] 352.0924  8.171016 336.2176 368.0066
  [7,] 371.6998  9.176127 354.1220 389.4363
  [8,] 391.3072 10.690508 370.5681 412.0605
  [9,] 410.9147 12.530860 386.4152 435.2729
 [10,] 430.5221 14.574217 401.9619 459.1956
 [11,] 211.3558 13.221809 184.4714 236.2753
 [12,] 213.1735 11.362145 190.0534 234.5760
 [13,] 214.9912  9.791493 195.1046 233.6490
 [14,] 216.8090  8.668402 199.1447 233.5706
 [15,] 218.6267  8.179358 202.3385 234.3633
 [16,] 220.4445  8.435371 203.5882 236.3208
 [17,] 222.2622  9.375603 203.4594 239.8009
 [18,] 224.0800 10.823180 202.4925 244.5022
 [19,] 225.8977 12.604499 201.1395 249.5959
 [20,] 227.7154 14.597891 198.6731 255.6506
 [21,] 213.0988 13.163815 186.4747 238.4973
 [22,] 217.9910 11.300065 195.1809 239.7089
 [23,] 222.8832  9.716634 203.1292 241.6534
 [24,] 227.7754  8.570330 210.6063 244.3683
 [25,] 232.6676  8.050106 216.8407 248.2433
 [26,] 237.5598  8.274898 221.2506 253.8004
 [27,] 242.4520  9.190197 224.5019 260.5614
 [28,] 247.3442 10.618925 226.7249 268.2020
 [29,] 252.2364 12.384647 228.3492 276.5317
 [30,] 257.1286 14.363617 229.0311 285.3348
 [31,] 274.6631 13.470116 249.0079 301.7730
 [32,] 280.4119 11.486251 257.7297 303.4735
 [33,] 286.1608  9.775108 267.0888 305.5981
 [34,] 291.9097  8.502961 275.4806 308.3853
 [35,] 297.6586  7.885226 282.4659 313.1907
 [36,] 303.4075  8.073542 287.6877 319.4632
 [37,] 309.1564  9.017550 291.9216 327.5592
 [38,] 314.9053 10.515663 294.6965 335.5844
 [39,] 320.6542 12.368144 296.1826 344.6095
 [40,] 326.4031 14.439239 297.8141 354.4234
 [41,] 272.7292 13.086150 248.4475 299.1193
 [42,] 280.2915 11.230126 259.1573 302.8477
 [43,] 287.8537  9.649291 269.5346 307.5840
 [44,] 295.4160  8.498621 279.5477 312.6518
 [45,] 302.9783  7.966739 287.9478 318.8925
 [46,] 310.5406  8.175327 294.8251 327.0097
 [47,] 318.1028  9.073459 300.4301 335.9344
 [48,] 325.6651 10.485419 305.2906 346.0395
 [49,] 333.2274 12.234582 309.4211 357.8104
 [50,] 340.7897 14.196854 312.9635 369.3380
 [51,] 260.2286 12.484784 236.3837 285.6190
 [52,] 270.4280 10.781694 249.7156 292.4677
 [53,] 280.6274  9.375189 262.3207 299.4393
 [54,] 290.8268  8.415319 274.6495 307.1901
 [55,] 301.0262  8.063199 285.3182 316.7396
 [56,] 311.2256  8.395649 294.8786 327.7737
 [57,] 321.4250  9.339852 303.3050 338.9929
 [58,] 331.6244 10.735591 310.5276 351.6272
 [59,] 341.8237 12.431699 317.2813 365.7252
 [60,] 352.0231 14.321856 323.4303 379.2967
 [61,] 267.7372 12.462346 244.6130 292.4716
 [62,] 278.1012 10.721121 258.0140 299.3701
 [63,] 288.4652  9.270154 270.7844 306.5566
 [64,] 298.8292  8.263777 282.5528 315.0321
 [65,] 309.1931  7.874339 293.7298 324.5191
 [66,] 319.5571  8.190322 303.6234 335.2091
 [67,] 329.9211  9.138844 312.1717 347.2774
 [68,] 340.2850 10.550664 319.9648 360.5059
 [69,] 350.6490 12.266846 327.0562 374.2666
 [70,] 361.0130 14.177286 333.8326 388.6119
 [71,] 244.1334 12.159314 219.7139 268.0252
 [72,] 255.6699 10.462954 234.6865 276.3664
 [73,] 267.2063  9.068020 249.5953 285.2923
 [74,] 278.7427  8.131156 262.8088 294.9306
 [75,] 290.2791  7.818792 274.8134 305.8992
 [76,] 301.8155  8.202586 285.4333 318.3049
 [77,] 313.3520  9.195784 295.2755 331.5079
 [78,] 324.8884 10.628903 304.0289 345.6192
 [79,] 336.4248 12.349730 312.0158 360.4332
 [80,] 347.9612 14.254444 320.2001 376.1371
 [81,] 250.5754 13.644441 224.0874 277.3162
 [82,] 250.4077 11.582134 228.0384 273.2162
 [83,] 250.2399  9.802999 231.0375 269.9630
 [84,] 250.0721  8.487026 233.5038 267.0029
 [85,] 249.9043  7.870089 234.8263 265.7494
 [86,] 249.7366  8.113252 234.3483 265.7847
 [87,] 249.5688  9.148186 231.4844 267.5545
 [88,] 249.4010 10.748564 229.1778 270.7564
 [89,] 249.2332 12.702433 225.1245 274.8328
 [90,] 249.0655 14.871109 220.7681 278.9884
 [91,] 285.9261 13.360213 259.9145 312.8108
 [92,] 305.0746 11.449491 282.7247 327.8004
 [93,] 324.2231  9.805408 305.0686 343.3848
 [94,] 343.3716  8.582590 326.4366 360.1231
 [95,] 362.5201  7.977175 346.9321 378.0883
 [96,] 381.6686  8.128310 365.4243 397.6564
 [97,] 400.8170  8.997953 382.9835 418.0397
 [98,] 419.9655 10.407523 399.1847 440.0881
 [99,] 439.1140 12.170849 415.2072 462.7225
[100,] 458.2625 14.156349 430.2394 485.9227
[101,] 226.5263 13.172492 199.0440 251.3439
[102,] 238.1076 11.293795 214.7298 258.8494
[103,] 249.6889  9.702712 229.9850 267.7737
[104,] 261.2703  8.561135 243.7951 277.5529
[105,] 272.8516  8.062320 256.4860 288.3810
[106,] 284.4329  8.322654 267.7087 300.5798
[107,] 296.0142  9.278453 277.7180 314.2151
[108,] 307.5955 10.745715 286.4011 328.6732
[109,] 319.1768 12.546261 294.5525 344.0958
[110,] 330.7581 14.556940 302.5549 359.6565
[111,] 239.2886 13.534750 211.8912 264.5031
[112,] 256.2522 11.590786 232.8323 277.9377
[113,] 273.2158  9.925901 252.9023 292.3892
[114,] 290.1794  8.701781 272.5454 307.2396
[115,] 307.1430  8.120270 290.7261 322.9692
[116,] 324.1065  8.317265 307.6538 340.3518
[117,] 341.0701  9.243123 322.2983 359.0851
[118,] 358.0337 10.710464 336.2023 379.2235
[119,] 374.9973 12.530486 349.5683 399.4199
[120,] 391.9608 14.571631 362.6754 420.3377
[121,] 256.0125 12.480114 232.1178 281.1492
[122,] 263.4945 10.706893 242.7513 284.8154
[123,] 270.9764  9.214356 252.9802 289.3028
[124,] 278.4584  8.158040 262.6011 294.8159
[125,] 285.9403  7.719155 270.9712 301.2460
[126,] 293.4222  7.999972 278.1956 309.1054
[127,] 300.9042  8.932873 283.6480 318.6625
[128,] 308.3861 10.342890 288.2687 329.0312
[129,] 315.8680 12.063871 292.1932 340.2246
[130,] 323.3500 13.981454 296.1423 351.4170
[131,] 272.0472 12.606186 248.0108 297.2377
[132,] 286.0777 10.909099 265.2308 307.6539
[133,] 300.1083  9.498486 281.8385 318.8017
[134,] 314.1388  8.517883 297.6714 330.8056
[135,] 328.1694  8.124512 312.2333 344.1914
[136,] 342.1999  8.401271 325.7069 358.9466
[137,] 356.2305  9.288450 337.9854 375.1359
[138,] 370.2610 10.634356 349.0755 391.7013
[139,] 384.2916 12.289184 360.2229 409.3132
[140,] 398.3221 14.144923 370.6524 426.8190
[141,] 254.5540 12.227726 230.0027 277.9011
[142,] 265.9202 10.525750 244.8474 285.9942
[143,] 277.2865  9.112526 259.4349 294.5405
[144,] 288.6527  8.139870 272.5957 304.3956
[145,] 300.0189  7.774923 284.8119 315.3846
[146,] 311.3851  8.100247 295.4707 327.6310
[147,] 322.7514  9.041635 305.0207 340.5801
[148,] 334.1176 10.433645 313.6148 354.9469
[149,] 345.4838 12.122019 321.5245 369.4507
[150,] 356.8500 13.999941 329.5620 383.9702
[151,] 226.9908 13.586378 199.6275 252.5526
[152,] 242.1068 11.630977 218.9460 264.2417
[153,] 257.2228  9.937046 237.3211 276.4037
[154,] 272.3389  8.659413 255.0725 289.2919
[155,] 287.4549  8.000080 271.4939 302.7040
[156,] 302.5709  8.111254 286.6262 318.5425
[157,] 317.6869  8.964312 300.0199 335.3376
[158,] 332.8030 10.377893 312.7188 352.9962
[159,] 347.9190 12.158031 324.3525 371.7308
[160,] 363.0350 14.167218 335.4789 390.7439
[161,] 251.8782 12.048535 228.2643 275.9140
[162,] 261.4066 10.406368 241.1415 282.2043
[163,] 270.9350  9.063295 253.2386 288.6086
[164,] 280.4635  8.168208 264.1425 296.7167
[165,] 289.9919  7.875369 273.5774 305.4187
[166,] 299.5203  8.249166 282.4680 315.9655
[167,] 309.0488  9.208776 290.3334 327.1522
[168,] 318.5772 10.596219 297.0160 339.6502
[169,] 328.1056 12.267178 303.5009 352.0193
[170,] 337.6341 14.121365 309.2548 365.3477
[171,] 263.7822 12.233558 240.8359 288.3172
[172,] 275.5122 10.551034 255.3891 296.7475
[173,] 287.2422  9.154353 269.8694 305.1486
[174,] 298.9722  8.191062 283.0682 315.0649
[175,] 310.7022  7.822934 295.4437 326.0126
[176,] 322.4322  8.131208 306.5335 338.1708
[177,] 334.1622  9.047004 316.7115 351.6283
[178,] 345.8922 10.411218 325.4625 366.0703
[179,] 357.6222 12.072785 333.7623 381.0907
[180,] 369.3522 13.925669 341.6751 396.4863
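
fitted() returns posterior summaries of the expected values; the underlying draw-level matrix is available through posterior_epred() (a minimal sketch):

# rows = posterior draws, columns = observations (here 4000 x 180)
epredDraws <- posterior_epred(fit_sleep3)
dim(epredDraws)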

3.8 Residuals

residuals(fit_sleep3)
           Estimate Est.Error        Q2.5      Q97.5
  [1,]   -4.4224973  29.05132  -61.514547  52.705542
  [2,]  -15.4736366  27.92169  -72.095186  37.398218
  [3,]  -42.9162691  27.68657  -97.083565   9.702928
  [4,]    8.8834266  27.21499  -45.343313  62.199192
  [5,]   23.9425908  27.15247  -28.495756  78.481704
  [6,]   62.7244555  26.95167    9.733324 115.325435
  [7,]   10.7554932  28.05958  -42.532600  66.720442
  [8,] -101.1151063  27.97845 -156.484411 -45.973912
  [9,]   18.9897600  28.89240  -38.614717  73.873065
 [10,]   35.9082883  29.91054  -22.575882  94.624309
 [11,]   11.0225611  28.74104  -46.420207  66.624807
 [12,]   -7.3797476  28.56706  -62.246063  48.052364
 [13,]  -12.2246854  28.18848  -67.653086  42.611356
 [14,]  -11.9908492  27.70427  -66.100178  41.372843
 [15,]  -10.3291837  27.26929  -63.268299  42.327988
 [16,]   -4.3455725  27.16014  -58.362653  49.285769
 [17,]   -8.2326294  27.71198  -63.703608  45.489897
 [18,]   -7.1728096  27.20679  -61.133828  45.842315
 [19,]   -1.7601206  28.45366  -57.135898  54.942984
 [20,]    9.5109432  29.37012  -48.331215  68.263988
 [21,]  -13.9464789  28.30825  -68.408627  40.539306
 [22,]  -23.4000853  28.26923  -78.409474  31.414449
 [23,]   11.8434115  27.77341  -43.109649  66.672652
 [24,]    5.1901319  27.72286  -50.705897  58.324226
 [25,]   -3.1754491  27.46668  -57.306852  50.287383
 [26,]  -17.2664230  27.36969  -71.321142  37.646758
 [27,]   -7.2562891  27.31312  -60.727141  46.281060
 [28,]    7.8673172  28.29952  -46.902381  63.329112
 [29,]    8.5776961  28.73116  -47.111406  65.398877
 [30,]   -9.3209684  29.52641  -66.533382  48.488330
 [31,]   47.0244525  29.23652  -10.405211 105.589427
 [32,]   20.1222729  28.26420  -36.101339  76.132922
 [33,]   -2.0290762  27.42725  -55.246265  51.125473
 [34,]   -7.0554183  27.09343  -60.212489  46.295648
 [35,]  -12.3199854  27.11615  -65.024613  40.119226
 [36,]   -5.5198074  27.19894  -59.729482  46.716573
 [37,]  -28.5226670  27.51163  -83.230393  24.981537
 [38,]    3.5190394  27.97805  -51.190010  57.464019
 [39,]  -15.2033031  28.68706  -72.775109  41.018195
 [40,]   27.9230741  29.44606  -28.141041  85.402156
 [41,]   15.2299968  28.75887  -40.644286  70.974598
 [42,]    4.6113581  28.37165  -50.744394  60.532406
 [43,]   14.0813635  27.76799  -41.230392  68.339080
 [44,]   24.4818679  27.57893  -30.158464  79.838025
 [45,]   13.3229763  27.51980  -39.848157  68.449778
 [46,]  -17.5644179  27.97273  -72.323556  36.600223
 [47,]  -28.3742604  27.27381  -82.706990  25.514315
 [48,]    9.6540013  28.39785  -46.324605  65.980262
 [49,]  -39.5336267  29.01703  -95.617275  16.997098
 [50,]   30.8329349  29.72642  -26.414939  88.119517
 [51,]  -25.8200824  28.81030  -83.472825  29.082955
 [52,]  -27.8220684  28.52563  -84.403339  27.684997
 [53,]   -7.9239236  27.28381  -61.908002  44.602168
 [54,]   19.3964410  26.89147  -34.838548  71.337697
 [55,]   16.4259475  27.76871  -37.786394  69.072358
 [56,]   -1.1489419  27.51378  -56.016911  52.937763
 [57,]  132.9059031  27.85545   78.647990 189.269005
 [58,]   15.6473233  27.36777  -37.737565  69.546918
 [59,]  -10.8459894  29.03989  -67.373623  45.229917
 [60,]  -98.2124258  30.00944 -156.327190 -40.213308
 [61,]   16.1484427  28.99773  -41.110975  73.486273
 [62,]   11.1513382  27.69033  -43.361425  64.423765
 [63,]  -11.3293468  27.69787  -65.576612  40.806421
 [64,]    0.6079586  27.00804  -51.822298  54.164056
 [65,]  -11.4548895  27.12244  -65.552942  40.464624
 [66,]   19.1215056  27.38460  -34.502216  71.901547
 [67,]    2.3606765  27.52206  -52.115775  56.202374
 [68,]    7.7735024  28.06785  -47.417939  62.873974
 [69,]  -17.7064094  28.52003  -73.478013  39.200478
 [70,]    0.9157391  29.26041  -57.528680  56.646066
 [71,]   21.1499096  28.78786  -33.665561  78.092632
 [72,]   20.8302258  28.19875  -35.236500  77.211659
 [73,]  -23.5092619  27.26958  -78.116151  28.736202
 [74,]  -24.1416348  27.22089  -77.633630  29.771093
 [75,]  -10.7119086  26.92398  -64.345154  41.058282
 [76,]  -18.2061798  27.29628  -70.952469  35.498834
 [77,]   -6.8721977  28.09088  -63.538266  45.570025
 [78,]    6.3662382  27.79417  -47.520309  61.537857
 [79,]   -0.5846388  28.96104  -59.291993  58.331365
 [80,]   29.5153531  30.34956  -29.473041  89.642117
 [81,]   -8.0667615  29.47551  -65.631283  48.113588
 [82,]   23.9718571  27.87158  -29.633695  78.349492
 [83,]    5.0283137  27.99492  -49.749730  60.124476
 [84,]   20.4306287  27.04206  -31.611977  72.917128
 [85,]    1.7044413  26.84986  -50.239452  53.877277
 [86,]    4.9918366  27.66997  -48.538047  59.223417
 [87,]   -3.7312939  27.39027  -56.501893  51.148183
 [88,]  -14.4626498  28.46667  -68.407088  42.337640
 [89,]  -12.7114815  28.96266  -68.559401  42.846398
 [90,]  -12.2085556  29.89559  -72.820170  45.107474
 [91,]   26.2557671  29.08373  -32.384376  83.745685
 [92,]    8.1446401  28.56522  -46.940341  65.119042
 [93,]  -33.4567191  27.27019  -87.343404  21.358675
 [94,]    2.3075723  27.29365  -52.656825  56.423872
 [95,]    3.1534222  27.07818  -49.255505  57.616071
 [96,]   10.2096274  26.91086  -43.439267  62.370186
 [97,]    3.2500799  27.43897  -50.995155  57.586716
 [98,]   -3.2835909  28.38747  -58.444463  52.473674
 [99,]   16.8355830  29.04559  -41.215910  73.438558
[100,]    0.8586329  29.83015  -56.271659  58.558658
[101,]    9.7806587  29.17624  -50.471335  67.242640
[102,]   -8.1419501  28.02675  -63.945373  46.004765
[103,]  -10.9602825  27.52133  -64.564357  43.489859
[104,]   -6.4155680  27.37249  -58.203165  47.315778
[105,]  -21.9053681  27.54770  -75.515517  32.759783
[106,]  -14.4387340  26.94169  -65.735349  37.180240
[107,]  -14.5551720  27.77826  -68.505002  40.695918
[108,]    0.5827845  28.54991  -55.100657  55.917138
[109,]   17.1256843  29.01460  -39.249365  74.335068
[110,]   20.4829268  30.75429  -40.866248  80.219745
[111,]   17.3934120  29.23012  -41.481250  73.666754
[112,]  -12.3479480  28.82467  -69.793875  43.014611
[113,]  -16.9119575  28.30481  -72.114805  36.953427
[114,]  -34.9540188  27.69187  -90.475130  18.811231
[115,]  -38.1357151  27.01373  -91.710893  14.514722
[116,]    5.6050293  27.04192  -46.747282  59.694547
[117,]   38.3955674  27.20775  -14.353629  93.092845
[118,]    5.4666072  27.95529  -49.304880  60.993825
[119,]   19.5688029  28.05303  -34.391684  75.572363
[120,]   -3.3214517  29.42409  -63.132141  54.647657
[121,]   -6.0156628  28.52448  -61.944833  50.534813
[122,]   36.1183094  28.21760  -20.044803  90.787119
[123,]   -0.9189712  27.36097  -53.476247  54.551833
[124,]    2.4735113  26.71703  -49.197802  55.773795
[125,]  -13.9989155  26.92775  -68.392938  37.566020
[126,]   10.9832313  27.42443  -43.299203  64.327414
[127,]  -13.4090325  26.96411  -66.418541  37.681370
[128,]  -42.0228474  27.89534  -96.561361  12.640494
[129,]    5.8991570  28.33898  -49.950955  61.162970
[130,]   24.0690644  29.03305  -33.000143  81.139197
[131,]  -50.3560872  28.76297 -106.746824   5.272884
[132,]   12.0256779  27.93154  -42.804297  66.286807
[133,]   26.5531327  27.79477  -28.473150  80.974803
[134,]   32.3874538  27.65695  -21.668105  86.643479
[135,]   19.9825551  27.43489  -33.642012  74.378475
[136,]   10.4294631  27.63722  -43.604363  64.507200
[137,]   -2.0603829  27.80297  -56.787510  51.404356
[138,]   -9.8956112  28.24449  -65.108614  46.650924
[139,]   -8.8774495  29.01683  -66.411664  47.785444
[140,]   -9.2247508  29.94604  -67.564865  49.909691
[141,]   17.2335823  28.95718  -41.382357  72.994673
[142,]    3.2698396  28.38840  -51.102937  58.473698
[143,]  -19.4139182  27.60119  -73.531895  34.593878
[144,]   -9.7448583  27.29020  -63.862960  44.669308
[145,]   15.1184861  26.73103  -37.097029  68.604181
[146,]    5.6803406  27.52211  -49.723191  60.778025
[147,]  -25.6386114  28.07245  -80.267923  29.956082
[148,]   14.3350346  27.65556  -39.223088  68.812970
[149,]   -4.8049386  28.49878  -60.341988  51.488042
[150,]   10.1919508  29.50187  -45.700684  67.654387
[151,]   -2.4137478  29.23207  -60.117082  54.777017
[152,]   -7.7812372  28.56330  -64.638779  48.431967
[153,]  -17.8307953  28.12550  -73.479367  36.618382
[154,]  -31.1521573  27.78498  -86.065966  21.797035
[155,]  -19.3348544  27.20100  -71.524848  34.293861
[156,]   41.5229322  27.25488  -11.491111  94.771149
[157,]  -36.6421307  27.22311  -90.018073  15.428966
[158,]   14.7062911  27.49704  -42.175437  68.020995
[159,]   17.5104018  28.75981  -37.547434  74.678291
[160,]    9.1486622  29.84067  -49.087875  67.528861
[161,]   18.0958579  28.81678  -40.286312  73.650913
[162,]   11.4567049  27.81100  -43.717966  65.038520
[163,]    6.9307923  28.14417  -48.067764  62.327249
[164,]    0.9068675  27.32943  -52.567247  54.445318
[165,]  -11.1659817  27.17695  -64.808855  41.415541
[166,]  -14.7245733  27.26764  -68.360963  39.408735
[167,]  -49.7686380  27.19497 -101.579770   4.517006
[168,]  -13.4897199  28.41483  -68.484911  41.568373
[169,]   22.2769757  28.45938  -33.652280  78.878061
[170,]   31.8653072  29.29141  -26.160255  88.644878
[171,]    6.2333179  28.43920  -49.953639  62.094379
[172,]   -2.2294034  27.78600  -57.924534  50.987655
[173,]    9.8057348  27.41421  -43.773126  63.395967
[174,]   11.7504155  26.68218  -39.944051  64.938097
[175,]  -23.6902001  27.18840  -77.165808  30.708407
[176,]    7.6058934  27.27074  -45.013038  61.278219
[177,]    0.7220444  27.00758  -50.596945  54.423699
[178,]   -2.3991487  27.87346  -56.774493  50.909589
[179,]   12.0776569  29.13258  -45.792168  69.479032
[180,]   -5.2599179  29.78878  -62.602602  53.182059

3.9 Compare Models

elpd values: higher is better.
looic values: lower is better.

elpd_diff values whose magnitude is greater than about 2 times the corresponding se_diff value indicate a significantly better-fitting model; a programmatic check of this rule is sketched after the output below.

loo(fit_sleep1, fit_sleep2, fit_sleep3)
Warning: Found 3 observations with a pareto_k > 0.7 in model 'fit_sleep3'. We
recommend to set 'moment_match = TRUE' in order to perform moment matching for
problematic observations.
Output of model 'fit_sleep1':

Computed from 4000 by 180 log-likelihood matrix.

         Estimate   SE
elpd_loo   -953.1 10.5
p_loo         3.0  0.5
looic      1906.3 21.0
------
MCSE of elpd_loo is 0.0.
MCSE and ESS estimates assume MCMC draws (r_eff in [0.9, 1.2]).

All Pareto k estimates are good (k < 0.7).
See help('pareto-k-diagnostic') for details.

Output of model 'fit_sleep2':

Computed from 4000 by 180 log-likelihood matrix.

         Estimate   SE
elpd_loo   -884.7 14.3
p_loo        19.2  3.3
looic      1769.4 28.7
------
MCSE of elpd_loo is 0.1.
MCSE and ESS estimates assume MCMC draws (r_eff in [0.4, 1.8]).

All Pareto k estimates are good (k < 0.7).
See help('pareto-k-diagnostic') for details.

Output of model 'fit_sleep3':

Computed from 4000 by 180 log-likelihood matrix.

         Estimate   SE
elpd_loo   -861.4 22.4
p_loo        34.3  8.5
looic      1722.8 44.7
------
MCSE of elpd_loo is NA.
MCSE and ESS estimates assume MCMC draws (r_eff in [0.4, 1.3]).

Pareto k diagnostic values:
                         Count Pct.    Min. ESS
(-Inf, 0.7]   (good)     177   98.3%   523     
   (0.7, 1]   (bad)        2    1.1%   <NA>    
   (1, Inf)   (very bad)   1    0.6%   <NA>    
See help('pareto-k-diagnostic') for details.

Model comparisons:
           elpd_diff se_diff
fit_sleep3   0.0       0.0  
fit_sleep2 -23.3      11.6  
fit_sleep1 -91.7      21.0  
print(loo(fit_sleep1, fit_sleep2, fit_sleep3), simplify = FALSE)
Warning: Found 3 observations with a pareto_k > 0.7 in model 'fit_sleep3'. We
recommend to set 'moment_match = TRUE' in order to perform moment matching for
problematic observations.
Output of model 'fit_sleep1':

Computed from 4000 by 180 log-likelihood matrix.

         Estimate   SE
elpd_loo   -953.1 10.5
p_loo         3.0  0.5
looic      1906.3 21.0
------
MCSE of elpd_loo is 0.0.
MCSE and ESS estimates assume MCMC draws (r_eff in [0.9, 1.2]).

All Pareto k estimates are good (k < 0.7).
See help('pareto-k-diagnostic') for details.

Output of model 'fit_sleep2':

Computed from 4000 by 180 log-likelihood matrix.

         Estimate   SE
elpd_loo   -884.7 14.3
p_loo        19.2  3.3
looic      1769.4 28.7
------
MCSE of elpd_loo is 0.1.
MCSE and ESS estimates assume MCMC draws (r_eff in [0.4, 1.8]).

All Pareto k estimates are good (k < 0.7).
See help('pareto-k-diagnostic') for details.

Output of model 'fit_sleep3':

Computed from 4000 by 180 log-likelihood matrix.

         Estimate   SE
elpd_loo   -861.4 22.4
p_loo        34.3  8.5
looic      1722.8 44.7
------
MCSE of elpd_loo is NA.
MCSE and ESS estimates assume MCMC draws (r_eff in [0.4, 1.3]).

Pareto k diagnostic values:
                         Count Pct.    Min. ESS
(-Inf, 0.7]   (good)     177   98.3%   523     
   (0.7, 1]   (bad)        2    1.1%   <NA>    
   (1, Inf)   (very bad)   1    0.6%   <NA>    
See help('pareto-k-diagnostic') for details.

Model comparisons:
           elpd_diff se_diff elpd_loo se_elpd_loo p_loo  se_p_loo looic 
fit_sleep3    0.0       0.0  -861.4     22.4        34.3    8.5   1722.8
fit_sleep2  -23.3      11.6  -884.7     14.3        19.2    3.3   1769.4
fit_sleep1  -91.7      21.0  -953.1     10.5         3.0    0.5   1906.3
           se_looic
fit_sleep3   44.7  
fit_sleep2   28.7  
fit_sleep1   21.0  
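
Applying the rule of thumb from above programmatically (a minimal sketch using loo_compare() on the models' loo objects):

comparison <- loo_compare(loo(fit_sleep1), loo(fit_sleep2), loo(fit_sleep3))
# TRUE where a model fits credibly worse than the best model
abs(comparison[, "elpd_diff"]) > 2 * comparison[, "se_diff"]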

3.9.1 Compute Model Weights

model_weights(fit_sleep1, fit_sleep2, fit_sleep3, weights = "loo")
  fit_sleep1   fit_sleep2   fit_sleep3 
1.447170e-40 7.419859e-11 1.000000e+00 
round(model_weights(fit_sleep1, fit_sleep2, fit_sleep3, weights = "loo"))
fit_sleep1 fit_sleep2 fit_sleep3 
         0          0          1 

4 Multilevel Mediation

The syntax below estimates random intercepts (allowing each participant to have their own intercept) to account for the nesting of repeated observations within participants. The |i| term gives the random intercepts of the two formulas a shared ID so that their correlation is estimated (the cor(M_Intercept,Y_Intercept) term in the summary below).

bayesianMediationSyntax <-
  bf(M ~ X + (1 |i| id)) +
  bf(Y ~ X + M + (1 |i| id)) +
  set_rescor(FALSE) # don't add a residual correlation between M and Y

bayesianMediationModel <- brm(
  bayesianMediationSyntax,
  data  = mydata,
  seed  = 52242
)
Warning: Rows containing NAs were excluded from the model.
Compiling Stan program...
Start sampling

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.000171 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.71 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 6.24 seconds (Warm-up)
Chain 1:                4.431 seconds (Sampling)
Chain 1:                10.671 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 0.000141 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 1.41 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 6.035 seconds (Warm-up)
Chain 2:                4.434 seconds (Sampling)
Chain 2:                10.469 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
Chain 3: 
Chain 3: Gradient evaluation took 0.000145 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 1.45 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 6.425 seconds (Warm-up)
Chain 3:                4.434 seconds (Sampling)
Chain 3:                10.859 seconds (Total)
Chain 3: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
Chain 4: 
Chain 4: Gradient evaluation took 0.000143 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 1.43 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 5.804 seconds (Warm-up)
Chain 4:                3.913 seconds (Sampling)
Chain 4:                9.717 seconds (Total)
Chain 4: 
summary(bayesianMediationModel)
 Family: MV(gaussian, gaussian) 
  Links: mu = identity; sigma = identity
         mu = identity; sigma = identity 
Formula: M ~ X + (1 | i | id) 
         Y ~ X + M + (1 | i | id) 
   Data: mydata (Number of observations: 970) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Multilevel Hyperparameters:
~id (Number of levels: 100) 
                             Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
sd(M_Intercept)                  0.07      0.05     0.00     0.17 1.00     1746
sd(Y_Intercept)                  0.11      0.06     0.01     0.23 1.01     1144
cor(M_Intercept,Y_Intercept)     0.05      0.57    -0.94     0.95 1.00     1457
                             Tail_ESS
sd(M_Intercept)                  1730
sd(Y_Intercept)                  2027
cor(M_Intercept,Y_Intercept)     1951

Regression Coefficients:
            Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
M_Intercept     0.00      0.03    -0.06     0.07 1.00     7652     2690
Y_Intercept     0.06      0.03    -0.01     0.13 1.00     6857     2994
M_X             0.51      0.03     0.45     0.57 1.00     8868     2660
Y_X             0.03      0.04    -0.04     0.10 1.00     7229     3120
Y_M             0.68      0.03     0.62     0.75 1.00     7216     3446

Further Distributional Parameters:
        Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma_M     1.01      0.02     0.97     1.06 1.00     7747     2629
sigma_Y     1.00      0.02     0.96     1.05 1.00     6806     2912

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
hypothesis(
  bayesianMediationModel,
  "b_M_X * b_Y_M = 0", # indirect effect = a path * b path
  class = NULL,
  seed  =  52242
)
Hypothesis Tests for class :
         Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
1 (b_M_X*b_Y_M) = 0     0.35      0.03     0.29      0.4         NA        NA
  Star
1    *
---
'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
'*': For one-sided hypotheses, the posterior probability exceeds 95%;
for two-sided hypotheses, the value tested against lies outside the 95%-CI.
Posterior probabilities of point hypotheses assume equal prior probabilities.
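
Equivalently, the indirect effect can be computed by hand from the posterior draws (a minimal sketch; b_M_X and b_Y_M are the a and b path coefficients named in the hypothesis above):

draws <- as_draws_df(bayesianMediationModel)
indirectEffect <- draws$b_M_X * draws$b_Y_M
quantile(indirectEffect, probs = c(.025, .5, .975)) # posterior median and 95% CI
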
mediation(bayesianMediationModel)

5 Setting Priors

5.1 Parameters for Which to Set Priors

get_prior(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy)

5.2 Define Priors

bprior <- c(
  set_prior("normal(5, 5)", coef = "Days"), # prior on the fixed effect of Days
  set_prior("cauchy(0, 10)", class = "sd"), # prior on the group-level SDs
  set_prior("lkj(2)", class = "cor")) # prior on the correlations among group-level effects

bprior

5.3 Fit the Model

Fit the model with these priors, and also sample from the priors themselves (sample_prior = TRUE stores prior draws alongside the posterior):

fit_sleep4 <- brm(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy,
  prior = bprior, 
  sample_prior = TRUE,
  seed  = 52242
)
Compiling Stan program...
Start sampling
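
Because sample_prior = TRUE, draws from the priors are stored alongside the posterior and can be inspected with the prior_draws() method (a minimal sketch):

# one column per sampled prior
head(prior_draws(fit_sleep4))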

6 Evaluate a Hypothesis

# Evid.Ratio is the ratio of P(Days < 7) / P(Days >= 7)
(hyp1 <- hypothesis(fit_sleep4, "Days < 7"))
Hypothesis Tests for class b:
      Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob Star
1 (Days)-(7) < 0     2.88      1.55     0.26     5.36       0.04      0.04     
---
'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
'*': For one-sided hypotheses, the posterior probability exceeds 95%;
for two-sided hypotheses, the value tested against lies outside the 95%-CI.
Posterior probabilities of point hypotheses assume equal prior probabilities.
plot(hyp1)
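
The Post.Prob value can be reproduced directly from the posterior draws (a minimal sketch):

draws <- as_draws_df(fit_sleep4)
mean(draws$b_Days < 7) # proportion of posterior draws consistent with the hypothesis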

# Evid.Ratio is the Bayes factor of the posterior vs. the prior
# density at Days = 10 (the Savage-Dickey density ratio)
(hyp2 <- hypothesis(fit_sleep4, "Days = 10"))
Hypothesis Tests for class b:
       Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
1 (Days)-(10) = 0    -0.12      1.55    -3.34     2.83       5.35      0.84
  Star
1     
---
'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
'*': For one-sided hypotheses, the posterior probability exceeds 95%;
for two-sided hypotheses, the value tested against lies outside the 95%-CI.
Posterior probabilities of point hypotheses assume equal prior probabilities.
plot(hyp2)
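
The Savage-Dickey ratio can also be approximated by hand as the posterior density divided by the prior density at the tested value (a minimal sketch; it assumes the prior draws are stored as prior_b_Days, which can be verified with variables(fit_sleep4)):

draws <- as_draws_df(fit_sleep4)
postDensity <- density(draws$b_Days)
priorDensity <- density(draws$prior_b_Days)
# ratio of posterior to prior density at Days = 10
approx(postDensity$x, postDensity$y, xout = 10)$y /
  approx(priorDensity$x, priorDensity$y, xout = 10)$y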

7 Parallel Processing

7.1 Between-Chain Parallelization

fit_sleep4 <- brm(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy,
  prior = bprior, 
  sample_prior = TRUE,
  cores = 4,
  seed  = 52242
)
Compiling Stan program...
Start sampling
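
Instead of passing cores to every call, a session-wide default can be set; brm() reads the mc.cores option when cores is not supplied (a minimal sketch):

# use 4 parallel processes (one per chain) for all subsequent fits
options(mc.cores = 4)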

8 Missing Data Handling

8.1 Multiply Imputed Datasets from mice

?brm_multiple
imp <- mice::mice(
  mydata,
  m = 5,
  print = FALSE)

fit_imp <- brm_multiple(
  bayesianMediationSyntax,
  data = imp,
  chains = 2)
Compiling the C++ model

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.00024 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 2.4 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 6.528 seconds (Warm-up)
Chain 1:                2.308 seconds (Sampling)
Chain 1:                8.836 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 0.000147 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 1.47 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 6.372 seconds (Warm-up)
Chain 2:                4.357 seconds (Sampling)
Chain 2:                10.729 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.000159 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.59 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 6.169 seconds (Warm-up)
Chain 1:                4.56 seconds (Sampling)
Chain 1:                10.729 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 0.00015 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 1.5 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 6.634 seconds (Warm-up)
Chain 2:                4.561 seconds (Sampling)
Chain 2:                11.195 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.000161 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.61 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 6.544 seconds (Warm-up)
Chain 1:                4.338 seconds (Sampling)
Chain 1:                10.882 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 0.000148 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 1.48 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 6.728 seconds (Warm-up)
Chain 2:                4.598 seconds (Sampling)
Chain 2:                11.326 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.000161 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.61 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 6.802 seconds (Warm-up)
Chain 1:                2.307 seconds (Sampling)
Chain 1:                9.109 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 0.000147 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 1.47 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 6.294 seconds (Warm-up)
Chain 2:                4.612 seconds (Sampling)
Chain 2:                10.906 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.00016 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.6 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 6.052 seconds (Warm-up)
Chain 1:                3.484 seconds (Sampling)
Chain 1:                9.536 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 0.000148 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 1.48 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 6.397 seconds (Warm-up)
Chain 2:                4.591 seconds (Sampling)
Chain 2:                10.988 seconds (Total)
Chain 2: 
Fitting imputed model 1
Start sampling
Fitting imputed model 2
Start sampling
Fitting imputed model 3
Start sampling
Fitting imputed model 4
Start sampling
Fitting imputed model 5
Start sampling
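
The five per-dataset fits are pooled by combining their posterior draws. Rhat values reported by summary() for a brm_multiple() fit can look inflated because chains fit to different imputed datasets are combined; the per-dataset diagnostics are stored in the rhats element (a minimal sketch, following the brms missing-data vignette):

summary(fit_imp)
# convergence diagnostics within each imputed dataset
round(fit_imp$rhats, 2)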

8.2 Imputation on the Fly During Model Fitting

https://paul-buerkner.github.io/brms/articles/brms_missings.html (archived at https://perma.cc/4Y9L-USQR)

?mi
bayesianRegressionImputationSyntax <-
  bf(X | mi() ~ (1 |i| id)) +
  bf(M | mi() ~ mi(X) + (1 |i| id)) +
  bf(Y | mi() ~ mi(X) + mi(M) + (1 |i| id)) +
  set_rescor(FALSE) # don't add a residual correlation between X, M, and Y

bayesianRegressionModel <- brm(
  bayesianRegressionImputationSyntax,
  data = mydata,
  seed = 52242
)
Compiling Stan program...
Start sampling

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 0.00057 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 5.7 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 22.81 seconds (Warm-up)
Chain 1:                15.187 seconds (Sampling)
Chain 1:                37.997 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 0.000487 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 4.87 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 2: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 2: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 2: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 2: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 2: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 2: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 2: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 2: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 2: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 2: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 2: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 26.974 seconds (Warm-up)
Chain 2:                15.165 seconds (Sampling)
Chain 2:                42.139 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
Chain 3: 
Chain 3: Gradient evaluation took 0.00048 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 4.8 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 3: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 3: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 3: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 3: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 3: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 3: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 3: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 3: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 3: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 3: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 3: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 3: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 23.081 seconds (Warm-up)
Chain 3:                15.21 seconds (Sampling)
Chain 3:                38.291 seconds (Total)
Chain 3: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
Chain 4: 
Chain 4: Gradient evaluation took 0.000476 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 4.76 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:    1 / 2000 [  0%]  (Warmup)
Chain 4: Iteration:  200 / 2000 [ 10%]  (Warmup)
Chain 4: Iteration:  400 / 2000 [ 20%]  (Warmup)
Chain 4: Iteration:  600 / 2000 [ 30%]  (Warmup)
Chain 4: Iteration:  800 / 2000 [ 40%]  (Warmup)
Chain 4: Iteration: 1000 / 2000 [ 50%]  (Warmup)
Chain 4: Iteration: 1001 / 2000 [ 50%]  (Sampling)
Chain 4: Iteration: 1200 / 2000 [ 60%]  (Sampling)
Chain 4: Iteration: 1400 / 2000 [ 70%]  (Sampling)
Chain 4: Iteration: 1600 / 2000 [ 80%]  (Sampling)
Chain 4: Iteration: 1800 / 2000 [ 90%]  (Sampling)
Chain 4: Iteration: 2000 / 2000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 26.076 seconds (Warm-up)
Chain 4:                15.191 seconds (Sampling)
Chain 4:                41.267 seconds (Total)
Chain 4: 
summary(bayesianRegressionModel)
 Family: MV(gaussian, gaussian, gaussian) 
  Links: mu = identity; sigma = identity
         mu = identity; sigma = identity
         mu = identity; sigma = identity 
Formula: X | mi() ~ (1 | i | id) 
         M | mi() ~ mi(X) + (1 | i | id) 
         Y | mi() ~ mi(X) + mi(M) + (1 | i | id) 
   Data: mydata (Number of observations: 1000) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Multilevel Hyperparameters:
~id (Number of levels: 100) 
                             Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
sd(X_Intercept)                  0.10      0.06     0.01     0.22 1.00      940
sd(M_Intercept)                  0.07      0.05     0.00     0.18 1.00     1540
sd(Y_Intercept)                  0.10      0.06     0.00     0.22 1.00     1082
cor(X_Intercept,M_Intercept)     0.05      0.49    -0.86     0.88 1.00     2752
cor(X_Intercept,Y_Intercept)     0.09      0.47    -0.84     0.88 1.00     2009
cor(M_Intercept,Y_Intercept)     0.02      0.49    -0.87     0.87 1.00     1534
                             Tail_ESS
sd(X_Intercept)                  1349
sd(M_Intercept)                  2175
sd(Y_Intercept)                  1583
cor(X_Intercept,M_Intercept)     2669
cor(X_Intercept,Y_Intercept)     1813
cor(M_Intercept,Y_Intercept)     2264

Regression Coefficients:
            Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
X_Intercept    -0.02      0.03    -0.08     0.05 1.00     6350     3132
M_Intercept     0.01      0.03    -0.06     0.07 1.00     6230     2860
Y_Intercept     0.06      0.03    -0.01     0.12 1.00     6250     2976
M_miX           0.51      0.03     0.45     0.58 1.00     8022     2457
Y_miX           0.04      0.04    -0.04     0.11 1.00     5385     3250
Y_miM           0.68      0.03     0.62     0.75 1.00     5923     3050

Further Distributional Parameters:
        Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma_X     0.98      0.02     0.94     1.03 1.00     5490     2750
sigma_M     1.01      0.02     0.96     1.06 1.00     8156     2841
sigma_Y     1.01      0.02     0.96     1.06 1.00     6482     2847

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).
hypothesis(
  bayesianRegressionModel,
  "bsp_M_miX * bsp_Y_miM = 0", # indirect effect = a path * b path
  class = NULL,
  seed = 52242
)
Hypothesis Tests for class :
                Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
1 (bsp_M_miX*bsp_Y_... = 0     0.35      0.03     0.29     0.41         NA
  Post.Prob Star
1        NA    *
---
'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
'*': For one-sided hypotheses, the posterior probability exceeds 95%;
for two-sided hypotheses, the value tested against lies outside the 95%-CI.
Posterior probabilities of point hypotheses assume equal prior probabilities.

9 Session Info

sessionInfo()
R version 4.4.1 (2024-06-14)
Platform: x86_64-pc-linux-gnu
Running under: Ubuntu 22.04.4 LTS

Matrix products: default
BLAS:   /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3 
LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.20.so;  LAPACK version 3.10.0

locale:
 [1] LC_CTYPE=C.UTF-8       LC_NUMERIC=C           LC_TIME=C.UTF-8       
 [4] LC_COLLATE=C.UTF-8     LC_MONETARY=C.UTF-8    LC_MESSAGES=C.UTF-8   
 [7] LC_PAPER=C.UTF-8       LC_NAME=C              LC_ADDRESS=C          
[10] LC_TELEPHONE=C         LC_MEASUREMENT=C.UTF-8 LC_IDENTIFICATION=C   

time zone: UTC
tzcode source: system (glibc)

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] mice_3.16.0         bayestestR_0.14.0   brms_2.21.0        
[4] Rcpp_1.0.13         rstan_2.32.6        StanHeaders_2.32.10
[7] lme4_1.1-35.5       Matrix_1.7-0       

loaded via a namespace (and not attached):
 [1] tidyselect_1.2.1     farver_2.1.2         dplyr_1.1.4         
 [4] loo_2.8.0            fastmap_1.2.0        tensorA_0.36.2.1    
 [7] digest_0.6.37        rpart_4.1.23         lifecycle_1.0.4     
[10] survival_3.6-4       processx_3.8.4       magrittr_2.0.3      
[13] posterior_1.6.0      compiler_4.4.1       rlang_1.1.4         
[16] sass_0.4.9           tools_4.4.1          utf8_1.2.4          
[19] yaml_2.3.10          knitr_1.48           labeling_0.4.3      
[22] bridgesampling_1.1-2 pkgbuild_1.4.4       plyr_1.8.9          
[25] abind_1.4-5          withr_3.0.1          purrr_1.0.2         
[28] nnet_7.3-19          grid_4.4.1           stats4_4.4.1        
[31] fansi_1.0.6          jomo_2.7-6           future_1.34.0       
[34] colorspace_2.1-1     inline_0.3.19        ggplot2_3.5.1       
[37] globals_0.16.3       scales_1.3.0         iterators_1.0.14    
[40] MASS_7.3-60.2        insight_0.20.4       cli_3.6.3           
[43] mvtnorm_1.3-1        rmarkdown_2.28       generics_0.1.3      
[46] RcppParallel_5.1.9   future.apply_1.11.2  reshape2_1.4.4      
[49] minqa_1.2.8          cachem_1.1.0         stringr_1.5.1       
[52] splines_4.4.1        bayesplot_1.11.1     parallel_4.4.1      
[55] matrixStats_1.4.0    vctrs_0.6.5          boot_1.3-30         
[58] glmnet_4.1-8         jsonlite_1.8.8       callr_3.7.6         
[61] mitml_0.4-5          listenv_0.9.1        foreach_1.5.2       
[64] jquerylib_0.1.4      tidyr_1.3.1          parallelly_1.38.0   
[67] glue_1.7.0           nloptr_2.1.1         pan_1.9             
[70] codetools_0.2-20     ps_1.7.7             distributional_0.4.0
[73] stringi_1.8.4        gtable_0.3.5         shape_1.4.6.1       
[76] QuickJSR_1.3.1       munsell_0.5.1        tibble_3.2.1        
[79] pillar_1.9.0         htmltools_0.5.8.1    Brobdingnag_1.2-9   
[82] R6_2.5.1             evaluate_0.24.0      lattice_0.22-6      
[85] highr_0.11           backports_1.5.0      broom_1.0.6         
[88] bslib_0.8.0          rstantools_2.4.0     coda_0.19-4.1       
[91] gridExtra_2.3        nlme_3.1-164         checkmate_2.3.2     
[94] xfun_0.47            pkgconfig_2.0.3     
# Multilevel Models

## Fit the Models

### Complete Pooling

```{r, results = "hide"}
fit_sleep1 <- brm(
  Reaction ~ 1 + Days,
  data = sleepstudy,
  seed  = 52242)
```

### Random Intercepts

```{r, results = "hide"}
fit_sleep2 <- brm(
  Reaction ~ 1 + Days + (1 | Subject), 
  data = sleepstudy,
  seed  = 52242)
```

### Random Intercepts and Slopes

```{r, results = "hide"}
fit_sleep3 <- brm(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy,
  seed  = 52242
)
```

## Summarize Results

For convergence, `Rhat` values should be close to 1.00 (commonly, below 1.01); values noticeably above 1.00 indicate that the chains have not converged.

```{r}
summary(fit_sleep3)
```
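
As a quick programmatic check, the `rhat()` method (listed among the post-processing methods above) returns the `Rhat` value for every parameter; a minimal sketch that flags any parameter exceeding 1.01 (`modelRhats` is just an illustrative name):

```{r}
modelRhats <- rhat(fit_sleep3)

# Flag parameters whose Rhat exceeds 1.01 (empty if the chains converged)
modelRhats[modelRhats > 1.01]
```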

## Model Priors

```{r}
prior_summary(fit_sleep3)
```

## Model Parameters

```{r}
variables(fit_sleep3)
```

## Model Coefficients

```{r}
coef(fit_sleep3)
```
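
`coef()` returns the person-specific estimates, i.e., the population-level effects plus each subject's group-level deviations; the two components can also be inspected separately with `fixef()` and `ranef()` (both listed among the post-processing methods above):

```{r}
fixef(fit_sleep3) # population-level ("fixed") effects
ranef(fit_sleep3) # group-level ("random") deviations per Subject
```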

## Plots

### Trace Plots

```{r}
plot(fit_sleep3, ask = FALSE)
```

### Visualize Predictions

#### Sample-Level

```{r}
plot(conditional_effects(fit_sleep1), points = TRUE)
```

#### Person-Level

```{r}
# re_formula = NULL ensures that group-level effects are included
ce2 <- conditional_effects(
  fit_sleep3,
  conditions = conditions,
  re_formula = NULL)

plot(ce2, ncol = 6, points = TRUE)
```
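
For comparison, setting `re_formula = NA` excludes the group-level effects, yielding population-level predictions only (a sketch; `ce3` is just an illustrative name):

```{r}
# re_formula = NA excludes group-level effects (population-level only)
ce3 <- conditional_effects(fit_sleep3, re_formula = NA)

plot(ce3, points = TRUE)
```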

### Check Model Fit

#### Posterior Predictive Check

Evaluate how closely the posterior predictions match the observed values.
If the predictions do not reproduce the general pattern of the observed data, a different response distribution may be necessary.

```{r}
pp_check(fit_sleep3)
pp_check(fit_sleep3, type = "dens_overlay")
pp_check(fit_sleep3, "error_scatter_avg")
```
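
Other built-in check types can also be useful; as a sketch, the following limits the number of posterior predictive draws that are overlaid and compares a summary statistic (the mean) between the observed data and the draws (`ndraws` and `type = "stat"` are standard `pp_check()` arguments):

```{r}
pp_check(fit_sleep3, ndraws = 100)                 # overlay fewer draws
pp_check(fit_sleep3, type = "stat", stat = "mean") # compare means
```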

## Fitted Values

```{r}
fitted(fit_sleep3)
```

## Residuals

```{r}
residuals(fit_sleep3)
```

## Compare Models

- `elpd` values: higher is better
- `looic` values: lower is better

An `elpd_diff` value whose magnitude is greater than about 2 times its standard error (`se_diff`) indicates that the model with the higher `elpd` fits meaningfully better.

```{r}
loo(fit_sleep1, fit_sleep2, fit_sleep3)
print(loo(fit_sleep1, fit_sleep2, fit_sleep3), simplify = FALSE)
```
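
Equivalently, the LOO criterion can be stored in each model object with `add_criterion()` and the models compared with `loo_compare()` (both listed among the post-processing methods above); the best-fitting model appears in the first row with `elpd_diff = 0`:

```{r}
fit_sleep1 <- add_criterion(fit_sleep1, "loo")
fit_sleep2 <- add_criterion(fit_sleep2, "loo")
fit_sleep3 <- add_criterion(fit_sleep3, "loo")

loo_compare(fit_sleep1, fit_sleep2, fit_sleep3, criterion = "loo")
```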

### Compute Model Weights

```{r}
model_weights(fit_sleep1, fit_sleep2, fit_sleep3, weights = "loo")

round(model_weights(fit_sleep1, fit_sleep2, fit_sleep3, weights = "loo"), digits = 2)
```

# Multilevel Mediation {#multilevelMediation}

The syntax below estimates random intercepts (allowing each participant to have a different intercept) to account for the nesting of repeated observations within participants. The `|i|` term is an identifier that correlates the group-level intercepts across the two response formulas.

```{r}
bayesianMediationSyntax <-
  bf(M ~ X + (1 |i| id)) +
  bf(Y ~ X + M + (1 |i| id)) +
  set_rescor(FALSE) # don't add a residual correlation between M and Y

bayesianMediationModel <- brm(
  bayesianMediationSyntax,
  data  = mydata,
  seed  = 52242
)

summary(bayesianMediationModel)

hypothesis(
  bayesianMediationModel,
  "b_M_X * b_Y_M = 0", # indirect effect = a path * b path
  class = NULL,
  seed  =  52242
)

mediation(bayesianMediationModel)
```
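
The indirect effect can also be computed by hand from the posterior draws, which yields a full posterior distribution (and credible interval) for the `a * b` product; a minimal sketch (`mediationDraws` and `indirectEffect` are illustrative names; `b_M_X` and `b_Y_M` are the draw names used in the `hypothesis()` call above):

```{r}
# Indirect effect (a path * b path) computed from the posterior draws
mediationDraws <- as_draws_df(bayesianMediationModel)
indirectEffect <- mediationDraws$b_M_X * mediationDraws$b_Y_M

quantile(indirectEffect, probs = c(.025, .5, .975)) # median and 95% CI
```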

# Setting Priors

## Parameters for Which to Set Priors

```{r}
get_prior(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy)
```

## Define Priors

```{r}
bprior <- c(
  set_prior("normal(5, 5)", coef = "Days"),
  set_prior("cauchy(0, 10)", class = "sd"),
  set_prior("lkj(2)", class = "cor"))

bprior
```
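
To see what these priors imply on the outcome scale before conditioning on the data, the model can be sampled from the priors alone; a sketch using `sample_prior = "only"`, which ignores the likelihood (`fit_sleep_prior` is an illustrative name):

```{r, results = "hide"}
fit_sleep_prior <- brm(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy,
  prior = bprior, 
  sample_prior = "only", # draw from the priors, ignoring the data
  seed  = 52242
)
```

`pp_check(fit_sleep_prior)` would then display the prior predictive distribution.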

## Fit the Model

Fit the model with these priors, and also draw samples from the priors (`sample_prior = TRUE`) so that the posterior and prior can be compared later:

```{r, results = "hide"}
fit_sleep4 <- brm(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy,
  prior = bprior, 
  sample_prior = TRUE,
  seed  = 52242
)
```

# Evaluate a Hypothesis

```{r}
# Evid.Ratio is the ratio of the posterior probabilities:
# P(Days < 7) / P(Days >= 7)
(hyp1 <- hypothesis(fit_sleep4, "Days < 7"))
plot(hyp1)
```

```{r}
# Evid.Ratio is the Bayes factor of the posterior versus the prior
# density evaluated at Days = 10 (the Savage-Dickey density ratio)
(hyp2 <- hypothesis(fit_sleep4, "Days = 10"))
plot(hyp2)
```
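
These posterior probabilities can be reproduced directly from the draws; a sketch using the `as_draws_df()` method (`draws_sleep4` is an illustrative name; `b_Days` is the draw name for the `Days` coefficient):

```{r}
draws_sleep4 <- as_draws_df(fit_sleep4)

# Proportion of posterior draws satisfying Days < 7 (Post.Prob of hyp1)
mean(draws_sleep4$b_Days < 7)
```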

# Parallel Processing

## Between-Chain Parallelization

```{r, results = "hide"}
fit_sleep4 <- brm(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy,
  prior = bprior, 
  sample_prior = TRUE,
  cores = 4,
  seed  = 52242
)
```

## Within-Chain Parallelization

https://paul-buerkner.github.io/brms/articles/brms_threading.html (archived at https://perma.cc/NCG3-KV4G)
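
As a minimal sketch of within-chain parallelization (assuming the `cmdstanr` backend and a working Stan toolchain are installed; see the vignette above for details; `fit_threaded` is an illustrative name):

```{r, eval = FALSE}
fit_threaded <- brm(
  Reaction ~ 1 + Days + (1 + Days | Subject), 
  data = sleepstudy,
  prior = bprior, 
  cores = 4,
  threads = threading(2), # 2 threads per chain (within-chain parallelization)
  backend = "cmdstanr",   # threading is easiest via the cmdstanr backend
  seed  = 52242
)
```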

# Missing Data Handling

## Multiply Imputed Datasets from `mice`

```{r}
?brm_multiple
```

```{r}
imp <- mice::mice(
  mydata,
  m = 5,
  print = FALSE)

fit_imp <- brm_multiple(
  bayesianMediationSyntax,
  data = imp,
  chains = 2)
```
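
Because `brm_multiple()` pools draws from models fitted separately to each imputed dataset, the overall `Rhat` values in `summary()` can look inflated even when each per-dataset fit converged; the fit object stores the within-dataset values in its `rhats` element (as used in the `brms` missing-data vignette linked below):

```{r}
summary(fit_imp)

# Rhat values computed within each imputed dataset's fit
round(fit_imp$rhats, digits = 2)
```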

## Imputation on the Fly During Model Fitting

https://paul-buerkner.github.io/brms/articles/brms_missings.html (archived at https://perma.cc/4Y9L-USQR)

```{r}
?mi
```

```{r}
bayesianRegressionImputationSyntax <-
  bf(X | mi() ~ (1 |i| id)) +
  bf(M | mi() ~ mi(X) + (1 |i| id)) +
  bf(Y | mi() ~ mi(X) + mi(M) + (1 |i| id)) +
  set_rescor(FALSE) # don't add a residual correlation between X, M, and Y

bayesianRegressionModel <- brm(
  bayesianRegressionImputationSyntax,
  data = mydata,
  seed = 52242
)

summary(bayesianRegressionModel)

hypothesis(
  bayesianRegressionModel,
  "bsp_M_miX * bsp_Y_miM = 0", # indirect effect = a path * b path
  class = NULL,
  seed = 52242
)
```

# Session Info

```{r, class.source = "fold-hide"}
sessionInfo()
```



