Simple-to-use functions that fit the regression models and test the indirect effects of a mediation model in just one call.
Usage
q_mediation(
x,
y,
m = NULL,
cov = NULL,
data = NULL,
boot_ci = TRUE,
level = 0.95,
R = 100,
seed = NULL,
boot_type = c("perc", "bc"),
model = NULL,
parallel = TRUE,
ncores = max(parallel::detectCores(logical = FALSE) - 1, 1),
progress = TRUE
)
q_simple_mediation(
x,
y,
m = NULL,
cov = NULL,
data = NULL,
boot_ci = TRUE,
level = 0.95,
R = 100,
seed = NULL,
boot_type = c("perc", "bc"),
parallel = TRUE,
ncores = max(parallel::detectCores(logical = FALSE) - 1, 1),
progress = TRUE
)
q_serial_mediation(
x,
y,
m = NULL,
cov = NULL,
data = NULL,
boot_ci = TRUE,
level = 0.95,
R = 100,
seed = NULL,
boot_type = c("perc", "bc"),
parallel = TRUE,
ncores = max(parallel::detectCores(logical = FALSE) - 1, 1),
progress = TRUE
)
q_parallel_mediation(
x,
y,
m = NULL,
cov = NULL,
data = NULL,
boot_ci = TRUE,
level = 0.95,
R = 100,
seed = NULL,
boot_type = c("perc", "bc"),
parallel = TRUE,
ncores = max(parallel::detectCores(logical = FALSE) - 1, 1),
progress = TRUE
)
# S3 method for class 'q_mediation'
print(
x,
digits = 4,
annotation = TRUE,
pvalue = TRUE,
pvalue_digits = 4,
se = TRUE,
for_each_path = FALSE,
se_ci = TRUE,
wrap_computation = TRUE,
lm_ci = TRUE,
lm_beta = TRUE,
lm_ci_level = 0.95,
...
)
Arguments
- x
For q_mediation(), q_simple_mediation(), q_serial_mediation(), and q_parallel_mediation(), the name of the predictor. For the print method of these functions, x is the output of these functions.
- y
The name of the outcome.
- m
A character vector of the name(s) of the mediator(s). For a simple mediation model, it must have exactly one name. For serial and parallel mediation models, it can have one or more names. For a serial mediation model, the paths go from the first name to the last name. For example, c("m1", "m3", "m4") denotes the path m1 -> m3 -> m4.
- cov
The names of the covariates, if any. If it is a character vector, then the outcome (y) and all mediators (m) are regressed on all the covariates. If it is a named list of character vectors, then the covariates in an element predict only the variable named by that element. For example, list(m1 = "c1", dv = c("c2", "c3")) indicates that c1 predicts "m1", while c2 and c3 predict "dv". Default is NULL, no covariates.
- data
The data frame. Note that listwise deletion will be used: only cases with no missing data on all variables in the model (x, m, y, and cov) will be retained.
- boot_ci
Logical. Whether bootstrap confidence intervals will be formed. Default is TRUE.
- level
The level of confidence of the confidence interval. Default is .95 (for 95% confidence intervals).
- R
The number of bootstrap samples. Default is 100, for quick illustration only. In real research, it should be set to at least 5000, or even 10000.
- seed
- seed
The seed for the random number generator. Default is
NULL
. Should nearly always be set to an integer to make the results reproducible.- boot_type
The type of bootstrap confidence interval. Default is "perc", the percentile confidence interval. Set to "bc" for the bias-corrected confidence interval.
- model
The type of model. For q_mediation(), it can be "simple" (simple mediation model), "serial" (serial mediation model), or "parallel" (parallel mediation model). It is recommended to call the corresponding wrappers directly (q_simple_mediation(), q_serial_mediation(), and q_parallel_mediation()) instead of calling q_mediation().
- parallel
If TRUE, the default, parallel processing will be used when doing bootstrapping.
- ncores
Integer. The number of CPU cores to use when parallel is TRUE. Default is the number of non-logical cores minus one (one minimum). An error is raised if it is greater than the number of cores detected by parallel::detectCores(). If ncores is set, it overrides make_cluster_args in do_boot().
- progress
Logical. Whether bootstrapping progress is displayed. Default is TRUE.
- digits
Number of digits to display. Default is 4.
- annotation
Logical. Whether the annotation after the table of effects is printed. Default is TRUE.
- pvalue
Logical. If TRUE, asymmetric p-values based on bootstrapping will be printed if available. Default is TRUE.
- pvalue_digits
Number of decimal places to display for the p-values. Default is 4.
- se
Logical. If TRUE and confidence intervals are available, the standard errors of the estimates are also printed. They are simply the standard deviations of the bootstrap estimates. Default is TRUE.
- for_each_path
Logical. If TRUE, each of the paths will be printed individually, using the print method of the output of indirect_effect(). Default is FALSE.
- se_ci
Logical. If TRUE and confidence intervals have not been computed, the function will try to compute them from the stored standard errors if the original standard errors are to be used. Ignored if confidence intervals have already been computed. Default is TRUE.
- wrap_computation
Logical. If TRUE, the default, long computational symbols and values will be wrapped to fit the screen width.
- lm_ci
If TRUE, when printing the regression results of stats::lm(), confidence intervals based on the t statistics and standard errors will be computed and added to the output. Default is TRUE.
- lm_beta
If TRUE, when printing the regression results of stats::lm(), standardized coefficients are computed and included in the printout. Only numeric variables are standardized, and any derived terms, such as product terms, are formed after standardization. Default is TRUE.
- lm_ci_level
The level of confidence of the confidence intervals. Ignored if lm_ci is not TRUE.
- ...
Other arguments. If for_each_path is TRUE, they will be passed to the print method of the output of indirect_effect(). Ignored otherwise.
Value
The function q_mediation() returns a q_mediation class object, with its own print method.
The function q_simple_mediation() returns a q_simple_mediation class object, which is a subclass of q_mediation.
The function q_serial_mediation() returns a q_serial_mediation class object, which is a subclass of q_mediation.
The function q_parallel_mediation() returns a q_parallel_mediation class object, which is a subclass of q_mediation.
Details
The family of "q" (quick) functions are for testing mediation effects in common models. These functions do the following in one single call:
Fit the regression models.
Compute and test all the indirect effects.
They are easy to use and are suitable for common models that are not too complicated. For now, there are "q" functions for these models:
- A simple mediation model: One predictor (x), one mediator (m), one outcome (y), and optionally some control variables (covariates, set by cov) (q_simple_mediation()).
- A serial mediation model: One predictor (x), one or more mediators (m), one outcome (y), and optionally some control variables. The mediators are positioned sequentially between x and y (q_serial_mediation()):
x -> m1 -> m2 -> ... -> y
- A parallel mediation model: One predictor (x), one or more mediators (m), one outcome (y), and optionally some control variables. The mediators are positioned in parallel between x and y (q_parallel_mediation()):
x -> m1 -> y
x -> m2 -> y
...
Users only need to specify the x, m, and y variables, and the covariates or control variables, if any (by cov), and the functions will automatically identify all indirect effects and total effects.
Note that they are not intended to be flexible. For models that are different from these common models, it is recommended to fit the models manually, either by structural equation modelling (e.g., lavaan::sem()) or by regression analysis using stats::lm() or lmhelprs::many_lm(). See https://sfcheung.github.io/manymome/articles/med_lm.html for an illustration of how to compute and test indirect effects for an arbitrary mediation model.
Workflow
This is the workflow of the "q" functions:
1. Do listwise deletion based on all the variables used in the models.
2. Generate the regression models based on the variables specified.
3. Fit all the models by OLS regression using stats::lm().
4. Call all_indirect_paths() to identify all indirect paths.
5. Call many_indirect_effects() to compute all indirect effects and form their confidence intervals.
6. Call total_indirect_effect() to compute the total indirect effect.
7. Return all the results for printing.
The output of the "q" functions has a print method for printing all the major results.
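For illustration, the workflow above can be approximated manually when more control is needed. The following is a minimal sketch only, assuming the data set data_med and the variable names used in the examples below; see the linked article for a full treatment:

```r
# Sketch: a manual counterpart of q_simple_mediation()
library(manymome)

# Steps 1-2: listwise deletion and model generation, done by hand here
dat <- data_med[stats::complete.cases(data_med[, c("x", "m", "y", "c1", "c2")]), ]

# Step 3: fit the models by OLS regression
lm_m <- lm(m ~ x + c1 + c2, data = dat)
lm_y <- lm(y ~ m + x + c1 + c2, data = dat)
fit <- lm2list(lm_m, lm_y)

# Steps 4-6: identify the paths, compute the indirect effects,
# and compute the total indirect effect
paths <- all_indirect_paths(fit, x = "x", y = "y")
ind <- many_indirect_effects(paths, fit = fit,
                             boot_ci = TRUE, R = 100, seed = 1234)
total <- total_indirect_effect(ind, x = "x", y = "y")
```

Unlike the "q" functions, this manual route lets each step be customized (e.g., excluding paths, or reusing the bootstrap estimates across calls).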
Notes
Flexibility
The "q" functions are designed to be
easy to use. They are not designed to
be flexible. For maximum flexibility,
fit the models manually and call
functions such as
indirect_effect()
separately. See
https://sfcheung.github.io/manymome/articles/med_lm.html
for illustrations.
Monte Carlo Confidence Intervals
We do not recommend using Monte Carlo
confidence intervals for models
fitted by regression because the
covariances between parameter
estimates are assumed to be zero,
which may not be the case in some
models. Therefore, the "q" functions
do not support Monte Carlo confidence
intervals. If Monte Carlo intervals
are desired, please fit the model by
structural equation modeling using
lavaan::sem()
.
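For example, the simple mediation model in the examples below could instead be fitted by lavaan::sem(). This is a sketch only, assuming data_med and the variable names used in this page; Monte Carlo confidence intervals can then be formed with the manymome functions that support them:

```r
# Sketch: fitting the simple mediation model by SEM with lavaan,
# which estimates the covariances between parameter estimates
# needed for Monte Carlo confidence intervals
library(lavaan)

model <- "
m ~ x + c1 + c2
y ~ m + x + c1 + c2
"
fit <- sem(model, data = data_med, fixed.x = FALSE)
# Monte Carlo CIs can then be formed, e.g., using do_mc() with
# indirect_effect() from manymome; see the package articles.
```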
Methods (by generic)
print(q_mediation): The print method of the outputs of q_mediation(), q_simple_mediation(), q_serial_mediation(), and q_parallel_mediation().
Functions
- q_mediation(): The general "q" function for common mediation models. Not to be used directly.
- q_simple_mediation(): A wrapper of q_mediation() for simple mediation models (models with only one mediator).
- q_serial_mediation(): A wrapper of q_mediation() for serial mediation models.
- q_parallel_mediation(): A wrapper of q_mediation() for parallel mediation models.
See also
lmhelprs::many_lm()
for
fitting several regression models
using model syntax,
indirect_effect()
for computing and
testing a specific path,
all_indirect_paths()
for
identifying all paths in a model,
many_indirect_effects()
for
computing and testing indirect
effects along several paths, and
total_indirect_effect()
for
computing and testing the total
indirect effects.
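For reference, lmhelprs::many_lm() accepts several regression models in one model syntax string. A minimal sketch, assuming data_med and the variables used in the examples below:

```r
# Sketch: fitting the two regression models of a simple mediation
# model using model syntax
library(lmhelprs)

mod <- "m ~ x + c1 + c2
y ~ m + x + c1 + c2"
fits <- many_lm(mod, data = data_med)
```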
Examples
# ===== Simple mediation
# Set R to 5000 or 10000 in real studies
# Remove 'parallel' or set it to TRUE for faster bootstrapping
# Remove 'progress' or set it to TRUE to see a progress bar
out <- q_simple_mediation(x = "x",
y = "y",
m = "m",
cov = c("c2", "c1"),
data = data_med,
R = 100,
seed = 1234,
parallel = FALSE,
progress = FALSE)
out
#>
#> =============== Simple Mediation Model ===============
#>
#> Call:
#>
#> q_simple_mediation(x = "x", y = "y", m = "m", cov = c("c2", "c1"),
#> data = data_med, R = 100, seed = 1234, parallel = FALSE,
#> progress = FALSE)
#>
#> ===================================================
#> | Basic Information |
#> ===================================================
#>
#> Predictor(x): x
#> Outcome(y): y
#> Mediator(s)(m): m
#> Model: Simple Mediation Model
#>
#> The regression models fitted:
#>
#> m ~ x + c2 + c1
#> y ~ m + x + c2 + c1
#>
#> The number of cases included: 100
#>
#> ===================================================
#> | Regression Results |
#> ===================================================
#>
#>
#> Model:
#> m ~ x + c2 + c1
#>
#> Estimate CI.lo CI.hi betaS Std. Error t value Pr(>|t|) Sig
#> (Intercept) 9.6894 7.8636 11.5152 0.0000 0.9198 10.5344 0.0000 ***
#> x 0.9347 0.7742 1.0951 0.7536 0.0808 11.5632 0.0000 ***
#> c2 -0.1684 -0.3730 0.0361 -0.1070 0.1030 -1.6343 0.1055
#> c1 0.1978 0.0454 0.3502 0.1676 0.0768 2.5759 0.0115 *
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#> - BetaS are standardized coefficients with (a) only numeric variables
#> standardized and (b) product terms formed after standardization.
#> Variable(s) standardized is/are: m, x, c2, c1
#> - CI.lo and CI.hi are the 95.0% confidence levels of 'Estimate'
#> computed from the t values and standard errors.
#> R-square = 0.598. Adjusted R-square = 0.586. F(3, 96) = 47.622, p < .001
#>
#> Model:
#> y ~ m + x + c2 + c1
#>
#> Estimate CI.lo CI.hi betaS Std. Error t value Pr(>|t|) Sig
#> (Intercept) 4.4152 -2.1393 10.9698 0.0000 3.3016 1.3373 0.1843
#> m 0.7847 0.2894 1.2800 0.4079 0.2495 3.1450 0.0022 **
#> x 0.5077 -0.0992 1.1145 0.2128 0.3057 1.6608 0.1000
#> c2 -0.1544 -0.6614 0.3526 -0.0510 0.2554 -0.6045 0.5470
#> c1 0.1405 -0.2448 0.5258 0.0619 0.1941 0.7239 0.4709
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#> - BetaS are standardized coefficients with (a) only numeric variables
#> standardized and (b) product terms formed after standardization.
#> Variable(s) standardized is/are: y, m, x, c2, c1
#> - CI.lo and CI.hi are the 95.0% confidence levels of 'Estimate'
#> computed from the t values and standard errors.
#> R-square = 0.358. Adjusted R-square = 0.331. F(4, 95) = 13.220, p < .001
#>
#> ===================================================
#> | Indirect Effect Results |
#> ===================================================
#>
#> == Indirect Effect(s) ==
#> ind CI.lo CI.hi Sig pvalue SE
#> x -> m -> y 0.7334 0.3224 1.3636 Sig 0.0000 0.2450
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - The 'ind' column shows the indirect effects.
#>
#> == Indirect Effect(s) (x-variable(s) Standardized) ==
#> std CI.lo CI.hi Sig pvalue SE
#> x -> m -> y 0.7739 0.3287 1.4360 Sig 0.0000 0.2591
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - std: The partially standardized indirect effects.
#> - x-variable(s) standardized.
#>
#> == Indirect Effect(s) (y-variable(s) Standardized) ==
#> std CI.lo CI.hi Sig pvalue SE
#> x -> m -> y 0.2914 0.1351 0.5292 Sig 0.0000 0.0896
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - std: The partially standardized indirect effects.
#> - y-variable(s) standardized.
#>
#> == Indirect Effect(s) (Both x-variable(s) and y-variable(s) Standardized) ==
#> std CI.lo CI.hi Sig pvalue SE
#> x -> m -> y 0.3074 0.1352 0.5396 Sig 0.0000 0.0949
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - std: The standardized indirect effects.
#>
#> ===================================================
#> | Direct Effect Results |
#> ===================================================
#>
#> == Effect(s) ==
#> ind CI.lo CI.hi Sig pvalue SE
#> x -> y 0.5077 -0.3369 1.1325 0.1400 0.3290
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - The 'ind' column shows the effects.
#>
#> == Effect(s) (x-variable(s) Standardized) ==
#> std CI.lo CI.hi Sig pvalue SE
#> x -> y 0.5357 -0.3446 1.0575 0.1400 0.3361
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - std: The partially standardized effects.
#> - x-variable(s) standardized.
#>
#> == Effect(s) (y-variable(s) Standardized) ==
#> std CI.lo CI.hi Sig pvalue SE
#> x -> y 0.2017 -0.1252 0.4404 0.1400 0.1317
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - std: The partially standardized effects.
#> - y-variable(s) standardized.
#>
#> == Effect(s) (Both x-variable(s) and y-variable(s) Standardized) ==
#> std CI.lo CI.hi Sig pvalue SE
#> x -> y 0.2128 -0.1277 0.4291 0.1400 0.1344
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - std: The standardized effects.
#>
#> ===================================================
#> | Notes |
#> ===================================================
#>
#> - For reference, the bootstrap confidence interval (and bootstrap
#> p-value, if requested) of the (unstandardize) direct effect is also
#> reported. The bootstrap p-value and the OLS t-statistic p-value can
#> be different.
#> - For the direct effects with either 'x'-variable or 'y'-variable, or
#> both, standardized, it is recommended to use the bootstrap confidence
#> intervals, which take into account the sampling error of the sample
#> standard deviations.
#> - The asymmetric bootstrap value for an effect is the same whether x
#> and/or y is/are standardized.
# Different control variables for m and y
# out <- q_simple_mediation(x = "x",
# y = "y",
# m = "m",
# cov = list(m = "c1",
# y = c("c1", "c2")),
# data = data_med,
# R = 100,
# seed = 1234,
# parallel = FALSE,
# progress = FALSE)
# out
# ===== Serial mediation
# Set R to 5000 or 10000 in real studies
# Remove 'parallel' or set it to TRUE for faster bootstrapping
# Remove 'progress' or set it to TRUE to see a progress bar
out <- q_serial_mediation(x = "x",
y = "y",
m = c("m1", "m2"),
cov = c("c2", "c1"),
data = data_serial,
R = 100,
seed = 1234,
parallel = FALSE,
progress = FALSE)
out
#>
#> =============== Serial Mediation Model ===============
#>
#> Call:
#>
#> q_serial_mediation(x = "x", y = "y", m = c("m1", "m2"), cov = c("c2",
#> "c1"), data = data_serial, R = 100, seed = 1234, parallel = FALSE,
#> progress = FALSE)
#>
#> ===================================================
#> | Basic Information |
#> ===================================================
#>
#> Predictor(x): x
#> Outcome(y): y
#> Mediator(s)(m): m1, m2
#> Model: Serial Mediation Model
#>
#> The regression models fitted:
#>
#> m1 ~ x + c2 + c1
#> m2 ~ x + m1 + c2 + c1
#> y ~ m1 + m2 + x + c2 + c1
#>
#> The number of cases included: 100
#>
#> ===================================================
#> | Regression Results |
#> ===================================================
#>
#>
#> Model:
#> m1 ~ x + c2 + c1
#>
#> Estimate CI.lo CI.hi betaS Std. Error t value Pr(>|t|) Sig
#> (Intercept) 10.8157 8.5453 13.0861 -0.0000 1.1438 9.4560 0.0000 ***
#> x 0.8224 0.6354 1.0095 0.6493 0.0942 8.7274 0.0000 ***
#> c2 -0.1889 -0.3730 -0.0047 -0.1534 0.0928 -2.0355 0.0446 *
#> c1 0.1715 -0.0086 0.3515 0.1423 0.0907 1.8906 0.0617 .
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#> - BetaS are standardized coefficients with (a) only numeric variables
#> standardized and (b) product terms formed after standardization.
#> Variable(s) standardized is/are: m1, x, c2, c1
#> - CI.lo and CI.hi are the 95.0% confidence levels of 'Estimate'
#> computed from the t values and standard errors.
#> R-square = 0.477. Adjusted R-square = 0.461. F(3, 96) = 29.235, p < .001
#>
#> Model:
#> m2 ~ x + m1 + c2 + c1
#>
#> Estimate CI.lo CI.hi betaS Std. Error t value Pr(>|t|) Sig
#> (Intercept) 3.5194 0.3692 6.6696 -0.0000 1.5868 2.2179 0.0289 *
#> x -0.1161 -0.3662 0.1340 -0.1036 0.1260 -0.9216 0.3591
#> m1 0.4208 0.2185 0.6230 0.4758 0.1019 4.1300 0.0001 ***
#> c2 -0.1619 -0.3497 0.0259 -0.1487 0.0946 -1.7120 0.0902 .
#> c1 0.2775 0.0945 0.4606 0.2603 0.0922 3.0096 0.0033 **
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#> - BetaS are standardized coefficients with (a) only numeric variables
#> standardized and (b) product terms formed after standardization.
#> Variable(s) standardized is/are: m2, x, m1, c2, c1
#> - CI.lo and CI.hi are the 95.0% confidence levels of 'Estimate'
#> computed from the t values and standard errors.
#> R-square = 0.341. Adjusted R-square = 0.313. F(4, 95) = 12.290, p < .001
#>
#> Model:
#> y ~ m1 + m2 + x + c2 + c1
#>
#> Estimate CI.lo CI.hi betaS Std. Error t value Pr(>|t|) Sig
#> (Intercept) 9.4679 2.3021 16.6336 -0.0000 3.6090 2.6234 0.0102 *
#> m1 -0.4353 -0.9226 0.0519 -0.2620 0.2454 -1.7740 0.0793 .
#> m2 0.5208 0.0690 0.9725 0.2772 0.2275 2.2888 0.0243 *
#> x 0.4929 -0.0643 1.0500 0.2342 0.2806 1.7562 0.0823 .
#> c2 -0.0960 -0.5190 0.3269 -0.0469 0.2130 -0.4509 0.6531
#> c1 0.0988 -0.3261 0.5238 0.0493 0.2140 0.4618 0.6453
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#> - BetaS are standardized coefficients with (a) only numeric variables
#> standardized and (b) product terms formed after standardization.
#> Variable(s) standardized is/are: y, m1, m2, x, c2, c1
#> - CI.lo and CI.hi are the 95.0% confidence levels of 'Estimate'
#> computed from the t values and standard errors.
#> R-square = 0.091. Adjusted R-square = 0.043. F(5, 94) = 1.893, p = 0.103
#>
#> ===================================================
#> | Indirect Effect Results |
#> ===================================================
#>
#> == Indirect Effect(s) ==
#> ind CI.lo CI.hi Sig pvalue SE
#> x -> m1 -> m2 -> y 0.1802 0.0376 0.3518 Sig 0.0000 0.0749
#> x -> m1 -> y -0.3580 -0.7488 0.0285 0.0600 0.1751
#> x -> m2 -> y -0.0605 -0.1963 0.1371 0.4800 0.0740
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - The 'ind' column shows the indirect effects.
#>
#> == Indirect Effect(s) (x-variable(s) Standardized) ==
#> std CI.lo CI.hi Sig pvalue SE
#> x -> m1 -> m2 -> y 0.1721 0.0331 0.3249 Sig 0.0000 0.0718
#> x -> m1 -> y -0.3419 -0.7121 0.0258 0.0600 0.1649
#> x -> m2 -> y -0.0577 -0.1864 0.1164 0.4800 0.0689
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - std: The partially standardized indirect effects.
#> - x-variable(s) standardized.
#>
#> == Indirect Effect(s) (y-variable(s) Standardized) ==
#> std CI.lo CI.hi Sig pvalue SE
#> x -> m1 -> m2 -> y 0.0897 0.0197 0.1758 Sig 0.0000 0.0380
#> x -> m1 -> y -0.1782 -0.3599 0.0138 0.0600 0.0883
#> x -> m2 -> y -0.0301 -0.1045 0.0644 0.4800 0.0373
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - std: The partially standardized indirect effects.
#> - y-variable(s) standardized.
#>
#> == Indirect Effect(s) (Both x-variable(s) and y-variable(s) Standardized) ==
#> std CI.lo CI.hi Sig pvalue SE
#> x -> m1 -> m2 -> y 0.0856 0.0166 0.1610 Sig 0.0000 0.0364
#> x -> m1 -> y -0.1701 -0.3414 0.0125 0.0600 0.0831
#> x -> m2 -> y -0.0287 -0.0974 0.0562 0.4800 0.0347
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - std: The standardized indirect effects.
#>
#> ===================================================
#> | Total Indirect Effect Results |
#> ===================================================
#>
#> == Indirect Effect ==
#>
#> Path: x -> m1 -> m2 -> y
#> Path: x -> m1 -> y
#> Path: x -> m2 -> y
#> Function of Effects: -0.2383
#> 95.0% Bootstrap CI: [-0.5488 to 0.1154]
#> Bootstrap p-value: 0.1400
#> Bootstrap SE: 0.1544
#>
#> Computation of the Function of Effects:
#> ((x->m1->m2->y)
#> +(x->m1->y))
#> +(x->m2->y)
#>
#>
#> Percentile confidence interval formed by nonparametric bootstrapping
#> with 100 bootstrap samples.
#> Standard error (SE) based on nonparametric bootstrapping with 100
#> bootstrap samples.
#>
#>
#> == Indirect Effect (‘x’ Standardized) ==
#>
#> Path: x -> m1 -> m2 -> y
#> Path: x -> m1 -> y
#> Path: x -> m2 -> y
#> Function of Effects: -0.2275
#> 95.0% Bootstrap CI: [-0.5008 to 0.1069]
#> Bootstrap p-value: 0.1400
#> Bootstrap SE: 0.1448
#>
#> Computation of the Function of Effects:
#> ((x->m1->m2->y)
#> +(x->m1->y))
#> +(x->m2->y)
#>
#>
#> Percentile confidence interval formed by nonparametric bootstrapping
#> with 100 bootstrap samples.
#> Standard error (SE) based on nonparametric bootstrapping with 100
#> bootstrap samples.
#>
#>
#> == Indirect Effect (‘y’ Standardized) ==
#>
#> Path: x -> m1 -> m2 -> y
#> Path: x -> m1 -> y
#> Path: x -> m2 -> y
#> Function of Effects: -0.1186
#> 95.0% Bootstrap CI: [-0.2845 to 0.0568]
#> Bootstrap p-value: 0.1400
#> Bootstrap SE: 0.0792
#>
#> Computation of the Function of Effects:
#> ((x->m1->m2->y)
#> +(x->m1->y))
#> +(x->m2->y)
#>
#>
#> Percentile confidence interval formed by nonparametric bootstrapping
#> with 100 bootstrap samples.
#> Standard error (SE) based on nonparametric bootstrapping with 100
#> bootstrap samples.
#>
#>
#> == Indirect Effect (Both ‘x’ and ‘y’ Standardized) ==
#>
#> Path: x -> m1 -> m2 -> y
#> Path: x -> m1 -> y
#> Path: x -> m2 -> y
#> Function of Effects: -0.1132
#> 95.0% Bootstrap CI: [-0.2680 to 0.0527]
#> Bootstrap p-value: 0.1400
#> Bootstrap SE: 0.0741
#>
#> Computation of the Function of Effects:
#> ((x->m1->m2->y)
#> +(x->m1->y))
#> +(x->m2->y)
#>
#>
#> Percentile confidence interval formed by nonparametric bootstrapping
#> with 100 bootstrap samples.
#> Standard error (SE) based on nonparametric bootstrapping with 100
#> bootstrap samples.
#>
#>
#> ===================================================
#> | Direct Effect Results |
#> ===================================================
#>
#> == Effect(s) ==
#> ind CI.lo CI.hi Sig pvalue SE
#> x -> y 0.4929 -0.1774 0.9963 0.1200 0.2959
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - The 'ind' column shows the effects.
#>
#> == Effect(s) (x-variable(s) Standardized) ==
#> std CI.lo CI.hi Sig pvalue SE
#> x -> y 0.4706 -0.1746 0.9343 0.1200 0.2779
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - std: The partially standardized effects.
#> - x-variable(s) standardized.
#>
#> == Effect(s) (y-variable(s) Standardized) ==
#> std CI.lo CI.hi Sig pvalue SE
#> x -> y 0.2452 -0.0899 0.4899 0.1200 0.1470
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - std: The partially standardized effects.
#> - y-variable(s) standardized.
#>
#> == Effect(s) (Both x-variable(s) and y-variable(s) Standardized) ==
#> std CI.lo CI.hi Sig pvalue SE
#> x -> y 0.2342 -0.0861 0.4555 0.1200 0.1375
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - std: The standardized effects.
#>
#> ===================================================
#> | Notes |
#> ===================================================
#>
#> - For reference, the bootstrap confidence interval (and bootstrap
#> p-value, if requested) of the (unstandardize) direct effect is also
#> reported. The bootstrap p-value and the OLS t-statistic p-value can
#> be different.
#> - For the direct effects with either 'x'-variable or 'y'-variable, or
#> both, standardized, it is recommended to use the bootstrap confidence
#> intervals, which take into account the sampling error of the sample
#> standard deviations.
#> - The asymmetric bootstrap value for an effect is the same whether x
#> and/or y is/are standardized.
# Different control variables for m and y
# out <- q_serial_mediation(x = "x",
# y = "y",
# m = c("m1", "m2"),
# cov = list(m1 = "c1",
# m2 = c("c2", "c1"),
# y = "c2"),
# data = data_serial,
# R = 100,
# seed = 1234,
# parallel = FALSE,
# progress = FALSE)
# out
# ===== Parallel mediation
# Set R to 5000 or 10000 in real studies
# Remove 'parallel' or set it to TRUE for faster bootstrapping
# Remove 'progress' or set it to TRUE to see a progress bar
out <- q_parallel_mediation(x = "x",
y = "y",
m = c("m1", "m2"),
cov = c("c2", "c1"),
data = data_parallel,
R = 100,
seed = 1234,
parallel = FALSE,
progress = FALSE)
out
#>
#> =============== Parallel Mediation Model ===============
#>
#> Call:
#>
#> q_parallel_mediation(x = "x", y = "y", m = c("m1", "m2"), cov = c("c2",
#> "c1"), data = data_parallel, R = 100, seed = 1234, parallel = FALSE,
#> progress = FALSE)
#>
#> ===================================================
#> | Basic Information |
#> ===================================================
#>
#> Predictor(x): x
#> Outcome(y): y
#> Mediator(s)(m): m1, m2
#> Model: Parallel Mediation Model
#>
#> The regression models fitted:
#>
#> m1 ~ x + c2 + c1
#> m2 ~ x + c2 + c1
#> y ~ m1 + m2 + x + c2 + c1
#>
#> The number of cases included: 100
#>
#> ===================================================
#> | Regression Results |
#> ===================================================
#>
#>
#> Model:
#> m1 ~ x + c2 + c1
#>
#> Estimate CI.lo CI.hi betaS Std. Error t value Pr(>|t|) Sig
#> (Intercept) 9.3926 6.6132 12.1720 -0.0000 1.4002 6.7081 0.0000 ***
#> x 0.8769 0.6498 1.1040 0.6049 0.1144 7.6647 0.0000 ***
#> c2 -0.0929 -0.3362 0.1503 -0.0595 0.1225 -0.7584 0.4501
#> c1 0.2519 0.0309 0.4728 0.1742 0.1113 2.2630 0.0259 *
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#> - BetaS are standardized coefficients with (a) only numeric variables
#> standardized and (b) product terms formed after standardization.
#> Variable(s) standardized is/are: m1, x, c2, c1
#> - CI.lo and CI.hi are the 95.0% confidence levels of 'Estimate'
#> computed from the t values and standard errors.
#> R-square = 0.439. Adjusted R-square = 0.422. F(3, 96) = 25.082, p < .001
#>
#> Model:
#> m2 ~ x + c2 + c1
#>
#> Estimate CI.lo CI.hi betaS Std. Error t value Pr(>|t|) Sig
#> (Intercept) 4.3796 1.7071 7.0520 0.0000 1.3463 3.2530 0.0016 **
#> x 0.2968 0.0784 0.5151 0.2516 0.1100 2.6977 0.0082 **
#> c2 -0.3125 -0.5464 -0.0786 -0.2457 0.1178 -2.6517 0.0094 **
#> c1 0.2746 0.0621 0.4870 0.2334 0.1070 2.5655 0.0118 *
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#> - BetaS are standardized coefficients with (a) only numeric variables
#> standardized and (b) product terms formed after standardization.
#> Variable(s) standardized is/are: m2, x, c2, c1
#> - CI.lo and CI.hi are the 95.0% confidence levels of 'Estimate'
#> computed from the t values and standard errors.
#> R-square = 0.217. Adjusted R-square = 0.193. F(3, 96) = 8.875, p < .001
#>
#> Model:
#> y ~ m1 + m2 + x + c2 + c1
#>
#> Estimate CI.lo CI.hi betaS Std. Error t value Pr(>|t|) Sig
#> (Intercept) 2.6562 -3.6518 8.9641 0.0000 3.1770 0.8361 0.4052
#> m1 0.4864 0.1071 0.8658 0.3016 0.1911 2.5459 0.0125 *
#> m2 0.4713 0.0767 0.8658 0.2378 0.1987 2.3718 0.0197 *
#> x 0.2911 -0.2438 0.8260 0.1245 0.2694 1.0807 0.2826
#> c2 -0.0321 -0.4970 0.4327 -0.0127 0.2341 -0.1372 0.8912
#> c1 -0.0526 -0.4806 0.3755 -0.0225 0.2156 -0.2438 0.8079
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#> - BetaS are standardized coefficients with (a) only numeric variables
#> standardized and (b) product terms formed after standardization.
#> Variable(s) standardized is/are: y, m1, m2, x, c2, c1
#> - CI.lo and CI.hi are the 95.0% confidence levels of 'Estimate'
#> computed from the t values and standard errors.
#> R-square = 0.283. Adjusted R-square = 0.245. F(5, 94) = 7.420, p < .001
#>
#> ===================================================
#> | Indirect Effect Results |
#> ===================================================
#>
#> == Indirect Effect(s) ==
#> ind CI.lo CI.hi Sig pvalue SE
#> x -> m1 -> y 0.4266 0.1859 0.7997 Sig 0.0000 0.1624
#> x -> m2 -> y 0.1399 0.0109 0.3563 Sig 0.0000 0.0909
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - The 'ind' column shows the indirect effects.
#>
#> == Indirect Effect(s) (x-variable(s) Standardized) ==
#> std CI.lo CI.hi Sig pvalue SE
#> x -> m1 -> y 0.4279 0.1818 0.8064 Sig 0.0000 0.1632
#> x -> m2 -> y 0.1403 0.0109 0.3644 Sig 0.0000 0.0927
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - std: The partially standardized indirect effects.
#> - x-variable(s) standardized.
#>
#> == Indirect Effect(s) (y-variable(s) Standardized) ==
#> std CI.lo CI.hi Sig pvalue SE
#> x -> m1 -> y 0.1819 0.0786 0.3289 Sig 0.0000 0.0638
#> x -> m2 -> y 0.0596 0.0048 0.1505 Sig 0.0000 0.0371
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - std: The partially standardized indirect effects.
#> - y-variable(s) standardized.
#>
#> == Indirect Effect(s) (Both x-variable(s) and y-variable(s) Standardized) ==
#> std CI.lo CI.hi Sig pvalue SE
#> x -> m1 -> y 0.1825 0.0772 0.3245 Sig 0.0000 0.0646
#> x -> m2 -> y 0.0598 0.0048 0.1510 Sig 0.0000 0.0379
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - std: The standardized indirect effects.
#>
#> ===================================================
#> | Total Indirect Effect Results |
#> ===================================================
#>
#> == Indirect Effect ==
#>
#> Path: x -> m1 -> y
#> Path: x -> m2 -> y
#> Function of Effects: 0.5664
#> 95.0% Bootstrap CI: [0.2585 to 1.0374]
#> Bootstrap p-value: 0.0000
#> Bootstrap SE: 0.1865
#>
#> Computation of the Function of Effects:
#> (x->m1->y)
#> +(x->m2->y)
#>
#>
#> Percentile confidence interval formed by nonparametric bootstrapping
#> with 100 bootstrap samples.
#> Standard error (SE) based on nonparametric bootstrapping with 100
#> bootstrap samples.
#>
#>
#> == Indirect Effect (‘x’ Standardized) ==
#>
#> Path: x -> m1 -> y
#> Path: x -> m2 -> y
#> Function of Effects: 0.5682
#> 95.0% Bootstrap CI: [0.2569 to 1.0270]
#> Bootstrap p-value: 0.0000
#> Bootstrap SE: 0.1896
#>
#> Computation of the Function of Effects:
#> (x->m1->y)
#> +(x->m2->y)
#>
#>
#> Percentile confidence interval formed by nonparametric bootstrapping
#> with 100 bootstrap samples.
#> Standard error (SE) based on nonparametric bootstrapping with 100
#> bootstrap samples.
#>
#>
#> == Indirect Effect (‘y’ Standardized) ==
#>
#> Path: x -> m1 -> y
#> Path: x -> m2 -> y
#> Function of Effects: 0.2415
#> 95.0% Bootstrap CI: [0.1084 to 0.3976]
#> Bootstrap p-value: 0.0000
#> Bootstrap SE: 0.0691
#>
#> Computation of the Function of Effects:
#> (x->m1->y)
#> +(x->m2->y)
#>
#>
#> Percentile confidence interval formed by nonparametric bootstrapping
#> with 100 bootstrap samples.
#> Standard error (SE) based on nonparametric bootstrapping with 100
#> bootstrap samples.
#>
#>
#> == Indirect Effect (Both ‘x’ and ‘y’ Standardized) ==
#>
#> Path: x -> m1 -> y
#> Path: x -> m2 -> y
#> Function of Effects: 0.2423
#> 95.0% Bootstrap CI: [0.1082 to 0.3971]
#> Bootstrap p-value: 0.0000
#> Bootstrap SE: 0.0711
#>
#> Computation of the Function of Effects:
#> (x->m1->y)
#> +(x->m2->y)
#>
#>
#> Percentile confidence interval formed by nonparametric bootstrapping
#> with 100 bootstrap samples.
#> Standard error (SE) based on nonparametric bootstrapping with 100
#> bootstrap samples.
#>
#>
#> ===================================================
#> | Direct Effect Results |
#> ===================================================
#>
#> == Effect(s) ==
#> ind CI.lo CI.hi Sig pvalue SE
#> x -> y 0.2911 -0.1761 0.6467 0.1800 0.1995
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - The 'ind' column shows the effects.
#>
#> == Effect(s) (x-variable(s) Standardized) ==
#> std CI.lo CI.hi Sig pvalue SE
#> x -> y 0.2920 -0.1878 0.6656 0.1800 0.2017
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - std: The partially standardized effects.
#> - x-variable(s) standardized.
#>
#> == Effect(s) (y-variable(s) Standardized) ==
#> std CI.lo CI.hi Sig pvalue SE
#> x -> y 0.1241 -0.0726 0.2685 0.1800 0.0860
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - std: The partially standardized effects.
#> - y-variable(s) standardized.
#>
#> == Effect(s) (Both x-variable(s) and y-variable(s) Standardized) ==
#> std CI.lo CI.hi Sig pvalue SE
#> x -> y 0.1245 -0.0762 0.2816 0.1800 0.0874
#>
#> - [CI.lo to CI.hi] are 95.0% percentile confidence intervals by
#> nonparametric bootstrapping with 100 samples.
#> - [pvalue] are asymmetric bootstrap p-values.
#> - [SE] are standard errors.
#> - std: The standardized effects.
#>
#> ===================================================
#> | Notes |
#> ===================================================
#>
#> - For reference, the bootstrap confidence interval (and bootstrap
#>   p-value, if requested) of the (unstandardized) direct effect is also
#>   reported. The bootstrap p-value and the OLS t-statistic p-value can
#>   be different.
#> - For the direct effects with either 'x'-variable or 'y'-variable, or
#> both, standardized, it is recommended to use the bootstrap confidence
#> intervals, which take into account the sampling error of the sample
#> standard deviations.
#> - The asymmetric bootstrap p-value for an effect is the same whether x
#>   and/or y is/are standardized.
# Different control variables for m and y
# out <- q_parallel_mediation(x = "x",
# y = "y",
# m = c("m1", "m2"),
# cov = list(m1 = "c1",
# m2 = c("c2", "c1"),
# y = "c2"),
# data = data_parallel,
# R = 100,
# seed = 1234,
# parallel = FALSE,
# progress = FALSE)
# out