5 Confirmatory Factor Analysis
You can download the R code used in this lab by right-clicking this link and selecting “Save Link As…” in the drop-down menu: confirmatoryfactoranalysis.R
5.1 Loading R packages
Load the required packages for this lab into your R environment:
library(rio)
library(lavaan)
library(semTools)
library(semhelpinghands)
5.2 Loading Data
Load the data into your environment. For this lab we will use the data that was also used in the book chapter example: a sample of 200 children who completed the KABC (Kaufman Assessment Battery for Children), which includes eight subtests: Hand Movements (hm), Number Recall (nr), Word Order (wo), Gestalt Closure (gc), Triangles (tr), Spatial Memory (sm), Matrix Analogies (ma), and Photo Series (ps).
You can download the data by right-clicking this link and selecting “Save Link As…” in the drop-down menu: data/kabc.csv. Make sure to save it in the folder you are using for this class.
kabc <- import(file = "data/kabc.csv")
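As an optional check that the import worked, you can preview the data with base R (output not shown):
head(kabc)
nrow(kabc)  # should return 200, matching the sample described above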
5.3 Unidimensional CFA
Model Specification
Next, we will specify a single-factor CFA using lavaan syntax. We will use a new operator, =~, which tells lavaan that we are specifying a reflective latent factor. You can give the factor any name you want and put it on the left side of =~. All the indicators of the latent factor go on the right side of =~, with +s in between:
mod1 <- '
CogA =~ hm + nr + wo + gc + tr + sm + ma + ps
'
Model Estimation
Next, we will estimate the model using different model identification constraints. We will start with lavaan's default: the reference variable method (or unit loading identification). To use this method, we do not need to add anything to our model estimation function, cfa().
fit1a <- cfa(model = mod1, data = kabc)
To estimate the same model using a unit variance identification constraint, we can change the code as follows:
fit1b <- cfa(model = mod1, data = kabc,
             std.lv = TRUE)
Finally, to estimate the same model using effect coding identification constraints, we can change the code as follows:
fit1c <- cfa(model = mod1, data = kabc,
             effect.coding = TRUE)
Are all these models equivalent? We can use the net() function to check:
net(fit1a, fit1b, fit1c)
If cell [R, C] is TRUE, the model in row R is nested within column C.
If the models also have the same degrees of freedom, they are equivalent.
NA indicates the model in column C did not converge when fit to the
implied means and covariance matrix from the model in row R.
The hidden diagonal is TRUE because any model is equivalent to itself.
The upper triangle is hidden because for models with the same degrees
of freedom, cell [C, R] == cell [R, C]. For all models with different
degrees of freedom, the upper diagonal is all FALSE because models with
fewer degrees of freedom (i.e., more parameters) cannot be nested
within models with more degrees of freedom (i.e., fewer parameters).
fit1a fit1b fit1c
fit1a (df = 20)
fit1b (df = 20) TRUE
fit1c (df = 20) TRUE TRUE
To quickly compare the parameter estimates across model identification options, we can use the group_by_models() function from the semhelpinghands package:
group_by_models(list("marker" = fit1a,
"variance" = fit1b,
"effect" = fit1c))
lhs op rhs est_marker est_variance est_effect
1 CogA =~ gc 0.659 1.265 0.699
2 CogA =~ hm 1.000 1.921 1.060
3 CogA =~ ma 0.883 1.696 0.936
4 CogA =~ nr 0.636 1.221 0.674
5 CogA =~ ps 1.166 2.240 1.237
6 CogA =~ sm 1.433 2.752 1.519
7 CogA =~ tr 0.963 1.850 1.021
8 CogA =~ wo 0.805 1.547 0.854
9 CogA ~~ CogA 3.690 1.000 3.282
10 gc ~~ gc 5.652 5.652 5.652
11 hm ~~ hm 7.812 7.812 7.812
12 ma ~~ ma 4.925 4.925 4.925
13 nr ~~ nr 4.240 4.240 4.240
14 ps ~~ ps 3.936 3.936 3.936
15 sm ~~ sm 9.979 9.979 9.979
16 tr ~~ tr 3.831 3.831 3.831
17 wo ~~ wo 5.975 5.975 5.975
A nice way to confirm that the models are equivalent is to compare the standardized estimates across the different options:
group_by_models(list("marker" = fit1a,
"variance" = fit1b,
"effect" = fit1c),
use_standardizedSolution = TRUE,
col_names = "est.std")
lhs op rhs est.std_marker est.std_variance est.std_effect
1 CogA =~ gc 0.470 0.470 0.470
2 CogA =~ hm 0.566 0.566 0.566
3 CogA =~ ma 0.607 0.607 0.607
4 CogA =~ nr 0.510 0.510 0.510
5 CogA =~ ps 0.749 0.749 0.749
6 CogA =~ sm 0.657 0.657 0.657
7 CogA =~ tr 0.687 0.687 0.687
8 CogA =~ wo 0.535 0.535 0.535
9 CogA ~~ CogA 1.000 1.000 1.000
10 gc ~~ gc 0.779 0.779 0.779
11 hm ~~ hm 0.679 0.679 0.679
12 ma ~~ ma 0.631 0.631 0.631
13 nr ~~ nr 0.740 0.740 0.740
14 ps ~~ ps 0.440 0.440 0.440
15 sm ~~ sm 0.569 0.569 0.569
16 tr ~~ tr 0.528 0.528 0.528
17 wo ~~ wo 0.714 0.714 0.714
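If you only need the estimates for a single model in data-frame form, lavaan's parameterEstimates() function provides the same information (output not shown):
parameterEstimates(fit1a, standardized = TRUE)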
Model Evaluation
Here, we will follow the steps described in the textbook.
Global Fit
First, we will test the exact fit of the model to the data using the Chi-square model test. We can find this statistic by including fit.measures = T in the summary() function and finding the section labeled Model Test User Model. Note that I also set estimates = F, so that the parameter estimates are not included in the output (mostly to reduce the length of the output):
summary(fit1a, fit.measures = T, estimates = F)
lavaan 0.6-19 ended normally after 36 iterations
Estimator ML
Optimization method NLMINB
Number of model parameters 16
Number of observations 200
Model Test User Model:
Test statistic 105.427
Degrees of freedom 20
P-value (Chi-square) 0.000
Model Test Baseline Model:
Test statistic 498.336
Degrees of freedom 28
P-value 0.000
User Model versus Baseline Model:
Comparative Fit Index (CFI) 0.818
Tucker-Lewis Index (TLI) 0.746
Loglikelihood and Information Criteria:
Loglikelihood user model (H0) -3812.592
Loglikelihood unrestricted model (H1) -3759.878
Akaike (AIC) 7657.183
Bayesian (BIC) 7709.956
Sample-size adjusted Bayesian (SABIC) 7659.267
Root Mean Square Error of Approximation:
RMSEA 0.146
90 Percent confidence interval - lower 0.119
90 Percent confidence interval - upper 0.174
P-value H_0: RMSEA <= 0.050 0.000
P-value H_0: RMSEA >= 0.080 1.000
Standardized Root Mean Square Residual:
SRMR 0.084
What does the Chi-square test tell us about the exact fit of this model?
Local Fit
To identify where the friction between the data and the model might be, we can look at local fit using the covariance residuals. It can be helpful to compare results across different methods, so we will use the normalized residuals and the correlation residuals:
residuals(fit1a, type = "normalized")
$type
[1] "normalized"
$cov
hm nr wo gc tr sm ma ps
hm 0.000
nr 1.331 0.000
wo 0.629 4.667 0.000
gc -0.777 -1.823 -1.274 0.000
tr -0.930 -1.098 -1.050 0.757 0.000
sm 0.367 -0.613 -0.970 -0.117 0.240 0.000
ma 0.608 0.138 -0.334 0.334 0.038 0.147 0.000
ps -0.448 -1.249 -0.402 0.890 0.804 0.230 -0.450 0.000
residuals(fit1a, type = "cor.bollen")
$type
[1] "cor.bollen"
$cov
hm nr wo gc tr sm ma ps
hm 0.000
nr 0.101 0.000
wo 0.047 0.397 0.000
gc -0.056 -0.130 -0.091 0.000
tr -0.069 -0.080 -0.077 0.057 0.000
sm 0.028 -0.045 -0.071 -0.009 0.019 0.000
ma 0.046 0.010 -0.025 0.025 0.003 0.011 0.000
ps -0.034 -0.092 -0.030 0.068 0.066 0.018 -0.035 0.000
What patterns can you see in the covariance residuals? Do you think a different CFA would be more appropriate?
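The sections that follow compare against a two-factor CFA stored in fit2a, which distinguishes sequential and simultaneous processing. That model is specified in a part of the lab not shown here (section 5.4); based on the factor names used below and the reported degrees of freedom (df = 19), it was presumably specified along these lines (treat this as a reconstruction):
mod2 <- '
Sequent =~ hm + nr + wo
Simult =~ gc + tr + sm + ma + ps
'
fit2a <- cfa(model = mod2, data = kabc)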
5.5 Model Comparison
We can also use the Chi-square difference test to compare the original and modified CFAs. The original one-factor CFA model is more constrained, and the modified two-factor CFA model is less constrained; a significant Chi-square difference therefore indicates that the more constrained one-factor model fits significantly worse than the two-factor model. We can use the compareFit() function from the semTools package to compare the models using the Chi-square difference test:
comp12 <- compareFit(fit1a, fit2a)
summary(comp12)
################### Nested Model Comparison #########################
Chi-Squared Difference Test
Df AIC BIC Chisq Chisq diff RMSEA Df diff Pr(>Chisq)
fit2a 19 7592.1 7648.2 38.325
fit1a 20 7657.2 7710.0 105.427 67.102 0.5749 1 2.578e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
####################### Model Fit Indices ###########################
chisq df pvalue rmsea cfi tli srmr aic bic
fit2a 38.325† 19 .005 .071† .959† .939† .072† 7592.082† 7648.153†
fit1a 105.427 20 .000 .146 .818 .746 .084 7657.183 7709.956
################## Differences in Fit Indices #######################
df rmsea cfi tli srmr aic bic
fit1a - fit2a 1 0.075 -0.141 -0.194 0.011 65.102 61.804
Does the less constrained two-factor model result in an improvement in global fit?
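The same Chi-square difference test can also be obtained with lavaan's built-in anova() method, a quick alternative when you only need the nested-model test:
anova(fit1a, fit2a)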
5.6 Introducing Cross-Loadings or Residual Covariances
Even though the two-factor CFA fits better than the one-factor CFA, its model Chi-square test is still significant, indicating that it does not fit the data exactly. We can use the local fit information to help us decide what kinds of theory-informed alterations might be reasonable to add. One option is to include a cross-loading of Hand Movements (which has its main loading on the Sequential factor): the task requires exact reproduction of a sequence of hand movements performed by the examiner, but there is also a clear visual-spatial component to the task that could reflect simultaneous processing. A second option is to include a residual covariance between Number Recall and Word Order. It might be that both of these tasks require memorization, a component they may not share with Hand Movements.
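To see which additional parameters the data would support, we can also consult lavaan's modification indices (here limited to the five largest; use them only to inform theory-driven choices like the two described above):
modindices(fit2a, sort. = TRUE, maximum.number = 5)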
Model (re)Specification and Estimation
For the purpose of this lab, we will add the cross-loading:
mod3 <- '
Sequent =~ hm + nr + wo
Simult =~ gc + tr + sm + ma + ps + hm
'
We can estimate the model and save it in fit3a:
fit3a <- cfa(model = mod3, data = kabc)
Model Evaluation
We will use the same sources of information to evaluate the fit of this modified model.
Global Fit
summary(fit3a, fit.measures = T, estimates = F)
lavaan 0.6-19 ended normally after 49 iterations
Estimator ML
Optimization method NLMINB
Number of model parameters 18
Number of observations 200
Model Test User Model:
Test statistic 18.108
Degrees of freedom 18
P-value (Chi-square) 0.449
Model Test Baseline Model:
Test statistic 498.336
Degrees of freedom 28
P-value 0.000
User Model versus Baseline Model:
Comparative Fit Index (CFI) 1.000
Tucker-Lewis Index (TLI) 1.000
Loglikelihood and Information Criteria:
Loglikelihood user model (H0) -3768.932
Loglikelihood unrestricted model (H1) -3759.878
Akaike (AIC) 7573.864
Bayesian (BIC) 7633.234
Sample-size adjusted Bayesian (SABIC) 7576.208
Root Mean Square Error of Approximation:
RMSEA 0.005
90 Percent confidence interval - lower 0.000
90 Percent confidence interval - upper 0.063
P-value H_0: RMSEA <= 0.050 0.859
P-value H_0: RMSEA >= 0.080 0.008
Standardized Root Mean Square Residual:
SRMR 0.035
What does the Chi-square test tell us about the exact fit of this model?
Local Fit
residuals(fit3a, type = "normalized")
$type
[1] "normalized"
$cov
hm nr wo gc tr sm ma ps
hm 0.000
nr 0.270 0.000
wo -0.270 0.000 0.000
gc -0.688 -1.380 -0.678 0.000
tr -0.713 -0.395 -0.130 0.286 0.000
sm 0.706 0.169 0.026 -0.443 -0.099 0.000
ma 1.044 0.955 0.682 0.154 -0.115 0.155 0.000
ps -0.181 -0.457 0.606 0.418 0.280 -0.091 -0.577 0.000
residuals(fit3a, type = "cor.bollen")
$type
[1] "cor.bollen"
$cov
hm nr wo gc tr sm ma ps
hm 0.000
nr 0.020 0.000
wo -0.020 0.000 0.000
gc -0.050 -0.098 -0.049 0.000
tr -0.053 -0.029 -0.010 0.022 0.000
sm 0.054 0.012 0.002 -0.033 -0.008 0.000
ma 0.079 0.071 0.050 0.011 -0.009 0.012 0.000
ps -0.014 -0.034 0.046 0.032 0.023 -0.007 -0.044 0.000
What patterns can you see in the covariance residuals? Do we need to consider any more adjustments?
Model Comparison
comp23 <- compareFit(fit3a, fit2a)
summary(comp23)
################### Nested Model Comparison #########################
Chi-Squared Difference Test
Df AIC BIC Chisq Chisq diff RMSEA Df diff Pr(>Chisq)
fit3a 18 7573.9 7633.2 18.108
fit2a 19 7592.1 7648.2 38.325 20.217 0.30998 1 6.913e-06 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
####################### Model Fit Indices ###########################
chisq df pvalue rmsea cfi tli srmr aic bic
fit3a 18.108† 18 .449 .005† 1.000† 1.000† .035† 7573.864† 7633.234†
fit2a 38.325 19 .005 .071 .959 .939 .072 7592.082 7648.153
################## Differences in Fit Indices #######################
df rmsea cfi tli srmr aic bic
fit2a - fit3a 1 0.066 -0.041 -0.06 0.037 18.217 14.919
Does the modified (less constrained) model result in an improvement in global fit?
5.7 Approximate Fit Indices
So far, we have ignored the approximate fit indices that are reported by lavaan. We can get a quick overview of these indices across the three main models we have specified by using the compareFit() function:
comp123 <- compareFit(fit3a, fit2a, fit1a)
summary(comp123)
################### Nested Model Comparison #########################
Chi-Squared Difference Test
Df AIC BIC Chisq Chisq diff RMSEA Df diff Pr(>Chisq)
fit3a 18 7573.9 7633.2 18.108
fit2a 19 7592.1 7648.2 38.325 20.217 0.30998 1 6.913e-06 ***
fit1a 20 7657.2 7710.0 105.427 67.102 0.57490 1 2.578e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
####################### Model Fit Indices ###########################
chisq df pvalue rmsea cfi tli srmr aic bic
fit3a 18.108† 18 .449 .005† 1.000† 1.000† .035† 7573.864† 7633.234†
fit2a 38.325 19 .005 .071 .959 .939 .072 7592.082 7648.153
fit1a 105.427 20 .000 .146 .818 .746 .084 7657.183 7709.956
################## Differences in Fit Indices #######################
df rmsea cfi tli srmr aic bic
fit2a - fit3a 1 0.066 -0.041 -0.060 0.037 18.217 14.919
fit1a - fit2a 1 0.075 -0.141 -0.194 0.011 65.102 61.804
The second table in the output includes commonly reported indices of approximate fit (e.g., RMSEA, CFI, SRMR).
Take a closer look at these indices across the three models and determine, for each model, whether it fits well (a) approximately or (b) exactly.
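If you only need these indices for a single model, they can be extracted directly with lavaan's fitMeasures() function:
fitMeasures(fit3a, c("chisq", "df", "pvalue", "rmsea", "cfi", "tli", "srmr"))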
5.8 Estimate Interpretation
Now that we’ve completed the model respecification cycle, we can interpret the results of our final model. First, we can look at the (unstandardized) parameter estimates:
summary(fit3a, std = T, rsquare = T)
lavaan 0.6-19 ended normally after 49 iterations
Estimator ML
Optimization method NLMINB
Number of model parameters 18
Number of observations 200
Model Test User Model:
Test statistic 18.108
Degrees of freedom 18
P-value (Chi-square) 0.449
Parameter Estimates:
Standard errors Standard
Information Expected
Information saturated (h1) model Structured
Latent Variables:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
Sequent =~
hm 1.000 0.857 0.253
nr 2.285 0.777 2.941 0.003 1.958 0.818
wo 2.767 0.941 2.939 0.003 2.370 0.819
Simult =~
gc 1.000 1.345 0.500
tr 1.436 0.228 6.307 0.000 1.932 0.717
sm 2.074 0.340 6.095 0.000 2.790 0.666
ma 1.241 0.215 5.762 0.000 1.670 0.598
ps 1.727 0.266 6.500 0.000 2.324 0.777
hm 0.986 0.248 3.979 0.000 1.326 0.391
Covariances:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
Sequent ~~
Simult 0.587 0.231 2.546 0.011 0.510 0.510
Variances:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
.hm 7.851 0.845 9.291 0.000 7.851 0.683
.nr 1.899 0.487 3.896 0.000 1.899 0.331
.wo 2.750 0.713 3.856 0.000 2.750 0.329
.gc 5.444 0.585 9.297 0.000 5.444 0.750
.tr 3.521 0.457 7.702 0.000 3.521 0.485
.sm 9.767 1.179 8.287 0.000 9.767 0.556
.ma 5.013 0.569 8.815 0.000 5.013 0.643
.ps 3.554 0.529 6.718 0.000 3.554 0.397
Sequent 0.734 0.490 1.499 0.134 1.000 1.000
Simult 1.810 0.526 3.444 0.001 1.000 1.000
R-Square:
Estimate
hm 0.317
nr 0.669
wo 0.671
gc 0.250
tr 0.515
sm 0.444
ma 0.357
ps 0.603
The unstandardized estimates are mostly used to evaluate the significance of the different parameter estimates. When using CFA, the focus is typically on interpreting the standardized estimates. The interpretation of standardized factor loadings depends on whether an indicator loads on one factor or on multiple factors:
- One factor loading: can be interpreted as the Pearson correlation between the indicator and its factor.
- Multiple (cross-)loadings: can be interpreted as standardized regression coefficients, indicating the expected change in the indicator, in standard deviation units, for a one standard deviation increase in factor A, holding factor B constant.
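To view only the standardized estimates, together with standard errors and confidence intervals, lavaan's standardizedSolution() function can be used (output not shown):
standardizedSolution(fit3a)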
In the summary() call above, we also requested R-squared estimates (rsquare = T). Ideally, factors explain at least half of the variance in continuous indicators (R-squared = .50), which would be excellent. Values near .40 are very good, near .30 are good, near .20 are fair, and near .10 are poor. What kind of R-squared is acceptable depends on the purpose the latent factor will be used for. If the factors are used to make high-stakes decisions, you would want the indicators to share a lot of common variance, which means higher R-squared values.
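The R-squared values can also be extracted on their own with lavInspect():
lavInspect(fit3a, what = "rsquare")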
5.9 Reliability based on factor models
We can estimate the reliability/internal consistency with coefficient omega, which can be interpreted in a similar manner as Cronbach's alpha but takes into account that the indicators are caused by a shared common factor with differing degrees of association (i.e., different factor loadings). We can use the compRelSEM() function from the semTools package to compute coefficient omega:
compRelSEM(fit3a)
Sequent Simult
0.559 0.737
How would you interpret the reliability of these two factors?
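For comparison, an alpha-like estimate can be requested from the same function; this sketch assumes the tau.eq argument of compRelSEM(), which computes reliability under tau-equivalence (equal loadings):
compRelSEM(fit3a, tau.eq = TRUE)  # alpha-like estimate, assuming tau-equivalence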
5.10 Summary
In this R lab, you were introduced to the steps involved in specifying, estimating, evaluating, comparing and interpreting the results of confirmatory factor analyses. Below, you’ll find a Bonus section that demonstrates how to include a residual covariance in a CFA model. In the next R Lab, you will learn all about structural equation modeling, combining the power of path analyses and CFAs.
5.11 Bonus: Adding a residual covariance
To add a residual covariance between two indicators, we use the ~~ operator:
mod4 <- '
Sequent =~ hm + nr + wo
Simult =~ gc + tr + sm + ma + ps
nr ~~ wo
'
We can estimate the model and save it in fit4a:
fit4a <- cfa(model = mod4, data = kabc)
Model Evaluation
Global Fit
summary(fit4a, fit.measures = T, estimates = F)
lavaan 0.6-19 ended normally after 57 iterations
Estimator ML
Optimization method NLMINB
Number of model parameters 18
Number of observations 200
Model Test User Model:
Test statistic 18.108
Degrees of freedom 18
P-value (Chi-square) 0.449
Model Test Baseline Model:
Test statistic 498.336
Degrees of freedom 28
P-value 0.000
User Model versus Baseline Model:
Comparative Fit Index (CFI) 1.000
Tucker-Lewis Index (TLI) 1.000
Loglikelihood and Information Criteria:
Loglikelihood user model (H0) -3768.932
Loglikelihood unrestricted model (H1) -3759.878
Akaike (AIC) 7573.864
Bayesian (BIC) 7633.234
Sample-size adjusted Bayesian (SABIC) 7576.208
Root Mean Square Error of Approximation:
RMSEA 0.005
90 Percent confidence interval - lower 0.000
90 Percent confidence interval - upper 0.063
P-value H_0: RMSEA <= 0.050 0.859
P-value H_0: RMSEA >= 0.080 0.008
Standardized Root Mean Square Residual:
SRMR 0.035
Local Fit
residuals(fit4a, type = "normalized")
$type
[1] "normalized"
$cov
hm nr wo gc tr sm ma ps
hm 0.000
nr 0.270 0.000
wo -0.271 0.000 0.000
gc -0.688 -1.380 -0.678 0.000
tr -0.713 -0.395 -0.130 0.286 0.000
sm 0.706 0.169 0.026 -0.443 -0.099 0.000
ma 1.044 0.955 0.682 0.154 -0.115 0.155 0.000
ps -0.181 -0.457 0.606 0.418 0.280 -0.091 -0.577 0.000
residuals(fit4a, type = "cor.bollen")
$type
[1] "cor.bollen"
$cov
hm nr wo gc tr sm ma ps
hm 0.000
nr 0.020 0.000
wo -0.020 0.000 0.000
gc -0.050 -0.098 -0.049 0.000
tr -0.053 -0.029 -0.010 0.022 0.000
sm 0.054 0.012 0.002 -0.033 -0.008 0.000
ma 0.079 0.071 0.050 0.011 -0.009 0.012 0.000
ps -0.014 -0.034 0.046 0.032 0.023 -0.007 -0.044 0.000
Model Comparison
comp24 <- compareFit(fit4a, fit2a)
summary(comp24)
################### Nested Model Comparison #########################
Chi-Squared Difference Test
Df AIC BIC Chisq Chisq diff RMSEA Df diff Pr(>Chisq)
fit4a 18 7573.9 7633.2 18.108
fit2a 19 7592.1 7648.2 38.325 20.217 0.30998 1 6.913e-06 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
####################### Model Fit Indices ###########################
chisq df pvalue rmsea cfi tli srmr aic bic
fit4a 18.108† 18 .449 .005† 1.000† 1.000† .035† 7573.864† 7633.234†
fit2a 38.325 19 .005 .071 .959 .939 .072 7592.082 7648.153
################## Differences in Fit Indices #######################
df rmsea cfi tli srmr aic bic
fit2a - fit4a 1 0.066 -0.041 -0.06 0.037 18.217 14.919
Estimate Interpretation
Take a look at the estimates and note the differences in standardized estimates and R-squared across the two models (with a cross-loading vs. with a residual covariance); a side-by-side comparison follows the output below.
summary(fit4a, std = T, rsquare = T)
lavaan 0.6-19 ended normally after 57 iterations
Estimator ML
Optimization method NLMINB
Number of model parameters 18
Number of observations 200
Model Test User Model:
Test statistic 18.108
Degrees of freedom 18
P-value (Chi-square) 0.449
Parameter Estimates:
Standard errors Standard
Information Expected
Information saturated (h1) model Structured
Latent Variables:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
Sequent =~
hm 1.000 2.303 0.679
nr 0.566 0.107 5.284 0.000 1.303 0.544
wo 0.685 0.129 5.292 0.000 1.578 0.545
Simult =~
gc 1.000 1.345 0.500
tr 1.436 0.228 6.307 0.000 1.932 0.717
sm 2.074 0.340 6.095 0.000 2.790 0.666
ma 1.241 0.215 5.762 0.000 1.670 0.598
ps 1.727 0.266 6.500 0.000 2.324 0.777
Covariances:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
.nr ~~
.wo 2.584 0.516 5.007 0.000 2.584 0.531
Sequent ~~
Simult 2.372 0.504 4.703 0.000 0.766 0.766
Variances:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
.hm 6.200 1.070 5.792 0.000 6.200 0.539
.nr 4.033 0.509 7.922 0.000 4.033 0.704
.wo 5.879 0.743 7.912 0.000 5.879 0.703
.gc 5.444 0.585 9.297 0.000 5.444 0.750
.tr 3.521 0.457 7.702 0.000 3.521 0.485
.sm 9.767 1.179 8.287 0.000 9.767 0.556
.ma 5.013 0.569 8.815 0.000 5.013 0.643
.ps 3.554 0.529 6.718 0.000 3.554 0.397
Sequent 5.302 1.304 4.066 0.000 1.000 1.000
Simult 1.810 0.526 3.444 0.001 1.000 1.000
R-Square:
Estimate
hm 0.461
nr 0.296
wo 0.297
gc 0.250
tr 0.515
sm 0.444
ma 0.357
ps 0.603
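For that side-by-side view of the standardized estimates from the cross-loading and residual-covariance models, we can reuse group_by_models() from semhelpinghands (the list names here are just labels):
group_by_models(list("crossloading" = fit3a,
                     "residualcov" = fit4a),
                use_standardizedSolution = TRUE,
                col_names = "est.std")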
Reliability based on factor models
compRelSEM(fit4a)
Sequent Simult
0.559 0.790