
In essence, quadratic effects are just a special case of interaction effects, where a variable interacts with itself. Thus, all of the modsem methods can be used to estimate quadratic effects as well.
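
This identity can be checked in plain R (no modsem involved): squaring a variable is the same as multiplying it by itself, so a quadratic regression is just an ordinary linear model with a product term. A minimal sketch on simulated data:

```r
# A quadratic effect is the product of a variable with itself:
# y ~ x + x*x is the same model as y ~ x + I(x^2).
set.seed(1)
x <- rnorm(500)
y <- 0.5 * x + 0.3 * x * x + rnorm(500)  # true quadratic effect = 0.3

coef(lm(y ~ x + I(x^2)))  # recovers both the linear and quadratic slopes
```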

Here is a simple example using the LMS approach.

library(modsem)
m1 <- '
# Outer Model
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
Z =~ z1 + z2 + z3

# Inner model
Y ~ X + Z + Z:X + X:X
'

est1Lms <- modsem(m1, data = oneInt, method = "lms")
summary(est1Lms)
#> Estimating null model
#> EM: Iteration =     1, LogLik =   -17831.87, Change = -17831.875
#> EM: Iteration =     2, LogLik =   -17831.87, Change =      0.000
#> 
#> modsem (version 1.0.2):
#>   Estimator                                         LMS
#>   Optimization method                         EM-NLMINB
#>   Number of observations                           2000
#>   Number of iterations                              119
#>   Loglikelihood                               -14687.61
#>   Akaike (AIC)                                 29439.22
#>   Bayesian (BIC)                               29618.45
#>  
#> Numerical Integration:
#>   Points of integration (per dim)                    24
#>   Dimensions                                          1
#>   Total points of integration                        24
#>  
#> Fit Measures for H0:
#>   Loglikelihood                                  -17832
#>   Akaike (AIC)                                 35723.75
#>   Bayesian (BIC)                               35891.78
#>   Chi-square                                      17.52
#>   Degrees of Freedom (Chi-square)                    24
#>   P-value (Chi-square)                            0.826
#>   RMSEA                                           0.000
#>  
#> Comparative fit to H0 (no interaction effect)
#>   Loglikelihood change                          3144.26
#>   Difference test (D)                           6288.52
#>   Degrees of freedom (D)                              2
#>   P-value (D)                                     0.000
#>  
#> R-Squared:
#>   Y                                               0.596
#> R-Squared Null-Model (H0):
#>   Y                                               0.395
#> R-Squared Change:
#>   Y                                               0.200
#> 
#> Parameter Estimates:
#>   Coefficients                           unstandardized
#>   Information                                  expected
#>   Standard errors                              standard
#>  
#> Latent Variables:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>   X =~ 
#>     x1               1.000                             
#>     x2               0.804      0.013   63.012    0.000
#>     x3               0.915      0.014   66.271    0.000
#>   Z =~ 
#>     z1               1.000                             
#>     z2               0.810      0.012   66.171    0.000
#>     z3               0.881      0.013   68.333    0.000
#>   Y =~ 
#>     y1               1.000                             
#>     y2               0.798      0.008  105.752    0.000
#>     y3               0.899      0.008  110.419    0.000
#> 
#> Regressions:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>   Y ~ 
#>     X                0.673      0.031   21.540    0.000
#>     Z                0.570      0.029   19.947    0.000
#>     X:X             -0.007      0.021   -0.347    0.729
#>     X:Z              0.715      0.029   24.650    0.000
#> 
#> Intercepts:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>     x1               1.023      0.019   52.949    0.000
#>     x2               1.215      0.017   73.568    0.000
#>     x3               0.919      0.018   50.461    0.000
#>     z1               1.013      0.024   42.060    0.000
#>     z2               1.207      0.020   60.231    0.000
#>     z3               0.917      0.021   42.638    0.000
#>     y1               1.046      0.036   29.319    0.000
#>     y2               1.228      0.029   42.324    0.000
#>     y3               0.962      0.032   29.737    0.000
#>     Y                0.000                             
#>     X                0.000                             
#>     Z                0.000                             
#> 
#> Covariances:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>   X ~~ 
#>     Z                0.199      0.024    8.388    0.000
#> 
#> Variances:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>     x1               0.160      0.008   18.899    0.000
#>     x2               0.163      0.007   23.854    0.000
#>     x3               0.165      0.008   21.059    0.000
#>     z1               0.167      0.009   18.501    0.000
#>     z2               0.160      0.007   23.074    0.000
#>     z3               0.158      0.007   21.192    0.000
#>     y1               0.160      0.009   17.916    0.000
#>     y2               0.154      0.007   23.035    0.000
#>     y3               0.163      0.008   20.571    0.000
#>     X                0.972      0.016   60.073    0.000
#>     Z                1.017      0.018   55.486    0.000
#>     Y                0.983      0.038   26.123    0.000
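
With a quadratic term in the model, the effect of X on Y is no longer a single number. From the regression estimates above, the implied slope of X is 0.673 + 2 · (−0.007) · X + 0.715 · Z. A small helper in plain R, using the point estimates printed above, makes this concrete:

```r
# Simple slope of X on Y implied by the LMS estimates above:
#   dY/dX = b_x + 2 * b_xx * X + b_xz * Z
b_x  <-  0.673  # Y ~ X
b_xz <-  0.715  # Y ~ X:Z
b_xx <- -0.007  # Y ~ X:X (quadratic term, here close to zero)

slope_x <- function(x, z) b_x + 2 * b_xx * x + b_xz * z

slope_x(x = 0, z = 0)  # slope at the latent means: 0.673
slope_x(x = 0, z = 1)  # one unit above the mean of Z: 1.388
```

Since the quadratic coefficient is essentially zero here, the slope of X varies with Z (the interaction) but hardly at all with X itself.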

The next example is a model with two quadratic effects and one interaction effect, estimated with both the QML and double-centering approaches, using a subset of the PISA 2006 data.

m2 <- '
ENJ =~ enjoy1 + enjoy2 + enjoy3 + enjoy4 + enjoy5
CAREER =~ career1 + career2 + career3 + career4
SC =~ academic1 + academic2 + academic3 + academic4 + academic5 + academic6
CAREER ~ ENJ + SC + ENJ:ENJ + SC:SC + ENJ:SC
'

est2Dblcent <- modsem(m2, data = jordan)
est2Qml <- modsem(m2, data = jordan, method = "qml")
summary(est2Qml)
#> Estimating null model
#> Starting M-step
#> 
#> modsem (version 1.0.2):
#>   Estimator                                         QML
#>   Optimization method                            NLMINB
#>   Number of observations                           6038
#>   Number of iterations                              101
#>   Loglikelihood                              -110520.22
#>   Akaike (AIC)                                221142.45
#>   Bayesian (BIC)                              221484.44
#>  
#> Fit Measures for H0:
#>   Loglikelihood                                 -110521
#>   Akaike (AIC)                                221138.58
#>   Bayesian (BIC)                              221460.46
#>   Chi-square                                    1016.34
#>   Degrees of Freedom (Chi-square)                    87
#>   P-value (Chi-square)                            0.000
#>   RMSEA                                           0.005
#>  
#> Comparative fit to H0 (no interaction effect)
#>   Loglikelihood change                             1.07
#>   Difference test (D)                              2.13
#>   Degrees of freedom (D)                              3
#>   P-value (D)                                     0.546
#>  
#> R-Squared:
#>   CAREER                                          0.512
#> R-Squared Null-Model (H0):
#>   CAREER                                          0.510
#> R-Squared Change:
#>   CAREER                                          0.002
#> 
#> Parameter Estimates:
#>   Coefficients                           unstandardized
#>   Information                                  observed
#>   Standard errors                              standard
#>  
#> Latent Variables:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>   ENJ =~ 
#>     enjoy1           1.000                             
#>     enjoy2           1.002      0.020   50.587    0.000
#>     enjoy3           0.894      0.020   43.669    0.000
#>     enjoy4           0.999      0.021   48.227    0.000
#>     enjoy5           1.047      0.021   50.400    0.000
#>   SC =~ 
#>     academic1        1.000                             
#>     academic2        1.104      0.028   38.946    0.000
#>     academic3        1.235      0.030   41.720    0.000
#>     academic4        1.254      0.030   41.828    0.000
#>     academic5        1.113      0.029   38.647    0.000
#>     academic6        1.198      0.030   40.356    0.000
#>   CAREER =~ 
#>     career1          1.000                             
#>     career2          1.040      0.016   65.180    0.000
#>     career3          0.952      0.016   57.838    0.000
#>     career4          0.818      0.017   48.358    0.000
#> 
#> Regressions:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>   CAREER ~ 
#>     ENJ              0.523      0.020   26.286    0.000
#>     SC               0.467      0.023   19.884    0.000
#>     ENJ:ENJ          0.026      0.022    1.206    0.228
#>     ENJ:SC          -0.039      0.046   -0.851    0.395
#>     SC:SC           -0.002      0.035   -0.058    0.953
#> 
#> Intercepts:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>     enjoy1           0.000      0.013   -0.008    0.994
#>     enjoy2           0.000      0.015    0.010    0.992
#>     enjoy3           0.000      0.013   -0.023    0.982
#>     enjoy4           0.000      0.014    0.008    0.993
#>     enjoy5           0.000      0.014    0.025    0.980
#>     academic1        0.000      0.016   -0.009    0.993
#>     academic2        0.000      0.014   -0.009    0.993
#>     academic3        0.000      0.015   -0.028    0.978
#>     academic4        0.000      0.016   -0.015    0.988
#>     academic5       -0.001      0.014   -0.044    0.965
#>     academic6        0.001      0.015    0.048    0.962
#>     career1         -0.004      0.017   -0.204    0.838
#>     career2         -0.004      0.018   -0.248    0.804
#>     career3         -0.004      0.017   -0.214    0.830
#>     career4         -0.004      0.016   -0.232    0.816
#>     CAREER           0.000                             
#>     ENJ              0.000                             
#>     SC               0.000                             
#> 
#> Covariances:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>   ENJ ~~ 
#>     SC               0.218      0.009   25.477    0.000
#> 
#> Variances:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>     enjoy1           0.487      0.011   44.335    0.000
#>     enjoy2           0.488      0.011   44.406    0.000
#>     enjoy3           0.596      0.012   48.233    0.000
#>     enjoy4           0.488      0.011   44.561    0.000
#>     enjoy5           0.442      0.010   42.470    0.000
#>     academic1        0.645      0.013   49.813    0.000
#>     academic2        0.566      0.012   47.864    0.000
#>     academic3        0.473      0.011   44.319    0.000
#>     academic4        0.455      0.010   43.579    0.000
#>     academic5        0.565      0.012   47.695    0.000
#>     academic6        0.502      0.011   45.434    0.000
#>     career1          0.373      0.009   40.392    0.000
#>     career2          0.328      0.009   37.019    0.000
#>     career3          0.436      0.010   43.272    0.000
#>     career4          0.576      0.012   48.372    0.000
#>     ENJ              0.500      0.017   29.547    0.000
#>     SC               0.338      0.015   23.195    0.000
#>     CAREER           0.302      0.010   29.711    0.000
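
The "Comparative fit to H0" section is a likelihood-ratio test: D is twice the loglikelihood change, compared against a chi-square distribution with as many degrees of freedom as there are added latent product terms (three here: ENJ:ENJ, SC:SC, and ENJ:SC). The reported p-value can be reproduced in base R:

```r
# Likelihood-ratio difference test from the "Comparative fit to H0" block:
D  <- 2.13  # difference test statistic, as reported above
df <- 3     # three added terms: ENJ:ENJ, SC:SC, ENJ:SC

pchisq(D, df = df, lower.tail = FALSE)  # ~ 0.546, matching the summary
```

With p = 0.546, the added quadratic and interaction terms do not improve fit in this model, consistent with their nonsignificant individual estimates.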

Note: The other approaches work as well, but they can be quite slow depending on the number of interaction effects (particularly the LMS and constrained approaches).