Dominance Analysis for Latent Variable Models: A Comparison of Methods With Categorical Indicators and Misspecified Models

Educational and Psychological Measurement, Ahead of Print.
Dominance analysis (DA) is a method for ordering the independent variables in a regression model by their relative importance in explaining variance in the dependent variable. This approach, originally described by Budescu, has recently been extended for use with structural equation models examining relationships among latent variables. Prior research demonstrated that this extension yields accurate results for latent variable models with normally distributed indicator variables and correctly specified models. The purpose of the current simulation study was to compare this latent variable DA approach with DA based on observed-variable regression and with DA applied when the latent variable model is estimated using two-stage least squares, for latent variable models with categorical indicators and/or model misspecification. Results indicated that the DA approach for latent variable models can provide accurate ordering of the variables and correct hypothesis selection when indicators are categorical and models are misspecified. A discussion of implications from this study is provided.
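As context for the observed-variable comparison method, the following is a minimal Python sketch of general dominance analysis in Budescu's sense: each predictor's weight is its incremental R-squared contribution, averaged within each subset size of the remaining predictors and then across sizes. The data, variable names, and function names are illustrative and not taken from the article.

```python
from itertools import combinations
import numpy as np

def r_squared(X, y):
    """R^2 from an OLS fit of y on X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def general_dominance(X, y):
    """General dominance weight for each column of X: the mean (over
    subset sizes) of the mean incremental R^2 when that predictor is
    added to every subset of the remaining predictors."""
    p = X.shape[1]
    weights = np.zeros(p)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        size_means = []
        for size in range(p):  # subset sizes 0 .. p-1
            contribs = []
            for S in combinations(others, size):
                base = r_squared(X[:, list(S)], y) if S else 0.0
                full = r_squared(X[:, list(S) + [j]], y)
                contribs.append(full - base)
            size_means.append(np.mean(contribs))
        weights[j] = np.mean(size_means)
    return weights
```

Sorting predictors by these weights yields the importance ordering; the weights sum to the full-model R-squared. Because every subset of the remaining predictors is examined (2^(p-1) regressions per predictor), this brute-force version is practical only for a modest number of predictors.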

The Impact and Detection of Uniform Differential Item Functioning for Continuous Item Response Models

Educational and Psychological Measurement, Volume 83, Issue 5, Page 929-952, October 2023.
Psychometricians have devoted substantial research attention to categorical item responses, leading to the development and widespread use of item response theory for estimating model parameters and identifying items that do not perform in the same way for examinees from different population subgroups (i.e., differential item functioning [DIF]). With the increasing use of computer-based measurement, items with a continuous response modality are becoming more common. Models for these items have been developed and refined in recent years, but less attention has been devoted to investigating DIF for these continuous response models (CRMs). Therefore, the purpose of this simulation study was to compare the performance of three potential methods for assessing DIF in CRMs: regression, the multiple indicators, multiple causes (MIMIC) model, and factor invariance testing. Study results revealed that the MIMIC model provided a combination of Type I error control and relatively high power for detecting DIF. Implications of these findings are discussed.
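For a concrete sense of the simplest of the three methods, the following is a minimal Python sketch of a regression-based screen for uniform DIF on a continuous item: the item response is regressed on a matching criterion plus a group indicator, and a significant group coefficient flags DIF. The variable names, the rest-score matching choice, and the alpha level are illustrative assumptions, not specifics from the article.

```python
import numpy as np
import statsmodels.api as sm

def uniform_dif_test(item, total, group, alpha=0.05):
    """Flag uniform DIF for one continuous item: does the 0/1 group
    indicator predict the item response after conditioning on the
    matching criterion (e.g., a rest score)?"""
    X = sm.add_constant(np.column_stack([total, group]))
    fit = sm.OLS(item, X).fit()
    p_group = fit.pvalues[2]  # coefficient on the group indicator
    return {"group_coef": fit.params[2],
            "p_value": p_group,
            "flag_dif": p_group < alpha}
```

The MIMIC and factor invariance approaches compared in the study require fitting latent variable models in SEM software and are not sketched here.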