An Evaluation of Fit Indices Used in Model Selection of Dichotomous Mixture IRT Models

Educational and Psychological Measurement, Ahead of Print.
A Monte Carlo simulation study was conducted to compare fit indices used for detecting the correct number of latent classes in three dichotomous mixture item response theory (IRT) models. Ten indices were considered: Akaike’s information criterion (AIC), the corrected AIC (AICc), the Bayesian information criterion (BIC), the consistent AIC (CAIC), Draper’s information criterion (DIC), the sample-size-adjusted BIC (SABIC), relative entropy, the integrated classification likelihood criterion (ICL-BIC), the adjusted Lo–Mendell–Rubin test (LMR), and the Vuong–Lo–Mendell–Rubin test (VLMR). The accuracy of the fit indices in detecting the correct number of latent classes was assessed across simulation conditions that varied sample size (2,500 and 5,000), test length (15, 30, and 45 items), mixture proportions (equal and unequal), number of latent classes (2, 3, and 4), and latent class separation (no separation and small separation). Results indicated that as the number of examinees or the number of items increased, correct identification rates also increased for most of the indices. Correct identification rates decreased, however, as the number of estimated latent classes or parameters (i.e., model complexity) increased. BIC, CAIC, DIC, SABIC, ICL-BIC, LMR, and VLMR performed well, and the relative entropy index tended to select the correct model most of the time. Consistent with previous studies, AIC and AICc performed poorly. Most of these indices had limited utility in the three-class and four-class mixture 3PL model conditions.
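For readers unfamiliar with the likelihood-based indices compared above, the following sketch shows how several of them are computed from a fitted model's maximized log-likelihood, its number of free parameters, and the sample size. The formulas (AIC, AICc, BIC, CAIC, SABIC) are the standard textbook definitions, not code from the study itself, and the example log-likelihood values are illustrative placeholders:

```python
import math

def information_criteria(log_lik, k, n):
    """Standard information criteria from a maximized log-likelihood
    (log_lik), number of free parameters (k), and sample size (n).
    Smaller values indicate better penalized relative fit."""
    aic = -2 * log_lik + 2 * k
    aicc = aic + (2 * k * (k + 1)) / (n - k - 1)        # small-sample correction
    bic = -2 * log_lik + k * math.log(n)
    caic = -2 * log_lik + k * (math.log(n) + 1)         # consistent AIC
    sabic = -2 * log_lik + k * math.log((n + 2) / 24)   # sample-size-adjusted BIC
    return {"AIC": aic, "AICc": aicc, "BIC": bic, "CAIC": caic, "SABIC": sabic}

# Hypothetical comparison of a 2-class and a 3-class solution fit to
# the same data (log-likelihoods and parameter counts are made up):
two_class = information_criteria(log_lik=-18250.4, k=31, n=2500)
three_class = information_criteria(log_lik=-18231.7, k=47, n=2500)
# The class solution with the smaller BIC would be retained:
preferred = "2-class" if two_class["BIC"] < three_class["BIC"] else "3-class"
```

The penalty terms explain the pattern the study reports: BIC, CAIC, and SABIC penalize each extra parameter by roughly ln(n), so they resist the over-extraction of classes that the more lightly penalized AIC and AICc permit.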

Words matter: The use of generic “you” in expressive writing in an oncology setting

Journal of Health Psychology, Ahead of Print.
The use of generic “you” (GY) in writing samples fosters psychological distancing and functions as a linguistic mechanism that facilitates emotion regulation. Patients processing emotions may use this device to create psychological distance from the traumatic experience of cancer. We used behavioral coding to analyze expressive writing samples collected from 138 cancer patients to examine the association between the use of “you” and cancer-related symptoms and psychological outcomes. Occurrences of GY were low, but our qualitative results showed how the use of GY could frame cancer as a universal experience. The use of GY was not associated with cancer-related symptoms or depressive symptoms, but longitudinal analyses revealed that patients using GY had fewer intrusive thoughts and avoidance behaviors across the follow-up assessments at 1, 4, and 10 months after the intervention. The development of psychological self-distancing prompts for use in writing interventions, or as a clinical tool for cancer patients, should be explored.

Measuring Unipolar Traits With Continuous Response Items: Some Methodological and Substantive Developments

Educational and Psychological Measurement, Ahead of Print.
In recent years, models for binary and graded response formats have been proposed to assess unipolar variables, or “quasi-traits.” These studies have mainly focused on clinical variables that have traditionally been treated as bipolar traits. In the present study, we propose a model for unipolar traits measured with continuous response items. The proposed log-logistic continuous unipolar model (LL-C) is remarkably simple and stays closer to the original binary formulation than the graded extensions do, which is an advantage. Furthermore, considering that irrational, extreme, or polarizing beliefs may constitute another domain of unipolar variables, we applied the proposal to an empirical example of superstitious beliefs. The results suggest that, in certain cases, the standard linear model can be a good approximation to the LL-C model in terms of parameter estimation and goodness of fit, but not in terms of trait estimates and their accuracy. The results also show the importance of taking the unipolar nature of this kind of trait into account when predicting criterion variables, since the validity results were clearly different.
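The abstract does not give the LL-C model's parameterization. For orientation only, models of this family build on the standard log-logistic distribution function, whose form is well known; the paper's actual item response function may differ:

$$
F(x;\alpha,\beta) \;=\; \frac{1}{1 + (x/\alpha)^{-\beta}}, \qquad x > 0,\; \alpha > 0,\; \beta > 0,
$$

where $\alpha$ is a scale parameter (the median of the distribution) and $\beta$ a shape parameter. Its bounded, monotonically increasing shape on $(0,\infty)$ is what makes it a natural candidate for linking a non-negative, unipolar latent trait to continuous bounded item responses.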