Joint Quantitative Brownbag

Speaker

Dr. Mijke Rhemtulla
Department of Psychology
University of California, Davis

Dr. Rhemtulla is an Associate Professor of Quantitative Psychology at UC Davis and director. Before moving to Davis, she was an Assistant Professor at the University of Amsterdam, and before that a postdoctoral researcher at the University of Kansas, a visiting researcher at the University of Texas, a graduate student in Developmental Psychology at the University of British Columbia, and an undergraduate student of psychology at the University of Alberta. Her research is largely focused on structural equation models, which allow researchers to represent unobservable psychological constructs via their hypothesized relations with observed (e.g., survey) variables. Some of the questions she has studied include: What is the best way to model difficult data, such as ordinal variables and missing data? How can we plan for missing data in a way that minimizes bias and maximizes efficiency? What goes wrong when the relations between observed and latent variables are not the same as those represented by the model? And what tools can we use to diagnose these problems? She has been an associate editor at the British Journal of Mathematical and Statistical Psychology and Advances in Methods and Practices in Psychological Science, and a guest editor for a special issue of Psychometrika on Network Psychometrics in Action. She is a UC Davis Chancellor's Fellow.

Title

Consequences of Mistaking the Measurement Model in SEM, Alternatives to Common Factors, and a Method for Model Selection

Abstract

Much of the appeal of structural equation models lies in their capacity to account for measurement error by modeling abstract constructs (extraversion, quality of life, school readiness) as latent common factors. This feature of SEMs has led researchers across the social sciences to use latent variable SEMs with little awareness of the assumptions that the reflective measurement model requires. But the choice of measurement model carries implications about the structure of the data and about data-construct associations: an incorrect model can change the meaning of the construct and render structural relations uninterpretable. When a common factor model is misapplied, structural model coefficients can be (highly) biased, and this bias can arise even when model fit is perfect. Recent developments allow two alternative measurement models to be implemented in SEM: a composite score model with user-defined weights, and a composite score model with model-estimated weights. In this talk I discuss the problem, present these alternative measurement models, and propose a statistical test that may help identify the most appropriate measurement model for a given dataset.
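
For readers less familiar with the distinction, the sketch below contrasts the two measurement structures in standard SEM notation. It is a generic illustration of a reflective (common factor) model versus a composite model, not the specific parameterization used in the talk.

```latex
% Reflective (common factor) model: each observed indicator x_j is a noisy
% reflection of a latent common factor \eta.
\begin{align*}
  x_j &= \lambda_j \eta + \varepsilon_j, \qquad j = 1, \dots, p,
        \qquad \operatorname{Cov}(\eta, \varepsilon_j) = 0 .
\end{align*}

% Composite model: the construct is a weighted sum of the indicators,
% with weights w_j either fixed by the user (e.g., unit weights) or
% estimated as part of the model.
\begin{equation*}
  C = \sum_{j=1}^{p} w_j x_j .
\end{equation*}
```

Under the reflective model, the shared variance among indicators is attributed to the latent factor; under the composite model, the construct is defined by (rather than reflected in) its indicators. Treating one structure as the other is the kind of mismatch that, as the abstract notes, can bias structural coefficients even when overall model fit looks acceptable.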