Bruno Zumbo

From Wikipedia, the free encyclopedia
Bruno D. Zumbo
Born: April 1966 (age 55)
Nationality: Canadian
Occupation: Professor
Years active: 1987–present (first published scientific paper appeared in 1987)
Known for: Differential Item Functioning (DIF); Validity Theory; Robust and Nonparametric Methods (e.g., Ordinal Reliability Indices); Pratt Indices for variable ordering; Classical Test Theory and Measurement Error Models
Title: Professor & Distinguished University Scholar; Tier 1 Canada Research Chair in Psychometrics and Measurement; Paragon UBC Professor of Psychometrics & Measurement
Academic background
Alma mater: Carleton University; University of Alberta
Doctoral advisor: Donald W. Zimmerman
Academic work
Discipline: Mathematical Science, Psychometrics, Statistics
Sub-discipline: Psychometrics, Measurement
Institutions: University of British Columbia, Vancouver, BC
Website: http://faculty.educ.ubc.ca/zumbo/cv.htm

Bruno D. Zumbo (born 1966) is an applied mathematician working at the intersection of the mathematical sciences and the behavioral, social and health sciences. He is currently Professor and Distinguished University Scholar, the Tier 1 Canada Research Chair in Psychometrics and Measurement, and the Paragon UBC Professor of Psychometrics & Measurement[1] at the University of British Columbia.

His research in the mathematical sciences spans mathematics and statistics aimed at developing and exploring the properties and applications of mathematical structures for measurement, survey design, testing, and assessment. Many of these projects are conducted with local, national, and international collaborators as well as graduate students and postdoctoral researchers. His research funding is directed at training, through research involvement, the next generation of social and health scientists, psychometricians, mathematical scientists, and professional research scientists working in governmental departments, non-governmental research institutes, and survey, testing, and assessment organizations.

He currently teaches in the graduate Measurement, Evaluation, & Research Methodology Program with an additional appointment in the Institute of Applied Mathematics, and earlier also in the Department of Statistics, at the University of British Columbia (UBC) in Vancouver, British Columbia, Canada. Prior to arriving at UBC in 2000, he held professorships in the Departments of Psychology and of Mathematics at the University of Northern British Columbia (1994-2000), and earlier in the Faculty of Education with adjunct appointment in the Department of Mathematics at the University of Ottawa (1990-1994).

Biography

He was born and raised in Edmonton, Alberta, one of six children (including a twin sister) of immigrant parents from Reggio di Calabria, Italy. He speaks English and Italian (including the Calabrese dialect), and reads English, Italian and French.

He was a first-generation university graduate. He completed his B.Sc. at the University of Alberta (Edmonton, AB) and his MA and Ph.D. at Carleton University (Ottawa, ON). His studies reflected his curiosity and diverse interests. With an eye to fostering his talents and interests, the University of Alberta allowed him to take graduate (Master's and PhD) courses in statistics and mathematics while completing his B.Sc. Likewise, in graduate school his professors allowed him to pursue and integrate mathematics, philosophy of science, and psychology.

His interests continue to focus on the mathematical sciences of measurement and scientific methodology, blending mathematics, social sciences such as psychology, philosophy of science, and measurement in science. His doctoral dissertation, titled "Statistical Methods to Overcome Nonindependence of Coupled Data in Significance Testing," was completed under the direction of Prof. Donald W. Zimmerman (Carleton University, Ottawa).

Program of research

Professor Zumbo is an internationally recognized mathematical scientist, psychometrician, and measurement theorist. His program of research has had wide influence in psychometric and validity theories, measurement invariance and test bias, educational and behavioral statistics, and language testing.[2]

Overview

He is known for his contributions in the fields of statistics, psychometrics, validity theory, and studies of the mathematical basis of classical test theory, item response theory, and measurement error models.

  • Over nearly 30 years his interdisciplinary program of research has had broad impact and is well recognized across a variety of disciplines, including psychometrics and measurement, statistics, language testing, educational research, quality of life and well-being, and health and human development.
  • His research has branched into many areas of statistics, measurement, and scientific methodology. He takes a problem-solving point of view, in that his research is not tied to any one area of statistics or measurement. His interests in mathematics focus mostly on analysis (e.g., real analysis, measure theory, metric spaces), with secondary interests in applications of number theory, abstract algebra, and linear algebra. He continues to learn and integrate new areas of mathematics (and philosophy) as necessary to solve new scientific problems.
  • As is evident from his publications and awards, his program of research is noted for addressing recurring cross-disciplinary controversies such as theories of measurement validity, measurement invariance, the role of levels (or scales) of measurement in statistics, and the role of hypothesis tests in empirical studies, particularly their lack of robustness to violations of assumptions. His contributions to these debates reflect an orientation at the intersection of mathematics, measurement, and statistical science, with consideration given to philosophy of science and scientific methodology.
  • His program of research is actively engaged in psychometrics for language testing, quality of life and wellbeing, and health and human development. This applied work, in the end, feeds his basic program of research in research methodology and measurement.[3][2]

Measure-theoretic psychometric and measurement theory

A noteworthy contribution to the field is the pioneering work initiated by Donald Zimmerman in 1975 in Psychometrika and continued by Zimmerman and Zumbo in a series of collaborations dating back to 1990. This line of research has continued in a series of collaborations between Bruno Zumbo and Ed Kroc, more fully articulating what Zumbo has termed measure-theoretic psychometric and measurement theory (e.g., Zimmerman, 1975; Zumbo & Zimmerman, 1993; Zimmerman & Zumbo, 2001; Kroc & Zumbo, 2018; Zumbo & Kroc, 2019; Kroc & Zumbo, 2020).

Measure-theoretic psychometric and measurement theory emerges from applications of mathematical analysis and measure theory that allow for a precise, succinct, and mathematically coherent specification of test theory, and it resolves many inconsistencies and apparent paradoxes in conventional psychometric theory as developed, for example, in the seminal work of Lord and Novick (1968). In addition, in the measure-theoretic formulation (building upon measure-theoretic probability), one can speak of measures in general rather than the restrictive notions of density or mass functions, thereby avoiding entanglements with Stevens' scales or levels of measurement, the limitation to discrete or continuous random variables, and the neglect of attributes or phenomena that are mixtures of continuous and discrete random variables (Zumbo & Zimmerman, 1993; Zumbo & Kroc, 2019; Kroc & Zumbo, 2020).

In this line of research on a measure-theoretic measurement theory, Kroc (2020) introduced random-variable-valued measurements, which can be used to characterize measurement protocols and response-process error when sample data are not deterministic.
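The flavor of this formulation can be conveyed by stating the classical true-score decomposition in measure-theoretic terms. The sketch below uses illustrative notation in the spirit of this line of work, not taken verbatim from the cited papers:

```latex
% Observed score as a random variable on a probability space
\text{Let } (\Omega, \mathcal{F}, P) \text{ be a probability space, }
X \in L^2(\Omega, \mathcal{F}, P) \text{ an observed score, and }
\mathcal{C} \subseteq \mathcal{F} \text{ a sub-}\sigma\text{-algebra (the ``person'' information).}

% True score defined as a conditional expectation; error as the remainder
T = \mathbb{E}[X \mid \mathcal{C}], \qquad E = X - T, \qquad X = T + E

% Orthogonality and the variance decomposition then follow, rather than being axioms:
\mathbb{E}[E \mid \mathcal{C}] = 0 \;\Rightarrow\; \operatorname{Cov}(T, E) = 0
\;\Rightarrow\; \operatorname{Var}(X) = \operatorname{Var}(T) + \operatorname{Var}(E),
\qquad \rho^2 = \frac{\operatorname{Var}(T)}{\operatorname{Var}(X)}

% No density or mass function is assumed: X may be discrete, continuous, or a mixture.
```

On this construction X = T + E holds by definition rather than by postulate, which is one sense in which the measure-theoretic treatment dissolves apparent paradoxes of the conventional axiomatization.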

Measurement Invariance, Exchangeability, and Bias & Differential Functioning of Tests and Surveys

A noteworthy contribution has been the body of research focused on the analysis of the psychometric concepts of measurement invariance, exchangeability, and bias & differential item (survey question or assessment task) functioning. This body of research sometimes travels under the name "differential item or test functioning (DIF / DTF)."

This program of research started as a study of the paradox that is measurement invariance (e.g., Li and Zumbo 2009; Rupp and Zumbo 2003, 2004, 2006; Zimmerman and Zumbo 2001; Zumbo and Gelin, 2005; Zumbo and Rupp 2004; Zumbo 1999, 2007a, 2007b, 2008, 2009). On the one hand, under a mathematical lens, it is a trivial identity; on the other hand, under a historical and conceptual lens, it is probably the most important property of latent variable measurement models and of item response theory (IRT) in particular. Furthermore, according to much of the contemporary psychometric literature, invariance is what sets IRT apart from classical test theory (CTT) models.

Mathematically, IRT parameter invariance is a simple identity for parameters that are on the same scale. Yet the latent scale in IRT models is arbitrary, so that unequated sets of model parameters are invariant only up to a set of linear transformations specific to a given IRT model. Note that the properties of specific objectivity and scale representation in terms of additive conjoint measurement for the Rasch model (Perline, Wright, & Wainer, 1979) do not imply that the Rasch scale is nonarbitrary; they merely imply that it has certain desirable measurement properties. Its metric remains fundamentally arbitrary, however, because the latent variable is a statistical construction (see, e.g., Lord, 1980, p. 36).
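The identity behind this claim can be made explicit for the two-parameter logistic (2PL) model. The following sketch uses standard IRT notation (not drawn verbatim from the cited sources) to show why parameters are invariant only up to a linear rescaling of the latent trait:

```latex
% 2PL item response function
P(\theta; a, b) = \frac{1}{1 + \exp\{-a(\theta - b)\}}

% Rescale the latent trait: \theta^* = k\theta + c \text{ with } k > 0,
% and transform the item parameters accordingly:
a^* = \frac{a}{k}, \qquad b^* = kb + c

% The model is unchanged, since
a^*(\theta^* - b^*) = \frac{a}{k}\bigl(k\theta + c - kb - c\bigr) = a(\theta - b),
% so P(\theta^*; a^*, b^*) = P(\theta; a, b) for every \theta.
```

Because every such pair (k, c) leaves the likelihood untouched, unequated parameter sets from separate calibrations agree only up to this family of linear transformations, which is precisely the sense in which the latent metric is arbitrary.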

Rupp and Zumbo (2006) describe this work as a "... process of clarifying what is meant by parameter invariance and demystifying its status, which is often misperceived as a 'mysterious' property that all IRT models seem to possess by definition across an almost infinite range of examinee populations and measurement conditions. Put differently, if thorough discussions about inferential limits and generalizability are desired, this article shows that the mathematical foundations of parameter invariance as a fundamental property of measurement cannot be ignored." Research on score equating, differential item functioning (DIF), and item parameter drift (IPD) deals with lack of measurement invariance (LOI) and the effects introduced thereby.
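The 1999 handbook listed below frames DIF detection as a hierarchy of logistic regressions: the item response is predicted first by the matching total score, then by group membership (uniform DIF), then by their interaction (nonuniform DIF). The sketch below, with simulated data and hypothetical variable names, illustrates the 2-df likelihood-ratio version of such a test; it is an illustration of the general approach, not the handbook's implementation.

```python
import numpy as np

def fit_logistic(X, y, n_iter=50, tol=1e-10):
    """Fit a logistic regression by Newton-Raphson; return (beta, log-likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))
        w = p * (1.0 - p)                      # IRLS weights
        grad = X.T @ (y - p)
        hess = X.T @ (X * w[:, None])
        step = np.linalg.solve(hess + 1e-8 * np.eye(X.shape[1]), grad)
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    eta = X @ beta
    return beta, np.sum(y * eta - np.log1p(np.exp(eta)))

def dif_chi_square(item, total, group):
    """2-df likelihood-ratio test for uniform + nonuniform DIF on one binary item."""
    ones = np.ones(len(item))
    X1 = np.column_stack([ones, total])                        # conditioning model
    X3 = np.column_stack([ones, total, group, total * group])  # + group + interaction
    _, ll1 = fit_logistic(X1, item)
    _, ll3 = fit_logistic(X3, item)
    return 2.0 * (ll3 - ll1)  # compare to a chi-square distribution with 2 df

# Simulated demo: an item that is uniformly harder for the focal group.
rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n).astype(float)   # 0 = reference, 1 = focal
theta = rng.normal(0.0, 1.0, n)               # common latent ability
total = theta                                  # stand-in for the matching total score
logit = 1.2 * theta - 0.2 - 1.0 * group       # group term induces uniform DIF
item = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

chi2 = dif_chi_square(item, total, group)     # large value flags DIF at 2 df
```

With a genuine group effect on the logit and n = 2000, the statistic lands far above the 2-df critical value; in practice the handbook's procedure also pairs the test with an effect-size measure so that trivially small but statistically significant DIF is not over-flagged.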

Draper–Lindley–de Finetti Framework and the Explanation-Focused View of Validity and Validation

What has emerged alongside the body of research on exchangeability and invariance are (i) the Draper-Lindley-de Finetti framework, described in Zumbo (2007), which focuses on inferences about items (or tasks) and persons made in assessments and surveys and articulates the bounds of the claims made from them, and (ii) Zumbo's explanation-focused view of test validity (e.g., Zumbo, 2007, 2009, 2017; Zumbo and Gelin, 2005; Zumbo et al., 2015). Both invoke what Zumbo (2015[4]) refers to as an in vivo view of testing and assessment, rather than the more widely received in vitro view. Doing so necessitates an ecological model of item responding and test performance (Zumbo et al., 2015).


Awards and recognition

  • Tier 1 - Canada Research Chair in Psychometrics and Measurement, held at the University of British Columbia, awarded in 2020. The first term of the CRC is for seven years, 2020-2027. Building on his body of research on measurement invariance and differential item/task/test functioning, the theme of this CRC is Equity and Fairness at the Nexus of Data Science, Digital Innovation, and Social Justice. The research involves developing and refining new mathematical and data science methods situated in Professor Zumbo’s ecological theory and his emerging paradigm of item and survey responding.
  • Paragon UBC Professorship in Psychometrics and Measurement, renewed for 2015-2024. He was initially selected by UBC to lead the $1.8-million project to enhance UBC's standing as a global leader in research and training in the statistical science of measurement.[1] The report "Looking Back, Looking Ahead: Outcomes, Impacts and Next Steps" highlights the achievements of the first UBC-Paragon partnership and plans for ongoing collaboration.
  • Centenary Medal of Distinction, awarded in 2019 by the UBC School of Nursing, recognizing having "brought high honour to the School... [having] significantly advanced the School's vision, mission and mandate." This award reflects nearly 25 years of collaborative psychometric and statistical research with professors of nursing and significant contributions to the training of advanced nursing and health researchers.
  • Pioneer in the Psychometrics of Quality of Life. In 2018 he was honored with this distinction by the International Society for Quality of Life Studies (ISQOLS) for having made substantial contributions to measurement and research methodology impacting the field of quality of life research. A brief paper recognizing this distinction and reviewing his contributions to the field, written by Professors Gadermann and Sawatzky, was published in 2019 in the journal Applied Research in Quality of Life.
  • Distinguished University Scholar. Awarded in 2017, this prestigious distinction recognizes exceptional members of the UBC faculty who have distinguished themselves internationally in research and/or teaching and learning.
  • UBC Killam Research Prize, Senior Category, awarded in 2016/2017. Winners are nominated by internal and external colleagues and adjudicated at the university level by the President's Faculty Research Major-Awards Committee.
  • UBC Killam Teaching Prize, 2011/2012. The university-wide prize is awarded annually, from the Killam Endowment Fund, to faculty nominated by students, colleagues, and alumni in recognition of excellence in teaching.
  • Fellow of the American Educational Research Association (AERA), awarded in 2011. Professor Zumbo is the first University of British Columbia (UBC) professor to be selected an AERA Fellow. As AERA notes, Fellows are selected in recognition of their exceptional scientific or scholarly contributions to education research and substantial research accomplishments, and are known both nationally and internationally for their outstanding contributions to education research.
  • Research Fellow Award by the International Society for Quality of Life Studies, ISQOLS, awarded in 2010. Professor Zumbo was one of just three scholars in the world who received recognition as a 2010 Research Fellow of ISQOLS. Election to the status of Research Fellow is an indication of a scholar making a substantial contribution to quality of life (QoL) research.
  • Samuel J. Messick Memorial Lecture Award -- awarded in 2005. The award is given in honor of the late Samuel J. Messick, a distinguished research scientist at Educational Testing Service.
  • Excellence in Teaching Award -- University of Northern British Columbia university-wide award, 1998.
  • Social Sciences and Humanities Research Council of Canada (SSHRC) Research Fellow, 1989-1990.

Selected publications

  • (2020). The role of item distributions on reliability estimation: The case of Cronbach's coefficient alpha. Educational and Psychological Measurement, 80(5), 825–846. (with Astivia, O.L.O., & Kroc, E.)
  • (2020). Assembled validity: rethinking Kane’s argument-based approach in the context of International Large-Scale Assessments (ILSAs). Assessment in Education: Principles, Policy & Practice, 27:6, 588-606. (with Addey, C., & Maddox, B.)
  • (2020). A transdisciplinary view of measurement error models and the variations of X=T+E. Journal of Mathematical Psychology. (with Kroc, E.)
  • (2020). A Propensity Score Method for Investigating Differential Item Functioning in Performance Assessment. Educational and Psychological Measurement, 80(3), 476–498. (with Chen, M.Y., & Liu, Y.)
  • (2019). A note on the solution multiplicity of the Vale-Maurelli intermediate correlation equation. Journal of Educational and Behavioral Statistics, 44(2), 127-143. (with Astivia, O.L.O.)
  • (2018). Calibration of measurements. Journal of Modern Applied Statistical Methods, 17(2), 2-28. (with Kroc, E.)
  • (2018). In defense of Pratt's variable importance axioms: A response to Gromping. Wiley Interdisciplinary Reviews (WIREs): Computational Statistics, 10, pp. 1–10. (with Thomas, D.R., & Kwan, E.)
  • (2018). On the Solution Multiplicity of the Fleishman Method and its Impact in Simulation Studies. British Journal of Mathematical and Statistical Psychology, 71(3), 437-458. (with Astivia, O.L.O.)
  • (2018). A Note on Using the Nonparametric Levene Test When Population Means Are Unequal. Practical Assessment, Research & Evaluation, 23(13), 1-11. (with Shear, B.R., & Nordstokke, D.W.)
  • (2018). The use of latent variable mixture models to identify invariant items in test construction. Quality of Life Research, 27, pp. 1745–1755. (with Sawatzky, R., Russell, L. B., Sajobi. T. T., Lix. L. M., & Kopec, J. A.)
  • (2018). Scoping Review of Response Shift Methods: Current Reporting Practices, and Recommendations. Quality of Life Research, 27, pp. 1133–1146. (with Sajobi, T.T., Brambhatt, R., Lix, L.M., & Sawatzky, R.)
  • (2017). Trending Away From Routine Procedures, Towards an Ecologically Informed 'In Vivo' View of Validation Practices. Measurement: Interdisciplinary Research and Perspectives, 15:3-4, pp. 137–139.
  • (2017). Population Models and Simulation Methods: The Case of the Spearman Rank Correlation. British Journal of Mathematical and Statistical Psychology, 70, pp. 347–367. (with Astivia, O.L.O.)
  • (2017). Understanding and Investigating Response Processes in Validation Research. New York, NY: Springer. (edited with Hubley, A.M.) Book information: https://www.springer.com/us/book/9783319561288.
  • (2016). Validity as a Pragmatist Project: A Global Concern with Local Application. In Vahid Aryadoust, and Janna Fox (Eds.), Trends in Language Assessment Research and Practice (pp. 555–573). Newcastle: Cambridge Scholars Publishing. (with Stone, J.)
  • (2015). Resolving the Issue of How Reliability is Related to Statistical Power: Adhering to Mathematical Definitions. Journal of Modern Applied Statistical Methods, 14, 9-26. (with Zimmerman, D. W.)
  • (2015). A Methodology for Zumbo's Third Generation DIF Analyses and the Ecology of Item Responding. Language Assessment Quarterly, 12, 136-151. (with Liu, Y., Wu, A.D., Shear, B.R., Astivia, O.L.O. & Ark, T.K.)
  • (2014). Validity and Validation in Social, Behavioral, and Health Sciences. New York, NY: Springer. (edited with Chan, E.K.H.) Book information: https://www.springer.com/social+sciences/wellbeing+%26+quality-of-life/book/978-3-319-07793-2.
  • (2012). Difference Scores from the Point of View of Reliability and Repeated Measures ANOVA: In Defense of Difference Scores for Data Analysis. Educational and Psychological Measurement, 72, 37-43. (with Thomas, D. R.)
  • (2012). Estimating ordinal reliability for Likert-type and ordinal item response data: A conceptual, empirical, and practical guide. Practical Assessment, Research & Evaluation, 17(3), 1-13. (with Gadermann, A. M., & Guhn, M.)
  • (2008). A Method for Simulating Multivariate Non-normal Distributions with Specified Standardized Cumulants and Intraclass Correlation Coefficients. Communications in Statistics: Simulation and Computation, 37, 617-628. (with Headrick, T. C.)
  • (2008). Statistical Methods for Investigating Item Bias in Self-Report Measures, [The University of Florence Lectures on Differential Item Functioning]. Universita degli Studi di Firenze, Florence, Italy.
  • (2007). Validity: Foundational Issues and Statistical Methodology. In C.R. Rao and S. Sinharay (Eds.) Handbook of Statistics, Vol. 26: Psychometrics, (pp. 45–79). Elsevier Science B.V.: The Netherlands.
  • (2007). Three generations of differential item functioning (DIF) analyses: Considering where it has been, where it is now, and where it is going. Language Assessment Quarterly, 4, 223-233.
  • (2007). Ordinal Versions of Coefficients Alpha and Theta For Likert Rating Scales. Journal of Modern Applied Statistical Methods, 6, 21-29. (with Gadermann, A. M., & Zeisser, C.)
  • (2005). On Optimizing Multi-Level Designs: The Concern For Power Under Budget Constraints. Australian & New Zealand Journal of Statistics, 47, 219-229. (with Headrick, T. C.)
  • (2005). Embedding IRT In Structural Equation Models: A Comparison With Regression Based On IRT Scores. Structural Equation Modeling, 12, 263-277. (with Lu, I. R. R., Thomas, D. R.)
  • (2004). To Bayes or Not to Bayes, From Whether to When: Applications of Bayesian Methodology to Modeling. Structural Equation Modeling, 11, 424-451. (with Rupp, A. A., Dey, D. K.)
  • (2004). Responsible Modeling of Measurement Data For Appropriate Inferences: Important Advances in Reliability and Validity Theory. In David Kaplan (Ed.), The SAGE Handbook of Quantitative Methodology for the Social Sciences (pp. 73–92). Thousand Oaks, CA: Sage Press. (with Rupp, A. A.)
  • (2003). Comparison of Aligned Friedman Rank and Parametric Methods for Testing Interactions in Split-Plot Designs. Computational Statistics and Data Analysis, 42, 569-593. (with Beasley, T. M.)
  • (2003). Does Item-Level DIF Manifest Itself in Scale-Level Analyses?: Implications for Translating Language Tests. Language Testing, 20, 136-147.
  • (2001). The Geometry of Probability, Statistics, and Test Theory. International Journal of Testing, 1, 283-303. (with Zimmerman, D. W.)
  • (1999). A handbook on the theory and methods of differential item functioning (DIF): Logistic regression modeling as a unitary framework for binary and Likert-type (ordinal) item scores. Ottawa, ON: Directorate of Human Resources Research and Evaluation, Canadian Department of National Defense.
  • (1999). An overview and some observations on the psychometric models used in computer-adaptive language testing. In M. Chalhoub-Deville (Ed.), Issues in computer-adaptive testing of reading proficiency, (pp. 216–228). Cambridge UK: Cambridge University Press. (with MacMillan, P. D.)
  • (1998). A note on misconceptions concerning prospective and retrospective power. Journal of the Royal Statistical Society, Series D (The Statistician), 47, 385-388. (with Hubley, A. M.)
  • (1998). (Ed.) Validity Theory and the Methods Used in Validation: Perspectives from the Social and Behavioral Sciences. [Special issue of the journal Social Indicators Research: An International and Interdisciplinary Journal for Quality-of-Life Measurement, Volume 45, No. 1-3, 509 pages]
  • (1996). Using a measure of variable importance to investigate the standardization of discriminant coefficients. Journal of Educational & Behavioral Statistics, 21, 110-130. (with Thomas, D. R.)
  • (1993). Effect of nonindependence of sample observations on parametric and nonparametric statistical tests. Communications in Statistics: Simulation and Computation, 22, 779-789. (with Zimmerman, D. W., Williams, R. H.)
  • (1993). Coefficient alpha as an estimate of test reliability under violation of two assumptions. Educational & Psychological Measurement, 53, 33-49. (with Zimmerman, D. W., & Lalonde, C.)
  • (1990). CAI as an adjunct to teaching introductory statistics: Affect mediates learning. Journal of Educational Computing Research, 6, 29 -40. (with Varnhagen, C. K.)
  • (1988). Implicit ordinal number knowledge tasks as predictors for number line comprehension: A validation study. Educational and Psychological Measurement, 48, 219-230. (with Kingma, J.)
  • (1987). Relationship between seriation, transitivity, and explicit ordinal number comprehension. Perceptual and Motor Skills, 65, 559-569. (with Kingma, J.)


References

  1. ^ "Bruno Zumbo". ubc.ca. Retrieved March 6, 2017.
  2. ^ Zumbo, Bruno. "Bruno Zumbo". faculty.educ.ubc.ca. Retrieved August 21, 2017.
  3. ^ "Bruno D. Zumbo". Retrieved March 6, 2017.
  4. ^ Zumbo, B.D. (2015, November). Consequences, Side Effects and the Ecology of Testing: Keys to Considering Assessment ‘In Vivo’. Keynote address, the annual meeting of the Association for Educational Assessment – Europe (AEA-Europe), Glasgow, Scotland. Video URL https://brunozumbo.com/?page_id=31