Psychological Tests

We develop and evaluate psychological tests and questionnaires on the basis of classical test theory and probabilistic test theory. The instruments include knowledge tests for specific domains, computer-based tests of cognitive performance, and questionnaires assessing relevant constructs of social perception. In addition, we use advanced models of item response theory to analyze response tendencies and to test assumptions about the interpretation of different response formats. A further research focus lies on probabilistic test models for analyzing change over time and for evaluating intervention effects.
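As a purely illustrative sketch of the probabilistic (item response theory) approach mentioned above, the following example fits a dichotomous Rasch model, in which the probability of a correct response depends only on the difference between a person parameter and an item difficulty, to simulated data using only NumPy and SciPy. It is not the software used in the projects listed on this page; all variable names and values are invented for the example, and joint maximum likelihood is used only to keep the sketch short.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic function 1 / (1 + exp(-x))

rng = np.random.default_rng(seed=1)

# Simulate dichotomous responses under the Rasch model:
# P(X_pi = 1) = exp(theta_p - b_i) / (1 + exp(theta_p - b_i))
n_persons, n_items = 500, 10
theta_true = rng.normal(0.0, 1.0, n_persons)   # person parameters (abilities)
b_true = np.linspace(-1.5, 1.5, n_items)       # item parameters (difficulties)
p_true = expit(theta_true[:, None] - b_true[None, :])
data = (rng.random((n_persons, n_items)) < p_true).astype(int)

def nll_and_grad(params):
    """Joint negative log-likelihood and gradient for all person and item parameters."""
    theta, b = params[:n_persons], params[n_persons:]
    p = expit(theta[:, None] - b[None, :])
    nll = -np.sum(data * np.log(p) + (1 - data) * np.log(1 - p))
    resid = data - p
    grad = np.concatenate([-resid.sum(axis=1),   # d(-logL) / d(theta_p)
                           resid.sum(axis=0)])   # d(-logL) / d(b_i)
    return nll, grad

# Joint maximum likelihood for simplicity; bounds keep extreme response
# patterns (all correct / all incorrect) from driving estimates to infinity.
start = np.zeros(n_persons + n_items)
bounds = [(-5.0, 5.0)] * (n_persons + n_items)
fit = minimize(nll_and_grad, start, jac=True, method="L-BFGS-B", bounds=bounds)

b_hat = fit.x[n_persons:]
b_hat -= b_hat.mean()   # fix the scale: difficulties centered at zero
print("estimated difficulties:", np.round(b_hat, 2))
print("true difficulties:     ", np.round(b_true - b_true.mean(), 2))
```

In applied work, conditional or marginal maximum likelihood estimation and dedicated IRT software would typically be used instead of the joint estimation shown here; the sketch only illustrates the basic model structure.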

Research Grants from the German Research Foundation (DFG)

  • DFG project "Vereinbarkeit von Konstruktabdeckung und eindimensionaler statistischer Modellierung durch Item-Response-Theory-Modelle mit lokalen Itemabhängigkeiten und eine Anwendung in der Persönlichkeitspsychologie" [Reconciling construct coverage and unidimensional statistical modeling through item response theory models with local item dependencies, and an application in personality psychology] (Grant DO 2035/1, 2016-2019).

Selected Publications

  • Böckenholt, U., & Meiser, T. (2017). Response style analysis with threshold and multiprocess IRT models: A review and tutorial. British Journal of Mathematical and Statistical Psychology, 70, 159-181.
  • Doebler, A., & Holling, H. (2016). A processing speed test based on rule-based item generation: An analysis with the Rasch Poisson counts model. Learning and Individual Differences, 52, 121-128.
  • Goldhammer, F., Steinwascher, M., Kroehne, U., & Naumann, J. (2017). Modelling individual response time effects between and within experimental speed conditions: A GLMM approach for speeded tests. British Journal of Mathematical and Statistical Psychology, 70(2), 238-256.
  • Heckeroth, J., & Boywitt, C. D. (2017). Examining authenticity: An initial exploration of the suitability of handwritten electronic signatures. Forensic Science International, 275, 144-154.
  • Jasper, F., & Wagener, D. (2013). M-PA. Mathematiktest für die Personalauswahl [M-PA. Mathematics test for personnel selection]. Göttingen: Hogrefe.
  • Machunsky, M., & Meiser, T. (2006). Personal Need for Structure als differentialpsychologisches Konstrukt in der Sozialpsychologie: Psychometrische Analyse und Validierung einer deutschsprachigen PNS-Skala [Personal need for structure as an individual-differences construct in social psychology: Psychometric analysis and validation of a German-language PNS scale]. Zeitschrift für Sozialpsychologie, 37, 87-97.
  • Meiser, T., Hoeffler, D., & Jasper, F. (2012). Ein symmetrisches Rating-Scale-Modell für Fragebogen mit invertierten Items [A symmetric rating scale model for questionnaires with reversed items]. In W. Kempf & R. Langeheine (Eds.), Item-Response-Modelle in der sozialwissenschaftlichen Forschung (pp. 150-170). Berlin: Regener.
  • Meiser, T., & Machunsky, M. (2008). The personal structure of personal need for structure: A mixture-distribution Rasch analysis. European Journal of Psychological Assessment, 24, 27-34.
  • Meiser, T., & Steinwascher, M. (2014). Different kinds of interchangeable methods in multitrait-multimethod analysis: A note on the multilevel CFA-MTMM model by Koch et al. (2014). Frontiers in Psychology: Quantitative Psychology and Measurement. DOI: 10.3389/fpsyg.2014.00615
  • Meiser, T., Steinwascher, M., & Plieninger, H. (2016). Rasch models for measuring change. Wiley StatsRef: Statistics Reference Online, 1-8. DOI: 10.1002/9781118445112.stat06384.pub2
  • Plieninger, H. (2016). Mountain or molehill: A simulation study on the impact of response styles. Educational and Psychological Measurement, 77, 32-53.
  • Plieninger, H., & Meiser, T. (2014). Validity of multi-process IRT models for separating content and response styles. Educational and Psychological Measurement, 74, 875-899.
  • Schweizer, K., Steinwascher, M., Moosbrugger, H., & Reiss, S. (2011). The structure of research methodology competency in higher education and the role of teaching teams and course temporal distance. Learning and Instruction, 21, 68-76.
  • Wagener, D. (2013). C-PA. Computerwissenstest für die Personalauswahl [C-PA. Computer knowledge test for personnel selection]. Göttingen: Hogrefe.