Abstract
This study investigated the intellectual domains of language assessment and second language acquisition (SLA) research published in English in peer-reviewed journals, together with the use of measurement and validation methods in that research. Using Scopus, we created two datasets: a core-journal dataset of 1,561 articles published in four language assessment journals, and a general-journal dataset of 3,175 articles on language assessment published in the top journals of SLA and applied linguistics. We applied document co-citation analysis to detect thematically distinct research clusters, and then coded the citing papers in each cluster against an analytical framework for measurement and validation. We found that the core journals focused more narrowly on reading and listening comprehension assessment; on facets of speaking and writing performance such as raters and validation; and on feedback, corpus linguistics, and washback. By contrast, the primary focus of assessment research in the general journals was vocabulary, oral proficiency, essay writing, grammar, and reading, with a secondary focus on affective schemata, awareness, memory, language proficiency, explicit versus implicit language knowledge, language or semantic awareness, and semantic complexity. With the exception of language proficiency, this secondary area of focus was absent from the core journals. We further found that the majority of citing publications in the two datasets did not carry out inference-based validation of their instruments before using them. More research is needed to determine what motivates authors to select and investigate a topic, how thoroughly they cite past research, and which internal and external factors sustain a research topic in language assessment.
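For readers unfamiliar with the method, document co-citation analysis counts how often pairs of references are cited together across a corpus and then clusters the resulting network; in practice this is usually done with bibliometric software such as CiteSpace or VOSviewer rather than hand-rolled code. The snippet below is only a minimal sketch of the idea in Python: the toy records data, the co-citation threshold, and the use of networkx community detection are illustrative assumptions, not the authors' actual pipeline.

```python
from collections import Counter
from itertools import combinations

import networkx as nx
from networkx.algorithms import community

# Toy input: each citing article is reduced to the set of references it cites.
# In the study these records came from Scopus exports; the structure here is
# an illustrative assumption.
records = [
    {"Bachman1990", "Messick1989", "Kane2013"},
    {"Messick1989", "Kane2013", "Chapelle2008"},
    {"Bachman1990", "Kane2013"},
]

# Count how often each pair of references is cited together (co-citation).
cocitation_counts = Counter()
for refs in records:
    for pair in combinations(sorted(refs), 2):
        cocitation_counts[pair] += 1

# Build a weighted co-citation network, keeping pairs co-cited at least twice
# (the threshold is arbitrary for this sketch).
graph = nx.Graph()
for (a, b), weight in cocitation_counts.items():
    if weight >= 2:
        graph.add_edge(a, b, weight=weight)

# Detect thematically distinct clusters via modularity-based community detection.
clusters = community.greedy_modularity_communities(graph, weight="weight")
for i, cluster in enumerate(clusters, start=1):
    print(f"Cluster {i}: {sorted(cluster)}")
```

The subsequent step the abstract describes, coding the citing papers in each cluster against a measurement and validation framework, is a qualitative analysis and has no code analogue here.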
DOI: 10.3389/fpsyg.2020.01941
Similar books and articles

The Validity of National Curriculum Assessment. Gordon Stobart - 2001 - British Journal of Educational Studies 49 (1):26-39.
Econometric Approaches to the Measurement of Research Productivity. Cinzia Daraio - 2019 - In Wolfgang Glänzel, Henk F. Moed, Ulrich Schmoch & Mike Thelwall (eds.), Springer Handbook of Science and Technology Indicators. Springer Verlag. pp. 633-666.
The Appeal to Robustness in Measurement Practice. Alessandra Basso - 2017 - Studies in History and Philosophy of Science Part A 65:57-66.
Measurement Units and Theory Construction. Warren W. Tryon - 1996 - Journal of Mind and Behavior 17 (3):213-228.
Reconsidering the Construct Validity of “Political Knowledge”. Craig M. Burnett - 2016 - Critical Review: A Journal of Politics and Society 28 (3-4):265-286.
