Posted on 2021-06-08, 22:14. Authored by Katerina Young.
Many English language practitioners around the world lack expertise in language assessment and resort to the easy solution of adopting or adapting existing standardized instruments published by testing organizations. These tools transfer poorly to other settings: they may not fit the local purpose, student population, or teacher experience, and they can lead to unfair decisions about individuals. This study offers a solution in the form of a locally designed rating scale for the specific purpose of professional email writing in an EFL business college context in the Czech Republic. It compares the local scale’s validity and reliability with those of a standardized Cambridge Assessment scale, quantitatively through many-facet Rasch analysis and qualitatively through concurrent and retrospective verbal protocols. The study also explores the effect of teacher involvement and background on scoring and construct validity. The findings indicate that the locally developed scale is equivalent to the standardized scale in measuring students’ writing ability, with the advantage of increased construct validity owing to its greater comprehensibility and locally derived content. Teacher involvement in the scale design does not increase scoring validity, but teacher background variables such as nationality/country of education, other professional work experience, and other scale training appear to affect it. Furthermore, the teachers’ collaboration on the scale design, which raises their assessment literacy, and their awareness of how their own backgrounds influence the content of the local scale become aspects of situationally specific construct validity. The study contributes to the field of language testing with detailed documentation of a local scale design for classroom achievement purposes.
It encourages local educators and global testing agencies to increase the construct validity of their assessment tools by making them authentic to the local environment, and to foster fair assessment practices in linguistically and culturally diverse assessment contexts.