
Reliability and Validity of a Locally Designed Rating Scale: The Case of a Czech Business College

Thesis posted on 2021-06-08, authored by Katerina Young
Many English language practitioners around the world lack expertise in language assessment and resort to the easy solution of adopting or adapting existing standardized instruments published by testing organizations. These tools transfer poorly to other settings because they may not fit the local purpose, student population, or teacher experience, and they can lead to unfair decisions about individuals. This study offers a solution in the form of a locally designed rating scale for the specific purpose of professional email writing in an EFL business college context in the Czech Republic. It compares the local scale’s validity and reliability with those of a standardized Cambridge Assessment scale, quantitatively through a multi-faceted Rasch analysis and qualitatively through concurrent and retrospective verbal protocols. The study also explores the effect of teacher involvement and background on scoring and construct validity. The findings indicate that the locally developed scale is equivalent to the standardized scale in measuring students’ writing ability, with the added advantage of increased construct validity owing to its greater comprehensibility and locally derived content. Teacher involvement in the scale design does not increase scoring validity, but teacher background variables such as nationality and country of education, other professional work experience, and prior training on other scales appear to affect it. Furthermore, the teachers’ collaboration on the scale design, which raises their assessment literacy, and their awareness of how their own backgrounds influence the content of the local scale become aspects of situationally specific construct validity. The study contributes to the field of language testing with detailed documentation of a local scale design for classroom achievement purposes.
It encourages local educators and global testing agencies to increase the construct validity of their assessment tools by making them environmentally authentic, and to foster fair assessment practices in linguistically and culturally diverse assessment contexts.

History

Supervisor(s)

Glenn Fulcher; Agneta Svalberg

Date of award

2021-04-26

Author affiliation

School of Education

Awarding institution

University of Leicester

Qualification level

  • Doctoral

Qualification name

  • PhD

Language

en
