…, Fazri Nur Yusuf, Emi Emilia and Wawan Gunawan
Abstract
Since 2024, English as a Foreign Language (EFL) teaching and assessment in Indonesian primary and secondary schools have targeted B1 proficiency on the Common European Framework of Reference (CEFR). However, studies on aligning teacher training with CEFR-based assessment design are rare. Consequently, teacher training institutions, which had previously paid little attention to the issue, were not prepared to integrate this target into assessment design courses. To fill this gap, this study uses corpus analysis to check whether test difficulty matches the targeted CEFR level. The study investigates formative test items created by 28 pre-service teachers (PTs) in a Designing Assessment Course, scrutinizing how the vocabulary and difficulty levels of the developed items align with the CEFR. Using two corpus analysis tools, 26,487 tokens from receptive skill tests were compared with 5,354 tokens from the CEFR. The results showed that both test types were dominated by very easy (A1) and easy (A2) vocabulary, with limited representation of medium (B1), difficult (B2), and very difficult (C1 and C2) items. Listening items contained 68.51% CEFR-aligned vocabulary, mostly A1 (55.98%); similarly, reading items contained 68.19% CEFR-aligned vocabulary, with A1 dominating (51.70%). These findings suggest that the test items do not fully align with B1 proficiency. The predominance of very easy and easy vocabulary limits the tests' effectiveness in assessing students' achievement and higher-level language skills, which in turn may weaken test validity. The findings urge education institutions to integrate corpus literacy into assessment design. The test analysis in this study was a relatively simple procedure yet significant for investigating difficulty level, and it can be replicated for assessment design in EFL classrooms and research settings.
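The core of the procedure described above — tagging each token of a test item against a CEFR wordlist and reporting the share of vocabulary at each level — can be sketched in a few lines. The wordlist below is a tiny hypothetical stand-in (a real analysis would load a published resource such as the English Vocabulary Profile); the function names and sample tokens are illustrative only, not the authors' actual tooling.

```python
from collections import Counter

# Hypothetical CEFR wordlist: token -> level. A real study would load a
# published list (e.g. the English Vocabulary Profile) with thousands of entries.
CEFR_LEVELS = {
    "cat": "A1", "school": "A1", "weather": "A2",
    "argument": "B1", "hypothesis": "C1",
}

def cefr_profile(tokens):
    """Return the percentage of tokens at each CEFR level.

    Tokens absent from the wordlist are reported as "off-list",
    mirroring how corpus tools flag non-CEFR vocabulary.
    """
    counts = Counter(CEFR_LEVELS.get(t.lower(), "off-list") for t in tokens)
    total = sum(counts.values())
    return {level: round(100 * n / total, 2) for level, n in counts.items()}

# Toy run on five tokens from an imagined test item.
sample = ["The", "cat", "likes", "school", "weather"]
profile = cefr_profile(sample)
```

A profile dominated by A1/A2 percentages, as in the study's findings, would signal that the item pool sits below the B1 target.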