28/07/2023
There are many ways to assess someone’s ability to speak English, but can AI do it reliably?
Researchers at the University of Cambridge have developed cutting-edge automated speech evaluation technology that combines the speed and flexibility of AI with the expert judgement of a trained human examiner.
“The AI-driven technology needed to achieve reliable speaking assessment has to be smart enough to recognise and assess a range of accents and levels of ability in people’s speaking patterns,” explains Dr Jing Xu. He continues, “The Cambridge approach is to exploit the full potential of AI, while recognising its limitations. That’s why we’ve developed a hybrid-marking approach, which keeps a human in the loop at all times to ensure the quality of AI-based assessment.”
Dr Xu is the lead author of a ground-breaking paper exploring ways of evaluating automated language assessment. The paper introduces a method new to the language testing community, called ‘Limits of Agreement’, and argues for its advantages over existing methods of automarker evaluation. The paper was recently awarded the runner-up prize in the 2021 ILTA Best Article Award. ILTA is a prestigious association of language testing and assessment scholars that aims to promote the improvement of language testing throughout the world.
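For readers unfamiliar with the technique, ‘Limits of Agreement’ originates in the Bland–Altman method from medical statistics: it estimates the range within which the differences between two raters’ scores on the same candidates are expected to fall. The sketch below, using entirely hypothetical scores rather than any data from the paper, illustrates the basic calculation.

```python
# A minimal sketch of a Bland-Altman 'Limits of Agreement' calculation.
# The scores below are invented for illustration only.
import numpy as np

# Hypothetical speaking scores for the same candidates:
# one set from human examiners, one from an automarker.
human = np.array([4.5, 3.0, 5.0, 2.5, 4.0, 3.5, 5.5, 4.5])
auto  = np.array([4.0, 3.5, 5.0, 3.0, 4.5, 3.0, 5.0, 4.0])

diffs = auto - human        # per-candidate score differences
bias  = diffs.mean()        # mean difference (systematic bias)
sd    = diffs.std(ddof=1)   # sample standard deviation of the differences

# 95% limits of agreement: the interval expected to contain
# roughly 95% of automarker-human score differences.
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
print(f"bias = {bias:.2f}, limits of agreement = [{lower:.2f}, {upper:.2f}]")
```

Intuitively, the narrower the limits of agreement, the more closely the automarker’s scores track those of human examiners.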
The paper, ‘Assessing L2 English speaking using automated scoring technology: examining automarker reliability’, was written by Dr Jing Xu, Edmund Jones, V. Laxton and Dr Evelina Galaczi, and published in Assessment in Education: Principles, Policy & Practice.