Describe your role and involvement in Linguaskill.
I am particularly involved in the new Linguaskill Speaking test. In this role, I design and conduct empirical research to investigate potential threats to test validity and to gather evidence that either supports or challenges the interpretations and uses of Linguaskill test scores in various contexts. Based on this research, I advise on test design and ensure the accuracy of public information about Linguaskill.
What makes Linguaskill different from other tests?
Linguaskill is unique in its application of cutting-edge technologies to language assessment. The Linguaskill combined Reading and Listening test is computer adaptive, meaning that test content is tailored to each candidate’s performance in order to obtain a precise estimate of their language ability. The Linguaskill Writing test is enhanced by automarking technology that evaluates free writing almost instantaneously.
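To make the general principle behind computer-adaptive testing more concrete, the sketch below shows a minimal adaptive loop based on a simple one-parameter (Rasch) model: each new item is chosen to match the current ability estimate, and the estimate is revised after every response. This is a conceptual illustration only, with a hypothetical item bank and a deliberately crude ability update; it does not represent Linguaskill’s actual item-selection or scoring algorithms.

```python
import math

def prob_correct(ability, difficulty):
    """Rasch model: probability that the candidate answers the item correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def update_ability(ability, difficulty, correct, step=0.5):
    """Nudge the estimate up after a correct answer and down after an incorrect one
    (operational systems use maximum-likelihood or Bayesian estimation instead)."""
    return ability + step * ((1.0 if correct else 0.0) - prob_correct(ability, difficulty))

def next_item(item_bank, ability, administered):
    """Select the unused item whose difficulty is closest to the current ability
    estimate, i.e. the most informative item under this simple model."""
    remaining = [item for item in item_bank if item["id"] not in administered]
    return min(remaining, key=lambda item: abs(item["difficulty"] - ability))

def run_adaptive_test(item_bank, answer_fn, n_items=5):
    """Administer n_items adaptively and return the final ability estimate."""
    ability = 0.0
    administered = set()
    for _ in range(n_items):
        item = next_item(item_bank, ability, administered)
        administered.add(item["id"])
        correct = answer_fn(item)  # the candidate's response (True/False)
        ability = update_ability(ability, item["difficulty"], correct)
    return ability

# Hypothetical item bank and a simulated candidate who answers easier items correctly.
bank = [{"id": i, "difficulty": d} for i, d in enumerate([-1.5, -0.5, 0.0, 0.5, 1.0, 1.8])]
print(round(run_adaptive_test(bank, lambda item: item["difficulty"] < 0.7), 2))
```

Even in this toy version, the key idea is visible: candidates are not all given the same items, yet their results land on a common ability scale, which is what allows a shorter test to yield a precise estimate.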
What have you learned when developing Linguaskill?
I’ve learned about the importance of increasing stakeholder understanding of the technologies applied to language assessment. For example, the concepts of computer-adaptive testing and automated scoring in Linguaskill are not immediately understandable to teachers, learners and other test users. This means we need to explain and demystify the ‘black box’ behind the scenes to ensure that the test will be used appropriately and that it will have a positive impact on language teaching and learning.
Now that Linguaskill is in the market, what are you most satisfied with in terms of the product and market adoption?
I was happy to hear from many English language learners who participated in Linguaskill trials that the test assessed the English language skills essential for everyday communication, and that taking the test on a computer did not affect their performance.
How do you see Linguaskill developing over the next 2–3 years?
Linguaskill will keep seeking innovative ways to integrate cutting-edge technologies with English language assessment.
What are you working on now?
I am currently working on research projects related to the quality assurance of automated scoring. I am also writing a research paper about a prototype automated speaking test.
How do you see computer-based testing changing in the future, particularly with the use of AI?
With the increasing use of AI, computer-based testing will become more personalised and learner-centred. In addition to indicating levels of proficiency, AI will be capable of accurately diagnosing language learners’ strengths and weaknesses, enabling the creation of tailored teaching materials and learning activities.
At the same time, AI will help make computer-based testing less intrusive. For example, low-stakes assessment may be performed by AI while learners are studying the target language on a computer. In short, it is foreseeable that language learning and assessment will become seamlessly blended in the near future.
Are there other key trends that you see impacting language learning and testing over the next five years?
Rapid advances in technology are likely to have an enormous impact on the way that language learning and assessment products are designed. A high-profile trend likely to emerge in the next five years is the AI teacher. AI teachers will greatly reduce human teachers’ workloads by helping them grade homework, design classroom and extracurricular activities, perform formative and summative assessments, and track student progress.