When did you become involved in Linguaskill?
The origins of Linguaskill go back to the mid-1990s, when we began working on the idea of online, automated, modular tests. The aim was to democratise testing, making it quicker and more efficient. Since then, Linguaskill has developed into a generic, multilevel test that serves the needs of all audiences, from education to business.
What makes Linguaskill different from other tests?
Linguaskill measures what has been learned, as opposed to score-driven alternatives, which can end up measuring how well you have practised for the test. This means it is much more consistent and mission-specific across the four skills of listening, reading, writing and speaking.
What have you learned while developing Linguaskill?
That ease of use is key. Many of the features within it, such as computer-assisted testing, automated writing assessment and automated speaking assessment, are not necessarily new to us at Cambridge English, but we’ve learned how to deploy them in ways that are simple to administer and can scale to hundreds of thousands of people, reliably and seamlessly.
How do you see Linguaskill developing over the next 2–3 years?
It is clearly early days for the product – we have successfully launched it, and we have created a stable and reliable platform. The holy grail of language testing is being able to personalise tests for each client group, so that they measure what matters to them. Through Linguaskill’s flexibility we are able to move towards this personalisation, so we now have to work with our partners and customers to develop and modify it to meet individual needs.
How do you see computer-based testing changing in the future, particularly with the use of artificial intelligence (AI)?
AI will help close the gap between the actual situations where we use language and the context of taking a test. Test conditions are not real life, so the ideal for language testing is being able to observe people doing the real thing, creating much more specific tests, personalised to them. AI can assist here, particularly when coupled with the mobile devices we all carry around with us. First-generation testing technology simply migrated pen-and-paper tests to the screen – AI lets you go much further, delivering more adaptive, personalised and efficient testing.
Are there other key trends that you see impacting language learning and testing over the next five years?
I see two key trends. Firstly, the merging of language testing into learning: being able to see how people perform in a real-world context will give greater richness, delivered in a more effective way.
Secondly, use of AI means we’re going to see a virtuous combination of teachers and machines. We will always need teachers to provide human guidance and support, but we can use the robotic strengths of machines to help them deliver more effective learning. For example, if you have a class of 40 people, a teacher clearly can’t mark 40 writing tasks every day. A computer can. The teacher can then look at the analytics and enhance content and support either for the whole class or at an individual level, improving learning outcomes.