Sunday 18 October 2020

Recently I was interviewed for the PIE Live Event, talking about the impact of COVID-19 on English language testing for admission to higher education. I am concerned that one of the consequences of the pandemic is that students are starting university or college without the language skills they need. Institutions are accepting short online tests that do not adequately assess the academic language skills needed for higher education. Students may pass these tests but then struggle with, and fail, their courses because they are held back by poor communicative ability.

I am presently a senior researcher for the IELTS partnership. Before that, I was an IELTS teacher for many years. Preparing students for international study or employment was challenging work, and I was always interested to know how well they coped with university after taking IELTS.

This interest led to a PhD investigating how Chinese and Japanese students acquire the skills needed for academic writing. I wanted to explore the relationship between language test preparation, testing and subsequent study at university, so I spent several months with IELTS students in China and Japan. I then followed up with the students when they came to study in the UK to see how they were coping with the demands of their degrees. I found that it was really challenging for them. Adapting to an unfamiliar academic context is rarely easy for anyone, and it can be impossible without the appropriate level of English proficiency and associated academic language skills.

Due to COVID-19, demand for testing has temporarily outstripped supply, and universities and colleges are accepting tests they would not have considered before the pandemic. As test centres closed worldwide, the need for testing remained, and there has been a move to short online tests. The major issue for me is that these tests are technology-driven rather than following the basic principles of language assessment. Furthermore, I see many issues with what is tested, how it is assessed and the restricted range of language elicited. The spectrum of assessed productive and receptive skills is too narrow and shallow, and some task types lack real-world relevance. This is not an arcane debate between assessment academics. It is imperative that students have the language skills they need to succeed. If you fail to test properly, you are preparing students to fail at college or university.

It worries me that an exam used for university entrance does not even begin to test the reading skills needed at university: in-depth comprehension, skimming and scanning, inferencing and extracting information from multiple texts. You cannot test reading at higher education level if an exam only assesses learners’ reading ability at the sentence level.

Assessing vocabulary is not a sufficient test of reading, especially if the tasks themselves are problematic. A large vocabulary may be indicative of reading ability, but identifying pseudowords (as one short online test does) to show knowledge of the structure of English words does not test reading. On top of this, I cannot think of any occasion at university or in everyday life when you would need to identify pseudowords. Testing for pseudowords is a very poor proxy for testing reading skills.

I have similar concerns about proxy tasks for listening comprehension. Being asked to listen to a sentence and then transcribe it does not demonstrate comprehension. Word-for-word transcription is not even useful for taking notes in lectures: a good note-taker listens in order to identify essential information, key concepts and the overall argument. Simply put, dictation tasks are outdated and of limited use.

With regard to speaking, there are similar issues. Having a machine auto-mark pronunciation as a sentence is read aloud ignores essential academic speaking skills like discourse management, turn-taking and spontaneity. Testing the speaking skills required for higher education is far more difficult without the involvement of a human examiner.

The amount of writing the test taker has to produce in short online tests is negligible: one or two sentences to describe a picture and 50 words in response to a discussion point. This is simply not enough evidence of a student’s ability and cannot elicit a wide enough set of skills. In comparison, the IELTS writing paper is 60 minutes long and asks students to produce pieces of writing of at least 150 and 250 words.

This brings us to a fundamental issue with such short online tests: less time means fewer skills. Short online tests are, indeed, too short to test the skills you would expect from a test now being used for academic purposes.

The pandemic is changing how tests are taken, but the English language skills needed for higher education remain the same. They need to be tested as comprehensively as possible. With language testing, even in a crisis, you cannot sacrifice quality for availability without sacrificing your students’ future academic success.

I think there is an analogy here with driving tests. Driving test centres had to close because of the virus, but there was no discussion of accepting a short online test as a substitute road test. Such a test would not open up or ‘democratise’ driving tests. It would make them dangerous.

Tony Clark, IELTS Senior Researcher, Cambridge Assessment English

Tony Clark is a Senior Research Manager at Cambridge Assessment English, managing research on the IELTS test. Since joining Cambridge English in 2018, he has been responsible for the IELTS Joint-funded Research Programme and the Caroline Clapham Master’s Award, acted as Permanent Secretary/Chair of the IELTS Joint Research Committee, and led on a number of high-profile cross-partner research projects, in addition to several standard-setting workshops. His other principal research interests include Chinese and Japanese educational contexts, academic writing for overseas learning, test preparation, pedagogy, diagnostic assessment and lexical studies.

Tony’s PhD (University of Bristol) received a British Council Research Assessment Award in 2014, and in 2015/2016 he was a recipient of the Newton Fund Scholarship. He has also contributed to research projects on language and admissions testing, language acquisition and test development, collaborating with Bristol University, Coventry University, Swansea University and Assessment Europe. Prior to entering full-time research, he was a test preparation instructor for the British Council (Japan and Morocco) and worked as a teacher in Italy, Hungary, and the UK. His first degree was in Philosophy and English Literature (University of Edinburgh).