Second language acquisition

Accreditation – who needs it?

As avid readers of this blog will know, I’m responsible for a company called Net Languages that has been developing and delivering Web-based language courses for over 18 years. During this time we’ve established ourselves as a reputable company that knows what it’s doing and delivers an effective and reliable service.

One of our sales representatives recently suggested that it would be easier for him to compete with some of the many newcomers to our market if our courses were accredited by a reputable university – preferably one from an English-speaking country. He’s probably right. We all know that the word ‘university’ has almost magical properties.

That said, I honestly doubt there is a single university out there that knows as much about second language acquisition and how to deliver effective Web-based language courses as we do. So if we decide we need ‘accreditation’, what we’re really talking about is a straightforward commercial arrangement, i.e. paying for the respectability that the word ‘university’ conveys.

As most universities are struggling to make ends meet, it shouldn’t be too difficult to find one interested in the idea of charging us a fee to add their seal of approval to our courses – even if they don’t know much about the subject.

Organisations like the British Council, the Instituto Cervantes, EAQUALS, or International House provide meaningful accreditation to bricks-and-mortar language schools, as most (if not all) of these organisations do know what they’re doing. They perform rigorous inspection visits, evaluate schools’ performance and help raise standards. But the field of Web-based language teaching is rather less well catered for.

Perhaps I should start an independent accreditation scheme for Web-based language courses. But I’ll probably just go and find a university.

There’s lots of rules …

The 2015 edition of IH Barcelona’s ELT Conference featured a world-class line-up of speakers. Coincidentally, three of the plenary speakers, Michael Swan, Scott Thornbury and Martin Parrott, all talked about a similar issue: how the English language is changing and what impact these changes might have on the language that we teach in our classrooms. All three agreed that while English has always been subject to change, the pace of change is accelerating and the TEFL industry is lagging behind.

One example: we have all been told that we should use ‘less’ with uncountable nouns and ‘fewer’ with countable ones. But according to Martin Parrott, this distinction was unheard of until the 18th century; before that, ‘less’ was used with all nouns. A few centuries later, ‘less’ is evidently reclaiming its right to be used on every occasion, although students who write ‘less cars’ in an end-of-course exam are still likely to be marked down.

Would a student be marked down for saying “I so don’t agree with you” or “I was sat there for hours” in an oral exam? Probably not. What about a written exam? Probably yes.

Martin Parrott, whose talk was entitled ‘The Tyranny of TEFL Speak’, made the point that most English language course book writers seem oblivious to most of these changes and consistently produce a version of English that essentially reflects the way university-educated, middle-class people living in the Home Counties spoke in the 1970s and 80s.

So how should this natural evolution of language impact our classroom teaching? Should we accept any utterances that are commonly used, however much they might grate on our ingrained sense of correctness? Should we teach students how people actually speak in this day and age, but warn them that certain commonly used words and expressions shouldn’t be used in exams? That might be one solution, but it doesn’t feel quite right.

As Scott made abundantly clear, all languages change over time, and globalisation has hastened the changes. That said, not all languages are quite as amorphous as English seems to be. In some cases this is because the natural process of change is corralled by institutions which seek to keep some semblance of control. Spanish is overseen by a collection of highly prestigious academics and authors who collectively make up the Real Academia Española. These eminent minds meet periodically to discuss which changes to Spanish are acceptable and which are not. Whatever they say goes. Students taking the Instituto Cervantes’ Spanish language exams don’t therefore have to navigate the fast-expanding grey areas that students of English are increasingly faced with. If the Real Academia says something is admissible, that’s fine. Otherwise it just ain’t.

Cultural historians may like to consider why Spain has an official body of language overseers whose role is to determine what is and isn’t allowed in Spanish, whereas the free market seems to hold much greater sway in England, at least outside the “tyranny” of most EFL course books and exams. But that’s a debate which goes way beyond the scope of this blogpost, innit?

Ours is not to reason why …

According to Viktor Mayer-Schönberger and Kenneth Cukier, authors of a book called ‘Big Data’ (first published by John Murray in 2013), the standard scientific method, which we have all taken to be sacrosanct for well over half a century, is fast being displaced by the analysis of data which is now available on a previously inconceivable scale.

The idea, in a nutshell, is this: while knowledge can still be advanced by researchers coming up with a theory which is subsequently tested in a verifiable way (the standard method), it can now advance much more rapidly (and much less expensively)  by looking for correlations in the mass of data that analysts now have access to. In other words, knowledge based on investigating ‘why’ such and such happens is being supplanted by knowledge based on ‘what’ happens, irrespective of the ‘why’.

An example from the book: by tracking 16 different data streams from premature babies (heart rate, respiration rate, blood pressure, etc.) computers are able to detect subtle changes that may indicate a problem, long before doctors or nurses become aware of it. The system relies not on causality, but on correlations. It tells what, not why. And it saves lives.
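For the technically minded, here’s a minimal sketch (in Python) of the kind of statistics this sort of monitoring rests on. It isn’t the hospital system described in the book, whose details I don’t know; the function name, window sizes and thresholds are all my own invention, purely to illustrate the correlation-over-causation idea: flag the moment when several independent data streams drift from their recent baselines at once, with no model of why.

```python
import numpy as np

def flag_anomalies(streams, window=60, threshold=2.5, min_streams=3):
    """Flag time points where several vital-sign streams deviate together.

    streams: array of shape (n_streams, n_samples), e.g. heart rate,
    respiration rate and blood pressure sampled at a common rate.
    All parameters are illustrative, not clinical values.
    """
    n_streams, n_samples = streams.shape
    deviant = np.zeros((n_streams, n_samples), dtype=bool)
    for i in range(n_streams):
        for t in range(window, n_samples):
            recent = streams[i, t - window:t]        # the stream's own recent past
            mu, sigma = recent.mean(), recent.std()
            if sigma > 0 and abs(streams[i, t] - mu) > threshold * sigma:
                deviant[i, t] = True
    # The signal is purely statistical: no theory of *why* the values moved,
    # just the observation that several streams moved at the same moment.
    return np.where(deviant.sum(axis=0) >= min_streams)[0]

# Toy usage: three noisy streams with a simultaneous spike at t=80.
rng = np.random.default_rng(0)
data = rng.normal(0, 1, size=(3, 120))
data[:, 80] += 8
print(flag_anomalies(data, window=30, min_streams=3))  # expect t=80 to be flagged
```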

Another example, this time closer to home: Big Data has already transformed the translation business. By analysing the entire content of the Internet, Google has built a corpus of billions of sentences, which enables its computers to predict the probability that one word follows another with ever-increasing accuracy. By 2012 its dataset covered more than 60 languages, and by using English as a bridge it can even translate from Hindi into Catalan (for example).
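The underlying statistical trick is surprisingly easy to demonstrate. Here’s a toy sketch of the ‘probability that one word follows another’ idea – a bigram model counted from a corpus. Google’s actual systems are vastly more sophisticated, of course; this only shows the principle, and the corpus and function names are mine:

```python
from collections import Counter, defaultdict

def train_bigrams(sentences):
    """Count word pairs so we can estimate P(next_word | word)."""
    follows = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for w1, w2 in zip(words, words[1:]):
            follows[w1][w2] += 1
    return follows

def most_likely_next(follows, word):
    """Return the word most often observed after `word`, with its probability."""
    counts = follows.get(word)
    if not counts:
        return None
    total = sum(counts.values())
    next_word, n = counts.most_common(1)[0]
    return next_word, n / total

# Toy corpus; a real system would count billions of sentences.
corpus = ["the cat sat on the mat", "the cat ate the fish", "the dog sat down"]
model = train_bigrams(corpus)
print(most_likely_next(model, "cat"))  # ('sat', 0.5): 'sat' and 'ate' each follow 'cat' half the time
```

Scale that counting up by nine or ten orders of magnitude and you get translations that improve without anyone ever stating a rule of grammar.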

Statisticians at Microsoft’s machine-translation unit apparently like to joke that the quality of their translations improves every time a linguist leaves the team.

What about the language teaching business?  How might the Big Data revolution impact our industry?

One obvious example: Big Data ought to be able to help our marketing teams identify where best to spend our hard-earned cash. Around one third of Amazon’s sales are now generated by its computer-driven, personalised recommendation systems: ‘If you liked this, you may like ….’ Just imagine if we could target all our promotion at those market sectors most likely to respond positively. There are almost certainly companies out there that could analyse data generated by search engines, online shopping, social networks and so on, and point us in the right direction. I don’t know if we could afford to hire this sort of expertise. But I’m not sure we can afford to ignore it either.
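To make the ‘you may like’ idea concrete, here’s a minimal sketch of a co-occurrence recommender. This is not Amazon’s algorithm, which is far more elaborate; it’s just the basic principle of mining purchase histories for items that tend to be bought together, and the course names and data below are entirely hypothetical:

```python
from collections import Counter, defaultdict
from itertools import combinations

def build_cooccurrence(purchase_histories):
    """Count how often pairs of courses are bought by the same customer."""
    together = defaultdict(Counter)
    for history in purchase_histories:
        for a, b in combinations(set(history), 2):
            together[a][b] += 1
            together[b][a] += 1
    return together

def recommend(together, course, n=3):
    """'If you liked this, you may like...' based purely on co-occurrence."""
    return [c for c, _ in together[course].most_common(n)]

# Hypothetical purchase data, for illustration only.
histories = [
    ["general-english-b1", "business-english", "exam-prep-fce"],
    ["general-english-b1", "exam-prep-fce"],
    ["business-english", "one-to-one-tuition"],
]
model = build_cooccurrence(histories)
print(recommend(model, "general-english-b1"))  # ['exam-prep-fce', 'business-english']
```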

What about the thorny issue of second language acquisition theory?  Is such a fiendishly complex subject susceptible to this sort of data analysis?  Would it be possible to devise a way of collecting enough data to suggest how language learners the world over could study more effectively?

The industry’s exam boards must have masses of data squirrelled away, but even if they were prepared to share it, how useful would it be? Well, it could indicate where results are improving and where they’re going downhill. It could also provide evidence on which student profiles tend to be most successful. Come to think of it, data from exam boards might even help debunk some of the wilder claims put forward by our industry’s ‘miracle method’ operators.
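How might such debunking work in practice? One standard approach is a permutation test: pool the exam scores from a ‘miracle method’ cohort and a conventional one, then check how often random reshuffling of the pooled scores produces a gap as big as the one observed. A sketch, with entirely hypothetical scores:

```python
import random

def permutation_test(scores_a, scores_b, n_iter=10_000, seed=0):
    """Estimate how likely the observed gap between two groups is under pure chance."""
    rng = random.Random(seed)
    observed = sum(scores_a) / len(scores_a) - sum(scores_b) / len(scores_b)
    pooled = scores_a + scores_b
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        a, b = pooled[:len(scores_a)], pooled[len(scores_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= abs(observed):
            count += 1
    return count / n_iter  # a small value means the gap is unlikely to be luck

# Hypothetical exam scores, for illustration only.
miracle = [62, 58, 71, 66, 60]
conventional = [61, 59, 68, 64, 63]
print(permutation_test(miracle, conventional))
```

If the miracle method’s advantage evaporates under this sort of scrutiny, the claim was marketing, not pedagogy.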

But of course the danger of focusing on data from outcomes (i.e. exam results) is that we end up accidentally reinforcing the ‘teach to the test’ paradigm that already influences our classrooms more than is healthy. Some form of all-encompassing continuous assessment would generate more useful data. This is something that a number of Web-based language schools already claim to offer, albeit on a small scale. Would it be possible to define and agree on a set of metrics that would enable us to measure progress in language learning accurately and continuously, on a very broad scale, without undermining the effectiveness and creativity of our teachers? It’s a big ask. But it’s an enticing idea.