Resisting Temptation
By Leda Lampropoulou, Head of Research
LanguageCert, 24 September 2025
2025 is the Year of the Concordance Study. Six studies have been published so far and I’m aware of two more on the way: one in plain sight and one under wraps. Test comparison is having its moment in the sun. This is a good sign for those of us working in English language assessment. The field is investing in the kind of research that helps higher education institutions, governments and test takers make better-informed decisions.
Concordance studies offer an evidence-based bridge between different tests. They can help a university admissions officer decide whether to accept a new test and what minimum score to set for a given course. Studies can guide a policy maker in aligning language requirements and help a student decide which test is best for them.
To make such decisions, test score users must be able to interpret concordance studies. Score equivalence tables are not the alpha and omega. Users need to see the ‘workings out’ behind the numbers and examine them critically.
The principles I keep coming back to
From 2022 to 2024, LanguageCert conducted a concordance study to analyse and compare the test design, content and scores of LanguageCert Academic and IELTS Academic. The framework for our study was Knoch and Fan’s nineteen principles for good practice in concordance studies (2024, pp. 691-693). Six of the principles concern how results are published and used. Of those, two in particular stick in my mind:
- Alert test users to exercise caution in interpreting and using concordance results
- Provide test user-focused guidelines and recommendations
They go on to say that test providers should "provide clear guidelines to test users around the level of confidence in the concordance results at different score levels" (p. 689).
I like the above because it shifts the conversation and responsibility from simply presenting a concordance table to helping test users understand a table’s limitations as a sole source of evidence.
Resisting the temptation of the one-table solution
Let’s be honest. There’s a certain comfort in a neat equivalence table. It’s tempting to think: LanguageCert Academic X = IELTS Academic Y = problem solved. But reality is invariably messier.
The LanguageCert Academic scoring system works on a fine-grained scale of 0 to 100, aligned with the Common European Framework of Reference (CEFR). This alignment is maintained through ongoing research and validation, and it has been independently verified by Ecctis. Such integrity matters because a test’s measurement scale must be accurate and consistent across levels if the test is to be fair and reliable.
Other tests use broader measurement bands, and precision can vary across a scale, especially at the top and bottom. Fewer test takers score at these extremes, so there is less data to draw on when conducting a concordance study. Insufficient data at a particular level is a common challenge in concordance research; it reflects the distribution of scores in the tests being compared rather than any shortcoming in the concordance methodology, and it affects all of the studies published this year.
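To see why sparse data matters, here is a back-of-the-envelope illustration (mine, not a figure from any of the studies). For most statistical estimates, uncertainty shrinks roughly with the square root of the sample size:

SE ≈ SD / √n

So an equivalence point estimated from 100 test takers carries roughly three times the standard error of one estimated from 900. The precise arithmetic for equipercentile and other linking methods is more involved, but the direction of the effect is the same: the thinner the data at a score level, the wider the uncertainty around the concordance at that level.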
This is why good practice in concordance studies suggests that test users should not rely solely on score equivalence tables when setting score requirements. Tables point the way, but they can’t give the complete picture.
The LanguageCert approach
When we published our LanguageCert Academic – IELTS Academic concordance study, we set out to follow the good practice principles in spirit, not just to tick boxes.
So we:
- Included a detailed contents comparison.
- Reported the very strong correlations on which the rest of the analysis is based.
- Reported overall and per-skill equivalences.
- Showed the population size behind each relevant IELTS band and half-band in our study.
- Included the standard error of measurement so users could see where confidence in the numbers is strongest and where it’s weaker (a brief note on this statistic follows below).
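For readers less familiar with the standard error of measurement, the classical textbook definition is a useful sketch (the study itself documents its own computation):

SEM = SD × √(1 − reliability)

where SD is the standard deviation of the test scores and reliability is the test’s reliability coefficient. With illustrative figures of SD = 10 and reliability = 0.91, the SEM would be 3, meaning that, under the usual assumptions, an observed score sits within about ±3 points of the test taker’s ‘true’ score roughly two-thirds of the time.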
When LanguageCert advises test users on comparing tests and setting scores, we don’t just point to the concordance table and say ‘there you go’. We triangulate evidence from complementary sources of validation and offer a recommended score linking between LanguageCert Academic and IELTS Academic that draws on:
- Our concordance study
- The LanguageCert Academic (LCA) to CEFR mapping – continually verified internally and independently validated by Ecctis
- Regulatory standards – the test score thresholds used by UK Visas and Immigration
This combination of evidence gives me confidence that LanguageCert is providing the best information from which universities and colleges can set cut scores and make high-stakes decisions.
Concordance tables don’t make decisions; people do
The principles of good practice for concordance studies tell us that a good study is made up of many parts, not just equivalence tables. To reiterate: tables are a powerful tool for test users when seen as part of the bigger picture. Knoch and Fan are explicit that test providers should publish guidance that helps test users see that bigger picture in its entirety. I hope I have gone some way towards meeting this requirement, because when it comes to concordance studies, the evidence matters, but its interpretation and its ultimate application to life-changing decisions matter more.
Reference: Knoch, U., & Fan, J. (2024). Test score comparison tables: How well are they serving test users? Language Testing, 41(3), 681–693. https://doi.org/10.1177/02655322241239348