LanguageCert Ongoing Research Programme

LanguageCert’s ongoing research programme focuses on the development, validation, calibration, and performance analysis of LanguageCert test materials. The programme uses Classical Test Theory (CTT) and Rasch Measurement (RM) statistics to ensure accurate standard setting, item calibration, and overall test quality. In addition, qualitative studies are undertaken, such as analyses of candidate responses and reactions to the examination. The research programme aims to continuously refine the performance and fitness for purpose of test materials, and to enhance and safeguard reliability and validity. Research findings are regularly published on the LanguageCert website as well as in reputable peer-reviewed journals.
Statistical methods we employ
Classical Test Theory (CTT)

a. Reliability Analysis: The research programme utilises classical reliability measures, such as Cronbach’s alpha, to evaluate the internal consistency of the test.

b. Item Facility: CTT statistics are used to determine the facility of each item (the proportion of candidates answering it correctly) in all tests.

c. Item Discrimination: CTT statistics, such as the point-biserial correlation, measure the extent to which items differentiate between high and low performers. This analysis identifies items that effectively discriminate between individuals at different language proficiency levels; a brief illustrative sketch of these CTT statistics follows this list.
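
The sketch below is a minimal, illustrative Python example of the CTT statistics listed above, computed on a small invented 0/1 response matrix (the data are not LanguageCert results): Cronbach’s alpha for internal consistency, item facility, and the point-biserial item-total correlation for discrimination.

```python
# Minimal CTT sketch: Cronbach's alpha, item facility, point-biserial discrimination.
# The 0/1 response matrix below is invented purely for illustration.
from statistics import mean, pstdev

# rows = candidates, columns = items (1 = correct, 0 = incorrect)
responses = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 0],
]

n_items = len(responses[0])
totals = [sum(row) for row in responses]

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total scores)
item_vars = [pstdev(row[i] for row in responses) ** 2 for i in range(n_items)]
total_var = pstdev(totals) ** 2
alpha = n_items / (n_items - 1) * (1 - sum(item_vars) / total_var)

# Item facility: proportion of candidates answering each item correctly
facility = [mean(row[i] for row in responses) for i in range(n_items)]

# Point-biserial discrimination: correlation between item score and total score
# (operationally, the item is often excluded from the total before correlating)
def point_biserial(i):
    item = [row[i] for row in responses]
    m_i, m_t = mean(item), mean(totals)
    cov = mean((x - m_i) * (t - m_t) for x, t in zip(item, totals))
    return cov / (pstdev(item) * pstdev(totals))

discrimination = [point_biserial(i) for i in range(n_items)]

print(f"alpha = {alpha:.2f}")
print("facility =", [round(f, 2) for f in facility])
print("discrimination =", [round(d, 2) for d in discrimination])
```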

Item Response Theory (IRT) and Rasch analysis

a. Item Calibration: The research programme uses IRT models, such as the Rasch model, to calibrate items and establish their difficulty and, where the model allows, discrimination parameters.

b. Test Equating: IRT facilitates test equating, which ensures score comparability across different test forms or administrations, so that test takers who take different versions of the test are measured on the same scale.

c. Person Ability and Item Difficulty Measurement: IRT enables the estimation of person abilities and item difficulties on a common scale, allowing for effective measurement of language proficiency; a brief sketch of this estimation follows this list.
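
As an illustration of how the Rasch model places person ability and item difficulty on a common logit scale, the sketch below is a hypothetical Python example: the item difficulties and the response pattern are invented, and a simple Newton-Raphson iteration produces a maximum-likelihood ability estimate for one candidate. Operational calibration and equating rely on dedicated psychometric software and much larger datasets.

```python
# Minimal Rasch sketch: P(correct) = exp(theta - b) / (1 + exp(theta - b)).
# Item difficulties (b, in logits) and the response pattern are invented for illustration.
import math

def p_correct(theta, b):
    """Rasch probability of a correct response for ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

item_difficulties = [-1.5, -0.5, 0.0, 0.8, 1.6]   # hypothetical calibrated values
responses = [1, 1, 1, 0, 0]                        # hypothetical scored responses

# Maximum-likelihood estimate of ability via Newton-Raphson on the score residual.
theta = 0.0
for _ in range(25):
    probs = [p_correct(theta, b) for b in item_difficulties]
    residual = sum(responses) - sum(probs)          # observed minus expected score
    info = sum(p * (1 - p) for p in probs)          # test information at theta
    theta += residual / info
    if abs(residual) < 1e-6:
        break

print(f"estimated ability: {theta:.2f} logits")
```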

Standard Setting

Angoff Method: The research programme employs the Angoff method, a widely used standard setting technique, where panels of experts review items and estimate the minimum level of proficiency required to answer them correctly.
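
As a concrete illustration of the basic Angoff computation, the sketch below is a hypothetical Python example: each judge estimates, for every item, the probability that a minimally competent candidate answers it correctly; the estimates are averaged per item and summed to give a provisional raw cut score. The ratings are invented, and real panels typically add discussion rounds and impact data before a cut score is confirmed.

```python
# Minimal Angoff sketch: average judges' item-level probability estimates,
# then sum across items to obtain a provisional raw cut score.
# All ratings below are invented for illustration.
from statistics import mean

# judge -> estimated probabilities (one per item) that a minimally competent
# candidate answers the item correctly
judge_ratings = {
    "judge_1": [0.60, 0.45, 0.70, 0.55, 0.80],
    "judge_2": [0.55, 0.50, 0.65, 0.60, 0.75],
    "judge_3": [0.65, 0.40, 0.75, 0.50, 0.85],
}

n_items = len(next(iter(judge_ratings.values())))

# Mean estimated probability per item across judges
item_means = [mean(r[i] for r in judge_ratings.values()) for i in range(n_items)]

# Provisional cut score = expected raw score of the minimally competent candidate
cut_score = sum(item_means)

print("per-item means:", [round(m, 2) for m in item_means])
print(f"provisional cut score: {cut_score:.1f} out of {n_items}")
```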

Performance Analysis

a. Differential Item Functioning (DIF): The research programme investigates DIF to identify potential biases in item functioning based on test takers' characteristics (e.g., gender, ethnicity, first language). This analysis ensures fairness and equity in test administration; a short sketch of one common DIF check follows this list.

b. Item and Test Analysis: The ongoing research programme conducts in-depth item and test analyses to identify problematic items, assess the test's psychometric properties, and gather evidence of validity.
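
One widely used DIF screen, offered here only as an illustration, is the Mantel-Haenszel procedure: candidates from a reference and a focal group are matched on total score, and the odds of answering an item correctly are compared within each score band. The sketch below is a hypothetical Python example with invented counts; operational DIF analyses use larger samples and established psychometric software.

```python
# Minimal Mantel-Haenszel DIF sketch for one item.
# Candidates are matched on total score; the counts per stratum are invented.
import math

# score band -> (ref_correct, ref_incorrect, focal_correct, focal_incorrect)
strata = {
    "low":    (18, 22, 15, 25),
    "middle": (30, 10, 26, 14),
    "high":   (45,  5, 42,  8),
}

num = 0.0   # sum over strata of A*D / N
den = 0.0   # sum over strata of B*C / N
for a, b, c, d in strata.values():
    n = a + b + c + d
    num += a * d / n
    den += b * c / n

odds_ratio = num / den                    # common odds ratio (1.0 = no DIF)
mh_delta = -2.35 * math.log(odds_ratio)   # ETS delta metric; |delta| of about
                                          # 1.5 or more is commonly treated as large DIF

print(f"MH odds ratio: {odds_ratio:.2f}, MH delta: {mh_delta:.2f}")
```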

Long-term research and validation programme

As part of an ongoing programme, LanguageCert plans and conducts research studies using various methodologies, including interviews with raters and examiners, interviews with test users (primary stakeholders), expert panels, and statistical analyses. External validation is achieved through independent reviews (e.g., Ecctis, ALTE, CRELLA) of the LanguageCert qualifications. Studies currently in progress include: CEFR Referencing; alignment to the Canadian Language Benchmarks (CLB); and confirming the reliability, consistency, and positive impact of the decisions made on the basis of LanguageCert test results for test stakeholders.

Concordance Study

The Centre for Research in English Language Learning and Assessment (CRELLA) at the University of Bedfordshire has been commissioned to conduct a concordance study to support the use of the LanguageCert General and LanguageCert Academic tests. The study is overseen by a Concordance Studies Review panel, which consists of a team of leading academics. The concordance study has included comparisons between the content of the LanguageCert Academic and General tests and their counterparts, widely accepted for the same purposes: IELTS Academic and IELTS General Training. It also involves the collection of test score data from test takers who have taken both LanguageCert and IELTS tests. Over time, LanguageCert expects to extend the concordance study to the full range of international tests recognised for similar purposes.

The study has confirmed that the LanguageCert and IELTS tests cover similar content, using similar task types to represent the language needs of students (Academic) or those shared by many other groups of migrants (General/General Training). Although our empirical concordance work is not yet complete, the data collected thus far show that performance on the two LanguageCert tests is highly predictive of performance on their IELTS counterparts, with very strong correlations between the results: r = .87 for Academic and r = .89 for General.

September 2023 / Score distribution summary statistics

                          Mean    SD      Skew    Kurtosis   Min    Max
LanguageCert Academic     63.05   14.48   -0.20   -0.03      13     96
IELTS Academic             6.24    0.95   -0.22   -0.04       2.5    8.5
LanguageCert General      67.59   12.50   -0.83    1.31      18     89
IELTS General Training     6.76    1.06   -0.58    0.17       3      9

Note: For the Academic cohort, the sample size was 654. For the General cohort, the sample size was 181.

Correlations

                        Overall r   Reading r   Writing r   Listening r   Speaking r
Academic (n = 654)         .87         .79         .69          .70           .73
General (n = 181)          .89         .77         .76          .79           .74

Note: r = correlation. All correlations were statistically significant at the p < .001 level.

The strong positive correlations (r > .7) for Overall performance indicate that the LanguageCert exams perform in line with language tests designed for similar purposes, meeting or exceeding the correlations reported between other alternative tests and IELTS. The subscales for both Academic and General display moderately strong (r > .6) to strong correlations.

As the concordance study is extended to further international tests, LanguageCert will publish the findings in detailed reports.

Download the concordance report here.