Peer-reviewed Article

Automatic assessment of oral language proficiency and listening comprehension

2009; Elsevier BV; Volume: 51; Issue: 10; Language: English

DOI

10.1016/j.specom.2009.03.002

ISSN

1872-7182

Authors

Febe de Wet, Christa van der Walt, Thomas Niesler

Topic(s)

Speech and dialogue systems

Abstract

This paper describes an attempt to automate the large-scale assessment of oral language proficiency and listening comprehension for fairly advanced students of English as a second language. The automatic test is implemented as a spoken dialogue system and consists of a reading task as well as a repeating task. Two experiments are described in which different rating criteria were used by human judges. In the first experiment, proficiency was scored globally for each of the two test components. In the second experiment, various aspects of proficiency were evaluated for each section of the test. In both experiments, rate of speech (ROS), goodness of pronunciation (GOP) and repeat accuracy were calculated for the spoken utterances. The correlation between scores assigned by human raters and these three automatically derived measures was determined to assess their suitability as proficiency indicators. Results show that the more specific rating instructions used in the second experiment improved intra-rater agreement, but made little difference to inter-rater agreement. In addition, the more specific rating criteria resulted in a better correlation between the human and the automatic scores for the repeating task, but had almost no impact on the reading task. Overall, the results indicate that, even for the narrow range of proficiency levels observed in the test population, the automatically derived ROS and accuracy scores give a fair indication of oral proficiency.
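To make the evaluation pipeline concrete, the sketch below shows toy versions of two of the automatic measures named in the abstract (ROS and repeat accuracy) and the correlation of automatic scores with human ratings. This is not the authors' implementation: the exact definitions of ROS and repeat accuracy, and all names and data below, are illustrative assumptions based on common conventions.

```python
# Illustrative sketch only: the paper's actual feature definitions and data
# are not reproduced here; these are common-convention stand-ins.
import math

def rate_of_speech(num_phonemes, duration_s):
    """ROS as phonemes produced per second of speech (one common definition)."""
    return num_phonemes / duration_s

def repeat_accuracy(prompt_words, recognized_words):
    """Toy repeat-accuracy metric: fraction of prompt words that appear
    in the recognizer output for the repeated utterance."""
    hits = sum(1 for w in prompt_words if w in recognized_words)
    return hits / len(prompt_words)

def pearson(xs, ys):
    """Pearson correlation between automatic scores and human ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: (phoneme count, duration) per speaker, plus human ratings.
ros = [rate_of_speech(p, d)
       for p, d in [(120, 12.0), (90, 11.0), (150, 13.0), (100, 12.5), (130, 11.5)]]
human = [3.5, 2.5, 4.5, 3.0, 4.0]
print(round(pearson(ros, human), 3))
```

A correlation computed this way over a test population is what underlies the comparison of automatic measures against the two human rating schemes described above.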
