Do end-to-end speech recognition models care about context?

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

Documents

  • Open article

    Publisher's published version, 394 KB, PDF document

The two most common paradigms for end-to-end speech recognition are connectionist temporal classification (CTC) and attention-based encoder-decoder (AED) models. It has been argued that the latter is better suited for learning an implicit language model. We test this hypothesis by measuring temporal context sensitivity and evaluate how the models perform when we constrain the amount of contextual information in the audio input. We find that the AED model is indeed more context sensitive, but that the gap can be closed by adding self-attention to the CTC model. Furthermore, the two models perform similarly when contextual information is constrained. Finally, in contrast to previous research, our results show that the CTC model is highly competitive on WSJ and LibriSpeech without the help of an external language model.
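To make the "add self-attention to the CTC model" idea concrete, below is a minimal PyTorch sketch of a recurrent CTC acoustic model with an optional self-attention layer over the encoder output. This is illustrative only: the class name `CTCWithSelfAttention`, the layer sizes, and all hyperparameters are assumptions for the sketch, not the architecture used in the paper.

```python
# Minimal sketch (assumed architecture, not the paper's exact model):
# a BiLSTM CTC acoustic model with an optional self-attention layer,
# the mechanism the paper finds closes the context-sensitivity gap to AED.
import torch
import torch.nn as nn

class CTCWithSelfAttention(nn.Module):
    def __init__(self, n_mels=80, hidden=256, n_heads=4, vocab_size=32,
                 use_self_attention=True):
        super().__init__()
        self.encoder = nn.LSTM(n_mels, hidden, num_layers=3,
                               bidirectional=True, batch_first=True)
        self.use_self_attention = use_self_attention
        if use_self_attention:
            # Self-attention lets every frame attend to the whole utterance,
            # widening the temporal context beyond the recurrent state.
            self.attn = nn.MultiheadAttention(2 * hidden, n_heads,
                                              batch_first=True)
        self.out = nn.Linear(2 * hidden, vocab_size)  # vocab incl. CTC blank

    def forward(self, feats):            # feats: (batch, time, n_mels)
        enc, _ = self.encoder(feats)     # (batch, time, 2*hidden)
        if self.use_self_attention:
            attn_out, _ = self.attn(enc, enc, enc)
            enc = enc + attn_out         # residual connection
        return self.out(enc)             # per-frame logits for CTC

# Training uses the standard CTC loss over per-frame log-probabilities;
# shapes and lengths below are dummy values for illustration.
model = CTCWithSelfAttention()
feats = torch.randn(2, 120, 80)                     # two 120-frame utterances
logits = model(feats)
log_probs = logits.log_softmax(-1).transpose(0, 1)  # (time, batch, vocab)
targets = torch.randint(1, 32, (2, 20))             # dummy label sequences
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           torch.full((2,), 120),   # input lengths
                           torch.full((2,), 20))    # target lengths
```

Because the model stays a CTC model (frame-wise outputs, no autoregressive decoder), toggling `use_self_attention` isolates the effect of wider temporal context, which mirrors the comparison the abstract describes.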

Original language: English
Title: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2020-October
Publisher: International Speech Communication Association (ISCA)
Publication date: 2020
Pages: 4352-4356
DOI:
Status: Published - 2020
Event: 21st Annual Conference of the International Speech Communication Association, INTERSPEECH 2020 - Shanghai, China
Duration: 25 Oct 2020 - 29 Oct 2020

Conference

Conference: 21st Annual Conference of the International Speech Communication Association, INTERSPEECH 2020
Country: China
City: Shanghai
Period: 25/10/2020 - 29/10/2020
Sponsors: Alibaba Group, Amazon Alexa, Apple, et al., Intel, Magic Data

