Use this identifier to cite or link to this item: http://repositorio.ufla.br/jspui/handle/1/59506
Title: A data-centric approach for Portuguese speech recognition: language model and its implications
Keywords: Automatic speech recognition
Language model
Brazilian Portuguese
Wav2vec2
KenLM
Issue date: 2023
Publisher: Institute of Electrical and Electronics Engineers
Citation: ALVARENGA, J. P. R.; MERSCHMANN, L. H. de C.; LUZ, E. J. da S. A data-centric approach for Portuguese speech recognition: language model and its implications. IEEE Latin America Transactions, [S.l.], v. 21, n. 4, p. 546-556, 2023.
Abstract: Recent advances in Automatic Speech Recognition have achieved transcription quality never seen before in the literature, both for data-rich languages such as English, which enjoy a large body of studies, and for Portuguese, for which resources and studies are more limited. The most recent advances address speech recognition with Transformer-based models, which can perform the task directly on the raw signal, without manual feature extraction. Some studies have already shown that the transcription quality of these models can be further improved by using language models in the decoding stage; however, the real impact of such language models is still unclear, especially for Brazilian Portuguese. Likewise, the quality of the training data is known to be of paramount importance, yet few works in the literature address this issue. This work explores the impact of language models on Portuguese speech recognition, in terms of both data quality and computational performance, following a data-centric approach. We propose an approach to measure the similarity between datasets and thereby support decision-making during training. The approach points to paths for advancing the state of the art in Portuguese speech recognition, showing that it is possible to reduce the size of the language model by 80% while still achieving error rates around 7.17% on the Common Voice dataset. The source code is available at https://github.com/joaoalvarenga/language-model-evaluation.
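
The abstract refers to plugging a language model into the decoding stage of a Wav2Vec2 acoustic model. As a minimal sketch only, assuming the HuggingFace transformers, pyctcdecode, and librosa packages, the snippet below shows one common way to do this: CTC beam-search decoding rescored by a KenLM n-gram model. The checkpoint name, the "lm.arpa" path, and the audio file are placeholders, not the artifacts used in the paper.

# Illustrative only: Wav2Vec2 CTC decoding rescored with a KenLM n-gram
# language model via pyctcdecode. Checkpoint, LM path, and audio file
# are placeholders, not the paper's artifacts.
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from pyctcdecode import build_ctcdecoder

MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-portuguese"  # example public checkpoint

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# Order vocabulary tokens by id so decoder labels align with logit columns.
vocab = processor.tokenizer.get_vocab()
labels = [tok for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1])]

# Beam-search decoder that rescores hypotheses with the n-gram LM.
decoder = build_ctcdecoder(labels, kenlm_model_path="lm.arpa")

# Transcribe a 16 kHz audio file.
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits[0].cpu().numpy()
print(decoder.decode(logits))

In pyctcdecode, the alpha and beta parameters of build_ctcdecoder control how strongly the n-gram scores and the word-insertion bonus weigh against the acoustic scores during the beam search.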
URI: https://latamt.ieeer9.org/index.php/transactions/article/view/7464
http://repositorio.ufla.br/jspui/handle/1/59506
Appears in collections: DCC - Articles published in journals

Files associated with this item:
There are no files associated with this item.


Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
