END-TO-END UZBEK-RUSSIAN SPEECH TRANSLATION WITH SELF-SUPERVISED PRETRAINING

Authors

  • Sukhrob Avezov Sobirovich, PhD, Lecturer, Department of Russian Language and Literature, Bukhara State University

Keywords:

End-to-end speech translation, Uzbek-Russian, self-supervised pretraining, wav2vec 2.0, XLS-R, knowledge distillation, code-switching, low-resource.

Abstract

In this article we study end-to-end Uzbek→Russian speech translation under realistic low-resource and code-switching conditions. We couple a wav2vec-style encoder, pretrained on unlabeled audio, with a Transformer decoder, add auxiliary ASR/CTC objectives in a multi-task setup, and distill knowledge from a strong cascade teacher. Script-aware tokenization and data augmentation reduce data sparsity. On conversational and broadcast test sets the model improves BLEU and chrF at fixed latency and produces fewer morphological and named-entity errors.
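The abstract describes training with a weighted combination of a translation loss, an auxiliary ASR objective, and a distillation term from the cascade teacher. The sketch below illustrates that kind of multi-task objective under stated assumptions: the weights lam_asr and lam_kd, the temperature T, and the function names are illustrative, not the paper's; plain cross-entropy stands in for the CTC term, which in practice would use a dedicated CTC implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(logits, targets):
    # mean negative log-likelihood of the target token ids
    p = softmax(logits)
    rows = np.arange(len(targets))
    return -np.log(p[rows, targets]).mean()

def kd_loss(student_logits, teacher_logits, T=2.0):
    # temperature-scaled KL(teacher || student), scaled by T^2
    # (standard knowledge-distillation form; T=2.0 is an assumed value)
    p_t = softmax(teacher_logits / T)
    log_p_s = np.log(softmax(student_logits / T))
    log_p_t = np.log(p_t)
    return (T * T) * (p_t * (log_p_t - log_p_s)).sum(axis=-1).mean()

def multitask_loss(st_logits, st_targets, asr_logits, asr_targets,
                   teacher_logits, lam_asr=0.3, lam_kd=0.5):
    # L = L_translation + lam_asr * L_ASR + lam_kd * L_distill
    # (weights are illustrative; CE here is a stand-in for CTC)
    l_st = cross_entropy(st_logits, st_targets)
    l_asr = cross_entropy(asr_logits, asr_targets)
    l_kd = kd_loss(st_logits, teacher_logits)
    return l_st + lam_asr * l_asr + lam_kd * l_kd
```

Note that the distillation term vanishes when student and teacher distributions agree, so the teacher only contributes gradient where the student diverges from it.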

Published

2025-09-27

Section

Articles

How to Cite

END-TO-END UZBEK-RUSSIAN SPEECH TRANSLATION WITH SELF-SUPERVISED PRETRAINING. (2025). Web of Teachers: Inderscience Research, 3(9), 79-82. https://webofjournals.com/index.php/1/article/view/5127