A non-autoregressive Chinese-Braille translation approach with CTC loss optimization

Hailong Yu, Wei Su, Yi Yang, Lei Liu, Yongna Yuan, Yingchun Xie, Tianyuan Huang

Research output: Contribution to journal › Review article › peer-review

Abstract

The rise of Neural Machine Translation (NMT) models opens doors for translating Chinese text into Braille, improving information access for visually impaired individuals. However, current NMT models, typically built on encoder–decoder architectures, decode sequentially rather than in parallel. This autoregressive decoding prevents architectures such as the Transformer from carrying their training-time speed advantages over to inference: while the Transformer trains in parallel, its inference time complexity remains O(T²), where T is the sequence length. This bottleneck is particularly significant for Braille, whose character sequences are long. We propose a non-autoregressive Chinese-to-Braille translation model that employs only the encoder, trained with Connectionist Temporal Classification (CTC) loss, to generate the complete Braille sequence in a single parallel step. This reduces the number of sequential decoding steps to O(1) and yields a substantial inference speedup over autoregressive models. Remarkably, translation accuracy improves alongside inference speed: with a pre-training technique, our method achieves a BLEU score of 95.10% using only 2k Chinese-Braille training pairs.
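To illustrate the decoding side of the CTC approach described above, here is a minimal sketch (not the authors' implementation) of greedy CTC decoding: the encoder emits one label per input frame in parallel, and the CTC collapse rule — merge consecutive repeats, then drop the blank symbol — recovers the output sequence in a single linear pass, with no autoregressive loop. The function name and the choice of 0 as the blank id are assumptions for this example.

```python
def ctc_greedy_decode(frame_ids, blank=0):
    """Collapse a frame-level argmax sequence into an output sequence
    using the CTC rule: merge consecutive repeats, then drop blanks.
    `blank=0` is an assumed convention for this sketch."""
    out = []
    prev = None
    for t in frame_ids:
        if t != prev:          # merge repeated labels
            if t != blank:     # drop the blank symbol
                out.append(t)
        prev = t
    return out

# All frame labels are produced simultaneously by the encoder, so the
# only sequential work at inference is this single linear collapse.
print(ctc_greedy_decode([0, 3, 3, 0, 0, 5, 5, 5, 0, 7]))  # -> [3, 5, 7]
```

Because every frame's label is predicted independently, the entire output is available after one encoder forward pass, which is what gives the non-autoregressive model its constant number of sequential decoding steps.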

Original language: English (US)
Article number: 126356
Journal: Expert Systems With Applications
Volume: 269
DOIs
State: Published - Apr 15 2025

All Science Journal Classification (ASJC) codes

  • General Engineering
  • Computer Science Applications
  • Artificial Intelligence

