TY - JOUR
T1 - A non-autoregressive Chinese-Braille translation approach with CTC loss optimization
AU - Yu, Hailong
AU - Su, Wei
AU - Yang, Yi
AU - Liu, Lei
AU - Yuan, Yongna
AU - Xie, Yingchun
AU - Huang, Tianyuan
N1 - Publisher Copyright:
© 2025 The Authors
PY - 2025/4/15
Y1 - 2025/4/15
N2 - The rise of Neural Machine Translation (NMT) models opens the door to translating Chinese text into Braille, improving information access for visually impaired individuals. However, current NMT models, typically built on encoder–decoder architectures, decode sequentially rather than in parallel. This autoregressive decoding prevents architectures such as the Transformer from carrying their parallel-training speed advantage over to inference: although the Transformer trains in parallel, its inference time complexity remains O(T²), where T is the sequence length. The bottleneck is especially pronounced for Braille, whose character sequences are long. We propose a non-autoregressive Chinese-to-Braille translation model that uses only the encoder, trained with Connectionist Temporal Classification (CTC) loss, to generate the complete Braille sequence in a single parallel pass. This substantially accelerates inference over autoregressive models, with a decoding time complexity of O(1). Alongside the faster inference, translation accuracy also improves: combined with a pre-training technique, our method achieves a BLEU score of 95.10% with a limited dataset of only 2k Chinese-Braille training pairs.
UR - http://www.scopus.com/inward/record.url?scp=85214314596&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85214314596&partnerID=8YFLogxK
DO - 10.1016/j.eswa.2024.126356
M3 - Review article
AN - SCOPUS:85214314596
SN - 0957-4174
VL - 269
JO - Expert Systems with Applications
JF - Expert Systems with Applications
M1 - 126356
ER -
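
For readers who want the shape of the method the abstract describes, below is a minimal sketch of an encoder-only Transformer trained with CTC loss that emits the full output sequence in one parallel pass. This is not the authors' code: the model sizes, the 2x input upsampling, the vocabulary sizes, and all identifiers are illustrative assumptions, and PyTorch is assumed as the framework.

# Minimal sketch (assumptions throughout): encoder-only Transformer + CTC
# loss for non-autoregressive Chinese-to-Braille translation, per the
# abstract. Sizes, upsampling factor, and names are hypothetical.
import torch
import torch.nn as nn

class EncoderOnlyCTCTranslator(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, d_model=256, nhead=4,
                 num_layers=4, upsample=2):
        super().__init__()
        self.upsample = upsample  # hypothetical: lets output exceed input length
        self.embed = nn.Embedding(src_vocab, d_model)
        self.pos = nn.Parameter(torch.zeros(1, 2048, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.proj = nn.Linear(d_model, tgt_vocab)  # index 0 = CTC blank

    def forward(self, src_ids):
        # (B, T) -> (B, T*upsample): gives CTC room for repeats and blanks
        x = self.embed(src_ids).repeat_interleave(self.upsample, dim=1)
        x = x + self.pos[:, : x.size(1)]
        h = self.encoder(x)  # one parallel pass; no autoregressive decoder
        return self.proj(h).log_softmax(-1)

# Training step with CTC loss (illustrative shapes).
model = EncoderOnlyCTCTranslator(src_vocab=6000, tgt_vocab=64)
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
src = torch.randint(1, 6000, (8, 20))            # 8 Chinese sentences, length 20
tgt = torch.randint(1, 64, (8, 30))              # Braille cell ids, length 30
log_probs = model(src).transpose(0, 1)           # CTCLoss expects (T, B, V)
in_lens = torch.full((8,), 40, dtype=torch.long)  # 20 * upsample frames
tgt_lens = torch.full((8,), 30, dtype=torch.long)
loss = ctc(log_probs, tgt, in_lens, tgt_lens)
loss.backward()

# At inference, greedy CTC decoding collapses repeats and drops blanks,
# so the whole sequence is produced without any sequential decoding steps.
best = model(src).argmax(-1)                     # (B, T) frame-wise labels
hyps = [torch.unique_consecutive(s) for s in best]
hyps = [s[s != 0] for s in hyps]                 # remove CTC blanks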