Preprint / Working Paper, Year: 2025

LLM Reasoning for Machine Translation: Synthetic Data Generation over Thinking Tokens

Abstract

Large reasoning models (LRMs) have opened up new possibilities for problem solving by devising a natural-language thought process before answering a query. While their capabilities are well established on mathematics and coding tasks, their impact on machine translation (MT) remains under-explored. In this work, we explore the benefits of generating intermediate tokens when performing MT across multiple language pairs at different resource levels and under multiple setups. We find that “thinking tokens” do not help LRMs perform MT better. This result extends to models fine-tuned to reason before translating using distilled chains of thought (CoT) inspired by the practices of human translators. Specifically, fine-tuning a model with synthetic CoT explanations that detail how to translate step by step does not outperform standard input-output fine-tuning. However, constructing the CoT from MT prompting strategies does yield improvements. Our findings underscore that the contribution of a CoT during fine-tuning depends heavily on whether it contains translation attempts. More broadly, our results suggest that using a teacher model to refine target translations or to expand parallel corpora is more impactful than distilling its CoT explanations into “thinking” MT models.
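To make the fine-tuning setup described above concrete, the sketch below shows one plausible way to build a synthetic CoT training example for MT: a teacher model is asked for a step-by-step explanation, which is then packaged together with the reference translation as the student's target response. This is a minimal illustration, not the authors' code; the `teacher` object, its `generate(prompt)` method, and the `<think>` tags are all assumptions made for the example.

```python
# Hypothetical sketch: constructing a synthetic CoT fine-tuning example for MT.
# Assumes a teacher model client `teacher` exposing generate(prompt) -> str.

def build_cot_example(src: str, tgt: str, src_lang: str, tgt_lang: str, teacher) -> dict:
    """Package a (prompt, response) pair where the response contains a
    teacher-written step-by-step explanation followed by the reference
    translation, so the student learns to 'think' before translating."""
    cot_prompt = (
        f"Explain, step by step, how a professional translator would render "
        f"the following {src_lang} sentence into {tgt_lang}, then give the "
        f"final translation.\n\nSource: {src}"
    )
    explanation = teacher.generate(cot_prompt)  # assumed teacher interface

    prompt = f"Translate the following sentence from {src_lang} to {tgt_lang}:\n{src}"
    # The response ends with the reference target, anchoring the CoT to a
    # known-correct translation rather than to the teacher's own attempt.
    response = f"<think>\n{explanation}\n</think>\n{tgt}"
    return {"prompt": prompt, "response": response}
```

Under the abstract's findings, a variant of this construction in which the CoT itself contains intermediate translation attempts (e.g. following MT prompting strategies) would be the setting expected to help, whereas purely explanatory step-by-step CoTs did not outperform standard input-output fine-tuning.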

Main file: 2026___LLM_Reasoning_for_Machine_Translation.pdf (1.19 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-05318507 , version 1 (09-12-2025)
hal-05318507 , version 2 (29-01-2026)


Cite

Armel Randy Zebaze, Rachel Bawden, Benoît Sagot. LLM Reasoning for Machine Translation: Synthetic Data Generation over Thinking Tokens. 2025. ⟨hal-05318507v2⟩