ByT5-Small Singlish → Sinhala
Romanized Singlish to Sinhala script translation using a LoRA fine-tuned ByT5-Small model. Adapters are merged into the base weights — no PEFT required at inference.
Training Approach
The model is trained using a 3-phase curriculum on top of google/byt5-small with LoRA (r=32, α=64) applied to all attention and feed-forward projection layers.
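For orientation, the arithmetic of a LoRA-adapted projection and of the post-training merge can be sketched in NumPy. This is a toy illustration with small dimensions, not the PEFT implementation; the card's actual setting is r=32, α=64, giving a scaling factor of α/r = 2.

```python
import numpy as np

# Toy dimensions for illustration only.
d_out, d_in, r, alpha = 16, 16, 4, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen base projection weight
A = rng.standard_normal((r, d_in)) * 0.01   # LoRA down-projection (trained)
B = rng.standard_normal((d_out, r)) * 0.01  # LoRA up-projection (trained; zero-initialized before training)

x = rng.standard_normal(d_in)

# With the adapter attached: y = W x + (alpha/r) * B A x
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))

# After merging, the low-rank update is folded into W and the adapter
# can be discarded entirely -- which is why no PEFT is needed at inference.
W_merged = W + (alpha / r) * (B @ A)
y_merged = W_merged @ x

assert np.allclose(y_adapter, y_merged)
```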
Phase 1 — Phonetic Foundation: 700K samples from the phonetic corpus. The model learns the core character-level mapping between romanized Singlish and Sinhala script at a higher learning rate (5e-5). Validated on phonetic_test.csv.

Phase 2 — Augmentation + Adhoc Data: A mix of 300K phonetic samples and the adhoc corpus (oversampled 2×). A custom augmenter synthetically mutates 30% of inputs — swapping vowels (a↔e, i↔ee), substituting consonants (th↔t↔d, v↔w), and elongating characters — to simulate real-world romanization inconsistency. Learning rate drops to 2.5e-5. Validation shifts to adhoc_test.csv.

Phase 3 — Specialization: A final low-LR pass (1.5e-5) over the adhoc data alone, with no augmentation, to sharpen output on the target distribution.
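The Phase 2 augmenter itself is not published with the model. A minimal sketch of the kind of mutation it describes — vowel swaps, consonant substitutions, and character elongation applied at random — might look like this (the mutation tables and probabilities here are assumptions, not the actual training code):

```python
import random

# Illustrative mutation tables (assumed; the real augmenter's rules are not published).
VOWEL_SWAPS = {"a": "e", "e": "a", "i": "ee", "ee": "i"}
CONSONANT_SWAPS = {"th": "t", "t": "d", "d": "th", "v": "w", "w": "v"}
SWAPS = {**VOWEL_SWAPS, **CONSONANT_SWAPS}

def augment(text: str, p: float = 0.3, seed=None) -> str:
    """Randomly mutate a romanized string to simulate romanization inconsistency."""
    rng = random.Random(seed)
    out = []
    i = 0
    while i < len(text):
        # Try two-character units first (e.g. "th", "ee"), then single characters.
        for span in (2, 1):
            chunk = text[i:i + span]
            if chunk in SWAPS and rng.random() < p:
                out.append(SWAPS[chunk])
                i += span
                break
        else:
            ch = text[i]
            # Occasionally elongate a vowel: "a" -> "aa".
            if ch in "aeiou" and rng.random() < p / 3:
                out.append(ch * 2)
            else:
                out.append(ch)
            i += 1
    return "".join(out)

# With p=0 the input passes through unchanged; with p=1 every swappable
# unit is mutated, e.g. augment("mata", p=1.0) -> "mede".
```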
Best-model selection across all phases is driven by the lowest CER (Character Error Rate). Early stopping with a patience of 3 is enabled in every phase. After training, the LoRA adapters are merged into the base model weights.
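CER is the checkpoint-selection metric. The card does not publish its evaluation code; a sketch of the standard definition — character-level Levenshtein distance divided by reference length — is:

```python
def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance via dynamic programming (two-row variant)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution (free if chars match)
            ))
        prev = curr
    return prev[-1]

def cer(hypothesis: str, reference: str) -> float:
    """Character Error Rate: edits needed to reach the reference, per reference character."""
    if not reference:
        return float(bool(hypothesis))
    return levenshtein(hypothesis, reference) / len(reference)

# cer("abc", "abc") -> 0.0; a perfect model minimizes this across the validation set.
```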
The choice of ByT5 over subword-based models is motivated by the orthographic instability of Singlish romanization — byte-level processing eliminates tokenizer fragility and out-of-vocabulary issues entirely, consistent with the approach discussed in Sumanathilaka et al. (2025) for romanized Sinhala transliteration.
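The byte-level point can be illustrated without the tokenizer at all: ByT5 consumes raw UTF-8 bytes (the actual tokenizer simply offsets each byte by 3 to make room for special tokens), so every romanization variant — and every Sinhala character — maps to a byte sequence with no out-of-vocabulary case. A small illustration (the example strings are my own):

```python
# Spelling variants of the same word are just different byte sequences;
# there is no <unk> token to fall into.
for variant in ["mata", "matta", "maata"]:
    print(variant, "->", list(variant.encode("utf-8")))
# mata -> [109, 97, 116, 97]

# Sinhala script behaves identically: each character in the U+0D80..U+0DFF
# block encodes to three UTF-8 bytes.
word = "මට"
print(word, "->", list(word.encode("utf-8")))
```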
Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("savinugunarathna/ByT5-Small-fine-tuned2")
model = AutoModelForSeq2SeqLM.from_pretrained("savinugunarathna/ByT5-Small-fine-tuned2")

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()

def translate(text: str) -> str:
    prompt = f"translate Singlish to Sinhala: {text}"
    inputs = tokenizer(prompt, return_tensors="pt", max_length=512, truncation=True).to(device)
    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Interactive loop — type 0 to exit
while True:
    user_input = input("Singlish: ").strip()
    if user_input == "0":
        break
    if user_input:
        print(f"Sinhala: {translate(user_input)}\n")
```
Citation
If you use this model, please cite:
```bibtex
@misc{gunarathna2025byt5singlish,
  title={ByT5-Small Singlish to Sinhala: A Three-Phase Curriculum Approach with LoRA Fine-Tuning},
  author={Gunarathna, Savinu},
  year={2025},
  howpublished={Hugging Face Model Hub},
  note={\url{https://huggingface.co/savinugunarathna/ByT5-Small-fine-tuned2}}
}
```
References
```bibtex
@article{sumanathilaka2025swa,
  title={Swa-bhasha Resource Hub: Romanized Sinhala to Sinhala Transliteration Systems and Data Resources},
  author={Sumanathilaka, Deshan and Perera, Sameera and Dharmasiri, Sachithya and Athukorala, Maneesha and Herath, Anuja Dilrukshi and Dias, Rukshan and Gamage, Pasindu and Weerasinghe, Ruvan and Priyadarshana, YHPP},
  journal={arXiv preprint arXiv:2507.09245},
  year={2025}
}

@inproceedings{Nsina2024,
  title={{NSINA: A News Corpus for Sinhala}},
  author={Hettiarachchi, Hansi and Premasiri, Damith and Uyangodage, Lasitha and Ranasinghe, Tharindu},
  booktitle={The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
  year={2024},
  month={May}
}

@article{ranasinghe2022sold,
  title={SOLD: Sinhala Offensive Language Dataset},
  author={Ranasinghe, Tharindu and Anuradha, Isuri and Premasiri, Damith and Silva, Kanishka and Hettiarachchi, Hansi and Uyangodage, Lasitha and Zampieri, Marcos},
  journal={arXiv preprint arXiv:2212.00851},
  year={2022}
}
```