rugpt3medium_based_on_gpt2 (ONNX)

This is an ONNX version of ai-forever/rugpt3medium_based_on_gpt2. It was automatically converted and uploaded using this Hugging Face Space.

Usage with Transformers.js

See the pipeline documentation for text-generation: https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.TextGenerationPipeline


rugpt3medium_based_on_gpt2

The model architecture design, pretraining, and evaluation are documented in our preprint: A Family of Pretrained Transformer Language Models for Russian.

The model was pretrained by the SberDevices team with the Transformers library, using a sequence length of 1024, on 80B tokens for 3 epochs. It was then finetuned with a context size of 2048 tokens.

Total training time was around 16 days on 64 GPUs.
The final perplexity on the test set is 17.4.

Authors

Cite us

@misc{zmitrovich2023family,
      title={A Family of Pretrained Transformer Language Models for Russian}, 
      author={Dmitry Zmitrovich and Alexander Abramov and Andrey Kalmykov and Maria Tikhonova and Ekaterina Taktasheva and Danil Astafurov and Mark Baushenko and Artem Snegirev and Tatiana Shavrina and Sergey Markov and Vladislav Mikhailov and Alena Fenogenova},
      year={2023},
      eprint={2309.10931},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}