Instructions to use ai4bharat/MultiIndicParaphraseGeneration with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ai4bharat/MultiIndicParaphraseGeneration with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ai4bharat/MultiIndicParaphraseGeneration")
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/MultiIndicParaphraseGeneration")
```

- Notebooks
- Google Colab
- Kaggle
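
The loading snippet above can be extended into an actual paraphrase-generation call. The sketch below is hedged: the `sentence </s> <2xx>` input format and the `<2hi>`-style target-language tags follow the IndicBART family convention that this model is based on, and should be checked against the model card; `build_input` and `paraphrase` are illustrative helpers, not part of the Transformers API.

```python
# Sketch of paraphrase generation with ai4bharat/MultiIndicParaphraseGeneration.
# Assumption (IndicBART convention, verify against the model card): the encoder
# input ends with "</s> <2xx>" and decoding starts from the same language tag.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "ai4bharat/MultiIndicParaphraseGeneration"


def build_input(sentence: str, lang_tag: str) -> str:
    """Append the end-of-sentence marker and target-language tag."""
    return f"{sentence} </s> <{lang_tag}>"


def paraphrase(sentence: str, lang_tag: str = "2hi", num_beams: int = 5) -> str:
    """Generate one paraphrase of `sentence` in the language given by `lang_tag`."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)

    # Special tokens are already part of the formatted string, so we disable
    # the tokenizer's own special-token insertion.
    inputs = tokenizer(
        build_input(sentence, lang_tag),
        add_special_tokens=False,
        return_tensors="pt",
    )
    out = model.generate(
        inputs.input_ids,
        num_beams=num_beams,
        max_length=64,
        decoder_start_token_id=tokenizer.convert_tokens_to_ids(f"<{lang_tag}>"),
    )
    return tokenizer.decode(out[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Hindi example sentence ("Delhi is situated on the banks of the Yamuna").
    print(paraphrase("दिल्ली यमुना नदी के तट पर स्थित है।", "2hi"))
```

Beam search (`num_beams`) is a common choice here because paraphrase quality tends to matter more than sampling diversity; for multiple distinct paraphrases, `num_return_sequences` can be raised alongside it.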
- Xet hash: 69b010ba7cc5db164bea424f3b0b4658d5d02b1b733f236edcaa800d8e4fda1b
- Size of remote file: 976 MB
- SHA256: eaa1d198c97b09cae2f917b1656eb7123c7675a149ab261893779512ac46b7d6