Text Classification
Transformers
PyTorch
xlm-roberta
language classification
text-embeddings-inference
Instructions to use nikitast/multilang-classifier-roberta with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use nikitast/multilang-classifier-roberta with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="nikitast/multilang-classifier-roberta")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("nikitast/multilang-classifier-roberta")
model = AutoModelForSequenceClassification.from_pretrained("nikitast/multilang-classifier-roberta")
```
- Notebooks
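A text-classification pipeline returns a list of dictionaries with `label` and `score` keys. The sketch below shows how to pick the top prediction from such output; the label names used here are hypothetical placeholders, since the actual label set depends on this model's training.

```python
# Minimal sketch of post-processing pipeline output.
# `preds` mimics the list-of-dicts a text-classification pipeline returns;
# the language codes "ru" and "en" are hypothetical example labels.
preds = [{"label": "ru", "score": 0.97}, {"label": "en", "score": 0.03}]

# Select the prediction with the highest confidence score
top = max(preds, key=lambda p: p["score"])
print(top["label"])  # ru
```

In real use, `preds` would come from a call such as `pipe("your input text")`.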
- Google Colab
- Kaggle