How to use bertin-project/bertin-base-random with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="bertin-project/bertin-base-random")

# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bertin-project/bertin-base-random")
model = AutoModelForMaskedLM.from_pretrained("bertin-project/bertin-base-random")
```
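As a quick sketch of the fill-mask pipeline in use, the snippet below queries the model with a Spanish sentence containing the RoBERTa-style `<mask>` token. The example sentence is illustrative, not from the training data:

```python
from transformers import pipeline

pipe = pipeline("fill-mask", model="bertin-project/bertin-base-random")

# Hypothetical example sentence; <mask> marks the token to predict.
preds = pipe("Me gusta mucho la <mask> española.")
for p in preds:
    print(p["token_str"], round(p["score"], 3))
```

By default the pipeline returns the top 5 candidate fills, each with a `token_str` and a `score`.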
This is a RoBERTa-base model trained from scratch in Spanish.
The training dataset is mC4, randomly subsampled to a total of about 50 million documents.
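Uniform random subsampling over a corpus too large to hold in memory can be sketched with reservoir sampling; this is an illustrative sketch, not the actual training-data script, and `reservoir_sample` is a hypothetical helper:

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Uniformly sample k items from an iterable of unknown length."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)
        else:
            # Replace an existing element with probability k / (i + 1).
            j = rng.randint(0, i)
            if j < k:
                sample[j] = item
    return sample

# Toy stand-in for streaming mC4 documents.
docs = (f"doc-{i}" for i in range(1_000_000))
subset = reservoir_sample(docs, k=5)
```

Each document in the stream ends up in the sample with equal probability, so no full pass over the corpus is needed beforehand to count it.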
This model was trained for 230,000 steps (early-stopped before the intended 250,000 steps).
Please see our main card for more information.
This is part of the Flax/JAX Community Week, organised by Hugging Face, with TPU usage sponsored by Google.