Bolmo 7B
We introduce Bolmo, the first family of competitive fully open byte-level language models (LMs) at the 1B and 7B parameter scales.
These models are byteified using a short additional training procedure which starts from pretrained models in the Olmo series.
We are releasing all code, checkpoints, and associated training details.
See our technical report for details: https://allenai.org/papers/bolmo.
Installation
Bolmo was tested with transformers 4.57.3 and Python 3.11:
pip install "transformers>=4.57.3"
Bolmo additionally requires the xlstm package (which needs Python>=3.11):
pip install xlstm==2.0.4
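To confirm the environment is set up correctly, you can print the installed versions; this is a minimal sketch using only the standard library, checking against the versions named above:
from importlib.metadata import version
# Bolmo was tested with transformers 4.57.3; xlstm is pinned to 2.0.4 above.
print("transformers:", version("transformers"))
print("xlstm:", version("xlstm"))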
Inference
You can use Bolmo with the standard HuggingFace transformers library:
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

# Load the model and its byte-level tokenizer (trust_remote_code is required)
bolmo = AutoModelForCausalLM.from_pretrained("allenai/Bolmo-7B", trust_remote_code=True).to(device)
tokenizer = AutoTokenizer.from_pretrained("allenai/Bolmo-7B", trust_remote_code=True)

message = ["Language modeling is "]
input_ids = tokenizer(message, return_tensors="pt")["input_ids"].to(device)

# `max_new_tokens` is the number of bytes to generate, since Bolmo operates on bytes
response = bolmo.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.1)
print(tokenizer.decode(response[0], skip_special_tokens=True))
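Because Bolmo is byte-level, input length tracks the UTF-8 byte count of the text rather than a subword vocabulary. A quick sketch to illustrate, reusing the tokenizer loaded above; the exact offset from special tokens is an assumption worth checking against the tokenizer config:
text = "Language modeling is "
ids = tokenizer(text, return_tensors="pt")["input_ids"]
# For a byte-level tokenizer, the sequence length should roughly equal the
# UTF-8 byte count of the input (special tokens may add a few extra ids).
print(len(text.encode("utf-8")), ids.shape[-1])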
Model Description
- Developed by: Allen Institute for AI (Ai2)
- Model type: a byte-level autoregressive language model.
- Language(s) (NLP): English
- License: This model is licensed under Apache 2.0. It is intended for research and educational use in accordance with Ai2's Responsible Use Guidelines.
- Contact: Press: [email protected]
- Date cutoff: Dec. 2024
Model Sources
- Data: https://huggingface.co/datasets/allenai/bolmo_mix
- Code: https://github.com/allenai/bolmo-core
- Paper: https://allenai.org/papers/bolmo
Bias, Risks, and Limitations
Like any base or fine-tuned language model without safety filtering, these models can easily be prompted to generate harmful and sensitive content. Such content may also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks when applying this technology. Additionally, statements from Bolmo, as from any LLM, are often inaccurate, so facts should be verified.
Citation
Forthcoming!
Base Model
- allenai/Olmo-3-1025-7B