Bolmo 1B

We introduce Bolmo, the first family of competitive fully open byte-level language models (LMs) at the 1B and 7B parameter scales.

These models are byteified with a short additional training procedure that starts from pretrained models in the Olmo series.
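
As a minimal sketch of the byte-level idea (illustrative only; Bolmo's actual vocabulary layout and special-token handling may differ), a byte-level tokenizer maps each UTF-8 byte to its own token id:

# Illustrative only: byte-level tokenization maps one UTF-8 byte to one id.
# Real byte-level vocabularies usually reserve additional ids for special tokens.
text = "Héllo"
byte_ids = list(text.encode("utf-8"))
print(byte_ids)       # [72, 195, 169, 108, 108, 111] -- one id per byte
print(len(byte_ids))  # 6 ids for 5 characters, since "é" occupies two bytes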

We are releasing all code, checkpoints, and associated training details.

See our technical report for details: https://allenai.org/papers/bolmo.

Name                      Model     Starting Point
Bolmo 1B (you are here)   Bolmo-1B  OLMo-2-1B
Bolmo 7B                  Bolmo-7B  Olmo-3-7B

Installation

Bolmo was tested with transformers 4.57.3 and Python 3.11:

pip install "transformers>=4.57.3"

Bolmo additionally requires the xlstm package (which requires Python>=3.11):

pip install xlstm==2.0.4
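
After installing, you can confirm the interpreter and both dependencies satisfy the requirements above (a quick sketch using only the standard library):

import sys
import importlib.metadata as md

# Check the environment against the requirements listed above.
assert sys.version_info >= (3, 11), "xlstm requires Python >= 3.11"
print("transformers:", md.version("transformers"))  # expected >= 4.57.3
print("xlstm:", md.version("xlstm"))                # expected 2.0.4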

Inference

You can use Bolmo with the standard HuggingFace transformers library:

from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
bolmo = AutoModelForCausalLM.from_pretrained("allenai/Bolmo-1B", trust_remote_code=True).to(device)
tokenizer = AutoTokenizer.from_pretrained("allenai/Bolmo-1B", trust_remote_code=True)

message = ["Language modeling is "]
input_ids = tokenizer(message, return_tensors="pt")["input_ids"].to(device)

# `max_new_tokens` is the number of bytes to generate
response = bolmo.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.1)
print(tokenizer.decode(response[0], skip_special_tokens=True))
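
Since the model operates directly on bytes, max_new_tokens counts bytes rather than (sub)words. A quick way to see this at the tokenizer level (assuming roughly one token per UTF-8 byte, plus any special tokens the tokenizer prepends):

# Token counts should track UTF-8 byte counts, not word counts.
for s in ["hello", "héllo"]:
    n_ids = len(tokenizer(s)["input_ids"])
    print(s, "bytes:", len(s.encode("utf-8")), "tokens:", n_ids)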

Model Description

  • Developed by: Allen Institute for AI (Ai2)
  • Model type: a byte-level autoregressive language model.
  • Language(s) (NLP): English
  • License: This model is licensed under Apache 2.0. It is intended for research and educational use in accordance with Ai2's Responsible Use Guidelines.
  • Contact: Press: [email protected]
  • Date cutoff: Dec. 2024.

Model Sources

  • Paper: https://allenai.org/papers/bolmo

Bias, Risks, and Limitations

Like any base language model or fine-tuned model without safety filtering, these models can easily be prompted to generate harmful or sensitive content. Such content may also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks when applying this technology. Additionally, statements from Bolmo, as from any LLM, are often inaccurate, so facts should be verified.

Citation

Forthcoming!
