# Ecotopia Citizens 24B (LoRA Adapter)

This model is a LoRA fine-tune of mistralai/Mistral-Small-Instruct-2409.

Note: this repository contains the unmerged LoRA adapter, not standalone model weights. To use it, load the adapter on top of the base model:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Small-Instruct-2409",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "mistral-hackaton-2026/ecotopia-citizens-24b-merged")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-Small-Instruct-2409")
```

To produce merged weights, load the adapter as above and call `model.merge_and_unload()`, which folds the LoRA deltas into the base weights. Do this on a machine with ≥48 GB of RAM.

Adapter source: mistral-hackaton-2026/ecotopia-citizens-small-22b

Model size: 22B params · Format: Safetensors · Tensor type: F16