πŸ¦™ LLaMA 3.2 1B β€” Financial Agent (Fine-tuned with Unsloth)

This model is a fine-tuned version of LLaMA 3.2 1B Instruct, trained on a financial transaction recording dataset. It parses natural-language requests into structured actions for expense logging and income tracking.

Training details

  • Base model: meta-llama/Llama-3.2-1B-Instruct
  • Fine-tuning: PEFT (LoRA) via Unsloth
  • Dataset: finbot_augmented_fixed.jsonl
  • LoRA hyperparameters: r=32, lora_alpha=64, lora_dropout=0.05
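The training setup above can be sketched with Unsloth roughly as follows. This is a minimal sketch, not the exact training script: the `max_seq_length`, 4-bit loading, and `target_modules` choices are assumptions not stated in this card; only the base model and the r/alpha/dropout values come from the list above.

```python
from unsloth import FastLanguageModel

# Load the base model (4-bit loading and sequence length are assumptions).
model, tokenizer = FastLanguageModel.from_pretrained(
    "meta-llama/Llama-3.2-1B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters with the hyperparameters listed above.
# target_modules lists the usual Llama attention/MLP projections (an assumption).
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```

From here the model would be passed to a trainer (e.g. TRL's SFTTrainer) together with the dataset.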

Usage

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained("raditioah/llama-3.2-1B-financial-agent")
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference mode

# "Catat pengeluaran makan siang 25000" = "Record a lunch expense of 25,000"
messages = [{"role": "user", "content": "Catat pengeluaran makan siang 25000"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))