Gemma-3-1B Editorial Analyzer (Q4_K_M)

🔍 Model Overview

This is a highly specialized fine-tune of Google's Gemma-3-1B-it, engineered specifically for high-throughput, low-latency editorial analysis on commodity hardware (CPU).

It is designed to function as the core intelligence layer of an automated news aggregation pipeline, processing 20+ articles every 5 minutes within a strict compute budget (Hugging Face Free Tier / 2 vCPU).

🛠️ Engineering Methodology (The Training Journey)

To achieve the required throughput and accuracy on limited hardware, the model was developed through a rigorous 3-stage iterative process:

Phase 1: Baseline Transfer Learning

  • Goal: Establish ground truth for editorial tone and bias detection.
  • Dataset: 525 pairs of raw article content matched with human-curated editorial JSON.
  • Result: Standard LoRA fine-tuning (Rank=8). Training loss fell from 3.62 → 2.14 and stabilized. The model learned the JSON syntax but struggled with rare edge cases (see the configuration sketch below).
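
For reference, a minimal sketch of an equivalent adapter configuration using Hugging Face peft (training was actually performed with Unsloth; the target modules and dropout shown here are illustrative assumptions, not the recorded training settings):

from peft import LoraConfig

# Rank and alpha match the values listed under Technical Specifications.
# target_modules is an assumption: the attention projections commonly
# adapted for Gemma-style models.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)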

Phase 2: Synthetic Data Augmentation

  • Goal: Solve class imbalance (biased vs. unbiased articles) and improve generalization.
  • Method: Applied Programmatic Data Augmentation (inspired by SDV/Data Synthesizer principles) to statistically oversample under-represented classes (e.g., highly biased political articles).
  • Impact: Dataset expanded from 525 → 1,021 rows. This targeted "scattering" of data points prevented the model from overfitting to the majority class ("neutral" tone). A sketch of the class-balancing step follows this list.
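
The actual augmentation synthesized new rows rather than simply duplicating existing ones, but the class-balancing idea can be sketched with plain resampling in pandas (the `is_biased` column and the row counts here are illustrative assumptions, not the real dataset schema):

import pandas as pd

# Toy stand-in for the labelled editorial dataset.
df = pd.DataFrame({
    "article": ["text a", "text b", "text c", "text d"],
    "is_biased": [False, False, False, True],
})

# Oversample the minority class until it matches the majority class size,
# then shuffle so the trainer does not see the classes in blocks.
majority = df[df["is_biased"] == False]
minority = df[df["is_biased"] == True]
oversampled = minority.sample(n=len(majority), replace=True, random_state=42)
balanced = pd.concat([majority, oversampled]).sample(frac=1, random_state=42)

print(f"{len(df)} -> {len(balanced)} rows")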

Phase 3: "Headless" Schema Decoupling (Final Architecture)

  • Goal: Maximize token generation speed for CPU inference.
  • Innovation: A Schema-Agnostic / Headless output strategy.
  • The Logic: Instead of forcing the LLM to generate expensive JSON tokens ({ "sentiment": ... }), the model was retrained to output a highly compressed, ordered string stream (e.g., Politics ||| 5 ||| True).
  • Benefit: This decoupled the reasoning logic from the formatting layer. Serialization to JSON is handled by the high-speed Python application layer, reducing the token load on the LLM by ~40% and keeping results valid even if the System Prompt schema evolves (an example training record is shown below).
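
To make the headless objective concrete, here is a hypothetical training record (the field values and summary are invented for illustration; the field order matches the reconstruction code in the Usage section below):

# Hypothetical prompt/target pair for the headless training objective.
record = {
    "prompt": "<ARTICLE>\nThe Prime Minister announced...\n</ARTICLE>",
    # Target stream: Category ||| Sentiment ||| Biased? ||| Scale ||| Summary
    "target": "Politics ||| 5 ||| True ||| 4 ||| The PM announced a new policy package.",
}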

📊 Technical Specifications

  • Architecture: Gemma 3 (1B parameters)
  • Fine-Tuning: LoRA (Rank 8, Alpha 16) via Unsloth
  • Quantization: GGUF 4-bit Medium (Q4_K_M)
  • Context Window: 2048 tokens
  • Inference Speed: ~150-200ms/token (CPU optimized)

💻 Usage (Python)

Requires llama-cpp-python and huggingface_hub. The logic below handles the "Headless" reconstruction:

from llama_cpp import Llama
from huggingface_hub import hf_hub_download
import json

# 1. Download & Load
model_path = hf_hub_download(
    repo_id="YOUR_USERNAME/gemma-3-1b-editorial-analyzer",
    filename="model.gguf"
)
# n_threads=2 matches the 2 vCPU free-tier budget described above
llm = Llama(model_path=model_path, n_ctx=2048, n_threads=2, verbose=False)

# 2. Strict Input Format
article = "The Prime Minister announced..."
prompt = f"<ARTICLE>\n{article}\n</ARTICLE>"

# 3. Efficient Inference
output = llm(prompt, max_tokens=256, stop=["<eos>"], echo=False)
raw_stream = output['choices'][0]['text']

# 4. The "Headless" Reconstruction
# Assumes structure: Category | Sentiment | Biased? | Scale | Summary
parts = raw_stream.split(" ||| ")
result = {
    "category": parts[0].strip(),
    "sentiment": int(parts[1]),
    "is_biased": parts[2].strip() == "True",
    "summary": parts[4].strip()
}
print(json.dumps(result, indent=2))
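
In the aggregation pipeline (20+ articles per cycle), it is worth validating the stream before serialization so one malformed generation does not break the batch. A minimal sketch, assuming the five-field structure above (the None fallback is an illustrative choice, not part of the model card):

def parse_headless(raw_stream: str):
    """Parse a headless stream into a dict, or return None if it is malformed."""
    parts = [p.strip() for p in raw_stream.split("|||")]
    if len(parts) != 5:
        return None  # let the pipeline skip or retry this article
    try:
        return {
            "category": parts[0],
            "sentiment": int(parts[1]),
            "is_biased": parts[2] == "True",
            "scale": parts[3],
            "summary": parts[4],
        }
    except ValueError:
        return None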