Instructions to use RedHatAI/gemma-3-4b-it-quantized.w8a8 with libraries and local apps.
- Libraries
- Transformers
How to use RedHatAI/gemma-3-4b-it-quantized.w8a8 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="RedHatAI/gemma-3-4b-it-quantized.w8a8")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
pipe(text=messages)

# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("RedHatAI/gemma-3-4b-it-quantized.w8a8")
model = AutoModelForImageTextToText.from_pretrained("RedHatAI/gemma-3-4b-it-quantized.w8a8")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
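To print tokens as they are generated rather than all at once, transformers' TextStreamer can be attached to generate(). A minimal sketch (an addition to the snippet above, reusing its model, processor, and inputs):

# Stream the response to stdout as it is generated
from transformers import TextStreamer

streamer = TextStreamer(processor.tokenizer, skip_prompt=True)
model.generate(**inputs, max_new_tokens=40, streamer=streamer)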
- Local Apps
- vLLM
How to use RedHatAI/gemma-3-4b-it-quantized.w8a8 with vLLM:
Install from pip and serve the model:
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "RedHatAI/gemma-3-4b-it-quantized.w8a8"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "RedHatAI/gemma-3-4b-it-quantized.w8a8",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
                ]
            }
        ]
    }'
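To confirm from Python that the server is up, the OpenAI client can list the served models; a small sketch (assumes the default port used above and that the openai package is installed):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
# The server should report RedHatAI/gemma-3-4b-it-quantized.w8a8
for model in client.models.list():
    print(model.id)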
- SGLang
How to use RedHatAI/gemma-3-4b-it-quantized.w8a8 with SGLang:
Install from pip and serve the model:
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "RedHatAI/gemma-3-4b-it-quantized.w8a8" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "RedHatAI/gemma-3-4b-it-quantized.w8a8",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
                ]
            }
        ]
    }'

Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "RedHatAI/gemma-3-4b-it-quantized.w8a8" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "RedHatAI/gemma-3-4b-it-quantized.w8a8",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
                ]
            }
        ]
    }'

- Docker Model Runner
How to use RedHatAI/gemma-3-4b-it-quantized.w8a8 with Docker Model Runner:
docker model run hf.co/RedHatAI/gemma-3-4b-it-quantized.w8a8
gemma-3-4b-it-quantized.w8a8
Model Overview
- Model Architecture: google/gemma-3-4b-it
- Input: Vision-Text
- Output: Text
- Model Optimizations:
- Weight quantization: INT8
- Activation quantization: INT8
- Release Date: 6/4/2025
- Version: 1.0
- Model Developers: RedHatAI
Quantized version of google/gemma-3-4b-it.
Model Optimizations
This model was obtained by quantizing the weights and activations of google/gemma-3-4b-it to the INT8 data type, ready for inference with vLLM >= 0.8.0.
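As a rough back-of-the-envelope illustration (my own numbers, not a measured benchmark), halving the bytes per weight roughly halves the weight-storage footprint:

# Approximate weight storage for a ~4-billion-parameter model
params = 4e9
print(f"BF16: ~{params * 2 / 1e9:.0f} GB")  # 2 bytes per weight
print(f"INT8: ~{params * 1 / 1e9:.0f} GB")  # 1 byte per weight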
Deployment
Use with vLLM
This model can be deployed efficiently using the vLLM backend, as shown in the example below.
from vllm import LLM, SamplingParams
from vllm.assets.image import ImageAsset
from transformers import AutoProcessor
# Define model name once
model_name = "RedHatAI/gemma-3-4b-it-quantized.w8a8"
# Load image and processor
image = ImageAsset("cherry_blossom").pil_image.convert("RGB")
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
# Build multimodal prompt
chat = [
    {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "What is the content of this image?"}]},
]
# tokenize=False so vLLM receives a prompt string rather than token IDs
prompt = processor.apply_chat_template(chat, add_generation_prompt=True, tokenize=False)
# Initialize model
llm = LLM(model=model_name, trust_remote_code=True)
# Run inference
inputs = {"prompt": prompt, "multi_modal_data": {"image": [image]}}
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
# Display result
print("RESPONSE:", outputs[0].outputs[0].text)
vLLM also supports OpenAI-compatible serving. See the documentation for more details.
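For example, after starting a server with vllm serve "RedHatAI/gemma-3-4b-it-quantized.w8a8" as shown earlier, it can be called with the official OpenAI Python client. A minimal sketch (the base URL and port are vLLM's defaults; the payload mirrors the curl example above):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="RedHatAI/gemma-3-4b-it-quantized.w8a8",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)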
Creation
This model was created with llm-compressor by running the code snippet below:
Model Creation Code
import base64
from io import BytesIO
import torch
from datasets import load_dataset
from transformers import AutoProcessor, Gemma3ForConditionalGeneration
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot
# Load model.
model_id = "google/gemma-3-4b-it"
model = Gemma3ForConditionalGeneration.from_pretrained(
model_id,
device_map="auto",
torch_dtype="auto",
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
# Oneshot arguments
DATASET_ID = "neuralmagic/calibration"
DATASET_SPLIT = {"LLM": "train[:512]"}
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048
# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
ds = ds.shuffle(seed=42)
dampening_frac = 0.05
def data_collator(batch):
    assert len(batch) == 1, "Only batch size of 1 is supported for calibration"
    item = batch[0]
    collated = {}
    for key, value in item.items():
        if isinstance(value, torch.Tensor):
            # Already a tensor: just add the batch dimension
            collated[key] = value.unsqueeze(0)
        elif isinstance(value, list) and isinstance(value[0][0], int):
            # Handle tokenized inputs like input_ids, attention_mask
            collated[key] = torch.tensor(value)
        elif isinstance(value, list) and isinstance(value[0][0], float):
            # Handle possible float sequences
            collated[key] = torch.tensor(value)
        elif isinstance(value, list) and isinstance(value[0][0], torch.Tensor):
            # Handle batched image data (e.g., pixel_values as [C, H, W])
            collated[key] = torch.stack(value)  # -> [1, C, H, W]
        else:
            print(f"[WARN] Unrecognized type in collator for key={key}, type={type(value)}")
    return collated
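# Quick sanity check of the collator (added illustration, not part of the
# original recipe): nested token lists become a batched tensor.
_sample = {"input_ids": [[2, 106, 1645]], "attention_mask": [[1, 1, 1]]}
assert data_collator([_sample])["input_ids"].shape == torch.Size([1, 3])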
# Recipe
recipe = [
    GPTQModifier(
        targets="Linear",
        scheme="W8A8",  # INT8 weights and activations
        ignore=["re:.*lm_head.*", "re:.*embed_tokens.*", "re:vision_tower.*", "re:multi_modal_projector.*"],
        sequential_update=True,
        sequential_targets=["Gemma3DecoderLayer"],
        dampening_frac=dampening_frac,
    )
]
SAVE_DIR = f"{model_id.split('/')[1]}-quantized.w8a8"
# Perform oneshot
oneshot(
model=model,
tokenizer=model_id,
dataset=ds,
recipe=recipe,
max_seq_length=MAX_SEQUENCE_LENGTH,
num_calibration_samples=NUM_CALIBRATION_SAMPLES,
trust_remote_code_model=True,
data_collator=data_collator,
output_dir=SAVE_DIR
)
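A quick way to confirm the export worked (my own check, not part of the original recipe) is to read the quantization_config that llm-compressor writes into the saved config.json, reusing SAVE_DIR from the script above:

import json, os

with open(os.path.join(SAVE_DIR, "config.json")) as f:
    cfg = json.load(f)
# Expect a compressed-tensors quantization_config describing INT8 weights/activations
print(json.dumps(cfg.get("quantization_config", {}), indent=2))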
Evaluation
The model was evaluated using lm_evaluation_harness on the OpenLLM v1 text benchmark. The evaluations were conducted using the following command:
Evaluation Commands
OpenLLM v1
lm_eval \
--model vllm \
--model_args pretrained="<model_name>",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=<n>,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True,enforce_eager=True \
--tasks openllm \
--batch_size auto
Accuracy
| Category | Metric | google/gemma-3-4b-it | RedHatAI/gemma-3-4b-it-quantized.w8a8 | Recovery (%) |
|---|---|---|---|---|
| OpenLLM V1 | ARC Challenge | 56.57% | 56.31% | 99.55% |
| OpenLLM V1 | GSM8K | 76.12% | 72.93% | 95.82% |
| OpenLLM V1 | Hellaswag | 74.96% | 74.35% | 99.19% |
| OpenLLM V1 | MMLU | 58.38% | 57.58% | 98.63% |
| OpenLLM V1 | TruthfulQA (mc2) | 51.87% | 51.60% | 99.49% |
| OpenLLM V1 | Winogrande | 70.32% | 71.11% | 101.12% |
| OpenLLM V1 | Average Score | 64.70% | 63.98% | 98.89% |
| Vision Evals | MMMU (val) | 39.89% | 40.44% | 101.38% |
| Vision Evals | ChartQA | 50.76% | 49.80% | 98.11% |
| Vision Evals | Average Score | 45.33% | 45.12% | 99.74% |
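Recovery is the quantized score expressed as a percentage of the baseline score; for example, reproducing the Hellaswag row above:

# Recovery (%) = quantized / baseline * 100
baseline, quantized = 74.96, 74.35
print(f"{quantized / baseline * 100:.2f}%")  # 99.19%, matching the table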