DeepSeek-V2-Lite-FP8-BLOCK-padded
This model is an FP8 block-quantized version of deepseek-ai/DeepSeek-V2-Lite with padding support for dimensions that are not divisible by the block size.
Overview
- Base Model: deepseek-ai/DeepSeek-V2-Lite (16B parameters)
- Quantization: FP8_BLOCK (128x128 block structure)
- Purpose: Demonstrates FP8 block quantization with weight padding for models with dimensions not evenly divisible by block size
Key Feature: Block Quantization Padding
DeepSeek-V2-Lite has intermediate_size=10944, which is not divisible by the block size of 128. This model uses weight padding to handle this:
- Original intermediate_size: 10944
- Padded intermediate_size: 11008 (86 × 128)
The padding is applied during quantization and the config.json reflects the padded dimensions for vLLM compatibility.
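The rounding logic can be sketched with a small helper (the `padded_dim` function here is illustrative, not part of llm-compressor's API):

```python
import math

BLOCK = 128  # FP8_BLOCK quantizes weights in 128x128 tiles

def padded_dim(dim: int, block: int = BLOCK) -> int:
    """Round `dim` up to the next multiple of `block`."""
    return math.ceil(dim / block) * block

# DeepSeek-V2-Lite's intermediate_size is not block-divisible:
print(padded_dim(10944))          # 11008 (= 86 * 128)
print(padded_dim(10944) - 10944)  # 64 zero-padded columns
```

During quantization, the extra 64 columns are filled with zeros so every 128x128 block is complete.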
Usage with vLLM
from vllm import LLM, SamplingParams
llm = LLM(
    model="Etelis/DeepSeek-V2-Lite-FP8-BLOCK-padded",
    trust_remote_code=True,
    tensor_parallel_size=1,
)
sampling_params = SamplingParams(max_tokens=100, temperature=0.7)
output = llm.generate(["Hello, world!"], sampling_params)
print(output[0].outputs[0].text)
Requirements: a GPU with compute capability 8.9 or newer (Ada Lovelace or Hopper, e.g. H100) for FP8 block quantization support.
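A quick way to check the hardware requirement before loading the model is to compare the device's compute capability against the FP8 threshold (the `supports_fp8` helper below is a hypothetical guard, not a vLLM API):

```python
def supports_fp8(capability: tuple[int, int]) -> bool:
    """Hardware FP8 support begins at compute capability 8.9 (Ada/Hopper)."""
    return capability >= (8, 9)

# On a CUDA machine, obtain the capability with:
#   torch.cuda.get_device_capability()
print(supports_fp8((9, 0)))  # H100: True
print(supports_fp8((8, 0)))  # A100: False
```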
Quantization Recipe
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
MODEL_ID = "deepseek-ai/DeepSeek-V2-Lite"
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",
    trust_remote_code=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
# FP8 block quantization - ignore layers with composite dimensions
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_BLOCK",
    ignore=["lm_head", "re:.*kv_a_proj_with_mqa.*"],
)
oneshot(model=model, recipe=recipe)
model.save_pretrained("DeepSeek-V2-Lite-FP8-BLOCK-padded")
tokenizer.save_pretrained("DeepSeek-V2-Lite-FP8-BLOCK-padded")
Created With
- llm-compressor (with padding support)
- compressed-tensors (PR #547)
Ignored Layers
- lm_head: not quantized (standard practice)
- kv_a_proj_with_mqa: has composite dimensions (512 + 64 = 576) that cannot be safely padded
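To see why the composite projection is excluded, note that its 576-wide output is the concatenation of a 512-dim latent part and a 64-dim rope part (dimensions assumed from the DeepSeek-V2 architecture), and downstream code splits the tensor at that boundary:

```python
import math

BLOCK = 128
KV_LORA_RANK = 512       # latent part (assumed from the DeepSeek-V2 config)
QK_ROPE_HEAD_DIM = 64    # rope part
composite = KV_LORA_RANK + QK_ROPE_HEAD_DIM  # 576, not block-divisible

padded = math.ceil(composite / BLOCK) * BLOCK
print(padded)  # 640
# Zero-padding 576 -> 640 changes the tensor width, but the attention code
# still expects to split the output at index 512 into latent and rope parts,
# so padding this layer would require layer-specific handling. It is left
# unquantized instead.
```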
License
This model inherits the DeepSeek Model License from the base model.