BigLove Klein 2 – All Variants

Note:

Since someone is once again claiming ownership over a model they don't own, I took precautions ahead of time. The BF16 version is safely stored in my other repository [Granddyser/BigLoveKleinFp8]. This repo will be fully restored within approximately 14 days, just like last time.

Quantized versions of the BigLove Klein 2 finetune of FLUX.2-klein-base-9B by Black Forest Labs.

Available Files

| File | Format | Size | Use Case |
|------|--------|------|----------|
| bigLove_klein2_Bf16.safetensors | BF16 | ~18 GB | Full precision, best quality |
| bigLove_klein2_bf16_pruned.safetensors | BF16 (pruned) | ~18 GB | Pruned weights, slightly faster |
| bigLove_klein2_fp8_pruned.safetensors | FP8 (pruned) | ~9 GB | Good balance of quality & VRAM |
| bigLove_klein2_nf4.safetensors | NF4 | ~5 GB | Low VRAM, fast inference |
| bigLove_klein2.gguf | GGUF | varies | For GGUF-compatible loaders |
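The sizes above follow directly from the 9B parameter count: roughly bits-per-weight × parameters ÷ 8. A quick sanity check (ignoring quantization overhead, embedded metadata, and non-quantized layers, so the real files are slightly larger):

```python
# Rough checkpoint-size estimate from the 9B parameter count on this card.
PARAMS = 9e9

def approx_size_gb(bits_per_weight: float) -> float:
    """Approximate file size in GB for a given quantization width."""
    return PARAMS * bits_per_weight / 8 / 1e9

for name, bits in [("BF16", 16), ("FP8", 8), ("NF4", 4)]:
    print(f"{name}: ~{approx_size_gb(bits):.1f} GB")
```

NF4 comes out at ~4.5 GB by this estimate; the listed ~5 GB includes the per-block scale factors that NF4 stores alongside the 4-bit weights.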

Usage

ComfyUI

Place the desired model file in your ComfyUI/models/diffusion_models/ folder (or ComfyUI/models/unet/ on older installs) and select it in the appropriate loader node.
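One way to fetch a checkpoint straight into place is the Hugging Face CLI. A minimal sketch, assuming the repo id and filename from this card and a ComfyUI install in the current directory; adjust the paths to your setup:

```shell
# Download the FP8 checkpoint directly into the ComfyUI models folder.
huggingface-cli download Granddyser/biglove-klein2 \
  bigLove_klein2_fp8_pruned.safetensors \
  --local-dir ComfyUI/models/diffusion_models
```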

Diffusers

```python
from diffusers import FluxPipeline
import torch

# Load the FP8 variant; computation runs in bfloat16.
pipe = FluxPipeline.from_pretrained(
    "Granddyser/biglove-klein2-fp8",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Few-step, guidance-free sampling as used by distilled Klein checkpoints.
image = pipe(
    prompt="your prompt here",
    num_inference_steps=4,
    guidance_scale=0.0,
).images[0]

image.save("output.png")
```

Acknowledgments

Special thanks to SubtleShader for the motivation.

License

FLUX.2-klein-base-9B is licensed by Black Forest Labs Inc. under the FLUX.2-klein-base-9B Non-Commercial License. Copyright Black Forest Labs Inc.
