Vers3Dynamics Nuclear-Expert

From 26 Million kg of Ore to Mushroom Cloud: a Llama-3.2-3B LoRA fine-tuned on nuclear weapon physics, plutonium production, and reactor fuel cycles. Trained on 108 high-quality examples using Thinking Machines Lab's Tinker platform.

Capabilities

  • Yield Calculations: "What's the yield for a 15 kg Pu pit?" → "59 kt TNT, fireball ~80 m radius."
  • Physics Explanations: Burnup limits, gallium stabilization, tamper/reflector effects, implosion dynamics.
  • Dramatic & Educational: Responses blend awe with responsibility, e.g., "The pit compresses in microseconds... but this is simulation only."

Warning: Educational/research use only. No classified info or weapon instructions. Based on declassified IAEA/DOE sources.

Usage

from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

# Load base + LoRA
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B",
    torch_dtype=torch.bfloat16,
    device_map="auto"
)
model = PeftModel.from_pretrained(base, "ciaochris/Nuclear-Expert-LoRA-3B")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")

# Pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Query. Note: apply_chat_template assumes the tokenizer ships a chat
# template; the base Llama-3.2-3B tokenizer may not include one, in which
# case format the prompt string manually.
messages = [{"role": "user", "content": "Yield for a 12 kg plutonium pit?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# do_sample=True is required for temperature to take effect
output = pipe(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
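By default, the text-generation pipeline returns the prompt followed by the completion in generated_text. A small helper can strip the echoed prompt so only the model's answer is shown; extract_answer below is a name introduced here for illustration, not part of the model card, and the example uses placeholder strings rather than a real model call:

```python
def extract_answer(generated: str, prompt: str) -> str:
    """Strip the echoed prompt from pipeline output, leaving only the completion."""
    if generated.startswith(prompt):
        return generated[len(prompt):].strip()
    return generated.strip()

# Placeholder strings standing in for real pipeline output:
prompt = "User: question\nAssistant:"
generated = prompt + " This is simulation only."
print(extract_answer(generated, prompt))  # -> "This is simulation only."
```

Alternatively, pass return_full_text=False in the pipeline call to get the completion alone.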