How to use rAIfle/Interim-MN-fp16 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="rAIfle/Interim-MN-fp16")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rAIfle/Interim-MN-fp16")
model = AutoModelForCausalLM.from_pretrained("rAIfle/Interim-MN-fp16")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

How to use rAIfle/Interim-MN-fp16 with vLLM:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "rAIfle/Interim-MN-fp16"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "rAIfle/Interim-MN-fp16",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
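Since both vLLM and SGLang expose an OpenAI-compatible API, the same call can be made from Python without any extra dependencies. This is a minimal sketch using only the standard library; the base URL is an assumption (vLLM defaults to port 8000, the SGLang command below uses 30000).

```python
# Hedged sketch: query an OpenAI-compatible chat endpoint with the stdlib only.
import json
import urllib.request


def chat(base_url: str, model: str, user_message: str) -> str:
    """POST a single-turn chat request and return the assistant's reply."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible servers return the reply at choices[0].message.content
    return body["choices"][0]["message"]["content"]


# Example (requires a running server from the commands above):
# print(chat("http://localhost:8000", "rAIfle/Interim-MN-fp16", "Who are you?"))
```

Pointing `base_url` at `http://localhost:30000` makes the same function work against the SGLang server described below.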
How to use rAIfle/Interim-MN-fp16 with SGLang:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "rAIfle/Interim-MN-fp16" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "rAIfle/Interim-MN-fp16",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Alternatively, run the SGLang server in Docker; the same curl call as above works against the container:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "rAIfle/Interim-MN-fp16" \
  --host 0.0.0.0 \
  --port 30000
```

How to use rAIfle/Interim-MN-fp16 with Unsloth Studio:
On Linux/macOS:

```shell
# Install Unsloth:
curl -fsSL https://unsloth.ai/install.sh | sh
# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# and search for rAIfle/Interim-MN-fp16 to start chatting
```

On Windows (PowerShell):

```shell
# Install Unsloth:
irm https://unsloth.ai/install.ps1 | iex
# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# and search for rAIfle/Interim-MN-fp16 to start chatting
```

With no setup at all, open https://huggingface.co/spaces/unsloth/studio in your browser and search for rAIfle/Interim-MN-fp16 to start chatting.

Or load the model from Python:

```python
# pip install unsloth
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="rAIfle/Interim-MN-fp16",
    max_seq_length=2048,
)
```

How to use rAIfle/Interim-MN-fp16 with Docker Model Runner:
```shell
docker model run hf.co/rAIfle/Interim-MN-fp16
```
This one was a doozy: three restarted runs, one of which failed at the last possible moment... Anyway, the result is decent enough and passes my own Nala-testing, so it's not too bad. I'm not going to benchmark this myself, but if anyone feels the need to, I'd be happy to include numbers here.

Probably best used with normal Nemo settings, no weird stuff. (Or just use the ones in the Recommended Settings repo.)
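For readers who don't want to dig through a settings repo, a rough sketch of what "normal Nemo settings" usually looks like. The temperature follows Mistral's own guidance for Mistral Nemo (they recommend running it around 0.3); the other values are conservative assumptions, not the actual presets from the Recommended Settings repo.

```python
# Illustrative sampler preset only; check the Recommended Settings repo for
# the actual recommended values.
nemo_sampler_settings = {
    "temperature": 0.3,         # Nemo runs hot; Mistral suggests ~0.3
    "top_p": 1.0,               # assumption: nucleus sampling effectively off
    "repetition_penalty": 1.0,  # assumption: no extra penalties ("no weird stuff")
    "max_new_tokens": 512,      # assumption: generous reply length
}

# With the Transformers snippet above, these pass straight into generate():
#   outputs = model.generate(**inputs, **nemo_sampler_settings)
```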