Text Generation
Transformers
PyTorch
Safetensors
English
mistral
Generated from Trainer
conversational
text-generation-inference
Instructions to use HuggingFaceH4/zephyr-7b-alpha with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use HuggingFaceH4/zephyr-7b-alpha with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-alpha")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-alpha")
model = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-alpha")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
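For chat-style use it often helps to pass a system message and explicit sampling settings through the model's chat template. The snippet below is a minimal sketch of that pattern; the dtype, device placement, and sampling values are illustrative assumptions, not settings taken from this page.

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-alpha",
    torch_dtype=torch.bfloat16,  # assumption: a GPU with bfloat16 support
    device_map="auto",           # assumption: accelerate is installed
)

messages = [
    {"role": "system", "content": "You are a helpful, concise assistant."},
    {"role": "user", "content": "Explain what a chat template is in one sentence."},
]

# Render the conversation with the model's chat template, then sample a reply.
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```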
- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use HuggingFaceH4/zephyr-7b-alpha with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "HuggingFaceH4/zephyr-7b-alpha"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HuggingFaceH4/zephyr-7b-alpha",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
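Because the server exposes an OpenAI-compatible API, it can also be called from Python instead of curl. A minimal sketch, assuming the `openai` package is installed and the server started above is listening on its default port; the `api_key` value is only a placeholder.

```python
from openai import OpenAI

# The vLLM server from the snippet above listens on localhost:8000 by default;
# vLLM does not require an API key unless one is configured, so pass a dummy value.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="HuggingFaceH4/zephyr-7b-alpha",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```

The same pattern works against the SGLang server described below by pointing `base_url` at port 30000.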
- SGLang
How to use HuggingFaceH4/zephyr-7b-alpha with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "HuggingFaceH4/zephyr-7b-alpha" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HuggingFaceH4/zephyr-7b-alpha",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
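The same endpoint can be called from Python with `requests` instead of curl. A minimal sketch, assuming the server launched above is reachable on localhost:30000:

```python
# POST the same chat-completions payload shown in the curl example above.
import requests

resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "HuggingFaceH4/zephyr-7b-alpha",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```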
Use Docker images

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "HuggingFaceH4/zephyr-7b-alpha" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HuggingFaceH4/zephyr-7b-alpha",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Docker Model Runner
How to use HuggingFaceH4/zephyr-7b-alpha with Docker Model Runner:
```bash
docker model run hf.co/HuggingFaceH4/zephyr-7b-alpha
```
Community discussions:

- [AUTOMATED] Model Memory Requirements (#21, pinned, 7 reactions), opened over 2 years ago by model-sizer-bot
- Question about HuggingFaceH4/zephyr-7b-alpha and its base model (#43), opened 3 months ago by dqdw
- Potential Inconsistencies Model and Base Model License (#41), opened 12 months ago by yueyangchen
- major problem couldnt find a fix (#40), opened about 1 year ago by AladinBroDev
- Stopped to work (#39, 2 replies), opened about 2 years ago by yinbtologie
- Am I the only one who gets the input also in the output of the API ? Is it possible to fix that? (example below) (#38), opened about 2 years ago by Ohajoui
- Update README.md (#37), opened about 2 years ago by falan42
- Hugging Face's Zephyr Settings and System Prompt (#36), opened over 2 years ago by ParanoidPosition
- How to apply PEFT in this model (#35, 2 replies), opened over 2 years ago by vivkhandelwal
- Why the chosen rewards are negative? (#34), opened over 2 years ago by GeneZC
- Incorrect Token Count in Generated Response (#33), opened over 2 years ago by Zekri123
- Adding Evaluation Results (#32), opened over 2 years ago by leaderboard-pr-bot
- Zephyr 7b 128k? (#31, 1 reply), opened over 2 years ago by TornButter
- High GPU RAM usage (#30, 7 replies), opened over 2 years ago by Alealejandrooo
- Is this model commercially usable? (#28, 2 replies), opened over 2 years ago by AayushShah
- ImportError (#26, 1 reply), opened over 2 years ago by kris-123
- How can I teach another language by applying the Fine-tune process to this language model? (#25, 6 reactions), opened over 2 years ago by ahmetab06
- Train it with custom data (#24, 1 reply), opened over 2 years ago by Neulich
- tokenization (#23), opened over 2 years ago by caterpillarman
- Sample code not working (#22, 1 reply, 3 reactions), opened over 2 years ago by dhanilka
- Hosted Inference API (#20), opened over 2 years ago by pioneerchae
- Model answering with all newlines? (#19, 2 replies), opened over 2 years ago by jamesbraza
- Is zephyr removed too? (#18, 1 reply), opened over 2 years ago by caimakerg
- Not working in Text Generation Web UI (#17, 2 replies), opened over 2 years ago by ialhabbal
- Better or worse than Mistral? (#16, 5 replies), opened over 2 years ago by ehalit
- Add pipeline_tag=conversational for Hosted inference API/demo to work (#15), opened over 2 years ago by notpushkin
- APIs & Fine-tuning (both Domain Adaptation and instruction fine-tuning) on $/1000 token basis (#12, 5 replies, 1 reaction), opened over 2 years ago by KrishnaKaasyap
- KeyError: 'mistral' (#9, 5 replies, 3 reactions), opened over 2 years ago by ebowwa
- Slow generation (#6, 1 reply), opened over 2 years ago by darkandpure
- Training/finetuning code? (#5, 3 replies, 1 reaction), opened over 2 years ago by milsunone
- Translation capability broken (#4, 5 replies), opened over 2 years ago by arogov
- Prompt format? (#3, 2 replies), opened over 2 years ago by SinanAkkoyun