Text Generation · Transformers · PyTorch · gpt_bigcode · code · Eval Results (legacy) · text-generation-inference
Instructions for using WizardLMTeam/WizardCoder-15B-V1.0 with libraries, inference providers, notebooks, and local apps. The sections below cover each option.
- Libraries
- Transformers
How to use WizardLMTeam/WizardCoder-15B-V1.0 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="WizardLMTeam/WizardCoder-15B-V1.0")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("WizardLMTeam/WizardCoder-15B-V1.0")
model = AutoModelForCausalLM.from_pretrained("WizardLMTeam/WizardCoder-15B-V1.0")
```
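The snippet above only loads the model. A minimal usage sketch with the pipeline follows; the Alpaca-style prompt format is taken from the discussion further down this page, and the instruction text itself is an illustrative assumption:

```python
# Generate a completion with the pipeline loaded above.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction: Write a Python function that reverses a string.\n\n"
    "### Response:"
)
result = pipe(prompt, max_new_tokens=256)
print(result[0]["generated_text"])
```

- Notebooks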
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use WizardLMTeam/WizardCoder-15B-V1.0 with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "WizardLMTeam/WizardCoder-15B-V1.0"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "WizardLMTeam/WizardCoder-15B-V1.0",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
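Because the server exposes an OpenAI-compatible API, it can also be called from Python. A minimal sketch, assuming the `openai` client package (v1+) is installed and the server above is running on port 8000:

```python
from openai import OpenAI

# vLLM does not enforce an API key by default; any placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.completions.create(
    model="WizardLMTeam/WizardCoder-15B-V1.0",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)
```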
- SGLang
How to use WizardLMTeam/WizardCoder-15B-V1.0 with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "WizardLMTeam/WizardCoder-15B-V1.0" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "WizardLMTeam/WizardCoder-15B-V1.0",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
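The same request can be sent from Python with the standard `requests` library; a sketch mirroring the curl call above, assuming the server is up on port 30000:

```python
import requests

# Mirror of the curl call above against SGLang's OpenAI-compatible endpoint.
resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "WizardLMTeam/WizardCoder-15B-V1.0",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(resp.json()["choices"][0]["text"])
```

Use Docker images: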
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "WizardLMTeam/WizardCoder-15B-V1.0" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "WizardLMTeam/WizardCoder-15B-V1.0",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use WizardLMTeam/WizardCoder-15B-V1.0 with Docker Model Runner:
```bash
docker model run hf.co/WizardLMTeam/WizardCoder-15B-V1.0
```
Inference speed
#19, opened by VlaTal
I load the model in 8-bit and it fits entirely on my GPU, but the GPU does almost no work; it doesn't even heat up the way it does with other models, while the CPU sits at 100%. Inference speed is about 2-3 tokens/s.
Here's the code I use for loading and inference:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_name = 'WizardLM/WizardCoder-15B-V1.0'

def load_model(model_name=model_name):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # device_map was an undefined variable in the original; "auto" lets
    # accelerate place the layers, and load_in_8bit quantizes via bitsandbytes.
    model = AutoModelForCausalLM.from_pretrained(
        model_name, device_map="auto", load_in_8bit=True
    )
    return tokenizer, model

tokenizer, model = load_model(model_name)

generation_config = GenerationConfig(
    temperature=0.0,  # sampling params are ignored unless do_sample=True
    top_p=0.95,
    top_k=50,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id,
)

prompt = "..."  # the instruction text goes here
prompt_template = f'''
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: {prompt}
### Response:'''

inputs = tokenizer(prompt_template, return_tensors="pt").to("cuda")
generated_ids = model.generate(
    **inputs, generation_config=generation_config, max_new_tokens=3000
)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs[0])
```
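One plausible explanation for these symptoms is that `device_map` offloaded some layers to the CPU (or disk), so generation is bottlenecked there. A minimal diagnostic sketch, assuming the model was loaded with an accelerate-style device map as in the code above:

```python
# Inspect where each module was placed; any "cpu" or "disk" entries
# mean those layers execute off-GPU and will dominate inference time.
print(model.hf_device_map)

# If the full model fits in VRAM, loading in fp16 instead of 8-bit is often
# faster, since the int8 matmul path adds per-layer overhead:
# model = AutoModelForCausalLM.from_pretrained(
#     model_name, torch_dtype=torch.float16, device_map="auto")
```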