Instructions for using Tensoic/Kan-LLaMA-7B-base with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Tensoic/Kan-LLaMA-7B-base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Tensoic/Kan-LLaMA-7B-base")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Tensoic/Kan-LLaMA-7B-base")
model = AutoModelForCausalLM.from_pretrained("Tensoic/Kan-LLaMA-7B-base")
```
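A minimal generation sketch building on the loading snippet above; the Kannada prompt, half-precision setting, and sampling parameters are illustrative assumptions, not part of the model card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Tensoic/Kan-LLaMA-7B-base")
model = AutoModelForCausalLM.from_pretrained(
    "Tensoic/Kan-LLaMA-7B-base",
    torch_dtype=torch.float16,  # assumption: half precision so the 7B model fits on one GPU
    device_map="auto",          # assumption: requires the accelerate package
)

# Illustrative Kannada prompt; as a base model it does plain text completion, not chat
prompt = "ಕರ್ನಾಟಕದ ರಾಜಧಾನಿ"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```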
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Tensoic/Kan-LLaMA-7B-base with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Tensoic/Kan-LLaMA-7B-base"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Tensoic/Kan-LLaMA-7B-base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
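Because the vLLM server exposes an OpenAI-compatible API, it can also be called from Python. A minimal sketch using the official `openai` client (`pip install openai`); the dummy API key and the prompt are illustrative:

```python
from openai import OpenAI

# Assumes the vLLM server started above is listening on localhost:8000
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM ignores the key by default

completion = client.completions.create(
    model="Tensoic/Kan-LLaMA-7B-base",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```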
- SGLang
How to use Tensoic/Kan-LLaMA-7B-base with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Tensoic/Kan-LLaMA-7B-base" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Tensoic/Kan-LLaMA-7B-base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "Tensoic/Kan-LLaMA-7B-base" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Tensoic/Kan-LLaMA-7B-base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
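The same completions endpoint can also be called from Python. A minimal sketch using `requests` (an assumption; any HTTP client works) against the SGLang server started above, whether via pip or Docker:

```python
import requests

# Assumes an SGLang server (pip or Docker, as above) is listening on localhost:30000
response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "Tensoic/Kan-LLaMA-7B-base",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
    timeout=120,  # assumption: generous timeout for a 512-token generation
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```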
- Docker Model Runner
How to use Tensoic/Kan-LLaMA-7B-base with Docker Model Runner:
```bash
docker model run hf.co/Tensoic/Kan-LLaMA-7B-base
```
Kannada LLaMA 7B
Tensoic has published an extensive blog post covering the Kannada LLaMA 7B model's development process, applications, and technical specifications, with insight into how the model was created and its potential impact on natural language processing tasks involving Kannada. To read it, follow this link: Tensoic's Kannada LLaMA blog post.
The blog is a valuable resource for students, researchers, and machine learning practitioners seeking a comprehensive understanding of the model.
In summary, this repository provides the sharded version of the Kannada LLaMA 7B model, along with links to the original model and the blog post above. We encourage interested readers to explore these resources to fully appreciate the capabilities the Kannada LLaMA 7B model represents.