Instructions to use frameai/Loxa-4B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use frameai/Loxa-4B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="frameai/Loxa-4B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("frameai/Loxa-4B")
model = AutoModelForCausalLM.from_pretrained("frameai/Loxa-4B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use frameai/Loxa-4B with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "frameai/Loxa-4B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "frameai/Loxa-4B",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```
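The vLLM server above exposes an OpenAI-compatible REST API, so the same request can be made from Python without any extra dependencies. A minimal sketch using only the standard library, assuming the server from the step above is running on localhost:8000; the helper names `build_chat_request` and `ask` are our own, not part of vLLM:

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(base_url: str, model: str, prompt: str) -> str:
    """POST the payload and return the first choice's message text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires the server started above to be running):
#   print(ask("http://localhost:8000", "frameai/Loxa-4B",
#             "What is the capital of France?"))
```

The same sketch works against the SGLang server below by changing the base URL to its port.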
- SGLang
How to use frameai/Loxa-4B with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "frameai/Loxa-4B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "frameai/Loxa-4B",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "frameai/Loxa-4B" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "frameai/Loxa-4B",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

- Docker Model Runner
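A 4B model can take a while to load before the server starts answering, so a script that launches the container and immediately sends a request may race it. A small readiness poll with ordinary exponential backoff, sketched below, avoids that; the function names are our own, and we assume the server answers an HTTP GET on its serving port (port 30000, matching the commands above) once it is up:

```python
import time
import urllib.request
import urllib.error

def backoff_delays(attempts: int, base: float = 1.0, cap: float = 30.0):
    """Exponential backoff delays: base, 2*base, 4*base, ... capped at `cap`."""
    return [min(base * (2 ** i), cap) for i in range(attempts)]

def wait_until_ready(url: str, attempts: int = 8) -> bool:
    """Poll `url` until it answers with HTTP 200, sleeping between tries."""
    for delay in backoff_delays(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass
        time.sleep(delay)
    return False

# Example (assumes the SGLang server above is starting up):
#   if wait_until_ready("http://localhost:30000/health"):
#       print("server ready")
```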
How to use frameai/Loxa-4B with Docker Model Runner:
docker model run hf.co/frameai/Loxa-4B
Loxa-4B: A Fast and Accurate Open-Source Language Model
Model Description:
Loxa-4B is a powerful and efficient open-source language model developed with a focus on speed and accuracy. With a reported overall accuracy of 96%, Loxa-4B delivers high performance across a range of hardware, from modern CPUs to consumer NVIDIA GTX-class GPUs, making it a versatile choice for a variety of NLP tasks. The model excels at tasks such as:
- Text Generation: Create realistic and engaging text for diverse applications.
- Text Classification: Categorize text into predefined categories with high precision.
- Question Answering: Provide accurate and comprehensive answers to complex questions.
- Translation: Translate text between languages with fluency and accuracy.
- Summarization: Condense lengthy text into concise and informative summaries.
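For a chat-tuned model, each of the tasks above can be phrased as an instruction in the user message of the chat format used throughout this card. A small sketch of prompt builders makes the pattern concrete; the template wordings and the `build_messages` helper are our own illustrations, not an official API:

```python
# Hypothetical prompt templates mapping the task list above to chat messages.
TASK_TEMPLATES = {
    "classification": "Classify the following text into one of {labels}:\n{text}",
    "question_answering": (
        "Answer the question using the context.\n"
        "Context: {context}\nQuestion: {question}"
    ),
    "translation": "Translate the following text from {src} to {tgt}:\n{text}",
    "summarization": "Summarize the following text in a few sentences:\n{text}",
}

def build_messages(task: str, **fields) -> list:
    """Render a task template into the chat-messages format shown above."""
    prompt = TASK_TEMPLATES[task].format(**fields)
    return [{"role": "user", "content": prompt}]

# Example: a summarization request, ready to pass to pipe(messages)
# or tokenizer.apply_chat_template(...).
msgs = build_messages(
    "summarization",
    text="Loxa-4B is a 4B-parameter open-source language model.",
)
```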
Model Training:
Loxa-4B was trained on a large dataset of text and code drawn from a wide variety of sources to give it broad coverage of language. The training process leveraged advanced techniques to optimize for both performance and efficiency. Specific details about the training data and methodology are available upon request.
Intended Uses & Limitations:
Loxa-4B is intended for research and development purposes in the field of natural language processing. While the model demonstrates high accuracy, it is important to acknowledge potential limitations:
- Bias: Like all language models, Loxa-4B may exhibit biases present in its training data. Users should be aware of this and apply appropriate mitigation strategies.
- Factual Inaccuracies: While striving for accuracy, the model may occasionally generate factually incorrect information. Verification of outputs is recommended, especially in critical applications.
- Resource Intensive: Despite optimizations, running Loxa-4B may require substantial computational resources, depending on the task and hardware.
How to Use:
Loxa-4B can be easily integrated into your projects. Here's a basic example of using the model for text generation:
```python
from transformers import pipeline

generator = pipeline('text-generation', model='frameai/Loxa-4B')
text = generator(
    "Write a short story about a robot learning to love:",
    max_new_tokens=200,  # cap the continuation length instead of an open-ended max_length
    num_return_sequences=1,
)
print(text[0]['generated_text'])
```
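Note that a text-generation pipeline returns `generated_text` as the prompt followed by the continuation. A small helper, sketched below (our own, not part of transformers), separates the two when only the continuation is wanted:

```python
def strip_prompt(generated_text: str, prompt: str) -> str:
    """Return only the model's continuation, dropping the echoed prompt."""
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):].lstrip()
    return generated_text  # prompt was not echoed verbatim; return unchanged

# Example with the pipeline output from above:
#   continuation = strip_prompt(
#       text[0]['generated_text'],
#       "Write a short story about a robot learning to love:",
#   )
```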