Instructions to use BrainboxAI/code-il-E4B-safetensors with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use BrainboxAI/code-il-E4B-safetensors with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="BrainboxAI/code-il-E4B-safetensors")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)

# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("BrainboxAI/code-il-E4B-safetensors")
model = AutoModelForImageTextToText.from_pretrained("BrainboxAI/code-il-E4B-safetensors")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use BrainboxAI/code-il-E4B-safetensors with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "BrainboxAI/code-il-E4B-safetensors"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "BrainboxAI/code-il-E4B-safetensors",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
Use Docker
docker model run hf.co/BrainboxAI/code-il-E4B-safetensors
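The vLLM server started above exposes an OpenAI-compatible API, so the same chat request can be sent from Python with the openai client instead of curl. A minimal sketch; the base URL assumes the default local port 8000 and the API key is a placeholder, since a default local vLLM deployment does not require authentication:

# pip install openai
from openai import OpenAI

# Point the client at the local vLLM server; the key is a placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="BrainboxAI/code-il-E4B-safetensors",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)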
- SGLang
How to use BrainboxAI/code-il-E4B-safetensors with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "BrainboxAI/code-il-E4B-safetensors" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "BrainboxAI/code-il-E4B-safetensors",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "BrainboxAI/code-il-E4B-safetensors" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "BrainboxAI/code-il-E4B-safetensors",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
- Unsloth Studio
How to use BrainboxAI/code-il-E4B-safetensors with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for BrainboxAI/code-il-E4B-safetensors to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for BrainboxAI/code-il-E4B-safetensors to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for BrainboxAI/code-il-E4B-safetensors to start chatting
Load model with FastModel
pip install unsloth

from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="BrainboxAI/code-il-E4B-safetensors",
    max_seq_length=2048,
)
- Docker Model Runner
How to use BrainboxAI/code-il-E4B-safetensors with Docker Model Runner:
docker model run hf.co/BrainboxAI/code-il-E4B-safetensors
Code-IL E4B — Safetensors
Safetensors (16-bit) variant of code-il-E4B — for HuggingFace Transformers, further fine-tuning, or conversion to other runtimes.
What this is
The safetensors version of the BrainboxAI code-il-E4B on-device coding assistant.
Use this variant if you want to:
- Load the model with HuggingFace transformers
- Continue fine-tuning on your private codebase
- Convert to ONNX or another deployment format (see the export sketch below)
- Integrate into a framework that does not support GGUF
If you want to run the model for inference on developer hardware, use the GGUF variant with Ollama or llama.cpp instead.
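For the ONNX conversion listed above, the optimum library's exporter is one route. A minimal sketch, assuming the model's architecture is supported by optimum's ONNX exporters and that a causal-LM export task is appropriate; the output directory name is illustrative:

# pip install optimum[exporters]
from optimum.exporters.onnx import main_export

# CLI equivalent: optimum-cli export onnx --model BrainboxAI/code-il-E4B-safetensors code-il-e4b-onnx/
main_export(
    model_name_or_path="BrainboxAI/code-il-E4B-safetensors",
    output="code-il-e4b-onnx",
    task="text-generation-with-past",
)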
Full documentation
Training details, dataset composition, evaluation, limitations, and citation are all in the main model card:
https://huggingface.co/BrainboxAI/code-il-E4B
Quick usage
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BrainboxAI/code-il-E4B-safetensors")
model = AutoModelForCausalLM.from_pretrained(
"BrainboxAI/code-il-E4B-safetensors",
torch_dtype="auto",
device_map="auto",
)
messages = [
{"role": "user", "content": "Implement binary search in TypeScript with full edge-case handling."},
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(model.device)
outputs = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.2, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
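For interactive use, the completion can be streamed token by token instead of waiting for the full output. A short sketch using transformers' TextStreamer with the model, tokenizer, and inputs from the snippet above; skip_prompt drops the echoed prompt from the stream:

from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Tokens are printed to stdout as they are generated.
model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.2, top_p=0.95, streamer=streamer)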
Continued fine-tuning
This is the right variant to use if you want to further fine-tune the model on your company's internal codebase — starting from code-il-E4B-safetensors preserves the coding behavior already baked in, while letting you layer in domain-specific patterns.
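One way to do this is LoRA-style supervised fine-tuning with the peft and trl libraries; the sketch below is not an official recipe from this card, and the dataset path, chat-format JSONL layout, and hyperparameters are illustrative assumptions to adapt to your own codebase (a text-only SFT setup is assumed):

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Hypothetical chat-format dataset: one {"messages": [...]} record per line.
dataset = load_dataset("json", data_files="internal_code_chats.jsonl", split="train")

peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="BrainboxAI/code-il-E4B-safetensors",
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(
        output_dir="code-il-e4b-internal-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
)
trainer.train()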
License
Apache 2.0.
Author
Built by Netanel Elyasi, founder of BrainboxAI.
For custom coding-model fine-tuning on private corpora, contact: netanele@brainboxai.io.