Model Card: Intellix
Intellix is a high-capacity, fine-tuned large language model (LLM) designed specifically for enterprise-grade applications.
1. Model Details
- Model Developer: Mediusware
- Model Date: March 2026
- Model Version: 1.0.0
- Model Type: Causal Language Model (Fine-tuned via PEFT/LoRA and GGUF quantized)
- Base Model: Proprietary Business-Oriented Foundation (intellix-base)
- License: Proprietary (Mediusware)
2. Intended Use
Primary Intended Uses
- Enterprise Communication: Drafting professional emails, client updates, and internal memos.
- Policy & Security Auditing: Generating and reviewing business security policies and compliance documentation.
- Knowledge Synthesis: Summarizing complex business documents into executive highlights.
- Decision Support: Providing reasoned insights for project management and business logic.
Primary Intended Users
- Business professionals and executives.
- IT security and compliance officers.
- Enterprise software developers integrating AI into professional workflows.
Out-of-Scope Use Cases
- Non-professional or casual conversational use.
- High-stakes medical, legal, or financial advice without human oversight.
- Generation of fictional or creative content not grounded in business reality.
3. Factors
Relevant Factors
- Professional Tone: The model is evaluated based on its ability to maintain a consistent, corporate-ready voice.
- Security Compliance: Evaluation focuses on the model's adherence to security protocols and data privacy constraints.
- Accuracy: Minimization of hallucinations in professional contexts (e.g., policy drafting).
Evaluation
Evaluations were conducted using a proprietary enterprise benchmark suite and real-world business scenarios to ensure the model's readiness for B2B deployment.
4. Metrics
Model Performance Measures
- Throughput: Measured in tokens per second (TPS) for real-time responsiveness.
- Latency: Time-to-first-token (TTFT) and total response time.
- Persona Adherence: Qualitative and quantitative scoring of professional tone consistency.
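The throughput and latency metrics above can be measured with a simple timing harness. Here is a minimal sketch, assuming a hypothetical `generate_stream` callable that yields tokens one at a time (the name and streaming interface are illustrative, not part of the Intellix API):

```python
import time

def measure_generation(generate_stream):
    """Measure time-to-first-token (TTFT), total latency, and throughput
    (tokens/sec) for a token-streaming callable."""
    start = time.perf_counter()
    ttft = None
    n_tokens = 0
    for _ in generate_stream():
        if ttft is None:
            # First token arrived: record time-to-first-token.
            ttft = time.perf_counter() - start
        n_tokens += 1
    total = time.perf_counter() - start
    tps = n_tokens / total if total > 0 else 0.0
    return ttft, total, tps
```

TTFT captures perceived responsiveness, while tokens/sec captures sustained generation speed; both are needed because a model can stream quickly yet still feel slow if the first token is delayed.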
5. Evaluation Results
Quantitative Performance (March 2026)
Tested on Q8_0 GGUF via optimized local inference.
| Metric | Performance Value |
|---|---|
| Average Throughput | 196.08 tokens/sec |
| Average Latency | 0.68 seconds |
| Peak Throughput | 199.48 tokens/sec |
| Model Footprint | 2.0 GB |
6. Training Data
Data Sources
The model was fine-tuned on a large, curated dataset including:
- Professional business correspondence and templates.
- Industry-standard security policies and compliance manuals.
- Technical documentation for enterprise software.
- High-quality project management logs and reports.
Data Preprocessing
Data was rigorously cleaned to remove personally identifiable information (PII) and informal or low-quality text, ensuring the model's output remains strictly professional.
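The PII-scrubbing step can be illustrated with a regex-based pass. This is a simplified sketch for illustration only; the patterns and labels below are assumptions, not the actual preprocessing pipeline:

```python
import re

# Illustrative patterns only -- real PII removal needs a far more
# comprehensive pipeline (names, addresses, locale-specific formats).
# SSN is checked before PHONE so the broader phone pattern does not
# swallow SSN-shaped strings first.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub_pii(text: str) -> str:
    """Replace each detected PII span with a bracketed type label."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

For example, `scrub_pii("reach me at jane.doe@example.com")` yields `"reach me at [EMAIL]"`.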
7. Quantitative Analysis
Benchmark Scenarios
The following scenarios were used to validate the model's business intelligence:
- Scenario A: Draft a secure data handling policy for a fintech startup.
- Scenario B: Summarize a 50-page internal audit report into 5 key action items.
- Scenario C: Write a professional apology to a high-value client for a project delay.
8. Fine-Tuning Process
Methodology
Intellix was fine-tuned using the Unsloth library for memory-efficient and fast training. The process utilized LoRA (Low-Rank Adaptation) to adapt the base architecture to specialized business domains without compromising the model's general intelligence.
Hyperparameters
The following hyperparameters were used during the fine-tuning phase:
| Parameter | Value |
|---|---|
| PEFT Type | LoRA |
| LoRA Rank (r) | 16 |
| LoRA Alpha | 16 |
| LoRA Dropout | 0.0 |
| Target Modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Precision | bfloat16 |
| Optimizer | AdamW |
| Learning Rate | 2e-4 |
| Epochs | 3 |
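LoRA keeps the base weights frozen and trains only two small matrices per adapted layer, which is why the memory cost stays low. The per-matrix parameter count follows directly from the rank; a quick back-of-envelope check, using the 1536 hidden size from the architecture section (a square attention projection is assumed here for illustration):

```python
def lora_param_count(d_in: int, d_out: int, r: int = 16) -> int:
    """Trainable parameters LoRA adds to one weight matrix:
    adapter A is (r x d_in) and adapter B is (d_out x r),
    so the total is r * (d_in + d_out)."""
    return r * d_in + d_out * r

# A square 1536x1536 projection at rank 16:
print(lora_param_count(1536, 1536))  # -> 49152 per adapted matrix
```

Even across all seven target modules in every layer, this remains a small fraction of the 1.54B base parameters, which is what makes single-GPU fine-tuning feasible.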
Hardware Requirements
- Training: Single A100 (40GB) or H100 (80GB) recommended. Also feasible on consumer GPUs such as the RTX 3090/4090 using Unsloth's 4-bit loading.
- Inference: Minimum 8GB VRAM (Full) / 2GB VRAM (Q8_0 GGUF).
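The 2.0 GB Q8_0 footprint is consistent with the parameter count: at roughly one byte per weight, 1.54B parameters account for most of the file, with quantization scales and metadata making up the rest. A rough sanity check (the one-byte-per-weight figure is a simplification; Q8_0 actually stores slightly more per weight):

```python
# Rough footprint estimate for the Q8_0 quantization.
params = 1.54e9                  # parameter count from the architecture section
q8_bytes = params * 1.0          # ~1 byte per weight at 8-bit
gib = q8_bytes / (1024 ** 3)
print(round(gib, 2))             # -> 1.43 (GiB), before scales/metadata overhead
```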
9. How to Use
A. Local Inference via Ollama (Recommended)
Intellix is highly optimized for local execution using Ollama.
- Prepare the Modelfile: Use the provided `Modelfile` in this repository, which includes the correct `repeat_penalty` (1.5) and `stop` tokens to prevent loops.
- Create the Model: `ollama create intellix -f Modelfile`
- Run: `ollama run intellix`
Model Parameters for Stability:
- `repeat_penalty`: 1.5
- `temperature`: 0.7
- `stop`: ["<|im_start|>", "<|im_end|>", "User:", "Assistant:"]
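A Modelfile matching the stability parameters above might look like the following. The GGUF filename is a placeholder, not the actual file shipped in the repository:

```
FROM ./intellix-q8_0.gguf
PARAMETER repeat_penalty 1.5
PARAMETER temperature 0.7
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
PARAMETER stop "User:"
PARAMETER stop "Assistant:"
```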
B. Inference via Transformers (Python)
For research or programmatic access, use the `transformers` library.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "mediusware-ai/intellix"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Using the ChatML template
messages = [
    {"role": "system", "content": "You are Intellix, a professional AI assistant developed by Mediusware."},
    {"role": "user", "content": "Tell me about Mediusware's US presence."}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, repetition_penalty=1.5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
10. Ethical Considerations
Data Privacy
Designed for Local-First Deployment. When used via Ollama or GGUF, business data never leaves the local infrastructure, preserving data residency and privacy.
Safety Guardrails
- Professionalism Filter: Fine-tuned to avoid informal, casual, or inappropriate language.
- Hallucination Mitigation: Specialized training to prioritize "I don't know" or factual grounding over creative extrapolation in sensitive business contexts.
11. Caveats and Recommendations
- Human-in-the-loop: While highly accurate, users should always review critical business outputs (e.g., security policies) before implementation.
- Language Bias: Optimized primarily for Business English; performance in other languages may vary.
Contact & Support
For custom enterprise deployments or inquiries, visit mediusware.com.
Model Architecture
- Base: intellix-base
- Parameters: 1.54B
- Hidden Size: 1536
- Context Length: 131,072 tokens