stellaray777/1000s-websites
How to use stellaray777/1000s-websites with PEFT:
from peft import PeftModel
from transformers import AutoModelForCausalLM
base_model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct")
model = PeftModel.from_pretrained(base_model, "stellaray777/1000s-websites")

A fine-tuned version of DeepSeek Coder 6.7B, specifically trained to generate website designs from design specifications and requirements.
This model is a LoRA fine-tuned version of deepseek-ai/deepseek-coder-6.7b-base trained on a dataset of website designs. The model has been specialized to understand design requirements (industry, tone, layout, etc.) and generate appropriate HTML/CSS/JavaScript implementations for brand-specific websites.
LoRA configuration:
Base model: deepseek-ai/deepseek-coder-6.7b-base
Adapter: stellaray777/1000s-websites
r: 8
alpha: 16
target_modules: ["q_proj", "v_proj"]
lora_dropout: 0.05

This model is designed for generating brand-specific website implementations (HTML/CSS/JavaScript) from structured design specifications such as industry, tone, page type, layout, and photo usage.
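For reference, the adapter hyperparameters listed above map onto a peft LoraConfig roughly as follows. This is an illustrative sketch, not the original training script; task_type and bias are assumptions not stated in the card.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Adapter hyperparameters as reported in this card
lora_config = LoraConfig(
    r=8,                                  # LoRA rank
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections that receive adapters
    lora_dropout=0.05,
    bias="none",                          # assumption: not stated in the card
    task_type="CAUSAL_LM"                 # assumption: causal language modeling
)

# Wrap the base model with trainable LoRA adapters (training loop not shown)
base = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()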
Install the required dependencies:
pip install transformers torch peft bitsandbytes accelerate
Example usage:
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
# Load base model and tokenizer
base_model = "deepseek-ai/deepseek-coder-6.7b-base"
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    load_in_4bit=True,
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
# Load LoRA weights
model = PeftModel.from_pretrained(model, "stellaray777/1000s-websites")
# Prepare input
messages = [
    {
        "role": "system",
        "content": "You are a senior creative front-end engineer who designs brand-specific websites."
    },
    {
        "role": "user",
        "content": "Industry: Healthcare\nTone: Professional, Trustworthy\nPage type: Landing page\nLayout: Grid-based\nPhoto usage: Medium\nTask: Design the website based on the provided HTML structure and styling."
    }
]
# Generate response
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
with torch.no_grad():
    # do_sample=True so the temperature setting actually takes effect
    outputs = model.generate(**inputs, max_new_tokens=2048, temperature=0.7, do_sample=True)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
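If you prefer to serve the model without the PEFT wrapper, the LoRA weights can be merged into the base model. The sketch below uses peft's merge_and_unload(); the fp16 load and the output directory are assumptions (merging into a 4-bit quantized base is generally not supported, so the base is loaded unquantized here).

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model unquantized (fp16) so the adapter can be folded into its weights
base = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-coder-6.7b-base",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)

# Apply the LoRA adapter, then merge it into the base weights
merged = PeftModel.from_pretrained(base, "stellaray777/1000s-websites").merge_and_unload()

# Save a standalone checkpoint (hypothetical output directory)
output_dir = "./deepseek-coder-6.7b-websites-merged"
merged.save_pretrained(output_dir)
AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base").save_pretrained(output_dir)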
Use the provided test script:
# From Hugging Face Hub
python src/test_trained_model.py
# From local cloned model
python src/test_trained_model.py --local
For questions, issues, or contributions, please refer to the main project repository.
If you use this model, please cite:
@misc{deepseek-coder-website-design,
  title={DeepSeek Coder 6.7B - Website Design Fine-tuned},
  author={Stellaray777},
  year={2024},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/stellaray777/1000s-websites}}
}