How to use Hieuman/GenC-LlaMa with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, LlamaEmbeddingLM

tokenizer = AutoTokenizer.from_pretrained("Hieuman/GenC-LlaMa")
model = LlamaEmbeddingLM.from_pretrained("Hieuman/GenC-LlaMa")
```