# BigWeave v6 90B
A Goliath-120b-style frankenmerge of Xwin-LM-70b-v0.1 and Euryale-1.3-70b. The goal is to find other merge combinations that work well.
The version number is only for keeping track of the merges; only merges that seem to work reasonably well are kept and published.
## Prompting Format
Vicuna and Alpaca.
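The card does not spell out the exact templates, so as a sketch, here are the commonly used Vicuna and Alpaca formats (the system prompts shown are the usual defaults and are assumptions, not taken from this card):

```python
# Sketch of the two prompt formats this model accepts.
# The system prompts below are the widely used defaults for each format;
# they are assumptions here, not confirmed by the model card.

def vicuna_prompt(user_message: str) -> str:
    """Build a standard Vicuna-style single-turn prompt."""
    system = ("A chat between a curious user and an artificial intelligence "
              "assistant. The assistant gives helpful, detailed, and polite "
              "answers to the user's questions.")
    return f"{system} USER: {user_message} ASSISTANT:"

def alpaca_prompt(instruction: str) -> str:
    """Build a standard Alpaca-style instruction prompt."""
    return ("Below is an instruction that describes a task. "
            "Write a response that appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n### Response:\n")
```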
## Merge process
The models used in the merge are Xwin-LM-70b-v0.1 and Euryale-1.3-70b.

The layer mix:
- range [0, 12]: Xwin
- range [9, 14]: Euryale
- range [12, 62]: Xwin
- range [54, 71]: Euryale
- range [62, 80]: Xwin
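The layer mix above corresponds to a mergekit passthrough configuration along these lines (a sketch only: the model repo IDs are assumptions, and the actual config used for this merge is not given on the card):

```yaml
# Hypothetical mergekit passthrough config matching the layer mix above.
# The repo IDs are illustrative assumptions; substitute the real ones.
slices:
  - sources:
      - model: Xwin-LM/Xwin-LM-70B-V0.1
        layer_range: [0, 12]
  - sources:
      - model: Sao10K/Euryale-1.3-L2-70B
        layer_range: [9, 14]
  - sources:
      - model: Xwin-LM/Xwin-LM-70B-V0.1
        layer_range: [12, 62]
  - sources:
      - model: Sao10K/Euryale-1.3-L2-70B
        layer_range: [54, 71]
  - sources:
      - model: Xwin-LM/Xwin-LM-70B-V0.1
        layer_range: [62, 80]
merge_method: passthrough
dtype: float16
```

With mergekit installed, a config like this would be run with `mergekit-yaml config.yml ./output-dir`.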
## Acknowledgements
- @Xwin-LM for creating Xwin
- @Sao10K for creating Euryale
- @alpindale for creating the original Goliath
- @chargoddard for developing mergekit
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 67.47 |
| AI2 Reasoning Challenge (25-Shot) | 65.36 |
| HellaSwag (10-Shot) | 87.21 |
| MMLU (5-Shot) | 68.04 |
| TruthfulQA (0-shot) | 57.96 |
| Winogrande (5-shot) | 81.69 |
| GSM8k (5-shot) | 44.58 |
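The reported average is simply the arithmetic mean of the six benchmark scores, which is easy to check:

```python
# Verify that the leaderboard "Avg." is the mean of the six benchmark scores.
scores = {
    "ARC (25-shot)": 65.36,
    "HellaSwag (10-shot)": 87.21,
    "MMLU (5-shot)": 68.04,
    "TruthfulQA (0-shot)": 57.96,
    "Winogrande (5-shot)": 81.69,
    "GSM8k (5-shot)": 44.58,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 67.47, matching the table
```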