- nm-testing/Meta-Llama-3-8B-Instruct-W8A8-FP8-Channelwise-compressed-tensors
  Text Generation • 8B • Updated • 17 • 2
- nm-testing/Meta-Llama-3-8B-Instruct-FBGEMM-nonuniform
  Text Generation • 8B • Updated • 15
- nm-testing/Meta-Llama-3-8B-FP8-compressed-tensors-test
  Text Generation • 8B • Updated • 3.19k
- nm-testing/Meta-Llama-3-8B-Instruct-W8-Channel-A8-Dynamic-Asym-Per-Token-Test
  8B • Updated • 10 • 1
Collection of State-of-the-art FP8 Block Quantized Models
Models used by the https://github.com/vllm-project/speculators CI system
- RedHatAI/Sparse-Llama-3.1-8B-gsm8k-2of4
  Text Generation • 8B • Updated • 22 • 1
- RedHatAI/Sparse-Llama-3.1-8B-evolcodealpaca-2of4-FP8-dynamic
  Text Generation • 8B • Updated • 14
- RedHatAI/Sparse-Llama-3.1-8B-2of4
  Text Generation • 8B • Updated • 44 • 62
- RedHatAI/Sparse-Llama-3.1-8B-gsm8k-2of4-FP8-dynamic
  Text Generation • 8B • Updated • 19 • 1
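
The "2of4" suffix in the model names above refers to 2:4 semi-structured sparsity, i.e. at most two nonzero weights in every contiguous group of four. A minimal sketch, assuming a plain PyTorch weight tensor, of how that pattern can be checked (the function name and toy tensor are illustrative, not from the checkpoints themselves):

```python
import torch

def is_two_of_four_sparse(weight: torch.Tensor) -> bool:
    """Check that every contiguous group of 4 values along the last
    dimension contains at most 2 nonzeros (the 2:4 pattern)."""
    assert weight.shape[-1] % 4 == 0, "last dim must be divisible by 4"
    groups = weight.reshape(-1, 4)                 # one row per group of 4
    nonzeros_per_group = (groups != 0).sum(dim=1)  # count nonzeros per group
    return bool((nonzeros_per_group <= 2).all())

# Hypothetical usage on a toy tensor that satisfies the pattern:
w = torch.tensor([[0.5, 0.0, -1.2, 0.0],
                  [0.0, 0.3, 0.0, 0.7]])
print(is_two_of_four_sparse(w))  # True
```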
- RedHatAI/Meta-Llama-3-8B-Instruct-FP8
  Text Generation • 8B • Updated • 2.8k • 24
- RedHatAI/Meta-Llama-3-8B-Instruct-FP8-KV
  Text Generation • 8B • Updated • 6.18k • 8
- RedHatAI/Mixtral-8x7B-Instruct-v0.1-AutoFP8
  Text Generation • 47B • Updated • 49 • 3
- RedHatAI/Meta-Llama-3-70B-Instruct-FP8
  Text Generation • 71B • Updated • 1.17k • 13
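
FP8 checkpoints like the ones listed above are typically served with vLLM, which reads the quantization config stored in the checkpoint. A minimal sketch, assuming a vLLM installation on hardware with FP8 support; the model name is taken from the list above and the sampling settings are illustrative:

```python
from vllm import LLM, SamplingParams

# Load one of the FP8-quantized checkpoints listed above; the quantization
# scheme is picked up from the checkpoint's config in the common case.
llm = LLM(model="RedHatAI/Meta-Llama-3-8B-Instruct-FP8")

sampling_params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["What is FP8 quantization?"], sampling_params)
print(outputs[0].outputs[0].text)
```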