
mratsim/MiniMax-M2.1-FP8-INT4-AWQ

Tags: Text Generation, Safetensors, llm-compressor, minimax_m2, fp8, awq, conversational, vllm, code, devops, software engineering, engineer, developer, architect, stem, agent, custom_code, compressed-tensors
Community (10 discussions)

New version for M2.5 (#10) · 🔥 3 · 5 comments · opened 2 days ago by blobbybob

nvfp4 (#9) · 12 comments · opened 3 days ago by festr2

Could you cook a similar version for Step 3.5? (#8) · 👀 1 · 7 comments · opened 7 days ago by bigstorm

Kind request: GLM-4.7-Flash (#6) · 2 comments · opened 27 days ago by dehnhaide

ValueError: Unsupported weight strategy=block, supported strategies are [<QuantizationStrategy.CHANNEL: 'channel'>, <QuantizationStrategy.TENSOR: 'tensor'>] (#5) · 8 comments · opened about 1 month ago by ablueleaf

Should I tune for this warning? (#4) · 1 comment · opened about 1 month ago by bash99

vLLM Config (#3) · 2 comments · opened about 1 month ago by tclausen

Looking forward to trying this! (#2) · 🤯 2 · 17 comments · opened about 1 month ago by dnhkng

Thanks! (#1) · 🔥 2 · 1 comment · opened about 1 month ago by kbuettner