How to use 8bit-coder/alpaca-7b-nativeEnhanced with Adapters:
```python
from adapters import AutoAdapterModel

# "undefined" is a placeholder left by the page renderer; substitute the
# base checkpoint this adapter was trained on.
model = AutoAdapterModel.from_pretrained("undefined")
model.load_adapter("8bit-coder/alpaca-7b-nativeEnhanced", set_active=True)
```
13b, 30b?
May I be the first to be "that guy" and request a 13b version and a 30b version?
Very excited to work with the conversation aspects of this model. Thanks for putting it together, it is very much appreciated.
I agree with this thread, where the author suggests that quantizing to 4 bits with GPTQ, using all of its latest features, would be the way to go.
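To make the suggestion concrete, here is a minimal sketch of what 4-bit group-wise quantization stores: integer codes plus a per-group scale and offset. This is plain round-to-nearest for illustration only; real GPTQ additionally compensates rounding error using second-order (Hessian) information, and features like act-order change the quantization order. The function name and example weights are hypothetical.

```python
def quantize_groupwise_4bit(weights, group_size=4):
    """Round-to-nearest 4-bit quantization with per-group scale/offset.

    Illustrative sketch of what a 4-bit GPTQ-style checkpoint stores;
    NOT the full GPTQ algorithm (no error compensation).
    """
    codes, dequant = [], []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / 15.0 or 1.0  # 4 bits -> 16 levels (0..15)
        # Map each weight to the nearest of 16 levels within its group.
        g_codes = [min(15, max(0, round((w - lo) / scale))) for w in group]
        codes.extend(g_codes)
        # Reconstruction: code * scale + offset, one scale/offset per group.
        dequant.extend(c * scale + lo for c in g_codes)
    return codes, dequant

codes, approx = quantize_groupwise_4bit(
    [0.0, 0.1, 0.2, 1.5, -1.0, 0.5, 2.0, 0.25])
```

Per-group scales are why small group sizes (e.g. 128) preserve accuracy: one outlier weight only degrades its own group instead of the whole row.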
We're actually working on a 13b and 30b version. It's taking longer than expected due to the hardware limitations of using only 8 A100 80GB GPUs.
Still so stoked about this! I think alpaca-*-nativeEnhanced may be a good base model to target for LoRAs. I'm also curious whether anything has come along in the past 20 days that is vastly and obviously superior. If not, I'm still very stoked for a 13b!