Kernels Community Monitor
Live kernel build table plus optional Grafana metrics deck.
The Kernel Hub allows Python libraries and applications to load optimized compute kernels directly from the Hugging Face Hub.
You can think of it as the Model Hub, but for low-level, high-performance code: kernels that accelerate specific operations, often on GPUs.
Instead of manually managing complex dependencies, dealing with compilation flags, or building libraries like Triton or CUTLASS from source, the kernels library lets you fetch and run pre-compiled, optimized kernels on demand.
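To make the load-on-demand flow concrete, here is a minimal pure-Python sketch of the idea. Everything in it (the registry, the `register` decorator, the `get_kernel` stand-in, and the repo id `example-community/relu`) is hypothetical and for illustration only; the real kernels library instead downloads a pre-compiled extension from the Hugging Face Hub and loads it into the process.

```python
from typing import Callable, Dict

# Fake "hub": a registry mapping repo-id-like strings to callables.
# The real library resolves these ids against the Hugging Face Hub
# and loads compiled binaries; here we use plain Python functions.
_REGISTRY: Dict[str, Callable] = {}

def register(repo_id: str):
    """Decorator that publishes a function under a repo-id-style key."""
    def wrap(fn: Callable) -> Callable:
        _REGISTRY[repo_id] = fn
        return fn
    return wrap

def get_kernel(repo_id: str) -> Callable:
    """Stand-in for fetching a kernel: here, just a dictionary lookup."""
    try:
        return _REGISTRY[repo_id]
    except KeyError:
        raise LookupError(f"no kernel published under {repo_id!r}")

@register("example-community/relu")
def relu(xs):
    # A trivial "kernel": elementwise ReLU in pure Python.
    return [x if x > 0 else 0.0 for x in xs]

kernel = get_kernel("example-community/relu")
print(kernel([-1.0, 2.0]))  # [0.0, 2.0]
```

The point of the sketch is the shape of the API: callers ask for a kernel by name and get back something callable, without ever compiling anything locally.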
The Kernel Hub team maintains two core repos:
- kernels: the main repository. Documentation: https://huggingface.co/docs/kernels/
- kernels-community: a repository containing the source code for all of the kernels-community kernels. Source code: https://github.com/huggingface/kernels-community
Kernels published on the Hub are designed to be portable: they can be loaded from paths outside of PYTHONPATH.

Learn more about the Kernel Hub and the kernels library by reading the docs:
https://huggingface.co/docs/kernels/index