sglang_v0.5.2/pytorch_2.8.0/third_party/fbgemm/fbgemm_gpu
bench
cmake
codegen
docs
experimental
fbgemm_gpu
include/fbgemm_gpu
src
test
CMakeLists.txt
FbgemmGpu.cmake
README.md
requirements.txt
requirements_genai.txt
setup.py

README.md

FBGEMM_GPU

FBGEMM_GPU (FBGEMM GPU Kernels Library) is a collection of high-performance PyTorch GPU operator libraries for training and inference. The library provides efficient table-batched embedding bag, data layout transformation, and quantization support.
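
As a rough illustration of the kind of operators the library exposes, here is a minimal sketch (not taken from the README) that exercises the row-wise 8-bit quantization ops FBGEMM_GPU registers under the torch.ops.fbgemm namespace; the op names are assumed from recent releases and should be verified against the documentation for your version.

```python
# Minimal sketch, assuming the fused 8-bit row-wise quantization ops are
# available under torch.ops.fbgemm once fbgemm_gpu has been imported.
import torch
import fbgemm_gpu  # noqa: F401  # importing registers the fbgemm ops with PyTorch

# Each row is quantized independently to uint8; the per-row scale and bias
# are stored in extra bytes appended to the row.
weights = torch.randn(8, 64, dtype=torch.float32)
quantized = torch.ops.fbgemm.FloatToFused8BitRowwiseQuantized(weights)
restored = torch.ops.fbgemm.Fused8BitRowwiseQuantizedToFloat(quantized)

print(quantized.dtype, quantized.shape)   # uint8, (8, 64 + 8 bytes of scale/bias)
print((weights - restored).abs().max())   # small round-trip quantization error
```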

FBGEMM_GPU is currently tested with CUDA 12.4 and 11.8 in CI, and with PyTorch packages (2.1+) that are built against those CUDA versions.
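
Because an FBGEMM_GPU build must match the CUDA version of the PyTorch package it runs against, it helps to confirm which CUDA toolkit the installed PyTorch wheel was built with; the snippet below is only a convenience sketch for that check.

```python
# Quick check of the locally installed PyTorch build before pairing it with
# an FBGEMM_GPU package built for the same CUDA version.
import torch

print(torch.__version__)   # expected to be a 2.1+ release, per the note above
print(torch.version.cuda)  # e.g. "12.4" or "11.8"; None for CPU-only builds
```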

See the full Documentation for more information on building, installing, and developing with FBGEMM_GPU, as well as the most up-to-date support matrix for this library.

Join the FBGEMM_GPU Community

For questions, support, news updates, or feature requests, please feel free to reach out through the project's GitHub issue tracker.

For contributions, please see the CONTRIBUTING file for ways to help out.

License

FBGEMM_GPU is BSD licensed, as found in the LICENSE file.