SGL Kernel

Kernel Library for SGLang

Installation

For CUDA 12.1 and above:

pip3 install sgl-kernel

For CUDA 11.8:

pip3 install sgl-kernel -i https://docs.sglang.ai/whl/cu118
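
To confirm the installation, import the package from Python (a quick check; this assumes sgl_kernel exposes __version__ from version.py, as noted under "Release new version" below):

import sgl_kernel  # importing the package loads the compiled ops
print(sgl_kernel.__version__)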

Build from source

Development build:

make build

Note:

The sgl-kernel is rapidly evolving. If you experience a compilation failure, try using make rebuild.

Build with ccache

# or `yum install -y ccache`.
apt-get install -y ccache
# Building with ccache is enabled when ccache is installed and CCACHE_DIR is set.
export CCACHE_DIR=/path/to/your/ccache/dir
export CCACHE_BACKEND=""
export CCACHE_KEEP_LOCAL_STORAGE="TRUE"
unset CCACHE_READONLY
python -m uv build --wheel -Cbuild-dir=build --color=always .

Configuring CMake Build Options

CMake options can be configured by adding -Ccmake.define.<option>=<value> to the uv build flags. For example, to enable building FP4 kernels, use:

python -m uv build --wheel -Cbuild-dir=build -Ccmake.define.SGL_KERNEL_ENABLE_FP4=1 --color=always .

See CMakeLists.txt for more options.

Parallel Build

We highly recommend building sgl-kernel with Ninja, which parallelizes the build automatically. If you build sgl-kernel with CMake instead, you need to set CMAKE_BUILD_PARALLEL_LEVEL and limit nvcc to a single thread by setting SGL_KERNEL_COMPILE_THREADS=1 to build in parallel, for example:

CMAKE_BUILD_PARALLEL_LEVEL=$(nproc) python -m uv build --wheel -Cbuild-dir=build \
-Ccmake.define.SGL_KERNEL_COMPILE_THREADS=1 --color=always .

⚠️ Compilation Issue with sgl-kernel and CUDA 12.6

When compiling sgl-kernel with FlashAttention on a Hopper GPU using CUDA 12.6, you may encounter a segmentation fault:

kernel/build/_deps/repo-flash-attention-src/hopper/instantiations/flash_fwd_hdimall_bf16_paged_softcap_sm90.cu -o CMakeFiles/flash_ops.dir/_deps/repo-flash-attention-src/hopper/instantiations/flash_fwd_hdimall_bf16_paged_softcap_sm90.cu.o
Segmentation fault (core dumped)

⚠️ Note: To ensure that FlashAttention compiles correctly on the Hopper GPU architecture (sm90), it is strongly recommended to use:

  • nvcc version: 12.6
  • ptxas version: 12.8

1. Check Current Versions

Before proceeding, verify your current CUDA tool versions:

nvcc --version
ptxas --version

2. Update ptxas to 12.8 (if needed)

  1. Save the following script to a file (e.g., update_ptxas.sh).
#!/usr/bin/env bash
# Source: https://github.com/Dao-AILab/flash-attention/blob/7ff1b621112ba8b538e2fc6a316f2a6b6f22e518/hopper/setup.py#L404
set -ex

if [ -z "$1" ]; then
    echo "Usage: $0 <CUDA_VERSION>"
    exit 1
fi

CUDA_VERSION=$1

if awk "BEGIN {exit !("$CUDA_VERSION" >= 12.6 && "$CUDA_VERSION" < 12.8)}"; then
    NVCC_ARCHIVE_VERSION="12.8.93"
    NVCC_ARCHIVE_NAME="cuda_nvcc-linux-x86_64-${NVCC_ARCHIVE_VERSION}-archive"
    NVCC_ARCHIVE_TAR="${NVCC_ARCHIVE_NAME}.tar.xz"
    NVCC_ARCHIVE_URL="https://developer.download.nvidia.com/compute/cuda/redist/cuda_nvcc/linux-x86_64/${NVCC_ARCHIVE_TAR}"

    wget "$NVCC_ARCHIVE_URL"
    tar -xf "$NVCC_ARCHIVE_TAR"

    mkdir -p /usr/local/cuda/bin
    cp "${NVCC_ARCHIVE_NAME}/bin/ptxas" /usr/local/cuda/bin/

    # Clean up temporary files
    rm -f "${NVCC_ARCHIVE_TAR}"
    rm -rf "${NVCC_ARCHIVE_NAME}"
fi
  2. Run the script with your CUDA version as the argument, using sudo:
sudo bash update_ptxas.sh 12.6
# Check the version
ptxas --version

Developer Guide

Development Environment Setup

Use Docker to set up the development environment. See Docker setup guide.

Create and enter development container:

docker run -itd --shm-size 32g --gpus all -v $HOME/.cache:/root/.cache --ipc=host --name sglang_zhyncs lmsysorg/sglang:dev /bin/zsh
docker exec -it sglang_zhyncs /bin/zsh

Project Structure

Dependencies

Third-party libraries:

FlashAttention FYI

FA3 can fail when there is not enough shared memory for some shapes, such as a larger hidden_dim or certain special cases. Right now, FA3 is supported on sm80/sm87 and sm86/sm89.

The main difference between sm80/sm87 and sm86/sm89 is the shared memory size; see https://docs.nvidia.com/cuda/cuda-c-programming-guide/#shared-memory-8-x for more information.

For sgl-kernel right now, we can build FA3 on sm80/sm86/sm89/sm90a. That means you can use FA3 on A100 (tested), A*0, L20 (tested), L40, L40s, and 3090 (tested).
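
As a quick check, the architecture list above can be verified at runtime with PyTorch (a minimal sketch; the helper fa3_buildable is not part of sgl-kernel):

import torch

# Architectures on which sgl-kernel currently builds FA3: sm80, sm86, sm89, sm90a.
FA3_CAPABILITIES = {(8, 0), (8, 6), (8, 9), (9, 0)}

def fa3_buildable() -> bool:
    # Hypothetical helper: is the current GPU's compute capability in the FA3 build list?
    return torch.cuda.is_available() and torch.cuda.get_device_capability() in FA3_CAPABILITIES

print(f"FA3 buildable on this GPU: {fa3_buildable()}")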

Kernel Development

Steps to add a new kernel:

  1. Implement the kernel in csrc
  2. Expose the interface in include/sgl_kernel_ops.h
  3. Create torch extension in csrc/common_extension.cc
  4. Update CMakeLists.txt to include new CUDA source
  5. Expose the Python interface in python (a minimal wrapper sketch follows this list)
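
For step 5, a minimal sketch of the Python-side wrapper (the op name my_new_kernel is hypothetical; the real wrappers live under python/sgl_kernel):

import torch

def my_new_kernel(x: torch.Tensor, out: torch.Tensor) -> None:
    # Forward positional arguments to the op registered in csrc/common_extension.cc.
    # Positional args only; see "Development Tips" below for why kwargs are avoided.
    torch.ops.sgl_kernel.my_new_kernel.default(x, out)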

Development Tips

  1. When implementing kernels in csrc, only define pure CUDA files and C++ interfaces. If you need to use torch::Tensor, use <torch/all.h> instead of <torch/extension.h>. Using <torch/extension.h> will cause compilation errors when using SABI.

  2. When creating torch extensions, add the function definition with m.def, and device binding with m.impl:

  • Using torch.compile requires m.def with a schema, which lets torch.compile automatically capture the custom kernel. Reference: How to add FakeTensor. A quick way to inspect the registered schema from Python is shown after this list.

  • How to write schema: Schema reference

    // We need def with schema here for torch.compile
    m.def(
     "bmm_fp8(Tensor A, Tensor B, Tensor! D, Tensor A_scale, Tensor B_scale, Tensor workspace_buffer, int "
     "cublas_handle, int cuda_stream) -> ()");
    m.impl("bmm_fp8", torch::kCUDA, &bmm_fp8);
    
  3. When exposing Python interfaces, avoid using kwargs in C++ interface kernels.

    Avoid this:

    torch.ops.sgl_kernel.apply_rope_pos_ids_cos_sin_cache.default(
        q=query.view(query.shape[0], -1, head_size),
        k=key.view(key.shape[0], -1, head_size),
        q_rope=query.view(query.shape[0], -1, head_size),
        k_rope=key.view(key.shape[0], -1, head_size),
        cos_sin_cache=cos_sin_cache,
        pos_ids=positions.long(),
        interleave=(not is_neox),
        cuda_stream=get_cuda_stream(),
    )
    

    Use this instead:

    torch.ops.sgl_kernel.apply_rope_pos_ids_cos_sin_cache.default(
        query.view(query.shape[0], -1, head_size),
        key.view(key.shape[0], -1, head_size),
        query.view(query.shape[0], -1, head_size),
        key.view(key.shape[0], -1, head_size),
        cos_sin_cache,
        positions.long(),
        (not is_neox),
        get_cuda_stream(),
    )
    

Integrating Third-Party Libraries with Data Type Conversion

When integrating new third-party libraries like flash-attention, you may encounter data type compatibility issues between the C++ interface and PyTorch bindings. For example, the third-party code might use float or int types, while PyTorch requires double and int64_t.

The reason we need double and int64_t in torch binding is that TORCH_LIBRARY handles the Python-to-C++ conversion process. Python's float data type actually corresponds to double in C++, while Python's int corresponds to int64_t in C++.

To address this issue, we provide the make_pytorch_shim function in sgl_kernel_torch_shim that handles data type conversions automatically.

When you need to support new data type conversions, you can easily add conversion functions like this:

// Map `int` -> `int64_t`
template <>
struct pytorch_library_compatible_type<int> {
  using type = int64_t;
  static int convert_from_type(int64_t arg) {
    TORCH_CHECK(arg <= std::numeric_limits<int>::max(), "int64_t value is too large to be converted to int");
    TORCH_CHECK(arg >= std::numeric_limits<int>::min(), "int64_t value is too small to be converted to int");
    return arg;
  }
};

To use this with your library functions, simply wrap them with make_pytorch_shim:

/*
 * From flash-attention
 */
 m.impl("fwd", torch::kCUDA, make_pytorch_shim(&mha_fwd));

Testing & Benchmarking

  1. Add pytest tests in tests/. If you need to skip some tests, please use @pytest.mark.skipif:
@pytest.mark.skipif(
    skip_condition, reason="Nvfp4 requires compute capability of 10 or above."
)
  2. Add benchmarks using the Triton benchmarking utilities in benchmark/ (a minimal sketch follows this list)
  3. Run the test suite
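
A minimal sketch of a test and a benchmark (the op name my_new_kernel, the capability threshold, and the reference check are hypothetical; real tests live in tests/ and benchmarks in benchmark/):

import pytest
import torch
import triton.testing

# Skip when the current GPU cannot run the kernel under test.
skip_condition = (not torch.cuda.is_available()) or (
    torch.cuda.get_device_capability() < (9, 0)
)

@pytest.mark.skipif(skip_condition, reason="Requires compute capability of 9 or above.")
def test_my_new_kernel():
    x = torch.randn(128, 128, device="cuda")
    out = torch.empty_like(x)
    torch.ops.sgl_kernel.my_new_kernel.default(x, out)
    torch.testing.assert_close(out, x)  # replace with the real reference result

def bench_my_new_kernel():
    x = torch.randn(128, 128, device="cuda")
    out = torch.empty_like(x)
    # Median latency in milliseconds, measured with Triton's benchmarking helper.
    ms = triton.testing.do_bench(
        lambda: torch.ops.sgl_kernel.my_new_kernel.default(x, out)
    )
    print(f"my_new_kernel: {ms:.3f} ms")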

FAQ

  • When encountering this error while compiling with ccache: ImportError: /usr/local/lib/python3.10/dist-packages/sgl_kernel/common_ops.abi3.so: undefined symbol: _ZN3c108ListType3getERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEENS_4Type24SingletonOrSharedTypePtrIS9_EE, please modify the last command as follows to resolve it: python3 -m uv build --wheel -Cbuild-dir=build --color=always --no-build-isolation .

Release new version

Update the version in pyproject.toml and version.py.