# compressed_tensors quantization module

To support models quantized in the compressed_tensors format, we adapted the implementation from https://github.com/vllm-project/vllm/tree/main/vllm/model_executor/layers/quantization/compressed_tensors into SGLang.

For practical purposes, we have only ported the w8a8_fp8 compressed_tensors format so far. If you need support for other formats, please submit an issue.
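For intuition, w8a8_fp8 means both weights and activations are stored as 8-bit floating point (e4m3) with a per-tensor scale. The sketch below illustrates the scale computation and round-trip in plain Python; it is a simplified assumption-laden illustration, not SGLang's actual kernel path, and the function names are hypothetical. Real implementations cast the scaled values to hardware FP8, which this sketch omits.

```python
# Illustrative sketch of per-tensor FP8 (e4m3) scaling, as used by
# "w8a8_fp8"-style schemes. Not SGLang's actual implementation: real
# code casts the scaled values to an FP8 dtype on device; here we only
# show the scale math in plain Python floats.

FP8_E4M3_MAX = 448.0  # largest finite magnitude representable in e4m3


def quantize_per_tensor(values):
    """Map values into the FP8 dynamic range; return (scaled, scale)."""
    amax = max(abs(v) for v in values)
    scale = amax / FP8_E4M3_MAX if amax > 0 else 1.0
    scaled = [v / scale for v in values]  # would be cast to fp8 on device
    return scaled, scale


def dequantize_per_tensor(scaled, scale):
    """Recover approximate original values from scaled FP8 values."""
    return [v * scale for v in scaled]


weights = [0.5, -1.25, 2.0, 100.0]
q, s = quantize_per_tensor(weights)
deq = dequantize_per_tensor(q, s)
```

Because the sketch skips the actual 8-bit rounding step, the round-trip here is lossless; with a real FP8 cast, `deq` would differ from `weights` by the e4m3 rounding error.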