# NVIDIA Triton

The [Triton Inference Server](https://github.com/triton-inference-server) hosts a tutorial demonstrating how to quickly deploy the small [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) model using vLLM. See [Deploying a vLLM model in Triton](https://github.com/triton-inference-server/tutorials/blob/main/Quick_Deploy/vLLM/README.md) for more details.
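As a rough sketch of what the tutorial's quick-deploy flow looks like, the steps below build a minimal Triton model repository for the vLLM backend. The directory layout, the `model.json` fields, and the `<xx.yy>` container tag are assumptions drawn from the Triton vLLM backend conventions, not from this page; consult the linked tutorial for the authoritative steps.

```shell
# Create the model repository layout expected by Triton:
#   model_repository/<model_name>/<version>/model.json
mkdir -p model_repository/vllm_model/1

# The vLLM backend reads its engine arguments from model.json.
# Field names here are illustrative; check the tutorial for the full set.
cat > model_repository/vllm_model/1/model.json <<'EOF'
{
    "model": "facebook/opt-125m",
    "disable_log_requests": true,
    "gpu_memory_utilization": 0.5
}
EOF

# Launch Triton from the vLLM-enabled container (replace <xx.yy> with a
# real release tag). Commented out here because it needs a GPU and Docker.
# docker run --gpus all --rm -p 8000:8000 -p 8001:8001 -p 8002:8002 \
#   -v "$PWD/model_repository:/models" \
#   nvcr.io/nvidia/tritonserver:<xx.yy>-vllm-python-py3 \
#   tritonserver --model-repository /models
```

Once the server is up, requests go to Triton's standard HTTP/gRPC endpoints rather than vLLM's own OpenAI-compatible server.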