{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# OpenAI APIs - Vision\n",
    "\n",
    "SGLang provides OpenAI-compatible APIs to enable a smooth transition from OpenAI services to self-hosted local models.\n",
    "A complete reference for the API is available in the [OpenAI vision guide](https://platform.openai.com/docs/guides/vision).\n",
    "This tutorial covers the vision APIs for vision language models.\n",
    "\n",
    "SGLang supports vision language models such as Llama 3.2, LLaVA-OneVision, Qwen2.5-VL, Gemma3, and [more](../supported_models/multimodal_language_models.md).\n",
    "\n",
    "As an alternative to the OpenAI API, you can also use the [SGLang offline engine](https://github.com/sgl-project/sglang/blob/main/examples/runtime/engine/offline_batch_inference_vlm.py); a minimal sketch of that path follows below."
   ]
  },
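  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The next cell is an optional, minimal sketch of that offline engine path. It assumes the `sglang.Engine` interface (`Engine(model_path=...)`, `generate(..., image_data=...)`, `shutdown()`) behaves as in the linked `offline_batch_inference_vlm.py`, and it builds the prompt from the Hugging Face processor's chat template; treat it as an illustration and refer to the linked script for the reference usage. It loads its own copy of the model, separate from the server launched below, so feel free to skip it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional offline engine sketch; assumptions are noted in the cell above.\n",
    "import sglang as sgl\n",
    "from transformers import AutoProcessor\n",
    "\n",
    "# Build a chat-template-formatted prompt that contains the model's image placeholder tokens.\n",
    "processor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-VL-7B-Instruct\")\n",
    "prompt = processor.apply_chat_template(\n",
    "    [\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": [\n",
    "                {\"type\": \"image\"},\n",
    "                {\"type\": \"text\", \"text\": \"What is in this image?\"},\n",
    "            ],\n",
    "        }\n",
    "    ],\n",
    "    tokenize=False,\n",
    "    add_generation_prompt=True,\n",
    ")\n",
    "\n",
    "llm = sgl.Engine(model_path=\"Qwen/Qwen2.5-VL-7B-Instruct\")\n",
    "output = llm.generate(\n",
    "    prompt=prompt,\n",
    "    image_data=\"https://github.com/sgl-project/sglang/blob/main/test/lang/example_image.png?raw=true\",\n",
    "    sampling_params={\"max_new_tokens\": 128},\n",
    ")\n",
    "print(output[\"text\"])\n",
    "\n",
    "# Release GPU memory before launching the server in the next section.\n",
    "llm.shutdown()"
   ]
  },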
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Launch A Server\n",
    "\n",
    "Launch the server in your terminal and wait for it to initialize."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sglang.test.doc_patch import launch_server_cmd\n",
    "from sglang.utils import wait_for_server, print_highlight, terminate_process\n",
    "\n",
    "vision_process, port = launch_server_cmd(\n",
    "    \"\"\"\n",
    "python3 -m sglang.launch_server --model-path Qwen/Qwen2.5-VL-7B-Instruct --log-level warning\n",
    "\"\"\"\n",
    ")\n",
    "\n",
    "wait_for_server(f\"http://localhost:{port}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Using cURL\n",
    "\n",
    "Once the server is up, you can send test requests with `curl` or the Python `requests` library."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import subprocess\n",
    "\n",
    "curl_command = f\"\"\"\n",
    "curl -s http://localhost:{port}/v1/chat/completions \\\\\n",
    "  -H \"Content-Type: application/json\" \\\\\n",
    "  -d '{{\n",
    "    \"model\": \"Qwen/Qwen2.5-VL-7B-Instruct\",\n",
    "    \"messages\": [\n",
    "      {{\n",
    "        \"role\": \"user\",\n",
    "        \"content\": [\n",
    "          {{\n",
    "            \"type\": \"text\",\n",
    "            \"text\": \"What’s in this image?\"\n",
    "          }},\n",
    "          {{\n",
    "            \"type\": \"image_url\",\n",
    "            \"image_url\": {{\n",
    "              \"url\": \"https://github.com/sgl-project/sglang/blob/main/test/lang/example_image.png?raw=true\"\n",
    "            }}\n",
    "          }}\n",
    "        ]\n",
    "      }}\n",
    "    ],\n",
    "    \"max_tokens\": 300\n",
    "  }}'\n",
    "\"\"\"\n",
"\n",
|
||
"response = subprocess.check_output(curl_command, shell=True).decode()\n",
|
||
"print_highlight(response)\n",
|
||
"\n",
|
||
"\n",
|
||
"response = subprocess.check_output(curl_command, shell=True).decode()\n",
|
||
"print_highlight(response)"
|
||
]
|
||
},
|
||
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Using Python Requests"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests\n",
    "\n",
    "url = f\"http://localhost:{port}/v1/chat/completions\"\n",
    "\n",
    "data = {\n",
    "    \"model\": \"Qwen/Qwen2.5-VL-7B-Instruct\",\n",
    "    \"messages\": [\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": [\n",
    "                {\"type\": \"text\", \"text\": \"What’s in this image?\"},\n",
    "                {\n",
    "                    \"type\": \"image_url\",\n",
    "                    \"image_url\": {\n",
    "                        \"url\": \"https://github.com/sgl-project/sglang/blob/main/test/lang/example_image.png?raw=true\"\n",
    "                    },\n",
    "                },\n",
    "            ],\n",
    "        }\n",
    "    ],\n",
    "    \"max_tokens\": 300,\n",
    "}\n",
    "\n",
    "response = requests.post(url, json=data)\n",
    "print_highlight(response.text)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Using OpenAI Python Client"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from openai import OpenAI\n",
    "\n",
    "client = OpenAI(base_url=f\"http://localhost:{port}/v1\", api_key=\"None\")\n",
    "\n",
    "response = client.chat.completions.create(\n",
    "    model=\"Qwen/Qwen2.5-VL-7B-Instruct\",\n",
    "    messages=[\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": [\n",
    "                {\n",
    "                    \"type\": \"text\",\n",
    "                    \"text\": \"What is in this image?\",\n",
    "                },\n",
    "                {\n",
    "                    \"type\": \"image_url\",\n",
    "                    \"image_url\": {\n",
    "                        \"url\": \"https://github.com/sgl-project/sglang/blob/main/test/lang/example_image.png?raw=true\"\n",
    "                    },\n",
    "                },\n",
    "            ],\n",
    "        }\n",
    "    ],\n",
    "    max_tokens=300,\n",
    ")\n",
    "\n",
    "print_highlight(response.choices[0].message.content)"
   ]
  },
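  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The next cell is a small optional sketch of sending an image inline rather than by URL: it downloads the same example image, base64-encodes it, and passes it as a `data:` URL in the `image_url` field. The OpenAI vision message format accepts base64 `data:` URLs, and SGLang's OpenAI-compatible server is expected to handle them the same way; adjust the MIME type if you use a different image format."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: send a base64-encoded image inline as a data: URL (reuses `client` from the previous cell).\n",
    "import base64\n",
    "\n",
    "import requests\n",
    "\n",
    "# Download the example image and encode it; with a local file you would read the bytes instead.\n",
    "image_bytes = requests.get(\n",
    "    \"https://github.com/sgl-project/sglang/blob/main/test/lang/example_image.png?raw=true\"\n",
    ").content\n",
    "encoded = base64.b64encode(image_bytes).decode(\"utf-8\")\n",
    "\n",
    "response = client.chat.completions.create(\n",
    "    model=\"Qwen/Qwen2.5-VL-7B-Instruct\",\n",
    "    messages=[\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": [\n",
    "                {\"type\": \"text\", \"text\": \"What is in this image?\"},\n",
    "                {\n",
    "                    \"type\": \"image_url\",\n",
    "                    \"image_url\": {\"url\": f\"data:image/png;base64,{encoded}\"},\n",
    "                },\n",
    "            ],\n",
    "        }\n",
    "    ],\n",
    "    max_tokens=300,\n",
    ")\n",
    "\n",
    "print_highlight(response.choices[0].message.content)"
   ]
  },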
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Multiple-Image Inputs\n",
    "\n",
    "The server also supports multiple images and interleaved text and images, provided the model supports it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from openai import OpenAI\n",
    "\n",
    "client = OpenAI(base_url=f\"http://localhost:{port}/v1\", api_key=\"None\")\n",
    "\n",
    "response = client.chat.completions.create(\n",
    "    model=\"Qwen/Qwen2.5-VL-7B-Instruct\",\n",
    "    messages=[\n",
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": [\n",
    "                {\n",
    "                    \"type\": \"image_url\",\n",
    "                    \"image_url\": {\n",
    "                        \"url\": \"https://github.com/sgl-project/sglang/blob/main/test/lang/example_image.png?raw=true\",\n",
    "                    },\n",
    "                },\n",
    "                {\n",
    "                    \"type\": \"image_url\",\n",
    "                    \"image_url\": {\n",
    "                        \"url\": \"https://raw.githubusercontent.com/sgl-project/sglang/main/assets/logo.png\",\n",
    "                    },\n",
    "                },\n",
    "                {\n",
    "                    \"type\": \"text\",\n",
    "                    \"text\": \"I have two very different images. They are not related at all. \"\n",
    "                    \"Please describe the first image in one sentence, and then describe the second image in another sentence.\",\n",
    "                },\n",
    "            ],\n",
    "        }\n",
    "    ],\n",
    "    temperature=0,\n",
    ")\n",
    "\n",
    "print_highlight(response.choices[0].message.content)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "terminate_process(vision_process)"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}