{
|
||
"cells": [
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"# Tool Parser\n",
|
||
"\n",
|
||
"This guide demonstrates how to use SGLang’s [Function calling](https://platform.openai.com/docs/guides/function-calling) functionality."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Currently supported parsers:\n",
|
||
"\n",
|
||
"| Parser | Supported Models | Notes |\n",
|
||
"|---|---|---|\n",
|
||
"| `llama3` | Llama 3.1 / 3.2 / 3.3 (e.g. `meta-llama/Llama-3.1-8B-Instruct`, `meta-llama/Llama-3.2-1B-Instruct`, `meta-llama/Llama-3.3-70B-Instruct`) | |\n",
|
||
"| `llama4` | Llama 4 (e.g. `meta-llama/Llama-4-Scout-17B-16E-Instruct`) | |\n",
|
||
"| `mistral` | Mistral (e.g. `mistralai/Mistral-7B-Instruct-v0.3`, `mistralai/Mistral-Nemo-Instruct-2407`, `mistralai/Mistral-7B-v0.3`) | |\n",
|
||
"| `qwen25` | Qwen 2.5 (e.g. `Qwen/Qwen2.5-1.5B-Instruct`, `Qwen/Qwen2.5-7B-Instruct`) and QwQ (i.e. `Qwen/QwQ-32B`) | For QwQ, reasoning parser can be enabled together with tool call parser. See [reasoning parser](https://docs.sglang.ai/backend/separate_reasoning.html). |\n",
|
||
"| `deepseekv3` | DeepSeek-v3 (e.g., `deepseek-ai/DeepSeek-V3-0324`) | |\n",
|
||
"| `gpt-oss` | GPT-OSS (e.g., `openai/gpt-oss-120b`, `openai/gpt-oss-20b`, `lmsys/gpt-oss-120b-bf16`, `lmsys/gpt-oss-20b-bf16`) | The gpt-oss tool parser filters out analysis channel events and only preserves normal text. This can cause the content to be empty when explanations are in the analysis channel. To work around this, complete the tool round by returning tool results as `role=\"tool\"` messages, which enables the model to generate the final content. |\n",
|
||
"| `kimi_k2` | `moonshotai/Kimi-K2-Instruct` | |\n",
|
||
"| `pythonic` | Llama-3.2 / Llama-3.3 / Llama-4 | Model outputs function calls as Python code. Requires `--tool-call-parser pythonic` and is recommended to use with a specific chat template. |\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## OpenAI Compatible API"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Launching the Server"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"import json\n",
|
||
"from sglang.test.doc_patch import launch_server_cmd\n",
|
||
"from sglang.utils import wait_for_server, print_highlight, terminate_process\n",
|
||
"from openai import OpenAI\n",
|
||
"\n",
|
||
"server_process, port = launch_server_cmd(\n",
|
||
" \"python3 -m sglang.launch_server --model-path Qwen/Qwen2.5-7B-Instruct --tool-call-parser qwen25 --host 0.0.0.0 --log-level warning\" # qwen25\n",
|
||
")\n",
|
||
"wait_for_server(f\"http://localhost:{port}\")"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"Note that `--tool-call-parser` defines the parser used to interpret responses."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Define Tools for Function Call\n",
|
||
"Below is a Python snippet that shows how to define a tool as a dictionary. The dictionary includes a tool name, a description, and property defined Parameters."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Define tools\n",
|
||
"tools = [\n",
|
||
" {\n",
|
||
" \"type\": \"function\",\n",
|
||
" \"function\": {\n",
|
||
" \"name\": \"get_current_weather\",\n",
|
||
" \"description\": \"Get the current weather in a given location\",\n",
|
||
" \"parameters\": {\n",
|
||
" \"type\": \"object\",\n",
|
||
" \"properties\": {\n",
|
||
" \"city\": {\n",
|
||
" \"type\": \"string\",\n",
|
||
" \"description\": \"The city to find the weather for, e.g. 'San Francisco'\",\n",
|
||
" },\n",
|
||
" \"state\": {\n",
|
||
" \"type\": \"string\",\n",
|
||
" \"description\": \"the two-letter abbreviation for the state that the city is\"\n",
|
||
" \" in, e.g. 'CA' which would mean 'California'\",\n",
|
||
" },\n",
|
||
" \"unit\": {\n",
|
||
" \"type\": \"string\",\n",
|
||
" \"description\": \"The unit to fetch the temperature in\",\n",
|
||
" \"enum\": [\"celsius\", \"fahrenheit\"],\n",
|
||
" },\n",
|
||
" },\n",
|
||
" \"required\": [\"city\", \"state\", \"unit\"],\n",
|
||
" },\n",
|
||
" },\n",
|
||
" }\n",
|
||
"]"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Define Messages"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"def get_messages():\n",
|
||
" return [\n",
|
||
" {\n",
|
||
" \"role\": \"user\",\n",
|
||
" \"content\": \"What's the weather like in Boston today? Output a reasoning before act, then use the tools to help you.\",\n",
|
||
" }\n",
|
||
" ]\n",
|
||
"\n",
|
||
"\n",
|
||
"messages = get_messages()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Initialize the Client"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Initialize OpenAI-like client\n",
|
||
"client = OpenAI(api_key=\"None\", base_url=f\"http://0.0.0.0:{port}/v1\")\n",
|
||
"model_name = client.models.list().data[0].id"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Non-Streaming Request"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Non-streaming mode test\n",
|
||
"response_non_stream = client.chat.completions.create(\n",
|
||
" model=model_name,\n",
|
||
" messages=messages,\n",
|
||
" temperature=0,\n",
|
||
" top_p=0.95,\n",
|
||
" max_tokens=1024,\n",
|
||
" stream=False, # Non-streaming\n",
|
||
" tools=tools,\n",
|
||
")\n",
|
||
"print_highlight(\"Non-stream response:\")\n",
|
||
"print_highlight(response_non_stream)\n",
|
||
"print_highlight(\"==== content ====\")\n",
|
||
"print_highlight(response_non_stream.choices[0].message.content)\n",
|
||
"print_highlight(\"==== tool_calls ====\")\n",
|
||
"print_highlight(response_non_stream.choices[0].message.tool_calls)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"#### Handle Tools\n",
|
||
"When the engine determines it should call a particular tool, it will return arguments or partial arguments through the response. You can parse these arguments and later invoke the tool accordingly."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"name_non_stream = response_non_stream.choices[0].message.tool_calls[0].function.name\n",
|
||
"arguments_non_stream = (\n",
|
||
" response_non_stream.choices[0].message.tool_calls[0].function.arguments\n",
|
||
")\n",
|
||
"\n",
|
||
"print_highlight(f\"Final streamed function call name: {name_non_stream}\")\n",
|
||
"print_highlight(f\"Final streamed function call arguments: {arguments_non_stream}\")"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Streaming Request"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Streaming mode test\n",
|
||
"print_highlight(\"Streaming response:\")\n",
|
||
"response_stream = client.chat.completions.create(\n",
|
||
" model=model_name,\n",
|
||
" messages=messages,\n",
|
||
" temperature=0,\n",
|
||
" top_p=0.95,\n",
|
||
" max_tokens=1024,\n",
|
||
" stream=True, # Enable streaming\n",
|
||
" tools=tools,\n",
|
||
")\n",
|
||
"\n",
|
||
"texts = \"\"\n",
|
||
"tool_calls = []\n",
|
||
"name = \"\"\n",
|
||
"arguments = \"\"\n",
|
||
"for chunk in response_stream:\n",
|
||
" if chunk.choices[0].delta.content:\n",
|
||
" texts += chunk.choices[0].delta.content\n",
|
||
" if chunk.choices[0].delta.tool_calls:\n",
|
||
" tool_calls.append(chunk.choices[0].delta.tool_calls[0])\n",
|
||
"print_highlight(\"==== Text ====\")\n",
|
||
"print_highlight(texts)\n",
|
||
"\n",
|
||
"print_highlight(\"==== Tool Call ====\")\n",
|
||
"for tool_call in tool_calls:\n",
|
||
" print_highlight(tool_call)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"#### Handle Tools\n",
|
||
"When the engine determines it should call a particular tool, it will return arguments or partial arguments through the response. You can parse these arguments and later invoke the tool accordingly."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Parse and combine function call arguments\n",
|
||
"arguments = []\n",
|
||
"for tool_call in tool_calls:\n",
|
||
" if tool_call.function.name:\n",
|
||
" print_highlight(f\"Streamed function call name: {tool_call.function.name}\")\n",
|
||
"\n",
|
||
" if tool_call.function.arguments:\n",
|
||
" arguments.append(tool_call.function.arguments)\n",
|
||
"\n",
|
||
"# Combine all fragments into a single JSON string\n",
|
||
"full_arguments = \"\".join(arguments)\n",
|
||
"print_highlight(f\"streamed function call arguments: {full_arguments}\")"
|
||
]
|
||
},
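{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since the streamed fragments concatenate into a single JSON string, they can be decoded with `json.loads` once the stream has finished. Below is a minimal sketch; it assumes the stream produced exactly one tool call with valid JSON arguments."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Decode the combined argument fragments into a Python dict.\n",
"# Assumes the stream contained exactly one tool call whose arguments form valid JSON.\n",
"parsed_arguments = json.loads(full_arguments)\n",
"print_highlight(f\"Parsed streamed arguments: {parsed_arguments}\")"
]
},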
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Define a Tool Function"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# This is a demonstration, define real function according to your usage.\n",
|
||
"def get_current_weather(city: str, state: str, unit: \"str\"):\n",
|
||
" return (\n",
|
||
" f\"The weather in {city}, {state} is 85 degrees {unit}. It is \"\n",
|
||
" \"partly cloudly, with highs in the 90's.\"\n",
|
||
" )\n",
|
||
"\n",
|
||
"\n",
|
||
"available_tools = {\"get_current_weather\": get_current_weather}"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"\n",
|
||
"### Execute the Tool"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"messages.append(response_non_stream.choices[0].message)\n",
|
||
"\n",
|
||
"# Call the corresponding tool function\n",
|
||
"tool_call = messages[-1].tool_calls[0]\n",
|
||
"tool_name = tool_call.function.name\n",
|
||
"tool_to_call = available_tools[tool_name]\n",
|
||
"result = tool_to_call(**(json.loads(tool_call.function.arguments)))\n",
|
||
"print_highlight(f\"Function call result: {result}\")\n",
|
||
"# messages.append({\"role\": \"tool\", \"content\": result, \"name\": tool_name})\n",
|
||
"messages.append(\n",
|
||
" {\n",
|
||
" \"role\": \"tool\",\n",
|
||
" \"tool_call_id\": tool_call.id,\n",
|
||
" \"content\": str(result),\n",
|
||
" \"name\": tool_name,\n",
|
||
" }\n",
|
||
")\n",
|
||
"\n",
|
||
"print_highlight(f\"Updated message history: {messages}\")"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Send Results Back to Model"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"final_response = client.chat.completions.create(\n",
|
||
" model=model_name,\n",
|
||
" messages=messages,\n",
|
||
" temperature=0,\n",
|
||
" top_p=0.95,\n",
|
||
" stream=False,\n",
|
||
" tools=tools,\n",
|
||
")\n",
|
||
"print_highlight(\"Non-stream response:\")\n",
|
||
"print_highlight(final_response)\n",
|
||
"\n",
|
||
"print_highlight(\"==== Text ====\")\n",
|
||
"print_highlight(final_response.choices[0].message.content)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Native API and SGLang Runtime (SRT)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"from transformers import AutoTokenizer\n",
|
||
"import requests\n",
|
||
"\n",
|
||
"# generate an answer\n",
|
||
"tokenizer = AutoTokenizer.from_pretrained(\"Qwen/Qwen2.5-7B-Instruct\")\n",
|
||
"\n",
|
||
"messages = get_messages()\n",
|
||
"\n",
|
||
"input = tokenizer.apply_chat_template(\n",
|
||
" messages,\n",
|
||
" tokenize=False,\n",
|
||
" add_generation_prompt=True,\n",
|
||
" tools=tools,\n",
|
||
")\n",
|
||
"\n",
|
||
"gen_url = f\"http://localhost:{port}/generate\"\n",
|
||
"gen_data = {\n",
|
||
" \"text\": input,\n",
|
||
" \"sampling_params\": {\n",
|
||
" \"skip_special_tokens\": False,\n",
|
||
" \"max_new_tokens\": 1024,\n",
|
||
" \"temperature\": 0,\n",
|
||
" \"top_p\": 0.95,\n",
|
||
" },\n",
|
||
"}\n",
|
||
"gen_response = requests.post(gen_url, json=gen_data).json()[\"text\"]\n",
|
||
"print_highlight(\"==== Response ====\")\n",
|
||
"print_highlight(gen_response)\n",
|
||
"\n",
|
||
"# parse the response\n",
|
||
"parse_url = f\"http://localhost:{port}/parse_function_call\"\n",
|
||
"\n",
|
||
"function_call_input = {\n",
|
||
" \"text\": gen_response,\n",
|
||
" \"tool_call_parser\": \"qwen25\",\n",
|
||
" \"tools\": tools,\n",
|
||
"}\n",
|
||
"\n",
|
||
"function_call_response = requests.post(parse_url, json=function_call_input)\n",
|
||
"function_call_response_json = function_call_response.json()\n",
|
||
"\n",
|
||
"print_highlight(\"==== Text ====\")\n",
|
||
"print(function_call_response_json[\"normal_text\"])\n",
|
||
"print_highlight(\"==== Calls ====\")\n",
|
||
"print(\"function name: \", function_call_response_json[\"calls\"][0][\"name\"])\n",
|
||
"print(\"function arguments: \", function_call_response_json[\"calls\"][0][\"parameters\"])"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"terminate_process(server_process)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Offline Engine API"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"import sglang as sgl\n",
|
||
"from sglang.srt.function_call.function_call_parser import FunctionCallParser\n",
|
||
"from sglang.srt.managers.io_struct import Tool, Function\n",
|
||
"\n",
|
||
"llm = sgl.Engine(model_path=\"Qwen/Qwen2.5-7B-Instruct\")\n",
|
||
"tokenizer = llm.tokenizer_manager.tokenizer\n",
|
||
"input_ids = tokenizer.apply_chat_template(\n",
|
||
" messages, tokenize=True, add_generation_prompt=True, tools=tools\n",
|
||
")\n",
|
||
"\n",
|
||
"# Note that for gpt-oss tool parser, adding \"no_stop_trim\": True\n",
|
||
"# to make sure the tool call token <call> is not trimmed.\n",
|
||
"\n",
|
||
"sampling_params = {\n",
|
||
" \"max_new_tokens\": 1024,\n",
|
||
" \"temperature\": 0,\n",
|
||
" \"top_p\": 0.95,\n",
|
||
" \"skip_special_tokens\": False,\n",
|
||
"}\n",
|
||
"\n",
|
||
"# 1) Offline generation\n",
|
||
"result = llm.generate(input_ids=input_ids, sampling_params=sampling_params)\n",
|
||
"generated_text = result[\"text\"] # Assume there is only one prompt\n",
|
||
"\n",
|
||
"print_highlight(\"=== Offline Engine Output Text ===\")\n",
|
||
"print_highlight(generated_text)\n",
|
||
"\n",
|
||
"\n",
|
||
"# 2) Parse using FunctionCallParser\n",
|
||
"def convert_dict_to_tool(tool_dict: dict) -> Tool:\n",
|
||
" function_dict = tool_dict.get(\"function\", {})\n",
|
||
" return Tool(\n",
|
||
" type=tool_dict.get(\"type\", \"function\"),\n",
|
||
" function=Function(\n",
|
||
" name=function_dict.get(\"name\"),\n",
|
||
" description=function_dict.get(\"description\"),\n",
|
||
" parameters=function_dict.get(\"parameters\"),\n",
|
||
" ),\n",
|
||
" )\n",
|
||
"\n",
|
||
"\n",
|
||
"tools = [convert_dict_to_tool(raw_tool) for raw_tool in tools]\n",
|
||
"\n",
|
||
"parser = FunctionCallParser(tools=tools, tool_call_parser=\"qwen25\")\n",
|
||
"normal_text, calls = parser.parse_non_stream(generated_text)\n",
|
||
"\n",
|
||
"print_highlight(\"=== Parsing Result ===\")\n",
|
||
"print(\"Normal text portion:\", normal_text)\n",
|
||
"print_highlight(\"Function call portion:\")\n",
|
||
"for call in calls:\n",
|
||
" # call: ToolCallItem\n",
|
||
" print_highlight(f\" - tool name: {call.name}\")\n",
|
||
" print_highlight(f\" parameters: {call.parameters}\")\n",
|
||
"\n",
|
||
"# 3) If needed, perform additional logic on the parsed functions, such as automatically calling the corresponding function to obtain a return value, etc."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"llm.shutdown()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Tool Choice Mode\n",
|
||
"\n",
|
||
"SGLang supports OpenAI's `tool_choice` parameter to control when and which tools the model should call. This feature is implemented using EBNF (Extended Backus-Naur Form) grammar to ensure reliable tool calling behavior.\n",
|
||
"\n",
|
||
"### Supported Tool Choice Options\n",
|
||
"\n",
|
||
"- **`tool_choice=\"required\"`**: Forces the model to call at least one tool\n",
|
||
"- **`tool_choice={\"type\": \"function\", \"function\": {\"name\": \"specific_function\"}}`**: Forces the model to call a specific function\n",
|
||
"\n",
|
||
"### Backend Compatibility\n",
|
||
"\n",
|
||
"Tool choice is fully supported with the **Xgrammar backend**, which is the default grammar backend (`--grammar-backend xgrammar`). However, it may not be fully supported with other backends such as `outlines`.\n",
|
||
"\n",
|
||
"### Example: Required Tool Choice"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"from openai import OpenAI\n",
|
||
"from sglang.utils import wait_for_server, print_highlight, terminate_process\n",
|
||
"from sglang.test.doc_patch import launch_server_cmd\n",
|
||
"\n",
|
||
"# Start a new server session for tool choice examples\n",
|
||
"server_process_tool_choice, port_tool_choice = launch_server_cmd(\n",
|
||
" \"python3 -m sglang.launch_server --model-path Qwen/Qwen2.5-7B-Instruct --tool-call-parser qwen25 --host 0.0.0.0 --log-level warning\"\n",
|
||
")\n",
|
||
"wait_for_server(f\"http://localhost:{port_tool_choice}\")\n",
|
||
"\n",
|
||
"# Initialize client for tool choice examples\n",
|
||
"client_tool_choice = OpenAI(\n",
|
||
" api_key=\"None\", base_url=f\"http://0.0.0.0:{port_tool_choice}/v1\"\n",
|
||
")\n",
|
||
"model_name_tool_choice = client_tool_choice.models.list().data[0].id\n",
|
||
"\n",
|
||
"# Example with tool_choice=\"required\" - forces the model to call a tool\n",
|
||
"messages_required = [\n",
|
||
" {\"role\": \"user\", \"content\": \"Hello, what is the capital of France?\"}\n",
|
||
"]\n",
|
||
"\n",
|
||
"# Define tools\n",
|
||
"tools = [\n",
|
||
" {\n",
|
||
" \"type\": \"function\",\n",
|
||
" \"function\": {\n",
|
||
" \"name\": \"get_current_weather\",\n",
|
||
" \"description\": \"Get the current weather in a given location\",\n",
|
||
" \"parameters\": {\n",
|
||
" \"type\": \"object\",\n",
|
||
" \"properties\": {\n",
|
||
" \"city\": {\n",
|
||
" \"type\": \"string\",\n",
|
||
" \"description\": \"The city to find the weather for, e.g. 'San Francisco'\",\n",
|
||
" },\n",
|
||
" \"unit\": {\n",
|
||
" \"type\": \"string\",\n",
|
||
" \"description\": \"The unit to fetch the temperature in\",\n",
|
||
" \"enum\": [\"celsius\", \"fahrenheit\"],\n",
|
||
" },\n",
|
||
" },\n",
|
||
" \"required\": [\"city\", \"unit\"],\n",
|
||
" },\n",
|
||
" },\n",
|
||
" }\n",
|
||
"]\n",
|
||
"\n",
|
||
"response_required = client_tool_choice.chat.completions.create(\n",
|
||
" model=model_name_tool_choice,\n",
|
||
" messages=messages_required,\n",
|
||
" temperature=0,\n",
|
||
" max_tokens=1024,\n",
|
||
" tools=tools,\n",
|
||
" tool_choice=\"required\", # Force the model to call a tool\n",
|
||
")\n",
|
||
"\n",
|
||
"print_highlight(\"Response with tool_choice='required':\")\n",
|
||
"print(\"Content:\", response_required.choices[0].message.content)\n",
|
||
"print(\"Tool calls:\", response_required.choices[0].message.tool_calls)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Example: Specific Function Choice\n"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Example with specific function choice - forces the model to call a specific function\n",
|
||
"messages_specific = [\n",
|
||
" {\"role\": \"user\", \"content\": \"What are the most attactive places in France?\"}\n",
|
||
"]\n",
|
||
"\n",
|
||
"response_specific = client_tool_choice.chat.completions.create(\n",
|
||
" model=model_name_tool_choice,\n",
|
||
" messages=messages_specific,\n",
|
||
" temperature=0,\n",
|
||
" max_tokens=1024,\n",
|
||
" tools=tools,\n",
|
||
" tool_choice={\n",
|
||
" \"type\": \"function\",\n",
|
||
" \"function\": {\"name\": \"get_current_weather\"},\n",
|
||
" }, # Force the model to call the specific get_current_weather function\n",
|
||
")\n",
|
||
"\n",
|
||
"print_highlight(\"Response with specific function choice:\")\n",
|
||
"print(\"Content:\", response_specific.choices[0].message.content)\n",
|
||
"print(\"Tool calls:\", response_specific.choices[0].message.tool_calls)\n",
|
||
"\n",
|
||
"if response_specific.choices[0].message.tool_calls:\n",
|
||
" tool_call = response_specific.choices[0].message.tool_calls[0]\n",
|
||
" print_highlight(f\"Called function: {tool_call.function.name}\")\n",
|
||
" print_highlight(f\"Arguments: {tool_call.function.arguments}\")"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"terminate_process(server_process_tool_choice)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Pythonic Tool Call Format (Llama-3.2 / Llama-3.3 / Llama-4)\n",
|
||
"\n",
|
||
"Some Llama models (such as Llama-3.2-1B, Llama-3.2-3B, Llama-3.3-70B, and Llama-4) support a \"pythonic\" tool call format, where the model outputs function calls as Python code, e.g.:\n",
|
||
"\n",
|
||
"```python\n",
|
||
"[get_current_weather(city=\"San Francisco\", state=\"CA\", unit=\"celsius\")]\n",
|
||
"```\n",
|
||
"\n",
|
||
"- The output is a Python list of function calls, with arguments as Python literals (not JSON).\n",
|
||
"- Multiple tool calls can be returned in the same list:\n",
|
||
"```python\n",
|
||
"[get_current_weather(city=\"San Francisco\", state=\"CA\", unit=\"celsius\"),\n",
|
||
" get_current_weather(city=\"New York\", state=\"NY\", unit=\"fahrenheit\")]\n",
|
||
"```\n",
|
||
"\n",
|
||
"For more information, refer to Meta’s documentation on [Zero shot function calling](https://github.com/meta-llama/llama-models/blob/main/models/llama4/prompt_format.md#zero-shot-function-calling---system-message).\n",
|
||
"\n",
|
||
"Note that this feature is still under development on Blackwell.\n",
|
||
"\n",
|
||
"### How to enable\n",
|
||
"- Launch the server with `--tool-call-parser pythonic`\n",
|
||
"- You may also specify --chat-template with the improved template for the model (e.g., `--chat-template=examples/chat_template/tool_chat_template_llama4_pythonic.jinja`).\n",
|
||
"This is recommended because the model expects a special prompt format to reliably produce valid pythonic tool call outputs. The template ensures that the prompt structure (e.g., special tokens, message boundaries like `<|eom|>`, and function call delimiters) matches what the model was trained or fine-tuned on. If you do not use the correct chat template, tool calling may fail or produce inconsistent results.\n",
|
||
"\n",
|
||
"#### Forcing Pythonic Tool Call Output Without a Chat Template\n",
|
||
"If you don't want to specify a chat template, you must give the model extremely explicit instructions in your messages to enforce pythonic output. For example, for `Llama-3.2-1B-Instruct`, you need:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"import openai\n",
|
||
"\n",
|
||
"server_process, port = launch_server_cmd(\n",
|
||
" \" python3 -m sglang.launch_server --model-path meta-llama/Llama-3.2-1B-Instruct --tool-call-parser pythonic --tp 1 --log-level warning\" # llama-3.2-1b-instruct\n",
|
||
")\n",
|
||
"wait_for_server(f\"http://localhost:{port}\")\n",
|
||
"\n",
|
||
"tools = [\n",
|
||
" {\n",
|
||
" \"type\": \"function\",\n",
|
||
" \"function\": {\n",
|
||
" \"name\": \"get_weather\",\n",
|
||
" \"description\": \"Get the current weather for a given location.\",\n",
|
||
" \"parameters\": {\n",
|
||
" \"type\": \"object\",\n",
|
||
" \"properties\": {\n",
|
||
" \"location\": {\n",
|
||
" \"type\": \"string\",\n",
|
||
" \"description\": \"The name of the city or location.\",\n",
|
||
" }\n",
|
||
" },\n",
|
||
" \"required\": [\"location\"],\n",
|
||
" },\n",
|
||
" },\n",
|
||
" },\n",
|
||
" {\n",
|
||
" \"type\": \"function\",\n",
|
||
" \"function\": {\n",
|
||
" \"name\": \"get_tourist_attractions\",\n",
|
||
" \"description\": \"Get a list of top tourist attractions for a given city.\",\n",
|
||
" \"parameters\": {\n",
|
||
" \"type\": \"object\",\n",
|
||
" \"properties\": {\n",
|
||
" \"city\": {\n",
|
||
" \"type\": \"string\",\n",
|
||
" \"description\": \"The name of the city to find attractions for.\",\n",
|
||
" }\n",
|
||
" },\n",
|
||
" \"required\": [\"city\"],\n",
|
||
" },\n",
|
||
" },\n",
|
||
" },\n",
|
||
"]\n",
|
||
"\n",
|
||
"\n",
|
||
"def get_messages():\n",
|
||
" return [\n",
|
||
" {\n",
|
||
" \"role\": \"system\",\n",
|
||
" \"content\": (\n",
|
||
" \"You are a travel assistant. \"\n",
|
||
" \"When asked to call functions, ALWAYS respond ONLY with a python list of function calls, \"\n",
|
||
" \"using this format: [func_name1(param1=value1, param2=value2), func_name2(param=value)]. \"\n",
|
||
" \"Do NOT use JSON, do NOT use variables, do NOT use any other format. \"\n",
|
||
" \"Here is an example:\\n\"\n",
|
||
" '[get_weather(location=\"Paris\"), get_tourist_attractions(city=\"Paris\")]'\n",
|
||
" ),\n",
|
||
" },\n",
|
||
" {\n",
|
||
" \"role\": \"user\",\n",
|
||
" \"content\": (\n",
|
||
" \"I'm planning a trip to Tokyo next week. What's the weather like and what are some top tourist attractions? \"\n",
|
||
" \"Propose parallel tool calls at once, using the python list of function calls format as shown above.\"\n",
|
||
" ),\n",
|
||
" },\n",
|
||
" ]\n",
|
||
"\n",
|
||
"\n",
|
||
"messages = get_messages()\n",
|
||
"\n",
|
||
"client = openai.Client(base_url=f\"http://localhost:{port}/v1\", api_key=\"xxxxxx\")\n",
|
||
"model_name = client.models.list().data[0].id\n",
|
||
"\n",
|
||
"\n",
|
||
"response_non_stream = client.chat.completions.create(\n",
|
||
" model=model_name,\n",
|
||
" messages=messages,\n",
|
||
" temperature=0,\n",
|
||
" top_p=0.9,\n",
|
||
" stream=False, # Non-streaming\n",
|
||
" tools=tools,\n",
|
||
")\n",
|
||
"print_highlight(\"Non-stream response:\")\n",
|
||
"print_highlight(response_non_stream)\n",
|
||
"\n",
|
||
"response_stream = client.chat.completions.create(\n",
|
||
" model=model_name,\n",
|
||
" messages=messages,\n",
|
||
" temperature=0,\n",
|
||
" top_p=0.9,\n",
|
||
" stream=True,\n",
|
||
" tools=tools,\n",
|
||
")\n",
|
||
"texts = \"\"\n",
|
||
"tool_calls = []\n",
|
||
"name = \"\"\n",
|
||
"arguments = \"\"\n",
|
||
"\n",
|
||
"for chunk in response_stream:\n",
|
||
" if chunk.choices[0].delta.content:\n",
|
||
" texts += chunk.choices[0].delta.content\n",
|
||
" if chunk.choices[0].delta.tool_calls:\n",
|
||
" tool_calls.append(chunk.choices[0].delta.tool_calls[0])\n",
|
||
"\n",
|
||
"print_highlight(\"Streaming Response:\")\n",
|
||
"print_highlight(\"==== Text ====\")\n",
|
||
"print_highlight(texts)\n",
|
||
"\n",
|
||
"print_highlight(\"==== Tool Call ====\")\n",
|
||
"for tool_call in tool_calls:\n",
|
||
" print_highlight(tool_call)\n",
|
||
"\n",
|
||
"terminate_process(server_process)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"> **Note:** \n",
|
||
"> The model may still default to JSON if it was heavily finetuned on that format. Prompt engineering (including examples) is the only way to increase the chance of pythonic output if you are not using a chat template."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"metadata": {},
|
||
"source": [
|
||
"## How to support a new model?\n",
|
||
"1. Update the TOOLS_TAG_LIST in sglang/srt/function_call_parser.py with the model’s tool tags. Currently supported tags include:\n",
|
||
"```\n",
|
||
"\tTOOLS_TAG_LIST = [\n",
|
||
"\t “<|plugin|>“,\n",
|
||
"\t “<function=“,\n",
|
||
"\t “<tool_call>“,\n",
|
||
"\t “<|python_tag|>“,\n",
|
||
"\t “[TOOL_CALLS]”\n",
|
||
"\t]\n",
|
||
"```\n",
|
||
"2. Create a new detector class in sglang/srt/function_call_parser.py that inherits from BaseFormatDetector. The detector should handle the model’s specific function call format. For example:\n",
|
||
"```\n",
|
||
" class NewModelDetector(BaseFormatDetector):\n",
|
||
"```\n",
|
||
"3. Add the new detector to the MultiFormatParser class that manages all the format detectors."
|
||
]
|
||
}
|
||
],
|
||
"metadata": {
|
||
"language_info": {
|
||
"codemirror_mode": {
|
||
"name": "ipython",
|
||
"version": 3
|
||
},
|
||
"file_extension": ".py",
|
||
"mimetype": "text/x-python",
|
||
"name": "python",
|
||
"nbconvert_exporter": "python",
|
||
"pygments_lexer": "ipython3"
|
||
}
|
||
},
|
||
"nbformat": 4,
|
||
"nbformat_minor": 4
|
||
}
|