A step-by-step walkthrough of the typical **LLM Function Calling** pipeline.

Table of Contents
1. Actors & Concepts
2. End-to-End Sequence
3. Key Implementation Notes
4. Minimal Example (Pseudo-Python)
5. Common Pitfalls / FAQ
TL;DR
1. Actors & Concepts
- **Developer** supplies JSON-Schema definitions of callable functions.
- **User** sends a request in natural language.
- **LLM** decides at inference time whether to call a function, which one, and with what arguments.
- **Caller / Tool Runtime** actually runs the code or calls the external API and returns the result to the model.

2. End-to-End Sequence
| # | Step |
|---|------|
| 1 | **Register functions**: pass a JSON-Schema for each function via the `functions` field, e.g. `{"name": "weather", "parameters": {"type": "object", "properties": {"location": {"type": "string"}}}}`. |
| 2 | **Send the user message**: `messages = [{"role": "user", "content": "What's the weather today?"}]`. |
| 3 | **Model reasoning**: if the LLM decides a function is needed, it emits a tool call, i.e. a `role: "assistant"` message carrying `name: "weather"` and `arguments: {...}`. |
| 4 | **Caller executes**: the client detects the `name` field, invokes the local or remote function, and gets a result such as `{"temp": 25}`. |
| 5 | **Inject the result**: append `{"role": "function", "name": "weather", "content": "{\"temp\": 25}"}` back into `messages` so the model can see it. |
| 6 | **Second pass**: having seen the function output, the model generates the final natural-language answer. |

From the developer's perspective it is four steps: **define → watch → call → inject**.

3. Key Implementation Notes

- **JSON-Schema precision.** Field types and required/optional flags determine how well the model fills arguments. Use rich constraints (`enum`, `minimum`, `pattern`) to steer it toward valid args and reduce hallucination.
- **Multi-turn calls.** The model can chain functions, e.g. `search_flights` then `book_flight`. Just keep injecting each result back as a `function`-role message every round.
- **Streaming.** With OpenAI's streaming interface (`stream=true`), tool-call deltas arrive early, so the client can trigger execution in near real time.
- **Error handling.** If execution fails, return the error description as a `function`-role message too, and let the model decide whether to retry or explain.
- **Security & permissions.** Validate arguments on the executor side to block injected malicious shell strings; the JSON schema itself is not a security boundary.

4. Minimal Example (Pseudo-Python)
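The schema in the example that follows is deliberately minimal. Applying the constraint advice from the notes above, a richer, hypothetical version of the same `weather` schema (the `unit` field and the specific `pattern` here are illustrative assumptions, not from the original) might look like:

```python
# Hypothetical constrained schema: `pattern` and `enum` narrow what the
# model may emit, reducing hallucinated or malformed arguments.
weather_schema = {
    "name": "weather",
    "description": "Get current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "pattern": "^[A-Za-z .-]+$"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}
```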
```python
import json
import openai

# 1. define schema
functions = [{
    "name": "weather",
    "description": "Get current weather",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}]

# 2. first API call
messages = [{"role": "user", "content": "What's the weather in Beijing?"}]
resp = openai.ChatCompletion.create(model="gpt-4o", messages=messages, functions=functions)

# 3. detect and run the requested function
if resp.choices[0].finish_reason == "function_call":
    tool_call = resp.choices[0].message
    args = json.loads(tool_call["function_call"]["arguments"])
    result = get_weather(args["location"])

    # 4. feed the result back
    messages += [
        tool_call,
        {"role": "function", "name": "weather", "content": json.dumps(result)},
    ]
    final = openai.ChatCompletion.create(model="gpt-4o", messages=messages)
    print(final.choices[0].message["content"])
```

5. Common Pitfalls / FAQ
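Several of the pitfalls below trace back to unvalidated model output. A minimal defensive-parsing sketch (the `parse_args` helper is an assumption, not part of the original example):

```python
import json

def parse_args(raw, required):
    """Parse model-emitted argument JSON (hypothetical helper).
    Returns (args, error_message); exactly one of them is None."""
    try:
        args = json.loads(raw)  # reject malformed JSON early
    except json.JSONDecodeError as e:
        return None, f"invalid JSON: {e}"
    if not isinstance(args, dict):
        return None, "arguments must be a JSON object"
    missing = [k for k in required if k not in args]
    if missing:
        return None, f"missing required fields: {missing}"
    return args, None
```

On failure, the error string can itself be injected back as a `function`-role message so the model can retry with corrected arguments.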
| Problem | Suggested fix |
|---|---|
| Model refuses to fill arguments | Check that the system prompt permits tool use; make schema names self-explanatory; provide example calls. |
| Malformed arguments | Validate with `json.loads()`; add `format` / `pattern` constraints to the schema. |
| Chained calls get out of order | Send the full history plus the latest function result every round to keep behavior deterministic. |

TL;DR
Function Calling = **LLM decides → runtime executes → LLM finishes**. Master JSON-Schema definitions, tool-message injection, and the two-pass loop, and you can plug an "agent" into any backend logic. Master the pattern once and you can wire your Llama/GPT/Claude agent to databases, SaaS APIs, or even a coffee machine.
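The two-pass loop above extends naturally to chained calls. A hedged sketch against the legacy `openai.ChatCompletion`-style interface used in the example (the `run_agent` name, `tools` registry, and round cap are assumptions, not part of the original):

```python
import json

def run_agent(client, messages, functions, tools, max_rounds=5):
    """Loop: call the model; if it requests a function, execute it and
    inject the result; repeat until the model answers in plain text."""
    for _ in range(max_rounds):
        resp = client.ChatCompletion.create(
            model="gpt-4o", messages=messages, functions=functions
        )
        msg = resp.choices[0].message
        call = msg.get("function_call")
        if call is None:                      # no tool requested: final answer
            return msg["content"]
        args = json.loads(call["arguments"])  # may raise on bad JSON (see FAQ)
        result = tools[call["name"]](**args)  # dispatch to the real function
        messages += [
            msg,
            {"role": "function", "name": call["name"],
             "content": json.dumps(result)},
        ]
    raise RuntimeError("too many chained calls")
```

Each round appends both the model's tool call and its result, so the full history stays intact, which is exactly the fix the FAQ suggests for out-of-order chains.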