Commit Graph

127 Commits

Author SHA1 Message Date
hailin 6767215f83 refactor(agents): remove Structured Output (Layer 2) to enable true streaming
Background:
Commit bb1a113 introduced a 4-layer response quality control system:
- Layer 1: System Prompt (1,095 lines of detailed guidance)
- Layer 2: Structured Output (Zod schema → output_config)
- Layer 3: LLM-as-Judge (Haiku 4.5 scoring)
- Layer 4: Per-intent hard truncation (already removed in db8617d)

Problems with Layer 2 (Structured Output):
1. Blocks streaming — output_config forces the model to emit JSON; JSON fragments
   cannot be shown to the user, so the whole response is buffered and emitted at once
2. Frequent Zod validation crashes — the SDK throws when the intent enum value
   doesn't match; 4 hotfixes so far (b55cd4b, db8617d, 7af8c4d, and this one)
3. The followUp field causes content loss — the model splits answer content into
   followUp, which is then filtered out
4. Intent classification is only used for logging and adds no user-facing value
5. z.string() has no .max() constraint — it doesn't actually control answer length

After removal, answer quality is guaranteed by these mechanisms (all retained):
- Layer 1: System Prompt — intent classification table, answer style, length guidance
- Layer 3: LLM-Judge — relevance/conciseness/noise scoring with automatic retry on failure
- API max_tokens: 2048 — hard cap on output length

Changes:
- coordinator-agent.service.ts: remove the zodOutputFormat/CoordinatorResponseSchema
  import and the outputConfig parameter
- agent-loop.ts: remove the outputConfig guard in text_delta handling (text now
  streams directly), remove the output_config API parameter, remove the two
  Structured Output validation-failure recovery catch blocks, remove the
  JSON parsing + safety-net block
- agent.types.ts: remove the outputConfig field from the AgentLoopParams interface
- coordinator-response.schema.ts: empty out the Zod schema/helpers, keep a historical note

Result:
- Users now see token-by-token streaming output
- Eliminates all crash risk related to Structured Output
- Net reduction of ~130 lines of code

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 11:15:48 -08:00
hailin 913a3fd375 fix(prompt): defer 99元 mention — never in first response to new users
## Problem
User asks "我适合哪种移民方式?" ("Which immigration path suits me?") as their
first message → AI immediately mentions the 99元 paid assessment. This is
aggressive and off-putting for new users who are just exploring.

## Root Cause
The intent table classified "适合什么" ("what am I suited for") as assessment_request
with an instruction to immediately mention 99元. This conflicts with the
conversion-philosophy section that says "免费问答建立信任 → 付费评估"
(free Q&A builds trust → paid assessment).

## Fix (4 changes in coordinator-system-prompt.ts)

1. **Intent table**: assessment_request no longer says "immediately mention
   99元". Instead references new handling rules below the table.

2. **New "评估请求处理规则" (assessment-request handling rules) section** (after intent table):
   - Early conversation + no user info → exploratory question, NOT
     assessment request. Collect info first, give initial direction.
   - User shared info + explicitly asks "做个评估" → real assessment
     request, mention 99元.
   - User shared info but didn't ask → give free initial direction,
     don't proactively mention payment.

3. **Assessment suggestion timing** (section 5.6):
   - Added 3 prerequisites before mentioning 99元:
     a. At least 3 key info items collected
     b. Already gave free initial direction (user felt value)
     c. Conversation has gone 3-4+ rounds
   - Added absolute prohibition: never mention 99元 in first response.

4. **Conversion boundary example**: Changed the misleading "我适合走高才通吗
   → 需要评估" ("Am I suited for the Top Talent Pass? → needs an assessment")
   to nuanced guidance that distinguishes exploration from genuine assessment
   requests.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 10:52:37 -08:00
hailin 7af8c4d8de fix(agents): graceful recovery from structured output validation errors
## Problem
SDK's Zod validation for `output_config` occasionally fails with:
  "Failed to parse structured output: invalid_value at path [intent]"
This crashes the entire response — user sees nothing despite model
generating a valid answer.

## Root Cause
The Anthropic SDK validates streamed structured output against the Zod
schema (CoordinatorResponseSchema) after streaming completes. When the
model returns an intent value not in the z.enum() (rare but happens),
the SDK throws during stream iteration or finalMessage().

## Fix
1. Catch "Failed to parse structured output" errors in both:
   - Stream iteration catch block (for-await loop)
   - stream.finalMessage() catch block
2. Recover by extracting accumulated text from assistantBlocks
3. Manual JSON.parse (skips Zod validation — intent enum mismatch
   doesn't affect user-facing content)
4. Yield parsed.answer + parsed.followUp normally

## Also Included (from previous commit)
- Removed INTENT_MAX_ANSWER_LENGTH hard truncation (did more harm than good)
- Only 2000-char safety net remains for extreme edge cases
- followUp: non-question content always appended (prevents content loss)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 10:42:52 -08:00
hailin db8617dda8 refactor(agents): remove per-intent hard truncation, keep 2000-char safety net
Hard-coded INTENT_MAX_ANSWER_LENGTH limits caused mid-sentence truncation and
content loss. Length control now relies on prompt + schema description + LLM-Judge
(3 layers). Only a 2000-char safety net remains for extreme edge cases.

Also simplified followUp: non-question followUp is now always appended (prevents
model content split from silently dropping text).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 10:32:53 -08:00
hailin b55cd4bc1e fix(agents): widen answer length limits and preserve followUp continuations
INTENT_MAX_ANSWER_LENGTH was too tight (objection_expression 200 chars truncated
good responses). Bumped all limits ~25-50%. Also fixed followUp filter that silently
dropped content when model split answer across answer+followUp fields — now appends
followUp as continuation when answer ends mid-sentence.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 10:25:59 -08:00
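The continuation heuristic in the commit above can be sketched as follows. This is an illustrative sketch, not the repo's code: the function name `mergeFollowUp` and the exact punctuation set are assumptions.

```typescript
// Terminal punctuation (Chinese and English) that marks a complete sentence.
const SENTENCE_END = /[。!?.!?]\s*$/;

/**
 * Hypothetical sketch: return the user-facing text, appending followUp as a
 * continuation when the answer appears to end mid-sentence (i.e. the model
 * split one answer across the answer+followUp fields).
 */
function mergeFollowUp(answer: string, followUp?: string): string {
  if (!followUp) return answer;
  if (SENTENCE_END.test(answer)) {
    // Answer is complete — keep followUp as a separate trailing question.
    return `${answer}\n\n${followUp}`;
  }
  // Mid-sentence split: the tail of the answer landed in followUp.
  return answer + followUp;
}
```

The key design point is that nothing is silently dropped: followUp is either appended inline or kept as a trailing question, never filtered away.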
hailin 366a9cda3a feat(agents): tiered tool-calling system + KB coverage hint for smart routing
P0: Enrich Chapter 10 with detailed policy facts (QMAS scoring, GEP A/B/C
conditions, FAQ quick answers) so Claude can answer common questions directly
without tool calls. Replace absolute rule "never answer from memory" with
3-tier system: Tier 1 (direct from Ch10), Tier 2 (search_knowledge), Tier 3
(invoke_policy_expert).

P1: Context injector now always returns a kb_coverage_hint block — when KB has
results it tells Claude to prefer KB over web_search; when KB has no results
it suggests considering web_search. Web_search tool description updated to
reference the hint.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 10:06:29 -08:00
hailin b7e84ba3b6 fix(agents): only run InputGate on first message to prevent mid-conversation misclassification
Short follow-up answers like "计算机,信息技术" ("computer science, information
technology") were being classified as OFF_TOPIC (0.85) because the InputGate has
no conversation context. The gate now runs only when there are no previous
messages (the first message of a conversation); mid-conversation topic
management is handled by the Coordinator prompt.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 09:43:33 -08:00
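The guard described above reduces to a one-line check. A minimal sketch, assuming a simple message-history shape (the real service uses its own types):

```typescript
// Illustrative message shape; the repo's actual type may differ.
interface Message {
  role: "user" | "assistant";
  content: string;
}

/**
 * Run the InputGate only on the first message of a conversation.
 * The gate classifies a message in isolation, so short mid-conversation
 * replies (e.g. "计算机,信息技术") would otherwise be flagged OFF_TOPIC.
 */
function shouldRunInputGate(history: Message[]): boolean {
  return history.length === 0;
}
```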
hailin f5820f9c7f feat(agents): add subtle conversion guidance to coordinator prompt
Add section 5.6 "隐性转化引导" with trust-first conversion philosophy:
- Free facts vs paid analysis boundary
- "Taste-then-sell" strategy with positive but vague hints
- Assessment suggestion limited to max once per conversation
- Natural urgency only when fact-supported
- Post-assessment → full service transition only when user asks
- Anti-annoyance red line: never make user feel pushed to pay

Recalibrate info exchange (4.3): warm acknowledgment without deep analysis.
Add value framing (4.4) and post-assessment guidance (4.5).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 09:33:52 -08:00
hailin fb966244bc fix(agents): enforce 99 RMB assessment fee — remove "free assessment" language
Update coordinator system prompt to enforce pricing rules:
- All assessments cost 99 RMB (one-time per user), no free assessments
- Must collect payment before calling assessment expert
- Add fee inquiry intent type to response strategy table
- Update generate_payment tool description with fixed pricing
- Replace "免费初步咨询" ("free initial consultation") with a tiered service model

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 09:22:37 -08:00
hailin 40a0513b05 fix(agents): strip markdown code fences from InputGate Haiku response
Haiku sometimes returns JSON wrapped in ```json ... ``` code blocks,
causing JSON.parse to fail. Strip markdown fences before parsing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 08:59:19 -08:00
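The fence-stripping step above can be sketched like this. The helper name and regex are assumptions about the shape of the fix, not the repo's exact code:

```typescript
/**
 * Strip a surrounding ```json ... ``` (or bare ``` ... ```) fence so the
 * payload can be passed to JSON.parse. Returns the input trimmed if no
 * fence is present, so it is safe to call unconditionally.
 */
function stripMarkdownFences(raw: string): string {
  const trimmed = raw.trim();
  const match = trimmed.match(/^```(?:json)?\s*([\s\S]*?)\s*```$/);
  return match ? match[1] : trimmed;
}

// Usage: parse Haiku's reply whether or not it wrapped the JSON in a fence.
const verdict = JSON.parse(stripMarkdownFences('```json\n{"onTopic":true}\n```'));
```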
hailin fa835e4f56 fix(agents): preserve image content blocks in context injection — fixes 209K token overflow
injectIntoMessages() was JSON.stringify-ing array content (with image blocks),
turning base64 data into text tokens (~170K) instead of image tokens (~1,600).
Fix: append context as a new text block in the array, preserving image block format.

Also fixes token estimation to count images at ~1,600 tokens instead of base64 char length,
and adds debug logging for API call token composition.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 08:51:24 -08:00
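The token-estimation fix above boils down to treating image blocks as a flat cost instead of counting their base64 characters. A sketch under stated assumptions — the block shapes, the ~1,600-token figure (taken from the commit), and the 4-chars-per-token heuristic are all illustrative:

```typescript
// Simplified content-block union; the real Anthropic block types are richer.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "image"; source: { data: string } };

const IMAGE_TOKENS = 1600;   // flat per-image estimate (from the commit message)
const CHARS_PER_TOKEN = 4;   // crude heuristic for text

/** Estimate tokens without ever counting base64 payload length as text. */
function estimateTokens(blocks: ContentBlock[]): number {
  return blocks.reduce((sum, b) => {
    if (b.type === "image") return sum + IMAGE_TOKENS;
    return sum + Math.ceil(b.text.length / CHARS_PER_TOKEN);
  }, 0);
}
```

Counting a ~170K-character base64 string as text is exactly the failure mode the commit describes; the flat estimate keeps the count near reality (~1,600).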
hailin 7dc364d9b3 fix(agents): raise compaction threshold to 160K (80% of 200K limit)
80K was too aggressive and caused premature context loss. Now triggers
at 160K tokens with a target of 80K after compaction.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 08:40:18 -08:00
hailin 033e476a6f fix(agents): wire up autoCompactIfNeeded to prevent token overflow
The auto-compaction logic (threshold 80K tokens, summarize older
messages via Haiku) existed but was never called in sendMessage flow.
Now called after context injection, before agent loop.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 08:29:36 -08:00
hailin f2903bd67a fix(agents): use text placeholders for historical attachments to avoid token overflow
Historical images/PDFs were being re-downloaded and base64-encoded for
every API call, causing 200K+ token requests. Now only the current
message includes full attachment blocks; historical ones use text
placeholders like "[用户上传了图片: photo.png]" ("[User uploaded an image: photo.png]").

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 08:22:02 -08:00
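The placeholder idea above can be shown in a few lines. The attachment type and helper name are illustrative; the placeholder wording mirrors the commit message:

```typescript
// Minimal attachment shape for the sketch; the repo's type is richer.
interface Attachment {
  filename: string;
  kind: "image" | "pdf";
}

/**
 * Collapse a historical attachment into a cheap text block instead of
 * re-downloading and base64-encoding it on every API call.
 */
function toHistoricalBlock(att: Attachment): { type: "text"; text: string } {
  const label = att.kind === "image" ? "图片" : "PDF";
  return { type: "text", text: `[用户上传了${label}: ${att.filename}]` };
}
```

Only the current message keeps full image/document blocks; everything older goes through a mapping like this, which is what keeps requests under the 200K-token limit.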
hailin 15d42315ed fix(docling): align volume mount with HF default cache path
Build preloads models to /root/.cache/huggingface (HF default).
Volume must mount there too, not a separate /models path.
Remove HF_HOME env override to keep paths consistent.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 07:38:28 -08:00
hailin 9b357fe01c chore(docling): add .gitignore and .dockerignore
Exclude __pycache__, .pyc files from git tracking and Docker build context.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 07:30:05 -08:00
hailin 73dee93d19 feat(docling): persist model cache via Docker volume
- Add docling_models volume mounted at /models in container
- Set HF_HOME=/models/huggingface at runtime (via docker-compose env)
- Models download once → persist in volume → survive container rebuilds
- Build-time preload uses || to not block build if network unavailable

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 07:18:14 -08:00
hailin 764613bd86 fix(docling): use standalone script for model pre-download
Inline Python one-liner had syntax errors (try/except/finally can't be
single-line). Move to scripts/preload_models.py for reliable execution.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 07:16:20 -08:00
hailin d725864cd6 fix(docling): pre-download models during Docker build
The DocumentConverter() constructor only sets up config; models are lazily
downloaded on the first convert(). Fix by running an actual PDF conversion
during build to trigger the HuggingFace model download and cache.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 07:13:54 -08:00
hailin 57d21526a5 feat(knowledge): add Docling document parsing microservice
Add IBM Docling as a Python FastAPI microservice for high-quality document
parsing with table structure recognition (TableFormer ~94% accuracy) and
OCR support, replacing pdf-parse/mammoth as the primary text extractor.

Architecture:
- New docling-service (Python FastAPI, port 3007) in Docker network
- knowledge-service calls docling-service via HTTP POST multipart/form-data
- Graceful fallback: if Docling fails, falls back to pdf-parse/mammoth
- Text/Markdown files skip Docling (no benefit for plain text)

Changes:
- New: packages/services/docling-service/ (main.py, Dockerfile, requirements.txt)
- docker-compose.yml: add docling-service, wire DOCLING_SERVICE_URL to
  knowledge-service, add missing FILE_SERVICE_URL to conversation-service
- text-extraction.service.ts: inject ConfigService, add extractViaDocling()
  with automatic fallback to legacy extractors
- .env.example: add FILE_SERVICE_PORT/URL and DOCLING_SERVICE_PORT/URL

Inter-service communication map:
  conversation-service → file-service (FILE_SERVICE_URL, attachments)
  conversation-service → knowledge-service (KNOWLEDGE_SERVICE_URL, RAG)
  knowledge-service → docling-service (DOCLING_SERVICE_URL, document parsing)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 05:24:10 -08:00
hailin 5338bdfc0f fix(agents): correct Claude API file size limits (image 5MB, PDF 25MB)
Claude API enforces a hard 5MB limit per image (not 20MB as previously
set). PDFs have a 32MB total request limit; set individual PDF cap to
25MB to leave room for prompt/messages. The downloadAsBase64 method now
accepts a per-type maxSize parameter.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 04:55:54 -08:00
hailin e867ba5529 fix(files): replace MinIO presigned URLs with API proxy + base64 for Claude
MinIO presigned URLs use Docker-internal hostname (minio:9000), making
them inaccessible from both Claude API servers and user browsers.

Changes:
- file-service: add /files/:id/content and /files/:id/thumbnail proxy
  endpoints that stream file data from MinIO
- file-service: toResponseDto now returns API proxy paths instead of
  MinIO presigned URLs
- coordinator: buildAttachmentBlocks now downloads files via file-service
  internal API (http://file-service:3006) and converts to base64 for
  Claude API (images, PDFs) or embeds text content directly
- Configurable FILE_SERVICE_URL env var for service-to-service calls

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 04:49:39 -08:00
hailin 8f7b633041 fix(agents): download text file content for Claude instead of passing URL
Claude API cannot fetch arbitrary URLs. Text-based attachments (txt, csv,
json, md) are now downloaded via their presigned MinIO URL and embedded
directly as text blocks. PDF uses Claude's native document block. Added
50KB size limit with truncation for large text files.

buildMessages() is now async to support text content fetching.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 04:36:17 -08:00
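The 50KB cap above can be sketched as a simple truncation helper. The limit value comes from the commit; the function name, the truncation notice, and using character length as a byte proxy are simplifying assumptions:

```typescript
// ~50KB cap from the commit; character count used as a rough byte proxy.
const MAX_TEXT_CHARS = 50 * 1024;

/**
 * Truncate embedded text-attachment content for the Claude request,
 * appending a notice so the model knows the file was cut off.
 */
function truncateForClaude(content: string): string {
  if (content.length <= MAX_TEXT_CHARS) return content;
  return content.slice(0, MAX_TEXT_CHARS) + "\n\n[文件过大,内容已截断]";
}
```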
hailin 5076e16cc7 feat: add document support for Claude + image thumbnail generation
1. Coordinator now sends all attachment types to Claude:
   - Images → native image blocks (existing)
   - PDF → native document blocks (Claude PDF support)
   - Text files (txt, csv, json, md) → text blocks with filename
   Extracted common buildAttachmentBlocks() helper.

2. File-service generates thumbnails on image upload:
   - Uses sharp to resize to 400x400 max (inside fit, no upscale)
   - Output as WebP at 80% quality for smaller file size
   - Stored in MinIO under thumbnails/ prefix
   - Generated for both direct upload and presigned URL confirm
   - Non-blocking: thumbnail failure doesn't break upload

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 04:31:53 -08:00
hailin cf2fd07ead fix(file-service): sync tenantId in domain entity and add migration
FileORM had tenant_id column but FileEntity domain class was missing it,
causing "column FileORM.tenant_id does not exist" errors on production.

- Add tenantId to FileEntity (constructor, create, fromPersistence)
- Pass tenantId in repository toEntity() mapping
- Add idempotent migration script for files.tenant_id + indexes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 04:21:43 -08:00
hailin 6592e72758 fix(agents): filter out internal strategy notes from followUp output
The model was putting internal notes like "引导回移民话题" ("steer back to the
immigration topic") in the followUp field instead of actual user-facing
questions. Two fixes:

1. Schema: describe followUp as "必须以?结尾,禁止填写内部策略备注"
2. agent-loop: only yield followUp if it contains a question mark (half-width ? or full-width ?)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 02:06:08 -08:00
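The agent-loop filter from fix 2 above is a one-liner. A minimal sketch with an illustrative function name:

```typescript
/**
 * Heuristic: a followUp is user-facing only if it is actually a question,
 * i.e. contains a half-width "?" or full-width "?" question mark.
 * Internal strategy notes like "引导回移民话题" contain neither and are dropped.
 */
function isUserFacingFollowUp(followUp: string): boolean {
  return followUp.includes("?") || followUp.includes("?");
}
```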
hailin d608403535 fix(agents): upgrade coordinator model to Sonnet 4.5 for structured output support
claude-sonnet-4-20250514 does not support output_config (structured outputs).
Changed coordinator model to claude-sonnet-4-5-20250929 which supports it.
Specialist agents remain on their original models (no output_config needed).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 01:58:31 -08:00
hailin 9e9865acb0 fix(tools): resolve input_schema mismatches between coordinator-tools and immigration-tools
## Problem

A full-pipeline audit found 4 input_schema mismatches between
coordinator-tools.ts (the tool definitions Claude actually uses) and
immigration-tools.service.ts (the actual executor), causing parameters sent
by Claude to be parsed incorrectly.

## Fixes

### check_off_topic
- coordinator sends `query`, handler reads `question`
- Fix: handler now accepts both `query` and `question`

### collect_assessment_info
- coordinator sends `{ userId, field, value }` (single-field mode)
- handler reads `{ category, age, education, ... }` (batch mode)
- Fix: handler now accepts both input formats

### generate_payment
- old coordinator schema: `{ userId, serviceType, amount, description }`
- handler expects: `{ serviceType, category, paymentMethod }`
- Fix: update the coordinator schema to `{ serviceType, category, paymentMethod }`
  - serviceType enum changed to ASSESSMENT/CONSULTATION/DOCUMENT_REVIEW (matching payment-service)
  - added category and paymentMethod enums
  - removed userId (taken from context) and amount (priced by payment-service)

### save_user_memory
- old coordinator schema had a redundant `userId` (handler uses context.userId)
- coordinator sends `importance` but the handler never reads it
- handler supports `category` but the coordinator never defined it
- Fix: remove userId and importance from the coordinator schema, add category
2026-02-07 01:31:38 -08:00
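The check_off_topic fix above — accepting either field name on the handler side — can be sketched like this. The function name and input type are illustrative, not the repo's API:

```typescript
/**
 * Normalize the check_off_topic input: the coordinator sends `query`,
 * but older callers (and the original handler) used `question`.
 * Accept both, preferring `query`.
 */
function getOffTopicQuery(input: { query?: string; question?: string }): string {
  const text = input.query ?? input.question;
  if (!text) throw new Error("check_off_topic: missing query/question");
  return text;
}
```

Accepting both names on the handler side is the backward-compatible choice: it fixes traffic from the live tool definition without breaking any caller still using the old field.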
hailin a3f2be078b feat(payment): P2 — order management enhancements: cancel order and order-detail query
## Backend changes

### PaymentClientService enhancements
- New `getOrderDetail(orderId)` — fetch full order info (including payment details)
- New `cancelOrder(orderId)` — cancel an unpaid order (calls POST /orders/:id/cancel)

### New cancel_order tool
- Tool definition: takes orderId, cancels an unpaid order
- Implementation: calls PaymentClientService.cancelOrder()
- On success returns { success, orderId, status, message }
- On failure returns a friendly error message (e.g. "only unpaid orders can be cancelled")
- Registered in coordinator-tools.ts; marked false in the concurrency map (write operation)

## Frontend changes

### cancel_order result rendering
- Success: green card + CheckCircle icon + success message
- Failure: red card + AlertCircle icon + error reason
- Shows the order number

## Notes
- payment-service has no refund API yet; cancel_order is limited to unpaid orders
- Refund support will be added once payment-service implements it
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 01:23:28 -08:00
hailin df754ce8b8 feat(payment): P0 — close the payment loop: Agent can create real orders and generate payment QR codes
## Backend changes

### New PaymentClientService
- New `infrastructure/payment/payment-client.service.ts`
  - HTTP client wrapper for the payment-service API (port 3002)
  - Methods: createOrder, createPayment, checkPaymentStatus, getOrderStatus, getUserOrders
  - Built on native fetch, same pattern as KnowledgeClientService
- New `infrastructure/payment/payment.module.ts`
- AgentsModule imports PaymentModule

### Rewritten generate_payment tool
- Removed all MOCK data (fake orderId, placeholder QR URL)
- Actually calls payment-service: createOrder → createPayment → returns the real QR URL
- Returns orderId, paymentId, qrCodeUrl, paymentUrl, expiresAt

### New check_payment_status tool
- Queries order payment status (calls payment-service GET /orders/:id/status)
- Returns status, statusLabel (Chinese label mapping), paidAt
- Registered in coordinator-tools.ts and the concurrency map (read-only, safe=true)

### New query_order_history tool
- Queries the user's order history (calls payment-service GET /orders)
- Returns an orders array with orderId, serviceType, amount, status, createdAt
- Registered in coordinator-tools.ts and the concurrency map (read-only, safe=true)

## Frontend changes

### QR code rendering
- Installed qrcode.react 4.2.0
- ToolCallResult component renders a real QR code via QRCodeSVG
- Supports both qrCodeUrl (QR code) and paymentUrl (redirect link) payment flows
- Shows order number, amount, expiry time

### Payment status card
- check_payment_status results render as a colored status card
- paid=green, pending=yellow, cancelled=red, refunded=orange

### Order history list
- query_order_history results render as an order-list card
- Each row shows: category, date, amount, status label

### WebSocket tool event handling
- tool_result events are collected into pendingToolResults (new chatStore state)
- On stream_end, toolResults are injected into message metadata.toolCalls
- On stream_start, pendingToolResults is cleared
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 01:17:14 -08:00
hailin bb1a1139a3 feat(agents): add 4-layer response quality control — structured outputs, LLM judge, smart truncation
A hard-constraint system for AI response quality, addressing the core problem: the AI couldn't answer user questions precisely with minimal wording.

## Four-layer defense architecture

### Layer 1 — Prompt optimization (soft constraint)
- coordinator-system-prompt.ts: new "top-priority principle: precise answers" section
  - Intent classification table (7 types) + length and answer strategy for each
  - Bad vs good example pairs
  - "Better too short than too long" principle
  - Three closing reminders: precise answers > accuracy > brevity is professionalism
- policy-expert-prompt.ts: trimmed output format
- objection-handler-prompt.ts: minor tweaks

### Layer 2 — Structured Outputs (format constraint)
- New file coordinator-response.schema.ts: Zod schema definition
  - intent: 7 intent classes (factual/yes_no/comparison/assessment/objection/detailed/casual)
  - answer: response text
  - followUp: optional follow-up question
- agent-loop.ts: passed to the Claude API via output_config, forcing JSON output
  - In streaming mode, text deltas are suppressed (JSON fragments are not shown to users)
  - After the stream ends, the JSON is parsed and the answer field is yielded to the frontend
  - Falls back to raw text if JSON parsing fails (safe degradation)
- coordinator-agent.service.ts: passes zodOutputFormat(CoordinatorResponseSchema)
- agent.types.ts: AgentLoopParams gains an outputConfig field

### Layer 3 — LLM-as-Judge (semantic quality check)
- evaluation-rule.entity.ts: new LLM_JUDGE rule type (9th type)
- evaluation-gate.service.ts:
  - Inject ConfigService + initialize an Anthropic client (Haiku 4.5)
  - evaluateRule made async (supports async LLM calls)
  - New checkLlmJudge(): scores relevance/conciseness/noise
  - Configurable thresholds: minRelevance (7), minConciseness (6), maxNoise (3)
  - 5s timeout + pass-by-default on exceptions (non-blocking)
  - EvaluationContext gains a userMessage field
- coordinator-agent.service.ts: passes userMessage to the evaluation gate

### Layer 4 — Programmatic hard truncation (physical constraint)
- coordinator-response.schema.ts:
  - INTENT_MAX_ANSWER_LENGTH: per-intent character caps
    factual=200, yes_no=120, comparison=250, assessment=400,
    objection=200, detailed=500, casual=80
  - MAX_FOLLOWUP_LENGTH: 80 characters
  - smartTruncate(): truncates at sentence boundaries (Chinese/English punctuation)
- agent-loop.ts: after JSON parsing, answer and followUp are force-truncated by intent
- max_tokens lowered from 4096 to 2048

## Bug fix
- agent-loop.ts: currentTextContent was reset to an empty string on content_block_stop,
  so the evaluation gate received empty text. Now responseText is extracted from
  finalMessage.content.

## Dependency upgrades
- @anthropic-ai/sdk: 0.52.0 → 0.73.0 (output_config support)
- New: zod@4.3.6 (Structured Output schema definition)

## File list (1 new + 10 modified)
- NEW: agents/schemas/coordinator-response.schema.ts
- MOD: agents/coordinator/agent-loop.ts (核心改造)
- MOD: agents/coordinator/coordinator-agent.service.ts
- MOD: agents/coordinator/evaluation-gate.service.ts
- MOD: agents/types/agent.types.ts
- MOD: agents/prompts/coordinator-system-prompt.ts
- MOD: agents/prompts/policy-expert-prompt.ts
- MOD: agents/prompts/objection-handler-prompt.ts
- MOD: domain/entities/evaluation-rule.entity.ts
- MOD: package.json + pnpm-lock.yaml

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-07 01:01:05 -08:00
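The smartTruncate() idea from Layer 4 above — cut at the last sentence boundary inside the window rather than mid-sentence — can be sketched like this. The punctuation set and the hard-cut fallback are assumptions; the repo's version may differ:

```typescript
// Sentence-ending punctuation, Chinese and English.
const BOUNDARY = /[。!?;.!?;]/g;

/**
 * Truncate at the last sentence boundary within maxLen characters;
 * fall back to a hard cut when no boundary exists in the window.
 */
function smartTruncate(text: string, maxLen: number): string {
  if (text.length <= maxLen) return text;
  const slice = text.slice(0, maxLen);
  let lastEnd = -1;
  for (const m of slice.matchAll(BOUNDARY)) lastEnd = m.index!;
  return lastEnd >= 0 ? slice.slice(0, lastEnd + 1) : slice;
}
```

Later commits (b55cd4b, db8617d) show why even boundary-aware truncation was still too lossy, and the per-intent limits were ultimately dropped in favor of prompt + judge + max_tokens.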
hailin 93ed3343de refactor(knowledge): separate file upload into independent entry point
Split the knowledge base's "create article" and "upload file" flows into two independent entry points:

UI changes:
- Removed the Segmented switcher; the "create article" modal is back to pure manual input
- New standalone "upload file" button + upload modal (Upload.Dragger)
- After extraction completes, a "confirm extracted content" modal opens automatically, pre-filled with title + content
- After the admin edits and confirms, the article is saved with source marked EXTRACT

Backend changes:
- CreateArticleDto gains an optional source field
- Controller uses dto.source || MANUAL (no longer hard-codes MANUAL)

Flows:
- Create article → manual input → source = MANUAL
- Upload file → extract text → edit and confirm → source = EXTRACT

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 23:29:37 -08:00
hailin e16ec7930d feat(knowledge): add file upload with text extraction for knowledge base
Supports uploading files (PDF, Word, TXT, Markdown) on the admin knowledge-base
page, automatically extracting their text so the admin can preview, edit, and
save them as knowledge-base articles.

## Backend (knowledge-service)

- New TextExtractionService: file text extraction
  - PDF: pdf-parse v2 (PDFParse class API)
  - Word (.docx): mammoth.extractRawText()
  - TXT/Markdown: direct UTF-8 decode
  - Mixed Chinese/English word counting
  - 200MB file-size limit, MIME-whitelist type validation
  - Empty-text PDFs (scans/images) return a friendly error message

- New upload endpoint: POST /knowledge/articles/upload
  - Uses NestJS FileInterceptor for multipart/form-data
  - Only extracts and returns text; does not create the article directly (two-step flow)
  - Returns: extractedText, suggestedTitle, wordCount, pageCount

- New ExtractedTextResponse DTO
- KnowledgeModule registers TextExtractionService

## Frontend (admin-client)

- knowledge.api.ts: new uploadFile() method (FormData + 120s timeout)
- useKnowledge.ts: new useUploadKnowledgeFile hook
- KnowledgePage.tsx:
  - New Segmented switcher (manual input / file upload), shown only when creating
  - File-upload mode shows an Upload.Dragger drag-and-drop area
  - After upload, text is extracted and filled into the title + content fields
  - After extraction, switches back to manual mode so the admin can preview, edit, and save
  - Shows extraction results (word count, page count)

## User flow

Create article → switch to "file upload" → drag in/select a file → system extracts text
→ title + content auto-filled → admin edits and confirms → save

## Dependencies

- pdf-parse@^2.4.5 (PDF text extraction)
- mammoth@^1.8.0 (Word document text extraction)
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 22:58:19 -08:00
hailin ce13c8b770 fix(agents): remove official website referral from fallback labels, use neutral disclaimer
Replace "请以入境处官方信息为准" / "建议向香港入境处官网核实" ("refer to official
Immigration Department information" / "verify with the HK Immigration Department
website") with "基于AI训练数据,可能不是最新信息" ("based on AI training data; may
not be the latest information") across all fallback messages.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 22:10:45 -08:00
hailin 04dbc61131 feat(agents): add capability boundary guardrails — input gate, cascading fallback, output gate rules
Four guardrail improvements to enforce agent capability boundaries:

1. Cascading Fallback (Fix 1+4):
   - Rewrite searchKnowledge() in immigration-tools.service.ts with 3-tier fallback:
     KB (similarity >= 0.55) → Web Search → Built-in Knowledge (clearly labeled)
   - Rewrite executeTool() in policy-expert.service.ts to use retrieveKnowledge()
     with confidence threshold; returns [KB_EMPTY]/[KB_LOW_CONFIDENCE]/[KB_ERROR]
     markers so the model knows to label source reliability

2. Input Gate (Fix 2):
   - New InputGateService using Haiku for lightweight pre-classification
   - Classifications: ON_TOPIC / OFF_TOPIC (threshold >= 0.7) / HARMFUL (>= 0.6)
   - Short messages (< 5 chars) fast-path to ON_TOPIC
   - Gate failure is non-fatal (allows message through)
   - Integrated in CoordinatorAgentService.sendMessage() before agent loop entry
   - OFF_TOPIC/HARMFUL messages get fixed responses without entering agent loop

3. Output Gate Enhancement (Fix 3):
   - Add TOPIC_BOUNDARY and NO_FABRICATION to EvaluationRuleType
   - TOPIC_BOUNDARY: regex detection for code blocks, programming keywords,
     AI identity exposure, off-topic indicators in agent responses
   - NO_FABRICATION: detects policy claims without policy_expert invocation
     or source markers; ensures factual claims are knowledge-backed
   - Both rule types are admin-configurable (zero rules = zero checks)
   - No DB migration needed (ruleType is varchar(50))

Files changed:
- NEW: agents/coordinator/input-gate.service.ts
- MOD: agents/coordinator/coordinator-agent.service.ts (inject InputGate + gate check)
- MOD: agents/agents.module.ts (register InputGateService)
- MOD: agents/coordinator/evaluation-gate.service.ts (2 new evaluators)
- MOD: domain/entities/evaluation-rule.entity.ts (2 new rule types)
- MOD: agents/specialists/policy-expert.service.ts (RAG confidence threshold)
- MOD: claude/tools/immigration-tools.service.ts (cascading fallback)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 21:59:10 -08:00
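The 3-tier cascading fallback from Fix 1 above can be sketched at a high level. The 0.55 threshold and the Tier-3 label come from these commits; the function shapes are illustrative, not the repo's API:

```typescript
interface KbHit {
  text: string;
  similarity: number;
}

/**
 * Tiered retrieval: KB (similarity >= 0.55) → web search → built-in model
 * knowledge, with the source labeled so the model can flag reliability.
 */
async function searchWithFallback(
  query: string,
  kbSearch: (q: string) => Promise<KbHit[]>,
  webSearch: (q: string) => Promise<string | null>,
): Promise<{ source: "kb" | "web" | "builtin"; text: string }> {
  const hits = await kbSearch(query);
  const best = hits.find((h) => h.similarity >= 0.55);
  if (best) return { source: "kb", text: best.text };

  const web = await webSearch(query);
  if (web) return { source: "web", text: web };

  // Tier 3: fall back to model memory, clearly labeled as such.
  return { source: "builtin", text: "[基于AI训练数据,可能不是最新信息]" };
}
```

Returning an explicit `source` tag is the part that makes the Output Gate's NO_FABRICATION rule enforceable: unlabeled claims can be traced to a missing marker.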
hailin d81f03d318 feat(agents): persist evaluation gate failures as system experiences for cross-conversation learning
Automatically distill evaluation-gate failures into global system experiences,
closing a cross-conversation learning loop:

1. KnowledgeClientService gains a saveExperience() method
   - Wraps knowledge-service's POST /memory/experience API
   - Supports experienceType, scenario, content, confidence, and other fields
   - Error handling: returns null on failure without affecting the main flow

2. CoordinatorAgentService evaluation-gate callback extension
   - On gate failure (RETRY/SUPPLEMENT), the failure lesson is written asynchronously to knowledge-service
   - Fire-and-forget: does not block the retry flow; failures only log a warning
   - Experience format: scenario = "stage + rule name", content = "specific requirement description"
   - experienceType = KNOWLEDGE_GAP, confidence = 45 (below the default 50; must be earned)

3. Learning-loop mechanics
   - Saved experiences start as PENDING and must be activated by an admin (isActive=true)
   - Once active, the Context Injector's searchExperiences() retrieves them via semantic matching
   - Injected into the prompt in future conversations → the AI follows the rule on the
     first try, avoiding repeat mistakes
   - knowledge-service deduplicates automatically: merges at a 0.9 similarity threshold,
     confidence +5 per merge

4. Zero-intrusion guarantee
   - With no evaluation rules, the gate passes immediately → no experiences saved → existing flow unaffected
   - agent-loop.ts untouched; all logic lives inside the coordinator's callback closure
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 21:28:56 -08:00
hailin 8d8df53e56 fix(conversations): exclude DELETED conversations from evolution queries and user operations
- findForEvolution() now excludes DELETED conversations (should not learn from deleted data)
- getConversation() rejects DELETED conversations for user-facing operations (sendMessage, getMessages, etc.)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 20:55:04 -08:00
hailin d083008001 fix(conversations): implement soft-delete for conversation deletion
The delete conversation endpoint was a no-op — it verified ownership but
never actually modified the record. Users saw conversations disappear
(frontend optimistic removal) but they reappeared on refresh.

Changes:
- conversation.entity.ts: Add DELETED status, softDelete() and isDeleted()
- conversation.service.ts: Call softDelete() + update instead of no-op
- conversation-postgres.repository.ts: Exclude DELETED conversations
  from findByUserId() queries so they don't appear in user's list

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 20:49:03 -08:00
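The entity-level shape of the soft delete described above is small enough to sketch. Field and enum names follow the commit message, but this is an illustrative reduction of the real entity:

```typescript
enum ConversationStatus {
  ACTIVE = "ACTIVE",
  DELETED = "DELETED",
}

// Reduced sketch of the domain entity from conversation.entity.ts.
class Conversation {
  constructor(public status: ConversationStatus = ConversationStatus.ACTIVE) {}

  /** Mark as deleted without removing the row (soft delete). */
  softDelete(): void {
    this.status = ConversationStatus.DELETED;
  }

  isDeleted(): boolean {
    return this.status === ConversationStatus.DELETED;
  }
}
```

The second half of the fix is the repository side: findByUserId() must exclude DELETED rows, otherwise the soft delete is invisible to the entity but not to the user.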
hailin b75d607e2b fix(agents): resolve NestJS route collision for evaluation-rules endpoints
AdminConversationController's GET /:id was intercepting requests to
AdminEvaluationRuleController (matching "evaluation-rules" as an id param).
Similarly, DELETE /:id was matching "cache" as an id.

Changes:
- conversation.module.ts: Register AdminMcpController and
  AdminEvaluationRuleController before AdminConversationController
  (more specific prefixes must come first in NestJS)
- admin-evaluation-rule.controller.ts: Move static routes (POST /test,
  DELETE /cache) before dynamic routes (GET/:id, DELETE/:id)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 20:37:40 -08:00
hailin 51a7589fbf fix(agents): export EVALUATION_RULE_REPOSITORY from AgentsModule
The AdminEvaluationRuleController in ConversationModule needs the
EVALUATION_RULE_REPOSITORY token. Even though AgentsModule is @Global(),
Symbol-based providers must be explicitly exported to be available
in other modules.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 19:45:38 -08:00
hailin 00a0ac3820 feat(agents): add admin-configurable evaluation gate for agent loop quality control
Add a configurable evaluation gate system that allows administrators to
define quality rules per consulting stage. The gate checks are executed
programmatically before the agent loop returns a response to the user.

## Architecture

- **Zero-config safe**: Empty rules table = no checks = current behavior preserved
- **Callback-based decoupling**: agent-loop.ts receives an optional callback,
  stays decoupled from database layer
- **Max 1 retry**: On RETRY/SUPPLEMENT failure, recurse once without gate to
  prevent infinite loops
- **Error-tolerant**: Gate exceptions are caught and logged, never block responses

## New files

- `database/migrations/20260206_add_evaluation_rules.sql` — DB migration
- `domain/entities/evaluation-rule.entity.ts` — Domain entity with 6 rule types
  (FIELD_COMPLETENESS, ASSESSMENT_QUALITY, RESPONSE_LENGTH, MUST_CONTAIN,
  STAGE_MIN_TURNS, CONVERSION_SIGNAL) and 4 failure actions (RETRY, SUPPLEMENT,
  WARN_AND_PASS, ESCALATE)
- `domain/repositories/evaluation-rule.repository.interface.ts` — Repository contract
- `infrastructure/database/postgres/entities/evaluation-rule.orm.ts` — TypeORM ORM entity
- `infrastructure/database/postgres/repositories/evaluation-rule.repository.ts` — Repository impl
- `infrastructure/agents/coordinator/evaluation-gate.service.ts` — Core evaluation engine
  with 5-minute rule cache, per-rule-type evaluators, severity-based action resolution,
  and feedback message builder for model retry
- `application/dtos/evaluation-rule.dto.ts` — Create/Update/Test DTOs
- `adapters/inbound/admin-evaluation-rule.controller.ts` — Admin CRUD API with 8 endpoints:
  list, get, create, update, delete, toggle, test (dry-run), clear cache

## Modified files

- `agent.types.ts` — Add optional `evaluationGate` callback to `AgentLoopParams`
- `stream.types.ts` — Add `EvaluationWarningEvent`, `'evaluating'` phase
- `agent-loop.ts` — Insert gate check at termination point (line 315)
- `coordinator-agent.service.ts` — Inject EvaluationGateService, build callback,
  handle `evaluation_warning` event in StreamChunk mapping
- `agents.module.ts` — Register EvaluationRuleORM, repository, EvaluationGateService
- `conversation.module.ts` — Register AdminEvaluationRuleController

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 18:56:52 -08:00
hailin 714a674818 feat(mcp): add MCP Server management — backend API + admin UI
Implements full MCP (Model Context Protocol) server management, covering both the backend API and the admin UI.

## Backend (conversation-service)

### MCP hybrid-architecture core (new)
- mcp.types.ts: MCP type definitions (McpServerConfig, McpToolDefinition, McpConnectionState, etc.)
- mcp-config.service.ts: config parsing, supporting the MCP_SERVERS environment variable and tenant-level configuration
- mcp-client.service.ts: MCP client covering connection management, tool discovery, tool execution, and runtime add/update/remove
- mcp.module.ts: @Global NestJS module registering the MCP services + TypeORM entity + repository

### Data persistence (new)
- 20260206_add_mcp_server_configs.sql: database migration creating the mcp_server_configs table
- mcp-server-config.orm.ts: TypeORM entity (tenant_id for multi-tenancy)
- mcp-server-config.repository.ts: repository layer (CRUD + ORM→McpServerConfig conversion)

### Admin API (new)
- admin-mcp.controller.ts: 11 admin endpoints under the route prefix conversations/admin/mcp
  - GET /overview: statistics (total servers, connected, errors, total tools)
  - GET/POST /servers: list + create
  - GET/PUT/DELETE /servers/:id: detail + update + delete
  - POST /servers/:id/connect: manual connect
  - POST /servers/:id/disconnect: manual disconnect
  - GET /servers/:id/tools: view discovered tools
  - POST /servers/:id/test: test the connection
  - POST /test-config: test an unsaved configuration

### Changes to existing files
- coordinator-tools.ts: getToolsForClaudeAPI() accepts an optional additionalTools parameter
- agent-loop.ts: passes additionalTools + additionalConcurrencyMap through
- coordinator-agent.service.ts: injects McpClientService and adds an MCP branch to tool routing
- agents.module.ts: imports McpModule
- conversation.module.ts: registers AdminMcpController

## Frontend (admin-client)

### API + hooks (new)
- mcp.api.ts: Axios API client + complete TypeScript type definitions
- useMcp.ts: 10 React Query hooks (queries + mutations)

### UI pages (new)
- McpPage.tsx: main page with stat cards + server table + action buttons
- ServerFormDrawer.tsx: create/edit form covering basic info, transport config, advanced settings, and connection testing
- ServerDetailDrawer.tsx: detail drawer with config display and tool browsing (Collapse + JSON Schema)

### Routing + navigation
- App.tsx: adds the /mcp route
- MainLayout.tsx: adds the "MCP 服务器" (MCP Servers) sidebar menu item (ApiOutlined icon)

## Dependencies
- @modelcontextprotocol/sdk: ^1.0.0 (MCP protocol SDK)

## Architecture
- Hybrid architecture: the 16 built-in tools stay unchanged, while MCP tools are dynamically discovered and hot-pluggable
- The tool-name prefix mcp__{serverId}__{toolName} guarantees zero collisions
- Graceful degradation: an MCP connection failure never affects the built-in tools and is only logged
- Startup loading: servers configured via environment variables connect first, then database-configured ones
- Runtime management: MCP servers can be added, updated, or removed without restarting the service
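
The zero-collision naming scheme can be sketched as a pair of helpers. This is an illustration of the mcp__{serverId}__{toolName} convention only; the function names are not the actual service API.

```typescript
// Illustrative sketch of the mcp__{serverId}__{toolName} naming scheme that
// keeps dynamically discovered MCP tools from colliding with built-in tools.
const MCP_PREFIX = 'mcp__';

function toMcpToolName(serverId: string, toolName: string): string {
  return `${MCP_PREFIX}${serverId}__${toolName}`;
}

function parseMcpToolName(
  name: string,
): { serverId: string; toolName: string } | null {
  // Names without the prefix are built-in tools and route as before.
  if (!name.startsWith(MCP_PREFIX)) return null;
  const rest = name.slice(MCP_PREFIX.length);
  const sep = rest.indexOf('__');
  if (sep < 0) return null;
  return { serverId: rest.slice(0, sep), toolName: rest.slice(sep + 2) };
}
```

The tool router only needs one check: a non-null parse result dispatches to the MCP client, anything else falls through to the built-in tool map.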

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 18:29:02 -08:00
hailin dc6311f050 fix(db): add multi-tenancy migration for knowledge-service tables
Adds tenant_id UUID column to user_memories, system_experiences,
knowledge_articles, and knowledge_chunks tables with default tenant
backfill and indexes. Migration already applied to production DB.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 09:23:13 -08:00
hailin 20001277ff fix(agents): update Haiku model ID from 3.5 to 4.5
claude-haiku-3-5-20241022 returns 404 on the proxy. Updated to
claude-haiku-4-5-20251001 in agent configs and context injector.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 08:59:58 -08:00
hailin fb811c7cf4 fix(agents): prevent usage events from emitting stream_end to frontend
The mapEventToStreamChunk was mapping both 'usage' (per-turn) and 'end'
(final) events to type 'end', causing the gateway to emit multiple
stream_end events. This made the frontend create a separate message
bubble (with its own bot avatar) for each agent loop turn.
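
The corrected mapping can be sketched like this. The event and chunk shapes are simplified stand-ins, not the real types; the point is that per-turn 'usage' events get their own chunk type instead of collapsing into 'end'.

```typescript
// Illustrative sketch: only the final 'end' event maps to an 'end' chunk, so
// the gateway emits exactly one stream_end per message.
type LoopEvent = { type: 'text_delta' | 'usage' | 'end'; payload?: unknown };
type StreamChunk = { type: 'delta' | 'usage' | 'end'; payload?: unknown };

function mapEventToStreamChunk(event: LoopEvent): StreamChunk {
  switch (event.type) {
    case 'text_delta':
      return { type: 'delta', payload: event.payload };
    case 'usage':
      // Previously mapped to 'end', which produced one bubble per agent turn.
      return { type: 'usage', payload: event.payload };
    case 'end':
      return { type: 'end' };
  }
}
```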

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 08:58:11 -08:00
hailin d1907636fe fix(ws): set tenant context for WebSocket message handling
WebSocket gateway was missing AsyncLocalStorage tenant context setup,
causing 'Tenant context not set' error on every message. Now extracts
tenantId from handshake and wraps handleMessage in tenantContext.runAsync().
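
The shape of the fix can be sketched with Node's stock AsyncLocalStorage. The store shape and wrapper names here are illustrative (the service's own `runAsync` is presumably a thin wrapper over `run`):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// Illustrative sketch of propagating tenant context through a WS handler.
interface TenantStore {
  tenantId: string;
}

const tenantContext = new AsyncLocalStorage<TenantStore>();

function getTenantId(): string {
  const store = tenantContext.getStore();
  if (!store) throw new Error('Tenant context not set');
  return store.tenantId;
}

async function handleMessageWithTenant<T>(
  tenantId: string, // extracted from the socket handshake
  handleMessage: () => Promise<T>,
): Promise<T> {
  // Everything awaited inside handleMessage sees the same store.
  return tenantContext.run({ tenantId }, handleMessage);
}
```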

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 08:49:56 -08:00
hailin 4ac1fc4f88 fix(db): add multi-tenancy migration for conversation-service tables
- Add tenant_id column to conversations, messages, token_usages tables
- Create standalone migration SQL script for production deployment
- Add agent_executions table to init-db.sql for new installations
- Fix MessageORM created_at nullable mismatch with database schema
- Backfill existing data with default tenant ID

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 08:39:10 -08:00
hailin 51c05f98ee fix(agents): resolve ContextInjectorService DI error by replacing interface with concrete class
- Replace IKnowledgeClient TypeScript interface (erased at runtime) with KnowledgeClientService concrete class injection
- Fix method signatures to match KnowledgeClientService API:
  - getUserMemories() → getUserTopMemories(), field type → memoryType
  - retrieveForPrompt(query, userId) → retrieveForPrompt({ query, userId })
  - getRelevantExperiences(query, n) → searchExperiences({ query, limit: n }), field type → experienceType
- Remove unused ContextData import
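
The root cause generalizes: a TypeScript interface is erased at compile time, so it cannot serve as a runtime DI token, while a concrete class is both a type and a runtime value. A minimal sketch (the class and method names mirror this commit, but the bodies and the toy container are illustrative stubs, not NestJS internals):

```typescript
// Interfaces vanish at runtime; classes survive as constructor values.
interface IKnowledgeClient {
  retrieveForPrompt(params: { query: string; userId: string }): Promise<string>;
}

class KnowledgeClientService implements IKnowledgeClient {
  async retrieveForPrompt(params: { query: string; userId: string }): Promise<string> {
    return `context for ${params.userId}: ${params.query}`;
  }
}

// A DI container keys providers by runtime values. KnowledgeClientService
// works as a key; IKnowledgeClient does not exist at runtime at all.
const container = new Map<Function, object>();
container.set(KnowledgeClientService, new KnowledgeClientService());

function resolve<T>(token: new (...args: any[]) => T): T {
  const instance = container.get(token);
  if (!instance) throw new Error(`No provider for ${token.name}`);
  return instance as T;
}
```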

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 08:10:40 -08:00
hailin 691a3523e8 feat(analytics): add Agent usage analytics to admin panel
Add full-stack Agent execution tracking and analytics:

**Database (conversation-service)**
- New `agent_executions` table: tracks each specialist Agent invocation
  with agentType, agentName, durationMs, success, tenantId
- Migration: AddAgentExecutionsTable1738800000000
- ORM entity: AgentExecutionORM with indexes on tenant, conversation,
  agentType, createdAt, and (tenant+date) composite

**Data Capture (conversation-service)**
- conversation.service.ts: captures `agent_start` and `agent_complete`
  StreamChunk events in the sendMessage() async generator loop
- Persists agent execution records to DB after each message completes
- Non-blocking: agent persistence failures are logged but don't break
  the main conversation flow
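
The non-blocking persistence pattern can be sketched as a catch-and-log loop. Names are illustrative; only the record fields and the "log, never rethrow" behavior come from this commit:

```typescript
// Illustrative sketch: persist agent execution records after the message
// completes; any write failure is logged, never propagated to the caller.
interface AgentExecutionRecord {
  agentType: string;
  durationMs: number;
  success: boolean;
}

async function persistAgentExecutions(
  records: AgentExecutionRecord[],
  save: (r: AgentExecutionRecord) => Promise<void>,
  log: (msg: string) => void,
): Promise<void> {
  for (const record of records) {
    try {
      await save(record);
    } catch (err) {
      // Failures here must never break the main conversation flow.
      log(`failed to persist ${record.agentType} execution: ${String(err)}`);
    }
  }
}
```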

**Admin API (conversation-service)**
- GET /conversations/admin/statistics/agents?days=30
  Aggregated stats per agent type: totalCalls, successCount, failureCount,
  successRate, avgDurationMs, min/max duration
- GET /conversations/admin/statistics/agents/trend?days=7&agentType=
  Daily trend data: date, agentType, calls, avgDurationMs, successRate
- GET /conversations/admin/:id/agent-executions
  Per-conversation agent execution records ordered by createdAt

**Admin Client - Analytics Page**
- New AgentAnalyticsTab component with:
  - 4 summary cards (total calls, success rate, avg duration, top agent)
  - Agent statistics table (Ant Design Table with sortable columns,
    color-coded Tags, Progress bar for success rate)
  - Stacked bar trend chart (Recharts BarChart, color per agent type)
  - Time range selectors (7/14/30/90 days)
- Added as the third tab "Agent 使用分析" (Agent usage analytics) in AnalyticsPage dimension tabs

**Admin Client - Conversations Page**
- Added an "Agent 使用详情" (Agent execution detail) section to the conversation
  detail drawer (between the Token Usage and Messages sections)
- Shows per-conversation agent execution table with agent name (color Tag),
  duration, success/failure status, and timestamp
- Empty state: "暂无 Agent 使用记录" (no Agent usage records yet)

Agent color mapping: policy_expert=#1890ff, assessment_expert=#52c41a,
strategist=#722ed1, objection_handler=#eb2f96, case_analyst=#faad14,
memory_manager=#13c2c2

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 08:00:55 -08:00
hailin 16cc0e4c08 feat(agents): implement multi-agent collaboration architecture
Borrowing architectural patterns from Claude Code, refactors the single Agent into a Coordinator + 6 Specialists multi-agent collaboration system.

## New files (36)

### Architecture design docs (docs/architecture/, 12 files)
- 00-overview.md ~ 11-prompt-templates.md: complete architecture design covering detailed designs for
  every Agent, prompt templates, collaboration flows, the tool concurrency system, and dynamic context injection

### Multi-agent system (infrastructure/agents/, 23 files)
- coordinator/coordinator-agent.service.ts: main coordinator, replaces ClaudeAgentServiceV2
- coordinator/agent-loop.ts: core recursive async generator loop (modeled on Claude Code's aM())
- coordinator/context-injector.service.ts: dynamic context injection (8 context types injected by priority)
- specialists/base-specialist.service.ts: Agent base class (wraps Claude API calls + prompt caching)
- specialists/policy-expert.service.ts: policy expert (Sonnet 4, temp=0)
- specialists/assessment-expert.service.ts: assessment expert (Sonnet 4, temp=0)
- specialists/strategist.service.ts: strategy advisor (Sonnet 4, temp=0.3)
- specialists/objection-handler.service.ts: objection handling (Sonnet 4, temp=0.2)
- specialists/case-analyst.service.ts: case analysis (Haiku, temp=0)
- specialists/memory-manager.service.ts: memory management (Haiku, temp=0)
- prompts/coordinator-system-prompt.ts: coordinator prompt (1095 lines, 13 chapters)
- prompts/{policy,assessment,strategist,objection,case,memory}-*-prompt.ts: per-specialist prompts
- tools/coordinator-tools.ts: 16 tool definitions (6 Agent-invocation + 10 direct tools)
- tools/tool-execution-queue.ts: concurrent execution queue (isConcurrencySafe controls parallel vs. serial)
- types/agent.types.ts: Agent configuration and input/output type definitions
- types/stream.types.ts: streaming event types (incl. agent_start/complete/coordinator_thinking)
- types/context.types.ts: context injection types
- agents.module.ts: NestJS module registration

### Frontend Agent status display (1 file)
- AgentStatusIndicator.tsx: multi-agent working-status component (with animations)

## Modified files (15)

### Backend integration
- conversation.service.ts: switches to CoordinatorAgentService
- conversation.gateway.ts: adds agent_start/agent_complete/coordinator_thinking events
- claude.module.ts: imports AgentsModule
- agents.module.ts: registers ImmigrationToolsService (reuses the production-tested legacy tool implementations)
- knowledge-client.service.ts: adds search()/getUserContext() convenience methods

### Legacy code marked @deprecated
- claude-agent.service.ts, claude-agent-v2.service.ts
- strategy-engine.service.ts, intent-classifier.ts, response-gate.ts

### Frontend adaptation
- chatStore.ts: adds ActiveAgent/CompletedAgent/CoordinatorPhase state
- useChat.ts: adds WebSocket event handling
- ChatWindow.tsx: integrates AgentStatusIndicator
- globals.css: adds agentPulse/agentSlideIn animations

### Shared types
- conversation.types.ts: adds AGENT_START/AGENT_COMPLETE/COORDINATOR_THINKING events

## Key design decisions

1. **Old meets new**: the Coordinator's 10 direct tools delegate to the legacy ImmigrationToolsService
   (production-tested Google Search, exchange-rate API, news API, etc.), while the 6 Agent-invocation
   tools go through the new Specialist Agent system
2. **Recursive async generator**: agent-loop supports streaming output + tool recursion + cost/turn limits
3. **Parallel Agent execution**: ToolExecutionQueue runs tools in parallel or serially based on isConcurrencySafe
4. **Prompt caching**: every Agent's system prompt uses cache_control: ephemeral
5. **Rate-limit retries**: exponential backoff on 429/529, at most 2 retries
6. **Backward compatible**: LegacyConversationContext type alias; StreamChunk extensions don't break existing structures
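
Decision 2, the recursive async-generator loop, can be sketched in miniature. The event shapes and `runTurn` signature are simplifications invented for this sketch, not the real agent-loop API:

```typescript
// Illustrative sketch: stream events from each model turn; when the model
// requests tools, recurse with a turn budget so the loop always terminates.
type LoopEvent =
  | { type: 'text_delta'; text: string }
  | { type: 'tool_use'; name: string }
  | { type: 'end' };

async function* agentLoop(
  runTurn: (turn: number) => Promise<LoopEvent[]>,
  turn = 1,
  maxTurns = 5,
): AsyncGenerator<LoopEvent> {
  const events = await runTurn(turn);
  let wantsTools = false;
  for (const event of events) {
    if (event.type === 'tool_use') wantsTools = true;
    yield event; // callers see deltas as they arrive, turn by turn
  }
  // Recurse only while the model keeps requesting tools and budget remains.
  if (wantsTools && turn < maxTurns) {
    yield* agentLoop(runTurn, turn + 1, maxTurns);
  } else {
    yield { type: 'end' };
  }
}
```

A real implementation would also thread cost accounting and abort signals through the recursion; the shape of the control flow is the same.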

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-06 04:26:39 -08:00