Two-Item Evaluation Report
- EvalRun: #223 (suite #119 auto-b3d1a110-r1-053959484)
- Target: #1 production-baseline
- Cases: 13 (N=1, attempts=13/13)
- Generated: 2026-05-16T13:52:33+08:00
This report contains two independent evaluation items. Item 1 evaluates the bot's retrieval and answering ability after it enters the 知識與產品查詢 fallback; Item 2 evaluates the health of the three-stage routing → tool → answer funnel across all enabled scenarios.
Item 1: Knowledge Base Accuracy
Evaluation scope (prerequisite: routed to the correct scenario)
- Attempts routed to the correct scenario: 12 / 13 (92.3%)
- Retrieval and answer metrics are computed only over these attempts
- Attempts that routed to the wrong scenario are excluded (routing failures should not pollute the KB signal)
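The qualifying filter above can be sketched as follows. The field names `:expected_scenario` and `:routed_scenario` are hypothetical (the actual attempt schema is not shown in this report):

```ruby
# Hypothetical attempt records: :expected_scenario comes from the test
# case, :routed_scenario from the bot's actual routing decision.
# Only attempts where the two match are scored for retrieval/answer.
def qualifying_attempts(attempts)
  attempts.select { |a| a[:routed_scenario] == a[:expected_scenario] }
end

attempts = [
  { expected_scenario: "知識與產品查詢", routed_scenario: "知識與產品查詢" },
  { expected_scenario: "知識與產品查詢", routed_scenario: "Product Recommendation" }
]

qualifying_attempts(attempts).size # => 1
```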
Per-Scenario Evaluation

| Scenario | Qualifying / total attempts | retrieval_relevance | answer_correctness |
| --- | --- | --- | --- |
| Product Recommendation | 5 / 5 | ❌ 0.0% | — |
| 知識與產品查詢 | 6 / 7 | ❌ 0.0% | — |
| 轉接真人客服 | 1 / 1 | ❌ 0.0% | — |
Fix priorities (worst 3)
- Product Recommendation × retrieval_relevance = 0.0%: the bot routed to the correct scenario but never called the search tool; a flow/prompt issue, not a KB content gap
- 轉接真人客服 × retrieval_relevance = 0.0%: same root cause as above
- 知識與產品查詢 × retrieval_relevance = 0.0%: same root cause as above
Item 2: Scenario Invocation and Completion
Overall funnel (all scenarios combined)

| Stage | Pass count | % of total |
| --- | --- | --- |
| Total attempts | 13 | 100.0% |
| Step 1: scenario_routing | 12 | 92.3% |
| Step 2: + tool_calling | 12 | 92.3% |
| Step 3: + answer | 0 | 0.0% |
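The funnel is cumulative: an attempt counts at a step only if it passed every earlier step. A minimal sketch, assuming each attempt carries boolean pass flags per step (illustrative field names, not the actual schema):

```ruby
# Cumulative funnel: successive steps gate each other, so the count
# at each step can only stay flat or shrink.
STEPS = %i[scenario_routing tool_calling answer].freeze

def funnel_counts(attempts)
  counts = { total: attempts.size }
  remaining = attempts
  STEPS.each do |step|
    remaining = remaining.select { |a| a[step] }
    counts[step] = remaining.size
  end
  counts
end

# Mirror of this run: 12/13 route correctly, all 12 call a tool,
# none produce a passing answer.
attempts = Array.new(13) do |i|
  { scenario_routing: i < 12, tool_calling: i < 12, answer: false }
end

funnel_counts(attempts)
# => {total: 13, scenario_routing: 12, tool_calling: 12, answer: 0}
```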
Per-scenario funnel

| Scenario (n_attempts) | Step 1 (routing) | Step 2 (tool) | Step 3 (answer) | End-to-end |
| --- | --- | --- | --- | --- |
| Product Recommendation (5) | ✅ 5/5 (100.0%) | ✅ 5/5 (100.0%) | ❌ 0/5 (0.0%) | ❌ 0/5 (0.0%) |
| 知識與產品查詢 (7) | ✅ 6/7 (85.7%) | ✅ 6/7 (85.7%) | ❌ 0/7 (0.0%) | ❌ 0/7 (0.0%) |
| 轉接真人客服 (1) | ✅ 1/1 (100.0%) | ✅ 1/1 (100.0%) | ❌ 0/1 (0.0%) | ❌ 0/1 (0.0%) |
Scenarios with the largest drop-off (all 3; fewer than 5 scenarios enabled)
- Product Recommendation: drop at Step 3 (-100.0pp)
- 轉接真人客服: drop at Step 3 (-100.0pp)
- 知識與產品查詢: drop at Step 3 (-85.7pp)
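The per-step drop in percentage points follows directly from the cumulative pass rates. A small sketch:

```ruby
# Step-over-step change in percentage points, given cumulative pass
# rates (as fractions) for routing → tool → answer.
def drop_offs(rates)
  rates.each_cons(2).map { |prev, curr| ((curr - prev) * 100).round(1) }
end

drop_offs([1.0, 1.0, 0.0])     # Product Recommendation => [0.0, -100.0]
drop_offs([0.857, 0.857, 0.0]) # 知識與產品查詢 => [0.0, -85.7]
```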
Audit

Reproduce the combined report:

```shell
bin/rails runner "puts Eval::EvaluationReport.call(run: EvalRun.find(223))"
```

Or fetch each item separately:

```shell
bin/rails runner "puts Eval::KbAccuracyReport.call(run: EvalRun.find(223))"
bin/rails runner "puts Eval::ScenarioFunnelReport.call(run: EvalRun.find(223))"
```
Per-Scenario × Per-Dim — Run #223
Suite: Sony (bulk R1) · scenarios: 3 · dims: 5 · populated cells: 11/15
| Scenario | Scenario (dim) | Tool | Retrieval | Faith | AnsQ |
| --- | --- | --- | --- | --- | --- |
| Product Recommendation | 100.0% [100.0–100.0] (n=5) | 0.0% [0.0–0.0] (n=5) | — | 0.0% [0.0–0.0] (n=5) | 29.3% [14.7–41.3] (n=5) |
| 知識與產品查詢 | 100.0% [100.0–100.0] (n=7) | 100.0% [100.0–100.0] (n=7) | — | 0.0% [0.0–0.0] (n=3) | 30.0% [15.2–44.3] (n=7) |
| 轉接真人客服 | 100.0% (n=1) | — | — | 0.0% (n=1) | 83.3% (n=1) |
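The bracketed ranges read as confidence intervals over per-attempt scores. The report does not state the interval method, so treat the following as an assumption: a bootstrap percentile interval is one plausible way such ranges are produced.

```ruby
# Bootstrap percentile interval over per-attempt scores. This is an
# assumed method, not necessarily what produced the table above.
def bootstrap_ci(scores, iterations: 2000, alpha: 0.05, rng: Random.new(42))
  means = Array.new(iterations) do
    resample = Array.new(scores.size) { scores[rng.rand(scores.size)] }
    resample.sum / resample.size.to_f
  end.sort
  lo = means[(iterations * (alpha / 2)).floor]
  hi = means[(iterations * (1 - alpha / 2)).ceil - 1]
  [lo, hi]
end

# Illustrative per-attempt AnsQ-style scores with mean 0.30:
lo, hi = bootstrap_ci([0.20, 0.40, 0.30, 0.25, 0.35])
```

With a seeded RNG the result is deterministic; the interval brackets the sample mean and narrows as n grows, which matches the tighter ranges on the n=7 rows.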
Worst-3 cells (lowest primary score)
- Product Recommendation × Tool · 0.0% (n=5) · lowest sub_metric: tools_recall
- Product Recommendation × Faith · 0.0% (n=5) · lowest sub_metric: rule_compliance
- 知識與產品查詢 × Faith · 0.0% (n=3) · lowest sub_metric: rule_compliance