OutboundEval: A Dual-Dimensional Benchmark for Expert-Level Intelligent Outbound Evaluation of Xbench's Professional-Aligned Series
Related articles (arXiv – cs.AI)
• New Study Uncovers Silent Failures in Multi-Agent AI
• QiMeng-NeuComBack: Self-Evolving Translation from IR to Assembly Code
• LLM Tester CLAUSE: A Benchmark for Detecting Contract Errors
• Mechanics of Learned Reasoning 1: TempoBench, A Benchmark for Interpretable Deconstruction of Reasoning System Performance
• APTBench: Benchmarking Agentic Potential of Base LLMs During Pre-Training
• SOCIA-Nabla: Textual Gradient Meets Multi-Agent Orchestration for Automated Simulator Generation