Research to Advance AI
Scale Labs advances AI through research. Our research focuses on agents, post-training, reasoning, safety, evaluation, alignment, and the science of data.
[LEADERBOARDS]
Benchmarks for frontier, agentic, and safety capabilities
[SHOWDOWN]
Model-preference rankings from real-world usage.
[PAPERS]
Research papers and publications covering agents, post-training, reasoning, safety, evaluation, alignment, and the science of data.
SWE Atlas: Benchmarking Coding Agents Beyond Issue Resolution
[BLOG]
Insights, analysis, and updates from Scale Labs
57 Healthcare Professionals Told Us What They Need from AI
We surveyed 57 healthcare professionals about what they actually want from AI. Their answers point to three capability gaps that current evaluations miss.
Coverage Not Averages: Rethinking Retrieval Evaluation
A single benchmark score suggests stability and completeness. In reality, it may reflect performance on a narrow and biased slice of the problem.
Improving Multi-Turn Tool Use with GRPO: Results and Insights
We’re sharing early insights from applying GRPO reinforcement learning to multi-turn tool-use tasks using our MCP Tool Use dataset. In a controlled experiment with 3,000 samples, we fine-tuned Qwen2.5-14B using LoRA (rank 32) and evaluated it on MCP Atlas. We observed significant improvement in both coverage rate and pass rate. In this article, we share observations on how data quality, reward design, and training constraints interact in agentic training settings.
MultiChallenge Update: A More Reliable Multi-Turn Benchmark
We’ve updated the MultiChallenge benchmark to improve evaluation reliability and reduce subjectivity, and re-evaluated frontier models under the new setup.
View all posts