Posts by Mike Lee
Introducing Long Horizon Augmented Workflows: Controllable Underspecification for Long-Horizon Tasks
LHAW is a dataset-agnostic pipeline for generating underspecified long-horizon tasks and evaluating strategic clarification. Across MCP-Atlas, TAC, and SWE-Bench Pro, we find large differences in how well frontier models detect missing information and recover performance under ambiguity.
George Pu, Mike Lee, Sam Denton
MoReBench: Evaluating the Process of AI Moral Reasoning
MoReBench is a benchmark designed to evaluate the procedural moral reasoning of large language models. Using expert-authored rubrics across diverse ethical scenarios, it scores models on the structure and coherence of their reasoning rather than on task outcomes. Our findings show that moral reasoning ability is only weakly correlated with performance on established benchmarks and warrants targeted evaluation and training.
Brandon Handoko, Matthew Siegel, Mike Lee