@faun ・ Jun 24, 2025
Frontier Large Reasoning Models (LRMs) hit an accuracy collapse once puzzle complexity crosses a certain threshold, even when plenty of token budget remains. They also show a counterintuitive scaling pattern: their reasoning effort grows with difficulty up to a point, then drops off as puzzles get harder, while simpler, non-reasoning models often match or beat them on the easy cases.