Why Reasoning-Focused Language Models Sometimes Hallucinate More Than General Models — Evidence, Costs, and How to Test Properly
Reasoning models have recorded 2-3x higher factual-error rates on mixed-task evaluations. The data suggests a consistent pattern across independent tests: models tuned or prompted for explicit step-by-step reasoning often show higher rates of factual hallucination than their general-purpose counterparts.
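To make the comparison concrete, here is a minimal sketch of how a factual-error rate and the resulting ratio between two model variants might be computed from graded evaluation outputs. The graded data below is hypothetical and stands in for answers scored against ground-truth references; only the arithmetic is the point.

```python
# Minimal sketch: compare factual-error rates between two model variants
# on a shared mixed-task evaluation set. The graded outputs below are
# hypothetical placeholders; real grading would score each answer
# against a ground-truth reference.

def error_rate(graded_answers):
    """Fraction of answers graded as factually incorrect.

    graded_answers: list of booleans, True = factually correct.
    """
    if not graded_answers:
        raise ValueError("empty evaluation set")
    errors = sum(1 for correct in graded_answers if not correct)
    return errors / len(graded_answers)

# Hypothetical graded outputs over the same 10 evaluation items.
general_model = [True, True, True, True, False,
                 True, True, True, True, False]
reasoning_model = [True, False, True, False, True,
                   True, False, True, False, True]

general_rate = error_rate(general_model)
reasoning_rate = error_rate(reasoning_model)
ratio = reasoning_rate / general_rate

print(f"general: {general_rate:.0%}, "
      f"reasoning: {reasoning_rate:.0%}, ratio: {ratio:.1f}x")
```

With these placeholder grades the reasoning variant shows twice the error rate of the general one; a proper test would also report sample sizes and confidence intervals before claiming a 2-3x gap.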