• by curious_cat_163 on 6/12/2025, 6:16:15 PM

    Nathan Lambert provides a counterpoint to the recent "The Illusion of Thinking" paper by Apple [1]:

    "On one of these toy problems, the Tower of Hanoi, the models structurally cannot output enough tokens to solve the problem — the authors still took this as a claim that “these models cannot reason” or “they cannot generalize.” This is a small scientific error."
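The token-budget point is easy to verify: the classic recursive solution for n disks takes 2^n − 1 moves, so a full move-by-move transcript grows exponentially. A minimal sketch (mine, not from Lambert or the paper; the function name is illustrative):

```python
def hanoi_moves(n: int, src: str = "A", aux: str = "B", dst: str = "C") -> list[str]:
    """Return the move list for the standard recursive Tower of Hanoi solution."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest, then stack the rest on top.
    return (
        hanoi_moves(n - 1, src, dst, aux)
        + [f"{src}->{dst}"]
        + hanoi_moves(n - 1, aux, src, dst)
    )

# Move counts double (plus one) with each added disk: 2**n - 1 total.
for n in (10, 15, 20):
    print(n, len(hanoi_moves(n)))
```

At n = 20 that is already over a million moves, so even at a few tokens per move the transcript exceeds typical model output limits regardless of reasoning ability.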

    "it appears that a majority of critiques of AI reasoning are based in a fear of no longer being special rather than a fact-based analysis of behaviors."

    [1]: https://www.arxiv.org/pdf/2506.06941