by vessenes on 12/28/2024, 8:12:49 AM
I'd call this paper a "big deal" in that it is a normalization of, a very fair summary of, and an indication that there is a future for, LLMs in pure mathematics, from one of its leading practitioners.
Here on HN, we've spent the last few years talking and thinking a lot about LLMs, so the paper might not include much that would be surprising to math-curious HN'ers. However, there is a large cohort of research mathematicians out there who likely don't know much about modern AI; Terence is saying there's a little utility in last-gen models (GPT-4), and he expects a lot of utility from combining next-gen models with Lean.
Again, not surprising if you read his blog, but I think publishing a full paper with the AMS is a pretty important moment.
by DominikPeters on 12/28/2024, 10:32:15 AM
There are also several videos of Tao giving 1hr talks on this topic, for example https://youtu.be/e049IoFBnLA?t=89
by vouaobrasil on 12/28/2024, 11:02:16 AM
I think his comparison to previous machine assistance is misleading. In previous cases, the use of machines was never creative, whereas AI now has the ability to suggest creative lines.
In the short term, this sounds exciting. But I also think it diminishes the beauty of math: it mechanizes it into a production line for truth and de-emphasizes the human experience in the search for it.
The same thing has happened in chess, where more people now advocate for Fischer random because preparing openings with a computer, and studying games with a computer, has become boring. Of course, the computer didn't initiate in-depth opening preparation, but it took it to the next level. The point is that mechanization is boring except where pure utility is concerned.
My point is that math as a thing of beauty is becoming mechanized, and this will lead to the same apathy toward math among the potentially interested. Of course, it's exciting now because it's still the wild west and there are things to figure out, but what of the future?
Using advanced AI in math is a mistake in my opinion. The search for truth as a human endeavor is inspiring. The production of truth in an industrialized fashion is boring.
by jfmc on 12/28/2024, 8:22:39 AM
Actually, most of the paper seems a bit obvious from the computer science side. LLMs scale to really complex tasks, but they are neither correct nor complete. If combined with a tool that is correct (code verifiers, interactive theorem provers), we can get a correct pipeline back.
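A minimal sketch of that pipeline (all hypothetical: llm_propose stands in for any model API, and it assumes a local `lean` binary that exits nonzero when a file fails to check):

    import subprocess, tempfile

    def lean_accepts(src: str) -> bool:
        # Write the candidate to a .lean file and ask Lean to elaborate it.
        # Correctness comes from the checker, not the model. (Assumes a
        # `lean` binary on PATH that exits nonzero on a failing file.)
        with tempfile.NamedTemporaryFile("w", suffix=".lean", delete=False) as f:
            f.write(src)
            path = f.name
        return subprocess.run(["lean", path], capture_output=True).returncode == 0

    def prove(statement: str, llm_propose, max_tries: int = 8):
        # Generate-and-verify loop. The LLM on its own is neither correct
        # nor complete; the pipeline is correct because only checker-approved
        # candidates are ever returned. It remains incomplete: it may give None.
        feedback = ""
        for _ in range(max_tries):
            candidate = llm_propose(statement, feedback)
            if lean_accepts(candidate):
                return candidate       # verified: safe to trust
            feedback = "previous candidate failed to typecheck"
        return None                    # no verified proof found

The asymmetry is the whole point: the model can be arbitrarily unreliable, because nothing unverified ever leaves the loop.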
by smellybigbelly on 12/28/2024, 8:43:59 AM
One vision in the article that stood out for me was how formal proof assistants allow large teams to collaborate on proving theorems. Imagine what we could achieve if we could do mathematics as a hive mind!
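A toy Lean sketch of the mechanism (invented statements, for illustration only): once the lemma statements are agreed on, strangers can prove them independently, and the kernel certifies that the pieces actually compose, so no one has to trust or even read anyone else's work:

    -- Contributors A and B each prove their assigned lemma in isolation.
    theorem lemma_a (n : Nat) : n + 0 = n := by simp
    theorem lemma_b (n : Nat) : 0 + n = n := by simp

    -- A third contributor assembles the target from the checked pieces;
    -- Lean verifies the whole chain, so collaboration needs no trust.
    theorem target (n : Nat) : (n + 0) + 0 = 0 + n := by
      rw [lemma_a, lemma_a, lemma_b]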
by fastneutron on 12/28/2024, 11:59:08 AM
The idea of neurosymbolic systems has been in the air for a long time, but every time I look at the commentary on an article like this, I’m surprised at the number of “OMG, why didn’t anyone think of this?” comments.
For a while I got the impression that an ideological undercurrent of “DL vs. GOFAI” had gotten in the way of more widespread exploration of these ideas. Tao’s writing here changed my view to something more pragmatic: formalizing the symbolic part of neurosymbolic AI requires too much manual intervention to scale easily. He is likely onto something by putting an LLM in the loop with another system like Lean or Athena to iterate on the formalization process.
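One plausible shape for that iteration, in the same hypothetical style as the sketch above: the model drafts a formal statement from the informal one, Lean's diagnostics are the only feedback signal, and a `sorry` placeholder means only the statement, not the proof, has to elaborate at this stage:

    def formalize(informal_stmt: str, llm_draft, lean_errors, max_rounds: int = 5):
        # Iterative autoformalization sketch (all helpers hypothetical).
        # llm_draft maps (informal statement, last error) to a candidate
        # declaration ending in ":= by sorry"; lean_errors runs Lean on it
        # and returns its diagnostics, or "" when the statement elaborates.
        error = ""
        for _ in range(max_rounds):
            candidate = llm_draft(informal_stmt, error)
            error = lean_errors(candidate)
            if error == "":
                return candidate   # statement formalized; proving it comes next
        return None                # still needs manual intervention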
by riku_iki on 12/28/2024, 10:09:20 PM
I know this guy is a Fields medalist, but all his recent posts, and now this publication, lack substance and actual contributions, so he sounds more like a hyped Twitter influencer than a researcher.
by sylware on 12/28/2024, 2:36:25 PM
An LLM is probably not the right model for AIs strapped to a formal solver. But the experience gained with LLMs may help in designing those math-oriented models.
by benreesman on 12/28/2024, 10:02:34 AM
With all respect to luminaries: this will not stand up. This will be treated harshly by history.
I’m nobody, but I’m going to stand up to Terence Tao and Scott Aaronson: you’re wrong, or bought, or both.
This is a detour and I want to make clear to history what side I was on.
by kartwna on 12/28/2024, 10:10:46 AM
"I have found it works surprisingly well for writing mathematical LaTeX, as well as formalizing in Lean; indeed, it assisted in writing this very article by suggesting several sentences as I was writing, many of which I retained or lightly edited for the final version. While the quality of its suggestions is highly variable, it can sometimes display an uncanny level of simulated understanding of the intent of the text."
Tao is one of the few mathematicians who constantly sings the praises of ChatGPT specifically, and now Copilot. He does not appear to have any issue with his thought processes being logged on servers owned by a multi-billion-dollar company.
He never mentions copyright or social issues. He never mentions results other than "writing LaTeX is easier".
Does he have any incentive for promoting OpenAI?
by simplemathmind on 12/28/2024, 10:29:39 AM
I did not read the PDF, but I think LLMs and Lean could be useful tools for mathematicians to prove or refute theorems. The creative idea that sparks knowledge and new theorems still lies in the human; the others are tools that can help reduce the time and effort needed and so, indirectly, foster and enhance creativity. They could also take over reasoning that requires the mechanical proving of many details. Anyway, what I just said seems simple and clear, and in no way would it be worth publishing in a high-ranking math journal.