• by aithrowawaycomm on 12/7/2024, 5:14:03 PM

    Ugh:

      The process might be akin to a chess-playing AI playing a million games to learn optimal strategies, Subbarao Kambhampati, a computer scientist at Arizona State University, told me.  Or perhaps a rat that, having run 10,000 mazes, develops a good strategy for choosing among forking paths and doubling back at dead ends.
    
    Lab rats don’t run 10,000 mazes! They don’t live nearly long enough for that. They run fewer than a dozen, and they seem to have a good strategy “baked in” as part of their innate spatial reasoning abilities. What Wong is really saying here is that o1 is like a very slow and stupid rat that cannot actually reason about anything.

    The way the AI field constantly ignores and trivializes animal intelligence - a trend dating all the way back to Alan Turing - is in my view the root cause of AI winters, including the one coming in the next year or so. Investors don’t want to ask “is this thing actually smarter than a fish?” and executives don’t want to know the answer.

  • by techfeathers on 12/7/2024, 9:24:29 PM

    Something has always seemed incomplete about testing models against standardized tests. I would expect AI models to do well on standardized tests first, much better than humans, but it makes me wonder whether there’s something else humans possess that these tests don’t measure. We test humans with these tests too, and loosely speaking there’s a correlation between a person’s success on an advanced math test or the bar exam and success in their career. But we also know of cases where the correlation seems inverted: people who do great as a PhD student or mathlete but can’t operate in a day-to-day job.

    So when AI companies start saying these AIs are as intelligent as a PhD student, it makes me wonder: most people aren’t as smart as a PhD student, and yet AI still seems to choke on some basic tasks.