• by ks2048 on 9/29/2025, 9:08:22 PM

    I'm not sure if there's anything interesting here, but I did notice that the author was interviewed about this paper on the podcast Machine Learning Street Talk:

    https://www.youtube.com/watch?v=K18Gmp2oXIM&t=3s

  • by getnormality on 9/30/2025, 2:09:19 AM

    In statistics, sample efficiency means you can precisely estimate a specified parameter, like the mean, with few samples. In AI, it seems to mean that the AI can learn how to do unspecified, very general stuff without much data, as if the underlying truth about the world, and how to reach one's goals within it, were just some giant parameter vector that we need to infer more or less efficiently from "sampled" sensory data.
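
    To make the statistical sense concrete, here is a minimal sketch (illustrative Python, not from the paper or the comment) of how the spread of a sample-mean estimate shrinks roughly as 1/sqrt(n), which is what "needing few samples for a precise estimate" cashes out to:

        import random, statistics

        def mean_estimate_spread(n, trials=2000):
            # Empirical standard deviation of the sample mean across repeated
            # draws of size n from a standard normal distribution.
            estimates = [
                statistics.fmean(random.gauss(0.0, 1.0) for _ in range(n))
                for _ in range(trials)
            ]
            return statistics.stdev(estimates)

        for n in (10, 100, 1000):
            # Spread falls roughly as 1/sqrt(n): about 0.32, 0.10, 0.03.
            print(n, round(mean_estimate_spread(n), 3))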

  • by EliRivers on 9/29/2025, 9:57:29 PM

    > Picture a machine endowed with human intellect. In its most simplistic form, that is Artificial General Intelligence (AGI).

    Artificial human intelligence. Not what I'd call general, but I guess so long as we make it clear that by "general" we don't actually mean general, fine. I'd really expect actual general intelligence to do a lot better than humans, in ways we can't understand any more than ants can comprehend us.

  • by comeonbro on 9/29/2025, 8:34:29 PM

    > simp-maxxing

    Might want to write this out in full, lol. I thought this in particular was going to be a much more entertaining point.

  • by nis0s on 9/29/2025, 10:16:27 PM

    In my view, it fulfills the following criteria:

    1) Few-shot to zero-shot training for achieving a useful ability on a given new problem.

    2) Self-determining optimal paths to fine-tuning at inference time based on minimal instructions or examples.

    3) Having the capacity to self-correct, maybe by building or confirming heuristics.

    All of these describe, for example, an intern who is given a new, unseen task and can figure out the rest without handholding.

  • by SeanLuke on 9/29/2025, 10:30:15 PM

    My answer: while 99% of the AI community was busy working on Weak AI, that is, developing systems that could perform tasks that humans can do, notionally because of our Big Brains, a tiny fraction of people promoted Hard AI, that is, AI as a philosophical recreation of Lt. Commander Data.

    Hard AI has long had a well-deserved jet-black reputation as a flaky field filled with armchair philosophers, hucksters, impresarios, and Loebner followers who don't understand the Turing Test. It eventually got so bad that the entire field decided to rebrand itself as "Artificial General Intelligence". But it's the same duck.

  • by mwkaufma on 9/29/2025, 8:57:00 PM

    A term in search of a definition, clearly.

  • by jongjong on 9/29/2025, 9:21:43 PM

    It's been a moving goalpost, but I think the point where people will be forced to acknowledge it is when fully autonomous agents are outcompeting most humans in most areas.

    So long as half of people are employed or in business, these people will insist that it's not AGI yet.

    Until AI can fully replace you in your job, it's going to continue to feel like a tool.

  • by jonny_eh on 9/29/2025, 10:03:35 PM

    Please fix the title on HN to match the actual paper's superior title: "What the F*ck Is Artificial General Intelligence?"

  • by mbgerring on 9/29/2025, 9:05:42 PM

    From what I can see, Artificial General Intelligence is a drug-fueled millenarian cult, and attempts to define it that don't consider this angle will fail.

  • by robotcookies on 9/30/2025, 5:39:11 PM

    It is intelligence created by design rather than by natural selection.

  • by realityfactchex on 9/29/2025, 9:46:40 PM

    It would mean actually reasoning, not just applying stats to look like reasoning.