• by JKCalhoun on 4/25/2024, 1:24:11 AM

    An LLM in my pocket is truly a mind-blowing concept, I have to say. More than anything else: phone, camera, internet. This feels like a really big deal.

    And with regard to LLMs (AI?) in general, I don't think we have any idea right now what we will all be using them for in ten years. But it just feels like a fundamental change is coming from all this.

  • by gnabgib on 4/24/2024, 11:23:23 PM

    Discussion: [0] (33 points, 18 hours ago, 7 comments)

    [0]: https://news.ycombinator.com/item?id=40140675

  • by solarkraft on 4/25/2024, 1:38:21 PM

    I'm not knowledgeable enough to parse much out of the Readme.

    How "good" are the models approximately? What hardware do I need to run them? How fast are they?

  • by simonw on 4/25/2024, 1:05:35 AM

    Has anyone seen a working, clearly explained recipe for running this using the Python MLX library on macOS yet?
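
    For context, the usual mlx-lm pattern on an Apple Silicon Mac looks roughly like the sketch below. The load and generate helpers are real mlx-lm functions, but the model identifier is a placeholder and these checkpoints may first need converting to MLX format, so treat this as an outline rather than a verified recipe.

        # Sketch only: assumes "pip install mlx-lm" on an Apple Silicon Mac.
        from mlx_lm import load, generate

        # Placeholder repository id; substitute an MLX-converted checkpoint.
        model, tokenizer = load("mlx-community/example-model-4bit")

        # Generate a short completion from a plain-text prompt.
        text = generate(
            model,
            tokenizer,
            prompt="Summarize what MLX is in one sentence.",
            max_tokens=100,
        )
        print(text)

    The same flow is also exposed as a command line tool (python -m mlx_lm.generate --model <repo> --prompt "..."), which is often the quickest way to sanity-check a converted model.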

  • by sp332 on 4/25/2024, 1:02:03 AM

    Why is the 3B model worse than the 450M model on MMLU and TruthfulQA?

  • by Bloating on 4/25/2024, 1:15:29 AM

    Now we can give credit to Apple for inventing AI!