LLM in a Flash: Efficient Large Language Model Inference with Limited Memory

by keep_reading on 12/21/2023, 10:31:08 PM with 1 comment
  • by dang on 12/21/2023, 11:08:31 PM

    LLM in a Flash: Efficient LLM Inference with Limited Memory - https://news.ycombinator.com/item?id=38704982 - Dec 2023 (52 comments)