• by reynaldi on 1/1/2025, 3:47:45 PM

    Some impressive results:

    > 1.58-bit FLUX achieves a 7.7× reduction in model storage and more than a 5.1× reduction in inference memory usage
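    The "1.58-bit" name presumably refers to ternary weights in {-1, 0, +1}, i.e. log2(3) ≈ 1.58 bits per weight (as popularized by BitNet b1.58). A minimal sketch of absmean ternary quantization, assuming a per-tensor scale; the actual scheme used by 1.58-bit FLUX may differ in details:

    ```python
    import numpy as np

    def ternary_quantize(w, eps=1e-8):
        """Quantize weights to {-1, 0, +1} with a per-tensor scale.

        A sketch of absmean ternary quantization (BitNet b1.58 style);
        the real 1.58-bit FLUX recipe may use a different scaling rule.
        """
        scale = np.mean(np.abs(w)) + eps          # per-tensor scale
        q = np.clip(np.round(w / scale), -1, 1)   # ternary codes {-1, 0, +1}
        return q.astype(np.int8), scale

    def dequantize(q, scale):
        """Reconstruct approximate weights from codes and scale."""
        return q.astype(np.float32) * scale

    # Rough arithmetic behind the quoted numbers: ~1.58 bits/weight vs.
    # 16 bits/weight for bf16 gives about a 10x ceiling for the weight
    # payload alone; unquantized layers, activations, and packing overhead
    # would explain why the end-to-end figures (7.7x storage, 5.1x memory)
    # come out lower.
    if __name__ == "__main__":
        w = np.random.randn(4, 4).astype(np.float32)
        q, s = ternary_quantize(w)
        print("codes:\n", q)
        print("max reconstruction error:", np.max(np.abs(w - dequantize(q, s))))
    ```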