Show HN: Reduce ChatGPT costs 10x with distributed cache for LLMs

by zaiste on 4/15/2024, 4:07:34 PM with 3 comments
  • by throwaway888abc on 4/15/2024, 5:46:52 PM

    Looks great. Do you have any concrete data on how much money it will save?

    Also, how does it compare to, for example, GptCache[0], or to other semantic cache solutions[1]?

    [0] https://gptcache.readthedocs.io/en/latest/

    [1] https://portkey.ai/blog/reducing-llm-costs-and-latency-seman...
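For context, the semantic caches mentioned above key on embedding similarity rather than on exact prompt text, so a paraphrased prompt can still hit the cache and skip the paid LLM call. The sketch below illustrates that general idea only; it is not the API of the linked project or of GptCache, and it uses a toy bag-of-words embedding where a real cache would use a sentence-embedding model:

```python
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words token counts. A real semantic
    # cache would use a sentence-embedding model here instead.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class SemanticCache:
    """Illustrative cache: returns a stored response when a new
    prompt is similar enough to a previously seen one."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, prompt, response)

    def get(self, prompt: str):
        q = embed(prompt)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best and cosine(q, best[0]) >= self.threshold:
            return best[2]  # cache hit: no LLM call needed
        return None  # cache miss: caller queries the LLM, then put()s

    def put(self, prompt: str, response: str):
        self.entries.append((embed(prompt), prompt, response))


cache = SemanticCache()
cache.put("what is the capital of france", "Paris")
# A slightly rephrased prompt still hits the cache:
print(cache.get("what is the capital of france?"))  # → Paris
```

Production systems replace the linear scan with a vector index and tune the similarity threshold, since a threshold that is too loose returns wrong cached answers and one that is too strict saves nothing.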