by jedberg on 2/2/2025, 1:13:11 PM
It would be interesting to see this on reddit's workload. The entire system was designed around the cache getting a 95%+ hit rate, because basically anything on the front page of the top 1000 subreddits gets the overwhelming majority of traffic, so the cache is mostly filled with that.
In other words, this solves the problem of "one-hit wonders" getting evicted from the cache quickly, but that basically already happened with the reddit workload.
The exception to that was Google, which would scrape old pages, which is why we shunted them to their own infrastructure and didn't cache their requests. Maybe with this algo, we wouldn't have had to do that.
by jbellis on 2/2/2025, 1:58:58 PM
Caffeine is a gem. Does what it claims, no drama, no scope creep, just works. I've used it in anger multiple times, most notably in Apache Cassandra and DataStax Astra, where it handles massive workloads invisibly, just like you'd want.
Shoutout to author Ben Manes if he sees this -- thanks for the great work!
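For anyone who hasn't tried it, typical usage is just a builder and a loader. This is a minimal sketch; the bound, TTL, and fetchFromBackend loader are illustrative, not what Cassandra actually configures:

    import com.github.benmanes.caffeine.cache.Caffeine;
    import com.github.benmanes.caffeine.cache.LoadingCache;
    import java.time.Duration;

    class CaffeineExample {
        // Hypothetical stand-in for a real data source.
        static byte[] fetchFromBackend(String key) { return key.getBytes(); }

        public static void main(String[] args) {
            // A bounded, loading cache: eviction, expiration, and
            // concurrency are all handled behind the builder API.
            LoadingCache<String, byte[]> cache = Caffeine.newBuilder()
                .maximumSize(10_000)                     // illustrative bound
                .expireAfterWrite(Duration.ofMinutes(5)) // illustrative TTL
                .build(CaffeineExample::fetchFromBackend);

            byte[] value = cache.get("some-key"); // loads on miss, cached after
        }
    }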
by hinkley on 2/2/2025, 8:45:00 PM
Years ago I encountered a caching system that I misremembered as being a plugin for nginx and thus was never able to track down again.
It had a clever caching algorithm that favored latency over bandwidth: it weighted hit count against record size, so that given limited space it would rather keep two small records with more hits than one large record, serving more requests from cache overall.
For some workloads the payload size is roughly proportional to the cost of the request to the system of record, but latency and per-request setup costs tend to shift that balance a bit.
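That description sounds like Greedy-Dual-Size-Frequency (GDSF), the policy some web proxies (e.g. Squid) offer. A rough sketch of the scoring idea, with hypothetical names and GDSF's aging term left out: higher hits-per-byte wins, so many small hot records outrank one big cold one.

    import java.util.Comparator;
    import java.util.PriorityQueue;

    class SizeAwareCacheSketch {
        // Immutable entry with a fixed hit count, for simplicity; a real
        // policy would reprioritize as counts change.
        record Entry(String key, long sizeBytes, long hits) {
            double score() { return (double) hits / sizeBytes; } // hits per byte
        }

        private final PriorityQueue<Entry> byScore =
            new PriorityQueue<>(Comparator.comparingDouble(Entry::score));
        private final long capacityBytes;
        private long usedBytes;

        SizeAwareCacheSketch(long capacityBytes) { this.capacityBytes = capacityBytes; }

        void admit(Entry e) {
            // Evict the lowest hits-per-byte entries until the new one fits,
            // preferring several small hot records over one large one.
            while (usedBytes + e.sizeBytes() > capacityBytes && !byScore.isEmpty()) {
                usedBytes -= byScore.poll().sizeBytes();
            }
            byScore.add(e);
            usedBytes += e.sizeBytes();
        }
    }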
But the bigger problem with LRU is that some workloads eventually resemble table scans, and the moment the data set no longer fits in cache, performance falls off a very tall cliff. And not just for that query but for all subsequent ones, since evicting large quantities of recently used records causes cache misses for everyone else. So you need to count frequency, not just recency.
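That frequency-plus-recency point is what TinyLFU-style admission (which Caffeine uses) addresses: under pressure, a new key only displaces the LRU victim if its estimated frequency is higher, so a scan can't flush the hot set. A toy sketch, with a plain map standing in for the compact count-min sketch a real implementation would use:

    import java.util.HashMap;
    import java.util.Map;

    class AdmissionSketch {
        private final Map<String, Integer> freq = new HashMap<>();

        void recordAccess(String key) { freq.merge(key, 1, Integer::sum); }

        // Scan traffic has frequency ~1, so it loses to any warm victim and
        // passes through without evicting frequently used entries.
        boolean shouldAdmit(String candidate, String lruVictim) {
            return freq.getOrDefault(candidate, 0) > freq.getOrDefault(lruVictim, 0);
        }
    }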
by thomastay on 2/2/2025, 7:38:22 PM
> However, diving into a new caching approach without a deep understanding of our current system seemed premature
Love love love this - I really enjoy reading articles where people analyze existing high-performance systems instead of just going for the new and shiny thing.
by dan-robertson on 2/2/2025, 8:23:04 PM
Near the beginning, the author writes:
> Caching is all about maximizing the hit ratio
A thing I worry about a lot is discontinuities in cache behaviour. Simple example: a client polls a list of entries and downloads each entry one at a time to see if it has changed. Obviously this is a bit of a silly way for a client to behave, but if you have a small LRU cache (eg it is partitioned such that partitions are small and all of this client's requests go to the same partition), then there is some threshold size at which the client transitions from ~all requests hitting the cache to ~none hitting it.
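To make that threshold concrete, here's a toy access-ordered LinkedHashMap LRU (the sizes are made up): cycling over as many keys as the cache holds hits on every pass after the first, while one extra key drops the hit rate to zero.

    import java.util.LinkedHashMap;
    import java.util.Map;

    class LruCliffDemo {
        static double hitRate(int capacity, int keys, int rounds) {
            // accessOrder=true makes this a classic LRU.
            Map<Integer, Integer> lru = new LinkedHashMap<>(16, 0.75f, true) {
                @Override protected boolean removeEldestEntry(Map.Entry<Integer, Integer> e) {
                    return size() > capacity;
                }
            };
            int hits = 0, total = 0;
            for (int r = 0; r < rounds; r++)
                for (int k = 0; k < keys; k++, total++)
                    if (lru.put(k, k) != null) hits++; // previous value present = hit
            return (double) hits / total;
        }

        public static void main(String[] args) {
            System.out.println(hitRate(100, 100, 10)); // 0.9: only the first pass misses
            System.out.println(hitRate(100, 101, 10)); // 0.0: every access now misses
        }
    }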
This is a bit different from some behaviours always being bad for cache (eg a search crawler fetches lots of entries once).
Am I wrong to worry about these kinds of ‘phase transitions’? Should the focus just be on optimising hit rate in the average case?
by quotemstr on 2/2/2025, 6:08:24 PM
Huh. Their segmented LRU setup is similar to the Linux kernel's active and inactive lists for pages. Convergent evolution in action.
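For anyone unfamiliar, the SLRU shape in a nutshell (a toy sketch, not Caffeine's actual implementation): new keys land in a probation segment, a hit promotes them to a protected segment, and protected overflow demotes back to probation rather than evicting, so entries need repeated hits to stay resident.

    import java.util.LinkedHashSet;

    class SegmentedLruSketch {
        // Insertion-ordered sets; the first element of each is its eldest.
        private final LinkedHashSet<String> probation = new LinkedHashSet<>();
        private final LinkedHashSet<String> protectedSeg = new LinkedHashSet<>();
        private final int probationCap, protectedCap;

        SegmentedLruSketch(int probationCap, int protectedCap) {
            this.probationCap = probationCap;
            this.protectedCap = protectedCap;
        }

        void access(String key) {
            if (protectedSeg.remove(key)) {        // hit in protected: refresh recency
                protectedSeg.add(key);
            } else if (probation.remove(key)) {    // hit in probation: promote
                protectedSeg.add(key);
                if (protectedSeg.size() > protectedCap) demoteEldest();
            } else {                               // miss: admit into probation
                probation.add(key);
                if (probation.size() > probationCap) evictEldest();
            }
        }

        private void demoteEldest() {
            String eldest = protectedSeg.iterator().next();
            protectedSeg.remove(eldest);
            probation.add(eldest);                 // demoted, not evicted
            if (probation.size() > probationCap) evictEldest();
        }

        private void evictEldest() {
            probation.remove(probation.iterator().next());
        }
    }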
by nighthawk454 on 2/2/2025, 7:16:24 PM
Seems to be hugged, so here's a cached view
https://web.archive.org/web/20250202094451/https://adriacabe... (images are cached better here)
by dstroot on 2/2/2025, 3:58:38 PM
The codebase has >16k stars on GitHub, yet only 1 open issue and 3 open PRs. I've never seen that before on such a heavily used codebase. Kudos to the maintainer(s).
by jupiterroom on 2/2/2025, 11:26:15 AM
Really random question, but what is used to create the images in this blog post? I see this style quite often but have never been able to track down what it is.
by synthc on 2/2/2025, 9:37:05 AM
Interesting deep dive on the internals of Caffeine, a widely used JVM caching library.
by urbandw311er on 2/2/2025, 10:37:05 PM
Caffeine is also the name of a macOS utility that stops the screen from going to sleep. It'd be great if whichever came second could consider a name change.