• by kiitos on 3/17/2025, 2:52:40 AM

    There are _so many_ bugs in this code.

    One example among many:

    https://github.com/DiceDB/dice/blob/0e241a9ca253f17b4d364cdf... defines func ExpandID, which reads from cycleMap without locking the package-global mutex; and func NextID, which writes to cycleMap under a lock of the package-global mutex. So writes are synchronized, but only between each other, and not with reads, so concurrent calls to ExpandID and NextID would race.
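
    A minimal sketch of the pattern being described, plus the obvious fix (the names follow the linked file, but the bodies and signatures are illustrative, not DiceDB's actual code):

      package ids

      import "sync"

      var (
          mu       sync.RWMutex // the linked code uses one package-global mutex; an RWMutex is one way to fix it
          cycleMap = map[uint8]string{}
      )

      // Racy: this read is not synchronized with the locked write in NextID,
      // so `go test -race` flags concurrent ExpandID/NextID calls.
      func ExpandIDRacy(cycle uint8) string {
          return cycleMap[cycle]
      }

      // Fixed: take the read lock so reads are ordered against writes.
      func ExpandID(cycle uint8) string {
          mu.RLock()
          defer mu.RUnlock()
          return cycleMap[cycle]
      }

      // NextID writes under the write lock, as described in the comment above.
      func NextID(cycle uint8, id string) {
          mu.Lock()
          defer mu.Unlock()
          cycleMap[cycle] = id
      }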

    This is all fine as a hobby project or whatever, but very far from any kind of production-capable system.

  • by deazy on 3/16/2025, 6:06:29 PM

    Looking at the DiceDB code base, I have a few questions regarding its design. I'm asking to understand the project's goals and design rationale; anyone, feel free to help me understand this.

    I could be wrong, but the primary in-memory storage appears to be a standard Go map with locking. Is this a temporary choice for iterative development, and is there a longer-term plan to adopt a more optimized or custom data structure?

    I find DiceDB's reactivity mechanism very intriguing, particularly the "re-execution" of the entire watch command (i.e., re-running GET.WATCH mykey on key modification).

    From what I understand, the Eval func executes client-side commands; this seems to be laying the foundation for more complex watch commands that can be evaluated before sending notifications to clients.
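
    To illustrate the distinction I'm asking about (the types and names below are mine, not DiceDB's):

      package sketch

      // Pub/Sub-style notification: push only the fact that a key changed;
      // each client follows up with its own read.
      func notifyChanged(subs []chan string, key string) {
          for _, ch := range subs {
              ch <- key
          }
      }

      // Re-execution-style reactivity: re-run the watched command (here a
      // plain GET) server-side and push the fresh result to every watcher.
      func reExecuteGetWatch(store map[string]string, watchers []chan string, key string) {
          val := store[key] // re-evaluating "GET.WATCH key" on each write to key
          for _, ch := range watchers {
              ch <- val
          }
      }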

    But I have the following questions.

    What is the primary motivation behind re-executing the entire command, as opposed to simply notifying clients of a key change (as in Redis Pub/Sub or streams)? Is the intent to simplify client-side logic by handling complex key dependencies on the server?

    Given that re-execution seems computationally expensive, especially with multiple watchers or more complex (hypothetical) watch commands, how are potential performance bottlenecks addressed?

    How does this "re-execution" approach compare in terms of scalability and consistency to more established methods like server-side logic (e.g., Lua scripts in Redis) or change data capture (CDC)?

    Are there plans to support more complex watch commands beyond GET.WATCH (e.g. JSON.GET.WATCH), and how would re-execution scale in those cases?

    I'm curious about the trade-offs considered in choosing this design and how it aligns with the project's overall goals. Any insights into these design decisions would help me understand its use-cases.

    Thanks

  • by bdcravens on 3/16/2025, 2:50:29 PM

    Is there a single sentence anywhere that describes what it actually is?

  • by schmookeeg on 3/16/2025, 4:32:57 PM

    Using an instrument of chance to name a data store technology is pretty amusing to me.

  • by cozzyd on 3/17/2025, 2:59:10 AM

    DiceDB sounds like the name of a joke database that returns random results.

  • by weekendcode on 3/16/2025, 10:21:54 PM

    From the benchmarks on 4 vCPUs and num_clients=4, the numbers don't look much different.

    Reactivity looks promising, but it doesn't seem that useful in the real world for a cache. For example, if a client subscribes to something and the machine goes down, what happens to reactivity?

  • by alexey-salmin on 3/16/2025, 3:40:12 PM

      | Metric               | DiceDB   | Redis    |
      | -------------------- | -------- | -------- |
      | Throughput (ops/sec) | 15655    | 12267    |
      | GET p50 (ms)         | 0.227327 | 0.270335 |
      | GET p90 (ms)         | 0.337919 | 0.329727 |
      | SET p50 (ms)         | 0.230399 | 0.272383 |
      | SET p90 (ms)         | 0.339967 | 0.331775 |
    
    UPD Nevermind, I didn't have my eyes open. Sorry for the confusion.

    Something I still fail to understand is where you can actually spend 20ms while answering a GET request in a RAM key-value store (unless you implement it in Java).

    I never gained much experience with existing open-source implementations, but when I was building proprietary solutions at my previous workplace, the in-memory response time was measured in tens to hundreds of microseconds. The lower bound of latency is mostly defined by syscalls, so using io_uring should in theory result in even better timings, even though I never got to try it in production.
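
    For reference, the lookup itself is essentially free; a trivial benchmark like the one below sits in the tens of nanoseconds on any modern box, so nearly the entire budget of even a sub-millisecond GET is syscalls and network, not the data structure:

      package kv

      import "testing"

      var m = map[string]string{"k": "v"}

      func BenchmarkMapGet(b *testing.B) {
          for i := 0; i < b.N; i++ {
              _ = m["k"] // the actual in-memory read; everything else is overhead
          }
      }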

    If you read from NVMe AND also do the erasure recovery across 6 nodes (lrc-12-2-2), then yes, you get into tens of milliseconds. But seeing these numbers for a single-node RAM DB just doesn't make sense, and I'm surprised everyone treats them as normal.

    Does anyone have experience with low-latency, high-throughput open-source key-value stores? Any specific implementations to recommend?

  • by OutOfHere on 3/16/2025, 3:59:03 PM

    In-memory caches (lacking persistence) shouldn't be called a database. It's not totally incorrect, but it's an abuse of terminology. Why is a Python dictionary not an in-memory key-value database?

  • by ac130kz on 3/16/2025, 3:42:57 PM

    Any reason to use this over Valkey, which is now faster than Redis and community driven? Genuinely interested.

  • by losvedir on 3/16/2025, 5:41:45 PM

    I didn't see it in the docs, but I'd want to know the delivery semantics of the pubsub before using this in production. I assume best effort / at most once? Any retries? In what scenarios will the messages be delivered or fail to be delivered?

  • by remram on 3/16/2025, 3:28:50 PM

    This seems orders of magnitude slower than Nubmq which was posted yesterday: https://news.ycombinator.com/item?id=43371097

  • by huntaub on 3/16/2025, 5:44:47 PM

    What are some example use cases where having the ability for the database to push updates to an application would be helpful (vs. the traditional polling approach)?

  • by alexpadula on 3/17/2025, 1:03:03 PM

    15655 ops a second on a Hetzner CCX23 machine with 4 vCPU and 16GB RAM is rather slow for an in-memory database, I hate to say it. You can't blame that on network latency: for example, supermassivedb.com is written in Go and achieves an order of magnitude more, actually 20x, and it's persisted. I must investigate the bottlenecks with Dice.

  • by rebolek on 3/16/2025, 4:04:27 PM

    - proudly open source. cool!
    - join discord. YAY :(

  • by throwaway2037 on 3/17/2025, 9:21:31 AM

    FYI: Here is the creator and maintainer's profile: https://github.com/arpitbbhayani

    Is there a plan to commercialise this product? (Offer commercial support, features, etc.) I could not find anything obvious from the home page.

  • by sidcool on 3/16/2025, 4:37:34 PM

    Is Arpit the system design course guy?

  • by Aeolun on 3/16/2025, 11:59:00 PM

    I feel like this needs a ‘Why DiceDB instead of Redis or Valkey’ section prominently on the homepage.

  • by DrammBA on 3/16/2025, 3:08:11 PM

    I love the "Follow on twitter" link with the old logo and everything. They probably used a template that hasn't been updated recently, but I'm choosing to believe it's actually a subtle sign of protest or resistance.

  • by datadeft on 3/16/2025, 3:06:34 PM

    Is this suffering from the same problems as Redis when trying to horizontally scale?

  • by re-lre-l on 3/17/2025, 8:54:40 AM

    > For Modern Hardware fully utilizes underlying core to get higgher throughput and better hardware utilization.

    Would be great to disclose the details of this one. I'm interested in what DiceDB uses to achieve higher throughput.

  • by robertlagrant on 3/17/2025, 12:17:52 PM

    > fully utilizes underlying core to get higgher throughput and better hardware utilization

    FYI this is a misspelling of "higher"

  • by nylonstrung on 3/16/2025, 5:07:36 PM

    Who is this for? Can you help me understand why and when I'd want to use this in place of Redis/Dragonfly?

  • by deadbabe on 3/17/2025, 2:58:18 AM

    I think Postgres can do everything this does and better if you use LISTEN/NOTIFY.
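
    Something like this pgx-based sketch (connection string and channel name are made up) gets you push-on-change with nothing but Postgres, assuming a trigger or the writer itself calls pg_notify:

      package main

      import (
          "context"
          "fmt"
          "log"

          "github.com/jackc/pgx/v5"
      )

      func main() {
          ctx := context.Background()
          conn, err := pgx.Connect(ctx, "postgres://localhost:5432/app") // hypothetical DSN
          if err != nil {
              log.Fatal(err)
          }
          defer conn.Close(ctx)

          // A trigger (or the writer) runs: SELECT pg_notify('key_changed', <key>);
          if _, err := conn.Exec(ctx, "LISTEN key_changed"); err != nil {
              log.Fatal(err)
          }
          for {
              n, err := conn.WaitForNotification(ctx) // blocks until a NOTIFY arrives
              if err != nil {
                  log.Fatal(err)
              }
              fmt.Println("changed:", n.Payload)
          }
      }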

  • by 999900000999 on 3/16/2025, 11:28:18 PM

    I like it!

    Any way to persist data in case of reboots?

    That's the only thing missing here.

    Is Go the only SDK?

  • by retropragma on 3/17/2025, 4:45:58 AM

    Why would I use this over keyspace notifications in Redis?
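
    By keyspace notifications I mean something along these lines with go-redis, after enabling notify-keyspace-events (the address, DB index, and key are placeholders):

      package main

      import (
          "context"
          "fmt"

          "github.com/redis/go-redis/v9"
      )

      func main() {
          ctx := context.Background()
          rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

          // Requires `CONFIG SET notify-keyspace-events KEA` (or the redis.conf equivalent).
          sub := rdb.PSubscribe(ctx, "__keyspace@0__:mykey")
          for msg := range sub.Channel() {
              fmt.Println(msg.Channel, msg.Payload) // payload is the event name, e.g. "set"
          }
      }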

  • by rednafi on 3/17/2025, 2:44:18 PM

    Database as a transport?

  • by spiderfarmer on 3/16/2025, 2:56:20 PM

    DiceDB is an in-memory, multi-threaded key-value DBMS that supports the Redis protocol.

    It’s written in Go.

  • by bitlad on 3/16/2025, 4:39:29 PM

    I think the performance benchmark you have done for DiceDB is fake.

    These are the real numbers - https://dzone.com/articles/performance-and-scalability-analy...

    They don't match your benchmarks.