• by pdabbadabba on 3/20/2024, 1:47:14 PM

    I'm sure this is just because I'm not the target audience, so I intend only the very gentlest criticism. But I literally LOLed at how completely incomprehensible this README was for me. It has really been a while since I've read a paragraph and had literally no idea what it was talking about. But here's the winner:

    > A network agnostic DHT crawler and monitor. The crawler connects to DHT bootstrappers and then recursively follows all entries in their k-buckets until all peers have been visited.

    Following the Wikipedia link for "DHT" yielded some clues. (Ah. Distributed hash table.) But I've still been looking at this for several minutes now and am basically just puzzled. But the graphs are pretty! Reading the word "amino" a little further down threw me off the scent for a bit. But I gather that is actually a proper noun, and we aren't really talking about proteins here.

    Maybe an initial sentence that makes fewer assumptions about the reader's familiarity with the jargon would be helpful.
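    For what it's worth, the quoted README sentence describes a fairly simple traversal: ask a bootstrap peer for the contents of its k-buckets, then repeat for every newly discovered peer until nothing new shows up. A rough sketch of that idea in Go, where the peer IDs and the in-memory routing table are made-up stand-ins for the real DHT RPCs a crawler like this would issue:

    ```go
    package main

    import "fmt"

    // peerID identifies a node in the DHT (hypothetical simplified type).
    type peerID string

    // routingTable maps each peer to the peers in its k-buckets.
    // A real crawler would learn these via network requests, not a map.
    var routingTable = map[peerID][]peerID{
    	"boot1": {"a", "b"},
    	"a":     {"b", "c"},
    	"b":     {"c"},
    	"c":     {"boot1"},
    }

    // crawl starts from the bootstrap set and follows k-bucket entries
    // breadth-first until every reachable peer has been visited once.
    func crawl(bootstrap []peerID) []peerID {
    	visited := map[peerID]bool{}
    	queue := append([]peerID{}, bootstrap...)
    	var order []peerID
    	for len(queue) > 0 {
    		p := queue[0]
    		queue = queue[1:]
    		if visited[p] {
    			continue
    		}
    		visited[p] = true
    		order = append(order, p)
    		// Enqueue every peer found in p's buckets.
    		queue = append(queue, routingTable[p]...)
    	}
    	return order
    }

    func main() {
    	fmt.Println(crawl([]peerID{"boot1"})) // [boot1 a b c]
    }
    ```

    That's the whole trick: a breadth-first walk over routing tables, which is why the output is a census of peers rather than any stored content.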

  • by mikae1 on 3/20/2024, 1:25:51 PM

    Unlucky naming collision with Slack’s networking tool Nebula: https://github.com/slackhq/nebula

  • by crotchfire on 3/20/2024, 10:07:01 PM

    It isn't really network-agnostic... in fact it doesn't support the (by far) largest DHT out there, the Mainline DHT that bittorrent uses.

    This is just a crawler for DHTs that use IPFS's implementation, or at least smell very similar to it.

  • by dTal on 3/20/2024, 3:30:42 PM

    Why is BitTorrent not supported? Perhaps I'm misunderstanding this thing but it seems like application #1.

  • by ogurechny on 3/20/2024, 3:36:20 PM

    /me remembers various DHT views, traffic flows, client stats, graphs and other data decorations in Azureus. Now that's what I call a dashboard.

  • by pedalpete on 3/20/2024, 10:22:53 PM

    Can someone explain why we want to crawl and/or monitor? What is this used for?

    When I think of a crawler, I think of a non-homogeneous network (if that is the right term).

    But with a blockchain, isn't it the case that each node has an entire copy of the chain, so you don't need to "crawl" it; it works more like a database.

    What am I not understanding about this?

  • by Alifatisk on 3/20/2024, 11:01:05 PM

    Instead of everyone crawling on their own, isn't it more efficient if everyone shared the same index somehow?