by mtrovo on 9/22/2025, 5:51:05 PM
by davexunit on 9/22/2025, 3:19:59 PM
This has some similarities and significant differences from OCapN [0]. Capability transfer and promise pipelining are part of both, and both are schemaless. Cap'n web lacks out-of-band capabilities, which OCapN has in the form of URIs known as sturdyrefs. I suppose this difference is why the examples show API key authentication since anyone can connect to the Cap'n Web endpoint. This is not necessary in OCapN because a sturdyref is an unguessable token so by possessing it you have the authority to send messages to the endpoint it designates. Cap'n Web also seems to lack the ability for Alice to introduce Bob to Carol, a feature in OCapN called third-party handoffs. Handoffs are needed for distributed applications. So I guess Cap'n Web is more for traditional client-server SaaS but now with a dash of ocaps.
by prngl on 9/22/2025, 10:52:42 PM
This is cool.
There's an interesting parallel with ML compilation libraries (TensorFlow 1, JAX jit, PyTorch compile) where a tracing approach is taken to build up a graph of operations that are then essentially compiled (or otherwise lowered and executed by a specialized VM). We're often nowadays working in dynamic languages, so they become essentially the frontend to new DSLs, and instead of defining new syntax, we embed the AST construction into the scripting language.
For ML, we're delaying the execution of GPU/linalg kernels so that we can fuse them. For RPC, we're delaying the execution of network requests so that we can fuse them.
Of course, compiled languages themselves delay the execution of ops (add/mul/load/store/etc) so that we can fuse them, i.e. skip over the round-trip of the interpreter/VM loop.
The power of code as data in various guises.
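The tracing idea described above can be sketched in a few lines: a Proxy records method calls into an operation graph instead of executing them, which is (very roughly) what TF1 graph building, JAX tracing, and promise-pipelining RPC all do. This is a toy model, not any library's actual implementation:

```javascript
// Toy tracer: method calls on the returned handle don't execute anything;
// they just append nodes to an operation graph for later "compilation".
function makeTracer() {
  const graph = []; // recorded ops: { id, op, args }
  let nextId = 0;
  const handle = (id) =>
    new Proxy({ __id: id }, {
      get(target, prop) {
        if (prop === "__id") return target.__id;
        // Each property access yields a function that records a call node
        // and returns a new handle referring to that node's future result.
        return (...args) => {
          const node = { id: nextId++, op: prop, args };
          graph.push(node);
          return handle(node.id);
        };
      },
    });
  return { root: handle(-1), graph };
}

// Build a graph without executing anything yet.
const { root, graph } = makeTracer();
const user = root.getUser(42);
const friends = user.getFriends();
console.log(graph.map((n) => n.op)); // → [ 'getUser', 'getFriends' ]
```

A backend (GPU kernel fuser, RPC batcher, compiler) can then walk `graph` and execute or fuse the recorded operations however it likes.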
Another angle on this is the importance of separating control plane (i.e. instructions) from data plane in distributed systems, which is any system where you can observe a "delay". When you zoom into a single CPU, it acknowledges its nature as a distributed system with memory far away by separating out the instruction pipeline and instruction cache from the data. In Cap'n Web, we've got the instructions as the RPC graph being built up.
I just thought these were some interesting patterns. I'm not sure I yet see all the way down to the bottom though. Feels like we go in circles, or rather, the stack is replicated (compiler built on interpreter built on compiler built on interpreter ...). In some respect this is the typical Lispy code is data, data is code, but I dunno, feels like there's something here to cut through...
by losvedir on 9/22/2025, 8:23:00 PM
Babe get in here, a new kentonv library just dropped!
I'm surprised how little code is actually involved here, just looking at the linked GitHub repo. Is that really all there is to it? In theory, it shouldn't be too hard to port the server side to another language, right? I'm interested in using it in an Elixir server for a JS/TS frontend.
For that matter, the language porting seems like a pretty good LLM task. Did you use much LLM-generated code for this repo? I seem to recall kentonv doing an entirely AI-generated (though human-reviewed, of course) proof of concept a few months ago.
by thethimble on 9/23/2025, 12:20:32 AM
I'm curious about two things:
1. What's the best way to do app deploys that update the RPC semantics? In other words how do you ensure that the client and server are speaking the same version of the RPC? This is a challenge that protos/grpc/avro explicitly sought to solve.
2. Relatedly, what's the best way to handle flaky connections? It seems that the export/import table is attached directly to a stateful WS connection such that if the connection breaks you'd lose the state. In principle there should be nothing preventing a client/server caching this state and reinstantiating it on reconnect. That said, given these tables can contain closures, they're not exactly serializable so you could run into memory issues. Curious if the team has thought about this.
Absolutely mind blowing work!
by mpweiher on 9/23/2025, 5:23:01 AM
Big fan of Cap'n Proto, and this looks really interesting, if RPC is the thing that works for your use case.
However, I stumbled over this:
The fact is, RPC fits the programming model we're used to. Every programmer is trained to think in terms of APIs composed of function calls, not in terms of byte stream protocols nor even REST. Using RPC frees you from the need to constantly translate between mental models, allowing you to move faster.
The fact that this is, in fact, true is what I refer to as "The gentle tyranny of Call/Return"
We're used to it, doing something more appropriate to the problem space is too unfamiliar and so more or less arbitrary additional complexity is...Just Fine™.
https://www.hpi.uni-potsdam.de/hirschfeld/publications/media...
Maybe it shouldn't actually be true. Maybe we should start to expand our vocabulary and toolchest beyond just "composed function calls"? So composed function calls are one tool in our toolchest, to be used when they are the best tool, not used because we have no reasonable alternative.
https://blog.metaobject.com/2019/02/why-architecture-oriente...
by spankalee on 9/22/2025, 4:45:02 PM
Looking at this quickly, it does seem to require (or strongly encourage?) a stateful server to hold on to the import and export tables and the state of objects in each.
One thing about a traditional RPC system where every call is top-level and you pass keys and such on every call is that multiple calls in a sequence can usually land on different servers and work fine.
Is there a way to serialize and store the import/export tables to a database so you can do the same here, or do you really need something like server affinity or Durable Objects?
by benpacker on 9/22/2025, 3:51:25 PM
This seems great and I'm really excited to try it in place of trpc/orpc.
Although it seems to solve one of the problems that GraphQL solved and trpc doesn't (the ability to request nested information from items in a list, or properties of an object, without changes to server-side code), there is no included solution for the server-side problem this creates, the one the data loader pattern was intended to solve: a naive GraphQL server implementation makes a database query per item in a list.
Until the server side tooling for this matures and has equivalents for the dataloader pattern, persisted/allowlist queries, etc., I'll probably only use this for server <-> server (worker <-> worker) or client <-> iframe communication and keep my client <-> server communication alongside more pre-defined boundaries.
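For readers unfamiliar with it, the dataloader pattern mentioned above can be sketched in a few lines (a minimal sketch; `makeLoader` and `batchFn` are made-up names, and real implementations like graphql/dataloader add caching and error handling):

```javascript
// Minimal dataloader sketch: individual load(id) calls issued in the same
// tick are coalesced into ONE call to batchFn(ids), avoiding N+1 queries.
function makeLoader(batchFn) {
  let queue = []; // pending { id, resolve } entries
  let scheduled = false;
  return function load(id) {
    return new Promise((resolve) => {
      queue.push({ id, resolve });
      if (!scheduled) {
        scheduled = true;
        // Flush once the current tick's synchronous work is done.
        queueMicrotask(async () => {
          const batch = queue;
          queue = [];
          scheduled = false;
          const results = await batchFn(batch.map((e) => e.id));
          batch.forEach((e, i) => e.resolve(results[i]));
        });
      }
    });
  };
}

// Usage: three load() calls become one batched "database query".
const calls = [];
const loadAuthor = makeLoader(async (ids) => {
  calls.push(ids); // record what the "database" actually sees
  return ids.map((id) => ({ id, name: "author-" + id }));
});

Promise.all([loadAuthor(1), loadAuthor(2), loadAuthor(3)]).then((authors) => {
  console.log(calls.length, authors.map((a) => a.name));
});
```

It's this batching step that has no obvious equivalent yet when each nested RPC call resolves independently on the server.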
by krosaen on 9/22/2025, 4:03:29 PM
This looks pretty awesome, and I'm excited it's not only a Cloudflare product (Cap'n Web exists alongside Cloudflare Workers). Reading this section [1], can you say more about:
> as of this writing, the feature set is not exactly the same between the two. We aim to fix this over time, by adding missing features to both sides until they match.
Do you think that once the two reach parity, that parity will remain? Or is it more likely that Cap'n Web will trail Cloudflare Workers, and if so, by how much?
[1] https://github.com/cloudflare/capnweb/tree/main?tab=readme-o...
by jensneuse on 9/23/2025, 5:58:40 AM
I'm with WunderGraph, a vendor providing enterprise tooling for GraphQL.
First, I absolutely love Cap'n Proto and the idea of chaining calls on objects. It's amazing to see what's possible with Cap'n Web.
However, one of the examples compares it to GraphQL, which I think falls a bit short of how enterprises use the Query language in real life.
First, like others mentioned, you'll have N+1 problems for nested lists. That is, if we call comments() on each post and author() on each comment, we absolutely don't want to have one individual call per nested object. In GraphQL, with the data loader pattern, this is just 3 calls.
Second, there's also an element of security. Advanced GraphQL gateways like WunderGraph's are capable of implementing fine-grained rate limiting that prevents a client from asking for too much data. With this RPC object-calling style, we don't have a notion of "Query Plans", so we cannot statically analyze a combination of API calls and estimate their cost before executing them.
Lastly, GraphQL these days is mostly used with Federation. That means a single client talks to a Gateway (e.g. WunderGraph's Cosmo Router), and the Router distributes the calls efficiently across many sub-services (Subgraphs), with a query planner that finds the optimal way to load information from multiple services. While Cap'n Web looks amazing, the reality is that a client would have to talk to many services.
Which brings me to my last point. Instead of going the Cap'n Web vs. GraphQL route, I'd think more about how the two can work together. What if a client could use Cap'n Web to talk to a Federation Router that allows it to interact with entities, the object definitions in a GraphQL Federation system?
I think this is really worth exploring. Not going against other API styles but trying to combine the strengths.
by dimal on 9/22/2025, 4:34:23 PM
Looks very cool, especially passing functions back and forth. But then I wonder, what would I actually use that for?
You mention that it’s schemaless as if that’s a good thing. Having a well defined schema is one of the things I like about tRPC and zod. Is there some way that you get the benefits of a schema with less work?
by divan on 9/22/2025, 4:07:45 PM
> RPC is often accused of committing many of the fallacies of distributed computing.
> But this reputation is outdated. When RPC was first invented some 40 years ago, async programming barely existed. We did not have Promises, much less async and await.
I'm confused. How is this a "protocol" if its core premises rely on very specific implementation of concurrency in a very specific language?
by beckford on 9/22/2025, 3:14:00 PM
Since Cap'n Web is a simplification of Cap'n Proto RPC, it would be amazing if eventually the simplification traveled back to all the languages that Cap'n Proto RPC supports (C++, etc.). Or at least could be made to be binary compatible. Regardless, this is great.
by bern4444 on 9/23/2025, 3:19:59 AM
This looks awesome, I had two questions:
Is there a structured concurrency library being used to manage the chained promise calls and lazy evaluation (IE when the final promise result is actually awaited) of the chained functions?
If an await is never added, would function calls continue to build up, taking more and more memory? I imagine the system would return an error and clear out the stack of calls before it became overwhelmed. What would these errors look like, if they do indeed exist?
by matlin on 9/22/2025, 6:40:40 PM
More niche use case, but this would be awesome for communicating between contexts in a web app (e.g. web worker, iframe, etc.)
by coreload on 9/23/2025, 4:01:51 PM
Language specific RPC. At least Cap'n Proto is language agnostic. ConnectRPC is language agnostic and web compatible and a gRPC extended subset. I would have difficulty adopting a language specific RPC implementation.
by vmg12 on 9/22/2025, 4:09:09 PM
I see that it supports websockets for the transport layer, is there any support for two way communication?
edit: was skimming the github repo https://github.com/cloudflare/capnweb/tree/main?tab=readme-o...
and saw this which answers my question:
> Supports passing functions by reference: If you pass a function over RPC, the recipient receives a "stub". When they call the stub, they actually make an RPC back to you, invoking the function where it was created. This is how bidirectional calling happens: the client passes a callback to the server, and then the server can call it later.
> Similarly, supports passing objects by reference: If a class extends the special marker type RpcTarget, then instances of that class are passed by reference, with method calls calling back to the location where the object was created.
Gonna skim some more to see if i can find some example code.
by jimmyl02 on 9/22/2025, 4:31:44 PM
This seems like a similar and more feature complete / polished version of JSON RPC?
The part that's most exciting to me is actually the bidirectional calling. Having set this up before via JSON RPC / custom protocol the experience was super "messy" and I'm looking forward to a framework making it all better.
Can't wait to try it out!
by porridgeraisin on 9/23/2025, 3:26:51 AM
I really like the idea. Especially the idea where you return different RpcTargets based on the "context", that's really quite nice. Not just for authentication and such, but for representing various things like "completely structurally different output for GET /thingies for admin and users".
The promise-passing lazily evaluated approach is also nice -- any debugging woes are solved by just awaiting and logging before the await -- and it solves composability at the server - client layer. The hackiness of `map()` is unfortunate, but that's just how JS is.
However, I don't see this being too useful without there also being composability at the server - database layer. This is notoriously difficult in most databases. I wonder what the authors / others here think about this.
For an example of what I mean:

  const user = rpc.getUser(id)
  const friends = await rpc.getFriends(user)

Sure beats:

  GET /user/id
  GET /graph?outbound=id

But, at the end, both cases are running two different SQL queries. Most of the time when we fuse operations in APIs, we do it all the way down to the SQL layer (with a join):

  GET /user/id?include=friends

which does a join and gets the data in a single query. So while it's a nice programming model for sure, I think in practice we'll end up having a `rpc.getUserAndFriends()` anyways.
I'm not that experienced, so I don't know in how many projects composability at just one layer would actually be enough to solve most composability issues. If it's a majority, then great, but if not, then I don't think this is doing much.
One situation where this actually works that comes to mind is SQLite apps, where multiple queries are more or less OK due to lack of network round trip. Or if your DB is colocated with your app in one of the new fancy datacenters where you get DB RAM to app RAM transfer through some crazy fast network interconnect fabric (RDMA) that's quicker than even local disk sometimes.
by jauntywundrkind on 9/22/2025, 5:19:05 PM
I really dig the flexibility of transport. Having something that works over postMessage is totally clutch!!
> Similarly, supports passing objects by reference: If a class extends the special marker type RpcTarget, then instances of that class are passed by reference, with method calls calling back to the location where the object was created.
Can this be relaxed? Having to design the object model ahead of time for RpcTarget is constraining. If we could just attach a ThingClass.prototype[Symbol.for('RpcTarget')] = true, there would be a lot more flexibility: less need to design explicitly for RpcTarget, and the ability to use RpcTarget with the objects/classes of 3rd-party libraries.
by deepsun on 9/22/2025, 11:25:56 PM
The main problem was always the same -- all the RPC libraries are designed to hide where the round-trip happens, but in the real world you always want to know where and how the round-trip happens.
Just read about Cap'n Web array .map() [1] -- it's hard to understand where the round-trip is. And that is not a feature, that's a bug -- in reality you want to easily tell what the code does, not hide it.
[1] https://blog.cloudflare.com/capnweb-javascript-rpc-library/#...
by ianbicking on 9/22/2025, 6:06:09 PM
Couple random thoughts:
I'm trying to see if there's something specifically for streaming/generators. I don't think so? Of course you can use callbacks, but you have to implement your own sentinel to mark the end, and other little corner cases. It seems like you can create a callback to an anonymous function, but then the garbage collector probably can't collect that function?
---
I don't see anything about exceptions (though Error objects can be passed through).
---
Looking at array mapping: https://blog.cloudflare.com/capnweb-javascript-rpc-library/#...
I get how it works: remotePromise.map(callback) will invoke the callback to see how it behaves, then make it behave similarly on the server. But it seems awfully fragile... I am assuming something like this would fail (in this case probably silently losing the conditional):
  friendsPromise.map(friend => ({friend, lastStatus: friend.isBestFriend ? api.getStatus(friend.id) : null}))
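A toy model shows why a data-dependent branch is hard for a record-replay scheme: the callback runs once against a placeholder that has no real values, so any condition on those values evaluates against the placeholder, not the data (this is an illustrative sketch, not Cap'n Web's actual mechanism):

```javascript
// Toy record-replay: run the callback ONCE against a placeholder proxy,
// record which properties it touches, then replay that fixed recording for
// every element. Branches on real values cannot be observed this way.
function recordMap(callback) {
  const ops = [];
  const placeholder = new Proxy({}, {
    get(_target, prop) {
      ops.push(prop);
      return undefined; // the recorder has no real values to hand out
    },
  });
  callback(placeholder);
  return ops;
}

// The branch condition sees `undefined` during recording, so only the
// falsy side of the conditional is ever recorded:
const ops = recordMap((friend) =>
  friend.isBestFriend ? friend.status : null
);
console.log(ops); // → [ 'isBestFriend' ] — 'status' was never touched
```

Whether Cap'n Web detects and rejects such callbacks or silently records one branch is exactly the kind of edge case worth testing.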
---
The array escape is clever and compact: https://blog.cloudflare.com/capnweb-javascript-rpc-library/#...
---
I think the biggest question I have is: how would I apply this to my boring stateless-HTTP server? I can imagine something where there's a worker that's fairly simple and neutral that the browser connects to, and proxies to my server. But then my server can also get callbacks that it can use to connect back to the browser, and put those callbacks (capability?) into a database or something. Then it can connect to a worker (maybe?) and do server-initiated communication. But that's only good for a session. It has to be rebuilt when the browser network connection is interrupted, or if the browser page is reloaded.
I can imagine building that on top of Cap'n Web, but it feels very complicated and I can equally imagine lots of headaches.
by unshavedyak on 9/22/2025, 4:00:52 PM
What would this look like for other language backends to support? Eg would be neat if Rust (my webservers) could support this on the backend
edit: Downvoted, is this a bad question? The title is generically "web servers"; obviously the content of the post focuses primarily on TypeScript, but I'm trying to determine if there's something unique about this that means it cannot be implemented in other languages. The server-side DSL execution could be difficult to implement, but as it's not strictly JavaScript, I imagine it's not impossible?
by fitzn on 9/22/2025, 3:46:38 PM
Just making sure I understand the "one round trip" point. If the client has chained 3 calls together, that still requires 3 messages sent from the client to the server. Correct?
That is, the client is not packaging up all its logic and sending a single blob that describes the fully-chained logic to the server on its initial request. Right?
When I first read it, I was thinking it meant 1 client message and 1 server response. But I think "one round trip" more or less means "1 server message in response to potentially many client messages". That's a fair use of "1 RTT", but it took me a moment to understand.
Just to make that distinction clear from a different angle, suppose the client were _really_ _really_ slow and it did not send the second promise message to the server until AFTER the server had computed the result for promise1. Would the server have already responded to the client with the result? That would be a way to incur multiple RTTs, albeit the application wouldn't care since it's bottlenecked by the client CPU, not the network in this case.
I realize this is unlikely. I'm just using it to elucidate the system-level guarantee for my understanding.
As always, thanks for sharing this, Kenton!
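The distinction being probed above can be illustrated with a toy pipelining sketch (all names here are invented for illustration; this is not Cap'n Web's actual wire format): chained calls are recorded locally as instructions that may reference earlier results by id, and the whole chain ships in one message, so N dependent calls cost one round trip rather than N.

```javascript
// Toy pipelining: each call records an instruction; placeholders ({ ref })
// let later instructions consume earlier results. flush() sends the whole
// chain at once.
function makePipeline(send) {
  const instructions = [];
  const call = (method, ...args) => {
    const id = instructions.length;
    instructions.push({ id, method, args });
    return { ref: id }; // placeholder for this call's future result
  };
  return { call, flush: () => send(instructions) };
}

// "Server": executes the batch, resolving refs to earlier results.
function fakeServer(instructions) {
  const results = [];
  const resolve = (a) => (a && a.ref !== undefined ? results[a.ref] : a);
  const api = {
    getUser: (id) => ({ id, name: "alice" }),
    getName: (user) => user.name,
  };
  for (const ins of instructions) {
    results[ins.id] = api[ins.method](...ins.args.map(resolve));
  }
  return results;
}

const p = makePipeline(fakeServer);
const user = p.call("getUser", 42);
const name = p.call("getName", user); // depends on user; still nothing sent
console.log(p.flush()); // ONE "message" executes the whole chain
```

In this model, a slow client that only recorded the first call before flushing would indeed pay an extra round trip for the second, which matches the reading that the guarantee is about batching, not magic.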
by random3 on 9/22/2025, 4:05:51 PM
It's inspired by, and created by a coauthor of, [Cap'n Proto](https://capnproto.org), which is also what the name OCapN (referenced in a separate comment) refers to.
Cap'n Proto is inspired by ProtoBuf, protobuf has gRPC and gRPC web.
We've been using ProtoBuf/gRPC/gRPC-web both in the backends and for public endpoints powering React / TS UI's, at my last startup. It worked great, particularly with the GCP Kubernetes infrastructure. Basically both API and operational aspects were non-problems. However, navigating the dumpster fire around protobuf, gRPC, gRPC web with the lack of community leadership from Google was a clusterfuck.
This said, I'm a bit at a loss with the meaning of schemaless. You can have different approaches wrt schema (see Avro vs ProtoBuf), but you can't fundamentally eschew schema/types. It's information tied to a communication channel that needs to live somewhere, whether that's explicit, implicit, handled by the RPC layer, passed to the type system, or worse, pushed all the way to the user/dev. Moreover, schemas tend to evolve, and any protocol needs to take that into account.
Historically, ProtoBuf has done a good job managing the various tradeoffs here. I have no experience using Cap'n Proto, but I've seen mostly good stuff about it, so perhaps I'm just missing something.
by jzig on 9/23/2025, 1:21:48 PM
Hey kentonv, this looks really neat. How would you go about writing API documentation for something like this? I really like writing up OpenAPI YAML documents for consumers of APIs I write so that any time someone asks a question, "How do I get XYZ?" I can just point them to e.g. the SwaggerUI. But I'm struggling to understand how that would work here.
by electric_muse on 9/22/2025, 3:21:25 PM
This is such a useful pattern.
I’ve ended up building similar things over and over again. For example, simplifying the worker-page connection in a browser or between chrome extension “background” scripts and content scripts.
There’s a reason many prefer “npm install” on some simple sdk that just wraps an API.
This also reminds me a lot of MCP, especially the bi-directional nature and capability focus.
by evansd on 9/22/2025, 3:43:32 PM
I need time to try this out for real, but the simplicity/power ratio here looks like it could be pretty extraordinary. Very exciting!
Tiny remark for @kentonv if you're reading: it looks like you've got the wrong code sample immediately following the text "Putting it together, a code sequence like this".
by crabmusket on 9/22/2025, 3:54:17 PM
Really nice to have something I could potentially use across the whole app. I've been looking into things I can use over HTTP, websockets, and also over message channels to web workers. I've usually ended up implementing something that rounds to JSON-RPC (i.e. just use an `id` per request and response to tie them together). But this looks much sturdier.
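The "id per request and response" scheme mentioned above is worth spelling out, since it's the core of every hand-rolled JSON-RPC-ish layer (a minimal sketch; `makeRpcClient` and the loopback server are made up for illustration):

```javascript
// Tag each request with an id; use the id to pair responses (which may
// arrive out of order) with their pending promises.
function makeRpcClient(sendToServer) {
  const pending = new Map(); // id -> { resolve, reject }
  let nextId = 1;
  return {
    call(method, params) {
      const id = nextId++;
      return new Promise((resolve, reject) => {
        pending.set(id, { resolve, reject });
        sendToServer({ id, method, params });
      });
    },
    // Feed incoming messages here (e.g. from ws.onmessage).
    onMessage(msg) {
      const entry = pending.get(msg.id);
      if (!entry) return; // unknown or already-settled id
      pending.delete(msg.id);
      msg.error ? entry.reject(new Error(msg.error)) : entry.resolve(msg.result);
    },
  };
}

// Loopback demo: the "server" replies asynchronously with params doubled.
const client = makeRpcClient((req) => {
  setTimeout(() => client.onMessage({ id: req.id, result: req.params * 2 }), 0);
});
client.call("double", 21).then((r) => console.log(r)); // → 42
```

What systems like Cap'n Web add on top of this is the import/export tables, so ids can designate long-lived objects and callbacks rather than just one-shot request/response pairs.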
Building an operation description from the callback inside the `map` is wild. Does that add much in the way of restrictions programmers need to be careful of? I could imagine branching inside that closure, for example, could make things awkward. Reminiscent of the React hook rules.
by youngbum on 9/24/2025, 12:46:07 AM
Yesterday, I migrated my web worker code from Comlink to Cap'n Web. I had extensive experience with Cloudflare Worker bindings, and as mentioned in the original post, they were quite similar.
Everything appears to be functioning smoothly, but I do miss the 'transfer' feature in Comlink. Although it wasn't a critical feature, it was a nice one.
The best aspect of Cap'n Web is that we can reuse most of the code related to clients, servers, and web workers (including Cloudflare Workers).
by sgarland on 9/23/2025, 1:55:40 AM
Tangential from a discussion in TFA about GraphQL:
> One benefit of GraphQL was to solve the “waterfall” problem of traditional REST APIs by allowing clients to ask for multiple pieces of data in one query. For example, instead of making three sequential HTTP calls:
  GET /user
  GET /user/friends
  GET /user/friends/photos

…you can write one GraphQL query to fetch it all at once.

Or you could have designed a schema to allow easy tree traversal. Or you could use a recursive CTE.
by 3cats-in-a-coat on 9/23/2025, 9:24:49 AM
On the path they've started with pipelining calls parametric on previous calls, they'll quickly realize that eventually they'll need basic transforms on the output before they pass it as input i.e. necessitating support for a standard runtime with a standard library of features.
I.e. imagine you need this:

  promise1 = foo.call1();
  promise2 = foo.call2();
  promise3 = foo.call2(promise1 + promise2);

You can't implement that "+" there unless you rewrite it as:

  promise1 = foo.call1();
  promise2 = foo.call2();
  promise3 = foo.add(promise1, promise2);
  promise4 = foo.call2(promise3);

You can also make some kind of an RpcNumber object, so you can use their Proxy machinery to write promise1.add(promise2), but ultimately you don't want to write such classes on the spot every time, or functions on the server for it. The problem is that even that won't give you conditionals (loops, branches) that run on the server; the server's execution is blocked by the client.
Once you realize THAT, you realize it's most optimal if both sides exchange command buffers in general, including batch instructions to remote and local calls and standardized expression syntax and library.
What they did with array.map() is cute but it's not obvious what you can and what you can't do with this, and most developers will end up tripping up every time they use it, both trying to overuse this feature and underusing it, unaware of what it maps, how, when and where.
For example, this record-replay can't do any (again...) arithmetic, logic, branching, and so on. It can only record method calls on the Proxy and replay them on the other side, within simple containers like an object literal.
This is where GraphQL is better because it's an explicit buffer send and an explicit buffer return. The number of roundtrips and what maps how is not hidden.
GraphQL has its own mess of poorly considered features, but I don't think Cap'n Web survives prolonged contact with reality because of how implicit and magical everything is.
When you make an abstraction like this, it needs to work ALL THE TIME, so you don't have to think about it. If it only works in demo examples written by developers who know exactly when the abstraction breaks, real devs won't touch it.
by osigurdson on 9/23/2025, 4:10:16 AM
>> If a class extends the special marker type RpcTarget, then instances of that class are passed by reference, with method calls calling back to the location where the object was created.
This is like .NET Remoting. Suggest resisting the temptation to use this kind of stuff. It gets very hard to reason about what is going on.
by cbarrick on 9/22/2025, 4:33:13 PM
What's going on under the hood with that authentication example?
Is the server holding onto some state in memory that this specific client has already authenticated? Or is the API key somehow stored in the new AuthenticatedSession stub on the client side and included in subsequent requests? Or is it something else entirely?
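The capability-style answer is the first option: the server keeps per-connection state. A toy model of an export table (names invented for illustration; not the real implementation) looks like this:

```javascript
// Toy export table: the server keeps per-connection state mapping stub ids
// to live objects. authenticate() stores an object that closes over the
// authenticated user; later calls reference it by id rather than
// re-sending the API key.
function makeConnection() {
  const exports = new Map(); // stub id -> live object
  let nextId = 1;
  const exportObj = (obj) => {
    const id = nextId++;
    exports.set(id, obj);
    return { stub: id }; // what the client actually receives
  };
  const root = {
    authenticate(apiKey) {
      if (apiKey !== "secret") throw new Error("bad key");
      const user = { name: "alice" };
      // The session object captures `user` in a closure; the client
      // only ever sees the opaque stub id.
      return exportObj({ whoami: () => user.name });
    },
  };
  exports.set(0, root);
  // Dispatch one incoming "message": { target: stubId, method, args }
  return (msg) => exports.get(msg.target)[msg.method](...msg.args);
}

const dispatch = makeConnection();
const session = dispatch({ target: 0, method: "authenticate", args: ["secret"] });
// Subsequent requests carry only the stub id, not the key:
console.log(dispatch({ target: session.stub, method: "whoami", args: [] })); // → alice
```

This is also why the state is tied to the connection: if the connection (and its table) goes away, the stubs it handed out become meaningless.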
by JensRantil on 9/23/2025, 4:58:08 PM
I struggle to see why it would be appealing to use an RPC framework that targets only TypeScript (and, I guess, JavaScript). The point of an RPC framework, for me, is that it should be platform agnostic to allow for reuse and modularity.
by HDThoreaun on 9/22/2025, 3:33:22 PM
This a reference to cap'n jazz?
by meindnoch on 9/22/2025, 9:12:23 PM
What happens if I pass a recursive function to map()?
  let traverse = (dir) => ({
    name: dir.name,
    files: api.ls(dir).map(traverse)  // api.ls() returns [] for files
  });

  let tree = api.ls("/").map(traverse);
by garaetjjte on 9/22/2025, 7:30:50 PM
I think Cap'n Proto plays with web platform pretty nicely too... okay, some might say that my webapp that is mostly written in C++, compiled with Emscripten and talks to server with capnp rpc-over-websocket is in fact not playing nice with web.
by mdasen on 9/22/2025, 7:31:45 PM
This looks really nice. Are there plans to bring support to languages other than JS/TS?
by osigurdson on 9/23/2025, 4:06:05 AM
>> let namePromise = batch.getMyName(); let result = await batch.hello(namePromise);
This is quite interesting. However the abysmal pattern I have seen a number of times is:
  list = getList(...)
  for item in list:
      getItemDetails(item)
Sometimes this is quite hard to undo.
by benpacker on 9/25/2025, 12:27:14 PM
Does this have support for generators / streams, or any ideas how you'd do it? With LLMs, that's an increasingly large part of client/server communication.
by Yoric on 9/22/2025, 6:22:24 PM
That's more or less a dynamically-typed version of what we had with Opalang ~15 years ago and it worked great. Happy to see that someone else has picked the idea of sending capabilities, including server->client calls!
by mrbluecoat on 9/22/2025, 6:29:20 PM
> "Cap'n" is short for "capabilities and"
Learn something new every day
by gwbas1c on 9/23/2025, 1:14:35 PM
I've built similar things in the past. (And apologies for merely skimming the article.)
In general, I worry that frameworks like this could be horribly complex; breaking bugs (in the framework) might not show up until late in your development cycle. This could mean that you end up having to rewrite your product sooner than you would like, or otherwise "the framework gets in the way" and cripples product development.
Some things that worry me:
1: The way that callbacks are passed through RPC. This requires a lot of complexity in the framework to implement.
2: Callbacks passed through RPC implies server-side state. I didn't read in detail how this is implemented; but server-side state always introduces a lot of complexity in code and hosting.
---
Personally, if that much server-side state is involved, I think it makes more sense to operate more like a dumb terminal and do more HTML rendering on the server. I'm a big fan of how server-side Blazor does this, but that does require drinking C# kool-aide. On the other hand, server-side Blazor is very mature, has major backing, and is built into two IDEs.
by pzo on 9/22/2025, 6:30:25 PM
Any idea how it compares to tRPC and oRPC? I wish all such projects had a 'Why' section explaining why it was needed and what it solves that other projects didn't.
by srameshc on 9/23/2025, 12:17:17 AM
If you have ever had to use gRPC on the web, then you will know how painful Protobuf is to make work there. I love the simplicity of Cap'n Web https://capnproto.org/language.html , hopefully this will lead us to better and easier RPC.
Update: Unlike Cap'n Proto, Cap'n Web has no schemas. In fact, it has almost no boilerplate whatsoever. This means it works more like the JavaScript-native RPC system in Cloudflare Workers. https://github.com/cloudflare/capnweb
by yayitswei on 9/22/2025, 8:46:16 PM
One of the authors is the Kenton who built this awesome lan party house: https://lanparty.house/
by tanepiper on 9/22/2025, 6:14:30 PM
Reminds me of dnode (2015) - https://www.npmjs.com/package/dnode
by truth_seeker on 9/22/2025, 9:57:02 PM
JSON Serialization, seriously ???
Why not "application/octet-stream" header and sending ArrayBuffer over the network ?
by nly on 9/22/2025, 7:37:16 PM
Last time I used Cap'n Proto for RPC, I found it an incredibly chatty protocol with tonnes of small memory allocations.
by benmmurphy on 9/22/2025, 5:08:19 PM
Are there security issues with no schemas + callback stubs + a loosely typed language on the server? For example, with this `hello(name)` example the server expects a string, but could the client pass a callback object that is string-like and then use it to try to trick the server into doing something bad?
by spullara on 9/22/2025, 8:36:36 PM
The spiritual descendant of Java RMI.
by isaiahballah on 9/22/2025, 11:41:22 PM
This looks awesome! Does anyone know if this also works in React Native?
by pjmlp on 9/23/2025, 7:04:08 AM
I am going to be that guy: the way REST is used in 99% of project deployments, it is yet another form of RPC, mostly JSON-RPC with a little help from HTTP verbs to save having yet another field for the actual message purpose, all nicely wrapped in language-specific SDKs that look like method/function calls.
by indolering on 9/22/2025, 9:35:31 PM
I find the choice of TypeScript disappointing. One of the reasons that Cap'n Proto has struggled for market share is the lack of implementations.
Is the overhead for calling into WASM too high for a Rust implementation to be feasible?
A Haxe or Dafny implementation would have let us generate libraries in multiple languages from the same source.
by quatonion on 9/22/2025, 8:05:45 PM
Really not a big fan of batteries included opinionated protocols.
Even Cap'n Proto and Protobuf are too much for me.
My particular favorite is this. But then I'm biased coz I wrote it haha.
https://github.com/Foundation42/libtuple
No, but seriously, it has some really nice properties. You can embed JSON-like maps, arrays and S-expressions recursively. It doesn't care.
You can stream it incrementally or use it in a message-framed form.
And the nicest thing is that the encoding is lexicographically sortable.
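For readers unfamiliar with that property, here is a generic illustration (this is not libtuple's actual wire format, just the idea): an encoding is lexicographically sortable when byte-wise comparison of the encoded values gives the same order as comparing the decoded values. Big-endian fixed-width integers have this property; decimal strings don't ("10" sorts before "9"):

```javascript
// An order-preserving encoding: big-endian fixed-width integers sort
// the same way as the numbers they encode, so a byte-compare (as done
// by key-value stores like LMDB or RocksDB) yields numeric order.
function encodeBE(n) {
  const buf = Buffer.alloc(4);
  buf.writeUInt32BE(n);
  return buf;
}

const values = [300, 9, 10, 255];

// Sort by comparing raw encoded bytes...
const byBytes = [...values].sort((a, b) =>
  Buffer.compare(encodeBE(a), encodeBE(b))
);
// ...and by comparing the numbers themselves.
const byNumber = [...values].sort((a, b) => a - b);

console.log(byBytes.join(",") === byNumber.join(",")); // true
```

This is why a sortable encoding is handy beyond serialization: encoded tuples can serve directly as range-scannable database keys.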
by gethly on 9/22/2025, 9:35:49 PM
This looks like something you tinker with during a weekend when you are bored out of your mind.
by stavros on 9/22/2025, 9:22:06 PM
I was really excited when I saw the headline, but I was kind of disappointed to see it doesn't support schemas natively. I know you can specify them via zod, but I'm really not looking forward to any more untyped APIs, given that the lack of strict API typing has been by far the #1 reason for the bugs I've had to deal with in my career.
by _1tem on 9/23/2025, 11:55:38 AM
Brilliantly engineered, but this is solving all the wrong problems. The author implies that this is supposed to be a better GraphQL/REST, but the industry is already moving towards a better solution for that[1]: data sync like ElectricSQL/Turso/litefs/RxDB. If you want to collapse the API boundary between server and client so that it "feels like" the server and client are the same, then sync the relevant data so it actually IS the same. Otherwise DON'T pretend it is the same, because you will have badly leaking abstractions. This looks like it breaks all of the assumptions that programmers have about locally running code. Now every time I make a function call I have to think about how to handle network failures and latency?
What this could've been is a better way to consume external APIs to avoid the SDK boilerplate generation dance. But the primary problems here are access control, potentially malicious clients, and multi-language support, none of which are solved by this system.
In short, if you're working over a network boundary, better keep that explicit. If you want to pretend the network boundary doesn't exist, then let a data sync engine handle the network parts and only write local code. But why would you write code that pretends to be local but is actually over a network boundary? I can't think of a single case where I would want to do that, I'd rather explicitly deal with the network issues so I can clearly see where the boundary is.
[1] https://bytemash.net/posts/i-went-down-the-linear-rabbit-hol...
by paradox460 on 9/22/2025, 11:58:05 PM
Feels a little like how Erlang calls things across different nodes.
by cyberax on 9/22/2025, 6:23:04 PM
ETOOMAGIC for me.
by waynenilsen on 9/22/2025, 9:27:56 PM
Now do the same for ffi
by kiitos on 9/24/2025, 8:35:41 PM
> Why RPC? (And what is RPC anyway?)
RPC is an ambiguous and abstract umbrella term for any request-response communication between two nodes that don't share the same memory space, usually over a network connection, and always via serialized bytes.
> Without RPC, you might communicate using a protocol like HTTP.
HTTP is probably the most common protocol for implementing RPC, indeed
> With HTTP, though, you must format and parse your communications as an HTTP request and response, perhaps designed in REST style.
yep! HTTP/REST is one good option, among many, for doing RPC communications
> RPC systems try to make communications look like a regular function call instead, as if you were calling a library rather than a remote service. The RPC system provides a "stub" object on the client side which stands in for the real server-side object.
Er, no, if you introduce the concept of "objects" you're now talking about something quite different from RPC, more akin to CORBA, which we now know is not sound...
> When a method is called on the stub, the RPC system figures out how to serialize and transmit the parameters to the server, invoke the method on the server, and then transmit the return value back.
I mean, sure, but this is just normal serialization protocol stuff, right?
> The merits of RPC have been subject to a great deal of debate. RPC is often accused of committing many of the fallacies of distributed computing. But this reputation is outdated. When RPC was first invented some 40 years ago, async programming barely existed. We did not have Promises, much less async and await. Early RPC was synchronous: calls would block the calling thread waiting for a reply. At best, latency made the program slow. At worst, network failures would hang or crash the program. No wonder it was deemed "broken". Things are different today. We have Promise and async and await, and we can throw exceptions on network failures. We even understand how RPCs can be pipelined so that a chain of calls takes only one network round trip. Many large distributed systems you likely use every day are built on RPC. It works.
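The pipelining claim in that quoted passage can be sketched without any real network. Below is a simplified illustration (not Cap'n Web's actual protocol): a fake server executes a batch of calls in which an argument may reference an earlier result by index, so two dependent calls cost one round trip instead of two. The method names and the `$ref` convention are invented for this sketch.

```javascript
// Hypothetical server-side API for the sketch.
const serverApi = {
  getUserId: (name) => `id-${name}`,
  getProfile: (id) => ({ id, bio: "hello" }),
};

let roundTrips = 0;

// One batch = one simulated network round trip. Arguments of the form
// { $ref: n } are resolved to the result of the n-th call in the batch,
// which is the essence of promise pipelining.
function serverBatch(calls) {
  roundTrips++;
  const results = [];
  for (const { method, args } of calls) {
    const resolved = args.map((a) =>
      a && a.$ref !== undefined ? results[a.$ref] : a
    );
    results.push(serverApi[method](...resolved));
  }
  return results;
}

// The second call depends on the first, yet both are sent together.
const [, profile] = serverBatch([
  { method: "getUserId", args: ["alice"] },
  { method: "getProfile", args: [{ $ref: 0 }] },
]);

console.log(roundTrips, profile.id); // → 1 id-alice
```

Without pipelining, the client would have to await `getUserId` before it could even construct the `getProfile` request, doubling the round trips.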
What a confusion of terms and concepts! Promises and async/await are language-level concepts, completely orthogonal to "RPC" which is a transport/wire-level concept -- what is happening here!! :o
> The fact is, RPC fits the programming model we're used to. Every programmer is trained to think in terms of APIs composed of function calls, not in terms of byte stream protocols nor even REST. Using RPC frees you from the need to constantly translate between mental models, allowing you to move faster.
i do not think RPC means what this author thinks that it means
The section on how they solved arrays is fascinating and terrifying at the same time https://blog.cloudflare.com/capnweb-javascript-rpc-library/#....
> .map() is special. It does not send JavaScript code to the server, but it does send something like "code", restricted to a domain-specific, non-Turing-complete language. The "code" is a list of instructions that the server should carry out for each member of the array.
> But the application code just specified a JavaScript method. How on Earth could we convert this into the narrow DSL? The answer is record-replay: On the client side, we execute the callback once, passing in a special placeholder value. The parameter behaves like an RPC promise. However, the callback is required to be synchronous, so it cannot actually await this promise. The only thing it can do is use promise pipelining to make pipelined calls. These calls are intercepted by the implementation and recorded as instructions, which can then be sent to the server, where they can be replayed as needed.
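The record-replay trick described there can be sketched in plain JavaScript with a Proxy. This is a deliberately simplified illustration, not Cap'n Web's implementation: it supports only chained method calls, and it assumes the last recorded call is the callback's result (the real library tracks which expression the callback actually returned).

```javascript
// Run the callback once against a Proxy placeholder, recording every
// pipelined method call as an instruction { ref, method, args }, where
// ref -1 means "the array element" and ref n means "result of insn n".
function record(callback) {
  const instructions = [];
  function placeholder(ref) {
    return new Proxy({}, {
      get(_target, method) {
        if (typeof method === "symbol") return undefined;
        return (...args) => {
          const id = instructions.push({ ref, method, args }) - 1;
          return placeholder(id); // each call yields a new placeholder
        };
      },
    });
  }
  callback(placeholder(-1));
  return instructions;
}

// Replay the recorded instructions against a concrete element; this is
// what the "server" would do for each member of the array.
function replay(instructions, element) {
  const results = [];
  const resolve = (ref) => (ref === -1 ? element : results[ref]);
  let last = element;
  for (const { ref, method, args } of instructions) {
    last = resolve(ref)[method](...args);
    results.push(last);
  }
  return last;
}

// Record once on the "client"...
const insns = record((x) => x.trim().toUpperCase());
// ...replay per element on the "server".
const out = [" a ", " b "].map((el) => replay(insns, el));
console.log(out); // → [ 'A', 'B' ]
```

The key constraint from the quoted passage holds here too: because the callback runs synchronously against a placeholder, the only thing it can do is chain calls, which is exactly what makes the recording a safe, non-Turing-complete instruction list rather than arbitrary code.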