• by extr on 6/30/2023, 12:47:43 AM

    Very interesting. I recently implemented a baby version of this kind of system at work. Similar to this project, our basic use case was letting researchers quickly and easily execute their arbitrary R&D code on cloud resources. It's difficult to know in advance what they might be doing, and we wanted to avoid a situation where they have to push a Docker container or submit a file every time they change something. So we made it possible for them to "just" ship a single class/function without leaving their local interactive environment.

    I see from looking at the source here that Runhouse uses the same approach of cloudpickling the function. That works, but one struggle we're having is that it's quite brittle. It's all gravy as long as everyone is operating in perfectly fresh environments that mirror the cluster, but this is rarely the case. Even subtle differences in the local execution environment can produce segfaults when the code runs on the server, which are very hard to debug. The code here looks a lot more mature, so I'm assuming it's more robust than what we have. But I'd be curious whether the developers have run into similar challenges.
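    The ship-a-function pattern and its version coupling can be sketched with a stdlib stand-in (marshal in place of cloudpickle; the function and payload names are hypothetical, and marshal is only an analogy for how cloudpickle serializes bytecode by value):

    ```python
    import marshal
    import types

    def train_step(x):
        import math  # dependencies must travel with the function body
        return int(math.pow(x, 2))

    # "Client" side: serialize the raw code object, roughly what
    # serializing a function "by value" means.
    payload = marshal.dumps(train_step.__code__)

    # "Server" side: rebuild a callable from the bytes. marshal's format
    # is tied to the interpreter version, which mirrors the brittleness
    # described above: a worker running a different Python can fail to
    # load the payload, or load bytecode that misbehaves.
    code = marshal.loads(payload)
    remote_fn = types.FunctionType(code, globals())
    print(remote_fn(4))  # 16
    ```

    The failure mode in the mismatched case is the same in spirit: the payload encodes assumptions about the sender's interpreter and libraries that the receiver may not satisfy.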

  • by cbarrick on 6/30/2023, 2:12:21 AM

    > Just as PyTorch lets you send a model .to("cuda"), Runhouse enables hardware heterogeneity by letting you send your code (or dataset, environment, pipeline, etc) .to(“cloud_instance”, “on_prem”, “data_store”...), all from inside a Python notebook or script. There’s no need to manually move the code and data around, package into docker containers, or translate into a pipeline DAG.

    From an SRE perspective, this sounds like a nightmare. Controlled releases are really important for reliability. I definitely don't want my devs doing manual rollouts from a notebook.

  • by m_ke on 6/30/2023, 3:57:04 AM

    Since people are suggesting alternatives, I'd like to shoutout skypilot: https://github.com/skypilot-org/skypilot

    EDIT: looks like this actually uses it under the hood: https://github.com/run-house/runhouse/blob/main/requirements...

  • by voz_ on 6/30/2023, 1:14:39 AM

    This is a cool approach. I really like the notion of small, powerful components that compose well together. ML infra is sorely missing this piece. I wish you the best of luck!

  • by guluarte on 6/30/2023, 12:11:44 AM

    Sounds similar to https://dstack.ai/docs/

  • by ipsum2 on 6/30/2023, 10:38:03 PM

    > Please make sure the function does not rely on any local variables, including imports (which should be moved inside the function body)

    This seems like a major limitation and pretty antithetical to the PyTorch approach.
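    The constraint quoted above can be illustrated with a hypothetical sketch (the function names and `run_remotely` helper are made up; it simulates a fresh worker by rebuilding the function with empty globals, roughly what a deserializer on a remote machine does):

    ```python
    import builtins
    import types

    def bad_remote_fn(xs):
        # Relies on a module-level `import statistics` on the client;
        # that global binding does not travel with the function.
        return statistics.mean(xs)

    def good_remote_fn(xs):
        import statistics  # re-imported inside, so it resolves anywhere
        return statistics.mean(xs)

    def run_remotely(fn, arg):
        # Rebuild the function with empty globals, as if it had been
        # deserialized in a fresh process on another machine.
        clone = types.FunctionType(fn.__code__, {"__builtins__": builtins})
        return clone(arg)

    print(run_remotely(good_remote_fn, [1, 2, 3]))  # 2
    try:
        run_remotely(bad_remote_fn, [1, 2, 3])
    except NameError as e:
        print("fails remotely:", e)  # name 'statistics' is not defined
    ```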

  • by chenzhekl on 6/30/2023, 2:51:15 AM

    How does Runhouse compare with Ray, which also simplifies distributed computing?

  • by pavelstoev on 6/30/2023, 2:56:52 AM

    Have you tried Hidet? https://pypi.org/project/hidet/

  • by nologic01 on 6/29/2023, 11:49:09 PM

    How would you position this vs the Modular/Mojo approach, which aims to relieve similar pain points?