• by shykes on 6/19/2025, 11:37:41 PM

    Docker creator here. I love this. In my opinion the ideal design would have been:

    1. No distinction between docker engine and docker registry. Just a single server that can store, transfer and run containers as needed. It would have been a much more robust building block, and would have avoided the regrettable drift between how the engine & registry store images.

    2. push-to-cluster deployment. Every production cluster should have a distributed image store, and pushing images to this store should be what triggers a deployment. The current status quo - push image to registry; configure cluster; individual nodes of the cluster pull from registry - is brittle and inefficient. I advocated for a better design, but the inertia was already too great, and the early Kubernetes community was hostile to any idea coming from Docker.

  • by nine_k on 6/19/2025, 12:04:02 AM

    Nice. And the `pussh` command definitely deserves the distinction of one of the most elegant puns: easy to remember, self-explanatory, and just one letter away from its sister standard command.

  • by richardc323 on 6/19/2025, 8:10:49 PM

    I naively sent the Docker developers a PR[1] to add this functionality into mainline Docker back in 2015. I was rapidly redirected into helping out in other areas - not having to use a registry undermined their business model too much I guess.

    [1]: https://github.com/richardcrichardc/docker2docker

  • by alisonatwork on 6/19/2025, 2:29:54 AM

    This is a cool idea that seems like it would integrate well with systems already using push deploy tooling like Ansible. It also seems like it would work as a good hotfix deployment mechanism at companies where the Docker registry doesn't have 24/7 support.

    Does it integrate cleanly with OCI tooling like buildah etc., or do you need a full-blown Docker install on both ends? I haven't dug deeply into this yet because it's related to some upcoming work, but it seems like bootstrapping a mini registry on the remote server is the missing piece for skopeo to be able to work for this kind of setup.
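
    Skopeo alone can get fairly close when some registry endpoint already exists on the remote side. A minimal sketch, assuming buildah/podman local storage and a plain-HTTP registry reachable at a hypothetical `server.example.com:5000`:

    ```shell
    # Copy an image from local containers-storage (buildah/podman) into a
    # registry on the remote host -- no Docker daemon needed on either end.
    # --dest-tls-verify=false is only appropriate for a plain-HTTP registry
    # on a trusted network.
    skopeo copy \
      containers-storage:localhost/my-app:latest \
      docker://server.example.com:5000/my-app:latest \
      --dest-tls-verify=false
    ```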

  • by metadat on 6/19/2025, 1:07:09 AM

    This should have always been a thing! Brilliant.

    Docker registries have their place, but overall they're over-engineered and antithetical to the hacker mentality.

  • by amne on 6/19/2025, 7:54:12 AM

    Takes a look at the pipeline that builds an image in GitLab, pushes it to Artifactory, triggers a deployment that pulls from Artifactory and pushes to AWS ECR, then updates the deployment template in EKS, which pulls from ECR to the node and boots the pod container.

    I need this in my life.

  • by lxe on 6/19/2025, 12:27:02 AM

    Ooh, this made me discover uncloud. Sounds like exactly what I was looking for. I wanted something like dokku but beefier for a side-project server setup.

  • by modeless on 6/19/2025, 2:38:29 AM

    It's very silly that Docker didn't work this way to start with. Thank you, it looks cool!

  • by scott113341 on 6/19/2025, 1:22:29 AM

    Neat project and approach! I got fed up with expensive registries and ended up self-hosting Zot [1], but this seems way easier for some use cases. Does anyone else wish there was an easy-to-configure, cheap & usage-based, private registry service?

    [1]: https://zotregistry.dev
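
    For reference, Zot's docs show it running as a single container (the tag and port below are illustrative):

    ```shell
    # Run a self-hosted Zot registry listening on port 5000.
    docker run -d -p 5000:5000 ghcr.io/project-zot/zot-linux-amd64:latest
    ```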

  • by matt_kantor on 6/19/2025, 2:47:27 PM

    Functionality-wise this is a lot like docker-pushmi-pullyu[1] (which I wrote), except docker-pushmi-pullyu is a single relatively-simple shell script, and uses the official registry image[2] rather than a custom server implementation.

    @psviderski I'm curious why you implemented your own registry for this, was it just to keep the image as small as possible?

    [1]: https://github.com/mkantor/docker-pushmi-pullyu

    [2]: https://hub.docker.com/_/registry

  • by fellatio on 6/19/2025, 2:41:04 AM

    Neat idea. This probably has the disadvantage of coupling deployment to a service. For example, how do you scale up or do blue/green deployments? (You'd need the thing that does this to be aware of the push.)

    Edit: that thing exists; it's uncloud. Just found out!

    That said it's a tradeoff. If you are small, have one Hetzner VM and are happy with simplicity (and don't mind building images locally) it is great.

  • by revicon on 6/19/2025, 3:29:59 PM

    Is this different from using a remote docker context?

    My workflow in my homelab is to create a remote docker context like this...

    (from my local development machine)

    > docker context create mylinuxserver --docker "host=ssh://revicon@192.168.50.70"

    Then I can do...

    > docker context use mylinuxserver

    > docker compose build

    > docker compose up -d

    And all the images contained in my docker-compose.yml file are built, deployed and running in my remote linux server.

    No fuss, no registry, no extra applications needed.

    Way simpler than using docker swarm, Kubernetes or whatever. Maybe I'm missing something that @psviderski is doing that I don't get with my method.

  • by dboreham on 6/19/2025, 2:37:59 PM

    I like the idea, but I'd want this functionality "unbundled".

    Being able to run a registry server over the local containerd image store is great.

    The details of how some other machine's containerd gets images from that registry are, to me, a separate concern. docker pull will work just fine provided it is given a suitable registry URL and credentials. There are many ways to provide the necessary network connectivity and credential sharing, so I don't want that aspect to be baked in.

    Very slick though.

  • by jokethrowaway on 6/19/2025, 3:05:37 AM

    Very nice! I used to run a private registry on the same server to achieve this - then I moved to building the image on the server itself.

    Both approaches are inferior to yours because of the load on the server (one way or another).

    Personally, I feel like we need to go one step further and just build locally, merge all layers, ship a tar of the entire (micro) distro + app and run it with lxc. Get rid of docker entirely.

    My images are tiny; the extra complexity is unwarranted.

    Then of course I'm not a 1000-person company with 1GB Docker images.
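
    The "merge all layers, ship a tar" step can be approximated with stock Docker today (a sketch; the image name is a placeholder):

    ```shell
    # docker export dumps a container's filesystem as one flat tar,
    # discarding layer structure -- usable as an LXC rootfs.
    CID=$(docker create my-app:latest)
    docker export "$CID" > rootfs.tar
    docker rm "$CID"
    ```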

  • by actinium226 on 6/19/2025, 12:31:25 AM

    This is excellent. I've been doing the save/load and it works fine for me, but I like the idea that this only transfers missing layers.

    FWIW I've been saving then using mscp to transfer the file. It basically does multiple scp connections to speed it up and it works great.

  • by bradly on 6/18/2025, 11:59:47 PM

    As a long ago fan of chef-solo, this is really cool.

    Currently, I need to use a Docker registry for my Kamal deployments. Are you familiar with it, and would this remove the 3rd-party dependency?

  • by sushidev on 6/20/2025, 8:46:49 AM

    I've prepared a quick one using reverse port forwarding and a local temp registry. In case anyone finds it useful:

      #!/bin/bash
      set -euo pipefail
      
      IMAGE_NAME="my-app"
      IMAGE_TAG="latest"
      
      # A temporary Docker registry that runs on your local machine during deployment.
      LOCAL_REGISTRY="localhost:5000"
      REMOTE_IMAGE_NAME="${LOCAL_REGISTRY}/${IMAGE_NAME}:${IMAGE_TAG}"
      REGISTRY_CONTAINER_NAME="temp-deploy-registry"
      
      # SSH connection details.
      # The jump host is an intermediary server. Remove `-J "${JUMP_HOST}"` if not needed.
      JUMP_HOST="user@jump-host.example.com"
      PROD_HOST="user@production-server.internal"
      PROD_PORT="22" # Standard SSH port
      
      # --- Script Logic ---
      
      # Cleanup function to remove the temporary registry container on exit.
      cleanup() {
          echo "Cleaning up temporary Docker registry container..."
          docker stop "${REGISTRY_CONTAINER_NAME}" >/dev/null 2>&1 || true
          docker rm "${REGISTRY_CONTAINER_NAME}" >/dev/null 2>&1 || true
          echo "Cleanup complete."
      }
      
      # Run cleanup on any script exit.
      trap cleanup EXIT
      
      # Start the temporary Docker registry.
      echo "Starting temporary Docker registry..."
      docker run -d -p 5000:5000 --name "${REGISTRY_CONTAINER_NAME}" registry:2
      sleep 3 # Give the registry a moment to start.
      
      # Step 1: Tag and push the image to the local registry.
      echo "Tagging and pushing image to local registry..."
      docker tag "${IMAGE_NAME}:${IMAGE_TAG}" "${REMOTE_IMAGE_NAME}"
      docker push "${REMOTE_IMAGE_NAME}"
      
      # Step 2: Connect to the production server and deploy.
      # The `-R` flag creates a reverse SSH tunnel, allowing the remote host
      # to connect back to `localhost:5000` on your machine.
      echo "Executing deployment command on production server..."
      ssh -J "${JUMP_HOST}" "${PROD_HOST}" -p "${PROD_PORT}" -R 5000:localhost:5000 \
        "docker pull ${REMOTE_IMAGE_NAME} && \
         docker tag ${REMOTE_IMAGE_NAME} ${IMAGE_NAME}:${IMAGE_TAG} && \
         systemctl restart ${IMAGE_NAME} && \
         docker system prune --force"
      
      echo "Deployment finished successfully."

  • by larsnystrom on 6/19/2025, 6:49:10 AM

    Nice to only have to push the layers that changed. For me it's been enough to just do "docker save my-image | ssh host 'docker load'" but I don't push images very often so for me it's fine to push all layers every time.
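
    For what it's worth, compressing the stream helps when the tarball is large; docker load auto-detects gzipped input, so a variant like this should work (host is hypothetical):

    ```shell
    # Compress the image tarball in transit; docker load detects gzip input.
    docker save my-image:latest | gzip | ssh deploy@example.com docker load
    ```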

  • by layoric on 6/19/2025, 2:09:07 AM

    I'm so glad there are tools like this and swing back to selfhosted solutions, especially leveraging SSH tooling. Well done and thanks for sharing, will definitely be giving it a spin.

  • by koakuma-chan on 6/18/2025, 11:44:01 PM

    This is really cool. Do you support or plan to support docker compose?

  • by MotiBanana on 6/19/2025, 5:17:17 AM

    I've been using ttl.sh for a long time, but only for public, temporary code. This is a really cool idea!

  • by esafak on 6/19/2025, 12:48:02 AM

    You can do these image acrobatics with the dagger shell too, but I don't have enough experience with it to give you the incantation: https://docs.dagger.io/features/shell/

  • by nothrabannosir on 6/18/2025, 11:53:46 PM

    What’s the difference between this and skopeo? Is it the ssh support ? I’m not super familiar with skopeo forgive my ignorance

    https://github.com/containers/skopeo

  • by mountainriver on 6/19/2025, 3:01:05 AM

    I’ve wanted unregistry for a long time, thanks so much for the awesome work!

  • by iw7tdb2kqo9 on 6/19/2025, 6:59:59 AM

    I think it will be a good fit for me. Currently our 3GB Docker image takes a lot of time to push to the GitHub package registry from GitHub Actions and to pull down to EC2.

  • by yjftsjthsd-h on 6/19/2025, 12:48:49 AM

    What is the container for / what does this do that `docker save some:img | ssh wherever docker load` doesn't? More efficient handling of layers or something?

  • by dzonga on 6/18/2025, 11:59:27 PM

    This is nice; hopefully DHH and the folks working on Kamal adopt it.

    The whole reason I didn't end up using Kamal was the "need a Docker registry" thing, when I can easily push a Dockerfile / compose file to my VPS, build an image there, and restart to deploy via a make command.

  • by rcarmo on 6/19/2025, 10:23:51 AM

    I think this is great and have long wondered why it wasn’t an out of the box feature in Docker itself.

  • by quantadev on 6/19/2025, 3:54:53 AM

    I always just use "docker save" to generate a TAR file, then copy the TAR file to the server, and then run "docker load" (on the server) to install the TAR file on the target machine.

  • by armx40 on 6/19/2025, 12:06:47 AM

    How about using docker context. I use that a lot and works nicely.

  • by remram on 6/19/2025, 12:43:44 AM

    Does it start an unregistry container on the remote/receiving end or the local/sending end? I think it runs remotely. I wonder if you could go the other way instead?

  • by spwa4 on 6/19/2025, 11:19:21 AM

    THANK you. Can you do the same for kubernetes somehow?

  • by cultureulterior on 6/19/2025, 4:00:28 AM

    This is super slick. I really wish there was something that did the same, but using torrent protocol, so all your servers shared it.

  • by hoppp on 6/19/2025, 11:32:50 AM

    Oh, this is great. It's a problem I also have.

  • by victorbjorklund on 6/19/2025, 8:08:50 AM

    Sweet. I've been wanting this for a long time.

  • by alibarber on 6/19/2025, 9:26:02 AM

    This is timely for me!

    I personally run a small instance with Hetzner that has K3s running. I'm quite familiar with K8s from my day job so it is nice when I want to do a personal project to be able to just use similar tools.

    I have a Macbook and, for some reason I really dislike the idea of running docker (or podman, etc) on it. Now of course I could have GitHub actions building the project and pushing it to a registry, then pull that to the server, but it's another step between code and server that I wanted to avoid.

    Fortunately, it's trivial to sync the code to a pod over kubectl, and have podman build it there - but the registry (the step from pod to cluster) was the missing step, and it infuriated me that even with save/load, so much was going to be duplicated, on the same effective VM. I'll need to give this a try, and it's inspired me to create some dev automation and share it.

    Of course, this is all overkill for hobby apps, but it's a hobby and I can do it the way I like, and it's nice to see others also coming up with interesting approaches.

  • by peyloride on 6/19/2025, 6:20:34 AM

    This is awesome, thanks!

  • by s1mplicissimus on 6/18/2025, 11:59:26 PM

    Very cool. Now let's integrate this such that we can do `docker/podman push localimage:localtag ssh://hostname:port/remoteimage:remotetag` without extra software installed :)

  • by czhu12 on 6/19/2025, 4:13:15 AM

    Does this work with Kubernetes image pulls?

  • by bflesch on 6/19/2025, 7:34:45 AM

    This is useful. Thanks for sharing.

  • by isaacvando on 6/19/2025, 1:29:04 AM

    Love it!

  • by politelemon on 6/19/2025, 6:20:01 AM

    Considering the nature of servers, security boundaries and hardening,

    > Linux via Homebrew

    Please don't encourage this on Linux. It happens to offer a Linux setup as an afterthought but behaves like a pigeon on a chessboard rather than a package manager.

  • by jdsleppy on 6/19/2025, 11:22:00 AM

    I've been very happy doing this:

    DOCKER_HOST="ssh://user@remotehost" docker-compose up -d

    It works with plain docker, too. Another user is getting at the same idea when they mention docker contexts, which is just a different way to set the variable.

    Did you know about this approach? In the snippet above, the image is built on the remote machine and then run. The build context (files) is sent over the wire as needed, and subsequent runs use the remote machine's Docker cache. It's slightly different from your approach of building locally, but much simpler.

  • by jlhawn on 6/18/2025, 11:52:31 PM

    A quick and dirty version:

        docker -H host1 image save IMAGE | docker -H host2 image load
    
    note: this isn't efficient at all (no compression or layer caching)!

  • by Aaargh20318 on 6/19/2025, 1:35:14 PM

    I simply use "docker save <imagename>:<version> | ssh <remoteserver> docker load"

  • by ajd555 on 6/19/2025, 1:34:07 PM

    This is great! I wonder how well it works in a disaster-recovery scenario, though. Perhaps it's not intended for production environments with strict SLAs and uptime requirements, but if you have 20 servers in a cluster that you're migrating to another region or even cloud provider, the pull approach from a registry seems like the safest and most scalable one.