• by ruuda on 3/10/2025, 6:20:22 AM

    Woah, so much overcomplication in the comments!

    If you want to run multiple applications, how about ... just running them? It sounds like you already do that, so what is the real problem that you are trying to solve?

    If it's annoying to start them by hand one by one, you could use Foreman or Overmind to start them with a single command, and interleave their output in a terminal.
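    With Overmind or Foreman, for example, a `Procfile` with one line per process is all it takes (the process names, commands, and ports below are made up for illustration):

    ```
    # Procfile — one line per process; commands and ports are examples
    web: npm run dev -- --port 3000
    api: ./api-server --port 4000
    worker: ./worker
    ```

    Then `overmind start` (or `foreman start`) launches everything and interleaves the output in one terminal.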

  • by lelanthran on 3/10/2025, 2:44:16 PM

    What a bunch of over engineered solutions.

    I run multiple side projects on my Linux desktop using both PostgreSQL and MySQL, and host entries work well enough.

    For HTTP, using entries in hosts file locally allows the clients to all connect to port 80 on the local machine all using different domain names, and nginx can proxy the connection to whatever port the backend is running on.
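    A minimal sketch of that setup, with made-up hostnames and ports:

    ```
    # /etc/hosts
    127.0.0.1  projecta.local projectb.local
    ```

    ```
    # nginx: one server block per hostname, proxying to that app's port
    server {
        listen 80;
        server_name projecta.local;
        location / { proxy_pass http://127.0.0.1:3000; }
    }
    server {
        listen 80;
        server_name projectb.local;
        location / { proxy_pass http://127.0.0.1:4000; }
    }
    ```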

  • by jasonkester on 3/10/2025, 6:11:59 AM

    Everybody loves their random port numbers these days, but I still prefer custom hostnames.

    Just chuck an entry into your hosts file and tell your web server to sniff for it and you’re done. Run your stuff on port 80 like nature intended and never have to teach either end of your app how to deal with colons and numbers in domain names.

    And you get to skip the six paragraphs of pain the other commenters are describing, orchestrating their kubernetes and whatnot.

    e.g.: http://dev.whatever/

  • by SkyPuncher on 3/10/2025, 5:30:40 AM

    Different ports.

    Don't add complexity unless it's strictly necessary.

  • by neilv on 3/10/2025, 6:07:40 AM

    Using Kubernetes can be good for your resume.

    What I usually do is to use different ports on my workstation. So I can get the fastest iteration, by keeping things simple. Be careful to keep the ports straight, though.

    You can put the port numbers and other installation-specific files in a `.env` file, application-specific config file, or a supplemental build system config file, that aren't checked into Git.
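    For example, a `.env` file kept out of Git might pin each installation's ports (the variable names here are illustrative):

    ```
    # .env — not committed; values vary per machine
    PORT=3001
    DB_PORT=5433
    API_BASE_URL=http://localhost:3002
    ```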

    One way I did this was to have a very powerful `Makefile` that could not only build a complicated system and perform many administrative functions, of a tricky deployment, but also work in both development and production. That `Makefile` pulled in `Makefile-options`, which had all the installation-specific variables, including ports. Other development config files, including HTTPS certificates, were generated automatically based on that. Change `Makefile-options` and everything depending on any of those was rebuilt. When you ran `make` without a `Makefile-options` file, it generated one, with a template of the variables you could set.
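    A minimal sketch of that generate-if-missing pattern (the variable names and commands are invented, not the original `Makefile`):

    ```
    include Makefile-options        # per-machine settings, not in Git

    # First run: generate a template the developer can then edit.
    Makefile-options:
    	printf 'HTTP_PORT ?= 8080\nDB_PORT ?= 5432\n' > $@

    serve:
    	./app --port $(HTTP_PORT)
    ```

    GNU make will notice the missing include, build it from the rule, and restart, so a plain `make` on a fresh machine bootstraps the options file.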

    Today I'd consider using Kubernetes for that last example, but we were crazy-productive without it, and production worked fine.

  • by xlii on 3/10/2025, 6:05:36 AM

    Containers.

    I’ll also recommend commercial OrbStack for Mac, because it simplifies some configuration and is performant.

    That was my focus over the last couple of months (right now, for a customer solution, I'm running tens of isolated clusters of heterogeneous services with custom network configurations).

    I’ve tried nix, local Caddy/Haproxy, Direnvs, Devenvs, dozens of task file runners, DAG runners etc.

    Kubernetes is fine, but it's a domain in its own right, and you'll end up troubleshooting plenty of things instead of doing the work itself.

    The simplest solution I would recommend is a task runner (Justfile/Taskfile) with containers (either built or with volumes attached - this prevents secrets leakage). Pay special attention to artifact leakage and clone volumes instead of mutating them.

    I don’t recommend Docker Compose because it has a low barrier to entry but a high ceiling, and it takes a long time to back out of it.

    For simple clusters (5-8 containers) it’s working well. If you need to level up, my personal recommendation would be:

    - Go, for a purely programmatic experience (I’ve tested a bunch of API clients, and IMO it’s less time to learn Go than to troubleshoot and backfill missing control-flow features) - there’s also Magefile for simplified flows

    - Full Kubernetes with a templating language (avoid YAML like the plague)

    - Cuelang, if you want full reliability (but it’s difficult to understand, and its documentation is some of the worst I’ve ever read through).

  • by gabesullice on 3/10/2025, 7:08:49 AM

    https://ddev.com/ has become standard in the circles I run in (most are web devs working in agencies touching multiple projects each week). You don't have to use DDEV specifically, but it works like a dream and may provide some inspiration.

    Each project gets its own Docker Compose file. These allow you to set up whatever project specific services and configuration you need.

    None of your projects need to expose a port. Instead each project gets a unique name like `fooproject` and `barproject` and the container listening to port 80 is named {project-name}-web.

    It all gets tied together by a single global NGINX/Traefik/Caddy container (you choose) that exposes port 80 and 443 and reverse proxies to each project's web container using Docker's internal hostnames. In pseudo-code:

      https://fooproject.example.site 
      {
        reverse_proxy fooproject-web:80
      }
    
      https://barproject.example.site 
      {
        reverse_proxy barproject-web:80
      }
    
    The final piece of the puzzle is that the maintainer of DDEV set up a wildcard DNS A record that resolves every subdomain to loopback:

      *.ddev.site.  A  127.0.0.1
    
    You could do something similar yourself using your own domain or locally with DNSMasq.
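    With dnsmasq, for instance, a single line resolves an entire local development TLD to loopback (the `.test` TLD is an arbitrary choice):

    ```
    # dnsmasq.conf
    address=/test/127.0.0.1
    ```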

    It may seem overcomplicated (and it is complicated). But since it's a de-facto standard and well-maintained, that maintenance burden is amortized over all the users and projects. (To the naysayers, consider: PopOS/Ubuntu are quite complicated, but they're far easier to use for most people than a simpler hand-rolled OS with only the essentials.)

  • by necovek on 3/10/2025, 6:12:48 AM

    I prefer setting up services that bind to port 0 ("get me an unprivileged port"), report that back, and use that to auto-configure dependent services.

    This allows local development and debugging for fast iterations and quick feedback loop. But, it also allows for multiple branches of the same project to be tested locally at the same time!

    Yeah, git does not make that seamless, so I'd have multiple shallow clones to allow me to review a branch without stopping work on my own branch(es).
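    The bind-to-port-0 trick can be sketched from a shell, using python3 only as a portable way to ask the kernel for a free port (the service it would configure is hypothetical):

    ```shell
    # Ask the kernel for a currently-free port by binding port 0
    PORT=$(python3 -c 'import socket; s = socket.socket(); s.bind(("", 0)); print(s.getsockname()[1])')
    echo "service would listen on $PORT"
    ```

    Note the race: nothing holds the port between this probe and the real bind, which is why the approach above has the service itself bind port 0 and report the result back.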

  • by PaulHoule on 3/7/2025, 12:32:12 AM

    Before there was Docker and Kubernetes I used to run hundreds of web sites on a single server by being disciplined about where files go, how database connections are configured, etc. I still do.

  • by djood on 3/10/2025, 6:19:02 AM

    I would say docker-compose with traefik is definitely the easiest! You can even set dependencies between services to ensure that they load in the right order, do networking, etc.

    If you're interested in running locally, a solution like kubernetes seems slightly overkill, but it can be fun to mess with for sure!

  • by renewiltord on 3/10/2025, 5:36:53 AM

    Kube slows down the iteration cycle. I use a bash script that encodes the port information. It's no big deal if you repeat this boilerplate code; an LLM can apply a change to all of it simultaneously.

    Simple `bin/restart`
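    A sketch of what such a script might contain (the port number and the server command are placeholders, not from the comment):

    ```shell
    #!/usr/bin/env bash
    # bin/restart — per-project boilerplate; PORT and the server command
    # are placeholders
    set -euo pipefail
    PORT=4001
    # free the port if a previous instance is still holding it
    fuser -k "${PORT}/tcp" 2>/dev/null || true
    echo "starting on port ${PORT}"
    # exec ./server --port "${PORT}"
    ```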

    K3s is good. Kube is also good. But for local development you want to isolate to the code and have rapid cycle time on features. Use `mise` with a simple run script. If deploying to k3s, use Docker (with OrbStack on Mac) and a simple run script.

    LLMs being bad at automatically debugging environments means you spend even more time on a low-leverage task. Avoid that at all costs. Small problem => small solution.

  • by victorNicollet on 3/10/2025, 8:20:12 AM

    We use the following steps:

    - Each service listens on a different, fixed port (as others have recommended).

    - Have a single command (incrementally) build and then run each service, completely equivalent to running it from your IDE. In our case, `dotnet run` does this out of the box.

    - The above step is much easier if services load their configuration from files, as opposed to environment variables. The main configuration files are in source control; they never contain secrets, instead they contain secret identifiers that are used to load secrets from a secret store. In our case, those are `appsettings.json` files and the secret store is Azure KeyVault.

    - An additional optional configuration file for each application is outside source control, in a standard location that is the same on every development machine (such as /etc/companyname/). This lets us have "personal" configuration that applies regardless of whether the service is launched from the IDE or the command-line. In particular, when services need to communicate with each other, it lets us configure whether service A should use a localhost address for service B, or a testing cluster address, or a production cluster address.

    - We have a simple GUI application that lists all services. For each service it has a "Run" button that launches it with the command-line script, and a checkbox that means "other local services should expect this one to be running on localhost". This makes it very simple to, say, check three boxes, run two of them from the GUI, and run the third service from the IDE (to have debugging enabled).

  • by smjburton on 3/10/2025, 5:45:14 PM

    Docker-Compose is likely the simplest solution for your needs (assuming you have some familiarity with Docker).

    Your docker-compose.yml file would look something like this:

    ```
    ---
    services:
      frontend:
        image: localhost/frontend:latest
        ports:
          - 6767:6767
        environment:
          - ...
        volumes:
          - /home/user/dev/app/frontend:/app/frontend
        restart: unless-stopped

      backend-1:
        image: localhost/backend-1:latest
        ports:
          - 7878:7878
        environment:
          - ...
        volumes:
          - /home/user/dev/app/backend-1:/app/backend-1
        restart: unless-stopped

      backend-2:
        image: localhost/backend-2:latest
        ports:
          - 8989:8989
        environment:
          - ...
        volumes:
          - /home/user/dev/app/backend-2:/app/backend-2
        restart: unless-stopped
    ```

    This example even includes volume mounts from your local app development folder (update the paths to suit your setup) so that any changes are immediately reflected in the running frontend or backend containers when you save your files.

    You would bring all of your services up with this command:

      docker-compose up -d

    And bring them down again with:

      docker-compose down

  • by imwally on 3/10/2025, 5:51:41 AM

    If you want to go the k8s route, it’s worth checking out https://tilt.dev/.

  • by petermetz on 3/7/2025, 12:52:03 AM

    I found that Caddy will take you a very long way if you just want virtual hosting and reverse proxying. YMMV

  • by j45 on 3/10/2025, 6:08:16 AM

    Docker is probably simplest. Start with one machine, with the services in it, on different ports to keep it portable. As you confirm the fit and finish, it can inform how to further architect it.

    Docker Swarm can be a little heavy for small projects; there's a tool called Spin that seems to be a relatively lightweight Docker orchestrator and has been pretty handy.

    https://serversideup.net/open-source/spin/

    Kubernetes, etc., are nice too, just a little heavier and more complex than a simple set of tools needs in the beginning. I understand it's a matter of interpretation and preference; most of the time, if I'm solving a problem, I want as much of it as possible to be invisible.

  • by cess11 on 3/10/2025, 6:32:57 AM

    Do you actually need to just run them, or do you need to bring them up and down through automation?

    If you just need to have them running, nginx and /etc/hosts are your friends. You can script those as well: put a magic comment or two in hosts, split on them to change the text in between programmatically, and add a sites-available entry that maps each port to a name in hosts.
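    For instance, a managed block between two magic comments can be regenerated with sed (the file path, marker text, and hostname here are all illustrative; the real target would be /etc/hosts):

    ```shell
    HOSTS=hosts.sample
    printf '127.0.0.1 localhost\n# DEV-BEGIN\n# DEV-END\n' > "$HOSTS"
    # drop the old entries between the markers, keeping the markers themselves
    sed -i '/# DEV-BEGIN/,/# DEV-END/{/# DEV-BEGIN/!{/# DEV-END/!d}}' "$HOSTS"
    # write the current project list after the opening marker
    sed -i '/# DEV-BEGIN/a 127.0.0.1 projecta.local' "$HOSTS"
    ```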

    If you need to spin up, tear down, spin up, tear down, then containers are a more convenient option. Name your containers to make them discoverable, and don't get bogged down in kubectl or similar tooling.

  • by junto on 3/10/2025, 7:08:57 AM

    If you’re happy enough to write a little bit of C# and install the .NET SDK then try Aspire.

    Here’s an example of orchestrating a Node.js app and a Redis dependency.

    https://github.com/dotnet/aspire-samples/blob/main/samples/A...

  • by czhu12 on 3/10/2025, 6:40:07 AM

    I’m the developer of https://canine.sh — I’d humbly ask you to take a look! It’s basically built to solve exactly this problem, and it’s totally free + open source.

    The backend uses kubernetes, and the frontend integrates with GitHub, dockerhub, etc, for doing builds and deploys.

    I’ve been able to host 2 Rails apps, 1 postgres and 1 redis on a $10/month Hetzner VPS.

  • by tbrownaw on 3/10/2025, 7:04:31 AM

    Visual Studio gives each project a (static, included in what gets committed to version control somewhere) random port.

    For stuff I'm playing with at home, I'll either pick a port or put it in k3s. Half the reason for the latter is as an excuse to practice with Kubernetes, since it seems like a good thing to learn, both in general and for some specific things at work it would probably be a good match for.

  • by rossng on 3/10/2025, 6:53:17 AM

    My suggestion would be Process Compose[1]. Just write a simple config telling it what commands to run; it will launch them all and give you a nice TUI to read the stdout and control each process individually.

    [1] https://github.com/F1bonacc1/process-compose
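    A minimal config for it might look like this (the process names and commands are made-up examples):

    ```
    # process-compose.yaml — names and commands are illustrative
    processes:
      frontend:
        command: "npm run dev"
      backend:
        command: "./api-server --port 4000"
    ```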

  • by rglover on 3/10/2025, 6:34:05 AM

    If you want to avoid containers/complexity, you could just have a shell script (or choose your pet language) that starts the apps via a single command on your machine (e.g. $> start-my-app).

    That would handle the actual startup on your specified ports (it could even randomly discover open ports and use those if you don't care).

  • by beryilma on 3/10/2025, 9:34:15 PM

    Github, nodejs, and PM2.

    I can deploy to my local machine AND to EC2 with a single deploy command using PM2 configuration facilities. This works quite well for side projects.

    I also have nginx and postgres running in both locations with local (to the server) database connections, without opening unnecessary ports to the world.

  • by breadchris on 3/10/2025, 7:08:13 AM

    Not answering the OP's question, but an alternative approach to multi-service development: I have been fixated on the idea of a single process that runs all my code until an optimization is desperately needed. IMO, adding N processes to development slows down iteration by N^2: instead of reliably having one place to check for an error, it could be any of the services, and keeping every team member's context on where classes of problems could live is challenging. There may be a compelling reason a multi-service architecture is needed (different languages, legacy code), but I personally weight development iteration heavily in my decision making. Slowing down experiments to test hypotheses is the death of productivity and morale in a codebase.

  • by oulipo on 3/10/2025, 6:10:13 AM

    I'm using Pulumi to generate a Docker-compose on-the-fly, so that I can reconfigure the services I want to have up, and whether to run them / discard them / mock them

    eg "when I want to debug the app, I want to mock the authentication server, and run the API" etc

  • by hedgehog on 3/10/2025, 5:53:07 AM

    Docker compose, possibly with VS Code dev containers if you're a VS Code user.

  • by AJRF on 3/10/2025, 6:47:39 AM

    A Makefile at the root of your code directory with a target per project for building and running, plus a command that runs them all.

    Using Kubernetes for your use case is like buying a Bugatti as your first car.

  • by woutr_be on 3/10/2025, 6:29:21 AM

    I constantly run multiple projects without issues. Usually it’s both front- and back-end. I just assign random ports.

    Same goes for databases, I just map it to a different port.

  • by thedookmaster on 3/10/2025, 7:05:29 AM

  • by lars512 on 3/10/2025, 6:51:19 AM

    To throw yet another option in, you could consider an LXC container per project, if they’re small and you don’t find you need Docker. LXC containers are basically multiprocess containers, unlike Docker’s single process containers, making them feel more like VMs and giving you a great dev experience.

  • by coolgoose on 3/10/2025, 5:43:22 AM

    docker compose - technically other clients support the compose format as well.

  • by ThrowawayTestr on 3/10/2025, 6:26:51 AM

    Buy mini computers and learn to manage a cluster.

  • by chrisgoman on 3/11/2025, 4:24:24 AM

    Vagrant