Best approach to deploy a multi-containers web app?


I have been working on a web app for a few months and now it's ready for deployment. My frontend and backend live in separate Docker containers (and separate repos as well). I use docker-compose to let the two containers talk to each other and to run nginx in front of them. Now I want to deploy the app to AWS, and I'm considering two approaches but don't know which one is better:

  1. Deploy the 2 containers separately (as 2 different apps), so that it's easier for me to make changes to and maintain each of them; I also read somewhere that this approach is more secure.
  2. Deploy them as a single app for a simpler deployment process, but other than that, I can't really think of anything else in favor of this approach.

I'm obviously leaning toward the first approach, but if anyone could give me more insight into the pros and cons of both approaches, I would highly appreciate it! I'm trying to make this process as professional as possible so I can learn more about DevOps.

CodePudding user response:

So here is what docker-compose does under the hood (a minimal compose sketch follows the list):

  • Creates a Docker network
  • Puts all containers on this network
  • Sets up DNS names, so containers can find each other by their service names
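
For example, a minimal compose file along those lines might look like this (service names, images, and ports are placeholders, not taken from the question):

```yaml
# Hypothetical docker-compose.yml; images and ports are placeholders.
services:
  frontend:
    image: my-frontend:latest     # built from the frontend repo
  backend:
    image: my-backend:latest      # built from the backend repo
    expose:
      - "8000"
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - frontend
      - backend
# All three services land on the same default network, so nginx can
# proxy to http://frontend and http://backend:8000 by service name.
```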

This can also be achieved with ECS (which seems suitable for your use case).

So create an ECS cluster with Fargate as the capacity provider, which lets you run serverless so you don't have to manage EC2 instances yourself.
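
As a rough sketch of what that could look like in CloudFormation (the template and resource names here are assumptions, not something from the question):

```yaml
# Sketch: an ECS cluster that defaults to the Fargate capacity provider.
Resources:
  AppCluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: web-app-cluster
      CapacityProviders:
        - FARGATE
      DefaultCapacityProviderStrategy:
        - CapacityProvider: FARGATE
          Weight: 1
```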

ECS works with task definitions, so you can create a task definition containing your backend and frontend and create a service based on the definition.

All containers defined in one task can talk to each other much like docker-compose services: ECS puts them in the same network, and on Fargate (awsvpc network mode) they share the task's network namespace, so they can reach each other over localhost.
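
Continuing the same hypothetical CloudFormation template, a single task definition holding both containers could look roughly like this (images, ports, and sizes are placeholders; in practice you also need an execution role so Fargate can pull the images and ship logs):

```yaml
  # Sketch: one Fargate task definition containing both containers.
  AppTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: web-app
      RequiresCompatibilities:
        - FARGATE
      NetworkMode: awsvpc      # required for Fargate; containers in the task
                               # share a network namespace (reachable via localhost)
      Cpu: "512"
      Memory: "1024"
      ContainerDefinitions:
        - Name: backend
          Image: my-backend:latest    # replace with your ECR image URI
          PortMappings:
            - ContainerPort: 8000
        - Name: frontend
          Image: my-frontend:latest   # replace with your ECR image URI
          PortMappings:
            - ContainerPort: 3000
```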

Also: if you just want nginx in front of your service for load balancing, an application load balancer may be a better choice.
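
As a hedged sketch of that idea, the ECS service from the same hypothetical template could register the frontend container with an ALB target group (subnets, the security group, and the target group are placeholders assumed to be defined elsewhere):

```yaml
  # Sketch: the service registers the frontend container with an ALB
  # target group instead of running nginx inside the task.
  AppService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref AppCluster
      TaskDefinition: !Ref AppTaskDefinition
      DesiredCount: 2
      # No LaunchType needed: the cluster's default capacity provider
      # strategy (FARGATE) applies.
      NetworkConfiguration:
        AwsvpcConfiguration:
          Subnets:
            - subnet-aaaa1111          # placeholder subnet IDs
            - subnet-bbbb2222
          SecurityGroups:
            - sg-cccc3333              # placeholder security group
          AssignPublicIp: ENABLED
      LoadBalancers:
        - ContainerName: frontend
          ContainerPort: 3000
          TargetGroupArn: !Ref FrontendTargetGroup   # ALB target group defined elsewhere
```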
