Best practice for multiple containers (Django, nginx) on Fargate


I have multiple containers/images, such as admin (Django) and nginx (HTTP server).

My setup looks like this:

port 80 -> nginx -> port 8011 -> admin

I want to deploy these on Fargate.

However, I'm still confused.

Two images, two containers, two task definitions, two load balancers: that much is OK.

However, how do I give a public IP only to nginx?

And how can I connect the containers to each other?

Currently my source code looks like this:

I am familiar with docker-compose, but not with AWS Fargate.

Any help is appreciated.

// (ec2 / ecs / ecr / elb below are the usual CDK module aliases for aws-ec2,
// aws-ecs, aws-ecr and aws-elasticloadbalancingv2.)
const VPCID = 'vpc-0867d6797e62XXXXX';
const vpc = ec2.Vpc.fromLookup(this, "VPC", {
  vpcId:VPCID
  //isDefault: true,
});

const cluster = new ecs.Cluster(this, "SampleCluster", {
  vpc:vpc,
  clusterName: "StAdminNginxCluster"
});

const adminRepo = ecr.Repository.fromRepositoryArn(this, 'AdminRepository', 'arn:aws:ecr:ap-northeast-1:678100XXXXXX:repository/st_admin_site');
const nginxRepo = ecr.Repository.fromRepositoryArn(this, 'NginxRepository', 'arn:aws:ecr:ap-northeast-1:678100XXXXXX:repository/st_nginx');

const adminImage = ecs.ContainerImage.fromEcrRepository(adminRepo,"latest");
const nginxImage = ecs.ContainerImage.fromEcrRepository(nginxRepo,"latest");

// Task definitions
const taskDefinitionAdmin = new ecs.FargateTaskDefinition(this, "TaskDefAdmin", {
  memoryLimitMiB: 512,
  cpu: 256,
});
const taskDefinitionNginx = new ecs.FargateTaskDefinition(this, "TaskDefNginx",{
  memoryLimitMiB: 512,
  cpu: 256,
});
const adminContainer = taskDefinitionAdmin.addContainer("AdminContainer", {
  image: adminImage,
});
const nginxContainer = taskDefinitionNginx.addContainer("NginxContainer", {
  image: nginxImage,
});

adminContainer.addPortMappings({
  containerPort: 8011
});
nginxContainer.addPortMappings({
  containerPort: 80
});

const ecsServiceAdmin = new ecs.FargateService(this, "ServiceAdmin", {
  cluster,
  taskDefinition:taskDefinitionAdmin,
  desiredCount: 2
});
const ecsServiceNginx = new ecs.FargateService(this, "ServiceNginx", {
  cluster,
  taskDefinition:taskDefinitionNginx,
  desiredCount: 2
});

const lbAdmin = new elb.ApplicationLoadBalancer(this, "LBAdmin", {
  vpc: cluster.vpc,
  internetFacing: true
});
const listenerAdmin = lbAdmin.addListener("Listener", { port: 8011 });

const targetGroupAdmin = listenerAdmin.addTargets("ECSAdmin", {
  protocol: elb.ApplicationProtocol.HTTP,
  port: 8011,
  targets: [ecsServiceAdmin]
});

const lbNginx = new elb.ApplicationLoadBalancer(this, "LBNginx", {
  vpc: cluster.vpc,
  internetFacing: true
});
const listenerNginx = lbNginx.addListener("Listener", { port: 80 });

const targetGroupNginx = listenerNginx.addTargets("ECS", {
  protocol: elb.ApplicationProtocol.HTTP,
  port: 80,
  targets: [ecsServiceNginx]
});

This docker-compose setup works:

version: "3.9"
   
services:

  admindjango:
    image: 678100XXXXXX.dkr.ecr.ap-northeast-1.amazonaws.com/st_admin_site:latest
    ports:
      - "8011:8011"
    restart: always 

  nginx:
    image: 678100XXXXXX.dkr.ecr.ap-northeast-1.amazonaws.com/st_nginx:latest
    ports:
      - '80:80'
    depends_on:
      - admindjango

Update

I use two containers with one task definition.

I just call addContainer twice on one task definition.

(For now, ignore the load balancer; a sketch of attaching one follows the code below.)

Also, each container can reach the other at 127.0.0.1:XXX.

It works well for my purpose. Thanks to @Mark B.

const taskDefinitionAdmin = new ecs.FargateTaskDefinition(this, "TaskDefAdmin",{
  memoryLimitMiB: 512,
  cpu: 256,
});
const adminContainer = taskDefinitionAdmin.addContainer("AdminContainer", {
  image: adminImage,
});
adminContainer.addPortMappings({
  containerPort: 8011,
  hostPort: 8011
});
const nginxContainer = taskDefinitionAdmin.addContainer("NginxContainer", {
  image: nginxImage,
});

nginxContainer.addPortMappings({
  containerPort: 80,
  hostPort: 80
});
const adminSG = new ec2.SecurityGroup(this, 'admin-server-sg', {
  vpc,
  allowAllOutbound: true,
  description: 'security group for a web server',
});

adminSG.addIngressRule(
  ec2.Peer.anyIpv4(),
  ec2.Port.tcp(80),
  'allow HTTP access from anywhere',
);

const ecsAdminService = new ecs.FargateService(this, "AdminService", {
  cluster,
  taskDefinition:taskDefinitionAdmin,
  desiredCount: 2,
  vpcSubnets:  {subnetType: ec2.SubnetType.PUBLIC },
  assignPublicIp: true,
  securityGroups:[adminSG]
});
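
For completeness, here is a minimal sketch (not part of the original update) of how a single public load balancer could be attached to the nginx container of this combined task. It assumes the same elb alias for aws-elasticloadbalancingv2 used earlier; the construct names (PublicLB, HttpListener, NginxTargets) are illustrative.

// Sketch: one public ALB routing to the nginx container of the combined task.
const publicLb = new elb.ApplicationLoadBalancer(this, "PublicLB", {
  vpc,
  internetFacing: true
});
const httpListener = publicLb.addListener("HttpListener", { port: 80 });

httpListener.addTargets("NginxTargets", {
  protocol: elb.ApplicationProtocol.HTTP,
  port: 80,
  // loadBalancerTarget selects which container/port of the task is registered;
  // "NginxContainer" is the default container name from addContainer above.
  targets: [
    ecsAdminService.loadBalancerTarget({
      containerName: "NginxContainer",
      containerPort: 80
    })
  ]
});

With the load balancer in front, the wide-open port 80 ingress rule on adminSG is no longer needed; CDK adds the necessary rule from the load balancer's security group when the targets are registered.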

CodePudding user response:

How can I connect the containers to each other?

In your current configuration you can't connect directly between containers. You would have to have Nginx connect to the internal load balancer that is connected to the Django tasks.

Web Browser -> Public Nginx Load Balancer -> Nginx Container -> Private Django Load Balancer -> Django Container.
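
In CDK terms that first option might look roughly like this (a sketch reusing the constructs from the question; DJANGO_UPSTREAM is an illustrative variable name that your nginx config would have to read as its upstream, not something nginx picks up by itself):

// Make the Django load balancer internal instead of internet-facing.
const lbAdmin = new elb.ApplicationLoadBalancer(this, "LBAdmin", {
  vpc: cluster.vpc,
  internetFacing: false
});
const listenerAdmin = lbAdmin.addListener("Listener", {
  port: 8011,
  protocol: elb.ApplicationProtocol.HTTP
});
listenerAdmin.addTargets("ECSAdmin", {
  protocol: elb.ApplicationProtocol.HTTP,
  port: 8011,
  targets: [ecsServiceAdmin]
});

// Hand the internal ALB's DNS name to the nginx container, e.g. via an env var.
const nginxContainer = taskDefinitionNginx.addContainer("NginxContainer", {
  image: nginxImage,
  environment: {
    DJANGO_UPSTREAM: lbAdmin.loadBalancerDnsName
  }
});

The nginx service's security group also has to be allowed to reach the internal load balancer on port 8011.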


I would suggest looking into running both containers in the same ECS task. You would probably save a good bit of money by only having one load balancer and half as many Fargate instances. The traffic flow would look like this:

Web Browser -> Public Load Balancer -> Nginx Container on port 80 -> Django Container on port 8011.

In that scenario you would configure Nginx to proxy requests to 127.0.0.1:8011. All containers in the same task can connect to each other over 127.0.0.1 inside a Fargate task. See the AWS documentation on Fargate task networking.
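
For example, a minimal nginx server block for that same-task setup might look like this (a sketch, not taken from the post):

server {
    listen 80;

    location / {
        # Containers in the same Fargate task share a network namespace,
        # so the Django container is reachable on localhost.
        proxy_pass http://127.0.0.1:8011;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}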


A much more advanced setup would be to keep each container running as a separate task, and use AWS App Mesh for internal container communication, instead of internal load balancers. This is probably overkill for your situation, and much more appropriate in a large environment with many microservices deployed independently.
