AWS CDK ApplicationLoadBalancedFargateService with Node.js and WebSockets constantly stops after each health check


I used CDK with the ApplicationLoadBalancedFargateService construct to deploy an ECS Fargate service behind an Application Load Balancer. It's a WebSocket API written in Node.js with Socket.io.

It works fine locally, but in AWS the task is constantly being killed by the health check, with this error:

service CoreStack-CoreServiceA69F11F4-9ciFOoAL5oj9 (port 5000) is unhealthy in target-group CoreS-CoreS-FMLY84WTNYVN due to (reason Health checks failed with these codes: [404]).

The code of the stack:

import { aws_ecs_patterns, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { Repository } from 'aws-cdk-lib/aws-ecr';
import { ContainerImage } from 'aws-cdk-lib/aws-ecs';
import { HostedZone } from 'aws-cdk-lib/aws-route53';
import { ApplicationProtocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';
import ec2 = require('aws-cdk-lib/aws-ec2');
import ecs = require('aws-cdk-lib/aws-ecs');

const DOMAIN_NAME = 'newordergame.com';
const SUBDOMAIN = 'core';
const coreServiceDomain = SUBDOMAIN + '.' + DOMAIN_NAME;

export class CoreStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, 'CoreServiceVpc', { maxAzs: 2 });

    const cluster = new ecs.Cluster(this, 'Cluster', { vpc });

    const repository = Repository.fromRepositoryName(
      this,
      'CoreServiceRepository',
      'core-service'
    );

    const image = ContainerImage.fromEcrRepository(repository, 'latest');

    const zone = HostedZone.fromLookup(this, 'NewOrderGameZone', {
      domainName: DOMAIN_NAME
    });

    // Instantiate Fargate Service with just cluster and image
    new aws_ecs_patterns.ApplicationLoadBalancedFargateService(
      this,
      'CoreService',
      {
        cluster,
        taskImageOptions: {
          image: image,
          containerName: 'core',
          containerPort: 5000,
          enableLogging: true
        },
        protocol: ApplicationProtocol.HTTPS,
        domainName: coreServiceDomain,
        domainZone: zone
      }
    );
  }
}

The Dockerfile:

FROM node:16-alpine
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY package.json ./
RUN npm install
COPY webpack.config.js tsconfig.json ./
COPY src ./src
RUN npm run webpack:build
EXPOSE 5000
CMD [ "node", "build/main.js" ]

The application itself also listens on port 5000. The task gets killed roughly every 3 minutes.

Does anyone have a clue about how to fix it? Thank you!

CodePudding user response:

Your health check URL is returning a 404 status code. You need to configure the target group's health check with a path that returns a 200 status code.
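
By default the ALB health check requests / on the container port, so if the app only serves the Socket.io endpoint that request comes back 404 and the target is marked unhealthy. A minimal sketch of the fix, assuming the container exposes some plain HTTP route (the /health path here is hypothetical), is to keep a reference to the construct and point its target group at that path:

const service = new aws_ecs_patterns.ApplicationLoadBalancedFargateService(
  this,
  'CoreService',
  {
    cluster,
    taskImageOptions: {
      image,
      containerName: 'core',
      containerPort: 5000,
      enableLogging: true
    },
    protocol: ApplicationProtocol.HTTPS,
    domainName: coreServiceDomain,
    domainZone: zone
  }
);

// Point the target group's health check at a route the app actually serves.
service.targetGroup.configureHealthCheck({
  path: '/health',
  healthyHttpCodes: '200'
});

This only helps if the application really does answer that path over plain HTTP, which leads to the next answer.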

CodePudding user response:

The ELB health check only speaks HTTP or HTTPS; it does not work over WebSocket, including Socket.io.

So the best workaround I found is to have the service answer plain HTTP requests on the same port, alongside the WebSocket traffic, and respond with status 200.

With this approach, the health check will actually check the status of the service.
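
A minimal sketch of that idea, assuming Socket.io v4 attached to a plain Node.js HTTP server (the /health path is an arbitrary choice and must match whatever path the target group's health check is configured with):

import { createServer } from 'http';
import { Server } from 'socket.io';

// Plain HTTP handler that answers the ALB health check with 200.
const httpServer = createServer((req, res) => {
  if (req.url === '/health') {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('OK');
    return;
  }
  res.writeHead(404);
  res.end();
});

// Socket.io attaches to the same server, so WebSocket clients
// keep connecting to port 5000 as before.
const io = new Server(httpServer);

io.on('connection', (socket) => {
  // ...existing Socket.io handlers...
});

httpServer.listen(5000);

Socket.io only intercepts requests under its own /socket.io/ path, so the /health handler and the WebSocket traffic can share port 5000.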
