Can I configure my EKS cluster's inbound rules via CDK?

I am wondering whether it is possible to configure the "public access source allowlist" from CDK. I can see and manage this in the console under the cluster's Networking tab, but I can't find anything in the CDK docs about setting the allowlist during deployment. I tried creating and assigning a security group (code sample below), but this didn't work. Also, the security group was attached as an "additional" security group rather than as the "cluster" security group.

import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as eks from 'aws-cdk-lib/aws-eks';
import * as iam from 'aws-cdk-lib/aws-iam';

declare const vpc: ec2.Vpc;
declare const adminRole: iam.Role;

const securityGroup = new ec2.SecurityGroup(this, 'my-security-group', {
    vpc,
    allowAllOutbound: true,
    description: 'Created in CDK',
    securityGroupName: 'cluster-security-group'
});

securityGroup.addIngressRule(
    ec2.Peer.ipv4('<vpn CIDR block>'),
    ec2.Port.tcp(8888),
    'allow frontend access from the VPN'
);

const cluster = new eks.Cluster(this, 'my-cluster', {
    vpc,
    clusterName: 'cluster-cdk',
    version: eks.KubernetesVersion.V1_21,
    mastersRole: adminRole,
    defaultCapacity: 0,
    securityGroup
});

Update: I attempted the following, and it updated the cluster security group, but I'm still able to access the frontend when I'm not on the VPN:

cluster.connections.allowFrom(
  ec2.Peer.ipv4('<vpn CIDR block>'),
  ec2.Port.tcp(8888)
);

Update 2: I tried the following as well, and I can still access my application's frontend even when I'm not on the VPN. However, kubectl now only works when I'm on the VPN, which is good! It's at least a step forward, since the cluster's API endpoint is now properly locked down.

const cluster = new eks.Cluster(this, 'my-cluster', {
    vpc,
    clusterName: 'cluster-cdk',
    version: eks.KubernetesVersion.V1_21,
    mastersRole: adminRole,
    defaultCapacity: 0,
    endpointAccess: eks.EndpointAccess.PUBLIC_AND_PRIVATE.onlyFrom('<vpn CIDR block>')
});
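
For reference, EndpointAccess also has a fully private variant. I haven't tried it; this is only a sketch with the same parameters as above, but with it kubectl should work exclusively from inside the cluster's VPC (or from networks routed into it, like the VPN):

const cluster = new eks.Cluster(this, 'my-cluster', {
    vpc,
    clusterName: 'cluster-cdk',
    version: eks.KubernetesVersion.V1_21,
    mastersRole: adminRole,
    defaultCapacity: 0,
    // Sketch (untested): disable the public endpoint entirely.
    endpointAccess: eks.EndpointAccess.PRIVATE
});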

CodePudding user response:

In general EKS has two relevant security groups:

  1. The one used by the nodes, which AWS calls the "cluster security group". It's set up automatically by EKS, and you shouldn't need to touch it unless you want (a) more restrictive rules than the defaults, or (b) to open your nodes up for maintenance tasks (e.g. SSH access). This is what you are accessing via cluster.connections.

  2. The Ingress load balancer security group. This belongs to an Application Load Balancer created and managed through the AWS Load Balancer Controller. In CDK, it can be created like so:

const cluster = new eks.Cluster(this, 'HelloEKS', {
  version: eks.KubernetesVersion.V1_22,
  albController: {
    version: eks.AlbControllerVersion.V2_4_1,
  },
});

This will serve as a gateway for all internal services that need an Ingress. You can access it via the cluster.albController property and add rules to it like a regular Application Load Balancer. I have no idea how EKS deals with task communication when an Ingress ALB is not present.
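
As a sketch of what adding rules could look like (assumptions on my part: a hypothetical Service named 'frontend' listening on port 8888, plus the VPN CIDR from your question; the inbound-cidrs annotation is how the AWS Load Balancer Controller restricts the ALB's security group):

cluster.addManifest('frontend-ingress', {
  apiVersion: 'networking.k8s.io/v1',
  kind: 'Ingress',
  metadata: {
    name: 'frontend',
    annotations: {
      // Handled by the AWS Load Balancer Controller installed above.
      'kubernetes.io/ingress.class': 'alb',
      'alb.ingress.kubernetes.io/scheme': 'internet-facing',
      // Only these CIDRs are allowed through the ALB's security group.
      'alb.ingress.kubernetes.io/inbound-cidrs': '<vpn CIDR block>',
    },
  },
  spec: {
    rules: [{
      http: {
        paths: [{
          path: '/',
          pathType: 'Prefix',
          backend: { service: { name: 'frontend', port: { number: 8888 } } },
        }],
      },
    }],
  },
});

Restricting traffic here is what would keep your frontend VPN-only; the endpointAccess allowlist from your second update only covers the Kubernetes API.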
