Kubernetes open port to server on same subnet


I am launching a service in a Kubernetes pod that I would like to be available only to servers on the same subnet.

I have created a Service of type LoadBalancer opening the desired ports. I can connect to these ports from other pods on the cluster, but I cannot connect from virtual machines I have running on the same subnet.

So far my best solution has been to assign a loadBalancerIP and restrict it with loadBalancerSourceRanges; however, this still feels too public.
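
Roughly, the Service I have now looks like this (the names, IP, and CIDR below are placeholders rather than my real values):

apiVersion: v1
kind: Service
metadata:
  name: my-service                 # placeholder name
spec:
  type: LoadBalancer
  loadBalancerIP: 10.128.0.50      # pre-reserved address (placeholder)
  loadBalancerSourceRanges:
  - 203.0.113.0/24                 # allowed client range (placeholder)
  selector:
    app: my-app                    # placeholder label
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080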

The virtual machines that need to connect to my service are ephemeral and have a wide range of public IPs assigned, so my loadBalancerSourceRanges feels too broad.

My understanding was that I could connect to the internal LoadBalancer's cluster-ip from servers on the same subnet, but this does not seem to be the case.

Is there another solution I am missing that would limit this service to connections from internal IPs?

This is all running on GKE.

Any help would be really appreciated.

CodePudding user response:

To restrict the service to only be available to servers on the same subnet, you can use a combination of Network Policies and Service Accounts.

First, create a NetworkPolicy that specifies the source IP range allowed to access your service. To do this, create a YAML file containing the following:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-subnet-traffic
spec:
  # An empty podSelector applies this policy to all pods in the namespace
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: <subnet-range-cidr>   # IP range allowed to reach the pods
    ports:
    - protocol: TCP
      port: <port-number>

Replace the <subnet-range-cidr> and <port-number> placeholders with the relevant IP address range and port number. Once the YAML file is created, apply it to the cluster with the following command:

kubectl apply -f path-to-yaml-file

Next, you'll need to create a Service Account and assign it to the service; the Service Account can then be used to authenticate incoming requests. To do this, add the Service Account to the service's metadata with the following command:

kubectl edit service <service-name>
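
If the Service Account does not exist yet, a minimal sketch for creating it (the name is a placeholder):

kubectl create serviceaccount <service-account-name>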

Before you can assign a Service Account to it, you must first create a Role or ClusterRole and grant it access to the network policy. The network policy will then be applied to the Service Account when you bind the Role or ClusterRole to it. This can be done with the kubectl command-line tool as follows:

kubectl create role <role_name> --verb=get --resource=networkpolicies
kubectl create clusterrole <clusterrole_name> --verb=get --resource=networkpolicies
kubectl create rolebinding <rolebinding_name> --role=<role_name> --serviceaccount=<namespace>:<service_account_name>
kubectl create clusterrolebinding <clusterrolebinding_name> --clusterrole=<clusterrole_name> --serviceaccount=<namespace>:<service_account_name>

Once the Role or ClusterRole is bound to the Service Account, the network policy will apply to all pods that use that Service Account. Incoming requests will then need to authenticate with the Service Account in order to reach the service, so only authorized requests will be able to access it.

For more info follow this documentation.

CodePudding user response:

I think you are partly right here, but I'm not sure why you mentioned the cluster-ip:

"My understanding was that I could connect to the internal LoadBalancer's cluster-ip from servers on the same subnet, but this does not seem to be the case."

Now, if you have a deployment running on GKE and have exposed it with a Service of type LoadBalancer configured as an internal load balancer, you will be able to reach that internal LB from anywhere in the same VPC.

apiVersion: v1
kind: Service
metadata:
  name: internal-svc
  annotations:
    # Provisions an internal (VPC-only) load balancer on GKE
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: internal-svc
  ports:
  - name: tcp-port
    protocol: TCP
    port: 8080
    targetPort: 8080

Once your changes are applied, check the status using:

kubectl get service internal-svc --output yaml

In the YAML output, check the last section for:

status:
  loadBalancer:
    ingress:
    - ip: 10.127.40.241

That is the actual IP you can use to connect to the service from other VMs in the subnet.
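
For example, from a VM in the same VPC you could then test connectivity with something like this (assuming the workload serves HTTP on port 8080; adjust for your protocol):

curl http://10.127.40.241:8080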

Doc ref
