Kubernetes in GCP: How a pod can access its parent node to perform some operation e.g. iptables update


The scenario is like this:

I have a pod running on a node in a K8s cluster in GCP. The cluster was created using kops and the pod was created using kne_cli.

I know only the name of the pod e.g. "test-pod".

My requirement is to configure something on the node where this pod is running, e.g. I want to update the "iptables -t nat" table on the node.

How can I access the node and configure it from within the pod?

Any suggestion would be helpful.

CodePudding user response:

You can use a Job, a Deployment, or a plain Pod here; it is not clear how your pod is being managed. If you just want to run that task once, a Job is a good fit for you.
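
For instance, a minimal one-off Job might look like this (a sketch only; the image and command are placeholders for your actual task):

apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox:1.36          # placeholder image
        command: ["sh", "-c", "echo running one-off task"]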

Option one is to use SSH:

You can run a pod that looks up the list of nodes (or the specific node you need) and runs an SSH command to connect to that node.

That way you will be able to access the node from the pod and run commands on top of the node.

You can check this document for reference: https://alexei-led.github.io/post/k8s_node_shell/
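
For example, to find the node that "test-pod" is scheduled on and SSH into it (a sketch, assuming kubectl access and that SSH to the nodes is set up, as it typically is with kops):

# Find the node "test-pod" is scheduled on
NODE=$(kubectl get pod test-pod -o jsonpath='{.spec.nodeName}')

# Get that node's external IP (use InternalIP instead on private clusters)
IP=$(kubectl get node "$NODE" \
  -o jsonpath='{.status.addresses[?(@.type=="ExternalIP")].address}')

# Connect and run a command on the node (the SSH user depends on the node image)
ssh admin@"$IP" sudo iptables -t nat -L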

Option two:

You can mount a shell script containing the iptables command onto the node and invoke that script from the pod whenever you want to run the command.

Example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: command
data:
  command.sh: |
    #!/bin/bash
    echo "running sh script on node..!"
---
apiVersion: batch/v1        # batch/v1beta1 is deprecated and removed in Kubernetes 1.25
kind: CronJob
metadata:
  name: command
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: cron-namespace-admin
          containers:
          - name: command
            image: IMAGE:v1                                # placeholder: use an image that has bash
            imagePullPolicy: IfNotPresent
            command: ["/bin/bash", "/test/command.sh"]     # invoke the mounted script
            volumeMounts:
            # the ConfigMap script, mounted as a single file
            - name: commandfile
              mountPath: /test/command.sh
              subPath: command.sh
            # host directory, so the script can read/write files on the node
            - name: script-dir
              mountPath: /test
          restartPolicy: OnFailure
          volumes:
          - name: commandfile
            configMap:
              name: command
              defaultMode: 0777                            # make the script executable
          - name: script-dir
            hostPath:
              path: /var/log/data
              type: DirectoryOrCreate

Use privileged mode on the container:

    securityContext:
      privileged: true

Privileged - determines if any container in a pod can enable privileged mode. By default, a container is not allowed to access any devices on the host, but a "privileged" container is given access to all devices on the host. This allows the container nearly all the same access as processes running on the host. This is useful for containers that want to use Linux capabilities like manipulating the network stack and accessing devices.

Read more: https://kubernetes.io/docs/concepts/security/pod-security-policy/#privileged

CodePudding user response:

You might be better off using GKE and configuring the ip-masq-agent as described here: https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent

If you stick with kops on GCE, I would suggest following the guide for the ip-masq-agent here instead of the GKE docs: https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/
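
For reference, the agent reads its settings from a ConfigMap named ip-masq-agent in the kube-system namespace; a minimal sketch (the CIDR below is a placeholder for your cluster's own ranges):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 10.0.0.0/8        # placeholder: CIDRs that should not be masqueraded
    resyncInterval: 60s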

In case you really need to run custom iptables rules on the host, then your best option is to create a DaemonSet with pods that are privileged and have hostNetwork: true. That should allow you to modify iptables rules directly on the host from the pod, as sketched below.
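
A minimal sketch of such a DaemonSet, assuming an Alpine image where iptables can be installed; the POSTROUTING rule below is only a placeholder for whatever nat-table change you actually need:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-iptables
spec:
  selector:
    matchLabels:
      app: node-iptables
  template:
    metadata:
      labels:
        app: node-iptables
    spec:
      hostNetwork: true              # share the node's network namespace
      containers:
      - name: iptables
        image: alpine:3.18
        securityContext:
          privileged: true           # required to modify the host's iptables
        command:
        - sh
        - -c
        - |
          apk add --no-cache iptables
          # Placeholder rule: in real use, check with -C first so a pod
          # restart does not append the same rule twice
          iptables -t nat -A POSTROUTING -s 10.0.0.0/8 -j MASQUERADE
          # Keep the pod running so the DaemonSet does not restart it
          while true; do sleep 3600; done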
