My goal is to have an external HTTP(S) Cloud Load Balancer in front of an NGINX Ingress in our GCP GKE cluster.
I'm trying the solution that Rami H proposed and Google developer Garry Singh confirmed here: Global load balancer (HTTPS Loadbalancer) in front of GKE Nginx Ingress Controller
You can create the Nginx as a service of type LoadBalancer and give it a NEG annotation as per this Google documentation: https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing
Then you can use this NEG as a backend service (target) for HTTP(S) load balancing. You can use the gcloud commands from this article: https://hodo.dev/posts/post-27-gcp-using-neg/
I have followed the mentioned hodo.dev tutorial and successfully deployed an HTTP LB with NEGs as the backend service. Then I found this script to attach NGINX Ingress to the NEGs, but it is probably obsolete and fails while deploying: https://gist.github.com/halvards/dc854f16d76bcc86ec59d846aa2011a0
Can somebody please help me adapt the hodo.dev config to deploy nginx-ingress there? Here is the repo with my config script: https://github.com/robinpecha/hododev_gke-negs-httplb
# First, let's define some variables:
PROJECT_ID=$(gcloud config list project --format='value(core.project)') ; echo $PROJECT_ID
ZONE=europe-west2-b ; echo $ZONE
CLUSTER_NAME=negs-lb ; echo $CLUSTER_NAME
# and we need a cluster
gcloud container clusters create $CLUSTER_NAME --zone $ZONE --machine-type "e2-medium" --enable-ip-alias --num-nodes=2
# The --enable-ip-alias flag enables the VPC-native traffic routing option for your cluster. This option creates and attaches additional subnets to the VPC, the pods get IP addresses allocated from those VPC subnets, and in this way the pods can be addressed directly by the load balancer, aka container-native load balancing.
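# (Optional) sanity check, assuming the describe field below is available in your gcloud version:
# confirm the cluster really is VPC-native by inspecting its IP allocation policy
gcloud container clusters describe $CLUSTER_NAME --zone $ZONE \
    --format="value(ipAllocationPolicy.useIpAliases)"
# a VPC-native cluster should report True here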
# Next we need a simple deployment; we will use nginx
cat << EOF > app-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
EOF
kubectl apply -f app-deployment.yaml
# and the service
cat << EOF > app-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: app-service
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "app-service-80-neg"}}}'
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
EOF
kubectl apply -f app-service.yaml
# The cloud.google.com/neg annotation tells GKE to create a NEG for this service and to add and remove endpoints (pods) to and from this group.
# Notice that the type is ClusterIP. Yes, it is possible to expose the service to the internet even if the type is ClusterIP. This is part of the magic of NEGs.
# You can check whether the NEG was created with the following command
gcloud compute network-endpoint-groups list
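# (Optional) you can also read the NEG name and zones straight from the service itself;
# GKE writes them into the cloud.google.com/neg-status annotation once the NEG is ready
kubectl get service app-service \
    -o jsonpath="{.metadata.annotations.cloud\.google\.com/neg-status}"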
# Next let’s create the load balancer and all the required components.
# We need a firewall rule that will allow the traffic from the load balancer
# find the network tags used by our cluster
NETWORK_TAGS=$(gcloud compute instances describe \
$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}') \
--zone=$ZONE --format="value(tags.items[0])")
echo $NETWORK_TAGS
# create the firewall rule
gcloud compute firewall-rules create $CLUSTER_NAME-lb-fw \
--allow tcp:80 \
--source-ranges 130.211.0.0/22,35.191.0.0/16 \
--target-tags $NETWORK_TAGS
# and a health check configuration
gcloud compute health-checks create http app-service-80-health-check \
--request-path / \
--port 80 \
--check-interval 60 \
--unhealthy-threshold 3 \
--healthy-threshold 1 \
--timeout 5
# and a backend service
gcloud compute backend-services create $CLUSTER_NAME-lb-backend \
--health-checks app-service-80-health-check \
--port-name http \
--global \
--enable-cdn \
--connection-draining-timeout 300
# next we need to add our NEG to the backend service
gcloud compute backend-services add-backend $CLUSTER_NAME-lb-backend \
--network-endpoint-group=app-service-80-neg \
--network-endpoint-group-zone=$ZONE \
--balancing-mode=RATE \
--capacity-scaler=1.0 \
--max-rate-per-endpoint=1.0 \
--global
# That was the backend configuration; now let's set up the frontend as well.
# First the url map
gcloud compute url-maps create $CLUSTER_NAME-url-map --default-service $CLUSTER_NAME-lb-backend
# and then the http proxy
gcloud compute target-http-proxies create $CLUSTER_NAME-http-proxy --url-map $CLUSTER_NAME-url-map
# and finally the global forwarding rule
gcloud compute forwarding-rules create $CLUSTER_NAME-forwarding-rule \
--global \
--ports 80 \
--target-http-proxy $CLUSTER_NAME-http-proxy
# Done! Give the load balancer some time to set up all the components, and then you can test whether your setup works as expected.
# get the public ip address
IP_ADDRESS=$(gcloud compute forwarding-rules describe $CLUSTER_NAME-forwarding-rule --global --format="value(IPAddress)")
# print the public ip address
echo $IP_ADDRESS
# make a request to the service
curl -s -I http://$IP_ADDRESS/
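# The load balancer can take several minutes to become healthy; if the first request fails,
# one option is to simply poll until it starts answering with 200:
until curl -s -o /dev/null -w "%{http_code}" http://$IP_ADDRESS/ | grep -q 200; do
    echo "waiting for the load balancer..." && sleep 10
done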
CodePudding user response:
The trick is to deploy the ingress-nginx service as ClusterIP and not as LoadBalancer, and then expose the ingress-nginx-controller service using a NEG and the GCP external load balancer feature.
First you need to add the ingress-nginx helm repo and update it:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
The default installation of ingress-nginx is configured to use the LoadBalancer option; this will automatically create a load balancer for you, but in this case that is not the expected behavior. If I understood correctly, you want to create and configure your own GCP load balancer, outside GKE, manage it manually, and route traffic to your custom ingress-nginx. For this you need to change the service type to ClusterIP and add the NEG annotation.
Create a file values.yaml
cat << EOF > values.yaml
controller:
  service:
    type: ClusterIP
    annotations:
      cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "ingress-nginx-80-neg"}}}'
EOF
And install the ingress-nginx
helm install -f values.yaml ingress-nginx ingress-nginx/ingress-nginx
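Before wiring up the load balancer, it is worth checking that GKE actually created the NEG for the controller service. The NEG name below comes from the annotation in values.yaml, and ingress-nginx-controller is the chart's default service name for a release named ingress-nginx:
kubectl get service ingress-nginx-controller \
    -o jsonpath="{.metadata.annotations.cloud\.google\.com/neg-status}"
gcloud compute network-endpoint-groups list --filter="name=ingress-nginx-80-neg"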
After that you need to configure the load balancer to point to your ingress-nginx controller using the NEG.
I added the complete steps to follow in this gist https://gist.github.com/gabihodoroaga/1289122db3c5d4b6c59a43b8fd659496
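For reference, the wiring follows the same pattern as the gcloud steps in the question, just pointing at the ingress-nginx NEG instead of app-service-80-neg. A rough sketch (resource names are illustrative, $ZONE is assumed to still be set from the question, and the health check assumes the ingress-nginx controller answers 200 on /healthz on port 80, which is its usual behavior; verify against your version and the gist above):
# health check against the ingress-nginx controller
gcloud compute health-checks create http ingress-nginx-health-check \
    --request-path /healthz \
    --port 80 \
    --check-interval 60 \
    --unhealthy-threshold 3 \
    --healthy-threshold 1 \
    --timeout 5
# backend service backed by the ingress-nginx NEG
# (the firewall rule from the question already allows tcp:80 from the LB ranges)
gcloud compute backend-services create ingress-nginx-backend \
    --health-checks ingress-nginx-health-check \
    --port-name http \
    --global \
    --connection-draining-timeout 300
gcloud compute backend-services add-backend ingress-nginx-backend \
    --network-endpoint-group=ingress-nginx-80-neg \
    --network-endpoint-group-zone=$ZONE \
    --balancing-mode=RATE \
    --capacity-scaler=1.0 \
    --max-rate-per-endpoint=100 \
    --global
# frontend: url map, target proxy and global forwarding rule, exactly as before
gcloud compute url-maps create ingress-nginx-url-map --default-service ingress-nginx-backend
gcloud compute target-http-proxies create ingress-nginx-http-proxy --url-map ingress-nginx-url-map
gcloud compute forwarding-rules create ingress-nginx-forwarding-rule \
    --global \
    --ports 80 \
    --target-http-proxy ingress-nginx-http-proxy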