I have a Web API application running on AKS with 2 replicas, i.e. 2 instances of the same app, and in front of them an NGINX ingress/load balancer which distributes the traffic between both replicas.
Now I have a PUT/POST endpoint which is used to dynamically change the log level using Serilog.
[HttpPut("logLevel")]
public IActionResult ChangeLevel(LogEventLevel eventLevel)
{
    _levelSwitch.MinimumLevel = eventLevel;
    return Ok();
}
The service URL is something like https://XXXXX.com:12345/logLevel, and when I execute it through Postman it affects ONLY one replica/pod while the other sees NO effect (of course!).
The question is: what options do I have so that the API call takes effect on all replicas of the service? Is there any out-of-the-box solution?
Thanks.
Ingress Rule:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  tls:
  - hosts:
    - XXXXX.com
    secretName: aks-ingress-tls
  rules:
  - host: XXXXX.com
    http:
      paths:
      - path: /Servive1/(.*)
        pathType: Prefix
        backend:
          service:
            name: Servive1
            port:
              number: 12345
CodePudding user response:
There's no good way to accomplish this. You're looking to change the state of a pod in a stateless deployment. The Ingress distributes traffic across all pods of a single deployment/service. There is no way to force the ingress to send traffic to a specific pod.
If you remove any kind of sticky sessions, then multiple POSTs should eventually reach all your pods and update the state accordingly, but it's not reliable.
The only solution I can think of to accomplish this would be to break each pod into its own deployment with a matching service. Make sure all the deployments share at least one common label and also carry a label that is unique to each one. Then create one service that targets all the pods via the shared label, and create individual services for the individual labels. Finally, set up your Ingress rules so that the default path goes to the shared service, and give yourself "back doors" into the individual pods (sketched below).
It's not a pretty solution.
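A rough sketch of that layout (the names and labels here are made up for illustration: a shared app label plus a unique instance label per deployment):
# Deployment for pod "a" -- repeat the same with instance: b for the second pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
      instance: a
  template:
    metadata:
      labels:
        app: my-api        # shared label
        instance: a        # unique label
    spec:
      containers:
      - name: my-api
        image: myregistry/my-api:latest
        ports:
        - containerPort: 12345
---
# Shared service: selects only the common label, so it balances across all deployments
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
  - port: 12345
---
# "Back door" service: selects the unique label, so it always reaches pod "a"
apiVersion: v1
kind: Service
metadata:
  name: my-api-a
spec:
  selector:
    app: my-api
    instance: a
  ports:
  - port: 12345
The Ingress would then keep its default path pointing at the shared service and gain one extra path per back-door service.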
Alternatively, you could use a StatefulSet and have a pod run in your cluster that receives the HTTP request to change the logging level. When that request comes in, that pod then sends HTTP requests to each of the pods (since you can route to individual pods through a headless service). It's a bit more work and overhead, but it will work reliably with a single request.
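A headless service for that setup could look like the following (names are illustrative); each pod of a StatefulSet named my-api then gets a stable DNS name that the fan-out pod can call directly:
# Headless service: clusterIP: None gives each StatefulSet pod its own stable DNS record
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  clusterIP: None
  selector:
    app: my-api
  ports:
  - port: 12345
# The fan-out pod can then PUT to each replica individually, e.g.
#   http://my-api-0.my-api.<namespace>.svc.cluster.local:12345/logLevel
#   http://my-api-1.my-api.<namespace>.svc.cluster.local:12345/logLevel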
On a last note, would it be possible to have the log level set as a variable in the deployment? If you could manage that, instead of making an HTTP request directly to the pods, you would just need a single kubectl patch command to update your deployment, and all the pods would be affected.
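For example, assuming the container reads the level from an environment variable at startup (the container name my-api and variable name LOG_LEVEL below are made up), a strategic-merge patch would roll out the new value to every replica:
# Updating the env var in the pod template triggers a rolling restart of all pods
kubectl patch deployment my-api -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"my-api","env":[{"name":"LOG_LEVEL","value":"Debug"}]}]}}}}'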