I have a Flask API and I am trying to improve it by identifying which function calls in the API definition take the longest whenever the API is called. For that I am using a profiler, as highlighted in this repo. Whenever I make an API call, the profiler generates a .prof file which I can visualize with snakeviz.
Now I am trying to run this on an AWS cluster in the same region where my database is stored, to minimize network latency. I can get the API server running and make the API calls; my question is how to transfer the .prof file off the Kubernetes pod without disturbing the API server. Is there a way to start a separate shell that transfers the file to, say, an S3 bucket whenever it is created, without killing the API server?
CodePudding user response:
If you want to automate this process, or it's simply hard to figure out connectivity for running kubectl exec ..., one idea would be to use a sidecar container. So your pod contains two containers with a single emptyDir volume mounted into both. emptyDir is perhaps the easiest way to create a folder shared between all containers in a pod.
- The first container is your regular Flask API.
- The second container watches for new files in the shared folder. Whenever it finds a file there, it uploads that file to S3.
You will need to configure the profiler so that it dumps its output into the shared folder. One benefit of this approach is that you don't have to make any major modifications to the existing container running Flask.
CodePudding user response:
The best option is the sidecar container.
Pods that run multiple containers that need to work together. A Pod can encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers form a single cohesive unit of service—for example, one container serving data stored in a shared volume to the public, while a separate sidecar container refreshes or updates those files. The Pod wraps these containers, storage resources, and an ephemeral network identity together as a single unit.
Creating the sidecar is easy; look at this example:
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) 'Hi I am from Sidecar container'; sleep 5; done"]
    name: sidecar-container
    resources: {}
    volumeMounts:
    - name: var-logs
      mountPath: /var/log
  - image: nginx
    name: main-container
    resources: {}
    ports:
    - containerPort: 80
    volumeMounts:
    - name: var-logs
      mountPath: /usr/share/nginx/html
  dnsPolicy: Default
  volumes:
  - name: var-logs
    emptyDir: {}
All you need to do is change the sidecar container's command to suit your needs.
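For this use case, one way to adapt the sidecar is to swap busybox for an image with the AWS CLI and loop over `aws s3 sync`, pushing any `.prof` files from the shared volume to your bucket. This is a sketch, not a drop-in spec: the bucket name is a placeholder, and it assumes the pod has AWS credentials available (for example, via an IAM role bound to its service account).

```yaml
  # Replacement for the busybox sidecar above (bucket name is hypothetical):
  - image: amazon/aws-cli
    name: sidecar-container
    command: ["/bin/sh", "-c"]
    args:
      - while true; do
          aws s3 sync /var/log s3://my-profiling-bucket
            --exclude '*' --include '*.prof';
          sleep 5;
        done
    volumeMounts:
    - name: var-logs
      mountPath: /var/log
```

`aws s3 sync` only uploads files that are new or changed, so re-running it in a loop is cheap, and the `--exclude '*' --include '*.prof'` pair restricts the sync to profiler output. Point the profiler in your Flask container at the same shared mount so the files land where the sidecar is looking.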