Container filesystem is empty when deploying k8s application with Minikube


I have a small web application (a Rails app called sofia) that I'm deploying locally with minikube.

When I create the k8s resources and run my deployment, the containers do not contain any of the files that were supposed to be copied over during the image build process.

Here's what I'm doing:

Dockerfile

As part of the Dockerfile build, I copy the contents of my locally cloned repository into the image's working directory:

RUN mkdir -p /app
WORKDIR /app

COPY . ./
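The image itself can be sanity-checked directly, something like this (using the sofia/sofia:local tag from the compose file below, and assuming no custom ENTRYPOINT gets in the way):

$ docker build -t sofia/sofia:local .
$ docker run --rm sofia/sofia:local ls -la /app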

The (old) docker-compose setup

Historically I've used a docker-compose file to run this application and all its services. I map my local directory to the container's working directory (see volumes: below). This is a nice convenience when working locally since all changes are reflected "live" inside the container:

# docker-compose.yml
services:
  sofia:
    build:
      context: .
      args:
        RAILS_ENV: development
    environment:
      DATABASE_URL: postgres://postgres:sekrit@postgres/
    image: sofia/sofia:local
    ports:
      - # ...
    volumes:
      - .:/app  # <---- HERE

Building the k8s resource file with kompose

To run this on minikube, I use kompose, the conversion tool the Kubernetes project provides, to transform my docker-compose file into a k8s resource file that can be applied to the cluster.

$ kompose convert --file docker-compose.yml --out k8s.yml --with-kompose-annotation=false
WARN Volume mount on the host "/Users/jeeves/git/jeeves/sofia" isn't supported - ignoring path on the host
INFO Kubernetes file "k8s.yml" created

As you can see, it warns that my local host-path volume cannot be carried over. This makes sense, since a k8s deployment runs "remotely", so I just ignored the warning.
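For reference, the generated k8s.yml still defines a volume for the service. A rough sketch of the relevant Deployment fragment (the exact output depends on the kompose version and the --volumes flag, and may use a persistentVolumeClaim instead of an emptyDir):

# k8s.yml (fragment, abbreviated)
    spec:
      containers:
        - image: sofia/sofia:local
          name: sofia
          volumeMounts:
            - mountPath: /app
              name: sofia-claim0   # volume name generated by kompose
      volumes:
        - name: sofia-claim0
          emptyDir: {}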

Running

Finally, I apply the above resources with k8s / minikube:

minikube start
kubectl apply -f k8s.yml

I notice the sofia container keeps crashing and restarting, so I check the pod status and logs:

$ kubectl get pods
NAME                         READY   STATUS             RESTARTS   AGE
sofia-6668945bc8-x9267       0/1     CrashLoopBackOff   1          10s
postgres-fc84cbd4b-dqbrh     1/1     Running            0          10s
redis-cbff75fbb-znv88        1/1     Running            0          10s

$ kubectl logs pod/sofia-6668945bc8-x9267
Could not locate Gemfile or .bundle/ directory

That error is Ruby/Rails specific, but the underlying cause is that there are no files in the container! I can confirm this by entering the container and listing files with ls: the working directory is indeed empty.
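For the record, that check looks something like this (using the pod name from kubectl get pods above, and catching the pod while it is briefly running):

$ kubectl exec -it sofia-6668945bc8-x9267 -- ls -la /app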

Questions

  1. If the sofia/sofia:local image is correctly built with the COPY-ied file contents, why would they disappear when running the container on minikube?
  2. What should I do to ensure my files get copied over correctly?

Thanks!

CodePudding user response:

The issue is that volumes do not behave the same way in docker-compose and in Kubernetes, and kompose can't translate them perfectly. In Docker with docker-compose, your declared bind mount (.:/app) keeps the existing files from the host directory visible in the container, while in k8s the generated volume is created empty and shadows the content that was COPY-ied into the image at /app.

There is no direct k8s equivalent of a docker-compose volume that keeps existing files, so you will have to work around it with one of the following options, depending on what makes sense for your use case (sketches for the first two options follow the list):

  • leverage ConfigMaps to add your files to the app volume (if needed, use subPath). This is probably fine for a handful of config files, if that's all that needs to live in your app directory when the container starts
  • in your Dockerfile, COPY the files to a staging directory such as /app-tmp, then have your entrypoint script copy them from /app-tmp into the /app volume at startup
  • refactor your application so that the existing files live in one directory (say "app1", without a volume), while another directory (say "app2") starts empty and is used as your volume
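A minimal sketch of the ConfigMap option, assuming a hypothetical sofia-config ConfigMap holding a single database.yml; mounting it with subPath injects just that file without shadowing the rest of the directory:

apiVersion: v1
kind: ConfigMap
metadata:
  name: sofia-config          # hypothetical name, for illustration
data:
  database.yml: |
    # file contents here

# ...and in the Deployment's pod spec (fragment):
    containers:
      - name: sofia
        image: sofia/sofia:local
        volumeMounts:
          - name: config
            mountPath: /app/config/database.yml   # single file via subPath
            subPath: database.yml
    volumes:
      - name: config
        configMap:
          name: sofia-config

And a sketch of the copy-at-startup option; entrypoint.sh is a hypothetical script baked into the image:

# Dockerfile
WORKDIR /app
COPY . /app-tmp/
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]

#!/bin/sh
# entrypoint.sh - populate the (initially empty) /app volume from the image copy
cp -a /app-tmp/. /app/
exec "$@"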