I'm trying to set up a database migration job for .NET Entity Framework. It seems that I cannot connect to the MySQL database service from a Kubernetes Job, although I can connect from my desktop when I forward the ports.
This is my working MySQL Service and Deployment:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
      targetPort: 3306
  selector:
    app: mysql
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:8.0
          name: mysql
          env:
            - name: MYSQL_DATABASE
              value: myDatabase
            - name: MYSQL_USER
              value: myUser
            - name: MYSQL_PASSWORD
              value: myPassword
            - name: MYSQL_ROOT_PASSWORD
              value: myRootPassword
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
I'm not pasting my actual persistent volume claim for brevity.
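For anyone reproducing this, a hypothetical minimal claim matching the claimName above would look roughly like the following; this is a sketch assuming minikube's default StorageClass, not the manifest I actually use:
# Hypothetical minimal PVC for reproduction only; assumes minikube's
# default StorageClass provisions the volume automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi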
This works: I'm able to connect to MySQL from my desktop after I run:
kubectl port-forward deployment/mysql 3306:3306
and connect through MySQL Workbench.
What I cannot do is run the migrations from a Job whose image is built from a Dockerfile containing the C# database project with the DbContext.
The job:
apiVersion: batch/v1
kind: Job
metadata:
  name: candles-downloader-db-migration
spec:
  backoffLimit: 0
  template:
    spec:
      containers:
        - name: candles-service-migration
          image: migration
          imagePullPolicy: Never
          env:
            - name: CONNECTION_STRING
              value: server=mysql.default.svc.cluster.local:3306;uid=myUser;pwd=myPassword;database=myDatabase
      restartPolicy: Never
As you can see, I'm passing the connection string via the environment variable CONNECTION_STRING.
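For context: since EF Core 5.0, dotnet ef database update --connection overrides whatever connection the context would otherwise use, so the tool only has to be able to instantiate the DbContext. A design-time factory along these lines is enough for that; this is a sketch assuming the Pomelo.EntityFrameworkCore.MySql provider and a context constructor that takes options, with the placeholder string replaced by --connection:
// Hypothetical design-time factory so dotnet ef can create the context.
// Assumes the Pomelo MySQL provider; the --connection value passed in the
// ENTRYPOINT overrides the placeholder connection string below.
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Design;

public class CandlesServiceDbContextFactory
    : IDesignTimeDbContextFactory<CandlesServiceDbContext>
{
    public CandlesServiceDbContext CreateDbContext(string[] args)
    {
        var options = new DbContextOptionsBuilder<CandlesServiceDbContext>()
            .UseMySql(
                "server=localhost;port=3306;database=placeholder", // overridden by --connection
                new MySqlServerVersion(new Version(8, 0, 21)))
            .Options;

        return new CandlesServiceDbContext(options);
    }
}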
Then there's the Dockerfile for the job:
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build-env
RUN dotnet tool install --global dotnet-ef --version 5.0.9
ENV PATH $PATH:/root/.dotnet/tools
WORKDIR /app
# Copy csproj and restore as distinct layers
COPY *.csproj .
RUN dotnet restore
# Copy everything else and build
COPY ./ .
# Shell form so $CONNECTION_STRING is expanded at run time; quoted so the
# semicolons in the connection string survive the shell.
ENTRYPOINT dotnet ef database update -v --connection "$CONNECTION_STRING"
I have the image built on my minikube cluster. When the job starts, the container receives the connection string. For debugging I used the -v flag for verbose output.
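Since the job uses imagePullPolicy: Never, the image has to already exist inside the cluster's own Docker daemon. For reference, one way to get it there, assuming the standard minikube docker-env workflow:
# Point the local docker CLI at minikube's Docker daemon,
# then build the image there so imagePullPolicy: Never can find it.
eval $(minikube docker-env)
docker build -t migration .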
Here is the output from the failed job (unimportant parts truncated for brevity):
kubectl logs candles-downloader-db-migration-gqndm
Finding IDesignTimeServices implementations in assembly 'Infrastructure.Persistence.Sql'...
No design-time services were found.
Migrating using database 'myDatabase' on server 'mysql.default.svc.cluster.local:3306'.
'CandlesServiceDbContext' disposed.
System.InvalidOperationException: An exception has been raised that is likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure()' to the 'UseMySql' call.
---> MySql.Data.MySqlClient.MySqlException (0x80004005): Unable to connect to any of the specified MySQL hosts.
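The message suggests wiring up retries, roughly like this (a sketch against the Pomelo UseMySql overload; optionsBuilder and connectionString stand for whatever the context is actually configured with), but a retry cannot help if the host never resolves:
// What the exception message suggests: opt in to transient-error retries.
// Sketch only; not the actual fix here, since the host cannot be resolved.
optionsBuilder.UseMySql(
    connectionString,
    new MySqlServerVersion(new Version(8, 0, 21)),
    mySqlOptions => mySqlOptions.EnableRetryOnFailure());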
I suspect the real problem is the connection string.
I used server=mysql.default.svc.cluster.local:3306;uid=myUser;pwd=myPassword;database=myDatabase
But I've tried different server values as well:
mysql.default.svc.cluster.local:3306
mysql.default.cluster.local:3306
mysql.svc.cluster.local:3306
mysql:3306
and even the cluster IP of the mysql service:
10.97.213.180:3306
None of them works. I always get this error in the job logs:
Unable to connect to any of the specified MySQL hosts.
Shouldn't the container in my Job's pod be able to see the MySQL server in the other pod through the Kubernetes Service? I thought so, but it looks like the service is "invisible" to it.
Answer:
I figured it out after reading the Kubernetes documentation on debugging DNS resolution: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
I installed dnsutils with the following command:
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
Then I was able to test whether my mysql service is discoverable by name:
kubectl exec -i -t dnsutils -- nslookup mysql
And it was. The output was:
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: mysql.default.svc.cluster.local
Address: 10.97.213.180
But when I appended the port to the service name, the lookup failed:
kubectl exec -i -t dnsutils -- nslookup mysql:3306
Server: 10.96.0.10
Address: 10.96.0.10#53
** server can't find mysql:3306: NXDOMAIN
command terminated with exit code 1
So, as I suspected, the error was in the connection string. DNS resolves bare host names only; the :3306 suffix was treated as part of the host name, so the client was trying to resolve a host literally named mysql:3306. The port has to go in its own key. I had to change from
server=mysql:3306; ...
to
server=mysql;port=3306; ...
and my migrations ran in the job.
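For completeness, the working env entry in the Job manifest:
env:
  - name: CONNECTION_STRING
    value: server=mysql;port=3306;uid=myUser;pwd=myPassword;database=myDatabase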