Setup Kubeflow on minikube: No resources found in kubeflow namespace


When I try to set up Kubeflow with minikube on my local desktop (Ubuntu 20.04) and run kubectl create -f bootstrapper.yaml according to this official document, I get the following AlreadyExists error.

Error from server (AlreadyExists): error when creating "bootstrapper.yaml": namespaces "kubeflow-admin" already exists
Error from server (AlreadyExists): error when creating "bootstrapper.yaml": persistentvolumeclaims "kubeflow-ksonnet-pvc" already exists
[unable to recognize "bootstrapper.yaml": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1", unable to recognize "bootstrapper.yaml": no matches for kind "StatefulSet" in version "apps/v1beta2"]

After deleting the namespace and the persistent volume and running the same command, I got these version error messages.

namespace/kubeflow-admin created
persistentvolumeclaim/kubeflow-ksonnet-pvc created
unable to recognize "bootstrapper.yaml": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
unable to recognize "bootstrapper.yaml": no matches for kind "StatefulSet" in version "apps/v1beta2"

Thus, I changed the apiVersions of ClusterRoleBinding and StatefulSet to v1 according to this.
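
The edit only touches the two apiVersion lines in bootstrapper.yaml; the rest of each object stays as in the original manifest quoted further down. A sketch:

# ClusterRoleBinding: v1beta1 was removed, the stable group is rbac.authorization.k8s.io/v1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding

# StatefulSet: apps/v1beta2 was removed, the stable group is apps/v1
apiVersion: apps/v1
kind: StatefulSet

Re-running kubectl create -f bootstrapper.yaml then produced this error: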

persistentvolumeclaim/kubeflow-ksonnet-pvc created
statefulset.apps/kubeflow-bootstrapper created
Error from server (AlreadyExists): error when creating "bootstrapper.yaml": clusterrolebindings.rbac.authorization.k8s.io "kubeflow-cluster-admin" already exists

So I also deleted the clusterrolebinding kubeflow-cluster-admin and re-ran kubectl create -f bootstrapper.yaml.
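
The cleanup and retry is just (standard kubectl; the binding name comes from the manifest):

kubectl delete clusterrolebinding kubeflow-cluster-admin
kubectl create -f bootstrapper.yaml

This time I got the expected result: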

namespace/kubeflow-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubeflow-cluster-admin created
persistentvolumeclaim/kubeflow-ksonnet-pvc created
statefulset.apps/kubeflow-bootstrapper created

I checked that the namespaces had been created with kubectl get ns:

NAME                   STATUS   AGE
default                Active   8h
kube-node-lease        Active   8h
kube-public            Active   8h
kube-system            Active   8h
kubeflow-admin         Active   60s
kubernetes-dashboard   Active   8h

But I got No resources found in kubeflow namespace. from kubectl -n kubeflow get svc.
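
For what it's worth, the bootstrapper itself runs in the kubeflow-admin namespace, so its pod status and logs can be inspected roughly like this (the pod name kubeflow-bootstrapper-0 is assumed from the StatefulSet name and should be confirmed against the get pods output):

kubectl -n kubeflow-admin get pods
kubectl -n kubeflow-admin logs kubeflow-bootstrapper-0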

I already checked this post. I waited a long time, but I did not get any results.

When I run docker images, there is no gcr.io/kubeflow-images-public/bootstrapper:v0.2.0, so it seems the bootstrap failed.
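
Note that a minikube cluster usually pulls images into its own container runtime rather than the host's Docker daemon, so two ways to list what the cluster itself has are (a sketch; docker-env only applies when minikube runs on the Docker runtime):

minikube image ls
# or point the host docker CLI at minikube's daemon first
eval $(minikube docker-env)
docker images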

Original bootstrapper.yaml

---
# Namespace for bootstrapper
apiVersion: v1
kind: Namespace
metadata:
  name: kubeflow-admin
---
# Make kubeflow-admin admin
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubeflow-cluster-admin
subjects:
  - kind: ServiceAccount
    name: default
    namespace: kubeflow-admin
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
# Store ksonnet apps
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kubeflow-ksonnet-pvc
  namespace: kubeflow-admin
  labels:
    app: kubeflow-ksonnet
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: kubeflow-bootstrapper
  namespace: kubeflow-admin
spec:
  selector:
    matchLabels:
      app: kubeflow-bootstrapper
  serviceName: kubeflow-bootstrapper
  template:
    metadata:
      name: kubeflow-bootstrapper
      labels:
        app: kubeflow-bootstrapper
    spec:
      containers:
      - name: kubeflow-bootstrapper
        image: gcr.io/kubeflow-images-public/bootstrapper:v0.2.0
        workingDir: /opt/bootstrap
        command: [ "/opt/kubeflow/bootstrapper"]
        args: [
          "--in-cluster",
          "--namespace=kubeflow",
          "--apply",
          # change config here if you want to use customized config.
          # "--config=/opt/kubeflow/default.yaml"
          # app-dir: path to store your ks apps in pod's PersistentVolume
          "--app-dir=/opt/bootstrap/default"
          ]
        volumeMounts:
        - name: kubeflow-ksonnet-pvc
          mountPath: /opt/bootstrap
      volumes:
      - name: kubeflow-ksonnet-pvc
        persistentVolumeClaim:
          claimName: kubeflow-ksonnet-pvc

CodePudding user response:

Summary

  • The document I followed is deprecated; I realized that the site's version was v0-2.

  • I followed this Japanese document. hack/setup-kubeflow.sh is an installation script that is not mentioned in the kubeflow/manifests repository, and this was the breakthrough for me.

  • I read this document carefully and found that there was a compatibility prerequisite, and the tools I had installed did not meet those requirements.

  • Versions and branch that succeeded for me (the corresponding kubeflow/manifests install command is sketched after this list):

Ubuntu 20.04
minikube v1.26.1
kustomize v3.2.0
kubectl v1.21.14
kubeflow/manifests v1.6-branch
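
For reference, with the versions above, the kubeflow/manifests repository documents a kustomize-based single-command install. It looks roughly like this (a sketch based on the v1.6-branch README; treat the exact retry loop as an assumption):

git clone -b v1.6-branch https://github.com/kubeflow/manifests.git
cd manifests
# apply the whole example stack, retrying until all CRDs are registered
while ! kustomize build example | kubectl apply -f -; do
  echo "Retrying to apply resources"
  sleep 10
done

Once everything is up, the components appear in the kubeflow namespace, which is what kubectl -n kubeflow get svc was checking above.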
