The Kubernetes Dashboard provides a web-based interface to a Kubernetes cluster. The steps below are partially based on the https://github.com/kubernetes/dashboard/blob/master/docs/user/installation.md article.
First install the HAProxy ingress controller, following https://haproxy-ingress.github.io/docs/getting-started/.
1. Download https://haproxy-ingress.github.io/resources/haproxy-ingress.yaml. Review its content and modify it if necessary.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-controller
  namespace: ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: ingress-controller
  namespace: ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - get
      - create
      - update
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
      - create
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-controller
subjects:
  - kind: ServiceAccount
    name: ingress-controller
    namespace: ingress-controller
  - kind: User
    name: ingress-controller
    apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: ingress-controller
  namespace: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-controller
subjects:
  - kind: ServiceAccount
    name: ingress-controller
    namespace: ingress-controller
  - kind: User
    name: ingress-controller
    apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
  namespace: ingress-controller
spec:
  selector:
    matchLabels:
      run: ingress-default-backend
  template:
    metadata:
      labels:
        run: ingress-default-backend
    spec:
      containers:
        - name: ingress-default-backend
          image: docker.repo.com/google-containers/defaultbackend:1.0
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-default-backend
  namespace: ingress-controller
spec:
  ports:
    - port: 8080
  selector:
    run: ingress-default-backend
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: ingress-controller
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: ingress-controller
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      hostNetwork: true
      nodeSelector:
        role: ingress-controller
      serviceAccountName: ingress-controller
      containers:
        - name: haproxy-ingress
          image: quay.io/jcmoraisjr/haproxy-ingress
          args:
            - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
            - --configmap=$(POD_NAMESPACE)/haproxy-ingress
            - --sort-backends
            # allow reading secrets from other namespaces
            # - --allow-cross-namespace
            # - --default-ssl-certificate=$(POD_NAMESPACE)/secretname
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
            - name: stat
              containerPort: 1936
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10253
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
2. Apply the changes
kubectl create namespace ingress-controller
kubectl apply -f haproxy-ingress.yaml
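To confirm that the controller came up, it may help to inspect the ingress-controller namespace. The commands below are a hedged sketch; the label selector matches the run: haproxy-ingress label used in the DaemonSet above.

```shell
# List the haproxy-ingress DaemonSet pods and the default backend;
# all pods should eventually reach the Running state.
kubectl get pods -n ingress-controller -o wide

# If a pod stays in CrashLoopBackOff, tail the controller logs.
kubectl logs -n ingress-controller -l run=haproxy-ingress --tail=50
```

Note that the DaemonSet pods are scheduled only on nodes carrying the role=ingress-controller label, which is applied in the next step.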
3. Label the node where the ingress controller will run with its role
kubectl label node <node-name> role=ingress-controller
A prerequisite for the Kubernetes Dashboard deployment is a metrics-server. Deploy it by applying the following manifest:
cat metrics-server.yaml

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
        # mount in tmp so we can safely use from-scratch images and/or read-only containers
        - name: tmp-dir
          emptyDir: {}
        - name: ca-cert
          hostPath:
            path: /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
            type: File
      containers:
        - name: metrics-server
          image: docker.repo.com/google_containers/metrics-server-amd64:v0.3.4
          imagePullPolicy: IfNotPresent
          command:
            - /metrics-server
            - --kubelet-insecure-tls
            # - --kubelet-certificate-authority=/etc/ssl/certs/ca-certificates.crt
          volumeMounts:
            - name: tmp-dir
              mountPath: /tmp
            - name: ca-cert
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
    - port: 443
      protocol: TCP
      targetPort: 443

kubectl apply -f metrics-server.yaml
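Once the metrics-server pod is running, metrics usually become available within a minute or two. A quick check might look like this (the label selector matches the k8s-app: metrics-server label from the manifest above):

```shell
# Confirm the metrics-server pod is up.
kubectl get pods -n kube-system -l k8s-app=metrics-server

# These return node and pod resource usage once the metrics API is serving;
# an error here usually means the APIService registration has not settled yet.
kubectl top nodes
kubectl top pods -n kube-system
```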
Create an SSL certificate for the dashboard. Save the private key in ~/certs/dashboard.key and the certificate in ~/certs/dashboard.crt.
Decrypt the private key (the dashboard expects a key without a passphrase):
openssl rsa -in ~/certs/dashboard.key -out ~/certs/dashboard.key
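If no CA-issued certificate is at hand, a self-signed one can be generated for testing. This is only a sketch; the CN below assumes the example hostname used later in the Ingress, and a self-signed certificate should not be used in production.

```shell
# Illustrative only: generate a self-signed certificate and an
# unencrypted private key (-nodes), valid for one year.
mkdir -p ~/certs
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout ~/certs/dashboard.key \
  -out ~/certs/dashboard.crt \
  -subj "/CN=ds-k8s-dashboard.example.com"
```

Because -nodes produces an unencrypted key, the decryption step above is unnecessary in this case.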
Create the kubernetes-dashboard namespace
kubectl create namespace kubernetes-dashboard
Store the SSL certificate in a secret
kubectl create secret generic kubernetes-dashboard-certs --from-file=$HOME/certs -n kubernetes-dashboard
Download the dashboard manifest file from https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta5/aio/deploy/recommended.yaml and save it as ~/k8s-dashboard-recommended.yaml
Edit the file. In particular, comment out the tolerations and add an Ingress resource. Note the annotations section of the Ingress, where HAProxy settings can be specified.
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta5
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            #- --apiserver-host=https://kubernetes.default.svc.cluster.local:443
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      #tolerations:
      #  - key: node-role.kubernetes.io/master
      #    effect: NoSchedule
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.1
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      #tolerations:
      #  - key: node-role.kubernetes.io/master
      #    effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ds-k8s-dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
    # kubernetes.io/ingress.class: haproxy
spec:
  rules:
    - host: ds-k8s-dashboard.example.com
      http:
        paths:
          - backend:
              serviceName: kubernetes-dashboard
              servicePort: 443
Apply the file
kubectl apply -f ~/k8s-dashboard-recommended.yaml
Grant the default service account in the kubernetes-dashboard namespace the cluster-admin role
kubectl create clusterrolebinding kubernetes-dashboard-admin --clusterrole='cluster-admin' --serviceaccount='kubernetes-dashboard:default'
Get the secret name of the default service account in the kubernetes-dashboard namespace, extract its token, and save it in PMP under the custom attribute Token.
kubectl get serviceaccount default -n kubernetes-dashboard -o yaml
kubectl describe secrets <secret-name-from-above> -n kubernetes-dashboard
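Instead of copying the token out of the describe output by hand, it can be extracted directly with jsonpath. This sketch assumes the default service account has exactly one token secret:

```shell
# Look up the secret attached to the default service account.
SECRET_NAME=$(kubectl get serviceaccount default -n kubernetes-dashboard \
  -o jsonpath='{.secrets[0].name}')

# Decode and print the token (secret data is base64-encoded).
kubectl get secret "$SECRET_NAME" -n kubernetes-dashboard \
  -o jsonpath='{.data.token}' | base64 --decode
```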
To redirect port 80 to port 443, modify the HAProxy configuration at ds-kube-server-proxy:/opt/haproxy/etc/haproxy.conf, verify the file, and reload haproxy on both servers.
vi /opt/haproxy/etc/haproxy.conf

frontend redirect-http
    mode http
    bind :80
    redirect scheme https if !{ ssl_fc }

frontend https
    timeout client 86400000
    bind :443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    default_backend kubernetes-ingress

backend kubernetes-ingress
    server <ingress-controller-node-name> <ingress-controller-node-ip-addr>:443 check-ssl verify none

/opt/haproxy/sbin/haproxy -c -f /opt/haproxy/etc/haproxy.conf
systemctl reload haproxy
Authenticate to the Kubernetes Dashboard using the token extracted in the previous step.
To uninstall Kubernetes dashboard, simply run
kubectl delete -f ~/k8s-dashboard-recommended.yaml
and remove the relevant lines from haproxy.conf, then reload the haproxy service.
You might also need to remove the SSL certificate secret, the kubernetes-dashboard namespace, and the kubernetes-dashboard-admin cluster role binding using kubectl commands.
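The remaining cleanup might look like the following; the object names match those created in the steps above.

```shell
# Remove the cluster-wide binding first (it is not namespaced,
# so deleting the namespace does not remove it).
kubectl delete clusterrolebinding kubernetes-dashboard-admin

# Deleting the namespace also removes the certificate secret
# and anything else left inside it.
kubectl delete namespace kubernetes-dashboard
```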