Kubernetes supports multiple ways to authenticate users. The best ones might be an authenticating proxy, such as nginx with the nginx-auth-ldap module, or Keystone, when Kubernetes is integrated with OpenStack and the OpenStack cloud provider (including the Keystone plugin module) is installed. Another way is OpenID Connect, for example when Keycloak is available. In this article, I describe simple SSL certificate auth for kubectl, bearer tokens for the k8s dashboard, and an nginx proxy with the nginx-auth-ldap module.
The Kubernetes API server supports an authenticating proxy - see https://kubernetes.io/docs/reference/access-authn-authz/authentication/#authenticating-proxy. Basically, we set up an nginx server in front of the Kubernetes API server and instruct kubectl to connect to it instead of to the API server itself. The nginx server must authenticate to the API server before the API server trusts its X-Remote-User header. This is done with a client SSL certificate. By default, the API server is configured with its own front proxy CA and expects the CN of the client certificate signed by this CA to be "front-proxy-client".
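For reference, these are the request-header flags on the API server that make the proxy trusted; on a kubeadm-provisioned cluster they typically look like the following (verify against your own /etc/kubernetes/manifests/kube-apiserver.yaml):

--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
--requestheader-allowed-names=front-proxy-client
--requestheader-username-headers=X-Remote-User
--requestheader-group-headers=X-Remote-Group
--requestheader-extra-headers-prefix=X-Remote-Extra-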
Beware: the k8s dashboard currently supports authentication with bearer tokens only - https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
Here is an example of building an nginx server, https://kube-api-proxy.example.com:4443, with the nginx-auth-ldap module:
wget http://nginx.org/download/nginx-1.21.1.tar.gz
tar -zxf nginx-1.21.1.tar.gz
cd nginx-1.21.1
git clone https://github.com/kvspb/nginx-auth-ldap.git
sudo yum install openldap-devel openssl-devel
./configure --prefix=/opt/nginx-1.21.1 --with-http_ssl_module --add-module=nginx-auth-ldap
make
sudo make install
sudo groupadd --system nginx
sudo useradd --system --gid nginx --no-create-home --home-dir /nonexistent --comment "nginx user" --shell /sbin/nologin nginx
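Note that the build prefix is /opt/nginx-1.21.1 while the systemd unit and paths below use /opt/nginx. One way to reconcile the two (my assumption, not part of the original build) is a symlink:

sudo ln -sfn /opt/nginx-1.21.1 /opt/nginx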
Here is the systemd service file /etc/systemd/system/nginx.service:
[Unit]
Description=Nginx Proxy Server

[Service]
Type=forking
Restart=on-failure
ExecStart=/opt/nginx/sbin/nginx
ExecStop=/opt/nginx/sbin/nginx -s stop
ExecReload=/opt/nginx/sbin/nginx -s reload

[Install]
WantedBy=default.target
And here is the /opt/nginx/conf/nginx.conf file. In it, we proxy to a load-balancing haproxy running on the same host as nginx rather than directly to the API server, since a k8s cluster usually has more than a single API server.
user nginx;

events { }

http {
    ldap_server ldap.example.com {
        url ldaps://ldap.example.com:3269/DC=corp,DC=example,DC=com?sAMAccountName?sub?(objectClass=user);
        ssl_ca_file "/etc/ssl/certs/ldap-ca.crt";
        ssl_check_cert on;
        binddn "CN=k8-svc,OU=Service Accounts,DC=example,DC=com";
        binddn_passwd "*****";
        group_attribute member;
        group_attribute_is_dn on;
        satisfy any;
        require group "CN=auth-users,DC=Groups,DC=example,DC=com";
    }

    ldap_server ldap2.example.com {
        url ldaps://ldap.example.com:3269/DC=corp,DC=example,DC=com?sAMAccountName?sub?(objectClass=user);
        ssl_ca_file "/etc/ssl/certs/ldap-ca.crt";
        ssl_check_cert on;
        binddn "CN=k8-svc,OU=LDAP,OU=Service Accounts,DC=example,DC=com";
        binddn_passwd "*****";
        group_attribute member;
        group_attribute_is_dn on;
        satisfy any;
        require group "CN=auth-users,DC=Groups,DC=example,DC=com";
    }

    server {
        listen 4443 ssl;
        server_name kube-api-proxy.example.com;
        ssl_certificate tls.crt;
        ssl_certificate_key tls.key;

        location / {
            proxy_pass https://127.0.0.1:6443;
            proxy_redirect https://127.0.0.1:6443/ $scheme://$host:4443/;
            proxy_http_version 1.1;
            auth_ldap "Please authenticate with example.com account";
            auth_ldap_servers ldap.example.com ldap2.example.com;
            proxy_ssl_certificate client.crt;
            proxy_ssl_certificate_key client.key;
            proxy_set_header X-Remote-User $remote_user;
        }
    }
}
There seems to be a bug with “require group” - https://github.com/kvspb/nginx-auth-ldap/issues/194
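With the proxy up, the whole chain can be exercised with curl before touching kubectl; this assumes an LDAP account that is a member of the auth-users group:

curl -u username --cacert /etc/ssl/certs/ldap-ca.crt https://kube-api-proxy.example.com:4443/api/v1/namespaces/default/pods

curl prompts for the LDAP password; a JSON pod list back means both the LDAP authentication and the front-proxy client certificate are working.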
tls.crt is the server's SSL certificate issued by a PKI. The client certificate client.crt was generated on one of the k8s master nodes:
cd /etc/kubernetes/pki
openssl genrsa -out client.key 2048
openssl req -new -key client.key -out client.req -subj "/CN=front-proxy-client"

vi /etc/pki/tls/openssl.cnf

[ usr_cert ]
keyUsage = critical,digitalSignature,keyEncipherment
extendedKeyUsage = clientAuth

openssl x509 -req -in client.req -CA front-proxy-ca.crt -CAkey front-proxy-ca.key -out client.crt -days 365 -CAcreateserial -extfile /etc/pki/tls/openssl.cnf -extensions usr_cert
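It is worth sanity-checking the resulting certificate before pointing nginx at it (standard openssl commands):

openssl verify -CAfile front-proxy-ca.crt client.crt
openssl x509 -in client.crt -noout -subject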
And finally, we configure kubectl to work with our nginx proxy:
kubectl config set-cluster kubernetes-auth-proxy --server=https://kube-api-proxy.example.com:4443 --certificate-authority=/etc/ssl/certs/ldap-ca.crt --insecure-skip-tls-verify=false
kubectl config set-context username@kubernetes-auth-proxy --cluster=kubernetes-auth-proxy --user=username --namespace=default
kubectl config set-credentials username --username=username --password **********
kubectl config use-context username@kubernetes-auth-proxy
The /etc/ssl/certs/ldap-ca.crt file contains the PKI CA and subordinate CA certificates needed to validate nginx's server certificate.
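If your PKI hands out separate root and subordinate certificate files (the file names here are my assumption), the bundle is a simple concatenation, and it can be checked against the server certificate:

cat root-ca.crt subordinate-ca.crt > /etc/ssl/certs/ldap-ca.crt
openssl verify -CAfile /etc/ssl/certs/ldap-ca.crt tls.crt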
A last note regards the authorization part, which associates a username with a Kubernetes role via a role binding. Here is an example manifest.yaml:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: clusteradmin
rules:
- verbs:
  - '*'
  apiGroups:
  - '*'
  resources:
  - '*'
- verbs:
  - '*'
  nonResourceURLs:
  - '*'
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: clusteradmin
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: username
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: clusteradmin
which could be applied with
kubectl apply -f manifest.yaml
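If you have cluster-admin rights yourself, the binding can be verified with impersonation; kubectl auth can-i is a standard subcommand:

kubectl auth can-i '*' '*' --as=username   # expect "yes" once the binding is applied
kubectl auth can-i --list --as=username    # list everything the user may do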
Just don’t forget to enable and start nginx on the proxy server(s)
systemctl daemon-reload
systemctl enable nginx
systemctl start nginx
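Before starting, it does not hurt to validate the configuration with nginx's built-in check (the -t flag is standard nginx):

/opt/nginx/sbin/nginx -t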
Now to the simple SSL certificate authentication for kubectl mentioned at the beginning.

1. Generate an RSA private key and a certificate signing request (CSR)
openssl genrsa -out rsa.key 2048
openssl req -new -key rsa.key -out csr.pem -subj "/CN=<username>/O=<groupname>"
2. Modify the certificate extensions section of /etc/pki/tls/openssl.cnf and submit the CSR for signing to the k8s PKI admin
[ usr_cert ]
keyUsage = critical,digitalSignature,keyEncipherment
extendedKeyUsage = clientAuth
3. The k8s PKI admin signs the request with the CA certificate configured as kube-apiserver --client-ca-file=/etc/kubernetes/pki/ca.crt (and its key file ca.key) and returns username.crt to the user
openssl req -in csr.pem -text | grep Subject:   # to verify the subject
openssl x509 -req -in csr.pem -CA ca.crt -CAkey ca.key -out username.crt -days 365 -CAcreateserial -extfile /etc/pki/tls/openssl.cnf -extensions usr_cert
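The user can then check that the returned certificate chains to the cluster CA and carries the expected subject:

openssl verify -CAfile ca.crt username.crt
openssl x509 -in username.crt -noout -subject -dates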
4. Create a k8s Role (applicable only within a given namespace) or ClusterRole (works across all namespaces) and bind it to a user with <username> or a group with <groupname>. In this example, we use a ClusterRole and the group name <groupname>. Permissions are read-only, and reading k8s secrets is not allowed. One may also specify kind: User in the ClusterRoleBinding subjects and name: <username> from the SSL cert generated above
vi kubernetes-viewer.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: viewer
rules:
- apiGroups: [""]
  resources: ["componentstatuses","configmaps","endpoints","events","limitranges","namespaces","nodes","persistentvolumes","persistentvolumeclaims","pods","pods/log","podtemplates","replicationcontrollers","resourcequotas","services"]
  verbs: ["get", "watch", "list"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["watch", "list"]
- apiGroups: ["batch"]
  resources: ["cronjobs","jobs"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["apps"]
  resources: ["daemonsets","deployments","replicasets","statefulsets"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["networking.k8s.io","extensions"]
  resources: ["ingresses"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["metrics.k8s.io"]
  resources: ["nodes","pods"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["policy"]
  resources: ["poddisruptionbudgets","podsecuritypolicies"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["scheduling.k8s.io"]
  resources: ["priorityclasses"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["clusterroles","clusterrolebindings","rolebindings","roles"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["storage.k8s.io"]
  resources: ["csidrivers","csinodes","storageclasses","volumeattachments"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["admissionregistration.k8s.io"]
  resources: ["mutatingwebhookconfigurations","validatingwebhookconfigurations"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["apiregistration.k8s.io"]
  resources: ["apiservices"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["discovery.k8s.io"]
  resources: ["endpointslices"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: viewer
subjects:
- kind: Group
  name: <groupname>
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: viewer
  apiGroup: rbac.authorization.k8s.io

kubectl apply -f kubernetes-viewer.yaml
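Again, an impersonation check is a quick sanity test; here the group from the certificate subject is impersonated as well:

kubectl auth can-i list secrets --as=<username> --as-group=<groupname>   # expect "yes" (watch/list allowed)
kubectl auth can-i get secrets --as=<username> --as-group=<groupname>    # expect "no" (get on secrets is excluded)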
5. Create a .kube/config file and add the new user to it
kubectl config set-cluster kubernetes --server=https://kube-server-proxy.example.com:6443 --certificate-authority=ca.crt
kubectl config set-context --current --cluster=kubernetes --user=<username>
kubectl config set-credentials <username> --client-certificate=username.crt --client-key=rsa.key
6. Verify that the config works
kubectl get pods
The dashboard seems to accept only service account tokens. So:
1. Create a service account in the default namespace and bind it to the cluster role viewer that we created above
kubectl create serviceaccount viewer
kubectl create clusterrolebinding viewers --clusterrole="viewer" --serviceaccount="default:viewer"
2. Retrieve the token from the serviceaccount
kubectl get serviceaccounts viewer -o yaml
kubectl describe secrets <token-id-from-above-command>
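On older clusters the raw token can also be decoded straight from the secret; since Kubernetes 1.24, token secrets are no longer created automatically for service accounts, and a short-lived token can be requested instead (kubectl create token exists from 1.24 on):

kubectl get secret <token-id-from-above-command> -o jsonpath='{.data.token}' | base64 -d   # pre-1.24 clusters
kubectl create token viewer                                                                # 1.24+ clusters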
kubectl can also use tokens for auth
kubectl config set-credentials <username> --token=<token>