How to automate a Kubernetes cluster build in OpenStack using a Heat (HOT) template

This article builds on the Kubernetes cluster build process described in How to build a Kubernetes cluster in OpenStack for orchestrating Docker containers. It uses the OpenStack Heat module and its Heat Orchestration Templates (HOT) to automate the entire process, substantially reducing the build time. The procedure targets the HOT specification for Pike: https://docs.openstack.org/heat/pike/template_guide/hot_spec.html

Step-by-step guide

1. Create a YAML manifest for the Kubernetes cluster template. Note that creating flavors does not appear to be allowed for non-admin user accounts, so the template references existing flavors.
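
The flavor names and image ID used as parameter defaults below must already exist in the project. Where the openstack CLI is available (see the caveat in step 2), they can be looked up as follows; otherwise the same information is visible in Horizon:

  openstack flavor list
  openstack image list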

 heat_template_version: pike
   
 description: >
   The template to install a Kubernetes cluster consisting of 3 master nodes,
   3 worker nodes and 2 haproxy load balancers with keepalived, plus a build server.
   
 parameters:
   primary_network:
     type: string
     label: Primary network
     default: my-network
   nfs_network:
     type: string
     label: NFS network
     default: my-nfs-network
   dns_domain:
     type: string
     label: DNS domain name
     default: example.com
   proxy_server_name:
     type: string
     label: haproxy server name prefix
     default: kube-server-proxy
   proxy_volume_size:
     type: string
     label: Proxy volume size
     default: 16
   proxy_flavor:
     type: string
     label: OpenStack proxy server flavor
     default: public.1c.2r.0d
   master_server_name:
     type: string
     label: Kubernetes master server name prefix
     default: kube-master
   master_volume_size:
     type: string
     label: Master volume size
     default: 16
   master_flavor:
     type: string
     label: OpenStack master server flavor
     default: public.2c.3r.0d
   worker_server_name:
     type: string
     label: Kubernetes worker server name prefix
     default: kube-worker
   worker_volume_size:
     type: string
     label: Worker volume size
     default: 64
   worker_flavor:
     type: string
     label: OpenStack worker server flavor
     default: public.4c.8r.0d
   build_server_name:
     type: string
     label: build server name
     default: build-server
   build_volume_size:
     type: string
     label: Build server volume size
     default: 16
   build_flavor:
     type: string
     label: OpenStack build server flavor
     default: public.2c.8r.0d
   proxy_virt_ip:
     label: Virtual IP address of proxy
     type: string
     default: 10.1.1.10
   email_addr:
     label: Email address for keepalived notifications
     type: string
     default: supports@example.com
   email_gateway:
     label: SMTP gateway
     type: string
     default: gateway.example.com
   docker_repo:
     label: Docker repository
     type: string
     default: docker.example.com
   kube_version:
     label: Kubernetes version to install
     type: string
     default: 1.16.3-0.x86_64
   os_image:
     label: OS image for all the servers
     type: string
     default: 1f68a6dd-a2cf-48e8-8e4b-1dce5e9542cd
      
 outputs:
   ssh_private_key:
     description: ssh private key for the generated keypair; None, if public key is supplied
     value: { get_attr: [ssh_key, private_key] }
   build_server_ip:
     description: IP address of the build server
     value: { get_attr: [build-server, addresses, { get_param: primary_network }, 0, addr] }
   proxy_port_ip_1:
     description: IP address of proxy_port_1
     value: { get_attr: [proxy_port_1, fixed_ips, 0, ip_address] }
   proxy_port_ip_2:
     description: IP address of proxy_port_2
     value: { get_attr: [proxy_port_2, fixed_ips, 0, ip_address] }
   master_port_ip_1:
     description: IP address of master_port_1
     value: { get_attr: [master_port_1, fixed_ips, 0, ip_address] }
   master_port_ip_2:
     description: IP address of master_port_2
     value: { get_attr: [master_port_2, fixed_ips, 0, ip_address] }
   master_port_ip_3:
     description: IP address of master_port_3
     value: { get_attr: [master_port_3, fixed_ips, 0, ip_address] }
   worker_port_ip_1:
     description: IP address of worker_port_1
     value: { get_attr: [worker_port_1, fixed_ips, 0, ip_address] }
   worker_port_ip_2:
     description: IP address of worker_port_2
     value: { get_attr: [worker_port_2, fixed_ips, 0, ip_address] }
   worker_port_ip_3:
     description: IP address of worker_port_3
     value: { get_attr: [worker_port_3, fixed_ips, 0, ip_address] }
     
 resources:
   ssh_key:
     type: OS::Nova::KeyPair
     properties:
       name: esportz
       save_private_key: True
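   # one bootable Cinder volume per server, created from the common OS image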
   proxy-vol-1:
     type: OS::Cinder::Volume
     properties:
       size: { get_param: proxy_volume_size }
       image: { get_param: os_image }
   proxy-vol-2:
     type: OS::Cinder::Volume
     properties:
       size: { get_param: proxy_volume_size }
       image: { get_param: os_image }
   master-vol-1:
     type: OS::Cinder::Volume
     properties:
       size: { get_param: master_volume_size }
       image: { get_param: os_image }
   master-vol-2:
     type: OS::Cinder::Volume
     properties:
       size: { get_param: master_volume_size }
       image: { get_param: os_image }
   master-vol-3:
     type: OS::Cinder::Volume
     properties:
       size: { get_param: master_volume_size }
       image: { get_param: os_image }
   worker-vol-1:
     type: OS::Cinder::Volume
     properties:
       size: { get_param: worker_volume_size }
       image: { get_param: os_image }
   worker-vol-2:
     type: OS::Cinder::Volume
     properties:
       size: { get_param: worker_volume_size }
       image: { get_param: os_image }
   worker-vol-3:
     type: OS::Cinder::Volume
     properties:
       size: { get_param: worker_volume_size }
       image: { get_param: os_image }
   build-vol:
     type: OS::Cinder::Volume
     properties:
       size: { get_param: build_volume_size }
       image: { get_param: os_image }
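   # one Neutron port per NIC; port security is disabled (needed e.g. for the keepalived virtual IP to work)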
   proxy_port_1:
     type: OS::Neutron::Port
     properties:
       network: { get_param: primary_network }
       security_groups: []
       port_security_enabled: false
   proxy_port_2:
     type: OS::Neutron::Port
     properties:
       network: { get_param: primary_network }
       security_groups: []
       port_security_enabled: false
   master_port_1:
     type: OS::Neutron::Port
     properties:
       network: { get_param: primary_network }
       security_groups: []
       port_security_enabled: false
   master_port_2:
     type: OS::Neutron::Port
     properties:
       network: { get_param: primary_network }
       security_groups: []
       port_security_enabled: false
   master_port_3:
     type: OS::Neutron::Port
     properties:
       network: { get_param: primary_network }
       security_groups: []
       port_security_enabled: false
   worker_port_1:
     type: OS::Neutron::Port
     properties:
       network: { get_param: primary_network }
       security_groups: []
       port_security_enabled: false
   worker_nfs_port_1:
     type: OS::Neutron::Port
     properties:
       network: { get_param: nfs_network }
       security_groups: []
       port_security_enabled: false
   worker_port_2:
     type: OS::Neutron::Port
     properties:
       network: { get_param: primary_network }
       security_groups: []
       port_security_enabled: false
   worker_nfs_port_2:
     type: OS::Neutron::Port
     properties:
       network: { get_param: nfs_network }
       security_groups: []
       port_security_enabled: false
   worker_port_3:
     type: OS::Neutron::Port
     properties:
       network: { get_param: primary_network }
       security_groups: []
       port_security_enabled: false
   worker_nfs_port_3:
     type: OS::Neutron::Port
     properties:
       network: { get_param: nfs_network }
       security_groups: []
       port_security_enabled: false
   build_port:
     type: OS::Neutron::Port
     properties:
       network: { get_param: primary_network }
       security_groups: []
       port_security_enabled: false
   build_nfs_port:
     type: OS::Neutron::Port
     properties:
       network: { get_param: nfs_network }
       security_groups: []
       port_security_enabled: false
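   # cloud-init payloads for the two haproxy/keepalived VMs (MASTER and BACKUP keepalived roles)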
   proxy_boot_config_1:
     type: OS::Heat::CloudConfig
     depends_on: [master_port_1]
     properties:
       cloud_config:
         write_files:
         - path: /etc/rsyslog.conf
           permissions: '0644'
           owner: root:root
           content: |
             $ModLoad imuxsock
             $ModLoad imjournal
             $WorkDirectory /var/lib/rsyslog
             $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
             $IncludeConfig /etc/rsyslog.d/*.conf
             $OmitLocalLogging off
             $IMJournalStateFile imjournal.state
             *.info;mail.none;authpriv.none;cron.none                /var/log/messages
             authpriv.*                                              /var/log/secure
             mail.*                                                  -/var/log/maillog
             cron.*                                                  /var/log/cron
             *.emerg                                                 :omusrmsg:*
             uucp,news.crit                                          /var/log/spooler
             local7.*                                                /var/log/boot.log
             local6.*                                                /var/log/haproxy.log
         - path: /etc/logrotate.d/syslog
           permissions: '0644'
           owner: root:root
           content: |
             /var/log/cron
             /var/log/maillog
             /var/log/messages
             /var/log/secure
             /var/log/spooler
             /var/log/haproxy.log
             {
               missingok
               sharedscripts
               postrotate
               /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
               endscript
             }
         - path: /etc/systemd/system/haproxy.service
           permissions: '0644'
           owner: root:root
           content: |
             [Unit]
             Description=HAProxy Server
             [Service]
             Restart=on-failure
             ExecStart=/opt/haproxy/sbin/haproxy -f /opt/haproxy/etc/haproxy.conf
             ExecStop=/bin/pkill -SIGTERM haproxy
             ExecReload=/bin/pkill -SIGUSR2 haproxy
             [Install]
             WantedBy=default.target
         - path: /opt/haproxy/etc/haproxy.conf
           permissions: '0644'
           owner: root:root
           content:
             str_replace:
               template: |
                 global
                   master-worker
                   log /run/systemd/journal/syslog local6
                   tune.ssl.default-dh-param 2048
                 defaults
                   mode tcp
                   log global
                   timeout connect 5000ms
                   timeout server 5000ms
                   timeout client 5000ms
                   option http-server-close
                 frontend kube-endpoint
                   bind :6443
                   default_backend kube-servers
                 backend kube-servers
                   balance source
                   server $master-1.$domain $master_ip_1:6443 check-ssl verify none
               params:
                 $master: { get_param: master_server_name }
                 $domain: { get_param: dns_domain }
                 $master_ip_1: { get_attr: [master_port_1, fixed_ips, 0, ip_address] }
         - path: /etc/keepalived/keepalived.conf
           permissions: '0640'
           owner: root:root
           content:
             str_replace:
               template: |
                 ! Configuration File for keepalived
                 global_defs {
                   notification_email {
                     $email_address
                   }
                   notification_email_from $email_address
                   smtp_server $email_gateway
                   smtp_connect_timeout 30
                   smtp_alerts true
                   smtp_alert_checker true
                   checker_log_all_failures true
                   enable_script_security
                   script_user root root
                 }
                 vrrp_script chk_haproxy {
                   script "/bin/killall -0 haproxy"
                   interval 2
                 }
                 vrrp_instance VI_1 {
                   state MASTER #BACKUP on the other haproxy VM
                   interface eth0
                   virtual_router_id 51
                   priority 100 #50 on BACKUP keepalived instance
                   advert_int 1
                   authentication {
                     auth_type PASS
                     auth_pass CHANGEME
                   }
                   virtual_ipaddress {
                     $virtual_ip
                   }
                   track_script {
                     chk_haproxy
                   }
                 }
               params:
                 $virtual_ip: { get_param: proxy_virt_ip }
                 $email_address: { get_param: email_addr }
                 $email_gateway: { get_param: email_gateway }
         - path: /etc/sysctl.conf
           permissions: '0644'
           owner: root:root
           content: |
             net.ipv6.conf.all.disable_ipv6 = 1
             net.ipv6.conf.default.disable_ipv6 = 1
             net.ipv4.ip_nonlocal_bind = 1
         yum_repos:
           pulp-system:
             baseurl:
               str_replace:
                 template: http://pulp.$domain/pulp/repos/CentOS/7.6.1810/os/x86_64/20190111081705/
                 params:
                   $domain: { get_param: dns_domain }
             name: system
             enabled: true
             gpgcheck: false
           pulp-updates:
             baseurl:
               str_replace:
                 template: http://pulp.$domain/pulp/repos/CentOS/7.6.1810/updates/x86_64/20190714214303/
                 params:
                   $domain: { get_param: dns_domain }
             name: updates
             enabled: true
             gpgcheck: false
           pulp-extras:
             baseurl:
               str_replace:
                 template: http://pulp.$domain/pulp/repos/CentOS/7/extras/x86_64/20190321090343/
                 params:
                   $domain: { get_param: dns_domain }
             name: extras
             enabled: true
             gpgcheck: false
           pulp-epel:
             baseurl:
               str_replace:
                 template: http://pulp.$domain/pulp/repos/CentOS/7/epel/x86_64/20190328045100/
                 params:
                   $domain: { get_param: dns_domain }
             name: epel
             enabled: true
             gpgcheck: false
         packages:
           - wget
           - openssl-devel
           - pcre-devel
           - zlib-devel
           - systemd-devel
           - libnl3-devel
           - ipset-devel
           - iptables-devel
           - file-devel
           - net-snmp-devel
           - glib2-devel
           - json-c-devel
           - pcre2-devel
           - libnftnl-devel
           - libmnl-devel
           - python-sphinx
           - python-sphinx_rtd_theme
           - bind-utils
           - mlocate
           - tcpdump
           - telnet
         package_update: true
   proxy_boot_config_2:
     type: OS::Heat::CloudConfig
     depends_on: [master_port_1, master_port_2, master_port_3]
     properties:
       cloud_config:
         write_files:
         - path: /etc/rsyslog.conf
           permissions: '0644'
           owner: root:root
           content: |
             $ModLoad imuxsock
             $ModLoad imjournal
             $WorkDirectory /var/lib/rsyslog
             $ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
             $IncludeConfig /etc/rsyslog.d/*.conf
             $OmitLocalLogging off
             $IMJournalStateFile imjournal.state
             *.info;mail.none;authpriv.none;cron.none                /var/log/messages
             authpriv.*                                              /var/log/secure
             mail.*                                                  -/var/log/maillog
             cron.*                                                  /var/log/cron
             *.emerg                                                 :omusrmsg:*
             uucp,news.crit                                          /var/log/spooler
             local7.*                                                /var/log/boot.log
             local6.*                                                /var/log/haproxy.log
         - path: /etc/logrotate.d/syslog
           permissions: '0644'
           owner: root:root
           content: |
             /var/log/cron
             /var/log/maillog
             /var/log/messages
             /var/log/secure
             /var/log/spooler
             /var/log/haproxy.log
             {
               missingok
               sharedscripts
               postrotate
               /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
               endscript
             }
         - path: /etc/systemd/system/haproxy.service
           permissions: '0644'
           owner: root:root
           content: |
             [Unit]
             Description=HAProxy Server
             [Service]
             Restart=on-failure
             ExecStart=/opt/haproxy/sbin/haproxy -f /opt/haproxy/etc/haproxy.conf
             ExecStop=/bin/pkill -SIGTERM haproxy
             ExecReload=/bin/pkill -SIGUSR2 haproxy
             [Install]
             WantedBy=default.target
         - path: /opt/haproxy/etc/haproxy.conf
           permissions: '0644'
           owner: root:root
           content:
             str_replace:
               template: |
                 global
                   master-worker
                   log /run/systemd/journal/syslog local6
                   tune.ssl.default-dh-param 2048
                 defaults
                   mode tcp
                   log global
                   timeout connect 5000ms
                   timeout server 5000ms
                   timeout client 5000ms
                   option http-server-close
                 frontend kube-endpoint
                   bind :6443
                   default_backend kube-servers
                 backend kube-servers
                   balance source
                   server $master-1.$domain $master_ip_1:6443 check-ssl verify none
               params:
                 $master: { get_param: master_server_name }
                 $domain: { get_param: dns_domain }
                 $master_ip_1: { get_attr: [master_port_1, fixed_ips, 0, ip_address] }
         - path: /etc/keepalived/keepalived.conf
           permissions: '0640'
           owner: root:root
           content:
             str_replace:
               template: |
                 ! Configuration File for keepalived
                 global_defs {
                   notification_email {
                     $email_address
                   }
                   notification_email_from $email_address
                   smtp_server $email_gateway
                   smtp_connect_timeout 30
                   smtp_alerts true
                   smtp_alert_checker true
                   checker_log_all_failures true
                   enable_script_security
                   script_user root root
                 }
                 vrrp_script chk_haproxy {
                   script "/bin/killall -0 haproxy"
                   interval 2
                 }
                 vrrp_instance VI_1 {
                   state BACKUP #MASTER on the other haproxy VM
                   interface eth0
                   virtual_router_id 51
                   priority 50 #100 on MASTER keepalived instance
                   advert_int 1
                   authentication {
                     auth_type PASS
                     auth_pass CHANGEME
                   }
                   virtual_ipaddress {
                     $virtual_ip
                   }
                   track_script {
                     chk_haproxy
                   }
                 }
               params:
                 $virtual_ip: { get_param: proxy_virt_ip }
                 $email_address: { get_param: email_addr }
                 $email_gateway: { get_param: email_gateway }
         - path: /etc/sysctl.conf
           permissions: '0644'
           owner: root:root
           content: |
             net.ipv6.conf.all.disable_ipv6 = 1
             net.ipv6.conf.default.disable_ipv6 = 1
             net.ipv4.ip_nonlocal_bind = 1
         yum_repos:
           pulp-system:
             baseurl:
               str_replace:
                 template: http://pulp.$domain/pulp/repos/CentOS/7.6.1810/os/x86_64/20190111081705/
                 params:
                   $domain: { get_param: dns_domain }
             name: system
             enabled: true
             gpgcheck: false
           pulp-updates:
             baseurl:
               str_replace:
                 template: http://pulp.$domain/pulp/repos/CentOS/7.6.1810/updates/x86_64/20190714214303/
                 params:
                   $domain: { get_param: dns_domain }
             name: updates
             enabled: true
             gpgcheck: false
           pulp-extras:
             baseurl:
               str_replace:
                 template: http://pulp.$domain/pulp/repos/CentOS/7/extras/x86_64/20190321090343/
                 params:
                   $domain: { get_param: dns_domain }
             name: extras
             enabled: true
             gpgcheck: false
           pulp-epel:
             baseurl:
               str_replace:
                 template: http://pulp.$domain/pulp/repos/CentOS/7/epel/x86_64/20190328045100/
                 params:
                   $domain: { get_param: dns_domain }
             name: epel
             enabled: true
             gpgcheck: false
         packages:
           - wget
           - openssl-devel
           - pcre-devel
           - zlib-devel
           - systemd-devel
           - libnl3-devel
           - ipset-devel
           - iptables-devel
           - file-devel
           - net-snmp-devel
           - glib2-devel
           - json-c-devel
           - pcre2-devel
           - libnftnl-devel
           - libmnl-devel
           - python-sphinx
           - python-sphinx_rtd_theme
           - bind-utils
           - mlocate
           - tcpdump
           - telnet
         package_update: true
   proxy_boot_script:
     type: OS::Heat::SoftwareConfig
     properties:
       config: |
         #!/bin/bash
         lvextend -l +100%FREE /dev/mapper/VolGroup00-root
         resize2fs /dev/mapper/VolGroup00-root
         rm -f /etc/yum.repos.d/pulp-*
         yum -y update
         touch /var/log/haproxy.log
         systemctl restart rsyslog
         yum -y groupinstall "Development Tools"
         wget http://www.haproxy.org/download/1.8/src/haproxy-1.8.22.tar.gz
         gunzip haproxy-1.8.22.tar.gz
         tar -xf haproxy-1.8.22.tar
         cd haproxy-1.8.22
         make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1
         make install PREFIX=/opt/haproxy-1.8.22 MANDIR=/usr/share/man DOCDIR=/usr/share/doc/haproxy
         cd ..
         ln -s /opt/haproxy-1.8.22/sbin /opt/haproxy/sbin
         systemctl enable haproxy
         systemctl start haproxy
         wget https://www.keepalived.org/software/keepalived-2.0.19.tar.gz
         gunzip keepalived-2.0.19.tar.gz
         tar -xf keepalived-2.0.19.tar
         cd keepalived-2.0.19
         ./configure --prefix=/opt/keepalived-2.0.19 --datarootdir=/usr/share --enable-snmp --enable-snmp-vrrp --enable-snmp-checker --enable-snmp-rfc --enable-dbus --enable-dbus-create-instance
         make
         make install
         ln -s /opt/keepalived-2.0.19 /opt/keepalived
         sysctl net.ipv4.ip_nonlocal_bind=1
         systemctl enable keepalived
         systemctl start keepalived
         updatedb
   proxy_init_1:
     type: OS::Heat::MultipartMime
     properties:
       parts:
       - config: { get_resource: proxy_boot_config_1 }
       - config: { get_resource: proxy_boot_script }
   proxy_init_2:
     type: OS::Heat::MultipartMime
     properties:
       parts:
       - config: { get_resource: proxy_boot_config_2 }
       - config: { get_resource: proxy_boot_script }
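   # common cloud-init payload for all Kubernetes nodes: Docker daemon config, yum repos, SSH key and the flannel manifest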
   k8s_boot_config:
     type: OS::Heat::CloudConfig
     depends_on: ssh_key
     properties:
       cloud_config:
         write_files:
         - path: /etc/docker/daemon.json
           permissions: '0644'
           owner: root:root
           content:
             str_replace:
               template: |
                 {
                   "exec-opts": ["native.cgroupdriver=systemd"],
                   "log-driver": "json-file",
                   "log-opts": {
                     "max-size": "100m"
                   },
                   "storage-driver": "overlay2",
                   "storage-opts": ["overlay2.override_kernel_check=true"],
                   "registry-mirrors": ["http://$docker_repo"]
                 }
               params:
                 $docker_repo: { get_param: docker_repo }
         - path: /etc/default/grub
           permissions: '0644'
           owner: root:root
           content: |
             GRUB_TIMEOUT=5
             GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
             GRUB_DEFAULT=saved
             GRUB_DISABLE_SUBMENU=true
             GRUB_TERMINAL_OUTPUT="console"
             GRUB_CMDLINE_LINUX="nofb console=tty0 console=ttyS0,115200n8 splash=quiet crashkernel=auto rd.lvm.lv=VolGroup00/root rhgb quiet"
             GRUB_DISABLE_RECOVERY="true"
         - path: /etc/sysctl.conf
           permissions: '0644'
           owner: root:root
           content: |
             vm.swappiness = 0
         - path: /home/esportz/.ssh/id_rsa
           permissions: '0400'
           owner: esportz:esportz
           content:
             str_replace:
               template: |
                 $ssh_key
               params:
                 $ssh_key: { get_attr: [ssh_key, private_key] }
         - path: /etc/kube-flannel.yml
           permissions: '0644'
           owner: root:root
           content: |
             ---
             apiVersion: policy/v1beta1
             kind: PodSecurityPolicy
             metadata:
               name: psp.flannel.unprivileged
               annotations:
                 seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
                 seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
                 apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
                 apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
             spec:
               privileged: false
               volumes:
                 - configMap
                 - secret
                 - emptyDir
                 - hostPath
               allowedHostPaths:
                 - pathPrefix: "/etc/cni/net.d"
                 - pathPrefix: "/etc/kube-flannel"
                 - pathPrefix: "/run/flannel"
               readOnlyRootFilesystem: false
               # Users and groups
               runAsUser:
                 rule: RunAsAny
               supplementalGroups:
                 rule: RunAsAny
               fsGroup:
                 rule: RunAsAny
               # Privilege Escalation
               allowPrivilegeEscalation: false
               defaultAllowPrivilegeEscalation: false
               # Capabilities
               allowedCapabilities: ['NET_ADMIN']
               defaultAddCapabilities: []
               requiredDropCapabilities: []
               # Host namespaces
               hostPID: false
               hostIPC: false
               hostNetwork: true
               hostPorts:
               - min: 0
                 max: 65535
               # SELinux
               seLinux:
                 # SELinux is unused in CaaSP
                 rule: 'RunAsAny'
             ---
             kind: ClusterRole
             apiVersion: rbac.authorization.k8s.io/v1beta1
             metadata:
               name: flannel
             rules:
               - apiGroups: ['extensions']
                 resources: ['podsecuritypolicies']
                 verbs: ['use']
                 resourceNames: ['psp.flannel.unprivileged']
               - apiGroups:
                   - ""
                 resources:
                   - pods
                 verbs:
                   - get
               - apiGroups:
                   - ""
                 resources:
                   - nodes
                 verbs:
                   - list
                   - watch
               - apiGroups:
                   - ""
                 resources:
                   - nodes/status
                 verbs:
                   - patch
             ---
             kind: ClusterRoleBinding
             apiVersion: rbac.authorization.k8s.io/v1beta1
             metadata:
               name: flannel
             roleRef:
               apiGroup: rbac.authorization.k8s.io
               kind: ClusterRole
               name: flannel
             subjects:
             - kind: ServiceAccount
               name: flannel
               namespace: kube-system
             ---
             apiVersion: v1
             kind: ServiceAccount
             metadata:
               name: flannel
               namespace: kube-system
             ---
             kind: ConfigMap
             apiVersion: v1
             metadata:
               name: kube-flannel-cfg
               namespace: kube-system
               labels:
                 tier: node
                 app: flannel
             data:
               cni-conf.json: |
                 {
                   "cniVersion": "0.2.0",
                   "name": "cbr0",
                   "plugins": [
                     {
                       "type": "flannel",
                       "delegate": {
                         "hairpinMode": true,
                         "isDefaultGateway": true
                       }
                     },
                     {
                       "type": "portmap",
                       "capabilities": {
                         "portMappings": true
                       }
                     }
                   ]
                 }
               net-conf.json: |
                 {
                   "Network": "10.244.0.0/16",
                   "Backend": {
                     "Type": "host-gw"
                   }
                 }
             ---
             apiVersion: apps/v1
             kind: DaemonSet
             metadata:
               name: kube-flannel-ds-amd64
               namespace: kube-system
               labels:
                 tier: node
                 app: flannel
             spec:
               selector:
                 matchLabels:
                   app: flannel
               template:
                 metadata:
                   labels:
                     tier: node
                     app: flannel
                 spec:
                   affinity:
                     nodeAffinity:
                       requiredDuringSchedulingIgnoredDuringExecution:
                         nodeSelectorTerms:
                           - matchExpressions:
                               - key: beta.kubernetes.io/os
                                 operator: In
                                 values:
                                   - linux
                               - key: beta.kubernetes.io/arch
                                 operator: In
                                 values:
                                   - amd64
                   hostNetwork: true
                   tolerations:
                   - operator: Exists
                     effect: NoSchedule
                   serviceAccountName: flannel
                   initContainers:
                   - name: install-cni
                     image: quay.io/coreos/flannel:v0.11.0-amd64
                     command:
                     - cp
                     args:
                     - -f
                     - /etc/kube-flannel/cni-conf.json
                     - /etc/cni/net.d/10-flannel.conflist
                     volumeMounts:
                     - name: cni
                       mountPath: /etc/cni/net.d
                     - name: flannel-cfg
                       mountPath: /etc/kube-flannel/
                   containers:
                   - name: kube-flannel
                     image: quay.io/coreos/flannel:v0.11.0-amd64
                     command:
                     - /opt/bin/flanneld
                     args:
                     - --ip-masq
                     - --kube-subnet-mgr
                     resources:
                       requests:
                         cpu: "100m"
                         memory: "50Mi"
                       limits:
                         cpu: "100m"
                         memory: "50Mi"
                     securityContext:
                       privileged: false
                       capabilities:
                          add: ["NET_ADMIN"]
                     env:
                     - name: POD_NAME
                       valueFrom:
                         fieldRef:
                           fieldPath: metadata.name
                     - name: POD_NAMESPACE
                       valueFrom:
                         fieldRef:
                           fieldPath: metadata.namespace
                     volumeMounts:
                     - name: run
                       mountPath: /run/flannel
                     - name: flannel-cfg
                       mountPath: /etc/kube-flannel/
                   volumes:
                     - name: run
                       hostPath:
                         path: /run/flannel
                     - name: cni
                       hostPath:
                         path: /etc/cni/net.d
                     - name: flannel-cfg
                       configMap:
                         name: kube-flannel-cfg
         yum_repos:
           pulp-system:
             baseurl:
               str_replace:
                 template: http://pulp.$domain/pulp/repos/CentOS/7.6.1810/os/x86_64/20190111081705/
                 params:
                   $domain: { get_param: dns_domain }
             name: system
             enabled: true
             gpgcheck: false
           pulp-updates:
             baseurl:
               str_replace:
                 template: http://pulp.$domain/pulp/repos/CentOS/7.6.1810/updates/x86_64/20190714214303/
                 params:
                   $domain: { get_param: dns_domain }
             name: updates
             enabled: true
             gpgcheck: false
           pulp-extras:
             baseurl:
               str_replace:
                 template: http://pulp.$domain/pulp/repos/CentOS/7/extras/x86_64/20190321090343/
                 params:
                   $domain: { get_param: dns_domain }
             name: extras
             enabled: true
             gpgcheck: false
           pulp-epel:
             baseurl:
               str_replace:
                 template: http://pulp.$domain/pulp/repos/CentOS/7/epel/x86_64/20190328045100/
                 params:
                   $domain: { get_param: dns_domain }
             name: epel
             enabled: true
             gpgcheck: false
           kubernetes:
             baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
             name: Kubernetes
             enabled: true
             gpgcheck: true
             repo_gpgcheck: true
             gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
             exclude: kube*
           docker:
             baseurl: https://download.docker.com/linux/centos/7/x86_64/stable
             name: Docker
             enabled: true
             gpgcheck: true
             gpgkey: https://download.docker.com/linux/centos/gpg
             exclude: docker*
         packages:
           - bind-utils
           - mlocate
           - tcpdump
           - telnet
           - bash-completion
         package_update: true
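   # boot script for the first master only: runs kubeadm init, then joins the remaining masters and workers over SSH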
   k8s_boot_script_master_1:
     type: OS::Heat::SoftwareConfig
     properties:
       config:
         str_replace:
           template: |
             #!/bin/bash
             swapoff -a
             sleep 3s
             lvremove -A -y /dev/VolGroup00/swap
             lvextend -l +100%FREE /dev/mapper/VolGroup00-root
             resize2fs /dev/mapper/VolGroup00-root
             rm -f /etc/yum.repos.d/pulp-*
             yum -y update
             #k8s 1.16.x is only tested with docker 18.09
             yum -y install docker-ce-18.09.9-3.el7.x86_64 docker-ce-cli-18.09.9-3.el7.x86_64 containerd.io kubelet-$kube_version kubeadm-$kube_version kubectl-$kube_version --disableexcludes=all
             systemctl enable docker
             systemctl start docker
             systemctl enable firewalld
             systemctl start firewalld
             firewall-cmd --permanent --direct --add-rule ipv4 filter IN_public_allow 0 -p tcp --dport 2379 -m conntrack --ctstate NEW -j ACCEPT
             firewall-cmd --permanent --direct --add-rule ipv4 filter IN_public_allow 0 -p tcp --dport 2380 -m conntrack --ctstate NEW -j ACCEPT
             firewall-cmd --permanent --direct --add-rule ipv4 filter IN_public_allow 0 -p tcp --dport 6443 -m conntrack --ctstate NEW -j ACCEPT
             firewall-cmd --permanent --direct --add-rule ipv4 filter IN_public_allow 0 -p tcp --dport 10250 -m conntrack --ctstate NEW -j ACCEPT
             firewall-cmd --permanent --direct --add-rule ipv4 filter IN_public_allow 0 -p tcp --dport 10251 -m conntrack --ctstate NEW -j ACCEPT
             firewall-cmd --permanent --direct --add-rule ipv4 filter IN_public_allow 0 -p tcp --dport 10252 -m conntrack --ctstate NEW -j ACCEPT
             #workaround flannel bug
             firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -d 10.244.0.0/16 -j ACCEPT
             firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -s 10.244.0.0/16 -j ACCEPT
             firewall-cmd --reload
             systemctl enable kubelet
             kubectl completion bash >/etc/bash_completion.d/kubectl
             grub2-mkconfig >/etc/grub2.cfg
             sysctl vm.swappiness=0
             sed -i '/swap/d' /etc/fstab  # redirecting grep output back into /etc/fstab would truncate it first
             kubeadm completion bash >/etc/bash_completion.d/kubeadm
             kubeadm init --control-plane-endpoint $cluster_ip:6443 --image-repository $docker_repo/google-containers --pod-network-cidr 10.244.0.0/16 --upload-certs |tee /var/log/kubeadm-init.log
             mkdir -p /home/esportz/.kube
             cp -i /etc/kubernetes/admin.conf /home/esportz/.kube/config
             chown esportz:esportz /home/esportz/.kube/config
             kubectl --kubeconfig='/etc/kubernetes/admin.conf' apply -f /etc/kube-flannel.yml
             JOIN_COMMAND_MASTER=`grep -A2 -m1 "kubeadm join" /var/log/kubeadm-init.log|tr -d '\\'`
             JOIN_COMMAND_WORKER=`grep -A1 -m1 "kubeadm join" /var/log/kubeadm-init.log|tr -d '\\'`
             ssh -i /home/esportz/.ssh/id_rsa -o StrictHostKeyChecking=no esportz@$master-2.$domain sudo $JOIN_COMMAND_MASTER |tee -a /var/log/kubeadm-init.log
             ssh -i /home/esportz/.ssh/id_rsa -o StrictHostKeyChecking=no esportz@$master-3.$domain sudo $JOIN_COMMAND_MASTER |tee -a /var/log/kubeadm-init.log
             ssh -i /home/esportz/.ssh/id_rsa -o StrictHostKeyChecking=no esportz@$worker-1.$domain sudo $JOIN_COMMAND_WORKER |tee -a /var/log/kubeadm-init.log
             ssh -i /home/esportz/.ssh/id_rsa -o StrictHostKeyChecking=no esportz@$worker-2.$domain sudo $JOIN_COMMAND_WORKER |tee -a /var/log/kubeadm-init.log
             ssh -i /home/esportz/.ssh/id_rsa -o StrictHostKeyChecking=no esportz@$worker-3.$domain sudo $JOIN_COMMAND_WORKER |tee -a /var/log/kubeadm-init.log
             ssh -i /home/esportz/.ssh/id_rsa -o StrictHostKeyChecking=no esportz@$proxy-1.$domain 'sudo echo -e "\n  server $master-2.$domain $master_ip_2:6443 check-ssl verify none\n  server $master-3.$domain $master_ip_3:6443 check-ssl verify none\n" |sudo tee -a /opt/haproxy/etc/haproxy.conf && sudo systemctl reload haproxy'
             ssh -i /home/esportz/.ssh/id_rsa -o StrictHostKeyChecking=no esportz@$proxy-2.$domain 'sudo echo -e "\n  server $master-2.$domain $master_ip_2:6443 check-ssl verify none\n  server $master-3.$domain $master_ip_3:6443 check-ssl verify none\n" |sudo tee -a /opt/haproxy/etc/haproxy.conf && sudo systemctl reload haproxy'
             #clusterrolebindings - might not be needed
             #kubectl --kubeconfig='/etc/kubernetes/admin.conf' create clusterrolebinding create-csrs-for-bootstrapping --clusterrole='system:node-bootstrapper' --group='system:bootstrappers'
             #kubectl --kubeconfig='/etc/kubernetes/admin.conf' create clusterrolebinding auto-approve-csrs-for-group --clusterrole='system:certificates.k8s.io:certificatesigningrequests:nodeclient' --group='system:bootstrappers'
             #kubectl --kubeconfig='/etc/kubernetes/admin.conf' create clusterrolebinding auto-approve-renewals-for-nodes --clusterrole='system:certificates.k8s.io:certificatesigningrequests:selfnodeclient' --group='system:nodes'
             updatedb
           params:
             $cluster_ip: { get_param: proxy_virt_ip }
             $docker_repo: { get_param: docker_repo }
             $proxy: { get_param: proxy_server_name }
             $master: { get_param: master_server_name }
             $worker: { get_param: worker_server_name }
             $domain: { get_param: dns_domain }
             $master_ip_2: { get_attr: [master_port_2, fixed_ips, 0, ip_address] }
             $master_ip_3: { get_attr: [master_port_3, fixed_ips, 0, ip_address] }
             $kube_version: { get_param: kube_version }
   k8s_boot_script_master:
     type: OS::Heat::SoftwareConfig
     properties:
       config:
         str_replace:
           template: |
             #!/bin/bash
             swapoff -a
             sleep 3s
             lvremove -A -y /dev/VolGroup00/swap
             lvextend -l +100%FREE /dev/mapper/VolGroup00-root
             resize2fs /dev/mapper/VolGroup00-root
             rm -f /etc/yum.repos.d/pulp-*
             yum -y update
             #k8s 1.16.x is only tested with docker 18.09
             yum -y install docker-ce-18.09.9-3.el7.x86_64 docker-ce-cli-18.09.9-3.el7.x86_64 containerd.io kubelet-$kube_version kubeadm-$kube_version kubectl-$kube_version --disableexcludes=all
             systemctl enable docker
             systemctl start docker
             systemctl enable firewalld
             systemctl start firewalld
             firewall-cmd --permanent --direct --add-rule ipv4 filter IN_public_allow 0 -p tcp --dport 2379 -m conntrack --ctstate NEW -j ACCEPT
             firewall-cmd --permanent --direct --add-rule ipv4 filter IN_public_allow 0 -p tcp --dport 2380 -m conntrack --ctstate NEW -j ACCEPT
             firewall-cmd --permanent --direct --add-rule ipv4 filter IN_public_allow 0 -p tcp --dport 6443 -m conntrack --ctstate NEW -j ACCEPT
             firewall-cmd --permanent --direct --add-rule ipv4 filter IN_public_allow 0 -p tcp --dport 10250 -m conntrack --ctstate NEW -j ACCEPT
             firewall-cmd --permanent --direct --add-rule ipv4 filter IN_public_allow 0 -p tcp --dport 10251 -m conntrack --ctstate NEW -j ACCEPT
             firewall-cmd --permanent --direct --add-rule ipv4 filter IN_public_allow 0 -p tcp --dport 10252 -m conntrack --ctstate NEW -j ACCEPT
             #workaround flannel bug
             firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -d 10.244.0.0/16 -j ACCEPT
             firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -s 10.244.0.0/16 -j ACCEPT
             firewall-cmd --reload
             systemctl enable kubelet
             kubectl completion bash >/etc/bash_completion.d/kubectl
             grub2-mkconfig >/etc/grub2.cfg
             sysctl vm.swappiness=0
             sed -i '/swap/d' /etc/fstab
             kubeadm completion bash >/etc/bash_completion.d/kubeadm
             updatedb
           params:
             $cluster_ip: { get_param: proxy_virt_ip }
             $docker_repo: { get_param: docker_repo }
             $kube_version: { get_param: kube_version }
   k8s_boot_script_worker:
     type: OS::Heat::SoftwareConfig
     properties:
       config:
         str_replace:
           template: |
             #!/bin/bash
             swapoff -a
             sleep 3s
             lvremove -A -y /dev/VolGroup00/swap
             lvextend -l +100%FREE /dev/mapper/VolGroup00-root
             resize2fs /dev/mapper/VolGroup00-root
             rm -f /etc/yum.repos.d/pulp-*
             yum -y update
             yum install -y docker-ce-18.09.9-3.el7.x86_64 docker-ce-cli-18.09.9-3.el7.x86_64 containerd.io kubelet-$kube_version kubeadm-$kube_version kubectl-$kube_version nfs-utils --disableexcludes=all
             systemctl enable docker
             systemctl start docker
             systemctl enable firewalld
             systemctl start firewalld
             firewall-cmd --permanent --direct --add-rule ipv4 filter IN_public_allow 0 -p tcp --dport 10250 -m conntrack --ctstate NEW -j ACCEPT
             firewall-cmd --permanent --direct --add-rule ipv4 filter IN_public_allow 0 -p tcp --dport 30000:32767 -m conntrack --ctstate NEW -j ACCEPT
             #workaround flannel bug
             firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -d 10.244.0.0/16 -j ACCEPT
             firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -s 10.244.0.0/16 -j ACCEPT
             #ACL for metrics-server
             firewall-cmd --permanent --direct --add-rule ipv4 filter IN_public_allow 0 -p tcp --dport 443 -m conntrack --ctstate NEW -j ACCEPT
             firewall-cmd --reload
             systemctl enable kubelet
             kubectl completion bash >/etc/bash_completion.d/kubectl
             grub2-mkconfig >/etc/grub2.cfg
             sysctl vm.swappiness=0
             sed -i '/swap/d' /etc/fstab
             updatedb
             echo -e '\nDenyGroups "QAS BI" "QAS BI Analytics" "QAS BI Analytics Admins" "QAS BI-Admins" "QAS BI-Analytics" "QAS BI-ITOS" "QAS Build"\n' >> /etc/ssh/sshd_config
             systemctl reload sshd
           params:
             $kube_version: { get_param: kube_version }
   k8s_init_master_1:
     type: OS::Heat::MultipartMime
     properties:
       parts:
       - config: { get_resource: k8s_boot_config }
       - config: { get_resource: k8s_boot_script_master_1 }
   k8s_init_master:
     type: OS::Heat::MultipartMime
     properties:
       parts:
       - config: { get_resource: k8s_boot_config }
       - config: { get_resource: k8s_boot_script_master }
   k8s_init_worker:
     type: OS::Heat::MultipartMime
     properties:
       parts:
       - config: { get_resource: k8s_boot_config }
       - config: { get_resource: k8s_boot_script_worker }
   build_boot_config:
     type: OS::Heat::CloudConfig
     properties:
       cloud_config:
         write_files:
         - path: /etc/docker/daemon.json
           permissions: '0644'
           owner: root:root
           content:
             str_replace:
               template: |
                 {
                   "exec-opts": ["native.cgroupdriver=systemd"],
                   "log-driver": "json-file",
                   "log-opts": {
                     "max-size": "100m"
                   },
                   "storage-driver": "overlay2",
                   "storage-opts": ["overlay2.override_kernel_check=true"],
                   "registry-mirrors": ["http://$docker_repo"]
                 }
               params:
                 $docker_repo: { get_param: docker_repo }
         - path: /etc/default/grub
           permissions: '0644'
           owner: root:root
           content: |
             GRUB_TIMEOUT=5
             GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
             GRUB_DEFAULT=saved
             GRUB_DISABLE_SUBMENU=true
             GRUB_TERMINAL_OUTPUT="console"
             GRUB_CMDLINE_LINUX="nofb console=tty0 console=ttyS0,115200n8 splash=quiet crashkernel=auto rd.lvm.lv=VolGroup00/root rhgb quiet"
             GRUB_DISABLE_RECOVERY="true"
         - path: /etc/sysctl.conf
           permissions: '0644'
           owner: root:root
           content: |
             vm.swappiness = 0
         yum_repos:
           pulp-system:
             baseurl:
               str_replace:
                 template: http://pulp.$domain/pulp/repos/CentOS/7.6.1810/os/x86_64/20190111081705/
                 params:
                   $domain: { get_param: dns_domain }
             name: system
             enabled: true
             gpgcheck: false
           pulp-updates:
             baseurl:
               str_replace:
                 template: http://pulp.$domain/pulp/repos/CentOS/7.6.1810/updates/x86_64/20190714214303/
                 params:
                   $domain: { get_param: dns_domain }
             name: updates
             enabled: true
             gpgcheck: false
           pulp-extras:
             baseurl:
               str_replace:
                 template: http://pulp.$domain/pulp/repos/CentOS/7/extras/x86_64/20190321090343/
                 params:
                   $domain: { get_param: dns_domain }
             name: extras
             enabled: true
             gpgcheck: false
           pulp-epel:
             baseurl:
               str_replace:
                 template: http://pulp.$domain/pulp/repos/CentOS/7/epel/x86_64/20190328045100/
                 params:
                   $domain: { get_param: dns_domain }
             name: epel
             enabled: true
             gpgcheck: false
           kubernetes:
             baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
             name: Kubernetes
             enabled: true
             gpgcheck: true
             repo_gpgcheck: true
             gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
             #exclude: kube*
           docker:
             baseurl: https://download.docker.com/linux/centos/7/x86_64/stable
             name: Docker
             enabled: true
             gpgcheck: true
             gpgkey: https://download.docker.com/linux/centos/gpg
         packages:
           #k8s 1.16.x is only tested with docker 18.09
           - docker-ce-18.09.9-3.el7.x86_64
           - docker-ce-cli-18.09.9-3.el7.x86_64
           - containerd.io
           - bind-utils
           - mlocate
           - tcpdump
           - telnet
           - kubectl
           - bash-completion
           - git
           - nfs-utils
         package_update: true
   build_boot_script:
     type: OS::Heat::SoftwareConfig
     properties:
       config: |
         #!/bin/bash
         swapoff -a
         sleep 3s
         lvremove -A -y /dev/VolGroup00/swap
         lvextend -l +100%FREE /dev/mapper/VolGroup00-root
         resize2fs /dev/mapper/VolGroup00-root
         rm -f /etc/yum.repos.d/pulp-*
         yum -y update
         systemctl enable docker
         systemctl start docker
         kubectl completion bash >/etc/bash_completion.d/kubectl
         grub2-mkconfig >/etc/grub2.cfg
         sysctl vm.swappiness=0
         sed -i '/swap/d' /etc/fstab
         updatedb
         echo -e '\nDenyGroups "QAS BI" "QAS BI Analytics" "QAS BI Analytics Admins" "QAS BI-Admins" "QAS BI-Analytics" "QAS BI-ITOS" "QAS Build"\n' >> /etc/ssh/sshd_config
         systemctl reload sshd
   build_server_init:
     type: OS::Heat::MultipartMime
     properties:
       parts:
       - config: { get_resource: build_boot_config }
       - config: { get_resource: build_boot_script }
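   # server instances: each boots from its Cinder volume and receives the matching cloud-init payload via user_data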
   proxy-1:
     type: OS::Nova::Server
     depends_on: [ssh_key, proxy_port_1, proxy-vol-1, proxy_init_1]
     properties:
       name:
         str_replace:
           template: $proxy-1.$domain
           params:
             $proxy: { get_param: proxy_server_name }
             $domain: { get_param: dns_domain }
       flavor: { get_param: proxy_flavor }
       key_name: { get_resource: ssh_key }
       networks:
       - network: { get_param: primary_network }
         port: { get_resource: proxy_port_1 }
       block_device_mapping:
       - device_name: vda
         volume_id: { get_resource: proxy-vol-1 }
         delete_on_termination: false
       user_data_format: SOFTWARE_CONFIG
       user_data: { get_resource: proxy_init_1 }
   proxy-2:
     type: OS::Nova::Server
     depends_on: [ssh_key, proxy_port_2, proxy-vol-2, proxy_init_2]
     properties:
       name:
         str_replace:
           template: $proxy-2.$domain
           params:
             $proxy: { get_param: proxy_server_name }
             $domain: { get_param: dns_domain }
       flavor: { get_param: proxy_flavor }
       key_name: { get_resource: ssh_key }
       networks:
       - network: { get_param: primary_network }
         port: { get_resource: proxy_port_2 }
       block_device_mapping:
       - device_name: vda
         volume_id: { get_resource: proxy-vol-2 }
         delete_on_termination: false
       user_data_format: SOFTWARE_CONFIG
       user_data: { get_resource: proxy_init_2 } 
   master-1:
     type: OS::Nova::Server
     depends_on: [ssh_key, master_port_1, master-vol-1, proxy-1, proxy-2, master-2, master-3, worker-1, worker-2, worker-3, k8s_init_master_1]
     properties:
       name:
         str_replace:
           template: $master-1.$domain
           params:
             $master: { get_param: master_server_name }
             $domain: { get_param: dns_domain }
       flavor: { get_param: master_flavor }
       key_name: { get_resource: ssh_key }
       networks:
       - network: { get_param: primary_network }
         port: { get_resource: master_port_1 }
       block_device_mapping:
       - device_name: vda
         volume_id: { get_resource: master-vol-1 }
         delete_on_termination: false
       user_data_format: SOFTWARE_CONFIG
       user_data: { get_resource: k8s_init_master_1 }
 #      scheduler_hints:
 #        group: { get_attr: [soft-ha, id] }
   master-2:
     type: OS::Nova::Server
     depends_on: [ssh_key, master_port_2, master-vol-2, k8s_init_master]
     properties:
       name:
         str_replace:
           template: $master-2.$domain
           params:
             $master: { get_param: master_server_name }
             $domain: { get_param: dns_domain }
       flavor: { get_param: master_flavor }
       key_name: { get_resource: ssh_key }
       networks:
       - network: { get_param: primary_network }
         port: { get_resource: master_port_2 }
       block_device_mapping:
       - device_name: vda
         volume_id: { get_resource: master-vol-2 }
         delete_on_termination: false
       user_data_format: SOFTWARE_CONFIG
       user_data: { get_resource: k8s_init_master }
 #      scheduler_hints:
 #        group: { get_attr: [soft-ha, id] }
   master-3:
     type: OS::Nova::Server
     depends_on: [ssh_key, master_port_3, master-vol-3, k8s_init_master]
     properties:
       name:
         str_replace:
           template: $master-3.$domain
           params:
             $master: { get_param: master_server_name }
             $domain: { get_param: dns_domain }
       flavor: { get_param: master_flavor }
       key_name: { get_resource: ssh_key }
       networks:
       - network: { get_param: primary_network }
         port: { get_resource: master_port_3 }
       block_device_mapping:
       - device_name: vda
         volume_id: { get_resource: master-vol-3 }
         delete_on_termination: false
       user_data_format: SOFTWARE_CONFIG
       user_data: { get_resource: k8s_init_master }
 #      scheduler_hints:
 #        group: { get_attr: [soft-ha, id] }
   worker-1:
     type: OS::Nova::Server
     depends_on: [ssh_key, worker_port_1, worker-vol-1, k8s_init_worker]
     properties:
       name:
         str_replace:
           template: $worker-1.$domain
           params:
             $worker: { get_param: worker_server_name }
             $domain: { get_param: dns_domain }
       flavor: { get_param: worker_flavor }
       key_name: { get_resource: ssh_key }
       networks:
       - network: { get_param: primary_network }
         port: { get_resource: worker_port_1 }
       - network: { get_param: nfs_network }
         port: { get_resource: worker_nfs_port_1 }
       block_device_mapping:
       - device_name: vda
         volume_id: { get_resource: worker-vol-1 }
         delete_on_termination: false
       user_data_format: SOFTWARE_CONFIG
       user_data: { get_resource: k8s_init_worker }
 #      scheduler_hints:
 #        group: { get_attr: [worker-ha, id] }
   worker-2:
     type: OS::Nova::Server
     depends_on: [ssh_key, worker_port_2, worker-vol-2, k8s_init_worker]
     properties:
       name:
         str_replace:
           template: $worker-2.$domain
           params:
             $worker: { get_param: worker_server_name }
             $domain: { get_param: dns_domain }
       flavor: { get_param: worker_flavor }
       key_name: { get_resource: ssh_key }
       networks:
       - network: { get_param: primary_network }
         port: { get_resource: worker_port_2 }
       - network: { get_param: nfs_network }
         port: { get_resource: worker_nfs_port_2 }
       block_device_mapping:
       - device_name: vda
         volume_id: { get_resource: worker-vol-2 }
         delete_on_termination: false
       user_data_format: SOFTWARE_CONFIG
       user_data: { get_resource: k8s_init_worker }
 #      scheduler_hints:
 #        group: { get_attr: [worker-ha, show, id] }
   worker-3:
     type: OS::Nova::Server
     depends_on: [ssh_key, worker_port_3, worker-vol-3, k8s_init_worker]
     properties:
       name:
         str_replace:
           template: $worker-3.$domain
           params:
             $worker: { get_param: worker_server_name }
             $domain: { get_param: dns_domain }
       flavor: { get_param: worker_flavor }
       key_name: { get_resource: ssh_key }
       networks:
       - network: { get_param: primary_network }
         port: { get_resource: worker_port_3 }
       - network: { get_param: nfs_network }
         port: { get_resource: worker_nfs_port_3 }
       block_device_mapping:
       - device_name: vda
         volume_id: { get_resource: worker-vol-3 }
         delete_on_termination: false
       user_data_format: SOFTWARE_CONFIG
       user_data: { get_resource: k8s_init_worker }
 #      scheduler_hints:
 #        group: { get_attr: [worker-ha, show, id] }
   build-server:
     type: OS::Nova::Server
     depends_on: [ssh_key, build_port, build-vol, build_server_init]
     properties:
       name:
         str_replace:
           template: $server_name.$domain
           params:
             $server_name: { get_param: build_server_name }
             $domain: { get_param: dns_domain }
       flavor: { get_param: build_flavor }
       key_name: { get_resource: ssh_key }
       networks:
       - network: { get_param: primary_network }
         port: { get_resource: build_port }
       - network: { get_param: nfs_network }
         port: { get_resource: build_nfs_port }
       block_device_mapping:
       - device_name: vda
         volume_id: { get_resource: build-vol }
         delete_on_termination: false
       user_data_format: SOFTWARE_CONFIG
       user_data: { get_resource: build_server_init }
 #      scheduler_hints:
 #        group: { get_attr: [soft-ha, id] }

2. Create the cluster with the openstack stack create command. Unfortunately the CLI is not available in our OpenStack deployments, so the stack has to be launched from the (far less convenient) Horizon dashboard instead.
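
For reference, where the CLI is available, creating the stack and reading its outputs would look roughly like this (the stack name k8s-cluster and the template file name kube-cluster.yaml are placeholders):

  # create the stack, overriding any parameter defaults as needed
  openstack stack create -t kube-cluster.yaml --parameter dns_domain=example.com k8s-cluster
  # watch progress
  openstack stack show k8s-cluster -c stack_status
  # retrieve the generated SSH private key and the node IPs from the outputs section
  openstack stack output show k8s-cluster ssh_private_key
  openstack stack output list k8s-cluster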

Notes: Stack builds might fail due to bugs in the old Pike release of OpenStack. In most cases the stack itself succeeds, but one or more instances get a server error 500 while trying to contact the metadata source at 169.254.169.254; check /var/log/cloud-init.log on the affected instance. Rebooting the instance usually resolves the issue but breaks the automation.
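
To confirm the metadata failure and recover (the instance name below is just an example):

  # on the affected instance: cloud-init logs its metadata service retries here
  grep -i '169.254.169.254' /var/log/cloud-init.log
  # then reboot the instance from Horizon, or with the CLI where available:
  openstack server reboot kube-worker-1.example.com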

This is a work in progress. The next steps are to add the Kubernetes dashboard and metrics-server, and possibly to move the configuration files into a Git repository for source control.