
Baremetal deployment

#Kubernetes cluster deployment

The recommended way to provision a baremetal Kubernetes cluster is Kubespray.

Make sure all of your future cluster nodes are accessible and have your SSH key provisioned.

To provision the cluster, perform the following steps:

  • git clone git@github.com:kubernetes-sigs/kubespray.git
  • cd kubespray
  • git checkout v2.10.4
  • pip3 install -r requirements.txt
  • cp -rfp inventory/sample inventory/*CLUSTER_NAME*
  • declare -a IPS=(*IPS_OF_MACHINES*)
  • CONFIG_FILE=inventory/*CLUSTER_NAME*/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
  • Change the value of kube_apiserver_ip in inventory/*CLUSTER_NAME*/group_vars/k8s-cluster/k8s-cluster.yml to the IP address of your preferred master node
  • Change the installation method of libselinux-python in roles/bootstrap-os/tasks/bootstrap-centos.yml from

package:
  name: libselinux-python
  state: present

to

raw:
  yum install libselinux-python
  • Enable the local_volume_provisioner_enabled field in inventory/*CLUSTER_NAME*/group_vars/k8s-cluster/addons.yml:
local_volume_provisioner_enabled: true
local_volume_provisioner_namespace: kube-system
local_volume_provisioner_storage_classes:
  local-storage:
    host_dir: /mnt/k8s-volumes
    mount_dir: /mnt/volumes
  • Change the IP in the inventory/*CLUSTER_NAME*/group_vars/k8s-cluster/k8s-cluster.yml file and enable persistent volumes:
kube_apiserver_ip: "*MASTER_NODE_IP*" # "{{ kube_service_addresses|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
persistent_volumes_enabled: true
  • ansible-playbook -i inventory/*CLUSTER_NAME*/hosts.yml --become --become-user=root -u root cluster.yml

  • Copy /etc/kubernetes/admin.conf from the master node to your host machine and configure kubectl to use it by running export KUBECONFIG=*path_to_your_config*; you can then verify cluster access as shown below
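
If the playbook finishes successfully, a quick way to check the result from your host machine is a sketch like the one below (it assumes kubectl is installed locally and KUBECONFIG points at the copied admin.conf):

    # Point kubectl at the admin config copied from the master node
    export KUBECONFIG=*path_to_your_config*
    # Every node should report the Ready status
    kubectl get nodes
    # The local volume provisioner add-on should have created the local-storage class
    kubectl get storageclass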

#Further infrastructure provisioning

Connect to your Kubernetes cluster and perform the following steps:

  • Run helm init --force-upgrade (on RBAC-enabled clusters, Tiller needs a service account first; see the sketch after this list)
  • Provision a MySQL database, either by running it in the cluster or on a dedicated instance (recommended); an in-cluster sketch is shown after this list
  • Provision an object storage server/bucket, either by using any S3-compliant service or by deploying [Minio](https://github.com/minio/minio) into the cluster
  • Prepare the deployment values for an Ingress Controller to load-balance all incoming traffic to the cluster: curl -o ingress.yml https://raw.githubusercontent.com/helm/charts/master/stable/nginx-ingress/values.yaml
  • Update the config file with the following values (shown as a diff against the defaults):
<   hostNetwork: false
---
>   hostNetwork: true
<   dnsPolicy: ClusterFirst
---
>   dnsPolicy: ClusterFirstWithHostNet
<   reportNodeInternalIp: false
---
>   reportNodeInternalIp: true
<     useHostPort: false
---
>     useHostPort: true
<       http: 80
<       https: 443
---
>       http: 30001
>       https: 30002
<     type: LoadBalancer
---
>     # type: LoadBalancer
<     # type: NodePort
<     # nodePorts:
<     #   http: 32080
<     #   https: 32443
<     #   tcp:
<     #     8080: 32808
---
>     type: NodePort
<       http: ""
<       https: ""
<       tcp: {}
<       udp: {}
---
>       http: 32080
>       https: 32443
  • Deploy the Ingress Controller using the specified values: helm install --name ingress stable/nginx-ingress -f ingress.yml (a verification sketch follows this list)
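
Clusters provisioned by Kubespray have RBAC enabled, so Tiller (Helm v2) usually needs a service account before helm init succeeds. A minimal sketch (the account and binding names are only examples):

    # Create a service account for Tiller and grant it cluster-admin
    kubectl -n kube-system create serviceaccount tiller
    kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
    # Install or upgrade Tiller using that service account
    helm init --service-account tiller --force-upgrade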
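
If you opt for the in-cluster variants of the database and object storage, one possible way to provision them is with the stable Helm charts; the chart values below are assumptions, so check each chart's documentation before use:

    # In-cluster MySQL (a dedicated external instance remains the recommended option)
    helm install --name mysql stable/mysql --set mysqlRootPassword=*MYSQL_ROOT_PASSWORD*
    # In-cluster Minio as an S3-compatible object storage
    helm install --name minio stable/minio --set accessKey=*MINIO_ACCESS_KEY*,secretKey=*MINIO_SECRET_KEY*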
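
Once the Ingress Controller chart is deployed, a quick check that the controller is running and exposes the expected NodePorts (the resource names below assume the release name ingress used above):

    # Controller pods should be in the Running state
    kubectl get pods -l app=nginx-ingress
    # The controller service should expose NodePorts 32080 and 32443
    kubectl get svc ingress-nginx-ingress-controller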

Congratulations, your infrastructure is now ready for deployment!

#Vault configuration

To configure the Vault backend (e.g. OVH OpenStack Swift), edit config/environments/*env*/vault.yml:

    storage:
      swift:
        auth_url: "https://auth.cloud.ovh.net/v2.0"
        container: "CONTAINER_NAME"
        username: changeme
        password: changeme
        tenant: "changeme"
        region: "REGION_NAME" # Should be uppercase
        tenant_id: "changeme"

You can find more information on the OpenStack Swift Vault storage backend in the Vault documentation.
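
Before pointing Vault at the container, it can be useful to confirm the Swift credentials themselves. A sketch using the python-swiftclient CLI (an assumption; any OpenStack client works):

    # List the target container with the same credentials Vault will use
    swift --auth-version 2 \
      --os-auth-url "https://auth.cloud.ovh.net/v2.0" \
      --os-username changeme \
      --os-password changeme \
      --os-tenant-name changeme \
      --os-region-name "REGION_NAME" \
      list "CONTAINER_NAME"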

#Database configuration

Barong database settings can be configured in config/environments/*env*/barong.yml:

    db:
      name: changeme
      user: changeme
      password: changeme
      host: changeme # SQL hostname (e.g. 42.1.33.7)
      port: "changeme" # Usually 3306
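
A quick way to confirm these credentials before deploying is to connect with the mysql client (assumed to be installed on your workstation; the *DB_** placeholders mirror the values above):

    # A successful login confirms host, port, user and password
    mysql -h *DB_HOST* -P 3306 -u *DB_USER* -p *DB_NAME*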

#Google Container Registry Access

To access GCR, create a service account with correct access rights and add a pull secret to the cluster:

kubectl create secret docker-registry pull-gcr --docker-server=https://gcr.io --docker-username=_json_key --docker-email=*YOUR_EMAIL* --docker-password="$(cat *PATH_TO_JSON_FILE*)" -n *deployment_id*-app

Add the secret name to the configuration of any component that needs to pull a private image: pullSecret: pull-gcr.
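
To confirm the secret exists in the application namespace, and optionally attach it to the namespace's default service account instead of setting per-component values (a generic Kubernetes alternative, not specific to this deployment):

    # Verify that the secret was created
    kubectl get secret pull-gcr -n *deployment_id*-app
    # Optional: make every pod in the namespace use the pull secret by default
    kubectl patch serviceaccount default -n *deployment_id*-app -p '{"imagePullSecrets": [{"name": "pull-gcr"}]}'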