K8S Multi-Node Cluster on AWS Using Ansible and Launching a WordPress Application with a MySQL Database

deepak kapse
8 min read · Sep 6, 2022


Hey guys!! Back with another automation article. In this article, I am going to configure a Kubernetes Multi-Node Cluster over EC2 instances and launch a WordPress application with MySQL using Ansible.

Now, to start this project, we need to look at the required steps:

Steps to do this project:

  1. Launch three t2.micro ec2-instances on AWS using Ansible.
  2. Use a controller instance on AWS that has Ansible installed in it; from that instance, I provisioned the above three instances.
  3. Install Ansible on the controller (a quick way is shown below).
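If Ansible is not installed on the controller yet, a minimal way to set it up (assuming Amazon Linux 2 with Python 3; boto and boto3 are needed by the ec2 modules and the dynamic inventory script):

pip3 install ansible boto boto3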

So let’s launch the ec2-instances on AWS using Ansible.

Create a role
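A role is typically scaffolded with the ansible-galaxy command; for example, for the instance-launch role that the main playbook at the end refers to as launch_instances:

ansible-galaxy init launch_instances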

Launch ec2-instances

Now, after creating all the required services, we can launch any number of ec2-instances. Here I am launching three instances (two will behave as slaves and one as the master). All the instances use the t2.micro instance type, which comes under the free tier, so you can launch them free of cost. You need to specify the subnet id, i.e. the subnet in which the instances will be launched, and the group_id, i.e. the security group. You also need to mention the count of instances to launch, and the key pair in the key_name variable. I have used a loop at the end of the task that iterates over the os_Names variable, which I created inside the vars file. The os_Names list consists of three items, i.e. Master, Slave, Slave, so the loop runs three times and launches three instances respectively. A sketch of this task is shown below.
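The original post shows this task as a screenshot, so here is a minimal sketch of what it could look like, based on the options described above. It uses the classic ec2 module; region, ami_id, subnet_id, sg_id and key_name are illustrative variable names, not values from the original post.

---
# tasks file for launch_instances (a sketch, not the original screenshot)
- name: "launching ec2 instances"
  ec2:
    region: "{{ region }}"              # e.g. ap-south-1 (assumed)
    key_name: "{{ key_name }}"          # an existing AWS key pair
    instance_type: "t2.micro"
    image: "{{ ami_id }}"               # Amazon Linux 2 AMI for your region
    vpc_subnet_id: "{{ subnet_id }}"
    group_id: "{{ sg_id }}"             # security group for the cluster
    assign_public_ip: yes
    wait: yes
    count: 1
    state: present
    instance_tags:
      Name: "{{ item }}"                # Master / Slave -- these Name tags
                                        # drive the tag_Name_* inventory groups
  loop: "{{ os_Names }}"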

variables to launch ec2 instances:

As you can see in the above picture, the role was created successfully. After creating the role, you need to write the code inside the respective files: the vars folder keeps the variables and the tasks folder keeps the tasks.

As you can see, the os_Names variable is a list with three items, i.e. Master, Slave, Slave. These three items are the Name tags of the ec2-instances that we are going to launch.
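The variable file itself appears as a screenshot in the original; a minimal sketch of it would be:

---
# vars file for launch_instances
os_Names:
  - "Master"
  - "Slave"
  - "Slave"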

Let’s create three more roles, for master, slave and wordpress-mysql:
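These roles can be scaffolded the same way (the role names match the ones used in the main playbook at the end of the article):

ansible-galaxy init master
ansible-galaxy init slave
ansible-galaxy init wordpress-mysql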

As you can see in the above snap, the roles are created successfully. Along with the Master and Slave roles, we now also have the Wordpress-mysql role.

So let's configure the master and slaves which we launched above.

First, we have to write a playbook for the master. You need to perform the following steps in the respective playbook.

Steps for the configuration of master in kubernetes cluster:

  1. Install Docker (as we are using the Amazon Linux 2 image, we don’t need to configure a repo for Docker).
  2. Start Docker.
  3. Enable Docker.
  4. Configure the Kubernetes repo.
  5. Install kubeadm (it will automatically install kubectl and kubelet).
  6. Enable kubelet.
  7. Pull the required images using kubeadm.
  8. Change the Docker cgroup driver from cgroupfs to systemd.
  9. Restart Docker.
  10. Install iproute-tc.
  11. Set bridge-nf-call-iptables to 1.
  12. Initialize the master.
  13. Create the .kube directory.
  14. Copy /etc/kubernetes/admin.conf to $HOME/.kube/config.
  15. Change the owner permission of $HOME/.kube/config.
  16. Create the Flannel network.
  17. Generate the token.

Master main file

---
# tasks file for master

- name: "creating required directories"
  file:
    path: "{{ item }}"
    mode: "go=rwx"
    state: directory
  loop: "{{ dir }}"

- name: "configuring yum for kubeadm"
  yum_repository:
    name: "kubernetes"
    description: "repo for kubernetes"
    baseurl: "https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch"
    enabled: yes
    repo_gpgcheck: yes
    gpgcheck: yes
    gpgkey:
      - "{{ gpgkeys }}"
      - "https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg"

- name: "installing kubelet, kubeadm & kubectl"
  yum:
    name: "{{ softwares }}"
    disable_excludes: "kubernetes"
    state: present

- name: "installing docker and iproute-tc"
  package:
    name: "{{ item.name }}"
  loop:
    - { name: "docker" }
    - { name: "iproute-tc" }

- name: "starting and enabling docker and kubelet"
  service:
    name: "{{ item.name }}"
    enabled: yes
    state: started
  loop:
    - { name: "docker" }
    - { name: "kubelet" }

- name: "pulling images ..."
  shell:
    cmd: "kubeadm config images pull"

- name: "creating empty daemon.json and k8s.conf files"
  file:
    path: "{{ item.path }}"
    state: touch
  loop:
    - { path: "{{ CreateFile[0] }}" }
    - { path: "{{ CreateFile[1] }}" }

- name: "creating daemon.json file inside /etc/docker"
  shell: |
    cat <<EOF | sudo tee "{{ CreateFile[0] }}"
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
  changed_when: false

- name: "restarting docker"
  service:
    name: docker
    state: restarted

- name: "configuring ip tables..."
  blockinfile:
    path: "{{ CreateFile[1] }}"
    block: |
      net.bridge.bridge-nf-call-ip6tables=1
      net.bridge.bridge-nf-call-iptables=1
    state: present

- name: "loading the sysctl settings"
  shell: "sysctl --system"

- name: "initializing the cluster"
  shell: "sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem"
  ignore_errors: yes

- name: "creating the .kube directory"
  file:
    name: "$HOME/.kube"
    state: directory

- name: "making admin.conf readable"
  shell: "sudo chmod go=rwx /etc/kubernetes/admin.conf"

- name: "copying admin.conf to the user's .kube config"
  copy:
    src: /etc/kubernetes/admin.conf
    dest: $HOME/.kube/config
    remote_src: yes

- name: "changing ownership of .kube/config"
  shell: "sudo chown $(id -u):$(id -g) $HOME/.kube/config"

- name: "setting up the flannel network"
  shell: "sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml"

variables to Configure Master:

Variable File

---
# vars file for master
gpgkeys: "https://packages.cloud.google.com/yum/doc/yum-key.gpg"
softwares:
  - "kubelet"
  - "kubeadm"
  - "kubectl"
dir:
  - "/etc/yum.repos.d"
  - "/etc/docker/"
  - "/etc/sysctl.d/"
# the kubernetes.repo file itself is written by the yum_repository task
CreateFile:
  - "{{ dir[1] }}daemon.json"          # /etc/docker/daemon.json
  - "{{ dir[2] }}k8s.conf"             # /etc/sysctl.d/k8s.conf

Now let's configure the slave nodes and join them to the master which we configured above.

Next, we have to write a playbook for the slaves. You need to perform the following steps in the respective playbook.

Steps for the configuration of slave in kubernetes cluster:

  1. Install Docker (as we are using the Amazon Linux 2 image, we don’t need to configure a repo for Docker).
  2. Start Docker.
  3. Enable Docker.
  4. Configure the Kubernetes repo.
  5. Install kubeadm (it will automatically install kubectl and kubelet).
  6. Enable kubelet.
  7. Pull the required images using kubeadm.
  8. Change the Docker cgroup driver from cgroupfs to systemd.
  9. Restart Docker.
  10. Install iproute-tc.
  11. Set bridge-nf-call-iptables to 1.
  12. Join the slave to the master.

Slave Configuration

---
# tasks file for slave (worker) nodes

- name: "creating required directories"
  file:
    path: "{{ item }}"
    mode: "go=rwx"
    state: directory
  loop: "{{ dir }}"

- name: "configuring yum for kubeadm"
  yum_repository:
    name: "kubernetes"
    description: "repo for kubernetes"
    baseurl: "https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch"
    enabled: yes
    repo_gpgcheck: yes
    gpgcheck: yes
    gpgkey:
      - "{{ gpgkeys }}"
      - "https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg"

- name: "installing kubelet, kubeadm & kubectl"
  yum:
    name: "{{ softwares }}"
    disable_excludes: "kubernetes"
    state: present

- name: "installing docker and iproute-tc"
  package:
    name: "{{ item.name }}"
  loop:
    - { name: "docker" }
    - { name: "iproute-tc" }

- name: "starting and enabling docker and kubelet"
  service:
    name: "{{ item.name }}"
    enabled: yes
    state: started
  loop:
    - { name: "docker" }
    - { name: "kubelet" }

- name: "pulling images ..."
  shell:
    cmd: "kubeadm config images pull"

- name: "creating empty daemon.json and k8s.conf files"
  file:
    path: "{{ item.path }}"
    state: touch
  loop:
    - { path: "{{ CreateFile[0] }}" }
    - { path: "{{ CreateFile[1] }}" }

- name: "creating daemon.json file inside /etc/docker"
  shell: |
    cat <<EOF | sudo tee "{{ CreateFile[0] }}"
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
  changed_when: false

- name: "restarting docker"
  service:
    name: docker
    state: restarted

- name: "configuring ip tables..."
  blockinfile:
    path: "{{ CreateFile[1] }}"
    block: |
      net.bridge.bridge-nf-call-ip6tables=1
      net.bridge.bridge-nf-call-iptables=1
    state: present

- name: "loading the sysctl settings"
  shell: "sysctl --system"

- name: "creating the join command on the master"
  command: "kubeadm token create --print-join-command"
  delegate_to: "{{ groups['tag_Name_Master'][0] }}"
  register: join_token

- name: "joining the worker node to the cluster"
  command: "{{ join_token.stdout }}"
  ignore_errors: yes
  changed_when: false

variables to Configure Slave:

Variable file for slave

---
# vars file for slave
gpgkeys: "https://packages.cloud.google.com/yum/doc/yum-key.gpg"
softwares:
  - "kubelet"
  - "kubeadm"
  - "kubectl"
dir:
  - "/etc/yum.repos.d"
  - "/etc/docker/"
  - "/etc/sysctl.d/"
# the kubernetes.repo file itself is written by the yum_repository task
CreateFile:
  - "{{ dir[1] }}daemon.json"          # /etc/docker/daemon.json
  - "{{ dir[2] }}k8s.conf"             # /etc/sysctl.d/k8s.conf

Now we are going to create pods for WordPress and MySQL inside a role.

---
# tasks file for wordpress-mysql

# task to launch wordpress
- name: "Launching Wordpress"
  shell: "kubectl run mywp1 --image=wordpress:5.1.1-php7.3-apache"
  register: Wordpress

- debug:
    var: "Wordpress.stdout_lines"

# task to launch mysql
- name: "Launching MySql"
  shell: "kubectl run mydb1 --image=mysql:5.7 --env=MYSQL_ROOT_PASSWORD=vinod2681997 --env=MYSQL_DATABASE=vinod_db --env=MYSQL_USER=vinod --env=MYSQL_PASSWORD=vinod2681997"
  register: MySql

- name: "Exposing wordpress"
  shell: "kubectl expose pods mywp1 --type=NodePort --port=80"
  register: expose
  ignore_errors: yes

- debug:
    var: "expose.stdout_lines"

- name: "get service"
  shell: "kubectl get svc"
  register: svc

- debug:
    var: "svc.stdout_lines"

- name: "Pausing playbook for 30 seconds"
  pause:
    seconds: 30

- name: "getting database IP"
  shell: "kubectl get pods -o wide"
  register: Database_IP

- debug:
    var: "Database_IP.stdout_lines"

Creating the main playbook to launch the EC2 instances, configure the master and slaves, and launch the WordPress and MySQL pods using Dynamic Inventory for AWS

This main playbook consists of the following:

  1. Launching three EC2 instances.
  2. Configuring the master node.
  3. Configuring the slave nodes.
  4. Launching the WordPress pod.
  5. Launching the MySQL pod.

Main file

---
# main playbook

# launch instances
- hosts: localhost
  become: false
  roles:
    - role: launch_instances

# configure master
- hosts: tag_Name_Master
  roles:
    - role: master

# configure slaves
- hosts: tag_Name_Slave
  roles:
    - role: slave

# setup wordpress and mysql
- hosts: tag_Name_Master
  roles:
    - role: wordpress-mysql

Configure /etc/ansible/ansible.cfg; here I added the roles path and the inventory file path (a sketch is shown below).
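The original shows this config as a screenshot; a minimal sketch of what it could contain, assuming ec2.py and ec2.ini sit in /etc/ansible/inventory and the roles in a project directory (all paths here are illustrative, not values from the original post):

[defaults]
inventory = /etc/ansible/inventory     # directory holding ec2.py and ec2.ini
roles_path = /root/k8s-task/roles      # wherever the roles were created
host_key_checking = False
remote_user = ec2-user                 # default user on Amazon Linux 2
private_key_file = /root/mykey.pem     # key pair used to launch the instances

[privilege_escalation]
become = True
become_method = sudo
become_user = root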

As we are using a dynamic inventory, the AWS inventory scripts, i.e. ec2.py and ec2.ini, will fetch the IP addresses of the master and slaves using their tag names respectively. The first role runs to launch the instances, the next configures the master node, and then the slave nodes are configured. The slave role generates the join command on the master (using delegate_to) and runs it on each worker, so no token has to be entered manually. The last role launches the WordPress and MySQL pods and exposes the WordPress pod.
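Before running the main playbook, the dynamic inventory can be sanity-checked (assuming ec2.py is executable and the AWS credentials are exported in the environment):

export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-key>
./ec2.py --list

The output should contain the tag_Name_Master and tag_Name_Slave groups that the main playbook uses as host patterns.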

You can run the main playbook (main_playbook.yml) using the below command.

ansible-playbook main_playbook.yml

Take any slave node's IP.

Take the WordPress pod's PORT number.

Type the IP and PORT in the browser.

Finally we are ready to go

WordPress MySql

Finally Automated!!!
