Thursday, March 15, 2018

Multi Node Swarm Cluster on Oracle Cloud Infrastructure Classic

In this post we will see how to build a Multi Node Docker Swarm Cluster on Oracle Cloud Infrastructure Classic.

To start, we will create three Nodes in the Swarm Cluster: one Node as Manager and the other two as Worker Nodes. Any number of nodes can be added in the same fashion to scale the cluster.


Architecture Diagram




Prerequisites
  • Oracle Cloud Infrastructure Account: If you do not have an account, you can sign up for a Free Trial account at: https://cloud.oracle.com/en_US/tryit
  • Basic knowledge of Docker & containers
  • Basics of Swarm clusters


Oracle Cloud Infrastructure Classic Set Up

For a three node Cluster, we will create three Compute Classic Instances. We will use an Ubuntu image for all the nodes of our Swarm Cluster.

It is very important to have the correct ingress firewall ports open on the Manager & Worker Nodes in order to have proper communication between Manager & Worker Nodes in the Swarm Cluster. Below are the ports which need to be opened on Managers & Workers. Thanks to Bret Fisher for this awesome information.

  • TCP 2377 — cluster management communication (Manager Nodes only)
  • TCP 7946 & UDP 7946 — node-to-node communication (all Nodes)
  • UDP 4789 — overlay network traffic (all Nodes)


1) In OCIC we will need to create Security Applications. Below is a screenshot of the four Security Applications along with their port numbers.



2) Create Three Security Lists - swarm-manager-seclist, swarm-sec-list & swarm-worker-seclist

swarm-manager-seclist will be attached to Node1, the Manager Node. All INBOUND rules to Node1 have swarm-manager-seclist as the Destination.

swarm-worker-seclist will be attached to Node2 & Node3, the Worker Nodes. All INBOUND rules to Node2 & Node3 have swarm-worker-seclist as the Destination.

swarm-sec-list is attached to all the Nodes; it provides the common security rules - ssh and http/https, or any other port which needs to be accessed from the public Internet.

3) Create Security Rules as per the Architecture Diagram / ports listed above.

Ingress For Swarm Manager Nodes


Ingress For Swarm Worker Nodes


Ingress for All Nodes




4) Next step is to create Compute Classic Instances.

Below are the three Nodes created on Compute Classic. For this example we used the Shared Network; you can also use IP Networks for the same.



Node1 is the Manager Node with Security Lists swarm-manager-seclist and swarm-sec-list.

Node2 & Node3 are Worker Nodes with Security Lists swarm-worker-seclist and swarm-sec-list.

We are all good with OCIC.

DOCKER Installation on Manager and Worker Nodes

Now that we have all Nodes created/provisioned, the first step is to install Docker on these nodes.

The best way to install Docker on these nodes is to use the convenience script from https://get.docker.com/

This installs the latest stable CE build of Docker so that you get all the latest features. You can install it another way as well; completely up to you. Below are the instructions from get.docker.com.

$ curl -fsSL get.docker.com -o get-docker.sh


Give execute permission to get-docker.sh.

After that run  $ sudo ./get-docker.sh  and this script will install the latest stable Docker build on the Node.
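The steps above can be followed by a quick sanity check on each node; this is a sketch assuming the script completed without errors:

```shell
# Verify the installed client and daemon versions
sudo docker version

# Optional smoke test: pulls and runs a tiny image, then removes the container
sudo docker run --rm hello-world
```

If `docker version` shows both a Client and a Server section, the daemon is running and the node is ready for Swarm configuration.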



After installation you need to add the ubuntu user to the docker group, so that you can issue Docker CLI commands as the ubuntu user. Run the following command:

$ sudo usermod -aG docker ubuntu

Log out and log in again as the ubuntu user. You are all set to fire Docker CLI commands and create/manage containers.
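To confirm the group change took effect, a quick check after re-login (a sketch; assumes the default ubuntu user):

```shell
# Confirm the ubuntu user is now a member of the docker group
id -nG ubuntu | grep -w docker

# If the group shows up, docker works without sudo after re-login
docker info --format '{{.ServerVersion}}'
```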

Configure Swarm Cluster 

On the Manager Node, node1, execute the below command to initialize the Swarm.

ubuntu@node1 : $ docker swarm init --advertise-addr <private ip of node1>



The command output shows that the current node - node1 - is now a Manager in the Swarm.

The command also returns a docker swarm join --token <token> command which needs to be run on the worker nodes so that they can join this Manager node and form a Manager/Worker Swarm Cluster.

You can run below command on this Node to get the tokens.

$ docker swarm join-token manager -- returns the command that can be run on other nodes to join the Swarm as a Manager in addition to the current node.

$ docker swarm join-token worker -- returns the command that can be run on other nodes to join the Swarm as a Worker of this Manager node.

OK, so grab the output command docker swarm join --token <token> (output of the above command) and run it after logging in to both Node2 & Node3.
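The join command printed by the Manager has the shape sketched below; the token and address are placeholders that come from your own `docker swarm join-token worker` output:

```shell
# Run on each Worker node; <token> and the Manager's private IP are taken
# verbatim from the output of "docker swarm join-token worker" on node1.
# Port 2377 is the Swarm cluster-management port opened earlier.
docker swarm join --token <token> <private-ip-of-node1>:2377
```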

Node 2


Node 3



That's it. The Swarm Cluster is configured now - Node1 as Manager and Node2 & Node3 as Workers.

To check that everything is good, run the below command on Node1 - the Manager Node.

$docker node ls



A Manager Status of Leader means the node is a Manager; a blank Manager Status means the node is a Worker.
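For this three-node cluster, the output looks roughly like the sketch below (the IDs are placeholders; yours will differ, and the asterisk marks the node you are logged in to):

```
$ docker node ls
ID           HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
<id-1> *     node1      Ready    Active         Leader
<id-2>       node2      Ready    Active
<id-3>       node3      Ready    Active
```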

We are all set now. Let's try to create a Service and SCALE it across all the nodes.

Run the below command: it creates a service named demo_web with three replicas, publishing the container's port 3000 on port 80 of the nodes.

$docker service create --name demo_web --replicas 3  -p 80:3000 rohanwalia/node-web-app:latest



rohanwalia/node-web-app is a Docker image in my Docker Hub. It is a simple Hello World NodeJS application which listens on port 3000 over http.

Service demo_web is created and scheduled across all three nodes. We can check the status by running the below command.

$docker service ps demo_web


Now we can grab the Public IP address of any Node and open it in a browser.
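This works from any node because of Swarm's ingress routing mesh: every node listens on the published port, even one not currently running a replica. A quick check from your own machine (the IP is a placeholder):

```shell
# Hit the published port (80) on any node's public IP; the routing mesh
# forwards the request to one of the demo_web replicas
curl http://<public-ip-of-any-node>/
```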

The service can be scaled using the below command.

$ docker service scale demo_web=<number of replicas>

Below we scaled down to two replicas.
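A sketch of that scale-down, run on the Manager node, followed by a placement check:

```shell
# Scale the service down to 2 replicas; Swarm stops one of the tasks
docker service scale demo_web=2

# Confirm which nodes the remaining tasks are running on
docker service ps demo_web
```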



Let us quickly check the Status Again.


This shows that the Service is now deployed on Node1 & Node3.

We have successfully created a Multi Node Swarm Cluster which we can scale out by adding more nodes. We have also created a service and scaled it across the Swarm Cluster nodes.

Do let me know if you have any questions or comments on this post.

Thanks





