Container Clustering With Docker Swarm



Basic Intro:

Distributing your web application over a cluster of cloud compute resources can significantly improve performance and availability. Docker Swarm is the Docker native clustering solution, which can turn a group of distributed Docker hosts into a single large virtual server.

Important: Docker 1.4 or later is required for Docker Swarm
==========================================


Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts. Supported tools include, but are not limited to, the following:

    Dokku
    Docker Compose
    Docker Machine
    Jenkins


Swarm Components:

Swarm Manager:

Docker Swarm has a Master or Manager, which is itself a Docker host and is the single point for all administration. Currently only a single instance of the manager is allowed in the cluster.

Swarm Node:


Containers are deployed on nodes, which are additional Docker hosts. Each Swarm node must be reachable by the manager, and each node must listen on the same network interface (TCP port). Each node runs a node agent that registers the referenced Docker daemon, monitors it, and updates the discovery backend with the node's status. The containers themselves run on the nodes.

    
Scheduler Strategy:


Different scheduler strategies (binpack, spread, and random) can be applied to pick the best node on which to run your container. The default strategy is spread, which favors the node running the fewest containers. There are also multiple kinds of filters, such as constraint and affinity filters, to narrow down the candidate nodes. Together these allow for a decent scheduling algorithm.


Important:
---------------

Binpack and Spread: these strategies compute a rank according to a node's available CPU, its RAM, and the number of containers it is running.

Random: this strategy uses no computation. It selects a node at random and is primarily intended for debugging.
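
For example, to start the manager with the binpack strategy instead of the default spread, the strategy can be passed to swarm manage (a sketch; <cluster_token> stands for the discovery token created in Step 1 below):

[root@desktop92 ~]# docker run -d -p 5001:2375 swarm manage --strategy binpack token://<cluster_token>

A constraint filter can likewise be supplied when a container is started, e.g. docker run -d -e constraint:node==desktop32 <image>, to pin a container to a particular node.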
 
 

    

Node Discovery Service:

By default, Swarm uses a hosted discovery service, based on Docker Hub, which uses tokens to discover the nodes that are part of a cluster. However, etcd, consul, and zookeeper can also be used for service discovery. This is particularly useful if there is no access to the Internet, or if you are running the setup on a closed network. A new discovery backend can also be created, which is useful if you need a hosted discovery service inside your own firewall.


Discovery service list:


i)   Hosted (based on Docker Hub, using a token to discover nodes)

ii)  etcd

iii) Consul

iv)  ZooKeeper



Note: Here we are using the Docker Hub hosted discovery service, which is token based, to find the nodes.
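
If the hosts cannot reach Docker Hub, the same setup works with one of the other backends. As a rough sketch, assuming a Consul server is already running at 192.168.0.92:8500, the join and manage commands take a consul:// URL instead of a token:

[root@desktop32 ~]# docker run -d swarm join --addr=192.168.0.32:2375 consul://192.168.0.92:8500/swarm

[root@desktop92 ~]# docker run -d -p 5001:2375 swarm manage consul://192.168.0.92:8500/swarm

Here /swarm is just an arbitrary key prefix under which the node addresses are stored.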


Here we have 3 machines
====================


Master Node:  IP 192.168.0.92   (also acting as the discovery service)


Node 1:  IP 192.168.0.32


Node 2:  IP 192.168.0.26

  ############################################

Steps
#############################################



Prerequisite:
----------------------------

The Docker daemon on each node and on the Swarm manager must be listening on a TCP port, not only on the Unix socket.

To do that, follow these steps on all the nodes and on the manager:

===============================

[root@desktop92 ~]# systemctl stop docker

[root@desktop92 ~]# docker daemon -H tcp://0.0.0.0:2375 &
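
Keep in mind that once the daemon listens only on TCP, plain docker commands on that host need -H tcp://127.0.0.1:2375 (or DOCKER_HOST exported) to reach it. As a quick sanity check that a node's daemon is reachable over the network (using Node 1's IP as an example):

[root@desktop92 ~]# docker -H tcp://192.168.0.32:2375 version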


#####################################



Note: Every node must have the swarm image, since it provides the join agent that registers with the discovery service,

so pull swarm on each host by using the command:

[root@desktop92 ~]# docker pull swarm

Important: This step has to be performed on all the nodes.


Step 1: Create the Swarm discovery token



[root@desktop92 ~]# docker run swarm create > token.txt

[root@desktop92 ~]# cat token.txt

15ed03f692f7db815d4d520faf20b9bf


Note: Share this token with every node and ask them to join.


Step 2: Go to Node 1, which has IP 192.168.0.32


[root@desktop32 ~]# docker run -d swarm join --addr=192.168.0.32:2375 token://15ed03f692f7db815d4d520faf20b9bf


Step 3: Now go to Node 2 and join from there as well



[root@desktop26 ~]# docker run -d swarm join --addr=192.168.0.26:2375 token://15ed03f692f7db815d4d520faf20b9bf
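
On each node you can confirm the join agent is up; the swarm join container should appear in the local container list (note the -H flag, since the daemon now listens only on TCP):

[root@desktop26 ~]# docker -H tcp://127.0.0.1:2375 ps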



Step 4: Now start the Swarm manager


[root@desktop92 ~]# docker run -d -p 5001:2375 swarm manage token://15ed03f692f7db815d4d520faf20b9bf



Note: Here port 5001 is what a client connects to in order to reach the manager and manage all the nodes.
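
A quick way to confirm the manager can see both nodes is to point a Docker client at port 5001 and ask for info; the output should list the number of nodes along with their addresses and resources:

[root@desktop92 ~]# docker -H tcp://192.168.0.92:5001 info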



That is it; the Swarm cluster setup is now done.



How to use:
==========

In any real-world company, a client or system admin only has access to the manager, acting as a manager client.


How to connect with the manager
========================


[root@client1 ~]# export DOCKER_HOST=192.168.0.92:5001


The client can check the number of hosts:
========================


[root@client1 ~]# docker run --rm swarm list token://15ed03f692f7db815d4d520faf20b9bf


192.168.0.26:2375
192.168.0.32:2375


The client can run all the basic Docker operations
===================================


Like:

docker run
docker version
docker info
docker ps
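
As a small end-to-end example (a sketch; nginx is just an arbitrary image here), a container started through the manager gets scheduled onto one of the nodes, and docker ps shows which node it landed on as a node/name prefix in the NAMES column:

[root@client1 ~]# docker run -d --name web1 nginx

[root@client1 ~]# docker ps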


Enjoy Swarm clustering! I will share more blogs with big changes in the future...!!!


