This is an example of clustering two machines with Docker Swarm.
* Topics covered
docker swarm
overlay networks
docker-compose with an overlay network
This example connects two machines, dc1 and dc2, over an overlay network built with Docker Swarm. (It is well-known material, but I worked through it myself to learn.)
After installing Docker on dc1 and dc2, run docker network ls to see the current networks.
$ sudo docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
049d935e785f        bridge              bridge              local
1c59855b9cab        host                host                local
e5403504070c        none                null                local
Three networks exist by default: bridge, host, and none.
Run sudo docker swarm init. If it fails because of a proxy, use sudo -E.
[dc1]$ sudo docker swarm init
Error response from daemon: can't initialize raft node: rpc error: code = 4 desc = context deadline exceeded
[dc1]$ sudo -E docker swarm init
Swarm initialized: current node (7ak9p8t6mltl03q61r65rd3g0) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-52vwnqvh9ddumxvalskuvhj71ukvugnl27xe7p98qws4yqaxml-8vqc9p8r3fvceinemamo0merq \
    10.195.26.253:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
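For reference, sudo -E works because it preserves the invoking user's environment variables, so the Docker client inherits the proxy settings. A minimal sketch (the proxy URL and NO_PROXY list below are placeholders for your environment):
[dc1]$ export HTTP_PROXY=http://proxy.example.com:8080
[dc1]$ export HTTPS_PROXY=http://proxy.example.com:8080
[dc1]$ export NO_PROXY=localhost,127.0.0.1,10.195.26.253
[dc1]$ sudo -E docker swarm init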
Join dc2 to the swarm on dc1.
[dc2]$ sudo docker swarm join \
> --token SWMTKN-1-52vwnqvh9ddumxvalskuvhj71ukvugnl27xe7p98qws4yqaxml-8vqc9p8r3fvceinemamo0merq \
> 10.195.26.253:2377
This node joined a swarm as a worker.
New networks (docker_gwbridge and ingress) now appear on dc1.
[dc1]$ sudo docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
049d935e785f        bridge              bridge              local
1292ebbe7323        docker_gwbridge     bridge              local
1c59855b9cab        host                host                local
vprubkhn0jh5        ingress             overlay             swarm
e5403504070c        none                null                local
The Docker nodes are connected.
[dc1]$ sudo docker node ls
ID                           HOSTNAME         STATUS   AVAILABILITY   MANAGER STATUS
7ak9p8t6mltl03q61r65rd3g0 *  dc1.google.io    Ready    Active         Leader
rkl07p7fbbzwc1jp957g7k2fw    dc2.google.io    Ready    Active
Create an overlay network named overnet.
[dc1]$ sudo docker network create -d overlay overnet
6wk4ni1952g6nw9kssm8aktan
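For reference, docker network create accepts options not used here: --attachable lets standalone containers (not only swarm services) join the overlay, and --subnet pins the address range instead of letting IPAM choose one. A sketch (overnet2 and the subnet are hypothetical):
[dc1]$ sudo docker network create -d overlay --attachable --subnet 10.10.0.0/24 overnet2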
Check that it was created.
[dc1]$ sudo docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
049d935e785f        bridge              bridge              local
1292ebbe7323        docker_gwbridge     bridge              local
1c59855b9cab        host                host                local
vprubkhn0jh5        ingress             overlay             swarm
e5403504070c        none                null                local
6wk4ni1952g6        overnet             overlay             swarm
No Docker services exist yet.
[dc1]$ sudo docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE
Create a service named myservice on the overnet overlay network, running alpine with two replicas.
[dc1]$ sudo -E docker service create --name myservice --network overnet --replicas 2 alpine sleep 1d
unable to pin image alpine to digest: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
473p8c1e14kc5xm0pfpfp7g8l
Check the service.
[dc1]$ sudo docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE
473p8c1e14kc        myservice           replicated          0/2                 alpine
[dc1]$ sudo docker service rm 473p8c1e14kc
473p8c1e14kc
It does not work: 0/2 replicas are running. This is the proxy issue again.
One way to fix the Docker proxy issue is shown below.
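A common approach, assuming the host runs the Docker daemon under systemd, is a drop-in file that passes the proxy settings to the daemon itself (the proxy URL below is a placeholder):
[dc1]$ sudo mkdir -p /etc/systemd/system/docker.service.d
[dc1]$ cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
[dc1]$ sudo systemctl daemon-reload
[dc1]$ sudo systemctl restart docker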
Once the proxy is fixed, the output looks like the following, and both replicas run properly.
[dc1]$ sudo -E docker service create --name myservice --network overnet --replicas 2 alpine sleep 1d
473p8c1e14kc5xm0pfpfp7g8l
$ sudo docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE
plz4ntlu7te1        myservice           replicated          2/2                 alpine
[dc1]$ sudo docker service ps myservice
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
s034lzxypyb0        myservice.1         alpine              dc1.google.io       Running             Running 3 minutes ago
qr119ifi8ija        myservice.2         alpine              dc1.google.io       Running             Running 3 minutes ago
If you went through trial and error because of the proxy, docker service ps output will be littered with failed tasks. The following command removes every service in one go:
$ sudo docker service rm $(sudo docker service ls -q)
Log in to dc2 and check that everything works there as well.
[dc2]$ sudo docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
2b62a6f3903c        bridge              bridge              local
8a6f79b95819        docker_gwbridge     bridge              local
b37d0526db96        host                host                local
vprubkhn0jh5        ingress             overlay             swarm
0148feb74e1e        none                null                local
6wk4ni1952g6        overnet             overlay             swarm
[dc2]$ sudo docker service ps myservice
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
h4s2x8b95k5c        myservice.1         alpine:latest       dc1.google.io       Running             Running about a minute ago
7cwqwiziht3i        myservice.2         alpine:latest       dc2.google.io       Running             Running about a minute ago
Now inspect the overnet overlay network with docker network inspect.
[dc1]$ sudo docker network inspect overnet
[
    {
        "Name": "overnet",
        "Id": "6wk4ni1952g6nw9kssm8aktan",
        "Created": "2019-07-09T19:49:52.348740198+09:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "170c3b4cae0b263ef3db52e531eb7d4b83f27cb0b5bb6f33172c5b016e0c5227": {
                "Name": "myservice.1.ca4u9dfh79qcfe9mqqau1mh92",
                "EndpointID": "ca009c0b5271a67145dcd7664864c36ef11adc76e98f99f5a08380c6a978aa18",
                "MacAddress": "02:42:0a:00:00:03",
                "IPv4Address": "10.0.0.3/24",
                "IPv6Address": ""
            },
            "b92d28d57c5fe610efbb1077481489443901c93f0b56cb50815e91c9013c5518": {
                "Name": "myservice.2.wx1s5amakz0vpct88njwft8g6",
                "EndpointID": "134c2e5b81b1be84bd24cbf0d6ce71be52d1e449ce69b978a033bc747fa23421",
                "MacAddress": "02:42:0a:00:00:04",
                "IPv4Address": "10.0.0.4/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "dc1.google.io-89dd001ba273",
                "IP": "10.195.26.253"
            }
        ]
    }
]
The second machine also looks correct.
[dc2]$ sudo docker network inspect overnet
[
    {
        "Name": "overnet",
        "Id": "6wk4ni1952g6nw9kssm8aktan",
        "Created": "2019-07-09T19:55:06.096710356+09:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Containers": {
            "139f5f4dc248b00e5ea2efc2e1b31a6c5a82e26b36e8c95d289c1367623eb330": {
                "Name": "myservice.2.7cwqwiziht3ift9jwkg7cwp1w",
                "EndpointID": "88d0fc9dc9a8bb50df13f43dd0e6d174f21db276334d9613779584c7a3908777",
                "MacAddress": "02:42:0a:00:00:04",
                "IPv4Address": "10.0.0.4/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "dc1.google.io-89dd001ba273",
                "IP": "10.195.26.253"
            },
            {
                "Name": "dc2.google.io-47c6e3f13def",
                "IP": "10.195.16.130"
            }
        ]
    }
]
The overlay network is up. The container on dc1 is 10.0.0.3/24 and the container on dc2 is 10.0.0.4/24.
Now verify that ping works from the container on dc1 to the container on dc2.
[dc1]$ sudo docker ps
CONTAINER ID        IMAGE                                                                             COMMAND             CREATED             STATUS              PORTS               NAMES
d27a7537892f        alpine@sha256:ca1c944a4f8486a153024d9965aafbe24f5723c1d5c02f4964c045a16d19dc54    "sleep 1d"          4 minutes ago       Up 4 minutes                            myservice.1.h4s2x8b95k5c8to5mje5mvj1k
[dc1]$ sudo docker exec -it d27a7537892f /bin/sh
/ # ping 10.0.0.3
PING 10.0.0.3 (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=0.238 ms
64 bytes from 10.0.0.3: seq=1 ttl=64 time=0.132 ms
^C
--- 10.0.0.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.132/0.185/0.238 ms
/ # ping 10.0.0.4
PING 10.0.0.4 (10.0.0.4): 56 data bytes
64 bytes from 10.0.0.4: seq=0 ttl=64 time=2.213 ms
64 bytes from 10.0.0.4: seq=1 ttl=64 time=0.978 ms
64 bytes from 10.0.0.4: seq=2 ttl=64 time=0.830 ms
^C
--- 10.0.0.4 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.830/1.340/2.213 ms
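The pings succeed. For reference, instead of looking up container IPs by hand, swarm's embedded DNS resolves service names inside any container on the same overlay: the bare service name resolves to a virtual IP, and tasks.<service> resolves to the individual task IPs. A sketch, run inside the same alpine container:
/ # nslookup myservice        # resolves to the service virtual IP
/ # nslookup tasks.myservice  # resolves to the individual task IPs
/ # ping -c 2 tasks.myservice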
Now check the Docker nodes from dc1 again; it is easy to see which node is the Leader.
[dc1]$ sudo docker node ls
ID                           HOSTNAME         STATUS   AVAILABILITY   MANAGER STATUS
7ak9p8t6mltl03q61r65rd3g0 *  dc1.google.io    Ready    Active         Leader
rkl07p7fbbzwc1jp957g7k2fw    dc2.google.io    Ready    Active
For reference, also check the virtual Ethernet devices and bridges, and confirm that the overlay traffic runs over UDP (VXLAN on port 4789).
[dc1]$ ip address list | grep veth
16: veth259d50d@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP group default
58: vethb9fa4eb@if57: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ddbf0df18db8 state UP group default
60: veth37f43d4@if59: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ddbf0df18db8 state UP group default
72: vethf0a557a@vethb297b9b: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default
73: vethb297b9b@vethf0a557a: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue master docker_gwbridge state LOWERLAYERDOWN group default
93: veth0a51446@if92: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-5c8735403ad9 state UP group default
95: veth81ea20a@if94: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-5c8735403ad9 state UP group default
[dc1]$ ip address list | grep br
8: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:cc:30:0f:b7 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 scope global docker_gwbridge
92965: br-1d432218ad27: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:d5:f1:ea:66 brd ff:ff:ff:ff:ff:ff
inet 172.21.0.1/16 scope global br-1d432218ad27
48: br-ddbf0df18db8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:7a:1b:92:b9 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.1/16 scope global br-ddbf0df18db8
89: br-5c8735403ad9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:27:58:52:72 brd ff:ff:ff:ff:ff:ff
inet 172.19.0.1/16 scope global br-5c8735403ad9
[dc1]$ netstat -anp | grep udp | grep 4789
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
udp 0 0 0.0.0.0:4789 0.0.0.0:* -
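The last line confirms the host is listening on UDP 4789, the VXLAN data-plane port. To actually watch the encapsulated packets while a ping runs between the containers, point tcpdump at the host's network interface (eth0 is an assumed interface name; adjust for your machine):
[dc1]$ sudo tcpdump -n -i eth0 udp port 4789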
This article explains Docker overlay networking very well and is worth reading:
http://blog.nigelpoulton.com/demystifying-docker-overlay-networking/
Now run a Flask app to see whether the setup really works.
[dc1]$ cat app.py
from flask import Flask
from redis import Redis

app = Flask(__name__)
redis = Redis(host='redis', port=6379)

@app.route('/')
def hello():
    count = redis.incr('hits')
    return 'Hello World, {}'.format(count)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000, debug=True)
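Note that host='redis' works because Docker's embedded DNS resolves the redis service name for containers attached to the same network.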
[dc1]$ cat requirements.txt
flask
redis
[dc1]$ cat Dockerfile
FROM python:3.7-alpine
ENV HTTP_PROXY ...
ENV HTTPS_PROXY ...
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
[dc1]$ cat docker-compose.yml
version: '3'
services:
  web:
    image: 127.0.0.1:5000/stackdemo
    build: .
    ports:
      - "8000:8000"
  redis:
    image: redis:alpine
Run docker-compose up and check that the app works locally; it does.
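A quick way to verify, assuming curl is available (each request increments the counter stored in Redis):
[dc1]$ sudo -E docker-compose up -d
[dc1]$ curl http://localhost:8000   # the first request should return: Hello World, 1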
Now check whether it runs distributed over the overnet network added earlier.
[dc1]$ cat docker-compose.yml
version: '3'
services:
  web:
    image: 127.0.0.1:5000/stackdemo
    build: .
    ports:
      - "8000:8000"
    networks:
      - overnet
  redis:
    image: redis:alpine
    networks:
      - overnet
networks:
  overnet:
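Note that docker stack deploy prefixes networks with the stack name, so this file creates a new samuel_overnet rather than attaching to the overnet created earlier, as the network list below shows. To reuse the existing network instead, the top-level entry would be declared external, a sketch:
networks:
  overnet:
    external: true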
Remove all the existing services, then deploy the compose file as a stack.
$ sudo docker service rm $(sudo docker service ls -q)
$ sudo docker stack deploy -c docker-compose.yml samuel
Ignoring unsupported options: build
Creating service samuel_web
Creating service samuel_redis
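The "Ignoring unsupported options: build" line matters: docker stack deploy does not build images, so 127.0.0.1:5000/stackdemo must already exist in a registry every node can reach. The stack-deploy tutorial in the references handles this with a throwaway local registry, published on every node through the routing mesh, so each node can pull from its own 127.0.0.1:5000. Roughly:
[dc1]$ sudo docker service create --name registry --publish 5000:5000 registry:2
[dc1]$ sudo -E docker-compose build
[dc1]$ sudo docker-compose push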
Services and networks prefixed with the stack name samuel have been created.
[dc1]$ sudo docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
6wk4ni1952g6        overnet             overlay             swarm
883qo2un2yfe        samuel_default      overlay             swarm
9mb1bdb68yyf        samuel_overnet      overlay             swarm
[dc1]$ sudo docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE
ch2jfu29kldt        samuel_redis        replicated          1/1                 redis:alpine
kcu7yaiyxtgm        samuel_web          replicated          1/1                 127.0.0.1:5000/stackdemo
Check which machine samuel_redis and samuel_web actually run on.
[dc1]$ sudo docker service ps ch2jfu29kldt
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
cr0m1uadfhtb        samuel_redis.1      redis:alpine        dc2.google.io       Running             Running 4 minutes ago
[dc1]$ sudo docker service ps kcu7yaiyxtgm
ID                  NAME                IMAGE                      NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
2r9unk728pmi        samuel_web.1        127.0.0.1:5000/stackdemo   dc1.google.io       Running             Running 4 minutes ago
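Even though the web task runs only on dc1, the swarm routing mesh publishes port 8000 on every node, so the app should answer from either machine:
[dc1]$ curl http://dc1.google.io:8000
[dc1]$ curl http://dc2.google.io:8000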
docker swarm worked as intended.
References
https://docs.docker.com/network/overlay-standalone.swarm/
https://www.youtube.com/watch?v=nGSNULpHHZc
https://docs.docker.com/engine/swarm/stack-deploy/
http://blog.nigelpoulton.com/demystifying-docker-overlay-networking/