This is an example of creating a network load balancer with nginx on GCP.


export MY_REGION=[YOUR_REGION]


export MY_ZONE=[YOUR_ZONE]


export CLUSTER_NAME=httploadbalancer



gcloud config set project $DEVSHELL_PROJECT_ID


gcloud config set compute/region $MY_REGION


gcloud config set compute/zone $MY_ZONE




Create a network LB cluster on GCP and bring up nginx.


$ gcloud container clusters create networklb --num-nodes 3



$ kubectl run nginx --image=nginx --replicas=3
deployment "nginx" created


Check that the pods are up and running.


$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
nginx-7c87f569d-2hsjp   1/1       Running   0          13s
nginx-7c87f569d-2vhj9   1/1       Running   0          13s
nginx-7c87f569d-b4krw   1/1       Running   0          13s


Expose the nginx deployment as a load balancer.


$ kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer



Now get the nginx LB's service information.

$ kubectl get service nginx
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx     LoadBalancer   10.39.253.240   35.230.93.151   80:30516/TCP   13m



It is working as expected.
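As a quick sanity check, you can hit the EXTERNAL-IP shown above; the default nginx welcome page should come back:

$ curl http://35.230.93.151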



Now clean up the LB.


Delete the Kubernetes nginx service.



$ kubectl delete service nginx
service "nginx" deleted



Delete the Kubernetes deployment (the controller) and its nginx pods (instances).


$ kubectl delete deployment nginx
deployment "nginx" deleted



Take down the LB by deleting the cluster.


$ gcloud container clusters delete networklb --zone='us-west1-a'
The following clusters will be deleted.
 - [networklb] in [us-west1-a]
Do you want to continue (Y/n)?  Y
Deleting cluster networklb...done.
Deleted [https://container.googleapis.com/v1/projects/111/zones/us-west1-a/clusters/networklb].





This time, let's use an HTTP load balancer.







$ export MY_ZONE='us-west1-a'


Create a cluster named samuel.

$ gcloud container clusters create samuel --zone $MY_ZONE


WARNING: Currently node auto repairs are disabled by default. In the future this will change and they will be enabled by default. Use `--[no-]enable-autorepair` flag to suppress this warning.
WARNING: Starting in Kubernetes v1.10, new clusters will no longer get compute-rw and storage-ro scopes added to what is specified in --scopes (though the latter will remain included in the default --scopes). To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
Creating cluster samuel...done.
Created ....03470e


NAME    LOCATION    MASTER_VERSION  MASTER_IP        MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS

samuel  us-west1-a  1.8.10-gke.0    104.198.107.200  n1-standard-1  1.8.10-gke.0  3          RUNNING



$ kubectl run nginx --image=nginx --port=80

deployment "nginx" created



$ kubectl expose deployment nginx --target-port=80 --type=NodePort

service "nginx" exposed




Create the basic-ingress.yaml file.


apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80




$ kubectl create -f basic-ingress.yaml                                                                    

ingress "basic-ingress" created



$ kubectl get ingress basic-ingress --watch

NAME            HOSTS     ADDRESS   PORTS     AGE

basic-ingress   *                   80        7s

basic-ingress   *         35.227.208.116   80        57s

basic-ingress   *         35.227.208.116   80        57s



$ kubectl describe ingress basic-ingress

Name:             basic-ingress

Namespace:        default

Address:          35.227.208.116

Default backend:  nginx:80 (10.36.1.5:80)

Rules:

  Host  Path  Backends

  ----  ----  --------

  *     *     nginx:80 (10.36.1.5:80)

Annotations:

  forwarding-rule:  k8s-fw-default-basic-ingress--27688e79a493971e

  target-proxy:     k8s-tp-default-basic-ingress--27688e79a493971e

  url-map:          k8s-um-default-basic-ingress--27688e79a493971e

  backends:         {"k8s-be-32520--27688e79a493971e":"Unknown"}

Events:

  Type    Reason   Age              From                     Message

  ----    ------   ----             ----                     -------

  Normal  ADD      4m               loadbalancer-controller  default/basic-ingress

  Normal  CREATE   3m               loadbalancer-controller  ip: 35.227.208.116

  Normal  Service  3m (x3 over 3m)  loadbalancer-controller  default backend set to nginx:32520
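With the address assigned, the HTTP LB can be verified end to end (it may take several minutes after the IP appears before the GCLB health checks pass and traffic flows):

$ curl http://35.227.208.116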




Now run the cleanup commands.



$ kubectl delete -f basic-ingress.yaml

ingress "basic-ingress" deleted


$ kubectl delete deployment nginx

deployment "nginx" deleted


$ gcloud container clusters delete samuel

ERROR: (gcloud.container.clusters.delete) One of [--zone, --region] must be supplied: Please specify location..


$ gcloud container clusters delete samuel --zone=$MY_ZONE

The following clusters will be deleted.

 - [samuel] in [us-west1-a]

Do you want to continue (Y/n)?  Y

Deleting cluster samuel...done.

Deleted 







Below are some additional ways to query the pods (captured from the networklb cluster above).

$ kubectl get pods -owide
NAME                    READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-7c87f569d-2hsjp   1/1       Running   0          8s        10.36.0.6   gke-networklb-default-pool-3f6ca419-nmpb
nginx-7c87f569d-2vhj9   1/1       Running   0          8s        10.36.2.6   gke-networklb-default-pool-3f6ca419-vs85
nginx-7c87f569d-b4krw   1/1       Running   0          8s        10.36.1.6   gke-networklb-default-pool-3f6ca419-wxvl







$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                                  READY     STATUS    RESTARTS   AGE
default       nginx-7c87f569d-2hsjp                                 1/1       Running   0          2m
default       nginx-7c87f569d-2vhj9                                 1/1       Running   0          2m
default       nginx-7c87f569d-b4krw                                 1/1       Running   0          2m
kube-system   event-exporter-v0.1.8-599c8775b7-nc8xw                2/2       Running   0          3m
kube-system   fluentd-gcp-v2.0.9-dqrnb                              2/2       Running   0          3m
kube-system   fluentd-gcp-v2.0.9-lrnjr                              2/2       Running   0          3m
kube-system   fluentd-gcp-v2.0.9-zh2qq                              2/2       Running   0          3m
kube-system   heapster-v1.4.3-57c7677fc4-6mqz8                      3/3       Running   0          2m
kube-system   kube-dns-778977457c-8xtfs                             3/3       Running   0          2m
kube-system   kube-dns-778977457c-nvztz                             3/3       Running   0          3m
kube-system   kube-dns-autoscaler-7db47cb9b7-jbdxv                  1/1       Running   0          3m
kube-system   kube-proxy-gke-networklb-default-pool-3f6ca419-nmpb   1/1       Running   0          3m
kube-system   kube-proxy-gke-networklb-default-pool-3f6ca419-vs85   1/1       Running   0          3m
kube-system   kube-proxy-gke-networklb-default-pool-3f6ca419-wxvl   1/1       Running   0          3m
kube-system   kubernetes-dashboard-6bb875b5bc-9r62n                 1/1       Running   0          3m
kube-system   l7-default-backend-6497bcdb4d-sngpv                   1/1       Running   0          3m




$ kubectl get pods --include-uninitialized
NAME                    READY     STATUS    RESTARTS   AGE
nginx-7c87f569d-2hsjp   1/1       Running   0          3m
nginx-7c87f569d-2vhj9   1/1       Running   0          3m
nginx-7c87f569d-b4krw   1/1       Running   0          3m





$ kubectl get pods --field-selector=status.phase=Running
NAME                    READY     STATUS    RESTARTS   AGE
nginx-7c87f569d-2hsjp   1/1       Running   0          4m
nginx-7c87f569d-2vhj9   1/1       Running   0          4m
nginx-7c87f569d-b4krw   1/1       Running   0          4m



$ echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})
nginx-7c87f569d-2hsjp nginx-7c87f569d-2vhj9 nginx-7c87f569d-b4krw
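Note that $sel here is assumed to hold a label selector. For the pods above, created with kubectl run nginx, the run=nginx label would match (the label is visible in the JSON output below), e.g.:

$ sel="run=nginx"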



$ kubectl get pods -o json
{ "apiVersion": "v1", "items": [ { "apiVersion": "v1", "kind": "Pod", "metadata": { "annotations": { "kubernetes.io/created-by": "{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"default\",\"name\":\"nginx-7c87f569d\",\"uid\":\"ef24e95c-6d12-11e8-9bae-42010a8a0201\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"562\"}}\n", "kubernetes.io/limit-ranger": "LimitRanger plugin set: cpu request for container nginx" }, "creationTimestamp": "2018-06-11T01:01:12Z", "generateName": "nginx-7c87f569d-", "labels": { "pod-template-hash": "374391258", "run": "nginx" }, "name": "nginx-7c87f569d-2hsjp", "namespace": "default", "ownerReferences": [ { "apiVersion": "extensions/v1beta1", "blockOwnerDeletion": true, "controller": true, "kind": "ReplicaSet", "name": "nginx-7c87f569d", "uid": "ef24e95c-6d12-11e8-9bae-42010a8a0201" } ], "resourceVersion": "599", "selfLink": "/api/v1/namespaces/default/pods/nginx-7c87f569d-2hsjp", "uid": "ef2e0e4a-6d12-11e8-9bae-42010a8a0201" }, "spec": { "containers": [ { "image": "nginx", "imagePullPolicy": "Always", "name": "nginx", "resources": { "requests": { "cpu": "100m" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", "name": "default-token-5khqf", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "nodeName": "gke-networklb-default-pool-3f6ca419-nmpb", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "default", "serviceAccountName": "default", "terminationGracePeriodSeconds": 30, "tolerations": [ { "effect": "NoExecute", "key": "node.alpha.kubernetes.io/notReady", "operator": "Exists", "tolerationSeconds": 300 }, { "effect": "NoExecute", "key": "node.alpha.kubernetes.io/unreachable", "operator": "Exists", "tolerationSeconds": 300 } ], "volumes": [ { "name": "default-token-5khqf", "secret": { "defaultMode": 420, "secretName": "default-token-5khqf" } } ] }, "status": { "conditions": [ { "lastProbeTime": null, "lastTransitionTime": "2018-06-11T01:01:12Z", "status": "True", "type": "Initialized" }, { "lastProbeTime": null, "lastTransitionTime": "2018-06-11T01:01:18Z", "status": "True", "type": "Ready" }, { "lastProbeTime": null, "lastTransitionTime": "2018-06-11T01:01:12Z", "status": "True", "type": "PodScheduled" } ], "containerStatuses": [ { "containerID": "docker://4530901e7f82ad2b601a759a28f48a693c9944299318e3126ecba9edf0c2b615", "image": "nginx:latest", "imageID": "docker-pullable://nginx@sha256:1f9c00b4c95ef931afa097823d902e7602aebc3ec5532e907e066978075ca3e0", "lastState": {}, "name": "nginx", "ready": true, "restartCount": 0, "state": { "running": { "startedAt": "2018-06-11T01:01:17Z" } } } ], "hostIP": "10.138.0.3", "phase": "Running", "podIP": "10.36.0.6", "qosClass": "Burstable", "startTime": "2018-06-11T01:01:12Z" } },
{ "apiVersion": "v1", "kind": "Pod", "metadata": { "annotations": { "kubernetes.io/created-by": "{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"default\",\"name\":\"nginx-7c87f569d\",\"uid\":\"ef24e95c-6d12-11e8-9bae-42010a8a0201\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"562\"}}\n", "kubernetes.io/limit-ranger": "LimitRanger plugin set: cpu request for container nginx" }, "creationTimestamp": "2018-06-11T01:01:12Z", "generateName": "nginx-7c87f569d-",
"labels": { "pod-template-hash": "374391258", "run": "nginx" }, "name": "nginx-7c87f569d-2vhj9", "namespace": "default", "ownerReferences": [ { "apiVersion": "extensions/v1beta1", "blockOwnerDeletion": true, "controller": true, "kind": "ReplicaSet", "name": "nginx-7c87f569d", "uid": "ef24e95c-6d12-11e8-9bae-42010a8a0201" } ], "resourceVersion": "602", "selfLink": "/api/v1/namespaces/default/pods/nginx-7c87f569d-2vhj9", "uid": "ef29bf6b-6d12-11e8-9bae-42010a8a0201" }, "spec": { "containers": [ { "image": "nginx", "imagePullPolicy": "Always", "name": "nginx", "resources": { "requests": { "cpu": "100m" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", "name": "default-token-5khqf", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "nodeName": "gke-networklb-default-pool-3f6ca419-vs85", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "default", "serviceAccountName": "default", "terminationGracePeriodSeconds": 30, "tolerations": [ { "effect": "NoExecute", "key": "node.alpha.kubernetes.io/notReady", "operator": "Exists", "tolerationSeconds": 300 }, { "effect": "NoExecute", "key": "node.alpha.kubernetes.io/unreachable", "operator": "Exists", "tolerationSeconds": 300 } ], "volumes": [ { "name": "default-token-5khqf", "secret": { "defaultMode": 420, "secretName": "default-token-5khqf" } } ] }, "status": { "conditions": [ { "lastProbeTime": null, "lastTransitionTime": "2018-06-11T01:01:12Z", "status": "True", "type": "Initialized" }, { "lastProbeTime": null, "lastTransitionTime": "2018-06-11T01:01:18Z", "status": "True", "type": "Ready" }, { "lastProbeTime": null, "lastTransitionTime": "2018-06-11T01:01:12Z", "status": "True", "type": "PodScheduled" } ], "containerStatuses": [ { "containerID": "docker://81a65fd0f30327173eb5d41f1a5c0a7e3752aca7963ac510165aa007c4abcd0b", "image": "nginx:latest", "imageID": "docker-pullable://nginx@sha256:1f9c00b4c95ef931afa097823d902e7602aebc3ec5532e907e066978075ca3e0", "lastState": {}, "name": "nginx", "ready": true, "restartCount": 0, "state": { "running": { "startedAt": "2018-06-11T01:01:18Z" } } } ], "hostIP": "10.138.0.4", "phase": "Running", "podIP": "10.36.2.6", "qosClass": "Burstable", "startTime": "2018-06-11T01:01:12Z" } }, { "apiVersion": "v1", "kind": "Pod", "metadata": { "annotations": { "kubernetes.io/created-by": "{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"default\",\"name\":\"nginx-7c87f569d\",\"uid\":\"ef24e95c-6d12-11e8-9bae-42010a8a0201\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"562\"}}\n", "kubernetes.io/limit-ranger": "LimitRanger plugin set: cpu request for container nginx" }, "creationTimestamp": "2018-06-11T01:01:12Z", "generateName": "nginx-7c87f569d-", "labels": { "pod-template-hash": "374391258", "run": "nginx" }, "name": "nginx-7c87f569d-b4krw", "namespace": "default", "ownerReferences": [ { "apiVersion": "extensions/v1beta1", "blockOwnerDeletion": true, "controller": true, "kind": "ReplicaSet", "name": "nginx-7c87f569d", "uid": "ef24e95c-6d12-11e8-9bae-42010a8a0201" } ], "resourceVersion": "595", "selfLink": "/api/v1/namespaces/default/pods/nginx-7c87f569d-b4krw", "uid": "ef2d8181-6d12-11e8-9bae-42010a8a0201" }, "spec": { "containers": [ { "image": "nginx", "imagePullPolicy": "Always", "name": "nginx", "resources": { "requests": { "cpu": "100m" } }, "terminationMessagePath": 
"/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", "name": "default-token-5khqf", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "nodeName": "gke-networklb-default-pool-3f6ca419-wxvl", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "default", "serviceAccountName": "default", "terminationGracePeriodSeconds": 30, "tolerations": [ { "effect": "NoExecute", "key": "node.alpha.kubernetes.io/notReady", "operator": "Exists", "tolerationSeconds": 300 }, { "effect": "NoExecute", "key": "node.alpha.kubernetes.io/unreachable", "operator": "Exists", "tolerationSeconds": 300 } ], "volumes": [ { "name": "default-token-5khqf", "secret": { "defaultMode": 420, "secretName": "default-token-5khqf" } } ] }, "status": { "conditions": [ { "lastProbeTime": null, "lastTransitionTime": "2018-06-11T01:01:12Z", "status": "True", "type": "Initialized" }, { "lastProbeTime": null, "lastTransitionTime": "2018-06-11T01:01:18Z", "status": "True", "type": "Ready" }, { "lastProbeTime": null, "lastTransitionTime": "2018-06-11T01:01:12Z", "status": "True", "type": "PodScheduled" } ], "containerStatuses": [ { "containerID": "docker://7818cd87a8cdf37853d6e44cbdc0f06cd9ca84108cd85772c0a22bc95ddaf41d", "image": "nginx:latest", "imageID": "docker-pullable://nginx@sha256:1f9c00b4c95ef931afa097823d902e7602aebc3ec5532e907e066978075ca3e0", "lastState": {}, "name": "nginx", "ready": true, "restartCount": 0, "state": { "running": { "startedAt": "2018-06-11T01:01:17Z" } } } ], "hostIP": "10.138.0.2", "phase": "Running", "podIP": "10.36.1.6", "qosClass": "Burstable", "startTime": "2018-06-11T01:01:12Z" } } ], "kind": "List", "metadata": { "resourceVersion": "", "selfLink": "" }}





Posted by 김용환


After installing Java 10, lombok annotation-based code fails to compile with Gradle. (IntelliJ is fine.)







$ gradle compileJava


> Task :compileJava FAILED

error: cannot find symbol

import com.google.api.entity.opentsdb.DownSample.DownSampleBuilder;

                                                      ^

  symbol:   class DownSampleBuilder

  location: class DownSample

1 error





@Data
@ToString
@Builder
@AllArgsConstructor
public class DownSample {

long interval;
AggregatorType aggregator;
DownSampleFill fill;

}


It looks like a javac compilation issue.


https://github.com/rzwitserloot/lombok/issues/1646
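A commonly suggested fix for lombok on newer JDKs is to upgrade lombok and declare it as an explicit annotation processor instead of a plain compile dependency. A minimal sketch for build.gradle, assuming Gradle 4.6+ (the version below is an assumption; pick a release with JDK 10 support):

dependencies {
    // Assumed lombok version; use a release that supports JDK 10.
    compileOnly 'org.projectlombok:lombok:1.16.22'
    annotationProcessor 'org.projectlombok:lombok:1.16.22'
}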


Posted by 김용환



https://www.differencebetween.com/difference-between-rollout-and-vs-deploy/



Roll-out, simply translated, means the "launch" or "release" of a new product or policy.



Posted by 김용환

A good resource on Consul:


https://blog.eleven-labs.com/en/consul-service-discovery-failure-detection/



Posted by 김용환


Ubuntu release codenames are animal names.



The development codename of a release takes the form "Adjective Animal". So for example: Warty Warthog (Ubuntu 4.10), Hoary Hedgehog (Ubuntu 5.04), Breezy Badger (Ubuntu 5.10), are the first three releases of Ubuntu. In general, people refer to the release using the adjective, like "warty" or "breezy". The names live on in one hidden location



MarkShuttleworth said the following with regard to where the naming scheme originally came from:

  • So, what's with the "Funky Fairy" naming system? Many sensible people have wondered why we chose this naming scheme. It came about as a joke on a ferry between Circular Quay and somewhere else, in Sydney, Australia:

    • lifeless: how long before we make a first release?
      sabdfl: it would need to be punchy. six months max.
      lifeless: six months! thats not a lot of time for polish.
      sabdfl: so we'll have to nickname it the warty warthog release.

    And voila, the name stuck. The first mailing list for the Ubuntu team was called "warthogs", and we used to hang out on #warthogs on irc.freenode.net. For subsequent releases we wanted to stick with the "hog" names, so we had Hoary Hedgehog, and Grumpy Groundhog. But "Grumpy" just didn't sound right, for a release that was looking really good, and had fantastic community participation. So we looked around and came up with "Breezy Badger". We will still use "Grumpy Groundhog", but those plans are still a surprise to be announced... For those of you who think the chosen names could be improved, you might be relieved to know that the "Breezy Badger" was originally going to be the "Bendy Badger" (I still think that rocked). There were others... For all of our sanity we are going to try to keep these names alphabetical after Breezy. We might skip a few letters, and we'll have to wrap eventually. But the naming convention is here for a while longer, at least. The possibilities are endless. Gregarious Gnu? Antsy Aardvark? Phlegmatic Pheasant? You send 'em, we'll consider 'em.

  1. lifeless is Robert Collins. sabdfl is Mark Shuttleworth.


https://wiki.ubuntu.com/DevelopmentCodeNames


https://en.wikipedia.org/wiki/Ubuntu_version_history#Ubuntu_18.04_LTS_(Bionic_Beaver)



Posted by 김용환


"네이버링"이라는 단어를 사용한다면, BGP 간을 연결하는 작업이라고 기억하는 것이 좋다.




https://www.networkcomputing.com/data-centers/bgp-basics-internal-and-external-bgp/1830126875

https://www.slideshare.net/vamsidharnaidu/bgp-1


Posted by 김용환

A managed Apache Kafka service has been added to Google Cloud.



https://cloud.google.com/blog/big-data/2018/05/google-cloud-platform-and-confluent-partner-to-deliver-a-managed-apache-kafka-service




https://www.confluent.io/confluent-cloud/?utm_source=cloud.google.com&utm_medium=site-link&utm_campaign=gcp&utm_term=term&utm_content=content



Posted by 김용환

[Repost] buildah




https://github.com/projectatomic/buildah



When building and distributing docker images, it reportedly lets you add or delete specific layers. Red Hat seems to be pushing it strongly.


Simple examples can be found at the links below, with a short sketch after them.



https://fedoramagazine.org/daemon-less-container-management-buildah/


https://opensource.com/article/18/5/containers-buildah
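For reference, a minimal sketch of the daemon-less flow those articles walk through (the base image and package are illustrative):

# Create a working container from a base image.
$ container=$(buildah from fedora)

# Run a command inside it, e.g. install a package.
$ buildah run $container -- dnf install -y nginx

# Commit the result as a new image.
$ buildah commit $container nginx-image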




Posted by 김용환


Deployment is often described simply as "deployment/release", but there are in fact many strategies.



The approaches commonly used for web services are as follows.




1) Rolling update (also called Ramped)


This is the typical form of deployment: servers are simply restarted one at a time. Side effects can occur because old and new code serve traffic together while the rollout is in progress. Still, it has been widely used because rollback is possible and it is easy to manage. You can also deploy to just one server first and observe it; a Kubernetes sketch follows below.
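In Kubernetes, for example, this is the Deployment's default RollingUpdate behavior. A minimal sketch, assuming an nginx deployment like the one in the posts above (the image tag is illustrative):

# Changing the image triggers a rolling update; pods are replaced gradually.
$ kubectl set image deployment/nginx nginx=nginx:1.15
$ kubectl rollout status deployment/nginx

# Rollback is a single command.
$ kubectl rollout undo deployment/nginx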





2) Blue-Green


https://martinfowler.com/bliki/BlueGreenDeployment.html


The name comes from calling the previous deployment blue and the new deployment green.

Deploy the new version, then point all connections at the new deployment only. There are no side effects from mixing old and new code.

However, if a failure occurs, the impact is large, because all traffic is already on the new version.




3) Canary


Deploy to one server or to specific users first, and roll out to everyone once it proves stable.

This approach is used when the changed code is so large that the release feels risky.

It can be as simple as deploying to a single server, or it can target specific users via zookeeper/storage, as in the sketch below.
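A sketch in Kubernetes terms (names and image tags are hypothetical): run one canary pod of the new version behind the same service as the stable pods.

# 3 stable pods and 1 canary pod share the app=nginx label.
$ kubectl run nginx-stable --image=nginx:1.14 --replicas=3 --labels="app=nginx,track=stable"
$ kubectl run nginx-canary --image=nginx:1.15 --replicas=1 --labels="app=nginx,track=canary"

# A service selecting app=nginx sends roughly 1/4 of requests to the canary.
$ kubectl expose deployment nginx-stable --name=nginx --port=80 --selector="app=nginx"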




4) A/B testing


Similar to a canary deployment, but done purely for A/B testing.






Image source: https://martinfowler.com/


References

SRE experience at Naver

https://martinfowler.com/

http://container-solutions.com/kubernetes-deployment-strategies/



Posted by 김용환

This page introduces open source applications that are useful for macOS users.

(To be precise, it actually seems aimed at developers.)


https://github.com/serhii-londar/open-source-mac-os-apps





Posted by 김용환