Strings in YAML are typically written like this:







A string in YAML

'A single-quoted string in YAML'

"A double-quoted string in YAML"



How do you write multi-line strings? (Honestly, I only just learned that YAML supports multi-line strings.)

Use the pipe (|) or the greater-than sign (>).



When a string contains line breaks, you can use the literal style, indicated by the pipe (|), to indicate that the string will span several lines. In literals, newlines are preserved:

|
  \/ /| |\/| |
  / / | |  | |__

Alternatively, strings can be written with the folded style, denoted by >, where each line break is replaced by a space:

>
  This is a very long sentence
  that spans several lines in the YAML
  but which will be rendered as a string
  without carriage returns.
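Putting the two styles side by side (a minimal sketch; the comments show the string each scalar parses to under YAML's default "clip" chomping, which keeps a single trailing newline):

```yaml
# Literal style (|): newlines are preserved.
literal: |
  line one
  line two

# Folded style (>): each line break becomes a space.
folded: >
  line one
  line two

# literal parses to "line one\nline two\n"
# folded  parses to "line one line two\n"
```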



Source: http://symfony.com/doc/current/components/yaml/yaml_format.html

Posted by 김용환




Consumer-driven contracts capture a consumer's expectations of a service. Those expectations are expressed as code so that they can be tested, and the producer runs the tests. For consumer-driven contracts to work properly, the tests should run as part of the producer's continuous-integration build.



In particular, the tests must be kept independent of each other.


Stubs that simulate a microservice or a legacy system are useful for this kind of testing.
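As an illustrative sketch (the URL path and expected fields below are hypothetical, not taken from any real service), a consumer's expectations can be written as assertions that run against a stub standing in for the producer:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Stub producer: simulates the real service so the consumer's
# contract test can run independently of it.
class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"id": 1, "name": "sample"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def run_contract_test():
    # Bind to port 0 so the OS picks a free port.
    server = HTTPServer(("127.0.0.1", 0), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        url = f"http://127.0.0.1:{server.server_port}/users/1"
        with urlopen(url) as resp:
            assert resp.status == 200
            payload = json.loads(resp.read())
        # The consumer's expectations, expressed as assertions:
        assert set(payload) >= {"id", "name"}
        return payload
    finally:
        server.shutdown()
```

In a real setup the same assertions would also run against the actual producer in its CI build; the stub only keeps the consumer side independent.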




When we ran functional tests on Kakao Story, we learned to test consumers and producers independently, and we parallelized the tests with Docker. Functional testing was genuinely helpful to me. (Thanks, Merlin!)


https://www.slideshare.net/knight1128/hancom-mds-conference-kakao-devops-practice-devops





Looking back, I think those functional tests were a form of consumer-driven contract testing. (Even though the terminology was fuzzy, the concept was the same.)





See the following for a detailed explanation of the concept.


https://martinfowler.com/articles/consumerDrivenContracts.html



Posted by 김용환



Here is an example of creating a network load balancer on GCP and serving nginx behind it.


export MY_REGION=[YOUR_REGION]


export MY_ZONE=[YOUR_ZONE]


export CLUSTER_NAME=httploadbalancer



gcloud config set project $DEVSHELL_PROJECT_ID


gcloud config set compute/region $MY_REGION


gcloud config set compute/zone $MY_ZONE




Create a network LB cluster on GCP and bring up nginx.


$ gcloud container clusters create networklb --num-nodes 3



$ kubectl run nginx --image=nginx --replicas=3
deployment "nginx" created


Check that the pods are up and running.


$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
nginx-7c87f569d-2hsjp   1/1       Running   0          13s
nginx-7c87f569d-2vhj9   1/1       Running   0          13s
nginx-7c87f569d-b4krw   1/1       Running   0          13s


Expose the nginx deployment through a load balancer.


$ kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer



Now fetch the nginx LB service information.

$ kubectl get service nginx
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx   LoadBalancer   10.39.253.240   35.230.93.151   80:30516/TCP   13m



Everything looks good.



Now clean everything up.


Delete the Kubernetes nginx service.



$ kubectl delete service nginx
service "nginx" deleted



Delete the deployment and its nginx pods (instances).


$ kubectl delete deployment nginx
deployment "nginx" deleted



Finally, delete the cluster.


$ gcloud container clusters delete networklb --zone='us-west1-a'
The following clusters will be deleted.
 - [networklb] in [us-west1-a]
Do you want to continue (Y/n)?  Y
Deleting cluster networklb...done.
Deleted [https://container.googleapis.com/v1/projects/111/zones/us-west1-a/clusters/networklb].





This time, use an HTTP load balancer.







$ export MY_ZONE='us-west1-a'


Create a cluster named samuel.

$ gcloud container clusters create samuel --zone $MY_ZONE


WARNING: Currently node auto repairs are disabled by default. In the future this will change and they will be enabled by default. Use `--[no-]enable-autorepair` flag to suppress this warning.

WARNING: Starting in Kubernetes v1.10, new clusters will no longer get compute-rw and storage-ro scopes added to what is specified in --scopes (though the latter will remain included in the default --scopes). To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).


Creating cluster samuel...done.

Created ....



NAME    LOCATION    MASTER_VERSION  MASTER_IP        MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS

samuel  us-west1-a  1.8.10-gke.0    104.198.107.200  n1-standard-1  1.8.10-gke.0  3          RUNNING



$ kubectl run nginx --image=nginx --port=80

deployment "nginx" created



$ kubectl expose deployment nginx --target-port=80 --type=NodePort

service "nginx" exposed




Create a file named basic-ingress.yaml.


apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80




$ kubectl create -f basic-ingress.yaml                                                                    

ingress "basic-ingress" created



$ kubectl get ingress basic-ingress --watch

NAME            HOSTS     ADDRESS   PORTS     AGE

basic-ingress   *                   80        7s

basic-ingress   *         35.227.208.116   80        57s

basic-ingress   *         35.227.208.116   80        57s



$ kubectl describe ingress basic-ingress

Name:             basic-ingress

Namespace:        default

Address:          35.227.208.116

Default backend:  nginx:80 (10.36.1.5:80)

Rules:

  Host  Path  Backends

  ----  ----  --------

  *     *     nginx:80 (10.36.1.5:80)

Annotations:

  forwarding-rule:  k8s-fw-default-basic-ingress--27688e79a493971e

  target-proxy:     k8s-tp-default-basic-ingress--27688e79a493971e

  url-map:          k8s-um-default-basic-ingress--27688e79a493971e

  backends:         {"k8s-be-32520--27688e79a493971e":"Unknown"}

Events:

  Type    Reason   Age              From                     Message

  ----    ------   ----             ----                     -------

  Normal  ADD      4m               loadbalancer-controller  default/basic-ingress

  Normal  CREATE   3m               loadbalancer-controller  ip: 35.227.208.116

  Normal  Service  3m (x3 over 3m)  loadbalancer-controller  default backend set to nginx:32520




Now run the cleanup commands.



$ kubectl delete -f basic-ingress.yaml

ingress "basic-ingress" deleted


$ kubectl delete deployment nginx

deployment "nginx" deleted


$ gcloud container clusters delete samuel

ERROR: (gcloud.container.clusters.delete) One of [--zone, --region] must be supplied: Please specify location..


$ gcloud container clusters delete samuel --zone=$MY_ZONE

The following clusters will be deleted.

 - [samuel] in [us-west1-a]

Do you want to continue (Y/n)?  Y

Deleting cluster samuel...done.

Deleted 







$ kubectl get pods -o wide
NAME                    READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-7c87f569d-2hsjp   1/1       Running   0          8s        10.36.0.6   gke-networklb-default-pool-3f6ca419-nmpb
nginx-7c87f569d-2vhj9   1/1       Running   0          8s        10.36.2.6   gke-networklb-default-pool-3f6ca419-vs85
nginx-7c87f569d-b4krw   1/1       Running   0          8s        10.36.1.6   gke-networklb-default-pool-3f6ca419-wxvl







$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                                  READY     STATUS    RESTARTS   AGE
default       nginx-7c87f569d-2hsjp                                 1/1       Running   0          2m
default       nginx-7c87f569d-2vhj9                                 1/1       Running   0          2m
default       nginx-7c87f569d-b4krw                                 1/1       Running   0          2m
kube-system   event-exporter-v0.1.8-599c8775b7-nc8xw                2/2       Running   0          3m
kube-system   fluentd-gcp-v2.0.9-dqrnb                              2/2       Running   0          3m
kube-system   fluentd-gcp-v2.0.9-lrnjr                              2/2       Running   0          3m
kube-system   fluentd-gcp-v2.0.9-zh2qq                              2/2       Running   0          3m
kube-system   heapster-v1.4.3-57c7677fc4-6mqz8                      3/3       Running   0          2m
kube-system   kube-dns-778977457c-8xtfs                             3/3       Running   0          2m
kube-system   kube-dns-778977457c-nvztz                             3/3       Running   0          3m
kube-system   kube-dns-autoscaler-7db47cb9b7-jbdxv                  1/1       Running   0          3m
kube-system   kube-proxy-gke-networklb-default-pool-3f6ca419-nmpb   1/1       Running   0          3m
kube-system   kube-proxy-gke-networklb-default-pool-3f6ca419-vs85   1/1       Running   0          3m
kube-system   kube-proxy-gke-networklb-default-pool-3f6ca419-wxvl   1/1       Running   0          3m
kube-system   kubernetes-dashboard-6bb875b5bc-9r62n                 1/1       Running   0          3m
kube-system   l7-default-backend-6497bcdb4d-sngpv                   1/1       Running   0          3m




$ kubectl get pods --include-uninitialized
NAME                    READY     STATUS    RESTARTS   AGE
nginx-7c87f569d-2hsjp   1/1       Running   0          3m
nginx-7c87f569d-2vhj9   1/1       Running   0          3m
nginx-7c87f569d-b4krw   1/1       Running   0          3m





$ kubectl get pods --field-selector=status.phase=Running
NAME                    READY     STATUS    RESTARTS   AGE
nginx-7c87f569d-2hsjp   1/1       Running   0          4m
nginx-7c87f569d-2vhj9   1/1       Running   0          4m
nginx-7c87f569d-b4krw   1/1       Running   0          4m



$ echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})
nginx-7c87f569d-2hsjp nginx-7c87f569d-2vhj9 nginx-7c87f569d-b4krw
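The same list of pod names can be extracted from `kubectl get pods -o json` output with a few lines of standard-library code; the `sample` document below is a trimmed, hypothetical stand-in for the real output:

```python
import json

# Equivalent of the jsonpath query {.items..metadata.name}:
# walk the pod list and collect each pod's metadata.name.
def pod_names(pods_json):
    return [item["metadata"]["name"] for item in pods_json["items"]]

# Trimmed stand-in for real `kubectl get pods -o json` output.
sample = json.loads("""
{
  "kind": "List",
  "items": [
    {"metadata": {"name": "nginx-7c87f569d-2hsjp"}},
    {"metadata": {"name": "nginx-7c87f569d-2vhj9"}},
    {"metadata": {"name": "nginx-7c87f569d-b4krw"}}
  ]
}
""")

print(" ".join(pod_names(sample)))
```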



$ kubectl get pods -o json
{
  "apiVersion": "v1",
  "items": [
    {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {
        "name": "nginx-7c87f569d-2hsjp",
        "namespace": "default",
        "labels": { "pod-template-hash": "374391258", "run": "nginx" },
        ...
      },
      "spec": {
        "containers": [ { "image": "nginx", "name": "nginx", ... } ],
        "nodeName": "gke-networklb-default-pool-3f6ca419-nmpb",
        ...
      },
      "status": {
        "hostIP": "10.138.0.3",
        "podIP": "10.36.0.6",
        "phase": "Running",
        "qosClass": "Burstable",
        ...
      }
    },
    ... (two more similar Pod entries: nginx-7c87f569d-2vhj9 with podIP 10.36.2.6 and nginx-7c87f569d-b4krw with podIP 10.36.1.6)
  ],
  "kind": "List",
  "metadata": { "resourceVersion": "", "selfLink": "" }
}

(output abridged)





Posted by 김용환


After installing Java 10, Gradle compilation of Lombok-annotation-based code fails. (IntelliJ is unaffected.)







$ gradle compileJava


> Task :compileJava FAILED

error: cannot find symbol

import com.google.api.entity.opentsdb.DownSample.DownSampleBuilder;

                                                      ^

  symbol:   class DownSampleBuilder

  location: class DownSample

1 error





@Data
@ToString
@Builder
@AllArgsConstructor
public class DownSample {

long interval;
AggregatorType aggregator;
DownSampleFill fill;

}


It looks like a Java compiler issue.


https://github.com/rzwitserloot/lombok/issues/1646
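According to that issue thread, moving to a Lombok release with JDK 10 support (1.16.22 or later) reportedly resolves this kind of symbol error. A minimal sketch of the Gradle dependency change (the exact version may need adjusting for your setup):

```groovy
dependencies {
    // Lombok 1.16.22+ reportedly handles the JDK 10 compile issue
    compileOnly 'org.projectlombok:lombok:1.16.22'
    annotationProcessor 'org.projectlombok:lombok:1.16.22'
}
```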


Posted by 김용환



https://www.differencebetween.com/difference-between-rollout-and-vs-deploy/



Roll-out, translated simply, means the "launch" or "release" of a new product or policy.



Posted by 김용환

A good resource on Consul:


https://blog.eleven-labs.com/en/consul-service-discovery-failure-detection/



Posted by 김용환


Ubuntu releases are named after animals.



The development codename of a release takes the form "Adjective Animal". So for example: Warty Warthog (Ubuntu 4.10), Hoary Hedgehog (Ubuntu 5.04), Breezy Badger (Ubuntu 5.10), are the first three releases of Ubuntu. In general, people refer to the release using the adjective, like "warty" or "breezy". The names live on in one hidden location



MarkShuttleworth said the following with regard to where the naming scheme originally came from:

  • So, what's with the "Funky Fairy" naming system? Many sensible people have wondered why we chose this naming scheme. It came about as a joke on a ferry between Circular Quay and somewhere else, in Sydney, Australia:

    • lifeless: how long before we make a first release?
      sabdfl: it would need to be punchy. six months max.
      lifeless: six months! thats not a lot of time for polish.
      sabdfl: so we'll have to nickname it the warty warthog release.

    And voila, the name stuck. The first mailing list for the Ubuntu team was called "warthogs", and we used to hang out on #warthogs on irc.freenode.net. For subsequent releases we wanted to stick with the "hog" names, so we had Hoary Hedgehog, and Grumpy Groundhog. But "Grumpy" just didn't sound right, for a release that was looking really good, and had fantastic community participation. So we looked around and came up with "Breezy Badger". We will still use "Grumpy Groundhog", but those plans are still a surprise to be announced... For those of you who think the chosen names could be improved, you might be relieved to know that the "Breezy Badger" was originally going to be the "Bendy Badger" (I still think that rocked). There were others... For all of our sanity we are going to try to keep these names alphabetical after Breezy. We might skip a few letters, and we'll have to wrap eventually. But the naming convention is here for a while longer, at least. The possibilities are endless. Gregarious Gnu? Antsy Aardvark? Phlegmatic Pheasant? You send 'em, we'll consider 'em.

  1. lifeless is Robert Collins. sabdfl is Mark Shuttleworth.


https://wiki.ubuntu.com/DevelopmentCodeNames


https://en.wikipedia.org/wiki/Ubuntu_version_history#Ubuntu_18.04_LTS_(Bionic_Beaver)



Posted by 김용환


If you encounter the term "neighboring", it helps to remember that it refers to the process of connecting BGP peers.




https://www.networkcomputing.com/data-centers/bgp-basics-internal-and-external-bgp/1830126875

https://www.slideshare.net/vamsidharnaidu/bgp-1


Posted by 김용환

Apache Kafka as a service has been added to Google Cloud.



https://cloud.google.com/blog/big-data/2018/05/google-cloud-platform-and-confluent-partner-to-deliver-a-managed-apache-kafka-service




https://www.confluent.io/confluent-cloud/?utm_source=cloud.google.com&utm_medium=site-link&utm_campaign=gcp&utm_term=term&utm_content=content



Posted by 김용환

[Reposted] buildah

scribbling 2018. 5. 18. 10:59



https://github.com/projectatomic/buildah



When deploying a Docker image, buildah reportedly lets you add or remove specific layers. Red Hat seems to be pushing it heavily.


Simple examples:



https://fedoramagazine.org/daemon-less-container-management-buildah/


https://opensource.com/article/18/5/containers-buildah




Posted by 김용환