I finished translating a 786-page book on Spark data analytics. No idea whether it will actually get published, though. haha




https://www.amazon.com/Scala-Spark-Big-Data-Analytics/dp/1785280848





Beyond data analysis with Spark, this book also covers machine learning.


Working through its explanation of Spark's machine learning and the API, I picked up at least a little machine learning myself.


I hope this book gets published so that developers who don't know Spark can learn Spark and machine learning a bit more easily...


Posted by '김용환'


While running a nova command, I hit the following error.


$ nova quota-show --user $projectUser --tenant $project

ERROR (AttributeError): 'unicode' object has no attribute 'get'




In some cases, the following setting resolves it.


export PYTHONIOENCODING=UTF-8



If the error still occurs, adding --debug to the previous command shows exactly what is going on.


Here it was a 404 Not Found, i.e. an API endpoint issue.


$ nova quota-show --user $projectUser --tenant $project --debug

DEBUG (session:198) REQ: curl -g -i -X GET https://code.google.io:5000/v2.0/ -H "Accept: application/json" -H "User-Agent: python-keystoneclient"

...

DEBUG (connectionpool:387) "GET /v2/ae17dbd7165142808e074579360a8b9c HTTP/1.1" 404 112

DEBUG (session:215) RESP: [404] Date: Tue, 12 Jun 2018 08:01:07 GMT Content-Length: 112 Content-Type: application/json; charset=UTF-8 X-Compute-Request-Id: req-0a8e18ed-860e-4dcb-a566-f7eb02fb19bd

RESP BODY: {"message": "The resource could not be found.<br /><br />\n\n\n", "code": "404 Not Found", "title": "Not Found"}


DEBUG (shell:909) 'unicode' object has no attribute 'get'
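The failure pattern itself is easy to see in plain Python. Below is a minimal sketch (my illustration, not the actual novaclient code, and the key name is hypothetical): the client expects the parsed response to be a dict, but on the 404 it receives a plain unicode string and calls .get() on it.

# Sketch only: a unicode error body arrives where a dict was expected.
body = u"The resource could not be found."
body.get("quota_set")  # AttributeError: 'unicode' object has no attribute 'get'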







Posted by '김용환'


In Marathon, all of the default HAProxy ports are open. Our service is wired to ports 80/443, but for a specific Marathon app you sometimes want to block HTTP access on port 80 only.



  "labels": {

    "HAPROXY_0_MODE": "http",

    "HAPROXY_0_HTTP_FRONTEND_ACL": "",

    "HAPROXY_0_VHOST": "plus.google.com",

    "HAPROXY_0_SSL_CERT": "/etc/ssl/marathon/268.pem"

  },

  


When HAPROXY_0_HTTP_FRONTEND_ACL is defined (here as an empty string), access on port 80 returns a 503 error.

Port 443, on the other hand, remains open.



$ curl -I -XPOST http://plus.google.com/

HTTP/1.0 503 Service Unavailable

Cache-Control: no-cache

Connection: close

Content-Type: text/html




$ curl -XPOST https://plus.google.com...

Success


Posted by '김용환'



This is an example of creating a network load balancer on GCP with nginx.


export MY_REGION=[YOUR_REGION]


export MY_ZONE=[YOUR_ZONE]


export CLUSTER_NAME=httploadbalancer



gcloud config set project $DEVSHELL_PROJECT_ID


gcloud config set compute/region $MY_REGION


gcloud config set compute/zone $MY_ZONE




Create the networklb cluster on GCP and launch nginx.


$ gcloud container clusters create networklb --num-nodes 3



$ kubectl run nginx --image=nginx --replicas=3
deployment "nginx" created


Check that the pods are up and running normally.


$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
nginx-7c87f569d-2hsjp   1/1       Running   0          13s
nginx-7c87f569d-2vhj9   1/1       Running   0          13s
nginx-7c87f569d-b4krw   1/1       Running   0          13s


Expose the nginx deployment as a load balancer.


$ kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer



Now get the nginx service's load balancer information.

$ kubectl get service nginx
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx     LoadBalancer   10.39.253.240   35.230.93.151   80:30516/TCP   13m



Everything looks good.
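As an extra check (a hypothetical step, not part of the original run), the EXTERNAL-IP above can be hit directly; nginx should answer on port 80.

# Hypothetical check against the EXTERNAL-IP from the output above.
import requests

resp = requests.get('http://35.230.93.151/')
print(resp.status_code)  # expect 200 from the nginx welcome page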



Now clean up the load balancer.


Delete the Kubernetes nginx service.



$ kubectl delete service nginx
service "nginx" deleted



Delete the Kubernetes deployment, which also removes the nginx pods (instances).


$ kubectl delete deployment nginx
deployment "nginx" deleted



Finally, take the cluster down.


$ gcloud container clusters delete networklb --zone='us-west1-a'
The following clusters will be deleted.
 - [networklb] in [us-west1-a]
Do you want to continue (Y/n)?  Y
Deleting cluster networklb...done.
Deleted [https://container.googleapis.com/v1/projects/111/zones/us-west1-a/clusters/networklb].





This time, let's use an HTTP load balancer.







$ export MY_ZONE='us-west1-a'


Create a cluster named samuel.

$ gcloud container clusters create samuel --zone $MY_ZONE


WARNING: Currently node auto repairs are disabled by default. In the future this will change and they will be enabled by default. Use `--[no-]enable-autorepair` flag to suppress this warning.

WARNING: Starting in Kubernetes v1.10, new clusters will no longer get compute-rw and storage-ro scopes added to what is specified in --scopes (though the latter will remain included in the default --scopes). To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).

Creating cluster samuel...done.

Created ....03470e


NAME    LOCATION    MASTER_VERSION  MASTER_IP        MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS

samuel  us-west1-a  1.8.10-gke.0    104.198.107.200  n1-standard-1  1.8.10-gke.0  3          RUNNING



$ kubectl run nginx --image=nginx --port=80

deployment "nginx" created



$ kubectl expose deployment nginx --target-port=80 --type=NodePort

service "nginx" exposed




Create a basic-ingress.yaml file.


apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80




$ kubectl create -f basic-ingress.yaml                                                                    

ingress "basic-ingress" created



$ kubectl get ingress basic-ingress --watch

NAME            HOSTS     ADDRESS   PORTS     AGE

basic-ingress   *                   80        7s

basic-ingress   *         35.227.208.116   80        57s

basic-ingress   *         35.227.208.116   80        57s
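Once an ADDRESS appears, the ingress can be smoke-tested (again a hypothetical check, not from the original transcript). Note that a GCP HTTP load balancer may keep returning errors for a few minutes until provisioning completes.

# Hypothetical smoke test against the ingress ADDRESS above.
import requests

resp = requests.get('http://35.227.208.116/')
print(resp.status_code)  # expect 200 once the load balancer is fully up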



$ kubectl describe ingress basic-ingress

Name:             basic-ingress

Namespace:        default

Address:          35.227.208.116

Default backend:  nginx:80 (10.36.1.5:80)

Rules:

  Host  Path  Backends

  ----  ----  --------

  *     *     nginx:80 (10.36.1.5:80)

Annotations:

  forwarding-rule:  k8s-fw-default-basic-ingress--27688e79a493971e

  target-proxy:     k8s-tp-default-basic-ingress--27688e79a493971e

  url-map:          k8s-um-default-basic-ingress--27688e79a493971e

  backends:         {"k8s-be-32520--27688e79a493971e":"Unknown"}

Events:

  Type    Reason   Age              From                     Message

  ----    ------   ----             ----                     -------

  Normal  ADD      4m               loadbalancer-controller  default/basic-ingress

  Normal  CREATE   3m               loadbalancer-controller  ip: 35.227.208.116

  Normal  Service  3m (x3 over 3m)  loadbalancer-controller  default backend set to nginx:32520




Now run the cleanup commands.



$ kubectl delete -f basic-ingress.yaml

ingress "basic-ingress" deleted


$ kubectl delete deployment nginx

deployment "nginx" deleted


$ gcloud container clusters delete samuel

ERROR: (gcloud.container.clusters.delete) One of [--zone, --region] must be supplied: Please specify location..


$ gcloud container clusters delete samuel --zone=$MY_ZONE

The following clusters will be deleted.

 - [samuel] in [us-west1-a]

Do you want to continue (Y/n)?  Y

Deleting cluster samuel...done.

Deleted 







Below are a few more kubectl get pods variations (run against the networklb cluster from earlier).

$ kubectl get pods -o wide
NAME                    READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-7c87f569d-2hsjp   1/1       Running   0          8s        10.36.0.6   gke-networklb-default-pool-3f6ca419-nmpb
nginx-7c87f569d-2vhj9   1/1       Running   0          8s        10.36.2.6   gke-networklb-default-pool-3f6ca419-vs85
nginx-7c87f569d-b4krw   1/1       Running   0          8s        10.36.1.6   gke-networklb-default-pool-3f6ca419-wxvl







$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                                  READY     STATUS    RESTARTS   AGE
default       nginx-7c87f569d-2hsjp                                 1/1       Running   0          2m
default       nginx-7c87f569d-2vhj9                                 1/1       Running   0          2m
default       nginx-7c87f569d-b4krw                                 1/1       Running   0          2m
kube-system   event-exporter-v0.1.8-599c8775b7-nc8xw                2/2       Running   0          3m
kube-system   fluentd-gcp-v2.0.9-dqrnb                              2/2       Running   0          3m
kube-system   fluentd-gcp-v2.0.9-lrnjr                              2/2       Running   0          3m
kube-system   fluentd-gcp-v2.0.9-zh2qq                              2/2       Running   0          3m
kube-system   heapster-v1.4.3-57c7677fc4-6mqz8                      3/3       Running   0          2m
kube-system   kube-dns-778977457c-8xtfs                             3/3       Running   0          2m
kube-system   kube-dns-778977457c-nvztz                             3/3       Running   0          3m
kube-system   kube-dns-autoscaler-7db47cb9b7-jbdxv                  1/1       Running   0          3m
kube-system   kube-proxy-gke-networklb-default-pool-3f6ca419-nmpb   1/1       Running   0          3m
kube-system   kube-proxy-gke-networklb-default-pool-3f6ca419-vs85   1/1       Running   0          3m
kube-system   kube-proxy-gke-networklb-default-pool-3f6ca419-wxvl   1/1       Running   0          3m
kube-system   kubernetes-dashboard-6bb875b5bc-9r62n                 1/1       Running   0          3m
kube-system   l7-default-backend-6497bcdb4d-sngpv                   1/1       Running   0          3m




$ kubectl get pods --include-uninitialized
NAME                    READY     STATUS    RESTARTS   AGE
nginx-7c87f569d-2hsjp   1/1       Running   0          3m
nginx-7c87f569d-2vhj9   1/1       Running   0          3m
nginx-7c87f569d-b4krw   1/1       Running   0          3m





$ kubectl get pods --field-selector=status.phase=Running
NAME                    READY     STATUS    RESTARTS   AGE
nginx-7c87f569d-2hsjp   1/1       Running   0          4m
nginx-7c87f569d-2vhj9   1/1       Running   0          4m
nginx-7c87f569d-b4krw   1/1       Running   0          4m



With $sel holding a label selector (the pods above carry the label run=nginx), jsonpath can print just the pod names:

$ echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})
nginx-7c87f569d-2hsjp nginx-7c87f569d-2vhj9 nginx-7c87f569d-b4krw



$ kubectl get pods -o json
{ "apiVersion": "v1", "items": [ { "apiVersion": "v1", "kind": "Pod", "metadata": { "annotations": { "kubernetes.io/created-by": "{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"default\",\"name\":\"nginx-7c87f569d\",\"uid\":\"ef24e95c-6d12-11e8-9bae-42010a8a0201\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"562\"}}\n", "kubernetes.io/limit-ranger": "LimitRanger plugin set: cpu request for container nginx" }, "creationTimestamp": "2018-06-11T01:01:12Z", "generateName": "nginx-7c87f569d-", "labels": { "pod-template-hash": "374391258", "run": "nginx" }, "name": "nginx-7c87f569d-2hsjp", "namespace": "default", "ownerReferences": [ { "apiVersion": "extensions/v1beta1", "blockOwnerDeletion": true, "controller": true, "kind": "ReplicaSet", "name": "nginx-7c87f569d", "uid": "ef24e95c-6d12-11e8-9bae-42010a8a0201" } ], "resourceVersion": "599", "selfLink": "/api/v1/namespaces/default/pods/nginx-7c87f569d-2hsjp", "uid": "ef2e0e4a-6d12-11e8-9bae-42010a8a0201" }, "spec": { "containers": [ { "image": "nginx", "imagePullPolicy": "Always", "name": "nginx", "resources": { "requests": { "cpu": "100m" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", "name": "default-token-5khqf", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "nodeName": "gke-networklb-default-pool-3f6ca419-nmpb", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "default", "serviceAccountName": "default", "terminationGracePeriodSeconds": 30, "tolerations": [ { "effect": "NoExecute", "key": "node.alpha.kubernetes.io/notReady", "operator": "Exists", "tolerationSeconds": 300 }, { "effect": "NoExecute", "key": "node.alpha.kubernetes.io/unreachable", "operator": "Exists", "tolerationSeconds": 300 } ], "volumes": [ { "name": "default-token-5khqf", "secret": { "defaultMode": 420, "secretName": "default-token-5khqf" } } ] }, "status": { "conditions": [ { "lastProbeTime": null, "lastTransitionTime": "2018-06-11T01:01:12Z", "status": "True", "type": "Initialized" }, { "lastProbeTime": null, "lastTransitionTime": "2018-06-11T01:01:18Z", "status": "True", "type": "Ready" }, { "lastProbeTime": null, "lastTransitionTime": "2018-06-11T01:01:12Z", "status": "True", "type": "PodScheduled" } ], "containerStatuses": [ { "containerID": "docker://4530901e7f82ad2b601a759a28f48a693c9944299318e3126ecba9edf0c2b615", "image": "nginx:latest", "imageID": "docker-pullable://nginx@sha256:1f9c00b4c95ef931afa097823d902e7602aebc3ec5532e907e066978075ca3e0", "lastState": {}, "name": "nginx", "ready": true, "restartCount": 0, "state": { "running": { "startedAt": "2018-06-11T01:01:17Z" } } } ], "hostIP": "10.138.0.3", "phase": "Running", "podIP": "10.36.0.6", "qosClass": "Burstable", "startTime": "2018-06-11T01:01:12Z" } }, { "apiVersion": "v1", "kind": "Pod", "metadata": { "annotations": { "kubernetes.io/created-by": "{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"default\",\"name\":\"nginx-7c87f569d\",\"uid\":\"ef24e95c-6d12-11e8-9bae-42010a8a0201\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"562\"}}\n", "kubernetes.io/limit-ranger": "LimitRanger plugin set: cpu request for container nginx" }, "creationTimestamp": "2018-06-11T01:01:12Z", "generateName": "nginx-7c87f569d-",
"labels": { "pod-template-hash": "374391258", "run": "nginx" }, "name": "nginx-7c87f569d-2vhj9", "namespace": "default", "ownerReferences": [ { "apiVersion": "extensions/v1beta1", "blockOwnerDeletion": true, "controller": true, "kind": "ReplicaSet", "name": "nginx-7c87f569d", "uid": "ef24e95c-6d12-11e8-9bae-42010a8a0201" } ], "resourceVersion": "602", "selfLink": "/api/v1/namespaces/default/pods/nginx-7c87f569d-2vhj9", "uid": "ef29bf6b-6d12-11e8-9bae-42010a8a0201" }, "spec": { "containers": [ { "image": "nginx", "imagePullPolicy": "Always", "name": "nginx", "resources": { "requests": { "cpu": "100m" } }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", "name": "default-token-5khqf", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "nodeName": "gke-networklb-default-pool-3f6ca419-vs85", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "default", "serviceAccountName": "default", "terminationGracePeriodSeconds": 30, "tolerations": [ { "effect": "NoExecute", "key": "node.alpha.kubernetes.io/notReady", "operator": "Exists", "tolerationSeconds": 300 }, { "effect": "NoExecute", "key": "node.alpha.kubernetes.io/unreachable", "operator": "Exists", "tolerationSeconds": 300 } ], "volumes": [ { "name": "default-token-5khqf", "secret": { "defaultMode": 420, "secretName": "default-token-5khqf" } } ] }, "status": { "conditions": [ { "lastProbeTime": null, "lastTransitionTime": "2018-06-11T01:01:12Z", "status": "True", "type": "Initialized" }, { "lastProbeTime": null, "lastTransitionTime": "2018-06-11T01:01:18Z", "status": "True", "type": "Ready" }, { "lastProbeTime": null, "lastTransitionTime": "2018-06-11T01:01:12Z", "status": "True", "type": "PodScheduled" } ], "containerStatuses": [ { "containerID": "docker://81a65fd0f30327173eb5d41f1a5c0a7e3752aca7963ac510165aa007c4abcd0b", "image": "nginx:latest", "imageID": "docker-pullable://nginx@sha256:1f9c00b4c95ef931afa097823d902e7602aebc3ec5532e907e066978075ca3e0", "lastState": {}, "name": "nginx", "ready": true, "restartCount": 0, "state": { "running": { "startedAt": "2018-06-11T01:01:18Z" } } } ], "hostIP": "10.138.0.4", "phase": "Running", "podIP": "10.36.2.6", "qosClass": "Burstable", "startTime": "2018-06-11T01:01:12Z" } }, { "apiVersion": "v1", "kind": "Pod", "metadata": { "annotations": { "kubernetes.io/created-by": "{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"default\",\"name\":\"nginx-7c87f569d\",\"uid\":\"ef24e95c-6d12-11e8-9bae-42010a8a0201\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"562\"}}\n", "kubernetes.io/limit-ranger": "LimitRanger plugin set: cpu request for container nginx" }, "creationTimestamp": "2018-06-11T01:01:12Z", "generateName": "nginx-7c87f569d-", "labels": { "pod-template-hash": "374391258", "run": "nginx" }, "name": "nginx-7c87f569d-b4krw", "namespace": "default", "ownerReferences": [ { "apiVersion": "extensions/v1beta1", "blockOwnerDeletion": true, "controller": true, "kind": "ReplicaSet", "name": "nginx-7c87f569d", "uid": "ef24e95c-6d12-11e8-9bae-42010a8a0201" } ], "resourceVersion": "595", "selfLink": "/api/v1/namespaces/default/pods/nginx-7c87f569d-b4krw", "uid": "ef2d8181-6d12-11e8-9bae-42010a8a0201" }, "spec": { "containers": [ { "image": "nginx", "imagePullPolicy": "Always", "name": "nginx", "resources": { "requests": { "cpu": "100m" } }, "terminationMessagePath": 
"/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", "name": "default-token-5khqf", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "nodeName": "gke-networklb-default-pool-3f6ca419-wxvl", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "default", "serviceAccountName": "default", "terminationGracePeriodSeconds": 30, "tolerations": [ { "effect": "NoExecute", "key": "node.alpha.kubernetes.io/notReady", "operator": "Exists", "tolerationSeconds": 300 }, { "effect": "NoExecute", "key": "node.alpha.kubernetes.io/unreachable", "operator": "Exists", "tolerationSeconds": 300 } ], "volumes": [ { "name": "default-token-5khqf", "secret": { "defaultMode": 420, "secretName": "default-token-5khqf" } } ] }, "status": { "conditions": [ { "lastProbeTime": null, "lastTransitionTime": "2018-06-11T01:01:12Z", "status": "True", "type": "Initialized" }, { "lastProbeTime": null, "lastTransitionTime": "2018-06-11T01:01:18Z", "status": "True", "type": "Ready" }, { "lastProbeTime": null, "lastTransitionTime": "2018-06-11T01:01:12Z", "status": "True", "type": "PodScheduled" } ], "containerStatuses": [ { "containerID": "docker://7818cd87a8cdf37853d6e44cbdc0f06cd9ca84108cd85772c0a22bc95ddaf41d", "image": "nginx:latest", "imageID": "docker-pullable://nginx@sha256:1f9c00b4c95ef931afa097823d902e7602aebc3ec5532e907e066978075ca3e0", "lastState": {}, "name": "nginx", "ready": true, "restartCount": 0, "state": { "running": { "startedAt": "2018-06-11T01:01:17Z" } } } ], "hostIP": "10.138.0.2", "phase": "Running", "podIP": "10.36.1.6", "qosClass": "Burstable", "startTime": "2018-06-11T01:01:12Z" } } ], "kind": "List", "metadata": { "resourceVersion": "", "selfLink": "" }}





Posted by '김용환'


After installing Java 10, there is an issue compiling Lombok-annotation-based code with Gradle. (IntelliJ is fine.)







$ gradle compileJava


> Task :compileJava FAILED

error: cannot find symbol

import com.google.api.entity.opentsdb.DownSample.DownSampleBuilder;

                                                      ^

  symbol:   class DownSampleBuilder

  location: class DownSample

1 error





import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.ToString;

// @Builder generates the nested DownSample.DownSampleBuilder class that the
// failing import above refers to.
@Data
@ToString
@Builder
@AllArgsConstructor
public class DownSample {

    long interval;
    AggregatorType aggregator;
    DownSampleFill fill;

}


It looks like a Java compiler issue...


https://github.com/rzwitserloot/lombok/issues/1646


Posted by '김용환'



Here is how to take a request for a specific domain (e.g., plus.google.io) arriving on port 80 in Marathon and serve it from the app running on port 443.


Note that HAPROXY_0_MODE only supports tcp and http, so something else is needed: that something is HAPROXY_0_REDIRECT_TO_HTTPS.


"labels": {
"HAPROXY_0_MODE"="http",

"HAPROXY_0_REDIRECT_TO_HTTPS"="true",

"HAPROXY_0_VHOST"="plus.google.io",

"HAPROXY_0_SSL_CERT"="/etc/ssl/marathon/268.pem"

}




Now a request to port 80 is forwarded with a 301 redirect.


$ curl -I -XGET http://...

HTTP/1.1 301 Moved Permanently

Content-length: 0

Location: https://...

Connection: close



Posted by '김용환'


A simple code example.


import requests


def main():
    print('Hello, world!')
    response = requests.get('https://httpbin.org/ip')
    print(response.status_code)
    print(response.headers)
    print('Your IP is {0}'.format(response.json()['origin']))


if __name__ == '__main__':
    main()



The output looks like this.


Hello, world!

200

{'Connection': 'keep-alive', 'Server': 'gunicorn/19.8.1', 'Date': 'Mon, 04 Jun 2018 02:28:09 GMT', 'Content-Type': 'application/json', 'Content-Length': '26', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Credentials': 'true', 'Via': '1.1 vegur'}

Your IP is 1.1.1.1





Here is code that uses HTTPAdapter.


from requests import Session
from requests.adapters import HTTPAdapter


def main():
    print('Hello, world!')

    session = Session()
    # Mount on the "https://" prefix so the adapter actually applies to the
    # https URL requested below.
    session.mount("https://", HTTPAdapter(max_retries=3))
    # timeout=0 is effectively an immediate timeout; prefer a positive value
    # in practice (see the note at the end of this post).
    response = session.get('https://httpbin.org/ip', timeout=0)

    print(response.status_code)
    print(response.headers)
    print('Your IP is {0}'.format(response.json()['origin']))


if __name__ == '__main__':
    main()




The output is the same.




from requests import Session
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry


def main():
    print('Hello, world!')
    retries_number = 3
    backoff_factor = 0.3
    status_forcelist = (500, 400)

    retry = Retry(
        total=retries_number,
        read=retries_number,
        connect=retries_number,
        backoff_factor=backoff_factor,
        status_forcelist=status_forcelist,
    )
    session = Session()
    # As above, mount on "https://" so the retry-enabled adapter is used.
    session.mount("https://", HTTPAdapter(max_retries=retry))
    response = session.get('https://httpbin.org/ip', timeout=0)

    print(response.status_code)
    print(response.headers)
    print('Your IP is {0}'.format(response.json()['origin']))


if __name__ == '__main__':
    main()



Plugging into the formula below, the sleeps work out as follows; the total back-off time is 1.8 seconds.


{backoff factor} * (2 ^ ({number of total retries} - 1))



0.0 (no sleep before the second try, per the docs example below)

0.3 * ( 2 ^ ( 2 - 1)) = 0.6

0.3 * ( 2 ^ ( 3 - 1)) = 1.2



total: 0 + 0.6 + 1.2 = 1.8
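The same schedule can be checked with a few lines of Python (a sketch following the urllib3 docs formula, where the first retry sleeps 0 and later ones back off exponentially):

# Back-off sleeps for backoff_factor=0.3 and 3 retries.
backoff_factor = 0.3
sleeps = [0.0] + [backoff_factor * (2 ** (n - 1)) for n in (2, 3)]
print(sleeps)       # [0.0, 0.6, 1.2]
print(sum(sleeps))  # 1.8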






https://urllib3.readthedocs.io/en/latest/reference/urllib3.util.html#module-urllib3.util.retry


backoff_factor (float) –

A backoff factor to apply between attempts after the second try (most errors are resolved immediately by a second try without a delay). urllib3 will sleep for:

{backoff factor} * (2 ^ ({number of total retries} - 1))

seconds. If the backoff_factor is 0.1, then sleep() will sleep for [0.0s, 0.2s, 0.4s, …] between retries. It will never be longer than Retry.BACKOFF_MAX.

By default, backoff is disabled (set to 0).






If a timeout occurs, there is a pause between attempts. Using retries together with a timeout should work well.


response = session.get('https://httpbin.org/ip', timeout=5)
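Putting the two together, here is a minimal sketch (the parameter values are my assumption) of bounded retries with exponential backoff plus a 5-second per-request timeout:

from requests import Session
from requests.adapters import HTTPAdapter
from requests.packages.urllib3.util.retry import Retry

session = Session()
# Bounded retries with backoff, mounted on "https://" so the adapter
# applies to the https URL below.
retry = Retry(total=3, backoff_factor=0.3, status_forcelist=(500, 502, 503))
session.mount('https://', HTTPAdapter(max_retries=retry))
response = session.get('https://httpbin.org/ip', timeout=5)
print(response.status_code)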

Posted by '김용환'



https://www.differencebetween.com/difference-between-rollout-and-vs-deploy/



Roughly translated, a roll-out is the "launch" or "release" of a new product or policy.



Posted by '김용환'

A good resource on Consul.


https://blog.eleven-labs.com/en/consul-service-discovery-failure-detection/



Posted by '김용환'



Here is how to make an app run only on specific Mesos slaves.



$ sudo vi /etc/default/mesos-slave

export MESOS_ATTRIBUTES="ip:11.11.11.11;os:Ubuntu;os-version:16;server-type:user;service:google-api"



$ sudo service mesos-slave stop


Every time MESOS_ATTRIBUTES is modified, all the meta files under MESOS_WORK_DIR must be deleted:

$ sudo rm -rf $MESOS_WORK_DIR/*


$ sudo service mesos-slave start



How to verify:



$ curl 'http://<slave_ip>:5051/slave(1)/state' | jq


...


  "attributes": {

    "ip": "10.61.106.159",

    "os": "Ubuntu",

    "os-version": 16,

    "server-type": "user",

    "service": "google-api"

  },

  "master_hostname": "11.11.11.11",

  "log_dir": "/var/log/mesos",

  "flags": {

    "appc_simple_discovery_uri_prefix": "http://",

    "appc_store_dir": "/tmp/mesos/store/appc",

    "attributes": "ip:11.11.11.11;os:Ubuntu;os-version:16;server-type:user;service:google-api",


...





Add the following constraints to the Marathon app.



"constraints": [

    [

      "service",

      "CLUSTER",

      "google-api"

    ]

  ],






With Kubernetes, this kind of labeling can be handled easily through configuration (env); Mesos is a bit inconvenient in this respect.


Posted by '김용환'