While using Debezium, the following error occurred.





kafka-connect_1    | [2019-03-20 09:23:35,591] INFO Step 7: rolling back transaction after abort (io.debezium.connector.mysql.SnapshotReader)
kafka-connect_1    | [2019-03-20 09:23:35,603] ERROR Execption while rollback is executed (io.debezium.connector.mysql.SnapshotReader)
kafka-connect_1    | java.sql.SQLNonTransientConnectionException: Can''t call rollback when autocommit=true
kafka-connect_1    | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:110)
kafka-connect_1    | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
kafka-connect_1    | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:89)
kafka-connect_1    | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:63)
kafka-connect_1    | at com.mysql.cj.jdbc.ConnectionImpl.rollback(ConnectionImpl.java:1851)
kafka-connect_1    | at io.debezium.connector.mysql.SnapshotReader.execute(SnapshotReader.java:672)
kafka-connect_1    | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
kafka-connect_1    | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
kafka-connect_1    | at java.lang.Thread.run(Thread.java:748)
kafka-connect_1    | [2019-03-20 09:23:35,606] INFO Cluster ID: akSFNcLmRsK91EzRzUhD-A (org.apache.kafka.clients.Metadata)
kafka-connect_1    | [2019-03-20 09:23:35,606] ERROR Failed due to error: Aborting snapshot due to error when last running 'UNLOCK TABLES': Can''t call rollback when autocommit=true (io.debezium.connector.mysql.SnapshotReader)
kafka-connect_1    | org.apache.kafka.connect.errors.ConnectException: Can''t call rollback when autocommit=true Error code: 0; SQLSTATE: 08003.
kafka-connect_1    | at io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:230)
kafka-connect_1    | at io.debezium.connector.mysql.AbstractReader.failed(AbstractReader.java:208)
kafka-connect_1    | at io.debezium.connector.mysql.SnapshotReader.execute(SnapshotReader.java:678)
kafka-connect_1    | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
kafka-connect_1    | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
kafka-connect_1    | at java.lang.Thread.run(Thread.java:748)
kafka-connect_1    | Caused by: java.sql.SQLNonTransientConnectionException: Can''t call rollback when autocommit=true
kafka-connect_1    | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:110)
kafka-connect_1    | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
kafka-connect_1    | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:89)
kafka-connect_1    | at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:63)
kafka-connect_1    | at com.mysql.cj.jdbc.ConnectionImpl.rollback(ConnectionImpl.java:1851)
kafka-connect_1    | at io.debezium.connector.mysql.SnapshotReader.execute(SnapshotReader.java:592)
kafka-connect_1    | ... 3 more






kafka-connect_1    | [2019-03-20 09:23:37,375] ERROR WorkerSourceTask{id=kc_debezium_connector_shopping_orders-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
kafka-connect_1    | org.apache.kafka.connect.errors.ConnectException: A slave with the same server_uuid/server_id as this slave has connected to the master; the first event '' at 4, the last event read from './mysql-bin.000003' at 194, the last byte read from './mysql-bin.000003' at 194. Error code: 1236; SQLSTATE: HY000.
kafka-connect_1    | at io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:230)
kafka-connect_1    | at io.debezium.connector.mysql.AbstractReader.failed(AbstractReader.java:197)
kafka-connect_1    | at io.debezium.connector.mysql.BinlogReader$ReaderThreadLifecycleListener.onCommunicationFailure(BinlogReader.java:984)
kafka-connect_1    | at com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:950)
kafka-connect_1    | at com.github.shyiko.mysql.binlog.BinaryLogClient.connect(BinaryLogClient.java:580)
kafka-connect_1    | at com.github.shyiko.mysql.binlog.BinaryLogClient$7.run(BinaryLogClient.java:825)
kafka-connect_1    | at java.lang.Thread.run(Thread.java:748)
kafka-connect_1    | Caused by: com.github.shyiko.mysql.binlog.network.ServerException: A slave with the same server_uuid/server_id as this slave has connected to the master; the first event '' at 4, the last event read from './mysql-bin.000003' at 194, the last byte read from './mysql-bin.000003' at 194.
kafka-connect_1    | at com.github.shyiko.mysql.binlog.BinaryLogClient.listenForEventPackets(BinaryLogClient.java:914)
kafka-connect_1    | ... 3 more
kafka-connect_1    | [2019-03-20 09:23:37,377] ERROR WorkerSourceTask{id=kc_debezium_connector_shopping_orders-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)





This error occurs when several io.debezium.connector.mysql.MySqlConnector instances are registered with the same database.server.id. Give each connector its own unique database.server.id to fix it.
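
For example, a minimal sketch of registering two connectors with distinct database.server.id values through the Kafka Connect REST API could look like the following. The connector names, hostnames, server names, and topic names here are placeholders, not the actual configuration from this post.

$ curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "kc_debezium_connector_shopping_orders",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql-orders",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184054",
    "database.server.name": "shopping_orders",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "dbhistory.shopping_orders"
  }
}'

# the second connector must be given a different, unused database.server.id
$ curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "kc_debezium_connector_shopping_users",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql-users",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "dbz",
    "database.server.id": "184055",
    "database.server.name": "shopping_users",
    "database.history.kafka.bootstrap.servers": "kafka:9092",
    "database.history.kafka.topic": "dbhistory.shopping_users"
  }
}'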





Posted by '김용환'
,


To get a bash shell inside a Kubernetes pod, you first need to know the pod name.



$ kubectl get pod

NAME                                READY   STATUS    RESTARTS   AGE
jenkins-8498fcb9b5-8k8b8            1/1     Running   0          40m



Similar to docker, you open bash using the pod name.


$ kubectl exec -it  jenkins-8498fcb9b5-8k8b8 -- /bin/bash
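
If you don't want to look up and copy the pod name by hand, a label selector can fetch it for you. This is only a sketch and assumes the pod is labeled app=jenkins; replace the selector with whatever labels your deployment actually uses.

# grab the first matching pod name, then exec into it
$ POD=$(kubectl get pod -l app=jenkins -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec -it "$POD" -- /bin/bash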




Posted by '김용환'
,

Restarting a Kubernetes pod is usually done through its deployment file,


but there is also a way to restart it without any configuration file.



First, get the docker image names.


The command below lists each pod's metadata name and the docker images of its containers.


$ kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' | sort


ingress-nginx-controller-szb9s: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0,
ingress-nginx-controller-ttq2h: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0,
jenkins-8498fcb9b5-6n2vm: jenkins/jenkins:lts,
kube-apiserver-dkosv3-jenkins-master-1: gcr.io/google-containers/hyperkube-amd64:v1.11.5,
kube-apiserver-dkosv3-jenkins-master-2: gcr.io/google-containers/hyperkube-amd64:v1.11.5,
kube-apiserver-dkosv3-jenkins-master-3: gcr.io/google-containers/hyperkube-amd64:v1.11.5,




What we actually want is the command right below, which returns each pod's metadata name and its container names.


$ kubectl get pods -o=custom-columns=NAME:.metadata.name,CONTAINERS:.spec.containers[*].name

NAME                                                           CONTAINERS
jenkins-job-8f24e681-5b83-4f87-b713-69c86deedb22-25gsh-vjh9r   jnlp
jenkins-job-914tx-fthwt                                        jnlp
jenkins-8498fcb9b5-6n2vm                                       jenkins
my-release-mysql-65d89bd9c4-txkvn                              my-release-mysql








Now restart the pod. If the reboot command exists inside the docker image, run it like this:


$ kubectl exec jenkins-8498fcb9b5-6n2vm -c jenkins -- reboot



If you get the following error instead, you have to use the kill command.


rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"reboot\": executable file not found in $PATH"




$ kubectl exec jenkins-8498fcb9b5-6n2vm -c jenkins -- /bin/sh -c "kill 1"
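
Killing PID 1 stops the container's main process, so the kubelet restarts the container according to the pod's restartPolicy (Always for a Deployment). You can confirm it worked by checking the pod afterwards:

# the RESTARTS count should have increased by one
$ kubectl get pod jenkins-8498fcb9b5-6n2vm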



Posted by '김용환'
,


When adopting CDC (change data capture) with Debezium, the following MySQL settings (for a test environment) are helpful for the GTID and binlog configuration.



server-id                 = 111
log_bin                   = mysql-bin
expire_logs_days          = 1
gtid-mode                 = ON
enforce-gtid-consistency  = ON
binlog_format             = row
#binlog_cache_size
#max_binlog_size
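
After restarting MySQL with this configuration, it is worth checking that the values actually took effect. A small sketch using the mysql client (adjust the account and password to your environment):

$ mysql -uroot -p -e "SHOW VARIABLES WHERE Variable_name IN ('server_id', 'log_bin', 'gtid_mode', 'enforce_gtid_consistency', 'binlog_format')"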





For snapshot mode the SELECT, RELOAD, and SHOW DATABASES privileges are required,

and for the connector's basic connection the REPLICATION SLAVE and REPLICATION CLIENT privileges are needed.



GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'debezium' IDENTIFIED BY 'dbz';
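
To double-check that the privileges were applied, you can list the account's grants (a sketch; if you created the user with an explicit host such as 'debezium'@'%', query that exact account):

$ mysql -uroot -p -e "SHOW GRANTS FOR 'debezium'"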




References


https://github.com/debezium/docker-images/blob/master/examples/mysql/0.8/mysql.cnf


https://dev.mysql.com/doc/refman/5.5/en/replication-options-binary-log.html


https://debezium.io/docs/connectors/mysql/#topic-names

Posted by '김용환'
,