When using ZooKeeper with Kafka, I ran with the default ZooKeeper settings until it ran out of memory and everything went haywire.


It is a good idea to configure the memory and JMX settings explicitly.




Add a conf/java.env file to configure the memory settings and generate GC log files. Treat the following as a rough baseline.


export JVMFLAGS="-Xmx3g -Xms3g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:CompileThreshold=200 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/lib/zookeeper/gc.log -XX:+UseGCLogFileRotation -XX:GCLogFileSize=10m -XX:NumberOfGCLogFiles=10"



Add the following to zkServer.sh to enable JMX monitoring.


-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=8989 -Djava.rmi.server.hostname=my.remoteconsole.org
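
To verify the JMX endpoint is reachable, here is a minimal sketch using the standard javax.management API, assuming the hostname and port (8989) configured above; the class name and the choice of MBean are illustrative only.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ZkJmxCheck {
    public static void main(String[] args) throws Exception {
        // Assumes the JMX hostname/port configured in zkServer.sh above.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://my.remoteconsole.org:8989/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Read the standard platform memory MBean as a smoke test.
            ObjectName memory = new ObjectName("java.lang:type=Memory");
            System.out.println(mbs.getAttribute(memory, "HeapMemoryUsage"));
        }
    }
}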





Posted by 김용환




These days, https://github.com/square/okhttp has been gaining recognition in the Android world and is increasingly being used on the server side as well.

OkHttp has connection pooling built in, so it is convenient to use.



When you create an OkHttpClient, you call OkHttpClient.Builder, which creates a ConnectionPool with its default constructor; that pool keeps up to 5 idle connections (with a 5-minute keep-alive).





https://github.com/square/okhttp/blob/master/okhttp/src/main/java/okhttp3/OkHttpClient.java



https://github.com/square/okhttp/blob/master/okhttp/src/main/java/okhttp3/OkHttpClient.java#L487



https://github.com/square/okhttp/blob/master/okhttp/src/main/java/okhttp3/ConnectionPool.java#L85


 

public ConnectionPool() {
  this(5, 5, TimeUnit.MINUTES);
}
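
If the defaults don't fit, the pool can be overridden through the builder. Below is a minimal sketch; the pool sizing values are arbitrary examples and the target URL is a placeholder.

import java.util.concurrent.TimeUnit;

import okhttp3.ConnectionPool;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class OkHttpPoolExample {
    public static void main(String[] args) throws Exception {
        // Replace the default pool (5 idle connections, 5-minute keep-alive)
        // with one that keeps up to 10 idle connections for 1 minute.
        ConnectionPool pool = new ConnectionPool(10, 1, TimeUnit.MINUTES);
        OkHttpClient client = new OkHttpClient.Builder()
                .connectionPool(pool)
                .build();

        Request request = new Request.Builder()
                .url("https://example.com")   // placeholder endpoint
                .build();
        try (Response response = client.newCall(request).execute()) {
            System.out.println(response.code());
        }
    }
}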



Posted by 김용환


When developing in Java, JSON parsing is practically hell..


JsonPath (https://github.com/json-path/JsonPath), which Apache NiFi uses, has decent performance, a caching feature so you don't have to parse the same document repeatedly... it supports conditional and regex queries too, and is very usable.



Examples are on the project homepage. (It feels much better than Scala's json4s..)

Path Examples

Given the json

{
    "store": {
        "book": [
            {
                "category": "reference",
                "author": "Nigel Rees",
                "title": "Sayings of the Century",
                "price": 8.95
            },
            {
                "category": "fiction",
                "author": "Evelyn Waugh",
                "title": "Sword of Honour",
                "price": 12.99
            },
            {
                "category": "fiction",
                "author": "Herman Melville",
                "title": "Moby Dick",
                "isbn": "0-553-21311-3",
                "price": 8.99
            },
            {
                "category": "fiction",
                "author": "J. R. R. Tolkien",
                "title": "The Lord of the Rings",
                "isbn": "0-395-19395-8",
                "price": 22.99
            }
        ],
        "bicycle": {
            "color": "red",
            "price": 19.95
        }
    },
    "expensive": 10
}
JsonPath                                   Result
$.store.book[*].author                     The authors of all books
$..author                                  All authors
$.store.*                                  All things, both books and bicycles
$.store..price                             The price of everything
$..book[2]                                 The third book
$..book[-2]                                The second to last book
$..book[0,1]                               The first two books
$..book[:2]                                All books from index 0 (inclusive) until index 2 (exclusive)
$..book[1:2]                               All books from index 1 (inclusive) until index 2 (exclusive)
$..book[-2:]                               Last two books
$..book[2:]                                Book number two from tail
$..book[?(@.isbn)]                         All books with an ISBN number
$.store.book[?(@.price < 10)]              All books in store cheaper than 10
$..book[?(@.price <= $['expensive'])]      All books in store that are not "expensive"
$..book[?(@.author =~ /.*REES/i)]          All books matching regex (ignore case)
$..*                                       Give me every thing
$..book.length()                           The number of books
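
As a quick sketch of the API (the class name is mine, and the JSON is abbreviated from the document above), parsing once and reusing the DocumentContext avoids re-parsing for every query:

import java.util.List;

import com.jayway.jsonpath.DocumentContext;
import com.jayway.jsonpath.JsonPath;

public class JsonPathExample {
    public static void main(String[] args) {
        // Abbreviated version of the store JSON above.
        String json = "{ \"store\": { \"book\": ["
                + "{ \"author\": \"Nigel Rees\", \"title\": \"Sayings of the Century\", \"price\": 8.95 },"
                + "{ \"author\": \"Evelyn Waugh\", \"title\": \"Sword of Honour\", \"price\": 12.99 }"
                + "] } }";

        // Parse once; reuse the parsed document for multiple queries.
        DocumentContext ctx = JsonPath.parse(json);
        List<String> authors = ctx.read("$.store.book[*].author");
        List<String> cheapTitles = ctx.read("$.store.book[?(@.price < 10)].title");

        System.out.println(authors);      // [Nigel Rees, Evelyn Waugh]
        System.out.println(cheapTitles);  // [Sayings of the Century]
    }
}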


Posted by 김용환



Apache Commons (commons-collections) has a CircularFifoBuffer class.



Here is the first example.



object Test extends App {
  import org.apache.commons.collections.buffer.CircularFifoBuffer

  // A FIFO buffer with a fixed maximum size of 10.
  val tasks = new CircularFifoBuffer(10)
  tasks.add(1)
  tasks.add(2)
  tasks.add(5)
  tasks.add(6)
  println("max size : " + tasks.maxSize)
  println("size: " + tasks.size)

  // get() peeks at the oldest element without removing it.
  println("--get")
  println(tasks.get())
  println(tasks.get())
  println(tasks.get())
  println(tasks.get())

  // remove() takes elements out in insertion order.
  println("--remove")
  println(tasks.remove)
  println(tasks.remove)
  println(tasks.remove)
  println(tasks.remove)

  // Removing from an empty buffer throws an exception.
  println("-- error")
  println(tasks.remove)
}


If you remove more elements than the buffer holds, an exception is thrown.

Calling get() keeps returning the first (oldest) element, which seems handy for checking the current value.


max size : 10

size: 4

--get

1

1

1

1

--remove

1

2

5

6


at org.apache.commons.collections.buffer.BoundedFifoBuffer.remove(BoundedFifoBuffer.java:275)





Here is the second example.


object Test extends App {
  import org.apache.commons.collections.buffer.CircularFifoBuffer

  // Adding more than 10 elements evicts the oldest ones (1 and 2).
  val tasks = new CircularFifoBuffer(10)
  tasks.add(1)
  tasks.add(2)
  tasks.add(3)
  tasks.add(4)
  tasks.add(5)
  tasks.add(6)
  tasks.add(7)
  tasks.add(8)
  tasks.add(9)
  tasks.add(10)
  tasks.add(11)
  tasks.add(12)
  println("max size : " + tasks.maxSize)
  println("size: " + tasks.size)

  for (i <- tasks.toArray) {
    println(tasks.remove)
  }
}


If you keep adding data, the earliest elements are evicted. No overflow occurs.



max size : 10

size: 10

3

4

5

6

7

8

9

10

11

12


Posted by 김용환


For repetitive jobs in Jenkins, the Job DSL plugin is handy~

It could also serve as a replacement for multi-job.


https://wiki.jenkins.io/display/JENKINS/Job+DSL+Plugin



Slides: Configuration As Code: The Job DSL Plugin by Daniel Spilker




Posted by 김용환




With the Jenkins Pipeline plugin, you can see detailed logs for each stage!!!





Seeing exactly which stage a problem occurs in is really great!!



A Korean blogger's installation write-up:

https://shortstories.gitbooks.io/studybook/content/jenkins_pipeline_c0bd_c9c8_ae30.html



The result: (stage view screenshot omitted)


Reference: https://www.cloudbees.com/blog/top-10-best-practices-jenkins-pipeline-plugin


 



Posted by 김용환

There is a good blog post about pitfalls to watch for when using Java 8 streams, so here is the link.



https://blog.jooq.org/2014/06/13/java-8-friday-10-subtle-mistakes-when-using-the-streams-api/



And here is the blog where blogger 이용진 translated that post into Korean.


http://leeyongjin.tistory.com/entry/Java8-%EC%9E%90%EB%B0%948-Stream-API-%EC%A3%BC%EC%9D%98%EC%82%AC%ED%95%AD


Posted by 김용환




After installing Java 9, Spring Tool Suite no longer starts.


!ENTRY org.eclipse.e4.ui.workbench 4 0 2017-10-20 19:29:56.365

!MESSAGE FrameworkEvent ERROR

!STACK 0

java.lang.NoClassDefFoundError: javax/annotation/PreDestroy

        at org.eclipse.e4.core.internal.di.InjectorImpl.disposed(InjectorImpl.java:426)

        at org.eclipse.e4.core.internal.di.Requestor.disposed(Requestor.java:154)

        at org.eclipse.e4.core.internal.contexts.ContextObjectSupplier$ContextInjectionListener.update(ContextObjectSupplier.java:78)

        at org.eclipse.e4.core.internal.contexts.TrackableComputationExt.update(TrackableComputationExt.java:111)

        at org.eclipse.e4.core.internal.contexts.TrackableComputationExt.handleInvalid(TrackableComputationExt.java:74)

        at org.eclipse.e4.core.internal.contexts.EclipseContext.dispose(EclipseContext.java:176)

        at org.eclipse.e4.core.internal.contexts.osgi.EclipseContextOSGi.dispose(EclipseContextOSGi.java:106)

        at org.eclipse.e4.core.internal.contexts.osgi.EclipseContextOSGi.bundleChanged(EclipseContextOSGi.java:139)

        at org.eclipse.osgi.internal.framework.BundleContextImpl.dispatchEvent(BundleContextImpl.java:903)

        at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)

        at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)

        at org.eclipse.osgi.internal.framework.EquinoxEventPublisher.publishBundleEventPrivileged(EquinoxEventPublisher.java:213)

        at org.eclipse.osgi.internal.framework.EquinoxEventPublisher.publishBundleEvent(EquinoxEventPublisher.java:120)

        at org.eclipse.osgi.internal.framework.EquinoxEventPublisher.publishBundleEvent(EquinoxEventPublisher.java:112)

        at org.eclipse.osgi.internal.framework.EquinoxContainerAdaptor.publishModuleEvent(EquinoxContainerAdaptor.java:156)

        at org.eclipse.osgi.container.Module.publishEvent(Module.java:476)

        at org.eclipse.osgi.container.Module.doStop(Module.java:634)

        at org.eclipse.osgi.container.Module.stop(Module.java:498)

        at org.eclipse.osgi.container.SystemModule.stop(SystemModule.java:191)

        at org.eclipse.osgi.internal.framework.EquinoxBundle$SystemBundle$EquinoxSystemModule$1.run(EquinoxBundle.java:165)

        at java.base/java.lang.Thread.run(Thread.java:844)

Caused by: java.lang.ClassNotFoundException: javax.annotation.PreDestroy cannot be found by org.eclipse.e4.core.di_1.6.0.v20160319-0612

        at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:398)

        at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:361)

        at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:353)

        at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:161)

        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:496)

        ... 21 more





Find the -vmargs line in STS.ini and add the following. (Java 9 no longer resolves the java.se.ee modules, which contain javax.annotation, by default, so they have to be added back explicitly.)


-vmargs

--add-modules=java.se.ee



For reference, the STS.ini file is located in STS.app/Contents/Eclipse (on macOS).



Posted by 김용환




To pass arguments in Maven, use exec.args.


In particular, to pass a string containing spaces as a single token, use single quotes.



Example


$ mvn exec:java -Dexec.mainClass=com.google.photo.Main  -Dexec.args="local 'a   a' "




If you want to add a classpath or JVM options, use the following form.



Example



$ mvn exec:exec -Dmaven.run.skip=true -Dexec.executable="java"  -Dexec.args="-classpath /usr/local/apache-storm-1.0.1/lib/*:/home/google/lib/photo.jar com.google.photo.Main local"



Posted by 김용환


Apache Storm requires Apache ZooKeeper.




* Installing and running Apache ZooKeeper

http://www.apache.org/dyn/closer.cgi/zookeeper/



$ ./zkServer.sh start

ZooKeeper JMX enabled by default

Using config: /usr/local/zookeeper-3.4.10/bin/../conf/zoo.cfg

Starting zookeeper ... STARTED






* Installing Apache Storm


Download it from http://storm.apache.org/downloads.html and move it to /usr/local.


(Archives are at http://archive.apache.org/dist/storm/. This walkthrough uses 1.0.1.)



Change the configuration so it points at the local ZooKeeper.


$ vi conf/storm.yaml


storm.zookeeper.servers:

      - localhost

nimbus.seeds: ["localhost"]





Start Nimbus, the master node.



$ ./bin/storm nimbus

Running: /Library/Java/JavaVirtualMachines/jdk1.8.0_101.jdk/Contents/Home/bin/java -server -Ddaemon.name=nimbus .... org.apache.storm.daemon.nimbus




Start a supervisor node.


$ ./bin/storm supervisor

Running: /Library/Java/JavaVirtualMachines/jdk1.8.0_101.jdk/Contents/Home/bin/java -server -Ddaemon.name=supervisor... org.apache.storm.daemon.supervisor




Also start the UI daemon for the admin UI.


$ ./bin/storm ui

Running: /Library/Java/JavaVirtualMachines/jdk1.8.0_101.jdk/Contents/Home/bin/java -server -Ddaemon.name=ui ... org.apache.storm.ui.core



Open http://localhost:8080 in a web browser and confirm it is reachable.




Check the daemons.

$ ps -ef | grep apache-storm


You can confirm that the three daemons, nimbus, ui, and supervisor, are up and running.


  


Submit the wordcount topology as a test.


$ ./bin/storm jar ./examples/storm-starter/storm-starter-topologies-1.0.1.jar org.apache.storm.starter.WordCountTopology wordcount

Running: /Library/Java/JavaVirtualMachines/jdk1.8.0_101.jdk/Contents/Home/bin/java -client -Ddaemon.name= -Dstorm.options= -Dstorm.home=/usr/local/apache-storm-1.0.1 -Dstorm.log.dir=/usr/local/apache-storm-1.0.1/logs -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -Dstorm.conf.file= -cp /usr/local/apache-storm-1.0.1/lib/asm-5.0.3.jar:/usr/local/apache-storm-1.0.1/lib/clojure-1.7.0.jar:/usr/local/apache-storm-1.0.1/lib/disruptor-3.3.2.jar:/usr/local/apache-storm-1.0.1/lib/kryo-3.0.3.jar:/usr/local/apache-storm-1.0.1/lib/log4j-api-2.1.jar:/usr/local/apache-storm-1.0.1/lib/log4j-core-2.1.jar:/usr/local/apache-storm-1.0.1/lib/log4j-over-slf4j-1.6.6.jar:/usr/local/apache-storm-1.0.1/lib/log4j-slf4j-impl-2.1.jar:/usr/local/apache-storm-1.0.1/lib/minlog-1.3.0.jar:/usr/local/apache-storm-1.0.1/lib/objenesis-2.1.jar:/usr/local/apache-storm-1.0.1/lib/reflectasm-1.10.1.jar:/usr/local/apache-storm-1.0.1/lib/servlet-api-2.5.jar:/usr/local/apache-storm-1.0.1/lib/slf4j-api-1.7.7.jar:/usr/local/apache-storm-1.0.1/lib/storm-core-1.0.1.jar:/usr/local/apache-storm-1.0.1/lib/storm-rename-hack-1.0.1.jar:./examples/storm-starter/storm-starter-topologies-1.0.1.jar:/usr/local/apache-storm-1.0.1/conf:/usr/local/apache-storm-1.0.1/bin -Dstorm.jar=./examples/storm-starter/storm-starter-topologies-1.0.1.jar org.apache.storm.starter.WordCountTopology wordcount

534  [main] INFO  o.a.s.StormSubmitter - Generated ZooKeeper secret payload for MD5-digest: -8806961472207405752:-5043320726578157353

585  [main] INFO  o.a.s.s.a.AuthUtils - Got AutoCreds []

640  [main] INFO  o.a.s.StormSubmitter - Uploading topology jar ./examples/storm-starter/storm-starter-topologies-1.0.1.jar to assigned location: /usr/local/apache-storm-1.0.1/storm-local/nimbus/inbox/stormjar-609f1ee2-82e5-4111-bba2-c36b830e0b15.jar

Start uploading file './examples/storm-starter/storm-starter-topologies-1.0.1.jar' to '/usr/local/apache-storm-1.0.1/storm-local/nimbus/inbox/stormjar-609f1ee2-82e5-4111-bba2-c36b830e0b15.jar' (62432746 bytes)

[==================================================] 62432746 / 62432746

File './examples/storm-starter/storm-starter-topologies-1.0.1.jar' uploaded to '/usr/local/apache-storm-1.0.1/storm-local/nimbus/inbox/stormjar-609f1ee2-82e5-4111-bba2-c36b830e0b15.jar' (62432746 bytes)

935  [main] INFO  o.a.s.StormSubmitter - Successfully uploaded topology jar to assigned location: /usr/local/apache-storm-1.0.1/storm-local/nimbus/inbox/stormjar-609f1ee2-82e5-4111-bba2-c36b830e0b15.jar

936  [main] INFO  o.a.s.StormSubmitter - Submitting topology wordcount in distributed mode with conf {"storm.zookeeper.topology.auth.scheme":"digest","storm.zookeeper.topology.auth.payload":"-8806961472207405752:-5043320726578157353","topology.workers":3,"topology.debug":true}

1196 [main] INFO  o.a.s.StormSubmitter - Finished submitting topology: wordcount


You can see the job being uploaded and confirm that it runs.




The actual source directory is https://github.com/apache/storm/tree/1.0.x-branch/examples/storm-starter.




The main method is at the link below.


https://github.com/apache/storm/blob/1.0.x-branch/examples/storm-starter/src/jvm/org/apache/storm/starter/WordCountTopology.java


For local testing it uses LocalCluster, and it composes a topology of Spout -> Bolt("split") -> Bolt("count").




  public static void main(String[] args) throws Exception {
    TopologyBuilder builder = new TopologyBuilder();

    builder.setSpout("spout", new RandomSentenceSpout(), 5);

    builder.setBolt("split", new SplitSentence(), 8).shuffleGrouping("spout");
    builder.setBolt("count", new WordCount(), 12).fieldsGrouping("split", new Fields("word"));

    Config conf = new Config();
    conf.setDebug(true);

    if (args != null && args.length > 0) {
      conf.setNumWorkers(3);

      StormSubmitter.submitTopologyWithProgressBar(args[0], conf, builder.createTopology());
    }
    else {
      conf.setMaxTaskParallelism(3);

      LocalCluster cluster = new LocalCluster();
      cluster.submitTopology("word-count", conf, builder.createTopology());

      Thread.sleep(10000);

      cluster.shutdown();
    }
  }




Checking the daemons again, besides the three main ones, six more are running. (Note ports 6700~6702!!)


$ ps -ef | grep apache-storm


  ...  org.apache.storm.daemon.worker wordcount-1-1503398931 6dd4cbd3-1f43-4f17-a90f-b470fddbccbc 6700 28fae52d-9d91-4de8-9130-fe1dff2f2487

... org.apache.storm.daemon.worker wordcount-1-1503398931 6dd4cbd3-1f43-4f17-a90f-b470fddbccbc 6701 8b53b715-0184-4639-b33f-68208cc7d2e3

...  org.apache.storm.daemon.worker wordcount-1-1503398931 6dd4cbd3-1f43-4f17-a90f-b470fddbccbc 6702 e8357b79-dd08-4d70-b4de-2bf69598eadf

There are three more similar entries.

 


 


You can check in a web browser whether it ran properly.


http://localhost:8080/topology.html?id=wordcount-1-1503398931

  



You can also check the API summary by calling http://localhost:8080/api/v1/topology/summary.


$ curl http://localhost:8080/api/v1/topology/summary

{"topologies":[{"assignedTotalMem":2496.0,"owner":"samuel.kim","requestedMemOnHeap":0.0,"encodedId":"wordcount-1-1503398931","assignedMemOnHeap":2496.0,"id":"wordcount-1-1503398931","uptime":"11m 28s","schedulerInfo":null,"name":"wordcount","workersTotal":3,"status":"INACTIVE","requestedMemOffHeap":0.0,"tasksTotal":28,"requestedCpu":0.0,"replicationCount":1,"executorsTotal":28,"uptimeSeconds":688,"assignedCpu":0.0,"assignedMemOffHeap":0.0,"requestedTotalMem":0.0}],"schedulerDisplayResource":false}




You can list the topologies.


$ ./bin/storm list

Topology_name        Status     Num_tasks  Num_workers  Uptime_secs

-------------------------------------------------------------------

wordcount            ACTIVE     28         3            370




To check via the logs, there are directories under logs; you can see the same 6700~6702 from the earlier ps -ef output.


$ ls -al logs/workers-artifacts/wordcount-1-1503398931/670

6700/ 6701/ 6702/




$ ls -al logs/workers-artifacts/wordcount-1-1503398931/6700/

gc.log.0.current    worker.log          worker.log.err      worker.log.metrics  worker.log.out      worker.pid          worker.yaml




Take a look at one of them to verify it worked properly.


$ ls -al logs/workers-artifacts/wordcount-1-1503398931/6700/worker.log

-rw-r--r--  1 samuel.kim  staff  48832569  8 22 19:57 logs/workers-artifacts/wordcount-1-1503398931/6700/worker.log

[/usr/local/apache-storm-1.0.1] tail -n 10 logs/workers-artifacts/wordcount-1-1503398931/6700/worker.log

2017-08-22 19:57:46.245 o.a.s.d.executor [INFO] Processing received message FOR 10 TUPLE: source: split:21, stream: default, id: {}, [cow]

2017-08-22 19:57:46.245 o.a.s.d.executor [INFO] BOLT ack TASK: 13 TIME:  TUPLE: source: split:22, stream: default, id: {}, ["away"]

2017-08-22 19:57:46.245 o.a.s.d.task [INFO] Emitting: count default [cow, 4976]

2017-08-22 19:57:46.245 o.a.s.d.executor [INFO] Execute done TUPLE source: split:22, stream: default, id: {}, ["away"] TASK: 13 DELTA:

2017-08-22 19:57:46.245 o.a.s.d.executor [INFO] BOLT ack TASK: 10 TIME:  TUPLE: source: split:21, stream: default, id: {}, [cow]

2017-08-22 19:57:46.245 o.a.s.d.executor [INFO] Execute done TUPLE source: split:21, stream: default, id: {}, [cow] TASK: 10 DELTA:

2017-08-22 19:57:46.246 o.a.s.d.executor [INFO] Processing received message FOR 7 TUPLE: source: split:19, stream: default, id: {}, ["am"]

2017-08-22 19:57:46.246 o.a.s.d.task [INFO] Emitting: count default [am, 5050]

2017-08-22 19:57:46.246 o.a.s.d.executor [INFO] BOLT ack TASK: 7 TIME:  TUPLE: source: split:19, stream: default, id: {}, ["am"]

2017-08-22 19:57:46.246 o.a.s.d.executor [INFO] Execute done TUPLE source: split:19, stream: default, id: {}, ["am"] TASK: 7 DELTA:






You can also deactivate it.


$ ./bin/storm deactivate wordcount

1650 [main] INFO  o.a.s.c.deactivate - Deactivated topology: wordcount




Check the status with list.


$ ./bin/storm list

Topology_name        Status     Num_tasks  Num_workers  Uptime_secs

-------------------------------------------------------------------

wordcount            INACTIVE   28         3            617








Try killing it.

$ ./bin/storm kill wordcount

1696 [main] INFO  o.a.s.c.kill-topology - Killed topology: wordcount


The topology disappears from the list in the UI.


$ curl http://localhost:8080/api/v1/topology/summary

{"topologies":[],"schedulerDisplayResource":false}



If you open http://localhost:8080/index.html in a web browser, the wordcount topology either no longer appears, or it appears in the KILLED state.

Even if the wordcount topology shows up briefly, it eventually disappears..





The worker daemons all terminate only when Nimbus is stopped. The SPOF risk is certainly high..


Posted by 김용환