An article I recommend to anyone trying Hadoop streaming for the first time:



http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/




mapper.py

#!/usr/bin/env python
import sys

# input comes from STDIN (standard input)
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    # split the line into words
    words = line.split()
    # increase counters
    for word in words:
        # write the results to STDOUT (standard output);
        # what we output here will be the input for the
        # Reduce step, i.e. the input for reducer.py
        #
        # tab-delimited; the trivial word count is 1
        print '%s\t%s' % (word, 1)
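The mapper above is Python 2. As a minimal sketch, the same tokenize-and-emit logic in Python 3 looks like this (the helper name `map_line` is mine, not from the tutorial):

```python
def map_line(line):
    """Emit tab-delimited (word, 1) records for one input line,
    mirroring the mapper above (helper name is hypothetical)."""
    return ["%s\t%s" % (word, 1) for word in line.strip().split()]

# each emitted record becomes one key/value pair for the shuffle phase
for record in map_line("foo foo bar"):
    print(record)
```
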




reducer.py


#!/usr/bin/env python
from operator import itemgetter
import sys

current_word = None
current_count = 0
word = None

# input comes from STDIN
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    # parse the input we got from mapper.py
    word, count = line.split('\t', 1)
    # convert count (currently a string) to int
    try:
        count = int(count)
    except ValueError:
        # count was not a number, so silently
        # ignore/discard this line
        continue
    # this IF-switch only works because Hadoop sorts map output
    # by key (here: word) before it is passed to the reducer
    if current_word == word:
        current_count += count
    else:
        if current_word:
            # write result to STDOUT
            print '%s\t%s' % (current_word, current_count)
        current_count = count
        current_word = word

# do not forget to output the last word if needed!
if current_word == word:
    print '%s\t%s' % (current_word, current_count)
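The reducer only works because Hadoop sorts map output by key before the reduce step, so the whole job can be simulated locally to check the logic. A Python 3 sketch (the function name `simulate_wordcount` is mine, not part of the tutorial):

```python
from itertools import groupby
from operator import itemgetter

def simulate_wordcount(lines):
    """Simulate map -> shuffle/sort -> reduce in-process
    (hypothetical helper, not from the tutorial)."""
    # map: emit (word, 1) pairs
    pairs = [(word, 1) for line in lines for word in line.strip().split()]
    # shuffle/sort: Hadoop sorts map output by key before the reducer sees it
    pairs.sort(key=itemgetter(0))
    # reduce: sum counts over each consecutive run of the same key,
    # which is exactly what the IF-switch in reducer.py does
    return [(word, sum(count for _, count in group))
            for word, group in groupby(pairs, key=itemgetter(0))]

print(simulate_wordcount(["foo foo quux", "labs foo bar quux"]))
# [('bar', 1), ('foo', 3), ('labs', 1), ('quux', 2)]
```
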



Run


hadoop jar contrib/streaming/hadoop-*streaming*.jar \
    -mapper ./mapper.py \
    -reducer ./reducer.py \
    -file ./mapper.py \
    -file ./reducer.py \
    -input /user/hduser/gutenberg/* \
    -output /user/hduser/gutenberg-output
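Before submitting the job, the pipeline can be smoke-tested without a cluster: a shell pipe `mapper | sort | reducer` mimics Hadoop's map -> shuffle/sort -> reduce stages. A sketch with the mapper and reducer logic inlined so it runs standalone (the sample input is mine; normally you would pipe through ./mapper.py and ./reducer.py themselves):

```shell
printf 'foo foo quux labs foo bar quux\n' \
  | python3 -c 'import sys
# inlined stand-in for mapper.py: emit "word<TAB>1" per word
for line in sys.stdin:
    for word in line.split():
        print("%s\t1" % word)' \
  | sort -k1,1 \
  | python3 -c 'import sys
# inlined stand-in for reducer.py: sum counts per sorted key
current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.strip().split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print("%s\t%s" % (current_word, current_count))
        current_word, current_count = word, int(count)
if current_word is not None:
    print("%s\t%s" % (current_word, current_count))'
```

The `sort -k1,1` step plays the role of Hadoop's shuffle/sort; without it the reducer's run-length logic would produce wrong counts.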

Posted by '김용환'