1. Convert the following relational-database table and its data into a table suitable for HBase storage, and insert the data:
Student table (Student) (not including the last column)
| S_No | S_Name | S_Sex | S_Age | course |
| --- | --- | --- | --- | --- |
| 2015001 | Zhangsan | male | 23 | |
| 2015002 | Mary | female | 22 | |
| 2015003 | Lisi | male | 24 | Math: 85 |
create 'Student','S_No','S_Name','S_Sex','S_Age'
put 'Student','1','S_No','2015001'
put 'Student','1','S_Name','Zhangsan'
put 'Student','1','S_Sex','male'
put 'Student','1','S_Age','23'
put 'Student','2','S_No','2015002'
put 'Student','2','S_Name','Mary'
put 'Student','2','S_Sex','female'
put 'Student','2','S_Age','22'
put 'Student','3','S_No','2015003'
put 'Student','3','S_Name','Lisi'
put 'Student','3','S_Sex','male'
put 'Student','3','S_Age','24'
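As a side note on the design choice: HBase tables are usually keyed by a natural identifier rather than a synthetic row number. A more conventional layout (a sketch, not part of the original answer) would use S_No itself as the row key and group the attributes under a single column family; the family name info below is an assumption, shown for one row only:

create 'Student','info'
put 'Student','2015001','info:S_Name','Zhangsan'
put 'Student','2015001','info:S_Sex','male'
put 'Student','2015001','info:S_Age','23'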
2. Complete the same tasks with the HBase Shell commands provided with Hadoop:
- List information about all tables in HBase: list
- Print all records of the Student table to the terminal;
- Add a course column family to the Student table;
- Add a Math column to the course column family and record a score of 85;
- Delete the course column;
- Count the number of rows in the table: count 'Student'
- Clear all records from the specified table: truncate 'Student'
scan 'Student'
alter 'Student', NAME => 'course'
put 'Student','3','course:Math','85'
alter 'Student', 'delete' => 'course'
count 'Student'
truncate 'Student'
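The task "delete the course column" is ambiguous between removing the whole column family (the alter ... 'delete' form above) and removing a single cell. If only the Math cell on row '3' should be removed, a cell-level delete would do it (a sketch using the row key from the inserts above):

delete 'Student','3','course:Math'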
3. Write a WordCount program in Python
| Program | WordCount |
| --- | --- |
| Input | A text file containing a large number of words |
| Output | Each word in the file and its number of occurrences (frequency), sorted alphabetically by word; each word and its count occupy one line, with whitespace between them |
- Write the map function and the reduce function
# create the mapper.py file
cd /home/hadoop/wc
sudo gedit mapper.py

# map function (mapper.py)
#!/usr/bin/env python
import sys

# emit "word 1" for every word on every input line
for line in sys.stdin:
    line = line.strip()
    words = line.split()
    for word in words:
        print('%s %s' % (word, 1))

# create the reducer.py file
cd /home/hadoop/wc
sudo gedit reducer.py

# reduce function (reducer.py)
#!/usr/bin/env python
import sys

current_word = None
current_count = 0
word = None

# input arrives sorted by word, so identical words are adjacent
for line in sys.stdin:
    line = line.strip()
    word, count = line.split(' ', 1)
    try:
        count = int(count)
    except ValueError:
        continue
    if current_word == word:
        current_count += count
    else:
        if current_word:
            print('%s %s' % (current_word, current_count))
        current_count = count
        current_word = word

# flush the last word
if current_word == word:
    print('%s %s' % (current_word, current_count))
- Modify the file permissions accordingly
# grant execute permission
chmod a+x /home/hadoop/wc/mapper.py
chmod a+x /home/hadoop/wc/reducer.py
- Test the code locally (sort stands in for Hadoop's shuffle phase, which groups identical keys before the reducer runs)
echo "foo foo quux labs foo bar quux" | /home/hadoop/wc/mapper.py echo "foo foo quux labs foo bar quux" | /home/hadoop/wc/mapper.py | sort -k1,1 | /home/hadoop/wc/reducer.p
- Run it on HDFS
- Download test files and upload them to HDFS
# download the test files
cd /home/hadoop/wc
wget http://www.gutenberg.org/files/5000/5000-8.txt
wget http://www.gutenberg.org/cache/epub/20417/pg20417.txt
# upload the files to HDFS
hdfs dfs -put /home/hadoop/wc/*.txt /user/hadoop/input
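If the /user/hadoop/input directory does not exist yet, the put will fail, so it may need to be created first; the following sketch assumes the paths used above and verifies the upload:

hdfs dfs -mkdir -p /user/hadoop/input
hdfs dfs -ls /user/hadoop/input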
- Submit the job with the Hadoop Streaming command
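The original answer stops before giving the command, so the following is a minimal sketch. The location of the streaming jar varies by Hadoop version (here assumed under $HADOOP_HOME/share/hadoop/tools/lib), and the output directory /user/hadoop/output is an assumption; it must not exist before the job runs:

hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
    -file /home/hadoop/wc/mapper.py -mapper /home/hadoop/wc/mapper.py \
    -file /home/hadoop/wc/reducer.py -reducer /home/hadoop/wc/reducer.py \
    -input /user/hadoop/input -output /user/hadoop/output

# inspect the result
hdfs dfs -cat /user/hadoop/output/part-00000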