1. Convert the following relational-database table and data into a table suitable for HBase storage, and insert the data:
Student table (Student) — the last column, course, is not part of the initial table; it is added later:

S_No      S_Name     S_Sex    S_Age    course
2015001   Zhangsan   male     23
2015002   Mary       female   22
2015003   Lisi       male     24       Math: 85
# S_No serves as the row key, so it does not need its own column family
create 'student','S_name','S_sex','S_age'
put 'student','2015001','S_name','Zhangsan'
put 'student','2015001','S_sex','male'
put 'student','2015001','S_age','23'
put 'student','2015002','S_name','Mary'
put 'student','2015002','S_sex','female'
put 'student','2015002','S_age','22'
put 'student','2015003','S_name','Lisi'
put 'student','2015003','S_sex','male'
put 'student','2015003','S_age','24'
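The inserted data can be spot-checked by fetching a row by its key:

get 'student','2015001'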
2. Complete the following tasks with the HBase Shell:
List information about all tables in HBase:
hbase(main)> list
Print all records of the student table in the terminal:
hbase(main)> scan 'student'
Add a course column family to the student table:
hbase(main)> disable 'student'
hbase(main)> alter 'student', NAME => 'course', VERSIONS => 3
hbase(main)> enable 'student'
Add a Math column to the course column family and record a score of 85 (the score belongs to student 2015003, per the table above):
put 'student','2015003','course:Math','85'
Delete the course column:
alter 'student', {NAME => 'course', METHOD => 'delete'}
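To confirm the column family was removed, inspect the table schema:

describe 'student'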
Count the number of rows in the table:
count 'student'
Clear all record data from the table:
truncate 'student'
3. Write a WordCount program in Python

Program: WordCount
Input:  a text file containing a large number of words
Output: every word in the file together with its frequency, sorted in alphabetical order; each word and its count occupy one line, separated by whitespace
cd /home
touch wordcount.py
touch wordcount1.py
Write the map function and the reduce function. The mapper (wordcount.py) reads lines from standard input, splits them into words, and emits each word with a count of 1, separated by a tab (the default Hadoop Streaming key/value separator):
vim /home/wordcount.py

#!/usr/bin/env python3
import sys

# Mapper: emit "<word>\t1" for every word on standard input
for line in sys.stdin:
    words = line.strip().split()
    for word in words:
        print('%s\t%s' % (word, 1))
The reducer (wordcount1.py) reads the sorted mapper output and sums the counts per word. Because the input is sorted by word, all counts for a given word arrive consecutively, so a single pass suffices:
vim /home/wordcount1.py

#!/usr/bin/env python3
import sys

current_word = None
current_count = 0
word = None

for line in sys.stdin:
    line = line.strip()
    word, count = line.split('\t', 1)
    try:
        count = int(count)
    except ValueError:
        # Skip lines whose count is not a number
        continue
    if current_word == word:
        current_count += count
    else:
        if current_word:
            print('%s\t%s' % (current_word, current_count))
        current_count = count
        current_word = word

# Emit the final word
if current_word == word:
    print('%s\t%s' % (current_word, current_count))
Make both scripts executable:
chmod a+x /home/wordcount.py
chmod a+x /home/wordcount1.py
Test the code locally:
echo "foo foo quux labs foo bar quux" | /home/wordcount.py
echo "foo foo quux labs foo bar quux" | /home/wordcount.py | sort -k1,1 | /home/wordcount1.py
Run the job on HDFS:
cd /home
wget http://www.gutenberg.org/files/5000/5000-8.txt
wget http://www.gutenberg.org/cache/epub/20417/pg20417.txt
Upload the downloaded files to HDFS (here into an input directory named gutenberg):
hdfs dfs -mkdir -p gutenberg
hdfs dfs -put /home/5000-8.txt /home/pg20417.txt gutenberg
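The upload can be verified by listing the directory:

hdfs dfs -ls gutenberg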
Submit the job with the Hadoop Streaming command. First locate the streaming jar:
cd $HADOOP_HOME
find ./ -name "*streaming*.jar"
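A sketch of the submission command, assuming the jar sits under share/hadoop/tools/lib (substitute whatever path the find above prints) and using the gutenberg input directory created earlier; -file ships the two scripts to the worker nodes:

hadoop jar ./share/hadoop/tools/lib/hadoop-streaming-*.jar \
    -file /home/wordcount.py  -mapper wordcount.py \
    -file /home/wordcount1.py -reducer wordcount1.py \
    -input gutenberg -output gutenberg-output

# Once the job completes, inspect the result:
hdfs dfs -cat gutenberg-output/part-00000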