Deployment environment
VMware Workstation 7.x
Ubuntu Server 11.10
JDK 1.6.0_25
Hadoop 0.20.203.0
HBase 0.90.4
-----------------------------------------------------------------------------------------------
Preparation
Install Ubuntu Server and the JDK; see here.
Create the user and directories
# groupadd hadoop
# useradd -r -g hadoop -d /home/hadoop -m -s /bin/bash hadoop
# mkdir -p /u01/app
# chgrp -R hadoop /u01/app
# chown -R hadoop /u01/app
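Optionally, confirm that the account and the directory ownership came out as expected (these checks are not part of the original steps):
# id hadoop
# ls -ld /u01/app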
Environment variables
$ vi ~/.profile
export HADOOP_HOME=/u01/app/hadoop
export HBASE_HOME=/u01/app/hbase
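For the variables to take effect in the current session, source the profile and check them (a quick sanity check, not in the original steps):
$ . ~/.profile
$ echo $HADOOP_HOME $HBASE_HOME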
-----------------------------------------------------------------------------------------------
Installing Hadoop
$ tar zxf hadoop-0.20.203.0rc1.tar.gz
$ ln -s hadoop-0.20.203.0 hadoop
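The two commands above assume the tarball has already been copied into /u01/app and are run there as the hadoop user; a quick check that the symlink points at the extracted directory:
$ ls -ld /u01/app/hadoop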
Edit the configuration files
$ vi conf/hadoop-env.sh
# The java implementation to use. Required.
export JAVA_HOME=/usr/jdk1.6.0_25
$ vi conf/core-site.xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
$ vi conf/hdfs-site.xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
$ vi conf/mapred-site.xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
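One prerequisite worth noting before the next step: start-all.sh launches the daemons over ssh to localhost, so the hadoop user needs passwordless ssh. If it is not set up yet, a typical sketch (not covered in this walkthrough) is:
$ ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
$ ssh localhost exit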
Format, start, and stop Hadoop
$ bin/hadoop namenode -format
$ bin/start-all.sh
$ bin/stop-all.sh
You can check the NameNode at http://localhost:50070/ and the JobTracker at http://localhost:50030/ in a browser.
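Once the daemons are up, jps should list NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker, and a simple HDFS round trip confirms the filesystem is usable (a smoke test, not part of the original steps; /test is just an example path):
$ jps
$ bin/hadoop fs -mkdir /test
$ bin/hadoop fs -ls /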
-----------------------------------------------------------------------------------------------
Installing HBase
$ tar zxf hbase-0.90.4.tar.gz
$ ln -s hbase-0.90.4/ hbase
Edit the configuration files
$ vi conf/hbase-env.sh
# The java implementation to use. Java 1.6 required.
export JAVA_HOME=/usr/jdk1.6.0_25
# Extra Java CLASSPATH elements. Optional.
export HBASE_CLASSPATH=/u01/app/hadoop/conf
# Tell HBase whether it should manage its own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=true
$ vi conf/hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
</configuration>
Replace the jar: swap the hadoop-core jar under $HBASE_HOME/lib for the one shipped with $HADOOP_HOME, so the client jar HBase uses matches the Hadoop version that is actually running.
hadoop@ubuntu01:/u01/app/hbase/lib$ rm hadoop-core-0.20-append-r1056497.jar
hadoop@ubuntu01:/u01/app/hbase/lib$ cp /u01/app/hadoop/hadoop-core-0.20.203.0.jar .
hadoop@ubuntu01:/u01/app/hbase/lib$ chmod +x hadoop-core-0.20.203.0.jar
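A quick check (not in the original) that only the 0.20.203.0 jar remains:
hadoop@ubuntu01:/u01/app/hbase/lib$ ls hadoop-core-*.jar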
Start and stop HBase
$ bin/start-hbase.sh
$ bin/hbase shell
$ bin/stop-hbase.sh
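Inside the shell started above (before shutting down with stop-hbase.sh), a minimal smoke test looks like this; the table name 'test' and column family 'cf' are just examples, not from the original post:
hbase(main):001:0> status
hbase(main):002:0> create 'test', 'cf'
hbase(main):003:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):004:0> scan 'test'
hbase(main):005:0> disable 'test'
hbase(main):006:0> drop 'test'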
-----------------------------------------------------------------------------------------------
PS: I ran into a few minor issues during the installation; the fixes are here.