• Hadoop + HBase Pseudo-Distributed Installation


    Deployment Environment

    VMware WorkStation 7.x

    Ubuntu Server 11.10

    JDK 1.6.0_25

    Hadoop 0.20.203.0

    HBase 0.90.4

    -----------------------------------------------------------------------------------------------

    Preparation

    For installing Ubuntu Server and the JDK, see here.

    Create the user and directories

    # groupadd hadoop
    # useradd -r -g hadoop -d /home/hadoop -m -s /bin/bash hadoop

    # mkdir -p /u01/app
    # chgrp -R hadoop /u01/app
    # chown -R hadoop /u01/app
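
    A quick sanity check (my addition, not part of the original steps) to confirm the account and directory ownership:

    $ id hadoop          # should list group 'hadoop'
    $ ls -ld /u01/app    # owner and group should both be 'hadoop'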

    Environment variables

    $ vi ~/.profile

    export HADOOP_HOME=/u01/app/hadoop
    export HBASE_HOME=/u01/app/hbase
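
    Optionally (my own convenience, not in the original post), append the bin directories to PATH and reload the profile so the variables take effect in the current shell:

    export PATH=$PATH:$HADOOP_HOME/bin:$HBASE_HOME/bin

    $ . ~/.profile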

    -----------------------------------------------------------------------------------------------

    Install Hadoop

    $ tar zxf hadoop-0.20.203.0rc1.tar.gz
    $ ln -s hadoop-0.20.203.0 hadoop

    Edit the configuration files

    $ vi conf/hadoop-env.sh

    # The java implementation to use.  Required.
    export JAVA_HOME=/usr/jdk1.6.0_25

    $ vi conf/core-site.xml

    <!-- Put site-specific property overrides in this file. -->
    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
      </property>
    </configuration>

    $ vi conf/hdfs-site.xml

    <!-- Put site-specific property overrides in this file. -->
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>

    $ vi conf/mapred-site.xml

    <!-- Put site-specific property overrides in this file. -->
    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
      </property>
    </configuration>

    Format, start, and stop Hadoop

    $ bin/hadoop namenode -format

    $ bin/start-all.sh

    $ bin/stop-all.sh

    You can check the NameNode in a browser at http://localhost:50070/ and the JobTracker at http://localhost:50030/
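
    With the daemons running, a minimal smoke test (my addition; it assumes start-all.sh completed without errors) is to check the Java processes and write a file into HDFS:

    $ jps
    # expect NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker

    $ bin/hadoop fs -mkdir /tmp/smoketest
    $ bin/hadoop fs -put conf/core-site.xml /tmp/smoketest
    $ bin/hadoop fs -ls /tmp/smoketest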

    -----------------------------------------------------------------------------------------------

    Install HBase

    $ tar zxf hbase-0.90.4.tar.gz
    $ ln -s hbase-0.90.4/ hbase

    Edit the configuration files

    $ vi conf/hbase-env.sh

    # The java implementation to use.  Java 1.6 required.
    export JAVA_HOME=/usr/jdk1.6.0_25

    # Extra Java CLASSPATH elements.  Optional.
    export HBASE_CLASSPATH=/u01/app/hadoop/conf

    # Tell HBase whether it should manage its own instance of ZooKeeper or not.
    export HBASE_MANAGES_ZK=true

    $ vi conf/hbase-site.xml

    <configuration>
      <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000/hbase</value>
      </property>
      <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
      </property>
    </configuration>

    Replace the jar: swap this jar under $HBASE_HOME/lib for the hadoop-core jar shipped with $HADOOP_HOME, so that HBase and the running Hadoop use the same version.

    hadoop@ubuntu01:/u01/app/hbase/lib$ rm hadoop-core-0.20-append-r1056497.jar
    hadoop@ubuntu01:/u01/app/hbase/lib$ cp /u01/app/hadoop/hadoop-core-0.20.203.0.jar .
    hadoop@ubuntu01:/u01/app/hbase/lib$ chmod +x hadoop-core-0.20.203.0.jar
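
    To confirm the swap (my own check), the hadoop-core jar in $HBASE_HOME/lib should now carry the same version number as the one in $HADOOP_HOME:

    $ ls /u01/app/hbase/lib/hadoop-core-*.jar
    $ ls /u01/app/hadoop/hadoop-core-*.jar
    # both should show hadoop-core-0.20.203.0.jar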

    Start and stop HBase

    $ bin/start-hbase.sh
    $ bin/hbase shell
    $ bin/stop-hbase.sh
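
    Once the shell is open, a minimal sketch of a test session (the table name 'test' and column family 'cf' are just examples, not from the original post):

    hbase(main):001:0> create 'test', 'cf'
    hbase(main):002:0> put 'test', 'row1', 'cf:a', 'value1'
    hbase(main):003:0> scan 'test'
    hbase(main):004:0> disable 'test'
    hbase(main):005:0> drop 'test'
    hbase(main):006:0> exit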

    -----------------------------------------------------------------------------------------------

    PS: I ran into a few minor issues during the installation; the fixes are here.
