Configuring a Hadoop Environment on Ubuntu Kylin

    JDK directory

    cd /usr/lib/jvm/java-8-openjdk-amd64

    Hadoop directory

    cd /usr/local/hadoop

    IP address

    ifconfig
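
    On systems where ifconfig is missing (it belongs to the deprecated net-tools package), ip addr shows the same information:

    ip addr show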

    Enable the SSH service

    SSH must be enabled on the Linux system; otherwise the connection to HDFS will fail later on.
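
    If the SSH server is not installed yet, a minimal sketch for Ubuntu (assuming the openssh-server package):

    sudo apt-get install openssh-server
    sudo service ssh start
    sudo service ssh status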

    1. Problem:

    SSH reports the following error when connecting:

    $ ssh root@192.168.199.22
    root@192.168.199.22's password: 
    Permission denied, please try again.

    2. Cause:

    By default, the system forbids the root user from logging in over SSH.

    3. Solution:

    (1) Edit the /etc/ssh/sshd_config file
    vim /etc/ssh/sshd_config

    PermitRootLogin without-password

    change it to

    PermitRootLogin yes
    (2) Restart SSH
    sudo service ssh restart
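
    To verify the change before reconnecting, check the effective setting and let sshd validate the file (sshd -t prints nothing when the syntax is fine):

    sudo grep PermitRootLogin /etc/ssh/sshd_config
    sudo sshd -t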

    Hadoop configuration

    Go to Hadoop's etc directory

    cd /usr/local/hadoop/etc/hadoop/

    Configure the hadoop-env.sh file under $HADOOP_HOME/etc/hadoop

    vim hadoop-env.sh 

    # The java implementation to use.
    #export JAVA_HOME=${JAVA_HOME}
    export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
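
    If you are unsure of the JDK path, one way to locate it (assuming OpenJDK was installed through apt):

    readlink -f "$(which java)"
    # typical output: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
    # JAVA_HOME is the directory above jre/bin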

    Configure the core-site.xml file under $HADOOP_HOME/etc/hadoop. Note that fs.default.name is the deprecated predecessor of fs.defaultFS; the file below sets both, which is redundant but harmless on Hadoop 2.x.

    vim core-site.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at
    
        http://www.apache.org/licenses/LICENSE-2.0
    
      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     See the License for the specific language governing permissions and
      limitations under the License. See accompanying LICENSE file.
    -->
    
    <!-- Put site-specific property overrides in this file. -->
    
    <configuration>
    <!-- HDFS file path -->
    <property>
      <name>fs.default.name</name>
      <value>hdfs://172.16.12.37:9000</value>
     </property>
    
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://172.16.12.37:9000</value>
     </property>
    
     <property>
      <name>io.file.buffer.size</name>
      <value>131072</value>
     </property>
    
     <property>
      <name>hadoop.tmp.dir</name>
      <value>/home/cai/simple/soft/hadoop-2.7.1/tmp</value>
      <description>A base for other temporary directories.</description>
     </property>
    
    
    </configuration>
    
                 
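    The directory given in hadoop.tmp.dir is worth creating up front so that permission problems surface early (Hadoop can usually create it itself). Using the path from the file above:

    mkdir -p /home/cai/simple/soft/hadoop-2.7.1/tmp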

    Configure the hdfs-site.xml file under $HADOOP_HOME/etc/hadoop

    vim hdfs-site.xml
    
    Note: if the hdfs directory cannot be found, change path values like the one below so that the name and data directories are looked up under tmp/dfs instead:
    <value>/home/cai/simple/soft/hadoop-2.7.1/hdfs/name</value>
    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at
    
        http://www.apache.org/licenses/LICENSE-2.0
    
      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License. See accompanying LICENSE file.
    -->
    
    <!-- Put site-specific property overrides in this file. -->
    <configuration>
      <property>
       <name>dfs.namenode.name.dir</name>
       <value>/home/cai/simple/soft/hadoop-2.7.1/hdfs/name</value>
     </property>
    
     <property>
      <name>dfs.datanode.data.dir</name>
        <value>/home/cai/simple/soft/hadoop-2.7.1/hdfs/data</value>
      </property>
    
    <!--
      <property>
       <name>dfs.namenode.name.dir</name>
       <value>/home/cai/simple/soft/hadoop-2.7.1/etc/hadoop
    /hdfs/name</value>
     </property>
    
     <property>
      <name>dfs.datanode.data.dir</name>
        <value>/home/cai/simple/soft/hadoop-2.7.1/etc/hadoop
    /hdfs/data</value>
      </property>
    -->
    
     <property>
      <name>dfs.replication</name>
      <value>1</value>
     </property>
    
     <property>
      <name>dfs.webhdfs.enabled</name>
      <value>true</value>
     </property>
    
    
    </configuration>
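
    The name and data directories referenced above should exist and be writable by the user running Hadoop before the NameNode is formatted; the format step will normally create the name directory itself, but creating both explicitly makes permission errors easier to spot:

    mkdir -p /home/cai/simple/soft/hadoop-2.7.1/hdfs/name
    mkdir -p /home/cai/simple/soft/hadoop-2.7.1/hdfs/data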

    Configure the mapred-site.xml file under $HADOOP_HOME/etc/hadoop

    vim mapred-site.xml
    If mapred-site.xml does not exist, create it yourself, e.g. from the bundled template as shown below.
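
    In Hadoop 2.x the distribution ships a template for this file, so one way to create it is:

    cd /usr/local/hadoop/etc/hadoop/
    cp mapred-site.xml.template mapred-site.xml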

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at
    
        http://www.apache.org/licenses/LICENSE-2.0
    
      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License. See accompanying LICENSE file.
    -->
    
    <!-- Put site-specific property overrides in this file. -->
    <configuration>
      <property>
       <name>mapreduce.framework.name</name>
       <value>yarn</value>
     </property>
    
     <property>
      <name>mapreduce.jobhistory.address</name>
      <value>172.16.12.37:10020</value>
     </property>
    
     <property>
      <name>mapreduce.jobhistory.webapp.address</name>
      <value>172.16.12.37:19888</value>
     </property>
    
    </configuration>

    Configure the yarn-site.xml file under $HADOOP_HOME/etc/hadoop

    vim yarn-site.xml

    <?xml version="1.0"?>
    <!--
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at
    
        http://www.apache.org/licenses/LICENSE-2.0
    
      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License. See accompanying LICENSE file.
    -->
    <configuration>
    
    <!-- Site specific YARN configuration properties -->
    <property>
       <name>yarn.nodemanager.aux-services</name>
       <value>mapreduce_shuffle</value>
      </property>
    
      <property>
       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
      </property>
    
     <property>
       <name>yarn.resourcemanager.address</name>
       <value>172.16.12.37:8032</value>
      </property>
    
      <property>
       <name>yarn.resourcemanager.scheduler.address</name>
       <value>172.16.12.37:8030</value>
      </property>
    
      <property>
       <name>yarn.resourcemanager.resource-tracker.address</name>
       <value>172.16.12.37:8035</value>
      </property>
    
     <property>
       <name>yarn.resourcemanager.admin.address</name>
       <value>172.16.12.37:8033</value>
      </property>
    
      <property>
       <name>yarn.resourcemanager.webapp.address</name>
       <value>172.16.12.37:8088</value>
      </property>
    
    </configuration>
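
    Before starting anything, it is worth sanity-checking the XML syntax of all four files; a quick sketch, assuming xmllint from the libxml2-utils package is available:

    cd /usr/local/hadoop/etc/hadoop/
    for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
        xmllint --noout "$f" && echo "$f OK"
    done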

    Configure the /etc/profile file

    vim /etc/profile

    # /etc/profile
    
    # System wide environment and startup programs, for login setup
    # Functions and aliases go in /etc/bashrc
    
    # It's NOT a good idea to change this file unless you know what you
    # are doing. It's much better to create a custom.sh shell script in
    # /etc/profile.d/ to make custom changes to your environment, as this
    # will prevent the need for merging in future updates.
    
    pathmunge () {
        case ":${PATH}:" in
            *:"$1":*)
                ;;
            *)
                if [ "$2" = "after" ] ; then
                    PATH=$PATH:$1
                else
                    PATH=$1:$PATH
                fi
        esac
    }
    
    if [ -x /usr/bin/id ]; then
        if [ -z "$EUID" ]; then
            # ksh workaround
            EUID=`id -u`
            UID=`id -ru`
        fi
        USER="`id -un`"
        LOGNAME=$USER
        MAIL="/var/spool/mail/$USER"
    fi
    
    # Path manipulation
    if [ "$EUID" = "0" ]; then
        pathmunge /usr/sbin
        pathmunge /usr/local/sbin
    else
        pathmunge /usr/local/sbin after
        pathmunge /usr/sbin after
    fi
    HOSTNAME=`/usr/bin/hostname 2>/dev/null`
    HISTSIZE=1000
    if [ "$HISTCONTROL" = "ignorespace" ] ; then
        export HISTCONTROL=ignoreboth
    else
        export HISTCONTROL=ignoredups
    fi
    
    export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL
    
    # By default, we want umask to get set. This sets it for login shell
    # Current threshold for system reserved uid/gids is 200
    # You could check uidgid reservation validity in
    # /usr/share/doc/setup-*/uidgid file
    if [ $UID -gt 199 ] && [ "`id -gn`" = "`id -un`" ]; then
        umask 002
    else
        umask 022
    fi
    
    for i in /etc/profile.d/*.sh ; do
        if [ -r "$i" ]; then
            if [ "${-#*i}" != "$-" ]; then
                . "$i"
            else
                . "$i" >/dev/null
            fi
        fi
    done
    
    unset i
    unset -f pathmunge
    
    #java environment
    export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
    export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar
    export PATH=$PATH:${JAVA_HOME}/bin

    #hadoop environment
    export HADOOP_HOME=/usr/local/hadoop
    export PATH=$PATH:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin

    Apply the configuration

    To make the new settings take effect, run source /etc/profile:

    source /etc/profile
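
    To confirm the variables took effect:

    echo $JAVA_HOME
    echo $HADOOP_HOME
    hadoop version    # should report Hadoop 2.7.1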

    Format the NameNode

    Format the NameNode by running hdfs namenode -format (or the older hadoop namenode -format, which is deprecated in Hadoop 2.x) from any directory.

    hdfs namenode -format 
    or
    hadoop namenode -format 
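
    On success, the output should include a line similar to the following (the exact path follows dfs.namenode.name.dir):

    INFO common.Storage: Storage directory /home/cai/simple/soft/hadoop-2.7.1/hdfs/name has been successfully formatted.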

    Start the Hadoop cluster

    To start the Hadoop daemons, first run start-dfs.sh to bring up HDFS.

    start-dfs.sh

    Then start the YARN daemons

    start-yarn.sh

    Check the running processes with jps

    jps
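
    On a single-node setup like this one, the listing should show roughly the following processes (PIDs will differ):

    12544 NameNode
    12712 DataNode
    12928 SecondaryNameNode
    13161 ResourceManager
    13294 NodeManager
    13630 Jps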

    UI test

    There are two ways to test HDFS and YARN from a browser (Firefox is recommended): launch the browser from the command line, or open it directly from the desktop.

    firefox

    Ports: 8088 and 50070

    First, open http://172.16.12.37:50070/ in the browser (the HDFS management UI). This IP is the virtual machine's own address, so substitute yours; the port is fixed.

    Then open http://172.16.12.37:8088/ (the MapReduce/YARN management UI). Again, substitute your own IP address; the port is fixed.
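
    If no browser is available, the same health information can be pulled from the command line:

    hdfs dfsadmin -report    # HDFS capacity and live DataNodes
    yarn node -list          # NodeManagers registered with the ResourceManager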

Original article: https://www.cnblogs.com/cainiao-chuanqi/p/13865358.html