• Installing Hadoop


    Generate the yum repository (on the master node serving the CDH packages)

    cd /var/ftp/pub/cdh/5

    createrepo --update .

    On the slave nodes, clean the yum cache

    yum clean all

    Configure the yum repository

    /etc/yum.repos.d

    # cat /etc/yum.repos.d/cloudera-cdh.repo
    [hadoop]
    name=hadoop
    baseurl=ftp://192.168.34.135/pub/cdh/5/
    enabled=1
    gpgcheck=0
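The repo file above can be written in one step. The FTP address 192.168.34.135 comes from the source; the temp directory below is only for illustration, the real target is /etc/yum.repos.d. A minimal sketch:

```shell
# Write the CDH repo definition; use /etc/yum.repos.d on a real node.
REPO_DIR=$(mktemp -d)        # temp dir for illustration only
cat > "$REPO_DIR/cloudera-cdh.repo" <<'EOF'
[hadoop]
name=hadoop
baseurl=ftp://192.168.34.135/pub/cdh/5/
enabled=1
gpgcheck=0
EOF
cat "$REPO_DIR/cloudera-cdh.repo"
```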

    Install the JDK

    yum install jdk

    echo 'export JAVA_HOME=/usr/java/latest' >> /root/.bash_profile

    echo 'export PATH=$JAVA_HOME/bin:$PATH' >> /root/.bash_profile
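Single quotes matter here: with double quotes the shell expands $JAVA_HOME at write time (while it is still unset), so the PATH line would be written with a bare /bin instead of deferring expansion to login. A quick local demonstration of the difference:

```shell
# Single quotes keep $JAVA_HOME literal until the profile is sourced;
# double quotes expand it immediately (empty here, since it is not yet set).
LITERAL=$(echo 'export PATH=$JAVA_HOME/bin:$PATH')
EXPANDED=$(echo "export PATH=$JAVA_HOME/bin:$PATH")
echo "$LITERAL"
echo "$EXPANDED"
```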

    Install the NameNode

    yum install hadoop hadoop-hdfs hadoop-client hadoop-doc hadoop-debuginfo hadoop-hdfs-namenode

    Install WebHDFS (the HttpFS gateway)

    yum install hadoop-httpfs

    Install the Secondary NameNode

    yum install hadoop-hdfs-secondarynamenode

    Install the DataNode (on each data node)

    yum install hadoop hadoop-hdfs hadoop-client hadoop-doc hadoop-debuginfo hadoop-hdfs-datanode

    Configure HDFS

    The configuration files live under

    /etc/hadoop/conf

    core-site.xml

    <property>
      <name>hadoop.tmp.dir</name>
      <value>/home/yimr/var/filesystem</value>
    </property>

    <property>
      <!-- fs.default.name is deprecated in Hadoop 2; fs.defaultFS is the current name -->
      <name>fs.defaultFS</name>
      <value>hdfs://yi01:9000</value>
    </property>

    hdfs-site.xml

    <property>
      <name>dfs.replication</name>
      <value>2</value>
    </property>

    <property>
      <name>dfs.permissions.superusergroup</name>
      <value>yimr</value>
    </property>

    Configure the Secondary NameNode

    hdfs-site.xml

    <property>
      <!-- dfs.secondary.http.address is the deprecated Hadoop 1 name for this property -->
      <name>dfs.namenode.secondary.http-address</name>
      <value>yi01:50090</value>
    </property>

    Configure WebHDFS (HttpFS proxy user)

    core-site.xml

    <property>
      <name>hadoop.proxyuser.httpfs.hosts</name>
      <value>*</value>
    </property>

    <property>
      <name>hadoop.proxyuser.httpfs.groups</name>
      <value>*</value>
    </property>

    Start WebHDFS (HttpFS)

    service hadoop-httpfs start
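Once HttpFS is up it listens on port 14000 by default, so a quick sanity check is to list the HDFS root over REST. The hostname yi01 and the hdfs user follow the configuration above; the curl itself must run against the live cluster:

```shell
HTTPFS_HOST=yi01             # HttpFS host, per the configuration above
URL="http://${HTTPFS_HOST}:14000/webhdfs/v1/?op=LISTSTATUS&user.name=hdfs"
echo "$URL"
# On a running cluster:
# curl -s "$URL"             # returns a JSON FileStatuses listing of /
```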

    Start HDFS

    Sync the configuration to every node

    rsync -av /etc/hadoop/conf/ 192.168.34.130:/etc/hadoop/conf/
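With more than one slave the rsync is worth looping over a node list. Only 192.168.34.130 appears in the source; the second address below is a hypothetical extra slave. A dry-run sketch that just prints the commands:

```shell
NODES="192.168.34.130 192.168.34.131"    # .131 is a hypothetical second slave
for node in $NODES; do
    # Dry run: prints the command; drop the echo to actually sync
    echo rsync -av /etc/hadoop/conf/ "${node}:/etc/hadoop/conf/"
done
```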

    Format the NameNode

    mkdir -p /home/yimr/var/filesystem

    chown -R hdfs:hdfs /home/yimr/var/filesystem

    sudo -u hdfs hdfs namenode -format

    Start HDFS on every node

    for x in $(ls /etc/init.d/ | grep hadoop-hdfs); do service "$x" start; done

    service hadoop-hdfs-datanode start
    service hadoop-hdfs-namenode start

    Installing YARN

    Install the ResourceManager

    yum install hadoop-yarn hadoop-yarn-resourcemanager

    Install the NodeManager (on each worker)

    yum install hadoop-yarn hadoop-yarn-nodemanager hadoop-mapreduce

    Install the History Server

    yum install hadoop-mapreduce-historyserver hadoop-yarn-proxyserver

    Configure YARN

    mapred-site.xml

    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>

    yarn-site.xml

    <property>
      <description>The hostname of the RM.</description>
      <name>yarn.resourcemanager.hostname</name>
      <value>yi01</value>
    </property>

    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
    </property>

    Configure the History Server

    mapred-site.xml

    <property>
      <name>yarn.app.mapreduce.am.staging-dir</name>
      <value>/user</value>
    </property>
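The CDH documentation also pins the history server's RPC and web addresses in mapred-site.xml. The source does not name the host running hadoop-mapreduce-historyserver; yi01 (the master in the configs above) is assumed here:

```xml
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>yi01:10020</value>
</property>

<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>yi01:19888</value>
</property>
```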

    Start the services

    Start the History Server

    sudo -u hdfs hadoop fs -mkdir -p /user

    sudo -u hdfs hadoop fs -chmod 777 /user

    sudo -u hdfs hadoop fs -mkdir -p /user/history

    sudo -u hdfs hadoop fs -chmod -R 1777 /user/history

    sudo -u hdfs hadoop fs -chown mapred:hadoop /user/history

    /etc/init.d/hadoop-mapreduce-historyserver start

    Start YARN

    for x in $(ls /etc/init.d/ | grep hadoop-yarn); do service "$x" start; done

    service hadoop-yarn-nodemanager start

    HDFS web UI

    yi01:50070

    YARN web UI

    yi01:8088

    History Server web UI

    yi01:19888

    Test
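The usual smoke test is the bundled pi example, which exercises HDFS and YARN end to end. The jar path below is the CDH 5 default install location; the job itself must run on the cluster:

```shell
EXAMPLES_JAR=/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar   # CDH 5 default path
CMD="sudo -u hdfs hadoop jar $EXAMPLES_JAR pi 2 100"
echo "$CMD"
# Running it on the cluster prints an estimate of pi once the MapReduce job finishes.
```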

  • Original article: https://www.cnblogs.com/osroot/p/3875820.html