• Setting up a Hadoop client: accessing Hadoop from a host outside the cluster


    1. Add the host mapping (the same mapping as on the namenode):

    Append the last line (the hadoop-master entry) to /etc/hosts:

     [root@localhost ~]# su - root
    [root@localhost ~]# vi /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.48.129 hadoop-master
    [root@localhost ~]#
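
    A quick sanity check (not in the original post): confirm the mapping resolves before continuing. The hostname and IP come from the hosts entry above.

    [root@localhost ~]# getent hosts hadoop-master
    192.168.48.129  hadoop-master
    [root@localhost ~]# ping -c 3 hadoop-master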


    2. Create the hadoop user

    Create the hadoop group.

    Create the user with useradd -d /usr/hadoop -g hadoop -m hadoop (new user hadoop with home directory /usr/hadoop, belonging to group hadoop).

    Set the password with passwd hadoop (here the password is set to hadoop).

    [root@localhost ~]# groupadd hadoop  
    [root@localhost ~]# useradd -d /usr/hadoop -g hadoop -m hadoop
    [root@localhost ~]# passwd hadoop
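
    To verify the account (a quick check, not in the original post; the numeric IDs shown are illustrative):

    [root@localhost ~]# id hadoop
    uid=1001(hadoop) gid=1001(hadoop) groups=1001(hadoop)
    [root@localhost ~]# getent passwd hadoop
    hadoop:x:1001:1001::/usr/hadoop:/bin/bash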

     

    3. Configure the JDK environment

    This guide installs hadoop-2.7.5, which requires JDK 7 or later. If the JDK is already installed, skip this step.

    For JDK installation, see: https://www.cnblogs.com/shihaiming/p/5809553.html

    Alternatively, copy the JDK files directly from the master, which also helps keep versions consistent.

    [root@localhost java]# su - root
    [root@localhost java]# mkdir -p /usr/java
    [root@localhost java]# scp -r hadoop@hadoop-master:/usr/java/jdk1.7.0_79 /usr/java
    [root@localhost java]# ll
    total 12
    drwxr-xr-x. 8 root root 4096 Feb 13 01:34 default
    drwxr-xr-x. 8 root root 4096 Feb 13 01:34 jdk1.7.0_79
    drwxr-xr-x. 8 root root 4096 Feb 13 01:34 latest

    Set the Java and Hadoop environment variables

    Make sure /usr/java/jdk1.7.0_79 exists:

    su - root

    vi /etc/profile

    Append the following at the end of the file (the two unset lines are the existing tail of /etc/profile):

    unset i
    unset -f pathmunge
    JAVA_HOME=/usr/java/jdk1.7.0_79
    CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    PATH=/usr/hadoop/hadoop-2.7.5/bin:$JAVA_HOME/bin:$PATH
    export JAVA_HOME CLASSPATH PATH

    Apply the changes (important):

    [root@localhost ~]# source /etc/profile
    [root@localhost ~]# 
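
    A quick check that the variables took effect in the current shell (not in the original post):

    [root@localhost ~]# echo $JAVA_HOME
    /usr/java/jdk1.7.0_79
    [root@localhost ~]# echo $PATH | tr ':' '\n' | head -2
    /usr/hadoop/hadoop-2.7.5/bin
    /usr/java/jdk1.7.0_79/bin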

    Confirm the JDK installation:

    [hadoop@localhost ~]$ java -version
    java version "1.7.0_79"
    Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
    Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
    [hadoop@localhost ~]$ 

    4. Set up the Hadoop environment

    Copy the already-configured Hadoop directory from the namenode to this host.

     

    [root@localhost ~]# su - hadoop
    Last login: Sat Feb 24 14:04:55 CST 2018 on pts/1
    [hadoop@localhost ~]$ pwd
    /usr/hadoop
    [hadoop@localhost ~]$ scp -r hadoop@hadoop-master:/usr/hadoop/hadoop-2.7.5 .
    The authenticity of host 'hadoop-master (192.168.48.129)' can't be established.
    ECDSA key fingerprint is 1e:cd:d1:3d:b0:5b:62:45:a3:63:df:c7:7a:0f:b8:7c.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'hadoop-master,192.168.48.129' (ECDSA) to the list of known hosts.
    hadoop@hadoop-master's password:
    [hadoop@localhost ~]$ ll
    total 0
    drwxr-xr-x  2 hadoop hadoop   6 Feb 24 11:32 Desktop
    drwxr-xr-x  2 hadoop hadoop   6 Feb 24 11:32 Documents
    drwxr-xr-x  2 hadoop hadoop   6 Feb 24 11:32 Downloads
    drwxr-xr-x 10 hadoop hadoop 150 Feb 24 14:30 hadoop-2.7.5
    drwxr-xr-x  2 hadoop hadoop   6 Feb 24 11:32 Music
    drwxr-xr-x  2 hadoop hadoop   6 Feb 24 11:32 Pictures
    drwxr-xr-x  2 hadoop hadoop   6 Feb 24 11:32 Public
    drwxr-xr-x  2 hadoop hadoop   6 Feb 24 11:32 Templates
    drwxr-xr-x  2 hadoop hadoop   6 Feb 24 11:32 Videos
    [hadoop@localhost ~]$ 

    At this point the Hadoop client installation is complete and ready to use.

    Running the hadoop command gives the following output:

     

    [hadoop@localhost ~]$ hadoop
    Usage: hadoop [--config confdir] [COMMAND | CLASSNAME]
      CLASSNAME            run the class named CLASSNAME
     or
      where COMMAND is one of:
      fs                   run a generic filesystem user client
      version              print the version
      jar <jar>            run a jar file
                           note: please use "yarn jar" to launch
                                 YARN applications, not this command.
      checknative [-a|-h]  check native hadoop and compression libraries availability
      distcp <srcurl> <desturl> copy file or directories recursively
      archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
      classpath            prints the class path needed to get the
                           Hadoop jar and the required libraries
      credential           interact with credential providers
      daemonlog            get/set the log level for each daemon
      trace                view and modify Hadoop tracing settings
    
    Most commands print help when invoked w/o parameters.
    [hadoop@localhost ~]$
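
    You can also confirm that the client runs the same build that was copied from the master (a quick check, not in the original post; the output shown is the expected first line):

    [hadoop@localhost ~]$ hadoop version
    Hadoop 2.7.5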

     5. Using Hadoop

    Check the HDFS home directory, then create a local file:

    [hadoop@localhost ~]$ hdfs dfs -ls
    Found 1 items
    drwxr-xr-x   - hadoop supergroup          0 2018-02-22 23:41 output

    [hadoop@localhost ~]$ vi my-local.txt
    hello boy!
    yehyeh

    Upload the local file to the cluster:

    [hadoop@localhost ~]$ hdfs dfs -mkdir upload
    [hadoop@localhost ~]$ hdfs dfs -ls upload
    [hadoop@localhost ~]$ hdfs dfs -ls 
    Found 2 items
    drwxr-xr-x   - hadoop supergroup          0 2018-02-22 23:41 output
    drwxr-xr-x   - hadoop supergroup          0 2018-02-23 22:38 upload
    [hadoop@localhost ~]$ hdfs dfs -ls upload
    [hadoop@localhost ~]$ hdfs dfs -put my-local.txt upload
    [hadoop@localhost ~]$ hdfs dfs -ls upload
    Found 1 items
    -rw-r--r--   3 hadoop supergroup         18 2018-02-23 22:45 upload/my-local.txt
    [hadoop@localhost ~]$ hdfs dfs -cat upload/my-local.txt
    hello boy!
    yehyeh
    [hadoop@localhost ~]$
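
    To round out the workflow (an addition, not in the original post; my-local-copy.txt is just an illustrative name), the file can be copied back out of HDFS:

    [hadoop@localhost ~]$ hdfs dfs -get upload/my-local.txt my-local-copy.txt
    [hadoop@localhost ~]$ cat my-local-copy.txt
    hello boy!
    yehyeh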

     PS: Whether the local Java version must match the JAVA_HOME configured in etc/hadoop/hadoop-env.sh of the files copied from the master has not been verified; in this article the two are kept consistent.
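
    To see which JAVA_HOME the copied configuration expects (a quick check, not in the original post; the value shown assumes the master uses the same JDK path as above):

    [hadoop@localhost ~]$ grep 'export JAVA_HOME' hadoop-2.7.5/etc/hadoop/hadoop-env.sh
    export JAVA_HOME=/usr/java/jdk1.7.0_79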

  • Original article: https://www.cnblogs.com/pu20065226/p/8464867.html