• [repost] hdfs-over-ftp installation


    Original article: http://nubetech.co/accessing-hdfs-over-ftp
    This program talks to HDFS through port 9000. I hear there is also Hadoop's own extension package, which requires recompiling Hadoop; if I get the chance I will install it and compare the two for efficiency.
    Download the tarball: hdfs-over-ftp-0.20.0.tar.gz (my Hadoop is 0.20.2).
     
    1. After unpacking, run the following in the extracted directory:
    ./register-user.sh username password >> users.conf
    This appends a new FTP account entry to users.conf.
    One aside: in xxxx.homedirectory=/, the / here is the root directory of your HDFS.
    2. Edit hdfs-over-ftp.conf; two settings need attention:
    hdfs-uri = hdfs://localhost:9000  # make sure localhost:9000 can actually reach your Hadoop
    superuser = hadoop  # verify that "hadoop" is the user that runs your Hadoop services
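    A quick sanity check for both settings before starting the server (run from your Hadoop install directory; the bracketed grep pattern just keeps grep from matching its own process):
    bin/hadoop fs -ls hdfs://localhost:9000/   # the URI must reach the NameNode
    ps -ef | grep -i '[n]amenode'              # the owning user should match superuser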
    3. In log4j.conf, set
    log4j.appender.R.File=xxxxxxx
     
    4. Start/stop:
    sudo ./hdfs-over-ftp.sh start
    # sudo is required; if you do not have sudo rights, edit /etc/sudoers and add the following line below the root entry (see the visudo note after this step):
    username    ALL=(ALL)       ALL
    sudo ./hdfs-over-ftp.sh stop
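    When editing /etc/sudoers, it is safer to go through visudo, which checks the syntax before saving (run it as root):
    visudo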
     
     

    The Hadoop Distributed File System provides several interfaces through which clients can interact with it. Besides the HDFS shell, the file system exposes itself through WebDAV, Thrift, FTP, and FUSE. In this post, we access HDFS over FTP. We have used Hadoop 0.20.2.

    1. Download the hdfs-over-ftp tar from https://issues.apache.org/jira/secure/attachment/12409518/hdfs-over-ftp-0.20.0.tar.gz

    2. Untar hdfs-over-ftp-0.20.0.tar.gz.
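    For example (assuming the archive unpacks into a directory of the same name):
    tar xzf hdfs-over-ftp-0.20.0.tar.gz
    cd hdfs-over-ftp-0.20.0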

    3. We now need to create the user configuration with an FTP username and password:

    ./register-user.sh username password >> users.conf

    # the username user
    ftpserver.user.username.userpassword=0238775C7BD96E2EAB98038AFE0C4279
    ftpserver.user.username.homedirectory=/
    ftpserver.user.username.enableflag=true
    ftpserver.user.username.writepermission=true
    ftpserver.user.username.maxloginnumber=0
    ftpserver.user.username.maxloginperip=0
    ftpserver.user.username.idletime=0
    ftpserver.user.username.uploadrate=0
    ftpserver.user.username.downloadrate=0
    ftpserver.user.username.groups=users
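    The userpassword value looks like an upper-case MD5 hex digest of the plaintext password. Assuming that is what register-user.sh emits, you could reproduce it by hand like this (a sketch, not the script's actual code):
    echo -n "password" | md5sum | awk '{ print toupper($1) }'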

    4. Configure log4j.conf so that you can diagnose what's happening.
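    A minimal log4j.conf might look like the following; the appender name R matches the log4j.appender.R.File property mentioned earlier, and the log file path is just an example:
    log4j.rootLogger=INFO, R
    log4j.appender.R=org.apache.log4j.RollingFileAppender
    log4j.appender.R.File=/var/log/hdfs-over-ftp.log
    log4j.appender.R.MaxFileSize=10MB
    log4j.appender.R.MaxBackupIndex=5
    log4j.appender.R.layout=org.apache.log4j.PatternLayout
    log4j.appender.R.layout.ConversionPattern=%d %-5p %c - %m%n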

    5. Now edit hdfs-over-ftp.conf according to your requirements:

    #uncomment this to run ftp server
    port = 21
    data-ports = 20

    #uncomment this to run ssl ftp server
    #ssl-port = 990
    #ssl-data-ports = 989

    # hdfs uri
    hdfs-uri = hdfs://localhost:9000

    # max number of login
    max-logins = 1000

    # max number of anonymous login
    max-anon-logins = 1000

    # has to be the user which runs HDFS;
    # this allows you to start the ftp server as root (to bind port 21)
    # while accessing HDFS as a superuser
    superuser = hadoop

    Set hdfs-uri to point at your own NameNode.

    6. Now start the FTP server:
    sudo ./hdfs-over-ftp.sh start
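    To confirm the server came up, check that something is listening on the FTP port and tail the log file you configured in step 4 (the log path here is the example one):
    sudo netstat -tlnp | grep ':21 '
    tail -f /var/log/hdfs-over-ftp.log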

    7. To log in to HDFS as an FTP client:
    ftp {ip address of namenode machine}
    (Note: use the username and password you registered in users.conf.)
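    A typical session might look like this (the address and file name are made up):
    $ ftp 192.168.1.10
    Name (192.168.1.10:user): username
    Password:
    ftp> ls
    ftp> put localfile.txt
    ftp> bye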

    8. To upload a file or write to any HDFS directory, you need to grant your user permission with Hadoop's 'chown' command (note the argument order is owner:group):

    bin/hadoop fs -chown -R username:group {path}
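    For example, to give the FTP user "username" a writable home directory (the path and group are illustrative):
    bin/hadoop fs -mkdir /user/username
    bin/hadoop fs -chown -R username:users /user/username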

    9. You can stop the FTP server with:
    sudo ./hdfs-over-ftp.sh stop

  • Source: https://www.cnblogs.com/zhangzhang/p/2441478.html