• hadoop 2.6.0 + hbase-1.0.0 pseudo-distributed configuration


    1 Basic configuration

    Hostname: 192.168.145.154 hadoop2


    2 Configuration files under etc/hadoop

    1) core-site.xml

    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://hadoop2:8020</value>
        </property>
        <property>
            <name>io.file.buffer.size</name>
            <value>131072</value>
        </property>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>/home/${user.name}/tmp/hadoop-${user.name}</value>
        </property>
    </configuration>

    2) hdfs-site.xml

    <configuration>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>file:/home/${user.name}/hdfs/name</value>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>file:/home/${user.name}/hdfs/data</value>
        </property>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
        <property>
            <name>dfs.webhdfs.enabled</name>
            <value>true</value>
        </property>
        <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>hadoop2:50090</value>
        </property>
        <property>
            <name>dfs.datanode.max.transfer.threads</name>
            <value>4096</value>
        </property>
    </configuration>
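
    The local directories referenced above can be created ahead of time if you prefer. A minimal sketch, assuming the Hadoop admin user is asn (as in the startup logs below), so that ${user.name} expands to asn:

    mkdir -p /home/asn/hdfs/name
    mkdir -p /home/asn/hdfs/data
    mkdir -p /home/asn/tmp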

    3) mapred-site.xml (configuration for the MapReduce batch-processing framework)

    <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.address</name>
            <value>hadoop2:10020</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.webapp.address</name>
            <value>hadoop2:19888</value>
        </property>
    </configuration>
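
    Note that the two job history addresses above are only served once the JobHistory server is running; it is not started by start-dfs.sh or start-yarn.sh. A minimal sketch, assuming the same CDH install path used in the startup logs below:

    /opt/cdh5/hadoop-2.6.0-cdh5.4.4/sbin/mr-jobhistory-daemon.sh start historyserver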

    4) yarn-site.xml

    <configuration>
        <!-- Configuration for ResourceManager -->
        <!-- Host on which the ResourceManager runs -->
        <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>hadoop2</value>
            <description>The hostname of the ResourceManager</description>
        </property>

        <!-- ApplicationMasters talk to the scheduler on the ResourceManager at hadoop2:8030 (scheduler RPC address) -->
        <property>
            <name>yarn.resourcemanager.scheduler.address</name>
            <value>hadoop2:8030</value>
            <description>ResourceManager host:port for ApplicationMasters to talk to the Scheduler to obtain resources. The address of the scheduler interface.</description>
        </property>

        <!-- NodeManagers report to the ResourceManager at hadoop2:8031 (resource-tracker RPC address) -->
        <property>
            <name>yarn.resourcemanager.resource-tracker.address</name>
            <value>hadoop2:8031</value>
            <description>ResourceManager host:port for NodeManagers</description>
        </property>

        <!-- Clients submit jobs to the ResourceManager at hadoop2:8032 -->
        <property>
            <name>yarn.resourcemanager.address</name>
            <value>hadoop2:8032</value>
            <description>ResourceManager host:port for clients to submit jobs. The address of the applications manager interface in the ResourceManager.</description>
        </property>

        <!-- Administrative commands are issued to the ResourceManager via hadoop2:8033 -->
        <property>
            <name>yarn.resourcemanager.admin.address</name>
            <value>hadoop2:8033</value>
            <description>ResourceManager host:port for administrative commands.</description>
        </property>

        <!-- HTTP address of the ResourceManager web UI -->
        <property>
            <name>yarn.resourcemanager.webapp.address</name>
            <value>hadoop2:8088</value>
            <description>ResourceManager web-ui host:port. The http address of the ResourceManager web application.</description>
        </property>

        <property>
            <name>yarn.resourcemanager.scheduler.class</name>
            <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
            <description>The class to use as the resource scheduler.</description>
        </property>

        <!-- Configuration for NodeManager -->
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
            <description>Shuffle service that needs to be set for Map Reduce applications.</description>
        </property>
    </configuration>

    5) slaves

    hadoop2

    6) JAVA_HOME configuration
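
    For example, in etc/hadoop/hadoop-env.sh (the JDK path below is only an illustrative assumption; point it at the JDK actually installed on your machine):

    # etc/hadoop/hadoop-env.sh
    export JAVA_HOME=/usr/lib/jvm/java-7-oracle    # hypothetical path, use your own JDK location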

    7) Format the NameNode (hdfs namenode -format)
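
    The full command, using the install path that appears in the startup logs below:

    /opt/cdh5/hadoop-2.6.0-cdh5.4.4/bin/hdfs namenode -format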

    8) Startup logs

    asn@hadoop2:~$ /opt/cdh5/hadoop-2.6.0-cdh5.4.4/sbin/start-dfs.sh
    Starting namenodes on [hadoop2]
    hadoop2: starting namenode, logging to /opt/cdh5/hadoop-2.6.0-cdh5.4.4/logs/hadoop-asn-namenode-hadoop2.out
    hadoop2: starting datanode, logging to /opt/cdh5/hadoop-2.6.0-cdh5.4.4/logs/hadoop-asn-datanode-hadoop2.out
    Starting secondary namenodes [hadoop2]
    hadoop2: starting secondarynamenode, logging to /opt/cdh5/hadoop-2.6.0-cdh5.4.4/logs/hadoop-asn-secondarynamenode-hadoop2.out
    
    asn@hadoop2:~$ /opt/cdh5/hadoop-2.6.0-cdh5.4.4/sbin/start-yarn.sh
    starting yarn daemons
    starting resourcemanager, logging to /opt/cdh5/hadoop-2.6.0-cdh5.4.4/logs/yarn-asn-resourcemanager-hadoop2.out
    hadoop2: starting nodemanager, logging to /opt/cdh5/hadoop-2.6.0-cdh5.4.4/logs/yarn-asn-nodemanager-hadoop2.out
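
    A quick way to confirm that the daemons came up is jps; on this pseudo-distributed setup you would expect to see NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager listed:

    asn@hadoop2:~$ jps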

    ############################

    hbase-1.0.0-cdh5.4.4 pseudo-distributed configuration

    The following are some settings made on the Ubuntu system in preparation for installing HBase.

    1) Configure the maximum number of file handles the Hadoop admin user asn may have open at the same time, and the upper limit on the number of processes for that user

    Edit /etc/security/limits.conf (for reference see http://blog.csdn.net/taijianyu/article/details/5976319):

    # /etc/security/limits.conf
    #
    #Each line describes a limit for a user in the form:
    #
    #<domain>        <type>  <item>  <value>
    #
    #Where:
    #<domain> can be:
    #        - a user name
    #        - a group name, with @group syntax
    #        - the wildcard *, for default entry
    #        - the wildcard %, can be also used with %group syntax, for maxlogin limit
    #        - NOTE: group and wildcard limits are not applied to root.
    #          To apply a limit to the root user, <domain> must be the literal username root.
    #
    #<type> can have the two values:
    #        - "soft" for enforcing the soft limits
    #        - "hard" for enforcing hard limits
    #
    #<item> can be one of the following:
    #        - core - limits the core file size (KB)
    #        - data - max data size (KB)
    #        - fsize - maximum filesize (KB)
    #        - memlock - max locked-in-memory address space (KB)
    #        - nofile - max number of open files
    #        - rss - max resident set size (KB)
    #        - stack - max stack size (KB)
    #        - cpu - max CPU time (MIN)
    #        - nproc - max number of processes
    #        - as - address space limit (KB)
    #        - maxlogins - max number of logins for this user
    #        - maxsyslogins - max number of logins on the system
    #        - priority - the priority to run user process with
    #        - locks - max number of file locks the user can hold
    #        - sigpending - max number of pending signals
    #        - msgqueue - max memory used by POSIX message queues (bytes)
    #        - nice - max nice priority allowed to raise to values: [-20, 19]
    #        - rtprio - max realtime priority
    #        - chroot - change root to directory (Debian-specific)
    #
    #<domain>      <type>  <item>         <value>
    #
    
    #*               soft    core            0
    #root            hard    core            100000
    #*               hard    rss             10000
    #@student        hard    nproc           20
    #@faculty        soft    nproc           20
    #@faculty        hard    nproc           50
    #ftp             hard    nproc           0
    #ftp             -       chroot          /ftp
    #@student        -       maxlogins       4
    asn              -       nofile         32768
    asn              -       nproc          16384
    
    # End of file

    For comparison, the file-handle and process limits set for the oracle user during an Oracle 11gR2 installation:

     # vim /etc/security/limits.conf, then append at the end of the file:
      oracle            soft    nproc           2047
      oracle            hard    nproc           16384
      oracle            soft    nofile          1024
      oracle            hard    nofile          65536
      oracle            soft    stack           10240

    type: can be soft, hard, or -

      soft is the value currently in effect on the system

      hard is the maximum value that can be set

      using - sets both the soft and hard values at once

    Add the line session required pam_limits.so to /etc/pam.d/common-session, then log in again for the change to take effect.

    #
    # /etc/pam.d/common-session - session-related modules common to all services
    #
    # This file is included from other service-specific PAM config files,
    # and should contain a list of modules that define tasks to be performed at the start and end of sessions of *any* kind (both interactive and non-interactive).
    #
    # As of pam 1.0.1-6, this file is managed by pam-auth-update by default.
    # To take advantage of this, it is recommended that you configure any local modules either before or after the default block,
    # and use pam-auth-update to manage selection of other modules. 
    # See pam-auth-update(8) for details.
    
    # here are the per-package modules (the "Primary" block)
    session [default=1]                     pam_permit.so
    # here's the fallback if no module succeeds
    session requisite                       pam_deny.so
    # prime the stack with a positive return value if there isn't one already;
    # this avoids us returning an error just because nothing sets a success code
    # since the modules above will each just jump around
    session required                        pam_permit.so
    # The pam_umask module will set the umask according to the system default in
    # /etc/login.defs and user settings, solving the problem of different
    # umask settings with different shells, display managers, remote sessions etc.
    # See "man pam_umask".
    session optional                        pam_umask.so
    # and here are more per-package modules (the "Additional" block)
    session required                        pam_unix.so
    session optional                        pam_systemd.so

    session required                        pam_limits.so
    # end of pam-auth-update config
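
    After logging in again as asn, the new limits can be checked with the shell's ulimit builtin; the expected values are the ones set in limits.conf above:

    asn@hadoop2:~$ ulimit -n    # open files, should now report 32768
    asn@hadoop2:~$ ulimit -u    # max user processes, should now report 16384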

     2) Modify kernel parameters

    Append the following to /etc/sysctl.conf:

    # Uncomment the next two lines to enable Spoof protection (reverse-path filter)
    # Turn on Source Address Verification in all interfaces to prevent some spoofing attacks
    #net.ipv4.conf.default.rp_filter=1
    #net.ipv4.conf.all.rp_filter=1
    
    # Uncomment the next line to enable TCP/IP SYN cookies
    # See http://lwn.net/Articles/277146/
    # Note: This may impact IPv6 TCP sessions too
    #net.ipv4.tcp_syncookies=1
    
    # Uncomment the next line to enable packet forwarding for IPv4
    #net.ipv4.ip_forward=1
    
    # Uncomment the next line to enable packet forwarding for IPv6
    #  Enabling this option disables Stateless Address Autoconfiguration
    #  based on Router Advertisements for this host
    #net.ipv6.conf.all.forwarding=1
    
    
    ###################################################################
    # Additional settings - these settings can improve the network
    # security of the host and prevent against some network attacks
    # including spoofing attacks and man in the middle attacks through redirection.
    # Some network environments, however, require that these settings are disabled so review and enable them as needed.
    #
    # Do not accept ICMP redirects (prevent MITM attacks)
    #net.ipv4.conf.all.accept_redirects = 0
    #net.ipv6.conf.all.accept_redirects = 0
    # _or_
    # Accept ICMP redirects only for gateways listed in our default
    # gateway list (enabled by default)
    # net.ipv4.conf.all.secure_redirects = 1
    #
    # Do not send ICMP redirects (we are not a router)
    #net.ipv4.conf.all.send_redirects = 0
    #
    # Do not accept IP source route packets (we are not a router)
    #net.ipv4.conf.all.accept_source_route = 0
    #net.ipv6.conf.all.accept_source_route = 0
    #
    # Log Martian Packets
    #net.ipv4.conf.all.log_martians = 1
    #
    
    fs.aio-max-nr = 1048576 
    fs.file-max = 6553600
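
    These settings take effect after running sysctl -p (or after a reboot); individual keys can be read back afterwards to confirm, for example:

    sudo /sbin/sysctl -p
    /sbin/sysctl fs.file-max      # should report 6553600
    /sbin/sysctl fs.aio-max-nr    # should report 1048576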

    For comparison, the kernel parameter settings from an Oracle 11gR2 installation:

    # vim /etc/sysctl.conf, then append at the end of the file:
     fs.aio-max-nr = 1048576 
     fs.file-max = 6553600 
     kernel.shmall = 2097152 
     kernel.shmmax = 2147483648 
     kernel.shmmni = 4096 
     kernel.sem = 250 32000 100 128 
     net.ipv4.ip_local_port_range = 1024 65000 
     net.core.rmem_default = 262144 
     net.core.rmem_max = 4194304 
     net.core.wmem_default = 262144 
     net.core.wmem_max = 1048586 
     Save the file.
     # /sbin/sysctl -p          # make the parameters take effect

    3) Set the maximum number of threads the DataNode uses for transferring data in and out

    In hdfs-site.xml, set the property dfs.datanode.max.transfer.threads to 4096 (this entry already appears in the hdfs-site.xml section above).
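
    The corresponding hdfs-site.xml entry looks like this:

        <property>
            <name>dfs.datanode.max.transfer.threads</name>
            <value>4096</value>
        </property>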

    2 Configuration files under the conf directory

    1) hbase-env.sh

    Set the JAVA_HOME environment variable.

    Uncomment export HBASE_MANAGES_ZK=true so that HBase manages its own ZooKeeper instance (i.e. use the ZooKeeper bundled with HBase), as sketched below.
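
    The two relevant lines in conf/hbase-env.sh would then look roughly like this (the JAVA_HOME path is only an assumption; use your own JDK location):

    # conf/hbase-env.sh
    export JAVA_HOME=/usr/lib/jvm/java-7-oracle    # hypothetical path
    export HBASE_MANAGES_ZK=true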

    2) hbase-site.xml

    <configuration>
        <property>
            <name>hbase.rootdir</name>
            <value>hdfs://hadoop2:8020/hbase</value>
        </property>
        <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
        </property>

        <property>
            <name>hbase.tmp.dir</name>
            <value>/home/asn/tmp/hbase-asn</value>
        </property>
        <property>
            <name>hbase.zookeeper.quorum</name>
            <value>hadoop2</value>
        </property>
        <property>
            <name>hbase.zookeeper.property.dataDir</name>
            <value>/home/asn/zookeeperdata</value>
        </property>
    </configuration>

    3) regionservers

    hadoop2

    4) Startup logs

    asn@hadoop2:~$ /opt/cdh5/hbase-1.0.0-cdh5.4.4/bin/start-hbase.sh
    hadoop2: starting zookeeper, logging to /opt/cdh5/hbase-1.0.0-cdh5.4.4/bin/../logs/hbase-asn-zookeeper-hadoop2.out
    starting master, logging to /opt/cdh5/hbase-1.0.0-cdh5.4.4/bin/../logs/hbase-asn-master-hadoop2.out
    hadoop2: starting regionserver, logging to /opt/cdh5/hbase-1.0.0-cdh5.4.4/bin/../logs/hbase-asn-regionserver-hadoop2.out
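
    As a simple smoke test once the processes are up, the HBase shell can be used to check cluster status and create a throwaway table (the table and column family names below are arbitrary examples):

    asn@hadoop2:~$ /opt/cdh5/hbase-1.0.0-cdh5.4.4/bin/hbase shell
    hbase(main):001:0> status
    hbase(main):002:0> create 'test', 'cf'
    hbase(main):003:0> put 'test', 'row1', 'cf:a', 'value1'
    hbase(main):004:0> scan 'test'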