• Lessons from One-Click Hadoop Deployment with Docker (Part 1)


    I have recently been working on a one-click Docker deployment of a fully distributed Hadoop cluster. The deployment script is finished and brings up all the nodes successfully, but running a WordCount example failed with the following error:

    java.io.IOException: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=1536, maxMemory=1024
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:272)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:228)
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:236)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:385)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:330)
    at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:282)
    at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:580)
    at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:218)
    at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:419)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)

    at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:316)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:240)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

    The problem: the job requested 1536 MB of memory, but the configured maximum was only 1024 MB. This maximum is the largest amount of memory YARN will grant to a container running a MapReduce task; in this deployment the NodeManager allowed at most 1024 MB per container, hence the error. (The 1536 MB request comes from the MapReduce ApplicationMaster container, whose default size, yarn.app.mapreduce.am.resource.mb, is 1536 MB.)

    The fix (both steps must be applied on every node). Step one: edit YARN's configuration file yarn-site.xml and change two properties:


    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>2000</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>2000</value>
    </property>
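
    Because every node must see the new values, the edited yarn-site.xml has to be copied to all containers and YARN restarted. Below is a minimal sketch of that step, assuming passwordless SSH between the containers (which the Hadoop cluster needs anyway), that HADOOP_HOME points to the same install path everywhere, and that the standard sbin scripts are used; the root user and paths are assumptions, so adjust them to your image:

    # Push the edited yarn-site.xml from the master to every slave container
    # (user and config path are assumptions; adjust to your image).
    for n in hadoop-slave1 hadoop-slave2 hadoop-slave3; do
        scp $HADOOP_HOME/etc/hadoop/yarn-site.xml root@$n:$HADOOP_HOME/etc/hadoop/
    done

    # Restart YARN so the ResourceManager and all NodeManagers reload the new limits.
    $HADOOP_HOME/sbin/stop-yarn.sh
    $HADOOP_HOME/sbin/start-yarn.sh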

    Step two: the memory a NodeManager can give to MapReduce tasks comes from the Docker container it runs in, and the containers in turn draw on the virtual machine that hosts Docker. All containers on a VM share the VM's memory, so if the VM only has, say, 1 GB, no container can get more than 1 GB and MapReduce jobs will inevitably fail. Give the VM as much memory as you can when creating it; I allocated 4 GB. When creating the containers, use docker run options to limit each container's resources. For my Hadoop cluster I created one master and three slaves with the following commands:


    docker run -dit --name hadoop-master -m 2G --net shadownet --ip 172.18.0.10 -h hadoop-master -P -p 50070:50070 -p 8088:8088 lijinze9456yy000/ubuntu14-hadoop:base
    docker run -dit --name hadoop-slave1 -m 2G --net shadownet --ip 172.18.0.11 -h hadoop-slave1 lijinze9456yy000/ubuntu14-hadoop:base
    docker run -dit --name hadoop-slave2 -m 2G --net shadownet --ip 172.18.0.12 -h hadoop-slave2 lijinze9456yy000/ubuntu14-hadoop:base
    docker run -dit --name hadoop-slave3 -m 2G --net shadownet --ip 172.18.0.13 -h hadoop-slave3 lijinze9456yy000/ubuntu14-hadoop:base

    I gave each of the four containers 2 GB of memory; by default Docker sets the swap limit equal to the memory limit, so each container also gets 2 GB of swap.
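
    To confirm the limits actually took effect, you can inspect the containers afterwards; this is just a sanity check using standard Docker commands:

    # Print the memory limit Docker applied to each container, in bytes
    # (a 2G limit shows up as 2147483648).
    docker inspect -f '{{.HostConfig.Memory}}' hadoop-master hadoop-slave1 hadoop-slave2 hadoop-slave3

    # One-shot view of current usage against the 2 GB cap.
    docker stats --no-stream hadoop-master hadoop-slave1 hadoop-slave2 hadoop-slave3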

    After these two steps, my WordCount example ran perfectly!

    Because the WordCount program in the examples jar runs with a single reducer, all the word counts end up in one part-r-0000X file. If you write your own WordCount (or override the job configuration) to use multiple reducers, the map output will be partitioned across several part-r-0000X files; see the sketch below.
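
    You do not necessarily have to rewrite the program, though: the examples WordCount parses Hadoop's generic options, so the reducer count can be overridden on the command line. A minimal sketch, assuming a standard Hadoop 2.x layout and hypothetical HDFS paths /input and /output:

    # Run the stock WordCount with three reducers; the output directory will
    # then contain part-r-00000 through part-r-00002.
    hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
        wordcount -D mapreduce.job.reduces=3 /input /output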

  • Original post: https://www.cnblogs.com/lijinze-tsinghua/p/8331523.html