• Error log from running select count(*) in spark-sql with --master yarn (client mode)


    Start the Hive metastore: hive --service metastore
    Start HDFS and YARN
    [root@bigdatastorm bin]# ./spark-sql --master yarn --deploy-mode client --driver-memory 512m --executor-memory 512m --total-executor-cores 1
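
    Note: --total-executor-cores only applies to the standalone and Mesos masters; on YARN it is silently ignored, and executor sizing comes from --num-executors and --executor-cores instead. A hedged equivalent of the command above written with the flags YARN actually reads, assuming one single-core executor was the intent:

    # Sketch only: same resources as above, expressed with the YARN flags.
    # The single 512 MB executor matches what the log below shows being requested.
    ./spark-sql --master yarn --deploy-mode client \
      --driver-memory 512m \
      --executor-memory 512m \
      --num-executors 1 \
      --executor-cores 1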



    spark-sql> select count(*);

    Log output:

    =======================================================

    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/opt/hadoop-2.5.1/nm-local-dir/usercache/root/filecache/11/spark-assembly-1.6.0-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
    16/09/05 21:59:45 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
    16/09/05 21:59:50 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1473082245027_0003_000001
    16/09/05 21:59:53 INFO spark.SecurityManager: Changing view acls to: root
    16/09/05 21:59:53 INFO spark.SecurityManager: Changing modify acls to: root
    16/09/05 21:59:53 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
    16/09/05 21:59:55 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
    16/09/05 21:59:55 INFO yarn.ApplicationMaster: Driver now available: 192.168.184.188:45475
    16/09/05 21:59:57 INFO yarn.ApplicationMaster$AMEndpoint: Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> bigdatastorm, PROXY_URI_BASES -> http://bigdatastorm:8088/proxy/application_1473082245027_0003),/proxy/application_1473082245027_0003)
    16/09/05 21:59:57 INFO yarn.YarnRMClient: Registering the ApplicationMaster
    16/09/05 21:59:58 INFO yarn.YarnAllocator: Will request 1 executor containers, each with 1 cores and 896 MB memory including 384 MB overhead
    16/09/05 21:59:58 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:896, vCores:1>)
    16/09/05 21:59:58 INFO yarn.ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
    16/09/05 21:59:58 INFO impl.AMRMClientImpl: Received new token for : bigdatastorm:59055
    16/09/05 21:59:58 INFO yarn.YarnAllocator: Launching container container_1473082245027_0003_01_000002 for on host bigdatastorm
    16/09/05 21:59:58 INFO yarn.YarnAllocator: Launching ExecutorRunnable. driverUrl: spark://CoarseGrainedScheduler@192.168.184.188:45475,  executorHostname: bigdatastorm
    16/09/05 21:59:58 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
    16/09/05 21:59:58 INFO yarn.ExecutorRunnable: Starting Executor Container
    16/09/05 21:59:58 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
    16/09/05 21:59:58 INFO yarn.ExecutorRunnable: Setting up ContainerLaunchContext
    16/09/05 21:59:58 INFO yarn.ExecutorRunnable: Preparing Local resources
    16/09/05 21:59:59 INFO yarn.ExecutorRunnable: Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "mycluster" port: -1 file: "/user/root/.sparkStaging/application_1473082245027_0003/spark-assembly-1.6.0-hadoop2.6.0.jar" } size: 187548272 timestamp: 1473083954792 type: FILE visibility: PRIVATE)
    16/09/05 21:59:59 INFO yarn.ExecutorRunnable: 
    ===============================================================================
    YARN executor launch context:
      env:
        CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
        SPARK_LOG_URL_STDERR -> http://bigdatastorm:8042/node/containerlogs/container_1473082245027_0003_01_000002/root/stderr?start=-4096
        SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1473082245027_0003
        SPARK_YARN_CACHE_FILES_FILE_SIZES -> 187548272
        SPARK_USER -> root
        SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE
        SPARK_YARN_MODE -> true
        SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1473083954792
        SPARK_LOG_URL_STDOUT -> http://bigdatastorm:8042/node/containerlogs/container_1473082245027_0003_01_000002/root/stdout?start=-4096
        SPARK_YARN_CACHE_FILES -> hdfs://mycluster/user/root/.sparkStaging/application_1473082245027_0003/spark-assembly-1.6.0-hadoop2.6.0.jar#__spark__.jar
    
      command:
        {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms512m -Xmx512m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=45475' -Dspark.yarn.app.container.log.dir=<LOG_DIR> -XX:MaxPermSize=256m org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@192.168.184.188:45475 --executor-id 1 --hostname bigdatastorm --cores 1 --app-id application_1473082245027_0003 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
    ===============================================================================
          
    16/09/05 21:59:59 INFO impl.ContainerManagementProtocolProxy: Opening proxy : bigdatastorm:59055
    16/09/05 22:09:45 INFO yarn.YarnAllocator: Completed container container_1473082245027_0003_01_000002 on host: bigdatastorm (state: COMPLETE, exit status: 50)
    16/09/05 22:09:45 WARN yarn.YarnAllocator: Container marked as failed: container_1473082245027_0003_01_000002 on host: bigdatastorm. Exit status: 50. Diagnostics: Exception from container-launch: ExitCodeException exitCode=50: 
    ExitCodeException exitCode=50: 
    	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    	at org.apache.hadoop.util.Shell.run(Shell.java:455)
    	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
    	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    	at java.lang.Thread.run(Thread.java:745)
    
    
    Container exited with a non-zero exit code 50
    
    16/09/05 22:09:48 INFO yarn.YarnAllocator: Will request 1 executor containers, each with 1 cores and 896 MB memory including 384 MB overhead
    16/09/05 22:09:48 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:896, vCores:1>)
    16/09/05 22:09:49 INFO yarn.YarnAllocator: Launching container container_1473082245027_0003_01_000003 for on host bigdatastorm
    16/09/05 22:09:49 INFO yarn.YarnAllocator: Launching ExecutorRunnable. driverUrl: spark://CoarseGrainedScheduler@192.168.184.188:45475,  executorHostname: bigdatastorm
    16/09/05 22:09:49 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
    16/09/05 22:09:49 INFO yarn.ExecutorRunnable: Starting Executor Container
    16/09/05 22:09:49 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
    16/09/05 22:09:49 INFO yarn.ExecutorRunnable: Setting up ContainerLaunchContext
    16/09/05 22:09:49 INFO yarn.ExecutorRunnable: Preparing Local resources
    16/09/05 22:09:49 INFO yarn.ExecutorRunnable: Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "mycluster" port: -1 file: "/user/root/.sparkStaging/application_1473082245027_0003/spark-assembly-1.6.0-hadoop2.6.0.jar" } size: 187548272 timestamp: 1473083954792 type: FILE visibility: PRIVATE)
    16/09/05 22:09:49 INFO yarn.ExecutorRunnable: 
    ===============================================================================
    YARN executor launch context:
      env:
        CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
        SPARK_LOG_URL_STDERR -> http://bigdatastorm:8042/node/containerlogs/container_1473082245027_0003_01_000003/root/stderr?start=-4096
        SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1473082245027_0003
        SPARK_YARN_CACHE_FILES_FILE_SIZES -> 187548272
        SPARK_USER -> root
        SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE
        SPARK_YARN_MODE -> true
        SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1473083954792
        SPARK_LOG_URL_STDOUT -> http://bigdatastorm:8042/node/containerlogs/container_1473082245027_0003_01_000003/root/stdout?start=-4096
        SPARK_YARN_CACHE_FILES -> hdfs://mycluster/user/root/.sparkStaging/application_1473082245027_0003/spark-assembly-1.6.0-hadoop2.6.0.jar#__spark__.jar
    
      command:
        {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms512m -Xmx512m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=45475' -Dspark.yarn.app.container.log.dir=<LOG_DIR> -XX:MaxPermSize=256m org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@192.168.184.188:45475 --executor-id 2 --hostname bigdatastorm --cores 1 --app-id application_1473082245027_0003 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
    ===============================================================================
          
    16/09/05 22:09:49 INFO impl.ContainerManagementProtocolProxy: Opening proxy : bigdatastorm:59055
    16/09/05 22:12:14 INFO yarn.YarnAllocator: Completed container container_1473082245027_0003_01_000003 on host: bigdatastorm (state: COMPLETE, exit status: 1)
    16/09/05 22:12:14 WARN yarn.YarnAllocator: Container marked as failed: container_1473082245027_0003_01_000003 on host: bigdatastorm. Exit status: 1. Diagnostics: Exception from container-launch: ExitCodeException exitCode=1: 
    ExitCodeException exitCode=1: 
    	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    	at org.apache.hadoop.util.Shell.run(Shell.java:455)
    	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
    	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    	at java.lang.Thread.run(Thread.java:745)
    
    
    Container exited with a non-zero exit code 1
    
    16/09/05 22:12:17 INFO yarn.YarnAllocator: Will request 1 executor containers, each with 1 cores and 896 MB memory including 384 MB overhead
    16/09/05 22:12:17 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:896, vCores:1>)
    16/09/05 22:12:18 INFO impl.AMRMClientImpl: Received new token for : bigdatahadoop:39892
    16/09/05 22:12:18 INFO yarn.YarnAllocator: Launching container container_1473082245027_0003_01_000004 for on host bigdatahadoop
    16/09/05 22:12:18 INFO yarn.YarnAllocator: Launching ExecutorRunnable. driverUrl: spark://CoarseGrainedScheduler@192.168.184.188:45475,  executorHostname: bigdatahadoop
    16/09/05 22:12:18 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
    16/09/05 22:12:18 INFO yarn.ExecutorRunnable: Starting Executor Container
    16/09/05 22:12:18 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
    16/09/05 22:12:18 INFO yarn.ExecutorRunnable: Setting up ContainerLaunchContext
    16/09/05 22:12:18 INFO yarn.ExecutorRunnable: Preparing Local resources
    16/09/05 22:12:18 INFO yarn.ExecutorRunnable: Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "mycluster" port: -1 file: "/user/root/.sparkStaging/application_1473082245027_0003/spark-assembly-1.6.0-hadoop2.6.0.jar" } size: 187548272 timestamp: 1473083954792 type: FILE visibility: PRIVATE)
    16/09/05 22:12:18 INFO yarn.ExecutorRunnable: 
    ===============================================================================
    YARN executor launch context:
      env:
        CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
        SPARK_LOG_URL_STDERR -> http://bigdatahadoop:8042/node/containerlogs/container_1473082245027_0003_01_000004/root/stderr?start=-4096
        SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1473082245027_0003
        SPARK_YARN_CACHE_FILES_FILE_SIZES -> 187548272
        SPARK_USER -> root
        SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE
        SPARK_YARN_MODE -> true
        SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1473083954792
        SPARK_LOG_URL_STDOUT -> http://bigdatahadoop:8042/node/containerlogs/container_1473082245027_0003_01_000004/root/stdout?start=-4096
        SPARK_YARN_CACHE_FILES -> hdfs://mycluster/user/root/.sparkStaging/application_1473082245027_0003/spark-assembly-1.6.0-hadoop2.6.0.jar#__spark__.jar
    
      command:
        {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms512m -Xmx512m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=45475' -Dspark.yarn.app.container.log.dir=<LOG_DIR> -XX:MaxPermSize=256m org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://CoarseGrainedScheduler@192.168.184.188:45475 --executor-id 3 --hostname bigdatahadoop --cores 1 --app-id application_1473082245027_0003 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
    ===============================================================================
          
    16/09/05 22:12:18 INFO impl.ContainerManagementProtocolProxy: Opening proxy : bigdatahadoop:39892
    16/09/05 22:14:36 INFO yarn.YarnAllocator: Completed container container_1473082245027_0003_01_000004 on host: bigdatahadoop (state: COMPLETE, exit status: 1)
    16/09/05 22:14:36 WARN yarn.YarnAllocator: Container marked as failed: container_1473082245027_0003_01_000004 on host: bigdatahadoop. Exit status: 1. Diagnostics: Exception from container-launch: ExitCodeException exitCode=1: 
    ExitCodeException exitCode=1: 
    	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
    	at org.apache.hadoop.util.Shell.run(Shell.java:455)
    	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
    	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
    	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
    	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    	at java.lang.Thread.run(Thread.java:745)
    
    
    Container exited with a non-zero exit code 1
    
    16/09/05 22:14:39 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 11, (reason: Max number of executor failures (3) reached)
    16/09/05 22:14:42 INFO util.ShutdownHookManager: Shutdown hook called
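
    Reading the log: the first executor container dies with exit status 50 (Spark's UNCAUGHT_EXCEPTION exit code), the two retries die with exit status 1, and once three executors have failed the ApplicationMaster aborts with exit code 11 ("Max number of executor failures (3) reached"). The 896 MB container size is simply the 512 MB executor heap plus the default 384 MB YARN memory overhead. With a heap that small, memory pressure during the count is a plausible culprit, but the actual stack trace lives in the executor stderr (the SPARK_LOG_URL_STDERR links above), not in this ApplicationMaster log. A minimal sketch of the follow-up, assuming log aggregation is enabled and the cluster can spare the extra memory; the sizes are illustrative assumptions, not tuned values:

    # First pull the failed containers' logs to see the real exception:
    yarn logs -applicationId application_1473082245027_0003

    # Then retry with a larger heap and explicit overhead (values are assumptions):
    ./spark-sql --master yarn --deploy-mode client \
      --driver-memory 1g \
      --executor-memory 1g \
      --num-executors 1 \
      --executor-cores 1 \
      --conf spark.yarn.executor.memoryOverhead=512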

  • Original post: https://www.cnblogs.com/TendToBigData/p/10501379.html