• Problem with Hive INSERT


    Problem:

    0: jdbc:hive2://node01:10000> insert into table t_user values(1,"allen");
    INFO  : Compiling command(queryId=hadoop_20210929010636_df171036-0e2c-4ad6-97f7-2648779abcb6): insert into table t_user values(1,"allen")
    INFO  : Concurrency mode is disabled, not creating a lock manager
    INFO  : Semantic Analysis Completed (retrial = false)
    INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_col0, type:int, comment:null), FieldSchema(name:_col1, type:varchar(255), comment:null)], properties:null)
    INFO  : Completed compiling command(queryId=hadoop_20210929010636_df171036-0e2c-4ad6-97f7-2648779abcb6); Time taken: 4.465 seconds
    INFO  : Concurrency mode is disabled, not creating a lock manager
    INFO  : Executing command(queryId=hadoop_20210929010636_df171036-0e2c-4ad6-97f7-2648779abcb6): insert into table t_user values(1,"allen")
    WARN  : Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
    INFO  : Query ID = hadoop_20210929010636_df171036-0e2c-4ad6-97f7-2648779abcb6
    INFO  : Total jobs = 3
    INFO  : Launching Job 1 out of 3
    INFO  : Starting task [Stage-1:MAPRED] in serial mode
    INFO  : Number of reduce tasks determined at compile time: 1
    INFO  : In order to change the average load for a reducer (in bytes):
    INFO  :   set hive.exec.reducers.bytes.per.reducer=<number>
    INFO  : In order to limit the maximum number of reducers:
    INFO  :   set hive.exec.reducers.max=<number>
    INFO  : In order to set a constant number of reducers:
    INFO  :   set mapreduce.job.reduces=<number>
    INFO  : Cleaning up the staging area /tmp/hadoop-yarn/staging/root/.staging/job_1632847660307_0001
    ERROR : Job Submission failed with exception 'org.apache.hadoop.security.AccessControlException(Permission denied: user=root, access=EXECUTE, inode="/tmp":hadoop:supergroup:drwxrwx---
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:315)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:242)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1852)
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1836)
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1781)
            at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setPermission(FSDirAttrOp.java:63)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1930)
            at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:888)
            at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:527)
            at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
            at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
            at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
            at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
            at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:422)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
            at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
    )'
    org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=EXECUTE, inode="/tmp":hadoop:supergroup:drwxrwx---
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:315)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:242)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1852)
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1836)
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1781)
            at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setPermission(FSDirAttrOp.java:63)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1930)
            at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:888)
            at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:527)
            at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
            at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
            at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
            at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
            at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:422)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
            at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
    
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
            at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
            at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
            at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1900)
            at org.apache.hadoop.hdfs.DistributedFileSystem$35.doCall(DistributedFileSystem.java:1744)
            at org.apache.hadoop.hdfs.DistributedFileSystem$35.doCall(DistributedFileSystem.java:1741)
            at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
            at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:1754)
            at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:676)
            at org.apache.hadoop.mapreduce.JobResourceUploader.mkdirs(JobResourceUploader.java:660)
            at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResourcesInternal(JobResourceUploader.java:174)
            at org.apache.hadoop.mapreduce.JobResourceUploader.uploadResources(JobResourceUploader.java:135)
            at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:99)
            at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:194)
            at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
            at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:422)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
            at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
            at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
            at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:422)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
            at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
            at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
            at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:423)
            at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:149)
            at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
            at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
            at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2664)
            at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2335)
            at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2011)
            at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1709)
            at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1703)
            at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157)
            at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:224)
            at org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:87)
            at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:316)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:422)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
            at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:329)
            at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
            at java.util.concurrent.FutureTask.run(FutureTask.java:266)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
            at java.lang.Thread.run(Thread.java:748)
    Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=EXECUTE, inode="/tmp":hadoop:supergroup:drwxrwx---
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:315)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:242)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1852)
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1836)
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1781)
            at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setPermission(FSDirAttrOp.java:63)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1930)
            at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:888)
            at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:527)
            at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
            at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
            at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
            at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
            at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:422)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
            at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
    
            at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1562)
            at org.apache.hadoop.ipc.Client.call(Client.java:1508)
            at org.apache.hadoop.ipc.Client.call(Client.java:1405)
            at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
            at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
            at com.sun.proxy.$Proxy29.setPermission(Unknown Source)
            at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:434)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:498)
            at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
            at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
            at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
            at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
            at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
            at com.sun.proxy.$Proxy30.setPermission(Unknown Source)
            at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1898)
            ... 45 more
    
    ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Permission denied: user=root, access=EXECUTE, inode="/tmp":hadoop:supergroup:drwxrwx---
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:315)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:242)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1852)
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1836)
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1781)
            at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setPermission(FSDirAttrOp.java:63)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1930)
            at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:888)
            at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:527)
            at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
            at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
            at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
            at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
            at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:422)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
            at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943)
    
    INFO  : Completed executing command(queryId=hadoop_20210929010636_df171036-0e2c-4ad6-97f7-2648779abcb6); Time taken: 1.242 seconds
    INFO  : Concurrency mode is disabled, not creating a lock manager
    Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. Permission denied: user=root, access=EXECUTE, inode="/tmp":hadoop:supergroup:drwxrwx---
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:315)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:242)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1852)
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1836)
            at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1781)
            at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setPermission(FSDirAttrOp.java:63)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1930)
            at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:888)
            at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:527)
            at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
            at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
            at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
            at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1015)
            at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:943)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:422)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
            at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2943) (state=08S01,code=1)
     

    Cause: the root user lacks permission on the HDFS /tmp directory. As the error shows, /tmp is owned by hadoop:supergroup with mode drwxrwx---, so root (not in supergroup) has no EXECUTE access and the MapReduce job staging directory cannot be created.
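
    Before changing anything, you can confirm the directory's owner and mode from the command line (a quick check; this assumes the HDFS client is configured on this node):

     hadoop fs -ls -d /tmp
     # Should show something like: drwxrwx---  - hadoop supergroup  ...  /tmp
     # which matches the inode reported in the error above.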

    Solution:

    Open up the permissions on /tmp (grant full access), then restart HDFS:

     hadoop fs -chmod 777 /tmp
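
    Note that chmod 777 opens /tmp to every user. If you would rather not loosen the permissions, an alternative (a sketch, assuming HiveServer2 runs queries as the connecting user and the hadoop account is acceptable in your setup) is to connect through Beeline as the hadoop user that already owns /tmp:

     beeline -u jdbc:hive2://node01:10000 -n hadoop
     # then re-run: insert into table t_user values(1,"allen");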

     

     Reference:

    https://blog.csdn.net/qq_42784606/article/details/111029620

    
    
  • Original source: https://www.cnblogs.com/j-y-s/p/15350916.html