z_activeagent
z_weekstore
z_wstest
zz_monthstore
69 row(s) in 0.5240 seconds

=> ["KYLIN_02YJ3NJ7PW", "KYLIN_09YWHIEKLK", "KYLIN_0MGAS628J4", "KYLIN_14TNZQ6DAE", "KYLIN_18RFD2M9KD", "KYLIN_2XRK8PNLEQ", "KYLIN_3LTL19CGNB", "KYLIN_3ZKJLGTHF8", "KYLIN_42XI0TZP1C", "KYLIN_4E5NFSJ1TT", "KYLIN_58UKG45HEY", "KYLIN_5G0HVH6WO5", "KYLIN_5QPRJFEZBF", "KYLIN_6FQUGB2GG3", "KYLIN_7SUORPO9V7", "KYLIN_7U9AFHQ366", "KYLIN_8HZPENNGB7", "KYLIN_8QHHG5GOC2", "KYLIN_A6GK4REWOD", "KYLIN_B8DDOOO8IV", "KYLIN_DZ79IEFUEY", "KYLIN_ETYEUFI2WO", "KYLIN_FBIWHPCOHY", "KYLIN_FTW1CM9P5H", "KYLIN_G2NWQRQAFV", "KYLIN_G6F41QVAI6", "KYLIN_ICBULW0MPB", "KYLIN_JT60DBVXUI", "KYLIN_KQWB6I426I", "KYLIN_L4KS1QHSDH", "KYLIN_PI3B0F23NU", "KYLIN_PQOMA1EHZP", "KYLIN_QJGQJYRATQ", "KYLIN_QTIZGJEGBW", "KYLIN_S3IK6XW0SZ", "KYLIN_U6LWJPGXE5", "KYLIN_UBI758YA36", "KYLIN_UNN1IGQT4C", "KYLIN_VCA1XQU0JX", "KYLIN_VIM0C9L5WE", "KYLIN_YR4QE1XYAK", "KYLIN_YYIJGRXIBU", "KYLIN_Z4Y323QGUL", "KYLIN_ZF7D6S12IO", "KYLIN_ZU7XCILCF7", "addrent_info", "comm_info", "flushrent_info", "flushsale_info", "hot_info", "house:renthouse_test", "kylin_metadata", "kylin_metadata_acl", "kylin_metadata_user", "promotion_info", "rank_count_rent", "rank_count_sale", "salehousedeal", "sitehot_info", "stork_info", "storkrent_info", "storksale_info", "t_book", "test", "testinsert", "z_activeagent", "z_weekstore", "z_wstest", "zz_monthstore"]

hbase(main):002:0> count 'z_activeagent'

ERROR: HRegionInfo was null in z_activeagent, row=keyvalues={z_activeagent,,1514755370829.d36b716be958e98c9ae41bd4d7a46caa./info:seqnumDuringOpen/1514858426304/Put/vlen=8/seqid=0, z_activeagent,,1514755370829.d36b716be958e98c9ae41bd4d7a46caa./info:server/1514858426304/Put/vlen=13/seqid=0, z_activeagent,,1514755370829.d36b716be958e98c9ae41bd4d7a46caa./info:serverstartcode/1514858426304/Put/vlen=8/seqid=0}

Here is some help for this command:
Count the number of rows in a table.  Return value is the number of rows.
This operation may take a LONG time (Run '$HADOOP_HOME/bin/hadoop jar
hbase.jar rowcount' to run a counting mapreduce job). Current count is shown
every 1000 rows by default. Count interval may be optionally specified. Scan
caching is enabled on count scans by default. Default cache size is 10 rows.
If your rows are small in size, you may want to increase this parameter.

Examples:
Neither scan nor count works on this table, and even copying the table fails.
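The error message already points at the root cause: the region's row in hbase:meta still has server and seqnum cells, but no info:regioninfo cell. Before reaching for heavier tools, the meta entry can be inspected directly from the HBase shell. The sketch below is not part of the original session; the region name is copied from the error above.

# Diagnostic sketch (assumes an HBase 1.x shell session):
# list the meta rows belonging to z_activeagent
scan 'hbase:meta', {STARTROW => 'z_activeagent', LIMIT => 20}

# look at the single damaged region row reported by the error; a healthy row
# would also contain an info:regioninfo cell
get 'hbase:meta', 'z_activeagent,,1514755370829.d36b716be958e98c9ae41bd4d7a46caa.'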
[root@master109 ~]# hbase org.apache.hadoop.hbase.mapreduce.CopyTable --new.name=z_activeagent1 z_activeagent
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop2.6/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2018-01-04 13:01:34,102 WARN [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
2018-01-04 13:01:34,185 INFO [main] Configuration.deprecation: dfs.permissions is deprecated. Instead, use dfs.permissions.enabled
2018-01-04 13:01:34,186 INFO [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2018-01-04 13:01:34,198 WARN [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
2018-01-04 13:01:34,822 INFO [main] Configuration.deprecation: dfs.permissions is deprecated. Instead, use dfs.permissions.enabled
2018-01-04 13:01:34,824 INFO [main] Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2018-01-04 13:01:35,059 INFO [main] client.ConfiguredRMFailoverProxyProvider: Failing over to rm2
2018-01-04 13:01:36,957 INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x706a4369 connecting to ZooKeeper ensemble=master109:2181,spider1:2181,node110:2181,node111:2181,node112:2181
2018-01-04 13:01:36,966 INFO [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-cdh5.7.0--1, built on 03/23/2016 18:31 GMT
2018-01-04 13:01:36,966 INFO [main] zookeeper.ZooKeeper: Client environment:host.name=master109
2018-01-04 13:01:36,966 INFO [main] zookeeper.ZooKeeper: Client environment:java.version=1.7.0_80
2018-01-04 13:01:36,966 INFO [main] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
2018-01-04 13:01:36,966 INFO [main] zookeeper.ZooKeeper: Client environment:java.home=/opt/hadoop/jdk1.7/jre
...
2018-01-04 13:01:51,142 INFO [main] mapreduce.Job: Task Id : attempt_1507608682095_49226_m_000001_0, Status : FAILED
Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 771 actions: z_activeagent1: 771 times,
        at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:247)
        at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:227)
        at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1758)
        at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:240)
        at org.apache.hadoop.hbase.client.BufferedMutatorImpl.mutate(BufferedMutatorImpl.java:146)
        at org.apache.hadoop.hbase.client.BufferedMutatorImpl.mutate(BufferedMutatorImpl.java:113)
        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:138)
        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:94)
        at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
        at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
        at org.apache.hadoop.hbase.mapreduce.Import$Importer.processKV(Import.java:209)
        at org.apache.hadoop.hbase.mapreduce.Import$Importer.writeResult(Import.java:164)
        at org.apache.hadoop.hbase.mapreduce.Import$Importer.map(Import.java:149)
        at org.apache.hadoop.hbase.mapreduce.Import$Importer.map(Import.java:132)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
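The map tasks die while flushing their writes to the copy target z_activeagent1 until retries are exhausted; whether that is because the destination table was never created (CopyTable does not create it for you) or because of the same meta damage, the truncated job output does not say. If more detail is needed, the full task logs can usually be pulled with the YARN CLI; this is a sketch, not part of the original session, and it assumes log aggregation is enabled. The application id is derived from the attempt id printed above.

# Sketch: fetch the aggregated logs of the failed CopyTable job
yarn logs -applicationId application_1507608682095_49226 | less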
With no other option, I turned to the hbck tool.
[root@master109 ~]# cd /opt/hadoop/hbase
[root@master109 hbase]# ls
bin conf hbase-annotations hbase-client hbase-external-blockcache hbase-it hbase-protocol hbase-server hbase-testing-util LEGAL logs README.txt
CHANGES.txt dev-support hbase-assembly hbase-common hbase-hadoop2-compat hbase-prefix-tree hbase-resource-bundle hbase-shaded hbase-thrift lib NOTICE.txt src
cloudera docs hbase-checkstyle hbase-examples hbase-hadoop-compat hbase-procedure hbase-rest hbase-shell hbase-webapps LICENSE.txt pom.xml
[root@master109 hbase]# cd bin/
[root@master109 bin]# ls
draining_servers.rb hbase hbase-common.sh hbase-daemon.sh hirb.rb master-backup.sh region_status.rb shutdown_regionserver.rb stop-hbase.cmd thread-pool.rb
get-active-master.rb hbase-cleanup.sh hbase-config.cmd hbase-daemons.sh local-master-backup.sh region_mover.rb replication start-hbase.cmd stop-hbase.sh zookeepers.sh
graceful_stop.sh hbase.cmd hbase-config.sh hbase-jruby local-regionservers.sh regionservers.sh rolling-restart.sh start-hbase.sh test
[root@master109 bin]# hbase hbck
...
Table KYLIN_0MGAS628J4 is okay.      Number of regions: 1   Deployed on: node111,60020,1514865706402
Table KYLIN_QJGQJYRATQ is okay.      Number of regions: 1   Deployed on: node112,60020,1514865710244
Table zz_monthstore is okay.         Number of regions: 5   Deployed on: node110,60020,1514865702994 node111,60020,1514865706402
Table KYLIN_S3IK6XW0SZ is okay.      Number of regions: 1   Deployed on: node110,60020,1514865702994
Table flushsale_info is okay.        Number of regions: 1   Deployed on: node111,60020,1514865706402
Table stork_info is okay.            Number of regions: 1   Deployed on: node110,60020,1514865702994
Table storksale_info is okay.        Number of regions: 1   Deployed on: node111,60020,1514865706402
Table KYLIN_PI3B0F23NU is okay.      Number of regions: 1   Deployed on: node112,60020,1514865710244
Table z_activeagent is inconsistent. Number of regions: 12  Deployed on: node110,60020,1514865702994 node111,60020,1514865706402 node112,60020,1514865710244
Table rank_count_rent is okay.       Number of regions: 2   Deployed on: node111,60020,1514865706402
Table rank_count_sale is okay.       Number of regions: 2   Deployed on: node111,60020,1514865706402
Table KYLIN_ICBULW0MPB is okay.      Number of regions: 1   Deployed on: node112,60020,1514865710244
Table KYLIN_FTW1CM9P5H is okay.      Number of regions: 1   Deployed on: node112,60020,1514865710244
Table kylin_metadata is okay.        Number of regions: 1   Deployed on: node110,60020,1514865702994
Table kylin_metadata_user is okay.   Number of regions: 1   Deployed on: node112,60020,1514865710244
Table KYLIN_18RFD2M9KD is okay.      Number of regions: 1   Deployed on: node112,60020,1514865710244
Table KYLIN_5QPRJFEZBF is okay.      Number of regions: 1   Deployed on: node111,60020,1514865706402
Table KYLIN_09YWHIEKLK is okay.      Number of regions: 1   Deployed on: node110,60020,1514865702994
Table KYLIN_ETYEUFI2WO is okay.      Number of regions: 1   Deployed on: node111,60020,1514865706402
Table KYLIN_58UKG45HEY is okay.      Number of regions: 1   Deployed on: node111,60020,1514865706402
Table salehousedeal is okay.         Number of regions: 11  Deployed on: node110,60020,1514865702994 node111,60020,1514865706402 node112,60020,1514865710244
5 inconsistencies detected.
Status: INCONSISTENT
2018-01-04 14:56:10,889 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
2018-01-04 14:56:10,889 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x560adb43048004b
2018-01-04 14:56:10,894 INFO [main] zookeeper.ZooKeeper: Session: 0x560adb43048004b closed
2018-01-04 14:56:10,894 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down
hbck reports one inconsistent table, and it is exactly the table I had been operating on: z_activeagent.
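To avoid wading through output for every table, hbck can also be pointed at just the damaged table and asked for per-region details. This is a sketch rather than part of the original session, but both the -details flag and the trailing table-name argument are standard hbck options.

# Sketch: check only the affected table and print per-region details
hbase hbck -details z_activeagent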
Start the repair.
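For reference, these are the hbck repair options used below, plus the combined shortcut; the descriptions are paraphrased from hbck's own help, and only -fixMeta and -fixAssignments actually appear in this session.

hbase hbck -fixMeta           # repair hbase:meta entries, assuming the region info on HDFS is good
hbase hbck -fixAssignments    # repair regions that are unassigned, multiply assigned, or wrongly assigned
hbase hbck -repair            # shortcut that enables -fixMeta, -fixAssignments and several HDFS-level -fix* options; use with care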
[root@master109 bin]# hbase hbck -fixMeta
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop2.6/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2018-01-04 15:00:24,039 INFO [main] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
HBaseFsck command line options: -fixMeta
2018-01-04 15:00:24,213 WARN [main] util.HBaseFsck: Got AccessDeniedException when preCheckPermission
org.apache.hadoop.hbase.security.AccessDeniedException: Permission denied: action=WRITE path=hdfs://gagcluster/hbase/MasterProcWALs user=root
        at org.apache.hadoop.hbase.util.FSUtils.checkAccess(FSUtils.java:1797)
        at org.apache.hadoop.hbase.util.HBaseFsck.preCheckPermission(HBaseFsck.java:1929)
        at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:4731)
        at org.apache.hadoop.hbase.util.HBaseFsck$HBaseFsckTool.run(HBaseFsck.java:4559)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:4547)
Current user root does not have write perms to hdfs://gagcluster/hbase/MasterProcWALs. Please rerun hbck as hdfs user hadoop

[root@master109 bin]# su hadoop
[hadoop@master109 bin]$ hbase hbck
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop2.6/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2018-01-04 15:01:19,187 INFO [main] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
...
Table z_activeagent is inconsistent. Number of regions: 12  Deployed on: node110,60020,1514865702994 node111,60020,1514865706402 node112,60020,1514865710244
Table rank_count_rent is okay.       Number of regions: 2   Deployed on: node111,60020,1514865706402
Table rank_count_sale is okay.       Number of regions: 2   Deployed on: node111,60020,1514865706402
Table KYLIN_ICBULW0MPB is okay.      Number of regions: 1   Deployed on: node112,60020,1514865710244
Table KYLIN_FTW1CM9P5H is okay.      Number of regions: 1   Deployed on: node112,60020,1514865710244
Table kylin_metadata is okay.        Number of regions: 1   Deployed on: node110,60020,1514865702994
Table kylin_metadata_user is okay.   Number of regions: 1   Deployed on: node112,60020,1514865710244
Table KYLIN_18RFD2M9KD is okay.      Number of regions: 1   Deployed on: node112,60020,1514865710244
Table KYLIN_5QPRJFEZBF is okay.      Number of regions: 1   Deployed on: node111,60020,1514865706402
Table KYLIN_09YWHIEKLK is okay.      Number of regions: 1   Deployed on: node110,60020,1514865702994
Table KYLIN_ETYEUFI2WO is okay.      Number of regions: 1   Deployed on: node111,60020,1514865706402
Table KYLIN_58UKG45HEY is okay.      Number of regions: 1   Deployed on: node111,60020,1514865706402
Table salehousedeal is okay.         Number of regions: 11  Deployed on: node110,60020,1514865702994 node111,60020,1514865706402 node112,60020,1514865710244
5 inconsistencies detected.
Status: INCONSISTENT
2018-01-04 15:01:24,000 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
2018-01-04 15:01:24,000 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x460b03fafb6004b
2018-01-04 15:01:24,003 INFO [main] zookeeper.ZooKeeper: Session: 0x460b03fafb6004b closed
2018-01-04 15:01:24,003 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down

[hadoop@master109 bin]$ hbase hbck -fixMeta
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop2.6/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2018-01-04 15:01:40,613 INFO [main] Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
HBaseFsck command line options: -fixMeta
2018-01-04 15:01:40,791 WARN [main] util.HBaseFsck: Got AccessDeniedException when preCheckPermission
org.apache.hadoop.hbase.security.AccessDeniedException: Permission denied: action=WRITE path=hdfs://gagcluster/hbase/.hbase-snapshot user=hadoop
        at org.apache.hadoop.hbase.util.FSUtils.checkAccess(FSUtils.java:1797)
        at org.apache.hadoop.hbase.util.HBaseFsck.preCheckPermission(HBaseFsck.java:1929)
        at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:4731)
        at org.apache.hadoop.hbase.util.HBaseFsck$HBaseFsckTool.run(HBaseFsck.java:4559)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:4547)
Current user hadoop does not have write perms to hdfs://gagcluster/hbase/.hbase-snapshot. Please rerun hbck as hdfs user root

[hadoop@master109 bin]$ hadoop fs -chown -R hadoop /hbase
After hitting one permission mismatch after another, I bit the bullet and simply changed ownership of everything under /hbase to the hadoop user.
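hbck's preCheckPermission requires the invoking user to have write access to everything under the HBase root directory, which is why it kept bouncing between root and hadoop. A gentler alternative to the recursive chown would be to check who actually owns those directories and run hbck as that user. The following is only a sketch, not part of the original session; the paths are the ones named in the errors above.

# Sketch: inspect ownership of the HBase root directory and the two paths hbck complained about
hadoop fs -ls /hbase
hadoop fs -ls -d /hbase/MasterProcWALs /hbase/.hbase-snapshot
# then run hbck as the owning user, e.g. sudo -u <owner> hbase hbck -fixMeta   (<owner> is a placeholder)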
Run the repair again.
[hadoop@master109 bin]$ hbase hbck -fixAssignments
...
Table z_activeagent is okay.  Number of regions: 14  Deployed on: node110,60020,1514865702994 node111,60020,1514865706402 node112,60020,1514865710244
Status: OK
2018-01-04 15:05:28,012 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
2018-01-04 15:05:28,012 INFO [main] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1608aaaedc50081
2018-01-04 15:05:28,017 INFO [main] zookeeper.ZooKeeper: Session: 0x1608aaaedc50081 closed
2018-01-04 15:05:28,017 INFO [main-EventThread] zookeeper.ClientCnxn: EventThread shut down

[hadoop@master109 bin]$ hbase hbck -fixMeta
Reassign the regions once more, and that's it!
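As a final check (not part of the original transcript), it is worth confirming that hbck now reports Status: OK for the table and that the shell operations that started all this work again; per the help text quoted at the top, a larger scan cache makes count noticeably faster.

# Sketch: verify the repair from the command line
hbase hbck z_activeagent                 # should report Status: OK for this table

# and from the HBase shell:
count 'z_activeagent', CACHE => 1000     # the count that originally failed, with a bigger scan cache
scan 'z_activeagent', {LIMIT => 10}      # spot-check that scans work again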