[Dev] Connecting Eclipse to Hadoop 2.2.0 on Windows 7


    Prerequisites:
    Make sure the Hadoop 2.2.0 cluster is up and running.
    1. Create a Maven project in Eclipse and edit its pom.xml as follows:
     <dependencies>
            <dependency>
                <groupId>org.apache.hbase</groupId>
                <artifactId>hbase-client</artifactId>
                <version>0.96.2-hadoop2</version>
            </dependency>
            <dependency>
                <groupId>org.apache.hadoop</groupId>
                <artifactId>hadoop-hdfs</artifactId>
                <version>2.2.0</version>
            </dependency>
            <dependency>
                <groupId>jdk.tools</groupId>
                <artifactId>jdk.tools</artifactId>
                <version>1.7</version>
                <scope>system</scope>
                <systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
            </dependency>
        </dependencies>

    2. Copy a log4j.properties into the src/main/resources root directory so detailed logs are visible via log4j:

    log4j.rootLogger=debug, stdout, R
    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern=%5p - %m%n
    log4j.appender.R=org.apache.log4j.RollingFileAppender
    log4j.appender.R.File=firestorm.log
    log4j.appender.R.MaxFileSize=100KB
    log4j.appender.R.MaxBackupIndex=1
    log4j.appender.R.layout=org.apache.log4j.PatternLayout
    log4j.appender.R.layout.ConversionPattern=%p %t %c - %m%n
    log4j.logger.com.codefutures=DEBUG

    3. Copy in a runnable Hadoop program. I used an HdfsDAO class, to first verify that basic HDFS operations work:

    package com.bigdata.hdfs;
    
    import java.io.IOException;
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;
    import org.apache.hadoop.mapred.JobConf;
    
    public class HdfsDAO {
        private static final String HDFS = "hdfs://192.168.11.37:9000/";
        
        
        private String hdfsPath;
        private Configuration conf;

        public HdfsDAO(Configuration conf) {
            this(HDFS, conf);
        }

        public HdfsDAO(String hdfs, Configuration conf) {
            this.hdfsPath = hdfs;
            this.conf = conf;
        }

        public static void main(String[] args) throws IOException {
            JobConf conf = config();
            HdfsDAO hdfs = new HdfsDAO(conf);
    //        hdfs.copyFile("datafile/item.csv", "/tmp/new");
    //        hdfs.ls("/tmp/new");
            hdfs.ls("/");
        }        
        
        public static JobConf config(){
            JobConf conf = new JobConf(HdfsDAO.class);
            conf.setJobName("HdfsDAO");
            conf.addResource("classpath:/hadoop/core-site.xml");
            conf.addResource("classpath:/hadoop/hdfs-site.xml");
            conf.addResource("classpath:/hadoop/mapred-site.xml");
            return conf;
        }
        
        public void mkdirs(String folder) throws IOException {
            Path path = new Path(folder);
            FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
            if (!fs.exists(path)) {
                fs.mkdirs(path);
                System.out.println("Create: " + folder);
            }
            fs.close();
        }
        public void rmr(String folder) throws IOException {
            Path path = new Path(folder);
            FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
            fs.deleteOnExit(path);
            System.out.println("Delete: " + folder);
            fs.close();
        }
        public void ls(String folder) throws IOException {
            Path path = new Path(folder);
            FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
            FileStatus[] list = fs.listStatus(path);
            System.out.println("ls: " + folder);
            System.out.println("==========================================================");
            for (FileStatus f : list) {
            System.out.printf("name: %s, folder: %s, size: %d\n", f.getPath(), f.isDir(), f.getLen());
            }
            System.out.println("==========================================================");
            fs.close();
        }
        public void createFile(String file, String content) throws IOException {
            FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
            byte[] buff = content.getBytes();
            FSDataOutputStream os = null;
            try {
                os = fs.create(new Path(file));
                os.write(buff, 0, buff.length);
                System.out.println("Create: " + file);
            } finally {
                if (os != null)
                    os.close();
            }
            fs.close();
        }
        public void copyFile(String local, String remote) throws IOException {
            FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
            fs.copyFromLocalFile(new Path(local), new Path(remote));
            System.out.println("copy from: " + local + " to " + remote);
            fs.close();
        }
        public void download(String remote, String local) throws IOException {
            Path path = new Path(remote);
            FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
            fs.copyToLocalFile(path, new Path(local));
            System.out.println("download: from " + remote + " to " + local);
            fs.close();
        }
        
        public void cat(String remoteFile) throws IOException {
            Path path = new Path(remoteFile);
            FileSystem fs = FileSystem.get(URI.create(hdfsPath), conf);
            FSDataInputStream fsdis = null;
            System.out.println("cat: " + remoteFile);
            try {
                fsdis = fs.open(path);
                IOUtils.copyBytes(fsdis, System.out, 4096, false);
            } finally {
                IOUtils.closeStream(fsdis);
                fs.close();
            }
        }
        public void location() throws IOException {
            // String folder = hdfsPath + "create/";
            // String file = "t2.txt";
            // FileSystem fs = FileSystem.get(URI.create(hdfsPath), new
            // Configuration());
            // FileStatus f = fs.getFileStatus(new Path(folder + file));
            // BlockLocation[] list = fs.getFileBlockLocations(f, 0, f.getLen());
            //
            // System.out.println("File Location: " + folder + file);
            // for (BlockLocation bl : list) {
            // String[] hosts = bl.getHosts();
            // for (String host : hosts) {
            // System.out.println("host:" + host);
            // }
            // }
            // fs.close();
        }
    
    }
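The config() method above loads Hadoop configuration files from a hadoop/ folder on the classpath. For reference, a minimal core-site.xml might look like the sketch below; the address echoes the HDFS constant above, but the log output later in this post shows a different cluster address (192.168.0.160:8020), so use whatever your NameNode actually listens on:

```xml
<!-- src/main/resources/hadoop/core-site.xml (sketch; match your cluster) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.11.37:9000</value>
  </property>
</configuration>
```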

    4. Run HdfsDAO.

    It fails with the following errors:
    java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
        at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:225)
        at org.apache.hadoop.util.Shell.<clinit>(Shell.java:250)
        at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
        at org.apache.hadoop.conf.Configuration.getTrimmedStrings(Configuration.java:1546)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:519)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2433)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
        at HdfsDAO.copyFile(HdfsDAO.java:94)
        at HdfsDAO.main(HdfsDAO.java:34)
    ERROR - Failed to locate the winutils binary in the hadoop binary path
    java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
        at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
        at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
        at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
        at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
        at org.apache.hadoop.conf.Configuration.getTrimmedStrings(Configuration.java:1546)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:519)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2433)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
        at HdfsDAO.copyFile(HdfsDAO.java:94)
        at HdfsDAO.main(HdfsDAO.java:34)
    

      

    Solution:
    First, set the HADOOP_HOME environment variable in Windows 7, pointing to the Hadoop 2.2.0 root directory on the Windows machine.
    Then download the Hadoop 2.2.0 bin package from https://github.com/srccodes/hadoop-common-2.2.0-bin , which contains winutils.exe,
    and copy it into %HADOOP_HOME%\bin.
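As an alternative to the system-wide environment variable, hadoop.home.dir can also be set programmatically: the Hadoop Shell class checks this system property before falling back to HADOOP_HOME. A minimal sketch (the path below is hypothetical; point it at your local Hadoop 2.2.0 root, and set it before any org.apache.hadoop class is loaded):

```java
public class HadoopHomeSetup {
    public static void main(String[] args) {
        // Must run before the first Hadoop class initializes;
        // the path is hypothetical -- use your local Hadoop root.
        System.setProperty("hadoop.home.dir", "D:/hadoop-2.2.0");
        System.out.println(System.getProperty("hadoop.home.dir")); // prints "D:/hadoop-2.2.0"
    }
}
```

This only silences the HADOOP_HOME check; winutils.exe still has to exist under that directory's bin folder.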
    5. Restart Eclipse so the new environment variable is picked up, then run again; this time it executes successfully:
    DEBUG - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, about=, value=[Rate of successful kerberos logins and latency (milliseconds)], always=false, type=DEFAULT, sampleName=Ops)
    DEBUG - field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, about=, value=[Rate of failed kerberos logins and latency (milliseconds)], always=false, type=DEFAULT, sampleName=Ops)
    DEBUG - UgiMetrics, User and group related metrics
    DEBUG - Kerberos krb5 configuration not found, setting default realm to empty
    DEBUG -  Creating new Groups object
    DEBUG - Trying to load the custom-built native-hadoop library...
    DEBUG - Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
    DEBUG - java.library.path=D:\Program Files\Java\jre7\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common;C:\Program Files (x86)\Intel\iCLS Client;C:\Program Files\Intel\iCLS Client;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files (x86)\Intel\OpenCL SDK\3.0\bin\x86;C:\Program Files (x86)\Intel\OpenCL SDK\3.0\bin\x64;D:\Program Files\Java\jdk1.7.0_40\bin;D:\Program Files\Java\jdk1.7.0_40\jre\bin;D:\Program Files\TortoiseSVN\bin;D:\Program Files (x86)\ant\bin;D:\Program Files\maven3\bin;.
     WARN - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    DEBUG - Falling back to shell based
    DEBUG - Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
    DEBUG - Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000
    DEBUG - hadoop login
    DEBUG - hadoop login commit
    DEBUG - using local user:NTUserPrincipal: Administrator
    DEBUG - UGI loginUser:Administrator (auth:SIMPLE)
    DEBUG - dfs.client.use.legacy.blockreader.local = false
    DEBUG - dfs.client.read.shortcircuit = false
    DEBUG - dfs.client.domain.socket.data.traffic = false
    DEBUG - dfs.domain.socket.path = 
    DEBUG - StartupProgress, NameNode startup progress
    DEBUG - multipleLinearRandomRetry = null
    DEBUG - rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@1afde4a3
    DEBUG - Both short-circuit local reads and UNIX domain socket are disabled.
    DEBUG - The ping interval is 60000 ms.
    DEBUG - Connecting to /192.168.0.160:8020
    DEBUG - IPC Client (60133785) connection to /192.168.0.160:8020 from Administrator: starting, having connections 1
    DEBUG - IPC Client (60133785) connection to /192.168.0.160:8020 from Administrator sending #0
    DEBUG - IPC Client (60133785) connection to /192.168.0.160:8020 from Administrator got value #0
    DEBUG - Call: getListing took 136ms
    ls: /
    ==========================================================
    name: hdfs://192.168.0.160:8020/data, folder: true, size: 0
    name: hdfs://192.168.0.160:8020/fulong, folder: true, size: 0
    name: hdfs://192.168.0.160:8020/test, folder: true, size: 0
    name: hdfs://192.168.0.160:8020/tmp, folder: true, size: 0
    name: hdfs://192.168.0.160:8020/user, folder: true, size: 0
    name: hdfs://192.168.0.160:8020/workspace, folder: true, size: 0
    ==========================================================
    DEBUG - Stopping client
    DEBUG - IPC Client (60133785) connection to /192.168.0.160:8020 from Administrator: closed
    DEBUG - IPC Client (60133785) connection to /192.168.0.160:8020 from Administrator: stopped, remaining connections 0

    6. Test the HBase code:

    package com.rockontrol.tryhbase;
    import static org.junit.Assert.*;
    
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Random;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.HTablePool;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.PrefixFilter;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.log4j.Logger;
    import org.junit.Test;
    
    public class TestUseHbase {
       
       private String table = "Tenant";
       private String cfs[] = {"i"};
       private final int availableProcessors = 
             Runtime.getRuntime().availableProcessors();
       private ExecutorService exec = 
             Executors.newFixedThreadPool(availableProcessors*2);
       private Random rnd = new Random();
       private final int ROW_KEY_LEN = Bytes.SIZEOF_LONG + Bytes.SIZEOF_BYTE;
       private final String colId = "id";
       private final String colStat = "stat";
       private final String colCert = "cert";
       
       private Configuration conf;
       private HTablePool pool;
       
       private static final Logger logger = 
             Logger.getLogger(TestUseHbase.class);
       
       public TestUseHbase() throws Exception {
          conf = new Configuration();
          conf.addResource(getHbaseConfStream());
          pool = new HTablePool(conf, 1000);
       }
       
       @Test
       public void testSetupTable() throws Exception {
          
          HBaseAdmin admin = new HBaseAdmin(conf);
          
          try {
             if (admin.tableExists(table)) {
                logger.info("table already exists!");
             } else {
                HTableDescriptor tableDesc =new HTableDescriptor(table);
                for(String cf : cfs) {
                   tableDesc.addFamily(new HColumnDescriptor(cf));
                }
                admin.createTable(tableDesc);
                logger.info("table created!");
             }
          } finally {
             admin.close();
          }
       }
       
       @Test
       public void testPuts() throws Exception {
    
          final HTable htable = (HTable) pool.getTable(table);
          // put random id
          for (int i = 0; i < 10; i++) {
             exec.execute(new Runnable() {
                @Override
                public void run() {
                   long authId = getAuthId();
                   byte[] rowkey = createRowKey(authId, (byte) 0);
                   htable.setAutoFlush(false);
                   Put put = new Put(rowkey);
                   put.add(cfs[0].getBytes(), colId.getBytes(), String.valueOf(authId)
                         .getBytes());
                   put.add(cfs[0].getBytes(), colStat.getBytes(), String.valueOf(0)
                         .getBytes());
                   try {
                      synchronized (htable) {
                         htable.put(put);
                         htable.flushCommits();
                      }
                   } catch (IOException e) {
                      logger.error("ERROR: insert authId=" + authId, e);
                   }
                }
             });
          }
          exec.shutdown();
    
          int count = 0;
          while (!exec.awaitTermination(10, TimeUnit.SECONDS)) {
             logger.warn("thread pool is still running");
             if (count++ > 3) {
                logger.warn("force to exit anyway...");
                break;
             }
          }
    
          htable.flushCommits();
          pool.putTable(htable);
    
       }
       
       @Test
       public void testFullScan() throws Exception {
    
          HTable htable = (HTable) pool.getTable(table);
          long last = Long.MIN_VALUE;
    
          ResultScanner rs = htable.getScanner(new Scan());
          long authId = 0;
          byte stat = 0;
          String strAuthId;
          String strStat;
          for (Result r : rs) {
    
             KeyValue kvId = r.getColumnLatest(cfs[0].getBytes(), colId.getBytes());
             KeyValue kvStat = r.getColumnLatest(cfs[0].getBytes(), colStat.getBytes());
             if (kvId != null && kvStat != null) {
                strAuthId = new String(kvId.getValue());
                strStat = new String(kvStat.getValue());
                authId = getIdByRowKey(kvId.getKey());
                stat = getStatByRowKey(kvId.getKey());
             assertTrue("last=" + last + 
                   ", current=" + authId, authId >= last); // rows come back in ascending row-key order
                last = authId;
                logger.info("authId=" + authId + ", stat=" + stat + ", value=[" + strAuthId
                      + ", " + strStat + "]");
             } else {
                for (KeyValue kv : r.raw()) {
                   authId = getIdByRowKey(kv.getKey());
                   stat = getStatByRowKey(kv.getKey());
                assertTrue("last=" + last + 
                      ", current=" + authId, authId >= last); // rows come back in ascending row-key order
                   last = authId;
                   logger.info("authId=" + authId + ", stat=" + stat);
                   logger.info(new String(kv.getValue()));
                }
             }  
          }
    
       }
       
       @Test
       public void testSpecScan() throws Exception {
          HTable htable = (HTable) pool.getTable(table);
          long specId = getAuthId();
          byte[] rowkey = createRowKey(specId, (byte) 0);
          
          // PUT
          Put put = new Put(rowkey);
          put.add(cfs[0].getBytes(), colId.getBytes(), String.valueOf(specId)
                .getBytes());
          put.add(cfs[0].getBytes(), colStat.getBytes(), String.valueOf(0)
                .getBytes());
          htable.put(put);
          
          // Get with rowkey
          Get scan = new Get(rowkey);
          Result r = htable.get(scan);
          assertTrue(!r.isEmpty());
          long id = 0;
          for(KeyValue kv : r.raw()) {
             id = getIdByRowKey(kv.getKey());
             assertEquals(specId, id);
             logger.info("authId=" + id + 
                   ", cf=" + new String(kv.getFamily()) +
                   ", key=" + new String(kv.getQualifier()) +
                   ", value=" + new String(kv.getValue()));
          }
          
       // Put with the same specId but a different stat byte and a different column
          rowkey = createRowKey(specId, (byte)1);
          put = new Put(rowkey);
          put.add(cfs[0].getBytes(), colCert.getBytes(), "xyz".getBytes());
          htable.put(put);
          
          // Get with rowkey prefix
          Scan s = new Scan();
          s.setFilter(new PrefixFilter(createRowKeyPrefix(specId)));
          ResultScanner rs = htable.getScanner(s);
          for(Result ret : rs) {
             String strk = new String(ret.getRow());
             logger.info("ret=" + strk);
             for(KeyValue kv : ret.raw()) {
                id = getIdByRowKey(kv.getKey());
                assertEquals(specId, id);
                logger.info("authId=" + id + 
                      ", stat=" + getStatByRowKey(kv.getKey()) +
                      ", cf=" + new String(kv.getFamily()) +
                      ", key=" + new String(kv.getQualifier()) +
                      ", value=" + new String(kv.getValue()));
             }
          }
          
          // Get with start and end row
          s = new Scan();
          s.setStartRow(createRowKeyPrefix(specId));
          s.setStopRow(createRowKeyPrefix(specId+1));
          rs = htable.getScanner(s);
          for(Result ret : rs) {
             String strk = new String(ret.getRow());
             logger.info("ret=" + strk);
             for(KeyValue kv : ret.raw()) {
                id = getIdByRowKey(kv.getKey());
                assertEquals(specId, id);
                logger.info("authId=" + id + 
                      ", stat=" + getStatByRowKey(kv.getKey()) +
                      ", cf=" + new String(kv.getFamily()) +
                      ", key=" + new String(kv.getQualifier()) +
                      ", value=" + new String(kv.getValue()));
             }
          }
       }
       
       @Test
       public void testBytesConv() throws Exception {
          long a = 120;
          byte s = 0;
          byte[] data = new byte[9];
          int off = Bytes.putLong(data, 0, a);
          Bytes.putByte(data, off, s);
          long b = Bytes.toLong(data);
          byte t = data[8];
          assertEquals(a, b);
          assertEquals(s, t);
       }
       
       private byte[] createRowKey(long authId, byte stat) {
          byte[] rowkey = new byte[ROW_KEY_LEN];
          int off = Bytes.putLong(rowkey, 0, authId);
          Bytes.putByte(rowkey, off, stat);
          return rowkey;
       }
       
       private byte[] createRowKeyPrefix(long authId) {
          byte[] prefix = new byte[Bytes.SIZEOF_LONG];
          Bytes.putLong(prefix, 0, authId);
          return prefix;
       }
       
       private long getIdByRowKey(byte[] rowkey) {
          // HACK
          return Bytes.toLong(rowkey, Bytes.SIZEOF_SHORT);
       }
       
       private byte getStatByRowKey(byte[] rowkey) {
          // HACK
          return rowkey[Bytes.SIZEOF_SHORT + ROW_KEY_LEN - 1];
       }
       
       private long getAuthId() {
          long authId = rnd.nextLong();
          authId = authId > 0 ? authId : -authId;
          return authId;
       }
    
       private static InputStream getHbaseConfStream() throws Exception {
          return TestUseHbase.class.getClassLoader().getResourceAsStream("hbase-site.xml");
       }
    
    }
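The "HACK" in getIdByRowKey/getStatByRowKey works because KeyValue.getKey() returns the serialized key, which prefixes the row bytes with a 2-byte row length. A plain-Java sketch of that layout (simplified: a real HBase key continues with family, qualifier, timestamp and type after the row):

```java
import java.nio.ByteBuffer;

public class RowKeyLayoutDemo {
    // Mirrors createRowKey above: 8-byte big-endian authId followed by 1 stat byte.
    static byte[] createRowKey(long authId, byte stat) {
        return ByteBuffer.allocate(9).putLong(authId).put(stat).array();
    }

    public static void main(String[] args) {
        byte[] rowkey = createRowKey(120L, (byte) 1);

        // Simulate the start of a serialized KeyValue key: a 2-byte row length,
        // then the row bytes (the real key carries more fields after the row).
        byte[] key = ByteBuffer.allocate(2 + rowkey.length)
                .putShort((short) rowkey.length)
                .put(rowkey)
                .array();

        // This is why the parsers skip Bytes.SIZEOF_SHORT (2) bytes first.
        long decodedId = ByteBuffer.wrap(key, 2, 8).getLong();
        byte decodedStat = key[2 + 9 - 1];
        System.out.println("authId=" + decodedId + ", stat=" + decodedStat);
        // prints "authId=120, stat=1"
    }
}
```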

    7. It executes successfully:

    2014-09-04 12:52:29  [ main:0 ] - [ DEBUG ]  field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, about=, value=[Rate of successful kerberos logins and latency (milliseconds)], always=false, type=DEFAULT, sampleName=Ops)
    2014-09-04 12:52:29  [ main:10 ] - [ DEBUG ]  field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, about=, value=[Rate of failed kerberos logins and latency (milliseconds)], always=false, type=DEFAULT, sampleName=Ops)
    2014-09-04 12:52:29  [ main:11 ] - [ DEBUG ]  UgiMetrics, User and group related metrics
    2014-09-04 12:52:29  [ main:253 ] - [ DEBUG ]  Kerberos krb5 configuration not found, setting default realm to empty
    2014-09-04 12:52:29  [ main:257 ] - [ DEBUG ]   Creating new Groups object
    2014-09-04 12:52:29  [ main:259 ] - [ DEBUG ]  Trying to load the custom-built native-hadoop library...
    2014-09-04 12:52:29  [ main:261 ] - [ DEBUG ]  Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
    2014-09-04 12:52:29  [ main:261 ] - [ DEBUG ]  java.library.path=D:\Program Files\Java\jdk1.7.0_45\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;D:\Perl64\bin;D:\Perl64\site\bin;C:\Program Files (x86)\Common Files\NetSarang;C:\Program Files (x86)\Intel\iCLS Client;C:\Program Files\Intel\iCLS Client;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0;C:\Program Files (x86)\Intel\OpenCL SDK\2.0\bin\x86;C:\Program Files (x86)\Intel\OpenCL SDK\2.0\bin\x64;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;D:\java\maven/bin;D:\Program Files\Java\jdk1.8.0/bin;d:\Program Files (x86)\YYXT\AudioEditorOCX;D:\Program Files\MySQL\MySQL Server 5.5\bin;D:\hadoop\apache-ant-1.9.3\bin;D:\Program Files\nodejs;D:\Program Files\TortoiseSVN\bin;D:\Perl64\bin;D:\Perl64\site\bin;C:\Users\lenovo\AppData\Roaming\npm;.
    2014-09-04 12:52:29  [ main:261 ] - [ WARN ]  Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    2014-09-04 12:52:29  [ main:261 ] - [ DEBUG ]  Falling back to shell based
    2014-09-04 12:52:29  [ main:262 ] - [ DEBUG ]  Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
    2014-09-04 12:52:29  [ main:262 ] - [ DEBUG ]  Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000
    2014-09-04 12:52:29  [ main:268 ] - [ DEBUG ]  hadoop login
    2014-09-04 12:52:29  [ main:268 ] - [ DEBUG ]  hadoop login commit
    2014-09-04 12:52:29  [ main:274 ] - [ DEBUG ]  using local user:NTUserPrincipal: lenovo
    2014-09-04 12:52:29  [ main:276 ] - [ DEBUG ]  UGI loginUser:lenovo (auth:SIMPLE)
    2014-09-04 12:52:29  [ main:418 ] - [ INFO ]  Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
    2014-09-04 12:52:29  [ main:418 ] - [ INFO ]  Client environment:host.name=qiaokai-PC
    2014-09-04 12:52:29  [ main:418 ] - [ INFO ]  Client environment:java.version=1.7.0_45
    2014-09-04 12:52:29  [ main:418 ] - [ INFO ]  Client environment:java.vendor=Oracle Corporation
    2014-09-04 12:52:29  [ main:418 ] - [ INFO ]  Client environment:java.home=D:\Program Files\Java\jdk1.7.0_45\jre
    2014-09-04 12:52:29  [ main:418 ] - [ INFO ]  Client environment:java.class.path=D:\Users\lenovo\koalaSP\dbhbase\target\classes;D:\java\mavenRepo\org\apache\hbase\hbase-client\0.96.2-hadoop2\hbase-client-0.96.2-hadoop2.jar;D:\java\mavenRepo\org\apache\hbase\hbase-common\0.96.2-hadoop2\hbase-common-0.96.2-hadoop2.jar;D:\java\mavenRepo\commons-collections\commons-collections\3.2.1\commons-collections-3.2.1.jar;D:\java\mavenRepo\org\apache\hbase\hbase-protocol\0.96.2-hadoop2\hbase-protocol-0.96.2-hadoop2.jar;D:\java\mavenRepo\commons-codec\commons-codec\1.7\commons-codec-1.7.jar;D:\java\mavenRepo\commons-io\commons-io\2.4\commons-io-2.4.jar;D:\java\mavenRepo\commons-lang\commons-lang\2.6\commons-lang-2.6.jar;D:\java\mavenRepo\commons-logging\commons-logging\1.1.1\commons-logging-1.1.1.jar;D:\java\mavenRepo\com\google\guava\guava\12.0.1\guava-12.0.1.jar;D:\java\mavenRepo\com\google\code\findbugs\jsr305\1.3.9\jsr305-1.3.9.jar;D:\java\mavenRepo\com\google\protobuf\protobuf-java\2.5.0\protobuf-java-2.5.0.jar;D:\java\mavenRepo\io\netty\netty\3.6.6.Final\netty-3.6.6.Final.jar;D:\java\mavenRepo\org\apache\zookeeper\zookeeper\3.4.5\zookeeper-3.4.5.jar;D:\java\mavenRepo\org\slf4j\slf4j-api\1.6.1\slf4j-api-1.6.1.jar;D:\java\mavenRepo\org\slf4j\slf4j-log4j12\1.6.1\slf4j-log4j12-1.6.1.jar;D:\java\mavenRepo\org\cloudera\htrace\htrace-core\2.04\htrace-core-2.04.jar;D:\java\mavenRepo\org\codehaus\jackson\jackson-mapper-asl\1.8.8\jackson-mapper-asl-1.8.8.jar;D:\java\mavenRepo\org\apache\hadoop\hadoop-common\2.2.0\hadoop-common-2.2.0.jar;D:\java\mavenRepo\org\apache\commons\commons-math\2.1\commons-math-2.1.jar;D:\java\mavenRepo\commons-httpclient\commons-httpclient\3.1\commons-httpclient-3.1.jar;D:\java\mavenRepo\commons-net\commons-net\3.1\commons-net-3.1.jar;D:\java\mavenRepo\com\sun\jersey\jersey-json\1.9\jersey-json-1.9.jar;D:\java\mavenRepo\org\codehaus\jettison\jettison\1.1\jettison-1.1.jar;D:\java\mavenRepo\stax\stax-api\1.0.1\stax-api-1.0.1.jar;D:\java\mavenRepo\com\sun\xml\bind\jaxb-impl\2.2.3-1\jaxb-impl-2.2.3-1.jar;D:\java\mavenRepo\javax\xml\bind\jaxb-api\2.2.2\jaxb-api-2.2.2.jar;D:\java\mavenRepo\javax\activation\activation\1.1\activation-1.1.jar;D:\java\mavenRepo\org\codehaus\jackson\jackson-jaxrs\1.8.3\jackson-jaxrs-1.8.3.jar;D:\java\mavenRepo\org\codehaus\jackson\jackson-xc\1.8.3\jackson-xc-1.8.3.jar;D:\java\mavenRepo\commons-el\commons-el\1.0\commons-el-1.0.jar;D:\java\mavenRepo\net\java\dev\jets3t\jets3t\0.6.1\jets3t-0.6.1.jar;D:\java\mavenRepo\commons-configuration\commons-configuration\1.6\commons-configuration-1.6.jar;D:\java\mavenRepo\commons-digester\commons-digester\1.8\commons-digester-1.8.jar;D:\java\mavenRepo\commons-beanutils\commons-beanutils\1.7.0\commons-beanutils-1.7.0.jar;D:\java\mavenRepo\commons-beanutils\commons-beanutils-core\1.8.0\commons-beanutils-core-1.8.0.jar;D:\java\mavenRepo\org\apache\avro\avro\1.7.4\avro-1.7.4.jar;D:\java\mavenRepo\com\thoughtworks\paranamer\paranamer\2.3\paranamer-2.3.jar;D:\java\mavenRepo\org\xerial\snappy\snappy-java\1.0.4.1\snappy-java-1.0.4.1.jar;D:\java\mavenRepo\com\jcraft\jsch\0.1.42\jsch-0.1.42.jar;D:\java\mavenRepo\org\apache\commons\commons-compress\1.4.1\commons-compress-1.4.1.jar;D:\java\mavenRepo\org\tukaani\xz\1.0\xz-1.0.jar;D:\java\mavenRepo\org\apache\hadoop\hadoop-auth\2.2.0\hadoop-auth-2.2.0.jar;D:\java\mavenRepo\org\apache\hadoop\hadoop-mapreduce-client-core\2.2.0\hadoop-mapreduce-client-core-2.2.0.jar;D:\java\mavenRepo\org\apache\hadoop\hadoop-yarn-common\2.2.0\hadoop-yarn-common-2.2.0.jar;D:\java\mavenRepo\org\apache\hadoop\hadoop-yarn-api\2.2.0\hadoop-yarn-api-2.2.0.jar;D:\java\mavenRepo\com\google\inject\guice\3.0\guice-3.0.jar;D:\java\mavenRepo\javax\inject\javax.inject\1\javax.inject-1.jar;D:\java\mavenRepo\aopalliance\aopalliance\1.0\aopalliance-1.0.jar;D:\java\mavenRepo\com\sun\jersey\contribs\jersey-guice\1.9\jersey-guice-1.9.jar;D:\java\mavenRepo\com\google\inject\extensions\guice-servlet\3.0\guice-servlet-3.0.jar;D:\java\mavenRepo\org\apache\hadoop\hadoop-annotations\2.2.0\hadoop-annotations-2.2.0.jar;D:\java\mavenRepo\com\github\stephenc\findbugs\findbugs-annotations\1.3.9-1\findbugs-annotations-1.3.9-1.jar;D:\java\mavenRepo\junit\junit\4.11\junit-4.11.jar;D:\java\mavenRepo\org\hamcrest\hamcrest-core\1.3\hamcrest-core-1.3.jar;D:\java\mavenRepo\org\apache\hadoop\hadoop-hdfs\2.2.0\hadoop-hdfs-2.2.0.jar;D:\java\mavenRepo\org\mortbay\jetty\jetty\6.1.26\jetty-6.1.26.jar;D:\java\mavenRepo\org\mortbay\jetty\jetty-util\6.1.26\jetty-util-6.1.26.jar;D:\java\mavenRepo\com\sun\jersey\jersey-core\1.9\jersey-core-1.9.jar;D:\java\mavenRepo\com\sun\jersey\jersey-server\1.9\jersey-server-1.9.jar;D:\java\mavenRepo\asm\asm\3.1\asm-3.1.jar;D:\java\mavenRepo\commons-cli\commons-cli\1.2\commons-cli-1.2.jar;D:\java\mavenRepo\commons-daemon\commons-daemon\1.0.13\commons-daemon-1.0.13.jar;D:\java\mavenRepo\javax\servlet\jsp\jsp-api\2.1\jsp-api-2.1.jar;D:\java\mavenRepo\log4j\log4j\1.2.17\log4j-1.2.17.jar;D:\java\mavenRepo\javax\servlet\servlet-api\2.5\servlet-api-2.5.jar;D:\java\mavenRepo\org\codehaus\jackson\jackson-core-asl\1.8.8\jackson-core-asl-1.8.8.jar;D:\java\mavenRepo\tomcat\jasper-runtime\5.5.23\jasper-runtime-5.5.23.jar;D:\java\mavenRepo\xmlenc\xmlenc\0.52\xmlenc-0.52.jar;D:\Program Files\Java\jdk1.8.0\lib\tools.jar
    2014-09-04 12:52:29  [ main:418 ] - [ INFO ]  Client environment:java.library.path=D:\Program Files\Java\jdk1.7.0_45\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;D:\Perl64\bin;D:\Perl64\site\bin;C:\Program Files (x86)\Common Files\NetSarang;C:\Program Files (x86)\Intel\iCLS Client;C:\Program Files\Intel\iCLS Client;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0;C:\Program Files (x86)\Intel\OpenCL SDK\2.0\bin\x86;C:\Program Files (x86)\Intel\OpenCL SDK\2.0\bin\x64;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\IPT;D:\java\maven/bin;D:\Program Files\Java\jdk1.8.0/bin;d:\Program Files (x86)\YYXT\AudioEditorOCX;D:\Program Files\MySQL\MySQL Server 5.5\bin;D:\hadoop\apache-ant-1.9.3\bin;D:\Program Files\nodejs;D:\Program Files\TortoiseSVN\bin;D:\Perl64\bin;D:\Perl64\site\bin;C:\Users\lenovo\AppData\Roaming\npm;.
    2014-09-04 12:52:29  [ main:418 ] - [ INFO ]  Client environment:java.io.tmpdir=C:\Users\lenovo\AppData\Local\Temp
    2014-09-04 12:52:29  [ main:419 ] - [ INFO ]  Client environment:java.compiler=<NA>
    2014-09-04 12:52:29  [ main:419 ] - [ INFO ]  Client environment:os.name=Windows 7
    2014-09-04 12:52:29  [ main:419 ] - [ INFO ]  Client environment:os.arch=amd64
    2014-09-04 12:52:29  [ main:419 ] - [ INFO ]  Client environment:os.version=6.1
    2014-09-04 12:52:29  [ main:419 ] - [ INFO ]  Client environment:user.name=lenovo
    2014-09-04 12:52:29  [ main:419 ] - [ INFO ]  Client environment:user.home=C:\Users\lenovo
    2014-09-04 12:52:29  [ main:419 ] - [ INFO ]  Client environment:user.dir=D:\Users\lenovo\koalaSP\dbhbase
    2014-09-04 12:52:29  [ main:420 ] - [ INFO ]  Initiating client connection, connectString=compute1:2181 sessionTimeout=90000 watcher=hconnection-0xda5a705, quorum=compute1:2181, baseZNode=/hbase
    2014-09-04 12:52:29  [ main:425 ] - [ DEBUG ]  zookeeper.disableAutoWatchReset is false
    2014-09-04 12:52:29  [ main:457 ] - [ INFO ]  Process identifier=hconnection-0xda5a705 connecting to ZooKeeper ensemble=compute1:2181
    2014-09-04 12:52:29  [ main-SendThread(compute1:2181):458 ] - [ INFO ]  Opening socket connection to server compute1/192.168.11.39:2181. Will not attempt to authenticate using SASL (unknown error)
    2014-09-04 12:52:29  [ main-SendThread(compute1:2181):459 ] - [ INFO ]  Socket connection established to compute1/192.168.11.39:2181, initiating session
    2014-09-04 12:52:29  [ main-SendThread(compute1:2181):461 ] - [ DEBUG ]  Session establishment request sent on compute1/192.168.11.39:2181
    2014-09-04 12:52:29  [ main-SendThread(compute1:2181):472 ] - [ INFO ]  Session establishment complete on server compute1/192.168.11.39:2181, sessionid = 0x2483a55a18c0013, negotiated timeout = 40000
    2014-09-04 12:52:29  [ main-EventThread:474 ] - [ DEBUG ]  hconnection-0xda5a705, quorum=compute1:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null
    2014-09-04 12:52:29  [ main-EventThread:476 ] - [ DEBUG ]  hconnection-0xda5a705-0x2483a55a18c0013 connected
    2014-09-04 12:52:29  [ main-SendThread(compute1:2181):477 ] - [ DEBUG ]  Reading reply sessionid:0x2483a55a18c0013, packet:: clientPath:null serverPath:null finished:false header:: 1,3  replyHeader:: 1,4294967438,0  request:: '/hbase/hbaseid,F  response:: s{4294967310,4294967310,1409728069737,1409728069737,0,0,0,0,67,0,4294967310} 
    2014-09-04 12:52:29  [ main-SendThread(compute1:2181):480 ] - [ DEBUG ]  Reading reply sessionid:0x2483a55a18c0013, packet:: clientPath:null serverPath:null finished:false header:: 2,4  replyHeader:: 2,4294967438,0  request:: '/hbase/hbaseid,F  response:: #ffffffff000146d61737465723a363030303033ffffff8036ffffff94ffffffabcfffffffd6750425546a2430643537303664662d653431622d343332382d383833342d356533643531363362393736,s{4294967310,4294967310,1409728069737,1409728069737,0,0,0,0,67,0,4294967310} 
    2014-09-04 12:52:30  [ main:755 ] - [ DEBUG ]  Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@453d9468, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, maxIdleTime=10000, maxRetries=0, fallbackAllowed=false, ping interval=60000ms, bind address=null
    2014-09-04 12:52:30  [ main-SendThread(compute1:2181):776 ] - [ DEBUG ]  Reading reply sessionid:0x2483a55a18c0013, packet:: clientPath:null serverPath:null finished:false header:: 3,4  replyHeader:: 3,4294967438,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffffff133cffffff88341c1effffffef50425546a15a8636f6d707574653110fffffff4ffffffd4318ffffffe4ffffffefffffffdcffffffd2ffffff8329100,s{4294967339,4294967339,1409728076023,1409728076023,0,0,0,0,60,0,4294967339} 
    2014-09-04 12:52:30  [ main-SendThread(compute1:2181):789 ] - [ DEBUG ]  Reading reply sessionid:0x2483a55a18c0013, packet:: clientPath:null serverPath:null finished:false header:: 4,4  replyHeader:: 4,4294967438,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffffff133cffffff88341c1effffffef50425546a15a8636f6d707574653110fffffff4ffffffd4318ffffffe4ffffffefffffffdcffffffd2ffffff8329100,s{4294967339,4294967339,1409728076023,1409728076023,0,0,0,0,60,0,4294967339} 
    2014-09-04 12:52:30  [ main-SendThread(compute1:2181):798 ] - [ DEBUG ]  Reading reply sessionid:0x2483a55a18c0013, packet:: clientPath:null serverPath:null finished:false header:: 5,4  replyHeader:: 5,4294967438,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffffff133cffffff88341c1effffffef50425546a15a8636f6d707574653110fffffff4ffffffd4318ffffffe4ffffffefffffffdcffffffd2ffffff8329100,s{4294967339,4294967339,1409728076023,1409728076023,0,0,0,0,60,0,4294967339} 
    2014-09-04 12:52:30  [ main:1209 ] - [ DEBUG ]  Use SIMPLE authentication for service ClientService, sasl=false
    2014-09-04 12:52:30  [ main:1218 ] - [ DEBUG ]  Connecting to compute1/192.168.11.39:60020
    2014-09-04 12:52:30  [ IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo:1225 ] - [ DEBUG ]  IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo: starting, connections 1
    2014-09-04 12:52:30  [ main:1291 ] - [ DEBUG ]  IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo: wrote request header call_id: 0 method_name: "Get" request_param: true
    2014-09-04 12:52:30  [ IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo:1291 ] - [ DEBUG ]  IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo: got response header call_id: 0, totalSize: 435 bytes
    2014-09-04 12:52:31  [ main-SendThread(compute1:2181):1613 ] - [ DEBUG ]  Reading reply sessionid:0x2483a55a18c0013, packet:: clientPath:null serverPath:null finished:false header:: 6,4  replyHeader:: 6,4294967438,0  request:: '/hbase/meta-region-server,F  response:: #ffffffff0001a726567696f6e7365727665723a3630303230ffffffff133cffffff88341c1effffffef50425546a15a8636f6d707574653110fffffff4ffffffd4318ffffffe4ffffffefffffffdcffffffd2ffffff8329100,s{4294967339,4294967339,1409728076023,1409728076023,0,0,0,0,60,0,4294967339} 
    2014-09-04 12:52:31  [ main:1751 ] - [ DEBUG ]  IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo: wrote request header call_id: 1 method_name: "Scan" request_param: true
    2014-09-04 12:52:31  [ IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo:1752 ] - [ DEBUG ]  IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo: got response header call_id: 1, totalSize: 13 bytes
    2014-09-04 12:52:31  [ main:1762 ] - [ DEBUG ]  IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo: wrote request header call_id: 2 method_name: "Scan" request_param: true priority: 100
    2014-09-04 12:52:31  [ IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo:1769 ] - [ DEBUG ]  IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo: got response header call_id: 2 cell_block_meta { length: 1359 }, totalSize: 1383 bytes
    2014-09-04 12:52:31  [ main:1772 ] - [ DEBUG ]  IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo: wrote request header call_id: 3 method_name: "Scan" request_param: true
    2014-09-04 12:52:31  [ IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo:1773 ] - [ DEBUG ]  IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo: got response header call_id: 3, totalSize: 9 bytes
    2014-09-04 12:52:31  [ main:1792 ] - [ DEBUG ]  IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo: wrote request header call_id: 4 method_name: "Scan" request_param: true
    2014-09-04 12:52:31  [ IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo:1793 ] - [ DEBUG ]  IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo: got response header call_id: 4, totalSize: 13 bytes
    2014-09-04 12:52:31  [ main:1794 ] - [ DEBUG ]  IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo: wrote request header call_id: 5 method_name: "Scan" request_param: true
    2014-09-04 12:52:31  [ IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo:1795 ] - [ DEBUG ]  IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo: got response header call_id: 5, totalSize: 13 bytes
    2014-09-04 12:52:31  [ main:1795 ] - [ DEBUG ]  IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo: wrote request header call_id: 6 method_name: "Scan" request_param: true
    2014-09-04 12:52:31  [ IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo:1796 ] - [ DEBUG ]  IPC Client (1568147101) connection to compute1/192.168.11.39:60020 from lenovo: got response header call_id: 6, totalSize: 9 bytes
    Summary:
    1. Unpack a copy of hadoop-2.2.0.tar.gz into a program directory on the Win7 machine. The local Hadoop version must exactly match the cluster version. Then copy the following configuration files from the cluster over the corresponding files in the local directory:
    core-site.xml
    hdfs-site.xml
    mapred-site.xml
    yarn-site.xml
     
    2. After creating the Java project in Eclipse, the simplest approach is to add all of the Hadoop 2.2.0 jars to the build path, including the jars under these directories:
    share\hadoop\common
    share\hadoop\hdfs
    share\hadoop\mapreduce
    share\hadoop\yarn
     
    Note: if you use the Hadoop Eclipse plugin, this step is unnecessary, but the plugin for 2.2.0 must be compiled yourself; the build process is covered in another post of mine:
     
    3. Set the %HADOOP_HOME% environment variable on Win7 and add %HADOOP_HOME%\bin to the PATH environment variable.
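    The step above can be done once from a console; the install path below is an example, not the layout from the original post:

    ```bat
    rem Windows cmd; adjust the path to where hadoop-2.2.0 was unpacked
    set HADOOP_HOME=D:\hadoop\hadoop-2.2.0
    set PATH=%PATH%;%HADOOP_HOME%\bin
    ```

    For a permanent setting, use the System Properties dialog (or `setx`) instead of `set`, and restart Eclipse afterwards so it picks up the new environment.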
     
    4. Download https://github.com/srccodes/hadoop-common-2.2.0-bin, unpack it, and overwrite %HADOOP_HOME%\bin with the downloaded bin directory.
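    If Eclipse was started before %HADOOP_HOME% was set, the JVM may not see the variable. A minimal sketch of the usual workaround is to set the `hadoop.home.dir` system property before any HDFS call; the path below is an example, not the original author's layout:

    ```java
    public class HadoopHomeSetup {
        public static void main(String[] args) {
            // Point hadoop.home.dir at the unpacked hadoop-2.2.0 directory
            // (example path; adjust to your machine).
            if (System.getProperty("hadoop.home.dir") == null) {
                System.setProperty("hadoop.home.dir", "D:\\hadoop\\hadoop-2.2.0");
            }
            // Hadoop's Shell utility reads this property to locate bin\winutils.exe.
            System.out.println(System.getProperty("hadoop.home.dir"));
        }
    }
    ```

    This must run before the first `FileSystem`/`HBaseConfiguration` call, since Hadoop caches the home directory on first use.
    
    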
     
    5. Check the Hadoop cluster's configuration: the "hadoop address:port" used in the Eclipse client code must match the cluster configuration, for example:
    <property>
        <name>fs.default.name</name>
        <value>hdfs://singlehadoop:8020</value>
    </property>
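    The client must hand exactly this address to the HDFS API. As a small self-contained check (pure JDK, reusing the `singlehadoop:8020` example from the property above), the URI the client code would pass to `FileSystem.get` can be parsed and inspected:

    ```java
    import java.net.URI;

    public class FsUriCheck {
        public static void main(String[] args) {
            // Must match fs.default.name in the cluster's core-site.xml.
            URI fsUri = URI.create("hdfs://singlehadoop:8020");
            // In client code this URI would go to FileSystem.get(fsUri, conf).
            System.out.println(fsUri.getHost() + ":" + fsUri.getPort());  // singlehadoop:8020
        }
    }
    ```

    In Hadoop 2.x the property is officially named fs.defaultFS; fs.default.name still works as a deprecated alias.
    
    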
     
    6. Add the following property to hdfs-site.xml on the Hadoop cluster to disable permission checking.
    <property>     
        <name>dfs.permissions</name>    
        <value>false</value>
    </property>
     
    7. HBase settings (hbase-site.xml):

    <property>
    <name>hbase.zookeeper.quorum</name>
    <value>compute1</value>
    </property>

    Be sure to set the quorum value to hostnames. The entries must be slave nodes, and their count must be odd.

    On Windows, edit the hosts file under C:\Windows\System32\drivers\etc so that it matches the hosts mappings on the cluster servers.

    192.168.14.20 CS020
    192.168.14.16 CS016
    192.168.11.37 master
    192.168.11.39 compute1
    192.168.11.40 thinkit-4
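    A quick way to verify that the mapping took effect is to resolve the cluster hostnames from Java (pure JDK; the hostnames below are the ones from the list above):

    ```java
    import java.net.InetAddress;
    import java.net.UnknownHostException;

    public class HostsCheck {
        public static void main(String[] args) {
            // Hostnames copied from the cluster hosts file above.
            String[] hosts = {"master", "compute1", "thinkit-4"};
            for (String h : hosts) {
                try {
                    System.out.println(h + " -> " + InetAddress.getByName(h).getHostAddress());
                } catch (UnknownHostException e) {
                    System.out.println(h + " is missing from the hosts file");
                }
            }
        }
    }
    ```

    If a hostname prints as missing, the ZooKeeper connection to the quorum host will hang or time out, so fix the hosts file before debugging the client code.
    
    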

     

  • Original article: https://www.cnblogs.com/joqk/p/3955930.html