• Summary of Hadoop ports


    localhost:50030/jobtracker.jsp

    localhost:50060/tasktracker.jsp

    localhost:50070/dfshealth.jsp
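
    If the daemons are running, a quick way to confirm these three web UIs are reachable is with curl. A minimal sketch, assuming the default ports and the local single-node setup shown above:

    # Fetch only the HTTP headers of each management page; an HTTP 200 means the UI is up.
    curl -I http://localhost:50030/jobtracker.jsp    # JobTracker web UI
    curl -I http://localhost:50060/tasktracker.jsp   # TaskTracker web UI
    curl -I http://localhost:50070/dfshealth.jsp     # NameNode (HDFS health) web UI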

     

    1. NameNode process

        The NameNode process -- runs on port 9000 (the HDFS RPC port)

     

    INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at: asn-ThinkPad-SL410/127.0.1.1:9000

     

       

        The corresponding Jetty server -- runs on port 50070, the NameNode web UI (management) port; a quick check of both NameNode ports is sketched after the log excerpt below.

     

    INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50070

    INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070

    INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070

    INFO org.mortbay.log: jetty-6.1.26

    INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
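
    A minimal sketch for verifying the two NameNode ports from the shell (assumes a Linux host and the Hadoop 0.20.x/1.x property names: the ports come from fs.default.name in core-site.xml, e.g. hdfs://localhost:9000, and dfs.http.address in hdfs-site.xml, default 0.0.0.0:50070):

    # Find the NameNode PID, then list the TCP ports that process is listening on.
    NN_PID=$(jps | awk '$2=="NameNode" {print $1}')
    netstat -tlnp 2>/dev/null | grep "$NN_PID/"      # expect 9000 (RPC) and 50070 (Jetty web UI)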

     

    2. DataNode process

     

        The DataNode control process -- runs on port 50010

     

    INFO org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:50010 storage DS-1647545997-127.0.1.1-50010-1399439341888

    INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010  -- network topology: one data node has been added to the default rack

    INFO org.apache.hadoop.hdfs.StateChange: *BLOCK* NameSystem.processReport: from 127.0.0.1:50010, blocks: 0, processing time: 2 msecs

     

    DatanodeRegistration(asn-ThinkPad-SL410:50010, storageID=DS-1647545997-127.0.1.1-50010-1399439341888, infoPort=50075, ipcPort=50020)

    ................. DatanodeRegistration(127.0.0.1:50010, storageID=DS-1647545997-127.0.1.1-50010-1399439341888, infoPort=50075, ipcPort=50020) In DataNode.run, data = FSDataset{dirpath='/opt/hadoop/data/current'}

     

        The DataNode's Jetty server -- runs on port 50075, the DataNode web UI (management) port

     

    INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075

    INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075

    INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075

    INFO org.mortbay.log: jetty-6.1.26

    INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075

     

        The DataNode RPC (IPC) server -- runs on port 50020 (a check of all three DataNode ports is sketched after these log lines)

    INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting

    INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting

    INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting

    INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting

    INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec

    INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
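
    All three DataNode ports above are fixed by configuration in Hadoop 0.20.x/1.x; a minimal sketch for checking that this DataNode is actually listening on them (property names assumed from that version's defaults):

    # dfs.datanode.address       -> 0.0.0.0:50010  (block data transfer)
    # dfs.datanode.http.address  -> 0.0.0.0:50075  (Jetty web UI)
    # dfs.datanode.ipc.address   -> 0.0.0.0:50020  (IPC/RPC)
    DN_PID=$(jps | awk '$2=="DataNode" {print $1}')
    netstat -tlnp 2>/dev/null | grep "$DN_PID/"      # expect 50010, 50075 and 50020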

     

    3. TaskTracker process

        The TaskTracker service process -- runs on port 58567 (a port chosen at startup, so it differs from run to run)

    2014-05-09 08:51:54,128 INFO org.apache.hadoop.mapred.TaskTracker: TaskTracker up at: localhost/127.0.0.1:58567

    2014-05-09 08:51:54,128 INFO org.apache.hadoop.mapred.TaskTracker: Starting tracker tracker_asn-ThinkPad-SL410:localhost/127.0.0.1:58567

    2014-05-09 08:51:54,143 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 58567: starting

    2014-05-09 08:52:24,443 INFO org.apache.hadoop.mapred.TaskTracker: Starting thread: Map-events fetcher for all reduce tasks on tracker_asn-ThinkPad-SL410:localhost/127.0.0.1:58567

        The TaskTracker's Jetty server -- runs on port 50060 (a sketch for locating both TaskTracker ports follows the log excerpt)

    2014-05-09 08:52:24,513 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50060

    2014-05-09 08:52:24,514 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50060 webServer.getConnectors()[0].getLocalPort() returned 50060

    2014-05-09 08:52:24,514 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50060

    2014-05-09 08:52:24,514 INFO org.mortbay.log: jetty-6.1.26

    2014-05-09 08:52:25,088 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50060
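
    Unlike the web UI port 50060, the TaskTracker's IPC port (58567 in this log) is not fixed: mapred.task.tracker.report.address defaults to 127.0.0.1:0, so a free port is picked at every start. A minimal sketch for finding the port actually in use (the log path is an assumption, taken to be the default logs/ directory under the Hadoop install):

    # Read the current IPC port back from the TaskTracker log, or list the process's listening ports.
    grep "TaskTracker up at" logs/hadoop-*-tasktracker-*.log | tail -1
    TT_PID=$(jps | awk '$2=="TaskTracker" {print $1}')
    netstat -tlnp 2>/dev/null | grep "$TT_PID/"      # expect 50060 plus one ephemeral IPC port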

     

    4. JobTracker process

        A job is made up of multiple tasks; as the log below shows, the JobTracker RPC listens on port 9001 and its Jetty web server on port 50030:

    JobTracker up at: 9001

    JobTracker webserver: 50030

    2014-05-09 12:20:05,598 INFO org.apache.hadoop.mapred.JobTracker: Starting jobtracker with owner as asn

    2014-05-09 12:20:05,664 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcDetailedActivityForPort9001 registered.

    2014-05-09 12:20:05,665 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source RpcActivityForPort9001 registered.

     

    2014-05-09 12:20:06,166 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50030

    2014-05-09 12:20:06,169 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50030 webServer.getConnectors()[0].getLocalPort() returned 50030

    2014-05-09 12:20:06,169 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50030

    2014-05-09 12:20:06,169 INFO org.mortbay.log: jetty-6.1.26

    2014-05-09 12:20:07,481 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50030

     

     

    2014-05-09 12:20:07,513 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 9001

    2014-05-09 12:20:07,513 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030

    2014-05-09 12:20:08,165 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory

    2014-05-09 12:20:08,479 INFO org.apache.hadoop.mapred.JobTracker: History server being initialized in embedded mode

    2014-05-09 12:20:08,487 INFO org.apache.hadoop.mapred.JobHistoryServer: Started job history server at: localhost:50030

    2014-05-09 12:20:08,487 INFO org.apache.hadoop.mapred.JobTracker: Job History Server web address: localhost:50030

    2014-05-09 12:20:08,513 INFO org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is inactive

    2014-05-09 12:20:08,931 INFO org.apache.hadoop.mapred.JobTracker: Refreshing hosts information
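
    The JobTracker's RPC port (9001 here) comes from mapred.job.tracker in mapred-site.xml, and the web UI port from mapred.job.tracker.http.address (default 0.0.0.0:50030). A minimal sketch for confirming both are answering, assuming the client config points at the same mapred.job.tracker address:

    bin/hadoop job -list                             # talks to the JobTracker RPC port (9001)
    curl -I http://localhost:50030/jobtracker.jsp    # the Jetty web UI on 50030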

     

     

     

    Note: to turn off NameNode safe mode, the command is:

        [hadoop@localhost hadoop-0.20.203.0]$ bin/hadoop dfsadmin -safemode leave 
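
    The same dfsadmin command can also query, re-enter, or wait on safe mode, for example:

        bin/hadoop dfsadmin -safemode get      # print whether safe mode is ON or OFF
        bin/hadoop dfsadmin -safemode enter    # put the NameNode back into safe mode
        bin/hadoop dfsadmin -safemode wait     # block until the NameNode leaves safe mode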
