• Spark Standalone Mode: Multi-Machine Startup -- Learning the Spark Distributed Computing System, Part 2 (updated: one-click slave startup)


    After some tinkering, let's start with the manual approach; the automatic one needs passwordless ssh login and the like, which comes later.

    I. Manually connecting multiple machines to the master

    Manually connecting to the master was in fact already done in the previous post.

    There are two machines here:

    10.60.215.41 runs the master, worker1, and the application (spark shell)

    10.0.2.15 runs worker2

    The steps are as follows:

    1. On 10.60.215.41

    $SPARK_HOME $ ./sbin/start-master.sh
    $SPARK_HOME $ ./bin/spark-class org.apache.spark.deploy.worker.Worker spark://qpzhangdeMac-mini.local:7077

    2. On 10.0.2.15

    $SPARK_HOME $ ./bin/spark-class org.apache.spark.deploy.worker.Worker spark://qpzhangdeMac-mini.local:7077

    Note that Spark apparently uses the akka library, so the master URL must use a hostname (changing it to an IP in the config file did not take effect either); otherwise you get this error:

    15/03/20 17:14:05 ERROR EndpointWriter: dropping message [class akka.actor.ActorSelectionMessage] for non-local recipient [Actor[akka.tcp://sparkMaster@10.60.215.41:7077/]] arriving at [akka.tcp://sparkMaster@10.60.215.41:7077] inbound addresses are [akka.tcp://sparkMaster@qpzhangdeMac-mini.local:7077]

    In the hosts file on 10.0.2.15, map qpzhangdeMac-mini.local to the master IP 10.60.215.41; otherwise the hostname cannot be resolved and the connection fails.
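
    For example, a minimal sketch (assuming root access to edit /etc/hosts on the worker machine):

    # On 10.0.2.15: map the master's hostname to its IP so the worker can resolve it
    echo "10.60.215.41   qpzhangdeMac-mini.local" >> /etc/hosts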

    At first I assumed that after killing the master, one of worker1 or worker2 would automatically take over as master, but no: the workers just keep retrying the connection.

    15/03/20 17:41:05 INFO Worker: Retrying connection to master (attempt # 2)
    15/03/20 17:41:05 WARN EndpointWriter: AssociationError [akka.tcp://sparkWorker@10.60.215.41:53899] -> [akka.tcp://sparkMaster@qpzhangdeMac-mini.local:7077]: Error [Invalid address: akka.tcp://sparkMaster@qpzhangdeMac-mini.local:7077] [
    akka.remote.InvalidAssociation: Invalid address: akka.tcp://sparkMaster@qpzhangdeMac-mini.local:7077
    Caused by: akka.remote.transport.Transport$InvalidAssociationException: Connection refused: qpzhangdeMac-mini.local/10.60.215.41:7077

    After the master is restarted, the reconnection succeeds.

    15/03/20 18:27:41 INFO Worker: Retrying connection to master (attempt # 10)
    15/03/20 18:27:41 INFO Worker: Successfully registered with master spark://qpzhangdeMac-mini.local:7077

    A few open questions for now:

    1) So the slaves are just workers? A worker is never promoted to master; there is no election here.

    2) If the master dies and is restarted, are running jobs lost?

    3) Can a single worker register with multiple masters? (See the sketch below.)
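
    On question 3: standalone mode does let a worker be pointed at several masters at once; with ZooKeeper-based HA the scripts accept a comma-separated master list. A hedged sketch with hypothetical hostnames:

    $SPARK_HOME $ ./bin/spark-class org.apache.spark.deploy.worker.Worker spark://master1:7077,master2:7077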

    3. On 10.60.215.41

    Start the spark shell and submit a job.
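
    The shell is pointed at the standalone master with the --master flag (a sketch; the URL is the same master URL used above):

    $SPARK_HOME $ ./bin/spark-shell --master spark://qpzhangdeMac-mini.local:7077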

    scala> val textFile = sc.textFile("/var/spark/README.md")
    15/03/20 17:55:41 INFO MemoryStore: ensureFreeSpace(73391) called with curMem=186365, maxMem=555755765
    15/03/20 17:55:41 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 71.7 KB, free 529.8 MB)
    15/03/20 17:55:41 INFO MemoryStore: ensureFreeSpace(31262) called with curMem=259756, maxMem=555755765
    15/03/20 17:55:41 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 30.5 KB, free 529.7 MB)
    15/03/20 17:55:41 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 10.60.215.41:53983 (size: 30.5 KB, free: 530.0 MB)
    15/03/20 17:55:41 INFO BlockManagerMaster: Updated info of block broadcast_2_piece0
    15/03/20 17:55:41 INFO SparkContext: Created broadcast 2 from textFile at <console>:21
    textFile: org.apache.spark.rdd.RDD[String] = /var/spark/README.md MapPartitionsRDD[3] at textFile at <console>:21
    
    scala> textFile.count()
    15/03/20 17:55:45 INFO FileInputFormat: Total input paths to process : 1
    15/03/20 17:55:45 INFO SparkContext: Starting job: count at <console>:24
    15/03/20 17:55:45 INFO DAGScheduler: Got job 1 (count at <console>:24) with 2 output partitions (allowLocal=false)
    15/03/20 17:55:45 INFO DAGScheduler: Final stage: Stage 1(count at <console>:24)
    15/03/20 17:55:45 INFO DAGScheduler: Parents of final stage: List()
    15/03/20 17:55:45 INFO DAGScheduler: Missing parents: List()
    15/03/20 17:55:45 INFO DAGScheduler: Submitting Stage 1 (/var/spark/README.md MapPartitionsRDD[3] at textFile at <console>:21), which has no missing parents
    15/03/20 17:55:45 INFO MemoryStore: ensureFreeSpace(2640) called with curMem=291018, maxMem=555755765
    15/03/20 17:55:45 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 2.6 KB, free 529.7 MB)
    15/03/20 17:55:45 INFO MemoryStore: ensureFreeSpace(1931) called with curMem=293658, maxMem=555755765
    15/03/20 17:55:45 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 1931.0 B, free 529.7 MB)
    15/03/20 17:55:45 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on 10.60.215.41:53983 (size: 1931.0 B, free: 530.0 MB)
    15/03/20 17:55:45 INFO BlockManagerMaster: Updated info of block broadcast_3_piece0
    15/03/20 17:55:45 INFO SparkContext: Created broadcast 3 from broadcast at DAGScheduler.scala:839
    15/03/20 17:55:45 INFO DAGScheduler: Submitting 2 missing tasks from Stage 1 (/var/spark/README.md MapPartitionsRDD[3] at textFile at <console>:21)
    15/03/20 17:55:45 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
    15/03/20 17:55:45 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 3, 10.60.215.41, PROCESS_LOCAL, 1289 bytes)
    15/03/20 17:55:45 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 4, 10.0.2.15, PROCESS_LOCAL, 1289 bytes)
    15/03/20 17:55:45 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on 10.60.215.41:53990 (size: 1931.0 B, free: 265.1 MB)
    15/03/20 17:55:45 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 10.60.215.41:53990 (size: 30.5 KB, free: 265.1 MB)
    15/03/20 17:55:45 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on 10.0.2.15:53284 (size: 1931.0 B, free: 267.2 MB)
    15/03/20 17:55:45 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 10.0.2.15:53284 (size: 30.5 KB, free: 267.2 MB)
    15/03/20 17:55:45 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 3) in 127 ms on 10.60.215.41 (1/2)
    15/03/20 17:55:46 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 4) in 470 ms on 10.0.2.15 (2/2)
    15/03/20 17:55:46 INFO DAGScheduler: Stage 1 (count at <console>:24) finished in 0.471 s
    15/03/20 17:55:46 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
    15/03/20 17:55:46 INFO DAGScheduler: Job 1 finished: count at <console>:24, took 0.487544 s
    res2: Long = 98
    
    scala> val linesWithSpark = textFile.filter(line => line.contains("Spark"))
    linesWithSpark: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[4] at filter at <console>:23
    
    scala> linesWithSpark.count()
    15/03/20 17:56:53 INFO SparkContext: Starting job: count at <console>:26
    15/03/20 17:56:53 INFO DAGScheduler: Got job 2 (count at <console>:26) with 2 output partitions (allowLocal=false)
    15/03/20 17:56:53 INFO DAGScheduler: Final stage: Stage 2(count at <console>:26)
    15/03/20 17:56:53 INFO DAGScheduler: Parents of final stage: List()
    15/03/20 17:56:53 INFO DAGScheduler: Missing parents: List()
    15/03/20 17:56:53 INFO DAGScheduler: Submitting Stage 2 (MapPartitionsRDD[4] at filter at <console>:23), which has no missing parents
    15/03/20 17:56:53 INFO MemoryStore: ensureFreeSpace(2848) called with curMem=295589, maxMem=555755765
    15/03/20 17:56:53 INFO MemoryStore: Block broadcast_4 stored as values in memory (estimated size 2.8 KB, free 529.7 MB)
    15/03/20 17:56:53 INFO MemoryStore: ensureFreeSpace(2034) called with curMem=298437, maxMem=555755765
    15/03/20 17:56:53 INFO MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 2034.0 B, free 529.7 MB)
    15/03/20 17:56:53 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on 10.60.215.41:53983 (size: 2034.0 B, free: 530.0 MB)
    15/03/20 17:56:53 INFO BlockManagerMaster: Updated info of block broadcast_4_piece0
    15/03/20 17:56:53 INFO SparkContext: Created broadcast 4 from broadcast at DAGScheduler.scala:839
    15/03/20 17:56:53 INFO DAGScheduler: Submitting 2 missing tasks from Stage 2 (MapPartitionsRDD[4] at filter at <console>:23)
    15/03/20 17:56:53 INFO TaskSchedulerImpl: Adding task set 2.0 with 2 tasks
    15/03/20 17:56:53 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 5, 10.0.2.15, PROCESS_LOCAL, 1289 bytes)
    15/03/20 17:56:53 INFO TaskSetManager: Starting task 1.0 in stage 2.0 (TID 6, 10.60.215.41, PROCESS_LOCAL, 1289 bytes)
    15/03/20 17:56:53 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on 10.60.215.41:53990 (size: 2034.0 B, free: 265.1 MB)
    15/03/20 17:56:53 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on 10.0.2.15:53284 (size: 2034.0 B, free: 267.2 MB)
    15/03/20 17:56:53 INFO TaskSetManager: Finished task 1.0 in stage 2.0 (TID 6) in 113 ms on 10.60.215.41 (1/2)
    15/03/20 17:56:53 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 5) in 122 ms on 10.0.2.15 (2/2)
    15/03/20 17:56:53 INFO DAGScheduler: Stage 2 (count at <console>:26) finished in 0.122 s
    15/03/20 17:56:53 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool 
    15/03/20 17:56:53 INFO DAGScheduler: Job 2 finished: count at <console>:26, took 0.137589 s
    res3: Long = 19

    The logs show each job being split into 2 tasks, which are sent to the 2 workers for execution.
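
    The 2 tasks match the RDD's 2 partitions ("2 output partitions" in the log): sc.textFile takes an optional minPartitions argument, so the split count can be raised. A hedged sketch in the same spark-shell session (not from the original run):

    scala> val textFile4 = sc.textFile("/var/spark/README.md", 4)  // ask for at least 4 splits
    scala> textFile4.count()  // should now be scheduled as ~4 tasks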

    This naturally raises some questions:

    1) How does the master distribute the work? For a local file, is the path shipped to each worker, or is the file content read and shipped over?

    2) If only the path is shipped, does every worker read the file? The whole file, or an offset-based slice?

    Running the job a few more times and checking the worker logs shows that the path is shipped along with a file split (offset range); the splits appear to be assigned to workers more or less at random, since across several runs each worker received different split offsets.

    One more catch: when reading from HDFS, a single address is readable by workers on different machines. With a local path, though, you are out of luck: you have to distribute the file content to the different worker machines yourself.
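
    A crude workaround sketch (the user and destination path are assumptions): place the file at the identical path on every worker before running the job.

    # Run on 10.60.215.41; otherwise tasks scheduled on 10.0.2.15 cannot find the local file
    scp /var/spark/README.md root@10.0.2.15:/var/spark/README.md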

    At http://10.60.215.41:4040/executors/ you can see the list of workers executing the current tasks, along with each task's execution status.

    II. Automatic deployment

    ==========

    The principle is simple: a shell script takes the configured list of slave machines, logs into each one over ssh, switches to the right directory, and starts the worker there, roughly as sketched below.
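
    A simplified sketch of the idea behind sbin/start-slaves.sh (not the actual script; the install path and master URL are the ones used in this post):

    # For each host listed in conf/slaves: ssh in and launch a Worker against the master
    for slave in $(grep -v '^#' conf/slaves); do
        ssh "$slave" "cd /var/spark && nohup ./bin/spark-class org.apache.spark.deploy.worker.Worker spark://qpzhangdeMac-mini.local:7077 > worker.log 2>&1 &"
    done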

    Compared with starting each slave by hand, this one-click startup only has to be run on the master machine.

    The prerequisite is that passwordless ssh login is already configured.
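
    A minimal sketch of that setup, run on the master (user and hostname as used later in this post):

    # Generate a key pair (accept the defaults), then push the public key to each slave
    ssh-keygen -t rsa
    ssh-copy-id root@qpzhangdeMac-mini.local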

    Once that is done, edit the slaves list under the conf directory:

    root@qp-zhang:/var/spark# cat conf/slaves
    # A Spark Worker will be started on each of the machines listed below.
    localhost
    root@qpzhangdeMac-mini.local

    Then just start everything with the corresponding slaves script:

    root@qp-zhang:/var/spark# ./sbin/start-slaves.sh 
    root@qpzhangdeMac-mini.local: starting org.apache.spark.deploy.worker.Worker, logging to /private/var/spark/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-qpzhangdeMac-mini.local.out
    localhost: starting org.apache.spark.deploy.worker.Worker, logging to /var/spark/sbin/../logs/spark-root-org.apache.spark.deploy.worker.Worker-1-qp-zhang.out
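
    The matching script stops them all again (assuming the stock sbin scripts):

    root@qp-zhang:/var/spark# ./sbin/stop-slaves.sh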

    At this point, you can open http://localhost:8080/ to see the master's current slaves (in other words, its workers).

    ===================================

    Please credit the source when reposting: http://www.cnblogs.com/zhangqingping/p/4354383.html
