• [Paper] LCS: An Efficient Data Eviction Strategy for Spark


    Abstract

    • Classical strategies are not aware of the recovery cost, which can cause system performance degradation.  -->  a cost-aware eviction strategy can clearly reduce the total recovery cost.
    • A strategy named LCS (Least Cost Strategy) -->  obtains the dependency information between cache data by analyzing the application, and calculates the recovery cost at runtime. By predicting how many times cache data will be reused and using that prediction to weight the recovery cost, LCS always evicts the data that leads to the minimum future recovery cost.

    Introduction

    • Current eviction strategies:
      • FIFO: focuses on creation time.
      • LRU: focuses on access history for a better hit ratio.
    • Many eviction algorithms take the access history and cost of cache items into consideration. But for Spark, the execution logic of upcoming stages is already known, so access history is of little help to the eviction strategy.
    • LCS has three steps:
      1. Gets the dependencies between RDDs by analyzing the application, and predicts how many times each cache partition will be reused.
      2. Collects information during partition creation, and predicts the recovery cost.
      3. Maintains the eviction order using the above two pieces of information, and evicts the partition that incurs the least cost when memory is insufficient (see the sketch below).
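
    A minimal sketch of this idea, not the paper's implementation: each cached partition carries a predicted reuse count and an estimated recovery cost, and the partitions with the smallest weighted future cost are evicted first. The names (CachedPartition, weightedCost, selectVictims) are illustrative assumptions.

```scala
// Illustrative sketch only: all names here are assumptions, not the
// paper's implementation.
case class CachedPartition(id: String, sizeMB: Long,
                           predictedReuses: Int, recoveryCostMs: Long)

// Weight the predicted recovery cost by how often the partition will be reused.
def weightedCost(p: CachedPartition): Double =
  p.predictedReuses.toDouble * p.recoveryCostMs

// Evict partitions in increasing order of weighted future cost until
// enough memory has been freed.
def selectVictims(cached: Seq[CachedPartition], neededMB: Long): List[CachedPartition] = {
  val victims = scala.collection.mutable.ListBuffer.empty[CachedPartition]
  var freed = 0L
  val byCost = cached.sortBy(weightedCost).iterator
  while (freed < neededMB && byCost.hasNext) {
    val p = byCost.next()
    victims += p
    freed += p.sizeMB
  }
  victims.toList
}
```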

    Design and Implementation

    Overall Architecture

    • Three necessary steps:
      1. The Analyzer in the driver node analyzes the application using the DAG structures provided by the DAGScheduler.
      2. The Collector in each executor node records information about each cache partition during its creation.
      3. Eviction Decision provides an efficient eviction strategy that evicts the optimal set of cache partitions when the remaining memory space for cache storage is not sufficient, and decides whether to remove each partition from the MemoryStore or serialize it to the DiskStore (see the interface sketch after this list).
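
    A rough interface sketch of how these three components could be split between driver and executors, under the assumptions above; the trait, class, and method names (Analyzer, Collector, EvictionDecision, StageCacheInfo) are illustrative, not Spark's or the paper's actual API.

```scala
// All names below are illustrative assumptions, not Spark's or the
// paper's actual API.
// Per-stage result the driver-side Analyzer ships to executors with the tasks.
case class StageCacheInfo(targetCacheRDDs: Set[Int],
                          relatedCacheRDDs: Set[Int],
                          removableRDDs: Set[Int])

trait Analyzer {                      // driver side: walks the DAG from DAGScheduler
  def analyze(stageId: Int): StageCacheInfo
}

trait Collector {                     // executor side: timing per cache partition
  def recordCreation(partitionId: String, elapsedMs: Long, sizeBytes: Long): Unit
}

trait EvictionDecision {              // executor side: invoked when cache memory is short
  def evict(bytesNeeded: Long): Unit  // drop from MemoryStore or serialize to DiskStore
}
```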

    Analyzer

    • DAG start points: 
      • DFS: files on it can be read directly from local or remote disk;
      • ShuffledRDD: which can be regenerated by fetching remote shuffle data.

      These define the longest running path of a task: when all cache RDDs are missing, the task must run from these start points. (When a cache RDD is available, the task only needs to run the part of the path starting from that cache RDD, by following the dependencies between RDDs; see the Spark sketch below.)
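
    A small Spark (Scala) example, assuming an existing SparkContext `sc`, that illustrates the point: if the cached RDD is evicted, only the part of the path after the shuffle is recomputed; if the shuffle data are also gone, the task falls back to the DFS start point.

```scala
// Assumes an existing SparkContext `sc`; paths and operations are just
// for illustration.
val lines  = sc.textFile("hdfs:///input/data.txt")   // start point: file on DFS
val pairs  = lines.map(line => (line.split(",")(0), 1L))
val counts = pairs.reduceByKey(_ + _)                 // start point: ShuffledRDD
val cached = counts.mapValues(_ * 2).cache()          // cache RDD in the middle of the path
val result = cached.filter(_._2 > 10).count()
// If `cached` is evicted, only mapValues/filter are recomputed from the
// shuffle data; if the shuffle data are also gone, the task reruns from
// the DFS file.
```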

    • The aim of the Analyzer is to classify cache RDDs and analyze the dependency information between them before each stage runs.
    • The Analyzer only runs on the driver node and transfers its results to the executors when the driver schedules tasks to them.
    • By pre-registering RDDs that need to be unpersisted and checking whether they are used in each stage, each such RDD is put into the RemovableRDDs list of the last stage that uses it. Removable partitions can be evicted directly and will not waste memory.
    • The cache RDDs of a stage are classified into (a classification sketch follows this list):
      • currently running cache RDDs (targetCacheRDDs)
      • cache RDDs that participate in the current stage (relatedCacheRDDs)
      • other cache RDDs
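
    A hypothetical sketch of this per-stage classification, under one plausible reading of the categories above (targetCacheRDDs are the cache RDDs the stage itself produces, relatedCacheRDDs are cached ancestors the stage reads); the inputs are assumptions, not the paper's data structures.

```scala
// Hypothetical helper; `stageRDDs`, `cachedRDDs` and `cachedAncestors`
// are assumed to be derivable from the DAG, not the paper's structures.
def classify(stageRDDs: Set[Int],              // RDD ids computed by the current stage
             cachedRDDs: Set[Int],             // all RDD ids marked for caching
             cachedAncestors: Int => Set[Int]  // cached RDDs reachable through lineage
            ): (Set[Int], Set[Int], Set[Int]) = {
  val target  = cachedRDDs intersect stageRDDs                    // targetCacheRDDs
  val related = stageRDDs.flatMap(cachedAncestors)
                         .intersect(cachedRDDs) -- target         // relatedCacheRDDs
  val other   = cachedRDDs -- target -- related                   // other cache RDDs
  (target, related, other)
}
```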

    Collector

    • The Collector collects information about each cache partition while tasks run.
    • Information that needs to be observed (a sketch of a per-partition record follows this list):
      • Create cost: time spent creating the partition, called C_create.
      • Eviction cost: time cost of evicting a partition from memory, called C_eviction. (If the partition is serialized to disk, the eviction cost is the time spent serializing and writing to disk, denoted C_ser. If it is removed directly, the eviction cost is 0.)
      • Recovery cost: time cost when the partition data are not found in memory, called C_recovery. If the partition was serialized to disk, the recovery cost is the time spent reading from disk and deserializing, denoted C_deser. Otherwise the partition is recomputed from lineage information, denoted C_recompute.
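
    A minimal sketch of the per-partition record the Collector could keep, following the cost definitions above; the class and field names are assumptions.

```scala
// Field and class names are assumptions for illustration.
case class PartitionCost(
  createMs: Long,               // C_create: time to compute the partition
  serializeMs: Option[Long],    // C_ser: only defined if the partition was spilled to disk
  deserializeMs: Option[Long],  // C_deser: read + deserialize when recovered from disk
  recomputeMs: Long             // C_recompute: replay the lineage when dropped outright
) {
  // C_eviction: C_ser when spilling to disk, 0 when removing directly.
  def evictionMs(spillToDisk: Boolean): Long =
    if (spillToDisk) serializeMs.getOrElse(0L) else 0L

  // C_recovery: C_deser if the partition was spilled, otherwise C_recompute.
  def recoveryMs(wasSpilled: Boolean): Long =
    if (wasSpilled) deserializeMs.getOrElse(recomputeMs) else recomputeMs
}
```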

    Eviction Decision

    • Using the information provided by the Collector, each cache partition gets a WCPM (weighted cost per memory) value:
      WCPM = min(CPM * reus, SPM + DPM * reus)
      CPM_renew = (CPM_ancestor * size_ancestor + CPM * size) / size
      Here CPM is the recompute cost per memory unit, SPM the serialization cost per memory unit, DPM the deserialization cost per memory unit, and reus the predicted reusability (number of future reuses). The CPM_renew update applies when an ancestor cache partition is evicted: the ancestor's recompute cost is folded into this partition's own CPM. (A worked example follows.)
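
    An illustrative evaluation of the WCPM formula with made-up per-memory costs; only the arithmetic is taken from the definition above.

```scala
// Made-up per-memory costs (e.g. ms per MB); only the arithmetic is the point.
val cpm  = 4.0   // CPM: recompute cost per memory unit
val spm  = 1.0   // SPM: serialization cost per memory unit
val dpm  = 0.5   // DPM: deserialization cost per memory unit
val reus = 3.0   // predicted number of future reuses

// Dropping the partition pays CPM on every reuse; spilling to disk pays
// SPM once plus DPM on every reuse. WCPM takes the cheaper of the two.
val wcpm = math.min(cpm * reus, spm + dpm * reus)  // min(12.0, 2.5) = 2.5
// Spilling to disk is the cheaper eviction choice for this partition.
```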

    Evaluation

    Evaluation Environment and Method

    • PR, CC, and KMeans algorithms...
    • LCS is compared against LRU and FIFO.