• MaxProp Routing Protocol


    The core idea of MaxProp (quoted from Wikipedia):

     To obtain these estimated path likelihoods, each node maintains a vector of size n − 1 (where n is the number of nodes in the network) consisting of the likelihood the node has of encountering each of the other nodes in the network. Each of the n − 1 elements in the vector is initially set to 1/(n − 1), meaning the node is equally likely to meet any other node next. When the node meets another node, j, the jth element of its vector is incremented by 1, and then the entire vector is normalized such that the sum of all entries adds to 1. Note that this phase is completely local and does not require transmitting routing information between nodes.

    When two nodes meet, they first exchange their estimated node-meeting likelihood vectors. Ideally, every node will have an up-to-date vector from every other node. With these n vectors at hand, the node can then compute a shortest path via a depth-first search where path weights indicate the probability that the link does not occur (note that this is 1 minus the value found in the appropriate vector). These path weights are summed to determine the total path cost, and are computed over all possible paths to the destinations desired (destinations for all messages currently being held). The path with the least total weight is chosen as the cost for that particular destination. The messages are then ordered by destination costs, and transmitted and dropped in that order.
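
    A minimal sketch of the path-cost computation described above, assuming the usual java.util imports and a dense double[] likelihood vector per node (pathCost and likelihoodVectors are illustrative names, not the simulator's actual data structures):

        // Illustrative sketch of MaxProp's path cost: each hop is weighted by the
        // probability that the contact does NOT occur, i.e. 1 - P(meeting).
        double pathCost(List<Integer> path, Map<Integer, double[]> likelihoodVectors) {
            double cost = 0.0;
            for (int hop = 0; hop < path.size() - 1; hop++) {
                int from = path.get(hop);
                int to = path.get(hop + 1);
                cost += 1.0 - likelihoodVectors.get(from)[to];
            }
            // the cost for a destination is the minimum of this sum over all searched paths
            return cost;
        }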

    MaxProp Additions: 

    In conjunction with the core routing described above, MaxProp allows for many complementary mechanisms, each of which generally improves the message delivery ratio. First, acknowledgements are injected into the network by nodes that successfully receive a message (and are the final destination of that message). These acknowledgements are 128-bit hashes of the message that are flooded into the network, and instruct nodes to delete extra copies of the message from their buffers. This helps free space so outstanding messages are not dropped as often. Second, packets with low hop counts are given higher priority. This helps promote initial rapid message replication to give new messages a "head start". Without this head start, newer messages can be quickly starved by older messages, since there are generally fewer copies of new messages in the network. Third, each message maintains a "hop list" indicating nodes it has previously visited to ensure that it does not revisit a node.
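
    A minimal sketch of the acknowledgement mechanism, assuming the usual java.util imports (ackedMessageIds, acknowledge and dropAckedCopies are illustrative names, not the ONE simulator's actual fields or methods):

        // Set of identifiers of already-delivered messages; exchanged (flooded)
        // with every encountered peer.
        Set<String> ackedMessageIds = new HashSet<String>();

        // called at the final destination when a message is delivered
        void acknowledge(Message delivered) {
            ackedMessageIds.add(delivered.getId());
        }

        // called after merging a peer's acknowledgement set into ackedMessageIds
        void dropAckedCopies(Collection<Message> buffer) {
            for (Iterator<Message> it = buffer.iterator(); it.hasNext(); ) {
                if (ackedMessageIds.contains(it.next().getId())) {
                    it.remove(); // free buffer space held by a delivered message
                }
            }
        }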

    Here is the implementation of MaxProp in the ONE simulator.

    Updating the meeting probability set:

        /**
         * Updates meeting probability for the given node index.
         * <PRE> P(b) = P(b)_old + alpha
         * Normalize{P}</PRE>
         * I.e., the probability of the given node index is increased by alpha and
         * then all the probabilities are normalized so that their sum equals 1.
         * @param index The node index to update the probability for
         */
        public void updateMeetingProbFor(Integer index) {
            Map.Entry<Integer, Double> smallestEntry = null;
            double smallestValue = Double.MAX_VALUE;

            this.lastUpdateTime = SimClock.getTime();

            if (probs.size() == 0) { // first entry
                probs.put(index, 1.0);
                return;
            }

            double newValue = getProbFor(index) + alpha;
            probs.put(index, newValue);

            /* now the sum of all entries is 1+alpha;
             * normalize to one by dividing all the entries by 1+alpha */
            for (Map.Entry<Integer, Double> entry : probs.entrySet()) {
                entry.setValue(entry.getValue() / (1 + alpha));
                if (entry.getValue() < smallestValue) {
                    smallestEntry = entry;
                    smallestValue = entry.getValue();
                }
            }

            if (probs.size() >= maxSetSize) {
                core.Debug.p("Probsize: " + probs.size() + " dropping " +
                        probs.remove(smallestEntry.getKey()));
            }
        }
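
    A short usage sketch (probSet is a hypothetical instance with alpha = 1; this assumes getProbFor returns 0 for an index that is not yet in the map):

        probSet.updateMeetingProbFor(3); // probs = {3=1.0} (first entry, no normalization)
        probSet.updateMeetingProbFor(7); // probs = {3=0.5, 7=0.5}
        probSet.updateMeetingProbFor(3); // probs = {3=0.75, 7=0.25}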

     Threshold calculation:

        /**
         * Calculates and returns the current threshold value for the buffer's split
         * based on the average number of bytes transferred per transfer opportunity
         * and the hop counts of the messages in the buffer. Method is public only
         * to make testing easier.
         * @return current threshold value (hop count) for the buffer's split
         */
        public int calcThreshold() {
            /* b, x and p refer to respective variables in the paper's equations */
            int b = this.getBufferSize();
            int x = this.avgTransferredBytes;
            int p;

            if (x == 0) {
                /* can't calc the threshold because there's no transfer data */
                return 0;
            }

            /* calculates the portion (bytes) of the buffer selected for priority */
            if (x < b/2) {
                p = x;
            }
            else if (b/2 <= x && x < b) {
                p = Math.min(x, b-x);
            }
            else {
                return 0; // no need for the threshold
            }

            /* creates a copy of the messages list, sorted by hop count */
            ArrayList<Message> msgs = new ArrayList<Message>();
            msgs.addAll(getMessageCollection());
            if (msgs.size() == 0) {
                return 0; // no messages -> no need for threshold
            }
            /* anonymous comparator class for hop count comparison */
            Comparator<Message> hopCountComparator = new Comparator<Message>() {
                public int compare(Message m1, Message m2) {
                    return m1.getHopCount() - m2.getHopCount();
                }
            };
            Collections.sort(msgs, hopCountComparator);

            /* finds the first message that is beyond the calculated portion */
            int i = 0;
            for (int n = msgs.size(); i < n && p > 0; i++) {
                p -= msgs.get(i).getSize();
            }

            i--; // the last round moved i one index too far
            if (i < 0) {
                return 0;
            }

            /* now i points to the first packet that exceeds portion p;
             * the threshold is that packet's hop count + 1 (so that packet and
             * perhaps some more are included in the priority part) */
            return msgs.get(i).getHopCount() + 1;
        }
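
    The threshold is then used to split the buffer ordering: messages whose hop count is below the threshold are prioritized and sorted by hop count, while the remaining messages are ordered by their destinations' path costs. A minimal sketch of such an ordering (costOf is a hypothetical helper returning a message destination's path cost; this is not the simulator's actual comparator):

        final int threshold = calcThreshold();
        Collections.sort(msgs, new Comparator<Message>() {
            public int compare(Message m1, Message m2) {
                boolean lowHop1 = m1.getHopCount() < threshold;
                boolean lowHop2 = m2.getHopCount() < threshold;
                if (lowHop1 != lowHop2) {
                    return lowHop1 ? -1 : 1; // low hop-count messages come first
                }
                if (lowHop1) {
                    return m1.getHopCount() - m2.getHopCount(); // both low: fewer hops first
                }
                return Double.compare(costOf(m1), costOf(m2)); // both high: cheaper path first
            }
        });
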
  • Original post: https://www.cnblogs.com/jcleung/p/2072316.html