• On the correctness of folly's lock-free queue


    folly's lock-free queue is a lock-free queue open-sourced by Facebook. It is built on a singly-linked list and uses compare_exchange to implement a multi-producer, multi-consumer queue. I have spent quite a bit of time studying the memory_order documentation and consider myself reasonably familiar with release-acquire semantics: if one thread writes an atomic object with std::memory_order_release and another thread reads it with std::memory_order_acquire, the two threads establish a synchronization relationship, and the effects of writes made before the std::memory_order_release store are visible after the std::memory_order_acquire load. But the multi-producer, multi-consumer model has several producers; with more than one producer, is the result still correct?
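
    As a quick refresher on this release-acquire rule, here is a minimal sketch of my own (it is not folly code; the names payload and ready are invented for the example): a consumer that observes the release store through an acquire load is guaranteed to also see the plain write made before that store.

    #include <atomic>
    #include <cassert>
    #include <thread>

    int payload = 0;                   // plain, non-atomic data
    std::atomic<bool> ready{ false };

    void producer() {
        payload = 42;                                  // written before the release store
        ready.store(true, std::memory_order_release);  // publishes payload
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire)) {
            // spin until the release store becomes visible
        }
        assert(payload == 42);  // guaranteed by the synchronizes-with edge
    }

    int main() {
        std::thread t1(producer);
        std::thread t2(consumer);
        t1.join();
        t2.join();
    }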

    Below is folly's source code; please pay particular attention to the insertHead and sweepOnce functions.

    /*
    * Copyright 2014-present Facebook, Inc.
    *
    * Licensed under the Apache License, Version 2.0 (the "License");
    * you may not use this file except in compliance with the License.
    * You may obtain a copy of the License at
    *
    *   http://www.apache.org/licenses/LICENSE-2.0
    *
    * Unless required by applicable law or agreed to in writing, software
    * distributed under the License is distributed on an "AS IS" BASIS,
    * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    * See the License for the specific language governing permissions and
    * limitations under the License.
    */
    
    #pragma once
    
    #include <atomic>
    #include <cassert>
    #include <utility>
    
    namespace folly {
    
        /**
        * A very simple atomic single-linked list primitive.
        *
        * Usage:
        *
        * class MyClass {
        *   AtomicIntrusiveLinkedListHook<MyClass> hook_;
        * }
        *
        * AtomicIntrusiveLinkedList<MyClass, &MyClass::hook_> list;
        * list.insertHead(&a);
        * list.sweep([] (MyClass* c) { doSomething(c); });
        */
        template <class T>
        struct AtomicIntrusiveLinkedListHook {
            T* next{ nullptr };
        };
    
        template <class T, AtomicIntrusiveLinkedListHook<T> T::*HookMember>
        class AtomicIntrusiveLinkedList {
        public:
            AtomicIntrusiveLinkedList() {}
            AtomicIntrusiveLinkedList(const AtomicIntrusiveLinkedList&) = delete;
            AtomicIntrusiveLinkedList& operator=(const AtomicIntrusiveLinkedList&) =
                delete;
            AtomicIntrusiveLinkedList(AtomicIntrusiveLinkedList&& other) noexcept {
                auto tmp = other.head_.load();
                other.head_ = head_.load();
                head_ = tmp;
            }
            AtomicIntrusiveLinkedList& operator=(
                AtomicIntrusiveLinkedList&& other) noexcept {
                auto tmp = other.head_.load();
                other.head_ = head_.load();
                head_ = tmp;
    
                return *this;
            }
    
            /**
            * Note: list must be empty on destruction.
            */
            ~AtomicIntrusiveLinkedList() {
                assert(empty());
            }
    
            bool empty() const {
                return head_.load() == nullptr;
            }
    
            /**
            * Atomically insert t at the head of the list.
            * @return True if the inserted element is the only one in the list
            *         after the call.
            */
            bool insertHead(T* t) {
                assert(next(t) == nullptr);
    
                auto oldHead = head_.load(std::memory_order_relaxed);
                do {
                    next(t) = oldHead;
                    /* oldHead is updated by the call below.
                    NOTE: we don't use next(t) instead of oldHead directly due to
                    compiler bugs (GCC prior to 4.8.3 (bug 60272), clang (bug 18899),
                    MSVC (bug 819819); source:
                    http://en.cppreference.com/w/cpp/atomic/atomic/compare_exchange */
                } while (!head_.compare_exchange_weak(oldHead, t,
                    std::memory_order_release,
                    std::memory_order_relaxed));
    
                return oldHead == nullptr;
            }
    
            /**
            * Replaces the head with nullptr,
            * and calls func() on the removed elements in the order from tail to head.
            * Returns false if the list was empty.
            */
            template <typename F>
            bool sweepOnce(F&& func) {
                if (auto head = head_.exchange(nullptr)) {
                    auto rhead = reverse(head);
                    unlinkAll(rhead, std::forward<F>(func));
                    return true;
                }
                return false;
            }

            /**
            * Repeatedly replaces the head with nullptr,
            * and calls func() on the removed elements in the order from tail to head.
            * Stops when the list is empty.
            */
            template <typename F>
            void sweep(F&& func) {
                while (sweepOnce(func)) {
                }
            }
    
            /**
            * Similar to sweep() but calls func() on elements in LIFO order.
            *
            * func() is called for all elements in the list at the moment
            * reverseSweep() is called.  Unlike sweep() it does not loop to ensure the
            * list is empty at some point after the last invocation.  This way callers
            * can reason about the ordering: elements inserted since the last call to
            * reverseSweep() will be provided in LIFO order.
            *
            * Example: if elements are inserted in the order 1-2-3, the callback is
            * invoked 3-2-1.  If the callback moves elements onto a stack, popping off
            * the stack will produce the original insertion order 1-2-3.
            */
            template <typename F>
            void reverseSweep(F&& func) {
                // We don't loop like sweep() does because the overall order of callbacks
                // would be strand-wise LIFO which is meaningless to callers.
                auto head = head_.exchange(nullptr);
                unlinkAll(head, std::forward<F>(func));
            }
    
        private:
            std::atomic<T*> head_{ nullptr };
    
            static T*& next(T* t) {
                return (t->*HookMember).next;
            }
    
            /* Reverses a linked list, returning the pointer to the new head
            (old tail) */
            static T* reverse(T* head) {
                T* rhead = nullptr;
                while (head != nullptr) {
                    auto t = head;
                    head = next(t);
                    next(t) = rhead;
                    rhead = t;
                }
                return rhead;
            }
    
            /* Unlinks all elements in the linked list fragment pointed to by `head',
            * calling func() on every element */
            template <typename F>
            void unlinkAll(T* head, F&& func) {
                while (head != nullptr) {
                    auto t = head;
                    head = next(t);
                    next(t) = nullptr;
                    func(t);
                }
            }
        };
    
    } // namespace folly
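
    To make the interface concrete, here is a minimal usage sketch of my own (it is not part of folly; Task and all other names are illustrative, and the sketch assumes the class definition above is in scope). It mirrors the scenario analyzed below: two producer threads call insertHead concurrently, and a consumer then drains the list with sweep.

    #include <cstdio>
    #include <thread>
    #include <vector>

    // Illustrative element type: the intrusive hook lets the list link Task
    // objects without any extra allocation.
    struct Task {
        int id{ 0 };
        folly::AtomicIntrusiveLinkedListHook<Task> hook_;
    };

    int main() {
        folly::AtomicIntrusiveLinkedList<Task, &Task::hook_> list;

        std::vector<Task> tasks(8);
        std::vector<std::thread> producers;
        for (int p = 0; p < 2; ++p) {
            producers.emplace_back([&list, &tasks, p] {
                for (int i = 0; i < 4; ++i) {
                    Task* t = &tasks[p * 4 + i];
                    t->id = p * 4 + i;
                    list.insertHead(t);  // concurrent inserts from two producers
                }
            });
        }
        for (auto& th : producers) {
            th.join();
        }

        // The consumer drains the list; each swept batch is delivered
        // oldest-first (tail to head of the reversed strand).
        list.sweep([](Task* t) { std::printf("swept task %d\n", t->id); });
        // ~AtomicIntrusiveLinkedList asserts the list is empty, which holds here.
    }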

    Suppose two threads insert nodes into the same queue one after the other. Neither of them performs an acquire operation, so going purely by release-acquire semantics, correctness clearly cannot be guaranteed: in the later insertHead call, both auto oldHead = head_.load(std::memory_order_relaxed); and while (!head_.compare_exchange_weak(oldHead, t, std::memory_order_release, std::memory_order_relaxed)); could read the state from before the earlier thread's insertion. So what other C++ semantics guarantee the correctness of the folly queue? The answer is the release sequence. Part of what the release sequence rule says is:

    If a store uses memory_order_release or a stricter memory order and is followed by a number of read-modify-write operations (performed by the same thread or by different threads), then:

    (1) each of those read-modify-write operations reads either the result of the previous read-modify-write in the chain or the value written by the original store.

    Therefore, if a release operation is followed by a read-modify-write operation, that read-modify-write is guaranteed to observe the effect of the value written by the earlier release operation. Notice that the compare_exchange_weak in insertHead is both a release operation and a read-modify-write, so the modification made by the earlier thread is guaranteed to be visible to the later compare_exchange_weak, whether it is invoked from the same thread or from a different one. Also note that whether auto oldHead = head_.load(std::memory_order_relaxed); returns an up-to-date value has no bearing on the correctness of compare_exchange_weak: if the relaxed load returned a stale value, the compare-exchange simply fails and updates oldHead to the current value. This point is important for understanding folly's correctness. The remaining cases can be worked out along the same lines, so I will not go through them in detail here; a minimal sketch of the release-sequence rule itself follows.
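
    The sketch below is my own illustration of the rule quoted above (the names data, sync, and the thread functions are all invented for the example): thread A's release store heads a release sequence, thread B's relaxed compare_exchange_weak is a read-modify-write that continues the sequence, and thread C, whose acquire load reads the value written by B, still synchronizes with A's release store. This is essentially how a later acquire operation on head_ can synchronize with every producer in the chain.

    #include <atomic>
    #include <cassert>
    #include <thread>

    int data = 0;
    std::atomic<int> sync{ 0 };

    void threadA() {
        data = 1;
        sync.store(10, std::memory_order_release);  // heads a release sequence
    }

    void threadB() {
        int expected = 10;
        // A relaxed read-modify-write: it still continues A's release sequence.
        while (!sync.compare_exchange_weak(
            expected, 20, std::memory_order_relaxed, std::memory_order_relaxed)) {
            expected = 10;  // retry until A's store has been observed
        }
    }

    void threadC() {
        // This acquire load reads the value written by B's read-modify-write,
        // which is part of A's release sequence, so it synchronizes with A's
        // release store and is guaranteed to see data == 1.
        while (sync.load(std::memory_order_acquire) != 20) {
        }
        assert(data == 1);
    }

    int main() {
        std::thread a(threadA);
        std::thread b(threadB);
        std::thread c(threadC);
        a.join();
        b.join();
        c.join();
    }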

  • Original article: https://www.cnblogs.com/albizzia/p/9000364.html