Folly lock-free queue: trying to add a new function (continued)



As described in the previous article, after dropHead extracts a node and that node is deleted, memory access problems (use-after-free) can occur. Following that logic, if we save removed nodes in a standby lock-free queue instead of deleting them, and take nodes back from the standby queue whenever a new node is needed, we should be able to avoid the earlier problems. What remains to be judged is how long the standby lock-free queue grows while the program runs, and whether it ends up consuming too many resources.

The modified folly code is as follows:

/*
 * Copyright 2014-present Facebook, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#pragma once

#include <atomic>
#include <cassert>
#include <utility>

namespace folly {

/**
 * A very simple atomic single-linked list primitive.
 *
 * Usage:
 *
 * class MyClass {
 *   AtomicIntrusiveLinkedListHook<MyClass> hook_;
 * }
 *
 * AtomicIntrusiveLinkedList<MyClass, &MyClass::hook_> list;
 * list.insert(&a);
 * list.sweep([] (MyClass* c) { doSomething(c); });
 */
template <class T>
struct AtomicIntrusiveLinkedListHook {
  T* next{nullptr};
};

template <class T, AtomicIntrusiveLinkedListHook<T> T::*HookMember>
class AtomicIntrusiveLinkedList {
 public:
  AtomicIntrusiveLinkedList() {}
  AtomicIntrusiveLinkedList(const AtomicIntrusiveLinkedList&) = delete;
  AtomicIntrusiveLinkedList& operator=(const AtomicIntrusiveLinkedList&) =
      delete;
  AtomicIntrusiveLinkedList(AtomicIntrusiveLinkedList&& other) noexcept {
    auto tmp = other.head_.load();
    other.head_ = head_.load();
    head_ = tmp;
  }
  AtomicIntrusiveLinkedList& operator=(
      AtomicIntrusiveLinkedList&& other) noexcept {
    auto tmp = other.head_.load();
    other.head_ = head_.load();
    head_ = tmp;
    return *this;
  }

  /**
   * Note: list must be empty on destruction.
   */
  ~AtomicIntrusiveLinkedList() {
    assert(empty());
  }

  bool empty() const {
    return head_.load() == nullptr;
  }

  /**
   * Atomically insert t at the head of the list.
   * @return True if the inserted element is the only one in the list
   *         after the call.
   */
  bool insertHead(T* t) {
    assert(next(t) == nullptr);
    auto oldHead = head_.load(std::memory_order_relaxed);
    do {
      next(t) = oldHead;
      /* oldHead is updated by the call below.
         NOTE: we don't use next(t) instead of oldHead directly due to
         compiler bugs (GCC prior to 4.8.3 (bug 60272), clang (bug 18899),
         MSVC (bug 819819); source:
         http://en.cppreference.com/w/cpp/atomic/atomic/compare_exchange */
    } while (!head_.compare_exchange_weak(
        oldHead, t, std::memory_order_release, std::memory_order_relaxed));
    return oldHead == nullptr;
  }

  /**
   * Replaces the head with nullptr,
   * and calls func() on the removed elements in the order from tail to head.
   * Returns false if the list was empty.
   */
  template <typename F>
  bool sweepOnce(F&& func) {
    if (auto head = head_.exchange(nullptr)) {
      auto rhead = reverse(head);
      unlinkAll(rhead, std::forward<F>(func));
      return true;
    }
    return false;
  }

  // new function
  // This is the version where std::memory_order_acquire applies to the
  // read of next(oldHead) (oldHead being the first argument of
  // compare_exchange_weak). I don't know whether the following bugs
  // affect this code: GCC prior to 4.8.3 (bug 60272), clang prior to
  // 2014-05-05 (bug 18899), MSVC prior to 2014-03-17 (bug 819819).
  T* sweepHead() {
    // handle the case where the list is not empty
    auto oldHead = head_.load(std::memory_order_relaxed);
    while (oldHead != nullptr &&
           !head_.compare_exchange_weak(oldHead, next(oldHead),
                                        std::memory_order_acquire,
                                        std::memory_order_relaxed)) {
    }
    // if the head was dropped successfully
    if (oldHead) {
      next(oldHead) = nullptr;
      return oldHead;
    }
    return nullptr;
  }

  // new function
  // This is the version where std::memory_order_acquire does not apply
  // to next(oldHead). I don't know whether the following bugs affect
  // this code: GCC prior to 4.8.3 (bug 60272), clang prior to
  // 2014-05-05 (bug 18899), MSVC prior to 2014-03-17 (bug 819819).
  T* dropHead() {
    T* oldHead = nullptr;
    // handle the case where the list is not empty
    while ((oldHead = head_.load(std::memory_order_acquire))) {
      assert(oldHead != nullptr);
      T* nextHead = next(oldHead);
      // insert and drop both go through head_: they change head_ first,
      // then everything else
      bool res = head_.compare_exchange_weak(oldHead, nextHead,
                                             std::memory_order_relaxed,
                                             std::memory_order_relaxed);
      if (res && oldHead != nullptr) {
        assert(next(oldHead) == nextHead);
        next(oldHead) = nullptr;
        return oldHead;
      }
    }
    return nullptr;
  }

  /**
   * Repeatedly replaces the head with nullptr,
   * and calls func() on the removed elements in the order from tail to head.
   * Stops when the list is empty.
   */
  template <typename F>
  void sweep(F&& func) {
    while (sweepOnce(func)) {
    }
  }

  /**
   * Similar to sweep() but calls func() on elements in LIFO order.
   *
   * func() is called for all elements in the list at the moment
   * reverseSweep() is called.  Unlike sweep() it does not loop to ensure the
   * list is empty at some point after the last invocation.  This way callers
   * can reason about the ordering: elements inserted since the last call to
   * reverseSweep() will be provided in LIFO order.
   *
   * Example: if elements are inserted in the order 1-2-3, the callback is
   * invoked 3-2-1.  If the callback moves elements onto a stack, popping off
   * the stack will produce the original insertion order 1-2-3.
   */
  template <typename F>
  void reverseSweep(F&& func) {
    // We don't loop like sweep() does because the overall order of callbacks
    // would be strand-wise LIFO which is meaningless to callers.
    auto head = head_.exchange(nullptr);
    unlinkAll(head, std::forward<F>(func));
  }

 private:
  std::atomic<T*> head_{nullptr};

  static T*& next(T* t) {
    return (t->*HookMember).next;
  }

  /* Reverses a linked list, returning the pointer to the new head
     (old tail) */
  static T* reverse(T* head) {
    T* rhead = nullptr;
    while (head != nullptr) {
      auto t = head;
      head = next(t);
      next(t) = rhead;
      rhead = t;
    }
    return rhead;
  }

  /* Unlinks all elements in the linked list fragment pointed to by `head',
   * calling func() on every element */
  template <typename F>
  void unlinkAll(T* head, F&& func) {
    while (head != nullptr) {
      auto t = head;
      head = next(t);
      next(t) = nullptr;
      func(t);
    }
  }
};

} // namespace folly

The following code is used for testing:

#include <memory>
#include <cassert>
#include <iostream>
#include <vector>
#include <thread>
#include <future>
#include <random>
#include <cmath>
#include <ctime>
#include "folly.h"

using namespace folly;

struct student_name {
    student_name(int age = 0) : age(age) {}
    int age;
    AtomicIntrusiveLinkedListHook<student_name> node;
};

using ATOMIC_STUDENT_LIST =
    AtomicIntrusiveLinkedList<student_name, &student_name::node>;

ATOMIC_STUDENT_LIST g_students;
ATOMIC_STUDENT_LIST g_backStudents;

int g_backSize = 0;             // counts the size of g_backStudents
std::atomic<int> g_inserts;     // number of successful inserts
std::atomic<int> g_drops;       // number of successful drops
std::atomic<int> g_printNum;    // should equal g_drops
std::atomic<long> g_ageInSum;   // age sum when producing student_name
std::atomic<long> g_ageOutSum;  // age sum when consuming student_name

// keep this below ~20,000,000: 20,000,000 * 100 ~= INT_MAX
constexpr int HANDLE_NUM = 2000000;
constexpr int PRODUCE_THREAD_NUM = 3;  // producing thread number
constexpr int CONSUME_THREAD_NUM = 3;  // consuming thread number

inline void printOne(student_name* t) {
    g_printNum.fetch_add(1, std::memory_order_relaxed);
    g_ageOutSum.fetch_add(t->age, std::memory_order_relaxed);
    // instead of cleaning up the node with `delete t;`,
    // hand it to the standby queue for recycling
    g_backStudents.insertHead(t);
}

void eraseOne(student_name* t) {
    ++g_backSize;
    delete t;
}

void insert_students(int idNo) {
    std::default_random_engine dre(time(nullptr));
    std::uniform_int_distribution<int> ageDi(1, 99);
    while (true) {
        int newAge = ageDi(dre);
        g_ageInSum.fetch_add(newAge, std::memory_order_relaxed);
        // take a recycled node from the standby queue if possible,
        // otherwise allocate a new one
        auto ns = g_backStudents.dropHead();
        if (ns == nullptr) {
            ns = new student_name(newAge);
        } else {
            ns->age = newAge;  // refresh the recycled node's age so the
                               // in/out age sums stay comparable
        }
        g_students.insertHead(ns);
        // memory_order_relaxed so the counters don't affect folly's
        // memory ordering
        g_inserts.fetch_add(1, std::memory_order_relaxed);
        if (g_inserts.load(std::memory_order_relaxed) >= HANDLE_NUM) {
            return;
        }
    }
}

void drop_students(int idNo) {
    while (true) {
        auto st = g_students.dropHead();
        if (st) {
            printOne(st);
            g_drops.fetch_add(1, std::memory_order_relaxed);
        }
        if (g_drops.load(std::memory_order_relaxed) >= HANDLE_NUM) {
            return;
        }
    }
}

int main() {
    std::vector<std::future<void>> insert_threads;
    for (int i = 0; i != PRODUCE_THREAD_NUM; ++i) {
        insert_threads.push_back(
            std::async(std::launch::async, insert_students, i));
    }
    std::vector<std::future<void>> drop_threads;
    for (int i = 0; i != CONSUME_THREAD_NUM; ++i) {
        drop_threads.push_back(
            std::async(std::launch::async, drop_students, i));
    }
    for (auto& item : insert_threads) {
        item.get();
    }
    for (auto& item : drop_threads) {
        item.get();
    }

    std::cout << "insert count1: " << g_inserts.load() << std::endl;
    std::cout << "drop count1: " << g_drops.load() << std::endl;
    std::cout << "print num1: " << g_printNum.load() << std::endl;
    std::cout << "age in1: " << g_ageInSum.load() << std::endl;
    std::cout << "age out1: " << g_ageOutSum.load() << std::endl;
    std::cout << std::endl;

    // drain whatever is still left in g_students
    while (true) {
        auto st = g_students.dropHead();
        if (st) {
            printOne(st);
            g_drops.fetch_add(1, std::memory_order_relaxed);
        }
        if (g_students.empty()) {
            break;
        }
    }

    std::cout << "insert count2: " << g_inserts.load() << std::endl;
    std::cout << "drop count2: " << g_drops.load() << std::endl;
    std::cout << "print num2: " << g_printNum.load() << std::endl;
    std::cout << "age in2: " << g_ageInSum.load() << std::endl;
    std::cout << "age out2: " << g_ageOutSum.load() << std::endl;

    // empty the standby queue and report how large it grew
    g_backStudents.sweepOnce(eraseOne);
    std::cout << "back students size: " << g_backSize << std::endl;
}

The test result shows:

The assert(next(oldHead) == nextHead) in the dropHead function in folly.h fires, which surprised me. After thinking it through carefully, I found some possible problems.

Description:

Assume two consumer threads (both running drop_students) are taking nodes from g_students, and both have just read the head and its successor inside dropHead: oldHead is node a, and nextHead, read via next(oldHead), is node b. One of them, thread A, is interrupted before its compare_exchange_weak. The other, thread B, completes the CAS, removes node a (whose next points to node b), and inserts it into g_backStudents; thread B then also removes node b (whose next points to node c), so head_ now points to node c. Next, a producer thread (thread C, running insert_students) takes node a from g_backStudents and inserts it back at the head of g_students. When thread A resumes, its compare_exchange_weak succeeds, because head_ once again equals node a, and head_ is set to thread A's stale nextHead, node b, when in fact it should point to node c. Two places now refer to node b (g_students via head_ and g_backStudents via its chain), and the program can fail. This is the classic ABA problem.
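To see the failure concretely, the interleaving can be replayed by hand in a single thread. The sketch below is illustrative only: a plain pointer list stands in for folly's class, the node letters match the description above, and the schedule is simulated by ordinary statements. It ends by checking the same condition as the assert in dropHead.

#include <cassert>
#include <cstdio>

struct Node {
    char name;
    Node* next{nullptr};
};

int main() {
    Node a{'a'}, b{'b'}, c{'c'};
    a.next = &b;
    b.next = &c;
    Node* head = &a;  // g_students: head -> a -> b -> c

    // thread A, inside dropHead: takes its snapshot, then is interrupted
    Node* a_oldHead = head;              // node a
    Node* a_nextHead = a_oldHead->next;  // node b

    // thread B pops node a, then node b; both go to the standby queue
    head = head->next;  // head -> b -> c
    head = head->next;  // head -> c
    a.next = nullptr;
    b.next = nullptr;

    // thread C recycles node a from the standby queue and re-inserts it
    a.next = head;  // next(a) = c
    head = &a;      // head -> a -> c

    // thread A resumes: its CAS succeeds because head == node a again,
    // and swings head to the stale snapshot, node b
    if (head == a_oldHead) {
        head = a_nextHead;  // head -> b: the list is now corrupted
    }

    std::printf("head points to node %c, but should point to node %c\n",
                head->name, 'c');
    // the same condition as assert(next(oldHead) == nextHead) in dropHead:
    assert(a_oldHead->next == a_nextHead && "ABA: the assert fires");
    return 0;
}

Running a debug build prints that head points to node b instead of node c, and the final assert aborts, just like the assert in dropHead does in the real test.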

Of course, what I described is just one problem; there are many similar interleavings, which I will not enumerate one by one. With more threads the scenario above clearly becomes easier to hit: it only takes one thread being interrupted at the point described, and as the thread count grows, more and more such cases become possible. My point here is just to demonstrate that the implementation above is unsound. The problem described in the first comment on the previous article can be analyzed in the same way: replace "inserted into g_backStudents" with "deleted", and replace "the node is retrieved from g_backStudents" with "a new node is allocated at the deleted node's address" (the probability is small, but it exists).

Here I have only shown one error. If the next pointer were changed to a shared_ptr, the problem could probably be solved in a C++20 environment (which provides std::atomic<std::shared_ptr>), but the performance loss and extra memory use that this modification brings run contrary to the reason for using a lock-free queue in the first place. In that situation, replacing the atomic operations with a spin lock might be the better choice.
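For reference, here is a minimal sketch of the shared_ptr idea, assuming a C++20 compiler with std::atomic<std::shared_ptr<T>>. SharedPtrStack below is a hypothetical illustration, not folly code: because a popping thread holds a shared_ptr to the old head, the node cannot be destroyed or have its memory reused while the CAS loop still references it, which is exactly what the ABA scenario above exploits.

#include <atomic>
#include <memory>
#include <utility>

template <class T>
class SharedPtrStack {
 public:
  struct Node {
    T value;
    std::shared_ptr<Node> next;
  };

  void push(T v) {
    auto n = std::make_shared<Node>();
    n->value = std::move(v);
    n->next = head_.load(std::memory_order_relaxed);
    // on failure, compare_exchange_weak reloads the current head into
    // n->next, so each retry links against the fresh head
    while (!head_.compare_exchange_weak(n->next, n,
                                        std::memory_order_release,
                                        std::memory_order_relaxed)) {
    }
  }

  // returns nullptr if the stack is empty
  std::shared_ptr<Node> pop() {
    auto old = head_.load(std::memory_order_acquire);
    // `old` keeps the node alive: its memory cannot be freed or reused
    // while we still hold a reference, so the ABA corruption above
    // cannot occur
    while (old && !head_.compare_exchange_weak(old, old->next,
                                               std::memory_order_acquire,
                                               std::memory_order_relaxed)) {
    }
    return old;
  }

 private:
  std::atomic<std::shared_ptr<Node>> head_;  // default-initialized to nullptr
};

The price is one control block per node plus atomic reference counting on every push and pop (and most implementations of std::atomic<std::shared_ptr> are not lock-free anyway), which is exactly the overhead and extra memory use mentioned above.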

So I have not pursued this further for the moment. If you are interested, feel free to try it and share whatever you discover with me.
