Thinking for Myself: Multithreading (II)

As we learned previously, the scheduling between threads is not controllable, so when we write multithreaded programs we must assume that threads run out of order; otherwise we will run into thread-safety problems.

Why do I say that? Because when multiple threads are running, you cannot tell exactly which one is executing at any moment: thread A may execute one line of code, the scheduler may switch to thread B for a line, then switch back to A for another line. All of that is possible. Do not think "my code is short, one or two lines don't need a lock"; a multithreaded program must be rigorous.
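As a minimal illustration (a hypothetical example, not from the original post): even the one-line statement count++ is really a read, an increment, and a write, and two threads can interleave those steps and lose updates.

using System;
using System.Threading;

class RaceDemo
{
    static int _count;

    static void Main()
    {
        // count++ is not atomic: it is a read, an increment, and a write.
        // Two threads can read the same old value and overwrite each other.
        var t1 = new Thread(() => { for (int i = 0; i < 100000; i++) _count++; });
        var t2 = new Thread(() => { for (int i = 0; i < 100000; i++) _count++; });
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();

        // Expected 200000, but the actual result is usually smaller.
        Console.WriteLine(_count);
    }
}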

How do we ensure that rigor?

Whenever your program uses a shared resource, that is, whenever multiple threads may touch the same variable or the same piece of memory, you must make sure the code that operates on it executes serially. For example, take the following code:

public class DbActionQueue : IDisposable
{
    public Queue<Action> _transQueue;
    private Thread _thread;
    private bool _isDispose = false;
    private static readonly object _syncObject = new object();
    private readonly object _syncQueueObject = new object();
    private static DbActionQueue _instance;

    public static DbActionQueue Instance
    {
        get
        {
            if (_instance == null)
            {
                lock (_syncObject)
                {
                    if (_instance == null)
                    {
                        _instance = new DbActionQueue();
                    }
                }
            }
            return _instance;
        }
    }

    private DbActionQueue()
    {
        if (_transQueue == null)
        {
            _transQueue = new Queue<Action>();
        }
        if (_thread == null)
        {
            _thread = new Thread(Thread_Work) { IsBackground = true };
        }
        _thread.Start();
    }

    public void Push(Action action)
    {
        if (_transQueue == null)
            throw new ArgumentNullException("DbActionQueue is not init");
        lock (_syncQueueObject)
        {
            _transQueue.Enqueue(action);
        }
    }

    public void Thread_Work()
    {
        while (!_isDispose)
        {
            Action[] items = null;
            if (_transQueue != null && _transQueue.Count > 0)
            {
                lock (_syncQueueObject)
                {
                    items = new Action[_transQueue.Count];
                    _transQueue.CopyTo(items, 0);
                    _transQueue.Clear();
                }
            }
            if (items != null && items.Length > 0)
            {
                foreach (var item in items)
                {
                    try
                    {
                        item.Invoke();
                    }
                    catch (Exception ex)
                    {
                        LogHelper.Write(string.Format(
                            "DbActionQueue error. | Exception.StackTrace:{0}", ex.StackTrace), ex);
                    }
                }
            }
            Thread.Sleep(1);
        }
    }

    public void Dispose()
    {
        _isDispose = true;
        _thread.Join();
    }
}
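For reference, pushing work onto the queue looks roughly like this (a hypothetical usage sketch; SaveArticle is an assumed method, not part of the original code):

// Queue a database write; the queue's background thread executes it serially.
DbActionQueue.Instance.Push(() => SaveArticle(article));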

I take the lock when I Enqueue, and I take the lock again around CopyTo/Clear. One thing worth pointing out: when you lock a block of logic, you must always lock on the same object, otherwise the lock is meaningless. Why lock here at all? What goes wrong if I don't?
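A minimal sketch of that mistake (hypothetical, not from the original code): if two code paths lock different objects, they do not exclude each other at all.

using System;
using System.Collections.Generic;

class WrongLocking
{
    private readonly object _lockA = new object();
    private readonly object _lockB = new object();
    private readonly Queue<Action> _queue = new Queue<Action>();

    public void Writer(Action action)
    {
        // Holds _lockA...
        lock (_lockA) { _queue.Enqueue(action); }
    }

    public void Drain()
    {
        // ...but this holds _lockB, so Writer and Drain can still touch
        // _queue at the same time: the two locks exclude nothing.
        lock (_lockB) { _queue.Clear(); }
    }
}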

In the unlocked case, the first problem is data loss. Suppose one thread has just executed the CopyTo line when another thread executes Enqueue; when the first thread then continues and runs Clear, the freshly enqueued item is wiped out along with everything else, which is equivalent to losing a piece of data.
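Spelled out as an interleaving (a hypothetical timeline, assuming the unlocked version of Thread_Work above):

// Worker thread                          // Producer thread
// items = new Action[_transQueue.Count];
// _transQueue.CopyTo(items, 0);          // snapshot taken
//                                        _transQueue.Enqueue(newAction); // arrives after the snapshot
// _transQueue.Clear();                   // wipes newAction: never copied, now gone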

If the code changes a little bit:

while (!_isDispose)
{
    Action item = null;
    lock (_syncQueueObject)
    {
        if (_transQueue != null && _transQueue.Count > 0)
        {
            item = _transQueue.Dequeue();
        }
    }
    if (item != null)
    {
        item.Invoke();
    }
    Thread.Sleep(1); // avoid spinning hot when the queue is empty
}

Notice that the item.Invoke() call is placed outside the lock; as discussed in the previous post, holding a lock while running arbitrary work causes a series of problems. And if I am only taking a single item at a time, can I drop the lock altogether?

No. When one thread is inside Enqueue while another runs Dequeue, a plain Queue can exhibit unpredictable bugs, similar to a database's dirty reads and phantom reads. You can replace it with ConcurrentQueue to solve that problem, but note: in the batch-fetch case (copy then clear), switching to ConcurrentQueue still suffers the data-loss problem described above, because thread scheduling is not controllable. As for how ConcurrentQueue achieves its thread safety, whether via atomic operations or spin locks, I have not found specific documentation, and it is not the topic here. One more point: the purpose of batch fetching is to avoid locking too frequently. How many items to take per batch is up to you; here I drain everything at once, but you could cap it at 10, 20, 50, and so on.
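For the single-item case, a lock-free variant could look like this (a minimal sketch assuming .NET's System.Collections.Concurrent; TryDequeue removes one item atomically, so no explicit lock is needed):

using System;
using System.Collections.Concurrent;
using System.Threading;

class ConcurrentActionQueue
{
    private readonly ConcurrentQueue<Action> _transQueue = new ConcurrentQueue<Action>();
    private volatile bool _isDispose;

    public void Push(Action action) => _transQueue.Enqueue(action);

    public void ThreadWork()
    {
        while (!_isDispose)
        {
            // TryDequeue atomically removes one item or returns false if empty;
            // this covers the single-item case without a lock.
            if (_transQueue.TryDequeue(out Action item))
            {
                item.Invoke();
            }
            else
            {
                Thread.Sleep(1);
            }
        }
    }
}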

Because thread scheduling is not controllable, coordinating multiple threads becomes unusually hard to get right. So when programming, try to avoid situations where multiple threads must cooperate; and if it cannot be avoided, do not take it for granted that your code will run in the order you imagine. Let me give you an example:

Roughly, there is a network module that receives a message from a client, hands it to a worker thread via a queue, and after the worker finishes processing, passes the result to the sending thread. The core code is as follows:

protected virtual void ReceiveCallback(string ip, int port, string url, bool isLargePack,
    IntPtr streamHandle, long streamSize, IntPtr bodyData, int bodySize, IntPtr responseHandle)
{
    // Initialize a thread wait event (signal)
    AutoResetEvent autoEvent = null;
    // Enable asynchronous processing (this module supports both sync and async)
    if (!this._isSync)
    {
        autoEvent = new AutoResetEvent(false);
    }
    // Read data from streamHandle
    var data = Read2Byte(streamHandle, bodyData, streamSize, bodySize, isLargePack);
    // Convert to the internal protocol format (BSON)
    var obj = BsonHelper.ToObject<Communication>(data);
    // Received is an Action<Communication, IntPtr, object>
    if (Received != null)
    {
        Received.Invoke(obj, responseHandle, autoEvent);
    }
    // Block until the signal is received
    if (autoEvent != null)
    {
        autoEvent.WaitOne(this._timeout);
    }
}

The Received.Invoke call above invokes an Action whose target is the following method:

public void InvokeCommand(Communication obj, IntPtr connect, object e)
{
    // Data integrity check
    if (obj == null || string.IsNullOrEmpty(obj.Command))
    {
        obj = new Communication
        {
            Command = "ErrorCommand",
            Body = new Newtonsoft.Json.Linq.JObject()
        };
        obj.Body["Token"] = Guid.NewGuid().ToString();
    }
    var unit = new InternelUnit
    {
        Event = e,
        Packet = obj,
        Connection = connect
    };
    // Synchronous?
    if (this._isSync)
    {
        this.RequestCallback(unit);
    }
    else
    {
        // Put into the business processing queue
        RequestQueueManage.Instance.Push(unit);
    }
}

Taken together, these two pieces of code mean: the network module receives a message and drops it onto the worker thread's queue. The catch is lifetime control: the response handle (responseHandle) is only valid inside the method body, and once the method returns, the handle is freed. That is why, after pushing the unit onto the worker queue, we wait on the event with WaitOne(): the intent is to block until the sending thread finishes its work and releases the signal. The sending code is as follows:

public void ResponseCallback(InternelUnit unit)
{
    // Should the packet be dropped into the lost-packet pool?
    if (unit.IsInLastPackPool)
    {
        Core.LostPacketPool.LostPacketPool.Instance.Push(ref unit);
    }
    // Serialize to byte[] according to the protocol
    var repBson = BsonHelper.ToBson(unit.Packet);
    // Is encryption enabled?
    if (this._isEncrypt)
    {
        repBson = EncryptHelper.Decrypt(repBson, repBson.Length);
    }
    // Send
    Network.NetworkHelper.Send(unit.Connection, repBson, unit.ID);
    // Is async enabled?
    if (!_isSync)
    {
        // Release the signal
        (unit.Event as System.Threading.AutoResetEvent).Set();
    }
}

This code works in most cases, but as we just said, thread scheduling is not controllable, so we cannot guarantee that after Received.Invoke() returns, the receiving thread keeps running and reaches WaitOne() first. If, right after Received.Invoke(), the scheduler switches to the business thread, it is possible that Set() runs first to release the signal and WaitOne() runs afterwards, leaving the receive thread blocked in what is effectively a deadlock. Fortunately we do timeout control, so there is no absolute deadlock (though for that request the outcome is the same).

So as written, this program is not rigorous, and it will produce plenty of inexplicable timeouts. When a program really does require cooperation between multiple threads, prefer handling the result as a callback, and manage lifetimes explicitly so resources are not freed prematurely.
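As a rough illustration of the callback style (a self-contained hypothetical sketch, not this module's real API): the work item carries a continuation that performs the send when processing is done, so the receiving thread never has to block on a signal.

using System;
using System.Collections.Concurrent;
using System.Threading;

class CallbackDemo
{
    static readonly BlockingCollection<Action> Queue = new BlockingCollection<Action>();

    static void Main()
    {
        // Worker thread: processes items and then invokes their callbacks.
        new Thread(() =>
        {
            foreach (var work in Queue.GetConsumingEnumerable())
                work();
        }) { IsBackground = true }.Start();

        // "Receive" a message: enqueue processing plus a send callback,
        // then return immediately instead of calling WaitOne().
        Receive("hello", reply => Console.WriteLine("sent: " + reply));
        Thread.Sleep(100); // let the worker run in this demo
    }

    static void Receive(string message, Action<string> send)
    {
        Queue.Add(() =>
        {
            var reply = message.ToUpperInvariant(); // stand-in for business logic
            send(reply);                            // the callback replaces the signal
        });
    }
}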

Here is a more common example: a like (upvote) counter.

// Get the article object from the cache
var article = CacheHelper.Get(articleId);
// Like +1
article.Up++;
// Write back to the cache. Because article is a reference type, if the cache is
// held inside your own process (e.g. a Dictionary), this step can be omitted.
// CacheHelper.Set(articleId, article);

This is a very simple counter, but when multiple users like the article at the same time, the program may get the count wrong (for the reasons already covered). So we decide to add a lock; the code is as follows:

private static readonly object _syncObject = new object();

lock (_syncObject)
{
    // like counter +1
    article.Up++;
}

One thing to note here: unless you know exactly what you are doing, do not write lock(this), because this refers to the current instance, and with multiple threads there may be multiple instances, in which case the threads are not locking on the same object.
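For a plain counter there is also a lighter alternative than lock (a minimal sketch, assuming Up is an int field rather than a property): Interlocked performs the read-modify-write as one atomic operation.

using System.Threading;

// Atomically increments article.Up without taking a lock.
// (Requires Up to be an int field, not a property.)
Interlocked.Increment(ref article.Up);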

At this point your code looks fine, but if your program is deployed on multiple machines, the miscount will reappear, because locks on two machines are not the same object. Now you may need the database, or a third-party middleware (such as Redis), to act as the single point of coordination that guarantees data consistency. Another option is to shard by articleId, routing all like operations for the same article to the same machine; that works too.
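As an illustration of the middleware approach (a hedged sketch assuming the StackExchange.Redis client; the connection string and key name are made up): Redis executes INCR atomically on the server, so every machine shares one counter.

using StackExchange.Redis;

class LikeCounter
{
    // One shared Redis server acts as the central coordinator for all machines.
    static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("localhost:6379");

    public static long Like(string articleId)
    {
        var db = Redis.GetDatabase();
        // INCR is atomic on the Redis server, so concurrent likes
        // from different machines cannot lose updates.
        return db.StringIncrement("article:up:" + articleId);
    }
}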

Similarly, the same issue arises when we put DB data behind a cache. For example, we have the following code:

var list = CacheHelper.Get(key);
if (list == null)
{
    list = GetListFromDb(xxx);
    CacheHelper.Set(key, list); // cache the result for later readers
}
return list;

The problem with this code is that when the underlying data changes, GetListFromDb() on different machines may return different lists at different times, so you may need some periodic synchronization. And if many threads read at once while the cache is empty, multiple threads will all go to the DB for the data at the same time, which is not what we want to see. So we add a lock:

var list = CacheHelper.Get(key);
if (list == null)
{
    lock (_syncObject)
    {
        // Re-check: another thread may have filled the cache while we waited.
        list = CacheHelper.Get(key);
        if (list == null)
        {
            list = GetListFromDb(xxx);
            CacheHelper.Set(key, list);
        }
    }
}
return list;

Why the double check? Because while you were waiting for the lock, a previous thread may already have fetched and cached the data; checking again prevents every waiting thread from going to the DB anyway. Since fetching from the DB is slow, you can still get the cycle of thread scheduling, locking, and switching described in the previous article, so use lock with caution.

Thread safety is, at its core, the problem that thread scheduling is not controllable; what we must guarantee is that wherever shared resources are handled, the code executes as a block, linearly.
