iOS Multithreading 02: pthread, NSThread, GCD, NSOperationQueue, NSRunLoop

Source: Internet
Author: User
Tags: gcd






Translator's note: this article is a translation, with a few of my own comments added.






Key points:

1. Preface
2. pthread
3. NSThread
4. Grand Central Dispatch (GCD)
5. Operation Queues
6. Run Loops
7. Challenges in Multithreaded Programming
8. Resource Sharing
9. Mutual Exclusion Locks
10. Deadlock
11. Resource Starvation
12. Priority Inversion









1. Preface



Strictly speaking, it is not quite correct to file the run loop under "multithreading," because it cannot actually execute tasks in parallel. But it is closely tied to concurrent programming, so it deserves a thorough understanding here as well.






2. pthread



pthreads is the POSIX standard threading library. It is widely used on UNIX and Linux, and implementations exist for Windows as well.



POSIX threads define a set of C language types, functions, and constants, declared in the pthread.h header and implemented by a thread library. It is a relatively low-level API.



We won't enumerate the API here, since it is fairly low-level and tedious to use directly; for details see http://baike.baidu.com/view/974776.htm
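To give a flavor of how low-level it is, here is a minimal sketch of creating and joining a single thread with the raw C API (threadEntry and spawnExample are names invented for this illustration):

#include <pthread.h>
#include <stdio.h>

static void *threadEntry(void *context)
{
    printf("running on a pthread\n");
    return NULL;
}

void spawnExample(void)
{
    pthread_t thread;
    // pthread_create returns 0 on success.
    if (pthread_create(&thread, NULL, threadEntry, NULL) == 0) {
        pthread_join(thread, NULL); // block until the thread finishes
    }
}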






3. NSThread



NSThread is Apple's wrapper around pthreads. Thanks to this encapsulation, you can use threads much more comfortably in a Cocoa environment.



For example, we can spawn new threads like this:



NSMutableSet *_threads;

- (void)viewDidLoad {
    [super viewDidLoad];
    NSLog(@"viewDidLoad");
    // Spawn a few new threads:
    _threads = [NSMutableSet set];
    NSUInteger threadCount = 4;
    for (NSUInteger i = 0; i < threadCount; i++) {
        NSThread *t = [[NSThread alloc] initWithTarget:self selector:@selector(run) object:nil];
        [t start];
        [_threads addObject:t];
    }
}

- (void)run {
    NSLog(@"run");
}


As you can see, spawning a thread with NSThread is simpler than with pthread. However, we won't delve further into NSThread here: managing threads directly, whether via NSThread or pthread, is a poor programming experience and does not help us write maintainable code.



A more fundamental problem with using threads directly is that if both your own code and the framework code you build on spawn their own threads, the number of active threads can grow exponentially. This is a common problem in large projects. Threads do not come for free: each thread consumes memory and kernel resources.



Next, we'll cover the queue-based concurrency APIs: GCD and operation queues. They address the problem by centrally managing a thread pool that everyone shares.






4. Grand Central Dispatch (GCD)



With GCD, developers no longer deal with threads directly; they just add blocks of code to queues. Behind the scenes, GCD manages a thread pool: it decides which thread your block will execute on, and it manages those threads based on the available system resources. This relieves developers of thread-management work and, through centralized scheduling, mitigates the problem of too many threads being created.



The other important change GCD brings is a mental one: developers can think in terms of work items in a queue rather than a bunch of threads, a model of concurrency that is much easier to grasp and use.



GCD exposes five different queues: the main queue, which runs on the main thread; three background queues with different priorities; and one lower-priority background queue that is I/O throttled.



In addition, developers can create custom queues, either serial or concurrent. Custom queues are very powerful: all blocks scheduled on custom queues ultimately end up in one of the system's global queues and its thread pool. An example is sketched below.
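A minimal sketch of dispatching work to a custom queue and hopping back to the main queue (the queue label "com.example.worker" is a placeholder chosen for illustration):

// A custom serial queue; pass DISPATCH_QUEUE_CONCURRENT for a concurrent one.
dispatch_queue_t queue = dispatch_queue_create("com.example.worker", DISPATCH_QUEUE_SERIAL);

dispatch_async(queue, ^{
    // background work here...
    dispatch_async(dispatch_get_main_queue(), ^{
        // back on the main queue, e.g. to update the UI
    });
});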






Queues with several different priorities are available, but we strongly recommend using the default-priority queue in most cases.



If your tasks access shared resources, scheduling them on queues with different priorities can quickly lead to unexpected behavior. The program may even hang completely, because a low-priority task can block a high-priority task from executing. More on this below, in the section on priority inversion.



Although GCD is a low-level C API, it is quite straightforward to use. That also makes it easy for programmers to run into the many pitfalls of concurrent programming; read on for some insight into them.






5. Operation Queues



Operation queues are a Cocoa abstraction of the queue model offered by GCD. GCD provides more low-level control, while operation queues implement several convenient features on top of it.



NSOperationQueue offers two kinds of queues: the main queue and custom queues. The main queue runs on the main thread, while custom queues execute in the background. In both cases, the tasks these queues process are expressed as subclasses of NSOperation.



You can define your own operations by overriding either the main or the start method. The former is very simple: you don't need to manage state properties such as isExecuting and isFinished; the operation simply ends when main returns. This approach is very easy to use, but less flexible.



@implementation YourOperation
     - (void)main
     {
         // handle it...
     }
@end 


If you want more control, or want to perform an asynchronous task within the operation, override the start method instead:



@implementation YourOperation
     - (void)start
     {
         self.isExecuting = YES;
         self.isFinished = NO;
         // Begin processing; call [self finished] when the work is done...
     }

     - (void)finished
     {
         self.isExecuting = NO;
         self.isFinished = YES;
     }
@end 


Note: in this case, you must manage the operation's state manually. For the operation queue to be able to pick up changes to the operation, the state properties must be implemented in a KVO-compliant way. If you don't set them via their default setters, you must send the appropriate KVO notifications at the right time.



In order to make use of the cancellation feature provided by operation queues, you should periodically check the isCancelled property during long-running operations:



- (void)main
{
     while (notDone && !self.isCancelled) {
         // handle one chunk of the work
     }
}


Once you have defined your operation class, it is easy to add an operation to a queue:




NSOperationQueue *queue = [[NSOperationQueue alloc] init];
YourOperation *operation = [[YourOperation alloc] init];
[queue addOperation:operation];


Alternatively, you can add a block to an operation queue directly. This can be very handy, for instance when you want to schedule a one-off task on the main queue:



[[NSOperationQueue mainQueue] addOperationWithBlock:^{
     // code...
}];


Although adding operations to a queue this way is very convenient, defining your own NSOperation subclasses can be helpful during debugging: if you override the operation's description method, you can easily identify all the operations currently scheduled on a queue.



Besides basic scheduling of operations or blocks, operation queues offer features that are harder to get right in GCD. For example, the maxConcurrentOperationCount property lets you control how many operations of a particular queue may execute concurrently. Setting it to 1 gives you a serial queue, which is useful for isolation purposes.
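A minimal sketch (the queue here is created just for illustration):

NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 1; // operations on this queue now run one at a time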



Another handy feature is that operations within a queue can be prioritized via their queuePriority property. Note that this differs from GCD's queue priorities: it only affects the ordering of the operations scheduled on that one queue. If you need to control execution order beyond the five standard priority levels, you can also specify dependencies between operations, as follows:



 
[intermediateOperation addDependency:operation1];
[intermediateOperation addDependency:operation2];
[finishedOperation addDependency:intermediateOperation];


These few lines ensure that operation1 and operation2 are executed before intermediateOperation, which in turn is executed before finishedOperation. Operation dependencies are a very powerful mechanism for specifying a well-defined execution order. They let you create groups of operations that are guaranteed to execute before the operations that depend on them, or run operations serially on an otherwise concurrent queue.



In essence, the performance of operation queues is slightly worse than GCD's, but in most cases this negative impact is negligible, and operation queues are the tool of choice for concurrent programming.






6. Run Loops



In fact, the run loop is not a concurrency mechanism like GCD or operation queues, because it does not execute tasks in parallel. However, run loops tie in directly with the execution of tasks on the main dispatch/operation queue, and they provide a mechanism for executing code asynchronously.



A run loop can be much easier to use than an operation queue or GCD, because it lets you execute tasks asynchronously without having to deal with the complexity of concurrency.



A run loop is always bound to one particular thread. The main run loop is associated with the main thread and, in every Cocoa and Cocoa Touch application, plays a central role in handling UI events, timers, and other kernel events. Whenever you schedule a timer, use NSURLConnection, or call performSelector:withObject:afterDelay:, it is the run loop that handles these asynchronous tasks.



Whenever you use the run loop to execute a method, remember that run loops can run in different modes. Each mode defines a set of events the run loop reacts to; this is a clever way to temporarily prioritize certain tasks over others on the main run loop.



A typical example of this on iOS is scrolling. While scrolling, the run loop does not run in the default mode, and therefore does not react to, for example, a timer you scheduled before scrolling began. Once scrolling stops, the run loop returns to the default mode and processes the events that have queued up. If you want a timer to fire during scrolling, you need to add it to the run loop in the NSRunLoopCommonModes mode, as sketched below.
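A minimal sketch, assuming a tick: method exists on self:

NSTimer *timer = [NSTimer timerWithTimeInterval:1.0
                                         target:self
                                       selector:@selector(tick:)
                                       userInfo:nil
                                        repeats:YES];
// Registering for the common modes keeps the timer firing during scrolling:
[[NSRunLoop mainRunLoop] addTimer:timer forMode:NSRunLoopCommonModes];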



The main thread comes with its main run loop already set up. Other threads, however, do not have a run loop configured by default. You can set up a run loop for another thread yourself, but you will rarely need to do so; most of the time it is much easier to use the main run loop. If you have heavy work to do that you don't want to perform on the main thread, you can still dispatch it to another queue after your code has been called from the main run loop.



If you really do need a run loop on another thread, don't forget to add at least one input source to it. If a run loop has no input sources configured, every attempt to run it will exit immediately. A sketch follows.
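One common way to keep a background thread's run loop alive, assuming this code runs on that thread, is to attach a dummy port as an input source:

NSRunLoop *runLoop = [NSRunLoop currentRunLoop];
// The Mach port serves as an input source; without one, -run returns at once.
[runLoop addPort:[NSMachPort port] forMode:NSDefaultRunLoopMode];
[runLoop run]; // blocks and processes events until the input sources are removed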






7. Challenges in Multithreaded Programming



Using concurrency comes with many pitfalls. As soon as you do anything beyond the most basic case, keeping track of the different states of multiple concurrently executing, interacting tasks becomes extremely difficult. Problems tend to occur nondeterministically, which makes concurrent code even harder to debug.



There is a famous example of the unpredictability of concurrent code: NASA's Pathfinder mission to Mars. Shortly after the probe landed on our red neighbor in 1997, the mission nearly ground to a halt because the rover kept inexplicably rebooting. The cause turned out to be a classic case of priority inversion: a low-priority thread kept blocking a high-priority one. We'll look at the details of this problem below. The point here is that even with abundant resources and lots of brilliant engineers, concurrency can still come back to bite you.






8. Resource Sharing



Many problems in concurrent programming are rooted in access to shared resources from multiple threads. A resource can be a property, an object, a piece of memory, a network device, a file, and so on. Any resource shared among threads is a potential point of conflict, and you must design carefully to prevent such conflicts.



To demonstrate the problem, consider the simplest possible resource: an integer used as a counter. Suppose two parallel threads, A and B, both try to increment the counter at the same time. The problem is that a line of C or Objective-C code usually translates into more than one CPU instruction. To increment our counter, the current value must be read from memory, then incremented, and finally written back to memory.



Consider what happens if both threads attempt this simultaneously. For example, threads A and B both read the counter's value, say 17, from memory. Thread A increments it and writes 18 back to memory. Thread B also increments its copy and likewise writes 18 back to memory. The counter's value is now corrupted: starting from 17, it was incremented twice, yet it holds 18 instead of 19. The sketch below reproduces this effect.
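A minimal sketch of this race using GCD (the counts are illustrative; actual results vary from run to run):

#import <Foundation/Foundation.h>

void raceExample(void)
{
    __block NSInteger counter = 0;
    dispatch_queue_t global = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    for (int t = 0; t < 2; t++) {
        dispatch_group_async(group, global, ^{
            for (int i = 0; i < 100000; i++) {
                counter++; // unsynchronized read-modify-write
            }
        });
    }
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    NSLog(@"counter = %ld (frequently less than 200000)", (long)counter);
}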






This problem is called a race condition. Whenever multiple threads access a shared resource without a mechanism ensuring that thread A has finished with the resource before thread B starts accessing it, races like this will occur. If what you write to memory is not a simple integer but a more complex data structure, things get worse: a second thread may read the structure while the first is still writing it, and observe data that is partly new, partly old, or not initialized at all. To prevent such problems, multiple threads need a mutually exclusive way of accessing shared resources.



In practice, the situation is even more complex than described above, because for optimization purposes modern CPUs reorder the sequence of reads and writes to memory (out-of-order execution).






9. Mutual Exclusion Locks



Mutually exclusive access means that only one thread at a time may access a particular resource. To guarantee this, every thread that wants to access a shared resource first acquires a mutex (mutual exclusion lock) for it; once the thread has finished its operation on the resource, it releases the mutex, allowing other threads access.






Besides guaranteeing mutually exclusive access, locks must also address the problem caused by instruction reordering. If the CPU cannot be relied on to access memory in the order the code was written, mutual exclusion alone is not enough. To counter this side effect of CPU optimizations, memory barriers are used: a memory barrier ensures that no out-of-order execution takes place across the barrier.



Of course, the implementation of the mutex itself must be free of race conditions. This is a non-trivial guarantee and relies on special instructions on modern CPUs; see atomic operations for more information.



At the language level, Objective-C supports mutexes through properties declared as atomic. In fact, properties are atomic by default. Declaring a property atomic results in an implicit lock/unlock around every access to it. The sledgehammer approach would be to declare all properties atomic, but locking has a cost. More generally, you can protect any critical section with an explicit lock, as sketched below.
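A minimal sketch of protecting the earlier counter with an explicit NSLock (the Counter class is invented for this illustration):

@interface Counter : NSObject
- (void)increment;
@end

@implementation Counter {
    NSLock *_lock;
    NSInteger _count;
}

- (instancetype)init
{
    if ((self = [super init])) {
        _lock = [[NSLock alloc] init];
    }
    return self;
}

- (void)increment
{
    [_lock lock];
    _count++; // the critical section: only one thread at a time gets here
    [_lock unlock];
}
@end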



Acquiring a lock on a resource always has a performance cost. The acquire and release operations must themselves be race-condition-free, which is non-trivial on multi-core systems. And when acquiring a lock, a thread sometimes has to wait because another thread already holds it. In that case the thread goes to sleep and must be notified when the other thread releases the lock. All of these operations are expensive and complex.



Locks also come in different flavors. Some are cheap when there is no lock contention but perform badly under contention; other locks cost more at a basic level but degrade less badly under contention. (Lock contention arises when one or more threads try to acquire a lock that has already been taken by another thread.)



There is a trade-off to weigh here: acquiring and releasing locks has a cost, so you want to avoid entering and leaving critical sections too frequently. At the same time, if you hold a lock across a large stretch of code, you risk lock contention: other threads may stall waiting for the lock. Resolving this tension is not an easy task.



We often see code that was intended to run in parallel but that, because of how the locks around the shared resources were configured, actually only ever has one thread active at a time. It is worth predicting how your code will really run on multiple cores; you can use Instruments' CPU strategy view to check whether the available CPU cores are being used effectively, and to get ideas for optimizing your code.






10. Deadlock



Mutexes solve the problem of race conditions, but unfortunately they introduce other problems at the same time, one of which is deadlock. A deadlock occurs when multiple threads wait on each other to finish, leaving the program stuck.






Take a look at the following pseudocode, which swaps the values of two variables:



 
void swap(A, B)
{
    lock(lockA);
    lock(lockB);
    int a = A;
    int b = B;
    A = b;
    B = a;
    unlock(lockB);
    unlock(lockA);
}


Most of the time, this will work. But when two threads use the opposite value to call the above method at the same time:



swap(X, Y); // thread 1
swap(Y, X); // thread 2


At this point the program may deadlock: thread 1 acquires the lock on X while thread 2 acquires the lock on Y, and each then waits for the other's lock, which it will never get. One common mitigation is sketched below.
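A common fix, in the same pseudocode style, is to always acquire locks in a fixed global order, so the circular wait above cannot arise (first/second are placeholder helpers that pick a consistent ordering of the two locks):

void swap(A, B)
{
    lock(first(lockA, lockB));   // every thread locks the "lower" lock first...
    lock(second(lockA, lockB));  // ...and the other one second
    int a = A;
    int b = B;
    A = b;
    B = a;
    unlock(second(lockA, lockB));
    unlock(first(lockA, lockB));
}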



In general, the more resources threads share and the more locks they take, the greater the risk of deadlock. This is one more reason to minimize resource sharing between threads and to keep shared resources as simple as possible.





11. Resource Starvation


Just when you think you understand the problems of concurrent programming, a new one appears. Locking shared resources leads to the readers-writers problem. In most cases it is wasteful to restrict access to a resource to a single reader at a time, so taking a read lock is allowed as long as no write lock is held on the resource. In this situation, a thread waiting to acquire a write lock can be starved by read locks that keep being taken in the meantime: this is writer starvation.



To solve this problem, something smarter than a simple read/write lock is needed, for example giving writers preference, or using the read-copy-update algorithm. In his article on low-level concurrency APIs, Daniel describes how to implement a multiple-reader, single-writer pattern with GCD that does not suffer from writer starvation; a sketch of that pattern follows.
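A minimal sketch of the multiple-reader/single-writer pattern, using a concurrent queue plus dispatch_barrier_async (the Store class and queue label are invented for this illustration):

@interface Store : NSObject
- (NSDictionary *)read;
- (void)write:(NSDictionary *)newState;
@end

@implementation Store {
    dispatch_queue_t _isolation;
    NSDictionary *_state;
}

- (instancetype)init
{
    if ((self = [super init])) {
        _isolation = dispatch_queue_create("com.example.isolation",
                                           DISPATCH_QUEUE_CONCURRENT);
        _state = @{};
    }
    return self;
}

// Reads run concurrently with each other.
- (NSDictionary *)read
{
    __block NSDictionary *value;
    dispatch_sync(_isolation, ^{ value = self->_state; });
    return value;
}

// The barrier waits for in-flight reads and excludes new ones while it runs.
- (void)write:(NSDictionary *)newState
{
    dispatch_barrier_async(_isolation, ^{ self->_state = [newState copy]; });
}
@end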






12. Priority Inversion



Earlier we described the concurrency problem that NASA's Pathfinder rover encountered on Mars. Now let's look at why Pathfinder nearly failed there, and why our own programs sometimes run into the same issue: the dreaded priority inversion.



Priority inversion describes a condition in which a lower-priority task blocks a higher-priority task at runtime, effectively inverting the tasks' priorities. Since GCD exposes background queues with different priorities, including one that is I/O throttled, it's good to understand how priority inversion can happen.



Priority inversion can occur when a high-priority task and a low-priority task share a resource. When the low-priority task takes a lock on the shared resource, it is supposed to finish quickly and release the lock so that the high-priority task can proceed without significant delay. Since the high-priority task is blocked while the low-priority task holds the lock, a medium-priority task (one that does not need the shared resource) can preempt the low-priority task, because it is now the highest-priority task among those that are runnable. The medium-priority task thus prevents the low-priority task from releasing its lock, which in turn keeps the high-priority task waiting.






In your own code, the consequences will probably be less dramatic than a reboot on Mars; priority inversion usually shows up in milder forms.



The usual way to solve this problem is simply not to use different priorities. Otherwise you will often end up with high-priority code waiting on low-priority code. When you use GCD, always use the default-priority queue (directly, or as a target queue). Using different priorities will likely make things worse.



The lesson: multiple queues with different priorities may sound good, but only on paper. They make complex parallel programs even more complex and unpredictable. If you ever find your program stuck with a high-priority task going nowhere, you may remember this article, and the priority-inversion problem the NASA engineers ran into.






Summary



We hope this article has given you a sense of the complexity of concurrent programming and its associated problems. In concurrent programming, no matter how simple an API looks, the problems it produces can become very hard to observe, and debugging them is often difficult.



On the other hand, concurrency is a fantastic tool: it takes full advantage of the computing power of modern multi-core CPUs. The key in development is to keep your concurrency model as simple as possible, which limits the number of locks you need.



The safe pattern we recommend is this: pull out the data you want to work on on the main thread, use an operation queue to process it in the background, and finally return to the main queue to deliver the results produced in the background. This way you don't need to do any locking yourself, which greatly reduces the chance of mistakes. A sketch follows.
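A minimal sketch of this pattern, assuming it runs inside a view controller; processData: and updateUIWithResult: are placeholder methods invented for illustration:

NSOperationQueue *background = [[NSOperationQueue alloc] init];
[background addOperationWithBlock:^{
    // Heavy work, off the main thread:
    id result = [self processData:self.data];
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        [self updateUIWithResult:result]; // deliver results on the main queue
    }];
}];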




