iOS Development: Multithreading Technology

<span id="Label3"></p><p><p>Concurrency describes the concept of running multiple tasks at the same time. These tasks may be run concurrently in the form of a Single-core CPU (time-sharing), or in a truly parallel way on a multicore cpu.</p></p><p><p>OS X and IOS provide several different APIs to support concurrent Programming. Each API has different functionality and usage restrictions, which makes them suitable for different tasks. At the same time, these APIs are at different levels of Abstraction. It's possible to use it for very deep bottom-up operations, but it also means being saddled with the great responsibility to handle the task Well.</p></p><p><p>In fact, concurrent programming is a challenging topic, and it has many complex problems and pitfalls. When developers use a similar <code>Grand Central Dispatch</code> (GCD) or <code>NSOperationQueue</code> API, it's easy to forget about these problems and pitfalls. This article begins with an introduction to the different concurrency programming APIs in OS X and iOS, and then provides insight into some of the inherent challenges in concurrent programming that are independent of the specific APIs you use.</p></p>Concurrent programming in OS X and IOS<p><p>The same concurrency programming API is available in Apple's Mobile and desktop operating Systems. This article will introduce,,, <code>pthread</code> <code>NSThread</code> <code>GCD</code> <code>NSOperationQueue</code> , and <code>NSRunLoop</code> . Actually putting the run loop in there is a bit odd, because it doesn't really work in parallel, but because it has a lot to do with concurrent programming, it's worth a little more insight.</p></p><p><p>Because the high-level API is built based on the underlying api, we'll start with the underlying API and then step into the high-level Api. In specific programming, however, the API is selected in the opposite order: because in most cases, choosing a high-level API not only accomplishes the tasks that the underlying API can accomplish, but also makes the concurrency model Simple.</p></p><p><p>If you have questions about why we insist on using high levels of abstraction and simple parallel code, you can look at the challenges of concurrent programming in the second part of this article, and the article on thread safety written by Peter Steinberger.</p></p>Thread<p><p>A thread is a sub-unit that makes up a process, and the scheduler of the operating system can schedule a thread separately. In fact, all concurrent programming APIs are built on top of threads-including GCD and operations queues (operation queues).</p></p><p><p>Multithreading can be run simultaneously (or at least concurrently) on a single-core CPU. The operating system assigns small slices of time to each thread, which allows the user to feel that there are multiple tasks at the same time. If the CPU is multi-core, then the thread can actually be executed in a concurrent manner, reducing the total time required to complete an Operation.</p></p><p><p>You can use the CPU strategy view in Instruments to know your code or how the framework code you are using is scheduled to execute in a multi-core cpu.</p></p><p><p>It is important to focus on that you have no control over where and when your code is scheduled, and how long it will not be able to control execution until it is paused so that other tasks can be rotated. 
Leaving the complexities of thread scheduling aside for a moment, you can use either the POSIX thread API, or Objective-C's wrapper around it, NSThread, to create your own threads. The following small example uses pthread to find the minimum and maximum values among one million numbers, using four threads that run in parallel. The complexity of this code alone should show you why you would not want to use pthread directly.

```objc
#import <Foundation/Foundation.h>
#import <pthread.h>

struct threadInfo {
    uint32_t *inputValues;
    size_t count;
};

struct threadResult {
    uint32_t min;
    uint32_t max;
};

void *findMinAndMax(void *arg)
{
    struct threadInfo const * const info = (struct threadInfo *) arg;
    uint32_t min = UINT32_MAX;
    uint32_t max = 0;
    for (size_t i = 0; i < info->count; ++i) {
        uint32_t v = info->inputValues[i];
        min = MIN(min, v);
        max = MAX(max, v);
    }
    free(arg);
    struct threadResult * const result = (struct threadResult *) malloc(sizeof(*result));
    result->min = min;
    result->max = max;
    return result;
}

int main(int argc, const char * argv[])
{
    size_t const count = 1000000;
    uint32_t inputValues[count];

    // Fill inputValues with random numbers
    for (size_t i = 0; i < count; ++i) {
        inputValues[i] = arc4random();
    }

    // Spawn 4 threads to search for the minimum and maximum values
    size_t const threadCount = 4;
    pthread_t tid[threadCount];
    for (size_t i = 0; i < threadCount; ++i) {
        struct threadInfo * const info = (struct threadInfo *) malloc(sizeof(*info));
        size_t offset = (count / threadCount) * i;
        info->inputValues = inputValues + offset;
        info->count = MIN(count - offset, count / threadCount);
        int err = pthread_create(tid + i, NULL, &findMinAndMax, info);
        NSCAssert(err == 0, @"pthread_create() failed: %d", err);
    }
    // Wait for the threads to exit
    struct threadResult *results[threadCount];
    for (size_t i = 0; i < threadCount; ++i) {
        int err = pthread_join(tid[i], (void **) &(results[i]));
        NSCAssert(err == 0, @"pthread_join() failed: %d", err);
    }
    // Combine the per-thread results
    uint32_t min = UINT32_MAX;
    uint32_t max = 0;
    for (size_t i = 0; i < threadCount; ++i) {
        min = MIN(min, results[i]->min);
        max = MAX(max, results[i]->max);
        free(results[i]);
        results[i] = NULL;
    }
    NSLog(@"min = %u", min);
    NSLog(@"max = %u", max);
    return 0;
}
```

NSThread is Objective-C's wrapper around pthread. Wrapping the API makes the code look more at home in a Cocoa environment. For example, you can define a thread as a subclass of NSThread and encapsulate the code that should run in the background inside it.
For the example above, we could define an NSThread subclass like this:

```objc
@interface FindMinMaxThread : NSThread
@property (nonatomic) NSUInteger min;
@property (nonatomic) NSUInteger max;
- (instancetype)initWithNumbers:(NSArray *)numbers;
@end

@implementation FindMinMaxThread {
    NSArray *_numbers;
}

- (instancetype)initWithNumbers:(NSArray *)numbers
{
    self = [super init];
    if (self) {
        _numbers = numbers;
    }
    return self;
}

- (void)main
{
    NSUInteger min;
    NSUInteger max;
    // process _numbers to find the minimum and maximum values
    self.min = min;
    self.max = max;
}
@end
```

To start a new thread, create a thread object and call its start method:

```objc
NSMutableSet *threads = [NSMutableSet set];
NSUInteger numberCount = self.numbers.count;
NSUInteger threadCount = 4;
for (NSUInteger i = 0; i < threadCount; i++) {
    NSUInteger offset = (numberCount / threadCount) * i;
    NSUInteger count = MIN(numberCount - offset, numberCount / threadCount);
    NSRange range = NSMakeRange(offset, count);
    NSArray *subset = [self.numbers subarrayWithRange:range];
    FindMinMaxThread *thread = [[FindMinMaxThread alloc] initWithNumbers:subset];
    [threads addObject:thread];
    [thread start];
}
```

Now we could observe the threads' isFinished property to detect when the newly spawned threads have finished, and then retrieve their results. We will leave that as an exercise for interested readers; the point is that working with threads directly, whether through pthread or NSThread, is a relatively clumsy programming experience that does not fit the way we are used to writing code.

One problem with using threads directly is that the number of active threads can grow exponentially if both your own code and the framework code it relies on spawn their own threads. This is a common problem in big projects. For example, on an 8-core CPU you might create 8 threads to take full advantage of the cores; but the framework code you call from those threads does the same (it has no way of knowing about the threads you have already created), and soon you end up with dozens or even hundreds of threads. Each part of the code acts responsibly on its own, and yet the combination causes trouble. Threads do not come for free: each thread consumes memory and kernel resources.

Next, we will introduce two queue-based concurrency APIs: GCD and operation queues. They mitigate this problem by centrally managing a thread pool that everybody uses collaboratively.

Grand Central Dispatch

To make it easier for developers to take advantage of the multi-core CPUs in their devices, Apple introduced Grand Central Dispatch (GCD) in OS X 10.6 and iOS 4. We will go into more detail about GCD in the article about the low-level concurrency APIs.

With GCD you no longer have to deal with threads directly; you just add blocks of code to queues, and GCD manages a thread pool behind the scenes. GCD decides which thread your code blocks will be executed on, and it manages those threads according to the available system resources. This relieves you of the work of thread management, and, because the threads are managed centrally, it also mitigates the problem of too many threads being created.

The other important change that comes with GCD is that you, as a developer, think about work in terms of queues rather than threads. This parallel abstraction model is easier to reason about and to use.

GCD exposes five different queues: the main queue, which runs on the main thread; three background queues with different priorities; and one background queue with an even lower priority, which is I/O throttled. In addition, you can create custom queues, either serial or concurrent. Custom queues are very powerful: all blocks scheduled on a custom queue are ultimately funneled into the system's global queues and their thread pool.
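To give a feel for the queue model, here is a minimal, hedged sketch (not part of the original example) of dispatching work to a background queue and hopping back to the main queue; the two methods called on self are hypothetical placeholders:

```objc
// Run expensive work on a global background queue, then return to the
// main queue before touching the UI. computeSomethingExpensive and
// updateUIWithResults: are hypothetical placeholder methods.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSArray *results = [self computeSomethingExpensive];
    dispatch_async(dispatch_get_main_queue(), ^{
        [self updateUIWithResults:results];
    });
});

// A custom serial queue: blocks added to it execute one after another.
dispatch_queue_t queue = dispatch_queue_create("com.example.worker", DISPATCH_QUEUE_SERIAL);
dispatch_async(queue, ^{
    // work that must not run concurrently with other blocks on this queue
});
```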
Several queues with different priorities sound straightforward at first, but we strongly recommend that you use the default-priority queue in almost all cases. If tasks scheduled on queues with different priorities access shared resources, you can quickly run into unexpected behavior. The program may even come to a complete standstill, because a low-priority task blocks a high-priority task from ever executing. You can read more about this phenomenon below, in the section on priority inversion.

Although GCD is a low-level C API, it is very straightforward to use. That, however, also makes it easy to forget the many considerations and pitfalls of concurrent programming. Read the challenges of concurrent programming below to become aware of the potential problems. Another excellent article in this issue, the one on the low-level concurrency APIs, contains many in-depth explanations and valuable hints.

Operation Queues

Operation queues are a Cocoa abstraction of the queue model offered by GCD. While GCD offers more low-level control, operation queues implement several convenient features on top of it, which often makes them the best and safest choice for application developers.

The NSOperationQueue class has two different types of queues: the main queue and custom queues. The main queue runs on the main thread, and custom queues are processed in the background. In both cases, the tasks processed by these queues are expressed as subclasses of NSOperation.

You can define your own operations by overriding either the main method or the start method. The former is very simple: you do not have to manage state properties such as isExecuting and isFinished, and the operation simply ends when main returns. This approach is very easy to use, but it is less flexible than overriding start.

```objc
@implementation YourOperation
- (void)main
{
    // do the work ...
}
@end
```

If you want more control, and to possibly perform an asynchronous task within the operation, override the start method:

```objc
@implementation YourOperation
- (void)start
{
    self.isExecuting = YES;
    self.isFinished = NO;
    // start the work; call finished when it is done ...
}

- (void)finished
{
    self.isExecuting = NO;
    self.isFinished = YES;
}
@end
```

Note that in this case you have to manage the operation's state manually. For an operation queue to be able to pick up changes to this state, the state properties have to be implemented in a KVO-compliant way. If you do not set them via their default setters, you will need to send the appropriate KVO messages at the right time.
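For illustration, here is a hedged sketch of what manual KVO compliance for the finished transition might look like; the backing instance variables _executing and _finished are our own naming choice, with the state properties redeclared to read from them:

```objc
// Manually emit the KVO notifications the operation queue listens for.
// _executing and _finished are assumed backing ivars of this sketch.
- (void)finished
{
    [self willChangeValueForKey:@"isExecuting"];
    [self willChangeValueForKey:@"isFinished"];
    _executing = NO;
    _finished = YES;
    [self didChangeValueForKey:@"isExecuting"];
    [self didChangeValueForKey:@"isFinished"];
}

- (BOOL)isExecuting { return _executing; }
- (BOOL)isFinished  { return _finished; }
```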
In order to benefit from the cancellation feature of operation queues, long-running operations should check the isCancelled property from time to time:

```objc
- (void)main
{
    while (notDone && !self.isCancelled) {
        // do the work
    }
}
```

Once you have defined your operation class, it is easy to add an operation to a queue:

```objc
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
YourOperation *operation = [[YourOperation alloc] init];
[queue addOperation:operation];
```

Alternatively, you can add a block to an operation queue. This comes in very handy if you, for example, want to schedule a one-off task on the main queue:

```objc
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
    // code ...
}];
```

While adding operations to a queue this way is very convenient, defining your own NSOperation subclasses can be helpful during debugging. If you override the operation's description method, you can easily identify all the operations currently scheduled on a queue.

Beyond the basics of scheduling operations or blocks, operation queues offer some features that are difficult to get right with GCD. For example, the maxConcurrentOperationCount property lets you control how many operations on a particular queue may execute concurrently. Setting it to 1 gives you a serial queue, which is useful for isolation purposes.

Another convenient feature is sorting the operations in a queue according to their priority. This is not the same as GCD's queue priorities: it affects only the ordering of the operations scheduled on that one queue. If you need to control the order of execution beyond what the five standard priority levels offer, you can also specify dependencies between operations, like this:

```objc
[intermediateOperation addDependency:operation1];
[intermediateOperation addDependency:operation2];
[finishedOperation addDependency:intermediateOperation];
```

These simple lines ensure that operation1 and operation2 execute before intermediateOperation, which in turn executes before finishedOperation. Operation dependencies are a very powerful mechanism for specifying a well-defined execution order. They let you create groups of operations that are guaranteed to execute before the operations that depend on them, or to execute in a serial manner within an otherwise concurrent queue.

By the very nature of the abstraction, operation queues perform slightly worse than GCD, but in most cases this negative impact is negligible, and operation queues are the tool of choice for concurrent programming.
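To round off this section, a short, hedged sketch of the isolation and priority features just described; YourOperation is the class from the examples above, and the queue name is our own:

```objc
// A serial "isolation" queue: at most one operation executes at a time.
NSOperationQueue *isolationQueue = [[NSOperationQueue alloc] init];
isolationQueue.maxConcurrentOperationCount = 1;

// Within a queue, higher-priority operations are preferred when the queue
// picks the next operation to run; running operations are not preempted.
YourOperation *urgent = [[YourOperation alloc] init];
urgent.queuePriority = NSOperationQueuePriorityHigh;
[isolationQueue addOperation:urgent];
```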
Run Loops

Strictly speaking, run loops are not a concurrency mechanism like GCD or operation queues, because they do not enable the parallel execution of tasks. However, run loops tie in directly with the execution of tasks on the main dispatch/operation queue, and they provide a mechanism for executing code asynchronously.

A run loop can be much easier to use than an operation queue or GCD, because you get asynchronous task execution without having to deal with the full complexity of concurrency.

A run loop is always bound to one particular thread. The main run loop is associated with the main thread and, in every Cocoa and Cocoa Touch application, plays a central role in handling UI events, timers, and other kernel events. Whenever you schedule a timer, use NSURLConnection, or call performSelector:withObject:afterDelay:, the run loop is what handles these asynchronous tasks behind the scenes.

Whenever you use a method that relies on the run loop, it is important to remember that run loops can run in different modes. Each mode defines a set of events the run loop will react to. This is a clever way to temporarily give certain tasks priority over others on the main run loop.

A typical example of this on iOS is scrolling. While you are scrolling, the run loop does not run in its default mode, and therefore it will not react to, for example, a timer you scheduled before the scrolling began. Once the scrolling stops, the run loop returns to the default mode, and the events queued up in the meantime get executed. If you want a timer to fire while scrolling is in progress, you need to add it to the run loop in the NSRunLoopCommonModes mode.
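A hedged sketch of that timer case, assuming a target/selector pair of our own (the tick: method is a placeholder):

```objc
// A timer scheduled the usual way runs only in the default mode and
// stops firing during scrolling; adding it in NSRunLoopCommonModes
// keeps it firing. tick: is a hypothetical placeholder selector.
NSTimer *timer = [NSTimer timerWithTimeInterval:1.0
                                         target:self
                                       selector:@selector(tick:)
                                       userInfo:nil
                                        repeats:YES];
[[NSRunLoop currentRunLoop] addTimer:timer forMode:NSRunLoopCommonModes];
```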
The main thread comes with its main run loop already set up. Other threads, however, do not have a run loop configured by default. You can set up a run loop for another thread yourself, but you will rarely need to. Most of the time it is much easier to use the main run loop. If you have to do heavy work that you do not want to perform on the main thread, you can still dispatch it onto another queue after your code is called from the main run loop. Chris shows some good examples of this pattern in his article about common background practices.

If you really need a run loop on another thread, do not forget to add at least one input source to it. A run loop with no input sources configured will exit immediately every time you try to run it.

Challenges of concurrent programming

Concurrent programming comes with many pitfalls. As soon as you do anything beyond the most basic case, keeping an eye on the different states of multiple tasks that interact while executing concurrently becomes extremely difficult. Problems tend to occur in non-deterministic, unpredictable ways, which makes concurrency bugs especially hard to debug.

There is a famous example of the unpredictability of concurrent code: in 1997, shortly after NASA's Mars Pathfinder probe landed on our red neighbor planet, the mission ground to a halt as the lander kept inexplicably resetting itself. The behavior was eventually diagnosed as a classic case of priority inversion, in which a low-priority thread kept blocking a high-priority one. We will look at this problem in more detail below. What we want to show here is that even with abundant resources and a lot of excellent engineers, concurrency can still bite you in many cases.

Resource sharing

Many of the problems in concurrency are rooted in accessing shared resources from multiple threads. A resource can be a property, an object, memory in general, a network device, a file, and so on. Any resource shared between multiple threads is a potential point of conflict, and you must carefully design around it to prevent those conflicts.

To demonstrate the problem, let's look at the simplest possible example of a resource: an integer used as a counter. While the program is running, two parallel threads, A and B, both try to increment the counter at the same time. The problem is that what you write as one line of C or Objective-C mostly does not compile down to a single machine instruction. To increment our counter, the current value must be read from memory, then incremented by one, and finally written back to memory.

Consider what happens if both threads do this simultaneously. For example, threads A and B both read the counter's value from memory; say it is 17. Thread A increments the value by one and writes 18 back to memory. At the same time, thread B also increments the value by one and writes 18 back to memory. The counter is now corrupted: starting from 17 it was incremented twice, yet it holds 18 instead of 19.

This problem is called a race condition. Whenever multiple threads access a shared resource without a mechanism to make sure that thread A has finished operating on the resource before thread B begins accessing it, races will occur. If the memory being written is not a simple integer but a more complex data structure, a second thread may try to read the structure while the first one is halfway through writing it, and see data that is partly new, partly stale, or even uninitialized. To prevent this, multiple threads need a mutually exclusive way of accessing shared resources.

In actual development, the situation is even more complicated than this, because for optimization purposes modern CPUs change the order in which they read and write memory (out-of-order execution).
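The next section discusses locks in detail; as a first taste, here is a hedged sketch of one way to protect the counter with a lock (the lock object and function are our own framing of the example):

```objc
// counter++ compiles to a read, an increment, and a write, which two
// threads can interleave. Holding a lock around the critical section
// makes the increment indivisible with respect to other threads that
// take the same lock.
#import <Foundation/Foundation.h>

static NSInteger counter = 0;
static NSLock *counterLock; // assume it is created once at startup

void incrementCounter(void)
{
    [counterLock lock];
    counter++; // read, increment, write -- now protected
    [counterLock unlock];
}
```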
Mutual exclusion locks

Mutually exclusive access means that only one thread at a time gets access to a certain resource. In order to ensure this, each thread that wants to access a resource first acquires a mutex lock on it; once the thread has finished its operation, it releases the lock, so that other threads get a chance to access the resource.

In addition to ensuring mutually exclusive access, locks must also deal with the problem caused by out-of-order execution. If you cannot rely on the CPU accessing memory in the order defined by your code, guaranteeing mutually exclusive access alone is not enough. To work around this side effect of the CPU's optimization strategies, memory barriers are used. Setting a memory barrier makes sure that no out-of-order execution takes place across the barrier.

Of course, the implementation of a mutex itself must be free of race conditions. This is actually a non-trivial guarantee, and it relies on special instructions on modern CPUs. You can read more about atomic operations in Daniel's article about low-level concurrency techniques.

At the language level, Objective-C provides lock support in the form of properties declared as atomic. In fact, properties are atomic by default. Declaring a property as atomic results in an implicit lock/unlock operation around every access to the property. While it might seem the safest approach to simply declare all properties atomic, this locking comes at a cost.

Acquiring a lock on a resource always carries a performance penalty. The operations of acquiring and releasing a lock need to be race-free themselves, which is non-trivial on multi-core systems. And when acquiring a lock, a thread may have to wait because another thread already holds the lock. In this case, the thread goes to sleep and has to be notified once the other thread releases the lock. All of these operations are expensive and complex.

There are also different kinds of locks. Some locks are cheap when there is no lock contention but perform badly under contention; other locks are more expensive at a baseline level, but their performance degrades less dramatically under contention. (Lock contention arises when one or more threads try to acquire a lock that has already been taken by another thread.)

There is a trade-off to be made here: acquiring and releasing locks has a cost, so you want to avoid entering and exiting critical sections too frequently. At the same time, if you hold a lock around a large chunk of code, you create a risk of lock contention: other threads may be stalled waiting for the lock, unable to do their work. It is not an easy task to resolve this tension.

We often see code that was planned to run in parallel but, because of the way locks are configured around the shared resources, actually has only one thread active at a time. Being able to predict how your code will run on multiple cores is often important; again, the CPU strategy view in Instruments lets you check whether the available CPU cores are being used effectively, and gives you a better idea of how to optimize your code.

Deadlocks

Mutex locks solve the problem of race conditions, but unfortunately they also introduce new problems, one of which is deadlock. A deadlock occurs when multiple threads wait on each other to finish, and the program gets stuck.

Take a look at the following code, which swaps the values of two variables:

```objc
void swap(A, B)
{
    lock(lockA);
    lock(lockB);
    int a = A;
    int b = B;
    A = b;
    B = a;
    unlock(lockB);
    unlock(lockA);
}
```

Most of the time, this will work fine.
But when two threads call it at the same time with opposite arguments:

```objc
swap(X, Y); // thread 1
swap(Y, X); // thread 2
```

the program may deadlock and hang. Thread 1 acquires the lock on X, and thread 2 acquires the lock on Y. Then each waits for the other lock, which it will never get.

Once again: the more resources you share between threads, the more locks you will use, and the greater the probability of deadlocking the program. This is one more reason to keep sharing between threads to a minimum, and to keep whatever you do share as simple as possible. Be sure to read the section on doing everything asynchronously in the article about the low-level concurrency APIs.

Resource starvation

Just when you think you have seen enough of the problems of concurrent programming, a new one pops up. Locking shared resources leads to the readers-writers problem. In most cases it is wasteful to restrict a resource to a single reading thread at a time, so holding a read lock is allowed as long as no write lock is held on the resource. In this situation, a thread that is waiting to acquire a write lock can be starved: while it waits, readers keep coming and going, and the writer may never get its turn to access the resource exclusively.

To solve this problem, something smarter than a simple read/write lock is needed, for example giving writers preference, or using the read-copy-update algorithm. In the article about the low-level concurrency APIs, Daniel describes how to implement a multiple-reader, single-writer pattern with GCD that does not suffer from writer starvation.
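Roughly, such a pattern can be sketched with a concurrent queue and GCD's barrier API; this is our own hedged sketch, not Daniel's implementation, and the queue name and array are placeholders:

```objc
// Readers run concurrently on the queue. A barrier block waits for
// in-flight readers to drain, runs alone, and only then admits new
// readers -- so writers are not starved.
dispatch_queue_t accessQueue =
    dispatch_queue_create("com.example.readwrite", DISPATCH_QUEUE_CONCURRENT);
NSMutableArray *items = [NSMutableArray array];

// Reader: may run concurrently with other readers.
__block NSUInteger count;
dispatch_sync(accessQueue, ^{
    count = items.count;
});

// Writer: executes exclusively thanks to the barrier.
dispatch_barrier_async(accessQueue, ^{
    [items addObject:@42];
});
```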
Priority inversion

This article began with the story of the concurrency problem that NASA's Pathfinder mission encountered on Mars. Now let's look at why Pathfinder nearly failed there, and why your programs can run into the same problem: the dreaded priority inversion.

Priority inversion describes a condition in which a lower-priority task blocks a higher-priority task at runtime, effectively inverting the priorities of the tasks. Since GCD exposes background queues with different priorities, including one that is I/O throttled, it is good to know about the possibility of priority inversion.

The problem can occur when a high-priority task and a low-priority task share a resource. When the low-priority task takes a lock on the shared resource, it is supposed to finish quickly and release the lock, so that the high-priority task can proceed without significant delay. Yet while the low-priority task holds the lock, the high-priority task is blocked. If at that moment a medium-priority task comes along (one that does not need the shared resource), it can preempt the low-priority task, because, with the high-priority task blocked, it is now the highest-priority task that is ready to run. The medium-priority task thereby keeps the low-priority task from releasing its lock, which in turn keeps the high-priority task waiting for the lock indefinitely.

In your own code, things will probably be less dramatic than a spacecraft rebooting on Mars; when priority inversion strikes, it is usually encountered in less severe forms.

The solution to this problem is usually simply not to use different priorities. Often you will end up with high-priority code waiting on low-priority code anyway. When you use GCD, always use the default-priority queue (directly, or as a target queue). If you use different priorities, chances are that things will get worse instead.

The lesson here is that multiple queues with different priorities may sound good, but only on paper. They make complex parallel programming even more complex and unpredictable. If you ever find yourself debugging a high-priority task that inexplicably gets stuck, you may remember this article, and the problem NASA's engineers came to know as priority inversion.

Summary

We hope this article has shown you the complexity of concurrent programming and its problems. No matter how simple the APIs look, the problems that arise from concurrency can become very hard to observe, and debugging them is often very difficult.

On the other hand, concurrency is a great tool for taking full advantage of the computing power of modern multi-core CPUs. The key in development is to keep your concurrency model as simple as possible, which limits the number of locks you need.

The safe pattern we recommend is this: extract the data you want to work on while on the main thread, use an operation queue to process that data in the background, and finally get back onto the main queue to deliver the results of your background work. This way, you do not need to do any locking yourself, which greatly reduces the chances of making mistakes.
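To close, a minimal, hedged sketch of that pattern; self.numbers and displayResult: are hypothetical placeholders:

```objc
// The recommended round trip: snapshot the data on the main thread,
// process it on a background operation queue, then deliver the result
// back on the main queue. No locks needed.
NSArray *snapshot = [self.numbers copy]; // taken on the main thread

NSOperationQueue *workQueue = [[NSOperationQueue alloc] init];
[workQueue addOperationWithBlock:^{
    NSNumber *result = [snapshot valueForKeyPath:@"@max.self"]; // the work
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        [self displayResult:result]; // hypothetical UI update
    }];
}];
```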
