One: Threads

A thread is a sub-unit of a process, and the operating system's scheduler can schedule each thread independently. All concurrency APIs are built on top of threads, including GCD and operation queues. Multiple threads can run concurrently even on a single-core CPU: the operating system assigns each thread a small slice of time, which gives the user the impression that several tasks are running at once. If the CPU has multiple cores, threads can truly execute in parallel, reducing the total time an operation needs.

NSThread is an Objective-C wrapper around pthreads. One problem with using threads directly is that the number of active threads can grow exponentially when both your code and the framework code it calls create their own threads. This is a common problem in large projects. For example, on an 8-core CPU you might create 8 threads to fully exploit the CPU. But the framework code your code calls from those threads may do the same thing (it has no way of knowing which threads you have already created), and you can quickly end up with hundreds of threads. Each part of the code behaves reasonably on its own; combined, they cause a problem. Threads are not free: each one consumes memory and kernel resources.

Two: GCD (Grand Central Dispatch)

With GCD, developers no longer deal with threads directly; they simply add blocks of code to queues, and GCD manages a thread pool behind the scenes. GCD decides which thread a block will execute on, and it manages those threads according to the available system resources. This frees developers from thread management, and centralizing that management mitigates the problem of too many threads being created. The other important change GCD brings is that developers can think of work in terms of queues rather than a bundle of threads, a model of concurrency that is much easier to grasp and use.

In most cases the default-priority queue is all you need. If the tasks you schedule access shared resources, placing them on queues with different priorities can quickly lead to unpredictable behavior. The program may even hang completely, because a low-priority task blocks a high-priority task from executing.

Three: Operation Queues

An operation queue is a Cocoa abstraction over the queue model that GCD provides. While GCD offers lower-level control, operation queues implement a number of conveniences on top of it, and they are often the best and safest choice for app developers. NSOperationQueue comes in two flavors: the main queue and custom queues. The main queue runs on the main thread; custom queues execute in the background. In both cases, the tasks these queues process are expressed as subclasses of NSOperation. You define your own operation by overriding either the main method or the start method. The former is very simple: you do not need to manage any state properties (such as isExecuting and isFinished), and the operation finishes as soon as main returns. This approach is easy to use, but less flexible than overriding start. If you want more control, or want to perform an asynchronous task inside an operation, override start instead. Note that in this case you must manage the operation's state manually.
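To make the start-override approach concrete, here is a minimal sketch of an asynchronous operation that manages its own state; the class name DownloadOperation, the use of a background global queue, and the finish helper are illustrative assumptions, not taken from the original:

// A minimal sketch of an asynchronous NSOperation subclass.
@interface DownloadOperation : NSOperation
@end

@implementation DownloadOperation {
    BOOL _executing;
    BOOL _finished;
}

- (BOOL)isAsynchronous { return YES; }
- (BOOL)isExecuting    { return _executing; }
- (BOOL)isFinished     { return _finished; }

- (void)start {
    // Respect cancellation before doing any work.
    if (self.isCancelled) {
        [self willChangeValueForKey:@"isFinished"];
        _finished = YES;
        [self didChangeValueForKey:@"isFinished"];
        return;
    }
    [self willChangeValueForKey:@"isExecuting"];
    _executing = YES;
    [self didChangeValueForKey:@"isExecuting"];

    // Kick off asynchronous work; call -finish from its completion.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // ... do the actual work here ...
        [self finish];
    });
}

- (void)finish {
    [self willChangeValueForKey:@"isExecuting"];
    [self willChangeValueForKey:@"isFinished"];
    _executing = NO;
    _finished = YES;
    [self didChangeValueForKey:@"isExecuting"];
    [self didChangeValueForKey:@"isFinished"];
}
@end

When the asynchronous work completes, finish flips both state flags inside paired KVO notifications, which is exactly what the operation queue observes, as the next paragraph explains.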
For the operation queue to pick up changes in an operation's state, those state properties must be implemented in a KVO-compliant way: if you do not set them through their default setters, you need to send the appropriate KVO notifications at the right times. And to support the cancellation facility the operation queue provides, a long-running operation should periodically check the isCancelled property.

Once you have defined your operation class, adding an operation to a queue is easy:

NSOperationQueue *queue = [[NSOperationQueue alloc] init];
YourOperation *operation = [[YourOperation alloc] init];
[queue addOperation:operation];

Alternatively, you can add a block to an operation queue. This is sometimes very handy, for example when you want to schedule a one-off task on the main queue:

[[NSOperationQueue mainQueue] addOperationWithBlock:^{
    // code...
}];

Although adding operations to a queue this way is convenient, defining your own NSOperation subclasses helps during debugging: if you override the operation's description method, you can easily identify all the operations currently scheduled on a queue.

Beyond dispatching basic operations or blocks, operation queues provide features that are harder to get with GCD. For example, the maxConcurrentOperationCount property controls how many operations in a particular queue may execute concurrently. Setting it to 1 gives you a serial queue, which is useful for isolation purposes. There is also a convenient facility for ordering the operations in a queue by priority; unlike GCD queue priorities, this affects only the ordering of the operations scheduled on that queue. If you need to control the execution order beyond the five standard priorities, you can also specify dependencies between operations, like this:

[intermediateOperation addDependency:operation1];
[intermediateOperation addDependency:operation2];
[finishedOperation addDependency:intermediateOperation];

These few lines guarantee that operation1 and operation2 execute before intermediateOperation, which in turn executes before finishedOperation. For specifying a well-defined execution order, operation dependencies are a very powerful mechanism. They let you create groups of operations that are guaranteed to run before the operations that depend on them, or to run operations serially on an otherwise concurrent queue.

Mutex: mutually exclusive access means that only one thread at a time is allowed to access a given resource. To guarantee this, every thread that wants to access a shared resource first acquires a mutex for it; once the thread has finished operating on the resource, it releases the mutex so that other threads can access it. Declaring a property atomic means that every access to it implicitly locks and unlocks a mutex. There is a trade-off here: acquiring and releasing a lock has a cost, so you need to make sure you do not enter and exit critical sections too frequently. At the same time, if you hold a lock around a large chunk of code, you run the risk of lock contention: other threads may be blocked from doing their work while they wait for the lock. Balancing this is not an easy task. Mutexes solve the problem of race conditions, but unfortunately they introduce problems of their own, one of which is deadlock.
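As a small illustration of mutual exclusion, here is a sketch using NSLock; the Counter class and its increment method are hypothetical, for illustration only:

// Mutually exclusive access with NSLock: only one thread at a time
// can be inside the critical section.
@interface Counter : NSObject
- (void)increment;
@end

@implementation Counter {
    NSLock *_lock;
    NSInteger _count;
}

- (instancetype)init {
    if ((self = [super init])) {
        _lock = [NSLock new];
    }
    return self;
}

- (void)increment {
    [_lock lock];      // acquire the mutex; other threads block here
    _count++;          // keep the critical section as small as possible
    [_lock unlock];    // release the mutex so other threads can proceed
}
@end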
A deadlock occurs when multiple threads end up waiting for each other to finish, leaving the program stuck. The more resources you share between threads, the more locks you use, and the greater the probability of deadlock. This is one reason to minimize resource sharing between threads and to keep the shared resources as simple as possible.

Priority inversion: priority inversion means that, at run time, a low-priority task blocks a high-priority one, effectively inverting the tasks' priorities. Multiple queues with different priorities sound good on paper, but they make complex parallel programming even more complex and unpredictable. If your program ever gets stuck because a high-priority task suddenly stops making progress, you may think of this article, and of the priority inversion problem that NASA's engineers famously ran into.

UIKit is not thread-safe; it is recommended to access it only from the main thread, and even the drawing methods are not explicitly guaranteed to be thread-safe. One danger of using UIKit objects in the background is the memory reclamation issue: UI objects should be released on the main thread, because their dealloc methods may change the view hierarchy, and as we know, that must happen on the main thread.

NSArray, the immutable class, is thread-safe; NSMutableArray is not. NSCache uses a mutable dictionary to store its data: it not only locks access, but also evicts its contents in low-memory situations.

Four: Atomic Properties

A nonatomic setter looks like this:

- (void)setUserName:(NSString *)userName {
    if (userName != _userName) {
        [userName retain];
        [_userName release];
        _userName = userName;
    }
}

Trouble arises if setUserName: is called concurrently: _userName may be released twice, corrupting memory and causing bugs that are hard to track down. For any property that is not implemented manually, the compiler generates a call to

objc_setProperty_non_gc(id self, SEL _cmd, ptrdiff_t offset, id newValue, BOOL atomic, signed char shouldCopy)

In our example, the arguments of this call are:

objc_setProperty_non_gc(self, _cmd, (ptrdiff_t)(&_userName) - (ptrdiff_t)(self), userName, NO, NO);

objc_setProperty in turn calls the following method:

static inline void reallySetProperty(id self, SEL _cmd, id newValue,
                                     ptrdiff_t offset, BOOL atomic,
                                     BOOL copy, BOOL mutableCopy)
{
    id oldValue;
    id *slot = (id *)((char *)self + offset);

    if (copy) {
        newValue = [newValue copyWithZone:NULL];
    } else if (mutableCopy) {
        newValue = [newValue mutableCopyWithZone:NULL];
    } else {
        if (*slot == newValue) return;
        newValue = objc_retain(newValue);
    }

    if (!atomic) {
        oldValue = *slot;
        *slot = newValue;
    } else {
        spin_lock_t *slotlock = &PropertyLocks[GOODHASH(slot)];
        _spin_lock(slotlock);
        oldValue = *slot;
        *slot = newValue;
        _spin_unlock(slotlock);
    }

    objc_release(oldValue);
}

Aside from the amusing method name, what the method does is quite direct: it protects the operation with one of the 128 spin locks in PropertyLocks. This is pragmatic and fast; in the worst case, when a hash collision occurs, a setter has to wait for another, entirely unrelated setter to finish before it can proceed. @synchronized(self) is the better fit when you need code that does not deadlock when an error occurs, but throws an exception instead.
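A minimal sketch of that behavior, assuming a hypothetical doSomethingThatMightThrow helper:

// @synchronized releases its lock even if the protected code throws,
// so other threads waiting on the same lock are not deadlocked.
- (void)updateSharedState {
    @synchronized (self) {
        // If this throws, the lock on self is still released while
        // the exception propagates to the caller.
        [self doSomethingThatMightThrow];
    }
}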
For classes that definitely must be thread-safe (a class responsible for caching is a good example), a good design is to use a concurrent dispatch_queue as a reader/writer lock, and to lock only the parts that genuinely need locking, so as to maximize performance. Once you start using multiple queues to lock different parts, though, the whole thing quickly becomes unmanageable.

@property (nonatomic, strong) NSMutableSet *delegates;
// In the init method
_delegateQueue = dispatch_queue_create("com.pspdfkit.cachedelegatequeue", DISPATCH_QUEUE_CONCURRENT);

- (void)addDelegate:(id<PSPDFCacheDelegate>)delegate {
    dispatch_barrier_async(_delegateQueue, ^{
        [self.delegates addObject:delegate];
    });
}
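The snippet above shows only the write path. A plausible read path on the same concurrent queue (a sketch based on the reader/writer pattern, not code from the original) would use a plain dispatch_sync, so that reads run concurrently with one another but never overlap a barrier write:

// Readers use dispatch_sync on the same concurrent queue: many reads can
// run in parallel, but all of them are excluded while a barrier block runs.
- (NSSet *)allDelegates {
    __block NSSet *result;
    dispatch_sync(_delegateQueue, ^{
        result = [self.delegates copy];
    });
    return result;
}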
Unless addDelegate: or removeDelegate: are called thousands of times per second, we can use a more concise implementation instead:

// In the header file
@property (atomic, copy) NSSet *delegates;

- (void)addDelegate:(id<PSPDFCacheDelegate>)delegate {
    @synchronized (self) {
        self.delegates = [self.delegates setByAddingObject:delegate];
    }
}

dispatch_barrier_async vs. dispatch_barrier_sync:

* In common: tasks submitted to the queue before the barrier all complete before the barrier block runs, and tasks submitted after the barrier wait until the barrier block has finished executing.
* Difference: dispatch_barrier_async returns immediately, so the tasks after it can be enqueued without waiting for the barrier block to execute; dispatch_barrier_sync does not return until the barrier block has executed, so the tasks after it are enqueued only afterwards. Note that a task being added to the queue earlier does not mean it starts executing right away.
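To make the difference concrete, here is a small sketch; the queue label and the logged task bodies are illustrative assumptions:

dispatch_queue_t queue = dispatch_queue_create("com.example.barrier", DISPATCH_QUEUE_CONCURRENT);

dispatch_async(queue, ^{ NSLog(@"task 1"); });
dispatch_async(queue, ^{ NSLog(@"task 2"); });

// Returns immediately: "after async barrier" may be logged on the calling
// thread before the barrier block has run.
dispatch_barrier_async(queue, ^{ NSLog(@"async barrier"); });
NSLog(@"after async barrier");

// Does not return until the barrier block has finished, so this log line
// always appears after "sync barrier".
dispatch_barrier_sync(queue, ^{ NSLog(@"sync barrier"); });
NSLog(@"after sync barrier");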