iOS multi-thread programming 1: Overview

Source: Internet
Author: User
Tags: posix

What is multithreading?

Multithreading is a relatively lightweight way to implement multiple paths of execution inside a single application. From a technical standpoint, a thread is a combination of the kernel-level and application-level data structures needed to manage the execution of code. The kernel-level structures coordinate the dispatching of events to the thread and the preemptive scheduling of the thread on one of the available cores. The application-level structures include the call stack used to store function calls and the structures the application needs to manage and manipulate the thread's attributes and state.

 

Alternatives to multithreading

One problem with writing your own multithreaded code is that it introduces uncertainty into your application. Threads are a relatively low-level and complicated way to support concurrency. If you do not fully understand the implications of your design choices, you can easily run into synchronization or timing problems, whose severity can range from subtle behavioral changes to crashes and the corruption of user data.

Another factor to consider is whether you really need threads or concurrency at all. Multithreading solves the specific problem of how to execute multiple code paths concurrently inside the same process. In many cases, however, the work you are doing may not actually need to run concurrently. Threads introduce significant overhead in terms of memory consumption and CPU time; you may find that this overhead is too great for the intended task, or that other options are easier to implement.

1. Operation objects

Introduced in Mac OS X v10.5, an operation object is a wrapper for a task that would normally be executed on a secondary thread. This wrapper hides the thread management aspects of performing the task, leaving you free to focus on the task itself. You typically use these objects in conjunction with an operation queue object, which actually manages the execution of the operation objects on one or more threads.
For more information on how to use operation objects, see Concurrency Programming Guide.
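
As a rough illustration, here is a minimal Swift sketch of the idea, using OperationQueue and BlockOperation (the modern names for NSOperationQueue and NSBlockOperation); the work done in the block is just a placeholder:

    import Foundation

    // Wrap the task in an operation object; the queue manages the threads.
    let queue = OperationQueue()
    queue.maxConcurrentOperationCount = 2        // optional cap on concurrency

    let operation = BlockOperation {
        // This closure is the task; it runs on a thread managed by the queue.
        let sum = (1...1_000).reduce(0, +)
        print("Computed sum: \(sum)")
    }

    queue.addOperation(operation)
    queue.waitUntilAllOperationsAreFinished()    // demo only; avoid blocking the main thread in a real app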

2. Grand Central Dispatch (GCD)

Introduced in Mac OS X v10.6, Grand Central Dispatch is another alternative to threads that lets you focus on the tasks you need to perform rather than on thread management. With GCD, you define the task you want to perform and add it to a work queue, which handles the scheduling of your task on an appropriate thread. Work queues take into account the number of available cores and the current load to execute your tasks more efficiently than you could do yourself using threads.
For information on how to use GCD and work queues, see Concurrency Programming Guide.
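
For instance, a minimal Swift sketch of submitting work to a dispatch queue (using the DispatchQueue wrapper around GCD; the computation is a placeholder) might look like this:

    import Foundation

    // Define the task and add it to a global concurrent work queue. GCD decides
    // which thread runs it, based on the number of cores and the current load.
    DispatchQueue.global(qos: .userInitiated).async {
        let squares = (1...10).map { $0 * $0 }
        print("Squares: \(squares)")
    }

    // Keep this command-line style demo alive long enough for the work to run;
    // a real application's run loop makes this unnecessary.
    Thread.sleep(forTimeInterval: 0.5)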

3. Idle-time notifications

For tasks that are relatively short and very low priority, idle-time notifications let you perform the task at a time when your application is not as busy. Cocoa provides support for idle-time notifications using the NSNotificationQueue object. To request an idle-time notification, post a notification to the default NSNotificationQueue object using the NSPostWhenIdle option. The queue delays the delivery of your notification object until the run loop becomes idle. For more information, see Notification Programming Topics.
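
A small Swift sketch of the pattern described above; the notification name is made up for illustration, and delivery happens only once the main run loop actually goes idle:

    import Foundation

    // Hypothetical notification name, used only for this example.
    let cleanupNeeded = Notification.Name("com.example.CleanupNeeded")

    // Observe it through the notification center as usual.
    let token = NotificationCenter.default.addObserver(forName: cleanupNeeded,
                                                       object: nil, queue: nil) { _ in
        print("Run loop went idle; performing low-priority cleanup")
    }
    _ = token   // keep a reference; remove the observer when no longer needed

    // Enqueue the notification with the .whenIdle posting style. The default
    // notification queue delays delivery until the run loop becomes idle.
    NSNotificationQueue.default.enqueue(Notification(name: cleanupNeeded),
                                        postingStyle: .whenIdle)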

4. Asynchronous functions

The system interfaces include many asynchronous functions that provide automatic concurrency for you. These APIs may use system daemons and processes or create custom threads to perform their task and return the results to you. (The actual implementation is irrelevant because it is separated from your code.) As you design your application, look for functions that offer asynchronous behavior and consider using them instead of using the equivalent synchronous function on a custom thread.
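
As one example of this pattern, the Swift sketch below lets URLSession (an asynchronous system API) fetch data on threads it manages, instead of spawning a thread for a synchronous request; the URL is a placeholder:

    import Foundation

    let url = URL(string: "https://example.com/data.json")!   // placeholder URL

    let task = URLSession.shared.dataTask(with: url) { data, _, error in
        // This completion handler runs on a system-managed background thread.
        if let data = data {
            print("Received \(data.count) bytes")
        } else if let error = error {
            print("Request failed: \(error)")
        }
    }
    task.resume()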

5. Timers

You can use timers on your application's main thread to perform periodic tasks that are too trivial to require a thread, but which still require servicing at regular intervals. For information on timers, see "Timer Sources."
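
For example, a minimal Swift sketch of a repeating timer scheduled on the main thread's run loop (the interval and the work are arbitrary):

    import Foundation

    // Work that is too trivial for a thread but must happen at regular intervals.
    let timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
        print("Periodic housekeeping tick")
    }

    // Timers fire only while their run loop is running; in an application the
    // main run loop is already running, so these lines are for a standalone demo.
    RunLoop.current.run(until: Date(timeIntervalSinceNow: 3.5))
    timer.invalidate()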

6. Separate processes

Although more heavyweight than threads, creating a separate process might be useful in cases where the task is only tangentially related to your application. You might use a process if a task requires a significant amount of memory or must be executed using root privileges. For example, you might use a 64-bit server process to compute a large data set while your 32-bit application displays the results to the user.
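
A minimal Swift sketch of launching a separate process with Foundation's Process class (macOS only; the executable path is purely illustrative):

    import Foundation

    let helper = Process()
    helper.executableURL = URL(fileURLWithPath: "/usr/bin/true")   // illustrative helper
    helper.arguments = []

    do {
        try helper.run()            // spawn the child process
        helper.waitUntilExit()      // block this thread until the helper finishes
        print("Helper exited with status \(helper.terminationStatus)")
    } catch {
        print("Failed to launch helper: \(error)")
    }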

 

Thread support

At the application level, threads behave in essentially the same way as on other platforms. After a thread starts, it runs in one of three main states: running, ready, or blocked. If a thread is not currently running, it is either blocked and waiting for input, or it is ready to run and waiting for the CPU to be assigned to it. The thread keeps moving back and forth among these states until it finally exits and moves to the terminated state.

1. Cocoa threads

Cocoa implements threads using the NSThread class. Cocoa also provides methods on NSObject for spawning new threads and executing code on already-running threads. For more information, see "Using NSThread" and "Using NSObject to Spawn a Thread."
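
A minimal Swift sketch of both approaches, using Thread (the Swift name for NSThread) and one of the NSObject selector-based helpers; the class and method names are illustrative:

    import Foundation

    // Spawn a thread explicitly with Thread (NSThread in Objective-C).
    let worker = Thread {
        print("Working on thread: \(Thread.current)")
    }
    worker.name = "com.example.worker"
    worker.start()

    // NSObject also offers selector-based helpers for spawning threads.
    class Downloader: NSObject {
        @objc func download() {
            print("Downloading on a background thread")
        }
    }
    let downloader = Downloader()
    downloader.performSelector(inBackground: #selector(Downloader.download), with: nil)

    Thread.sleep(forTimeInterval: 0.2)   // demo only: give the background work time to run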

2. POSIX threads

POSIX threads provide a C-based interface for creating threads. If you are not writing a Cocoa application, this is the best choice for creating threads. The POSIX interface is relatively simple to use and offers ample flexibility for managing your threads. For more information, see "Using POSIX Threads."
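
A minimal Swift sketch of the POSIX interface on Apple platforms (assumes Darwin, where the thread routine receives a non-optional pointer; the Job type and its payload are illustrative):

    import Darwin

    // Create a joinable POSIX thread, hand it a retained object as its argument,
    // and wait for it to finish.
    final class Job { let name = "demo" }
    let arg = Unmanaged.passRetained(Job()).toOpaque()

    var thread: pthread_t?
    let status = pthread_create(&thread, nil, { raw in
        // Runs on the newly created thread.
        let job = Unmanaged<Job>.fromOpaque(raw).takeRetainedValue()
        print("Running \(job.name) on a POSIX thread")
        return nil
    }, arg)

    if status == 0, let thread = thread {
        pthread_join(thread, nil)   // POSIX threads are joinable unless created detached
    }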

3. Multiprocessing Services

Multiprocessing Services is a legacy C-based interface used by applications transitioning from older versions of Mac OS. This technology is available in Mac OS X only and should be avoided for any new development. Instead, you should use the NSThread class or POSIX threads. If you need more information on this technology, see Multiprocessing Services Programming Guide.

 

Synchronization Tools

One of the hazards of threaded programming is resource contention among multiple threads. If multiple threads try to use or modify the same resource at the same time, problems can occur. One way to alleviate the problem is to eliminate shared resources altogether and make sure each thread operates on its own distinct set of resources. When keeping resources completely separate is not an option, you may have to synchronize access to them using locks, conditions, atomic operations, and other techniques.

A lock provides a brute-force form of protection for code that only one thread may execute at a time. The most common type of lock is the mutually exclusive lock, usually just called a mutex. When a thread tries to acquire a mutex that is currently held by another thread, it blocks until the lock is released by that thread. Several system frameworks provide support for mutex locks, although they are all based on the same underlying technology. In addition, Cocoa provides several variants of the mutex lock to support different types of behavior, such as recursion.
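
For example, a minimal Swift sketch of protecting a shared counter with NSLock, one of Cocoa's mutex wrappers (the counts are arbitrary):

    import Foundation

    // Only one thread at a time may execute the code between lock() and unlock().
    let lock = NSLock()
    var counter = 0

    let group = DispatchGroup()
    for _ in 0..<4 {
        DispatchQueue.global().async(group: group) {
            for _ in 0..<1_000 {
                lock.lock()
                counter += 1          // the critical section
                lock.unlock()
            }
        }
    }
    group.wait()
    print("Counter is \(counter)")    // always 4000 thanks to the mutex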

In addition to locks, the system provides conditions, which ensure the proper sequencing of tasks within your application. A condition acts as a gatekeeper, blocking a given thread until the condition it represents becomes true. When that happens, the condition releases the thread and allows it to continue. Both the POSIX layer and the Foundation framework provide direct support for conditions. (If you use operation objects, you can configure dependencies among your operation objects to sequence the execution of tasks, which is very similar to the behavior offered by conditions.)

Although locks and conditions are very common in concurrent design, atomic operations are another way to protect and synchronize access to data. Atomic operations offer a lightweight alternative to locks in situations where you can perform simple mathematical or logical operations on scalar data types. Atomic operations use special hardware instructions to ensure that modifications to a variable are completed before other threads can access it.
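
As an illustration, the Swift sketch below uses OSAtomicIncrement32 from libkern (deprecated on current SDKs in favor of stdatomic or locks, but representative of the era this guide describes); the pointer-based counter exists only for the example:

    import Foundation

    // Each increment completes atomically, so no lock is needed around it.
    let counter = UnsafeMutablePointer<Int32>.allocate(capacity: 1)
    counter.initialize(to: 0)

    let group = DispatchGroup()
    for _ in 0..<4 {
        DispatchQueue.global().async(group: group) {
            for _ in 0..<1_000 {
                OSAtomicIncrement32(counter)   // deprecated; shown for illustration
            }
        }
    }
    group.wait()
    print("Atomic counter is \(counter.pointee)")   // always 4000, with no explicit lock
    counter.deallocate()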

 

Inter-thread Communication

There are many methods for inter-thread communication, each of which has its advantages and disadvantages.

1. Direct messaging

Cocoa applications support the ability to perform selectors directly on other threads. This capability means that one thread can essentially execute a method on any other thread. Because they are executed in the context of the target thread, messages sent this way are automatically serialized on that thread. For information about input sources, see "Cocoa Perform Selector Sources."
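
A small Swift sketch of performing a selector on the main thread from a background thread; the receiver class and its method are illustrative:

    import Foundation

    // Illustrative receiver whose method should run on the main thread.
    class StatusReporter: NSObject {
        @objc func updateStatus(_ message: String) {
            print("On main thread? \(Thread.isMainThread) - \(message)")
        }
    }

    let reporter = StatusReporter()

    DispatchQueue.global().async {
        // Sent from a background thread; the call is serialized onto the main
        // thread and executed in the context of its run loop.
        reporter.performSelector(onMainThread: #selector(StatusReporter.updateStatus(_:)),
                                 with: "Work finished",
                                 waitUntilDone: false)
    }

    RunLoop.main.run(until: Date(timeIntervalSinceNow: 0.5))   // demo only: let the main run loop deliver it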

2. Global variables, shared memory, and objects

Another simple way to communicate information between two threads is to use a global variable, shared object, or shared block of memory. Although shared variables are fast and simple, they are also more fragile than direct messaging. Shared variables must be carefully protected with locks or other synchronization mechanisms to ensure the correctness of your code. Failure to do so could lead to race conditions, corrupted data, or crashes.

3. Conditions

Conditions are a synchronization tool that you can use to control when a thread executes a particular portion of code. You can think of conditions as gatekeepers, letting a thread run only when the stated condition is met. For information on how to use conditions, see "Using Conditions."
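
For example, a minimal Swift sketch of an NSCondition acting as a gatekeeper between a producer and a consumer thread (the timing values are arbitrary):

    import Foundation

    let condition = NSCondition()
    var dataReady = false

    // Consumer: blocks until the stated condition is true.
    let consumer = Thread {
        condition.lock()
        while !dataReady {        // always re-check the predicate after waking
            condition.wait()
        }
        print("Consumer: data is ready, proceeding")
        condition.unlock()
    }
    consumer.start()

    // Producer: does some work, then signals the waiting thread.
    Thread.sleep(forTimeInterval: 0.2)
    condition.lock()
    dataReady = true
    condition.signal()
    condition.unlock()

    Thread.sleep(forTimeInterval: 0.2)   // demo only: give the consumer time to finish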

 

4. Run loop sources

A custom run loop source is one that you set up to receive application-specific messages on a thread. Because they are event driven, run loop sources put your thread to sleep automatically when there is nothing to do, which improves your thread's efficiency. For information about run loops and run loop sources, see "Run Loops."

5. Ports and sockets

Port-based communication is a more elaborate way to communicate between two threads, but it is also a very reliable technique. More importantly, ports and sockets can be used to communicate with external entities, such as other processes and services. For efficiency, ports are implemented using run loop sources, so your thread sleeps when there is no data waiting on the port. For information about run loops and about port-based input sources, see "Run Loops."

6. Message queues

The legacy Multiprocessing Services defines a first-in, first-out (FIFO) queue abstraction for managing incoming and outgoing data. Although message queues are simple and convenient, they are not as efficient as some other communication techniques. For more information about how to use message queues, see Multiprocessing Services Programming Guide.

7. Cocoa distributed objects

Distributed objects is a Cocoa technology that provides a high-level implementation of port-based communication. Although it is possible to use this technology for inter-thread communication, doing so is highly discouraged because of the amount of overhead it incurs. Distributed objects is much more suitable for communicating with other processes, where the overhead of going between processes is already high. For more information, see Distributed Objects Programming Topics.

 

Design Tips

1. Avoid explicit thread creation

Writing thread-creation code manually is tedious and error-prone, so you should avoid it whenever possible. Mac OS X and iOS provide implicit support for concurrency through other APIs. Rather than creating threads yourself, consider using asynchronous APIs, GCD, or operation objects to do the work. These technologies do the thread-related work for you behind the scenes and are guaranteed to do it correctly. In addition, GCD and operation objects are designed to manage threads, adjusting the number of active threads to the current system load more efficiently than your own code could. For more information about GCD and operation objects, see the Concurrency Programming Guide.

2. Keep your threads reasonably busy

If you decide to create and manage threads manually, remember that threads consume precious system resources. You should do your best to make sure that any tasks you assign to a thread are reasonably long-lived and productive. At the same time, you should not be afraid to terminate threads that spend most of their time idle.

3. Avoid sharing data structures

The simplest and easiest way to avoid thread-related resource conflicts is to give each thread in your application its own copy of whatever data it needs. Parallel code works best when you minimize communication and resource contention among your threads.

4. Multithreading and your user interface

If your application has a graphical user interface, it is recommended that you receive user-interface-related events and initiate interface updates from the main thread. This approach helps avoid synchronization issues associated with handling user events and drawing window content. Some frameworks, such as Cocoa, generally require this behavior; even where it is not required, keeping interface work on the main thread has the advantage of simplifying the logic for managing your user interface.
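
A minimal Swift sketch of that division of labor using GCD (the "expensive work" and the printed stand-in for a UI update are placeholders):

    import Foundation

    // Do the heavy work on a background queue, then hop back to the main
    // thread for anything that touches the user interface.
    DispatchQueue.global(qos: .userInitiated).async {
        let result = (1...100_000).reduce(0, +)      // stand-in for expensive work

        DispatchQueue.main.async {
            // Interface-related work belongs here (updating labels, views, etc.).
            print("Update the UI with \(result); on main thread: \(Thread.isMainThread)")
        }
    }

    RunLoop.main.run(until: Date(timeIntervalSinceNow: 0.5))   // demo only; an app's main run loop is already running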

There are several notable exceptions where performing graphics operations on other threads is beneficial. For example, the QuickTime API includes a number of operations that can be performed on secondary threads, including opening movie files, rendering movies, compressing movie files, and importing and exporting images. Similarly, in Carbon and Cocoa you can use secondary threads to create and process images and perform other image-related computations. Using secondary threads for these operations can greatly improve performance. If you are unsure whether a particular graphics operation is safe off the main thread, plan on doing it from the main thread.

For details about QuickTime thread safety, see Technical Note TN2125, "Thread-Safe Programming in QuickTime." For more information about Cocoa thread safety, see "Thread Safety Summary." For more information about Cocoa drawing, see the Cocoa Drawing Guide.

5. Understand thread exit behavior

A process runs until all of its non-detached (joinable) threads have exited. By default, only the application's main thread is created as non-detached, but you can create other threads that way as well. When the user quits an application, it is usually considered appropriate to terminate all detached threads immediately, because the work done by detached threads is considered optional. If your application uses background threads to save data to disk or do other critical work, however, you may want to create those threads as non-detached to prevent data loss when the application exits.

Creating non-detached (joinable) threads requires extra work on your part. Because most of the high-level thread technologies do not create joinable threads by default, you may have to use the POSIX API to create such threads. In addition, you must add code to your main thread to join with the non-detached threads when they eventually exit. For more information about creating joinable threads, see "Setting the Detached State of a Thread."

If you are writing a Cocoa application, you can also use the applicationShouldTerminate: delegate method to delay the termination of the application until a later time, or to cancel it altogether. When delaying termination, your application needs to wait until any critical threads have finished their tasks and then invoke the replyToApplicationShouldTerminate: method. For more information about these methods, see NSApplication Class Reference.

6. Handling exceptions

When an exception is thrown, the exception-handling mechanism relies on the current call stack to perform any necessary cleanup. Because every thread has its own call stack, every thread is responsible for catching its own exceptions. Failing to catch an exception in a secondary thread is the same as failing to catch one in your main thread: the owning process is terminated. You cannot catch an exception thrown by another thread in the same process.

If you need to notify another thread (such as the main thread) of an exceptional situation in the current thread, you should catch the exception and simply send a message to the other thread indicating what happened. Depending on your model and what you are trying to do, the thread that caught the exception can then continue processing (if that is possible), wait for instructions, or simply exit.
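
Swift uses errors rather than Objective-C exceptions (which Swift code cannot catch directly), but the forwarding pattern described above looks roughly like this; the failing operation is hypothetical:

    import Foundation

    struct ParseFailure: Error {}                       // hypothetical error
    func parseData() throws -> Int { throw ParseFailure() }

    DispatchQueue.global().async {
        do {
            let value = try parseData()
            print("Parsed \(value)")
        } catch {
            // Catch the problem on the worker thread and forward a message to
            // the main thread describing what happened.
            DispatchQueue.main.async {
                print("Main thread notified of failure: \(error)")
            }
        }
    }

    RunLoop.main.run(until: Date(timeIntervalSinceNow: 0.5))   // demo only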

7. Terminate your threads cleanly

The best way for a thread to exit is naturally, by letting it reach the end of its main entry-point routine. Although there are functions for terminating a thread immediately, they should be used only as a last resort. Terminating a thread before it reaches its natural end point prevents the thread from cleaning up after itself. If the thread has allocated memory, opened a file, or acquired other types of resources, your code may be unable to reclaim those resources, resulting in memory leaks or other potential problems.

For more information about how to exit a thread correctly, see "Terminating a Thread."
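
A minimal Swift sketch of a worker thread that exits cleanly: it polls its cancellation flag, returns from its entry point, and releases its resources itself instead of being terminated mid-task (the buffer and timing values are illustrative):

    import Foundation

    let worker = Thread {
        let buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: 1024)   // a resource to clean up
        defer {
            buffer.deallocate()
            print("Worker cleaned up and exited naturally")
        }
        while !Thread.current.isCancelled {
            // ... one unit of work per iteration ...
            Thread.sleep(forTimeInterval: 0.05)
        }
    }
    worker.start()

    Thread.sleep(forTimeInterval: 0.2)
    worker.cancel()                      // ask the thread to exit; do not force-terminate it
    Thread.sleep(forTimeInterval: 0.2)   // demo only: let it observe the flag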

8. Thread safety in libraries

Although an application developer controls whether the application runs multiple threads, a library developer does not have that control. When developing a library, you must assume that the calling application is multithreaded, or could switch to being multithreaded at any time. Therefore, you should always use locks around the critical sections of your code.

It is unwise for a library developer to create locks only when the calling application becomes multithreaded. If you need to lock parts of your code, create the lock object early in the use of your library, preferably in an explicit call that initializes the library. Although you could also use a static initialization function to create such locks, do so only when there is no other way: executing an initialization function adds to the time required to load your library and could adversely affect performance.
