Multi-threaded Learning (II.)

Source: Internet
Author: User
Tags: posix

Multithreading Concepts

Concurrency and parallelism 

In a multithreaded process on a single processor, the processor switches execution resources between threads, producing concurrency.

When the same multithreaded process runs in a shared-memory multiprocessor environment, each thread in the process can run concurrently on a separate processor, producing parallelism. If the process has no more threads than there are processors, the threads support system and the operating environment ensure that each thread runs on a different processor. For example, in a matrix multiplication with as many threads as processors, each thread (on its own processor) computes one row of the result.

Multithreading structure at a glance

Traditional UNIX already supports the concept of multithreading: each process contains a single thread, so programming with multiple processes is programming with multiple threads. A process, however, is also an address space, so creating a process involves creating a new address space. Creating a thread is much cheaper, because the newly created thread uses the address space of the current process. Switching between threads also takes less time than switching between processes, because a thread switch does not include an address-space switch. Communication between threads within a process is simple, because the threads share everything, in particular the address space; data produced by one thread is therefore immediately available to all the other threads.

In Solaris 9 and earlier Solaris releases, the interfaces supporting multithreading were implemented in separate subroutine libraries: libpthread for POSIX threads and libthread for Solaris threads. Multithreading provides flexibility by separating kernel-level resources from user-level resources. In the current release, multithreading support for both sets of interfaces is provided by the standard C library.

User-level threads

Threads are the main programming interface in multithreaded programming. Threads are visible only within a process, and threads within a process share all process resources such as address space, open files, and so on.

User-level thread state

The following state is unique to each thread:

Thread ID

Register state (including the program counter and stack pointer)

Stack

Signal Mask

Priority level

Thread-specific storage
Because threads share process instructions and most process data, a change made to shared data by one thread is visible to the other threads in the process. When a thread needs to interact with other threads in the same process, it can do so without involving the operating system.

Thread scheduling

The POSIX standard specifies three scheduling policies: first-in-first-out (SCHED_FIFO), round-robin (SCHED_RR), and custom (SCHED_OTHER).

SCHED_FIFO is a queue-based scheduler with a separate queue for each priority level.

SCHED_RR is similar to SCHED_FIFO, except that each thread has an execution time quantum.

SCHED_FIFO and SCHED_RR are POSIX realtime extensions; SCHED_OTHER is the default scheduling policy.

Thread cancellation

A thread can request the termination of any other thread in the same process. The target thread (the thread being canceled) can hold cancellation requests pending, and can perform application-specific cleanup when it acts on a cancellation request.

With the pthread cancellation feature, a thread can be canceled asynchronously or deferred. Asynchronous cancellation can occur at any time; deferred cancellation can occur only at defined points. Deferred cancellation is the default type.

Thread synchronization

Synchronization lets you control program flow and access to data shared by multiple concurrently executing threads.

There are four synchronization models: mutex locks, read-write locks, condition variables, and semaphores.

Mutex locks allow only one thread at a time to execute a specific section of code or to access specific data.

Read-write locks permit concurrent reads and exclusive writes to a protected shared resource. To modify the resource, a thread must first acquire the exclusive write lock, which is not granted until all read locks have been released.

A condition variable blocks threads until a particular condition is true.

Counting semaphores are typically used to coordinate access to resources. The count limits how many threads can access the semaphore-protected resource at the same time; once the count is reached, further threads block on the semaphore.
