Multithreading in C# - Getting Started


Overview and Concepts
C# supports parallel execution of code through multithreading. A thread is an independent execution path, able to run simultaneously with other threads. A C# program starts in a single thread, created automatically by the CLR and the operating system (the "main" thread), and is made multithreaded by creating additional threads. Here's a simple example and its output:

Unless otherwise specified, all examples assume the following namespaces are imported: using System; using System.Threading;

class ThreadTest {
    static void Main() {
        Thread t = new Thread(WriteY);
        t.Start();                         // run WriteY on a new thread
        while (true) Console.Write("X");   // and keep writing 'X'
    }
    static void WriteY() {
        while (true) Console.Write("Y");   // and keep writing 'Y'
    }
}
XXXXXXXXXXXXXXXXYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXYYYYYYYYYYYYYYYYYYYY ...

The main thread creates a new thread t, which runs a method that repeatedly prints the letter "Y", while the main thread repeatedly prints the letter "X". The CLR allocates each thread its own memory stack, so local variables are kept separate. In the next example we define a method with a local variable, then call that method simultaneously on the main thread and on a newly created thread.

static void Main() {
    new Thread(Go).Start();   // call Go() on a new thread
    Go();                     // call Go() on the main thread
}
static void Go() {
    // declare and use a local variable, 'cycles'
    for (int cycles = 0; cycles < 5; cycles++) Console.Write('?');
}
??????????

A separate copy of the cycles variable is created on each thread's memory stack, so the output is, predictably, ten question marks. Threads share data when they have a reference to the same object instance. Here's an example:

class ThreadTest {
    bool done;
    static void Main() {
        ThreadTest tt = new ThreadTest();   // create a common instance
        new Thread(tt.Go).Start();
        tt.Go();
    }
    // note that Go is now an instance method
    void Go() {
        if (!done) { done = true; Console.WriteLine("Done"); }
    }
}
Because both threads call Go() on the same ThreadTest instance, they share the done field, and the result is "Done" printed once rather than twice.

Done

Static fields provide another way to share data between threads. Here's the same example with done as a static field:

class ThreadTest {
    static bool done;   // static fields are shared by all threads
    static void Main() {
        new Thread(Go).Start();
        Go();
    }
    static void Go() {
        if (!done) { done = true; Console.WriteLine("Done"); }
    }
}
Both of these examples also illustrate another key concept: thread safety (or rather, the lack of it!). The output is actually indeterminate: it is possible (though unlikely) for "Done" to be printed twice. If, however, we swap the order of statements in the Go method, the odds of "Done" being printed twice rise dramatically:

static void Go() {
    if (!done) { Console.WriteLine("Done"); done = true; }
}
Done
Done   (usually!)

The problem is that one thread can be evaluating the if condition at exactly the moment the other thread is executing the WriteLine statement, before it has had a chance to set done to true.

The remedy is to obtain an exclusive lock while reading and writing the shared field, and C# provides the lock statement for just this purpose:

class ThreadSafe {
    static bool done;
    static object locker = new object();
    static void Main() {
        new Thread(Go).Start();
        Go();
    }
    static void Go() {
        lock (locker) {
            if (!done) { Console.WriteLine("Done"); done = true; }
        }
    }
}
When two threads contend for a lock (in this case, locker), one thread waits, or blocks, until the lock becomes available. Here that guarantees only one thread can enter the critical section at a time, so "Done" is printed just once. Code protected in this way, against indeterminacy in a multithreaded environment, is called thread-safe.

Temporarily pausing, or blocking, is an essential feature of how threads coordinate, or synchronize, their activities. Waiting for an exclusive lock to be released is one reason a thread can block; another is when a thread wants to pause, or sleep, for a while:

Thread.Sleep(TimeSpan.FromSeconds(30));   // block for 30 seconds
A thread can also wait for another thread to end by calling that thread's Join method:

Thread t = new Thread(Go);   // assume Go is some static method
t.Start();
t.Join();                    // wait (block) until thread t ends
A thread, while blocked, doesn't consume CPU resources.

How threading works
A thread is managed internally by a thread scheduler, a function the CLR typically delegates to the operating system. The thread scheduler ensures that all active threads are allocated appropriate execution time, and that threads which are waiting or blocked, for instance on an exclusive lock or on user input, do not consume CPU time.

On a single-core computer, the thread scheduler performs time-slicing, rapidly switching execution between each of the active threads. This produces the "choppy" behavior of the first example, where each block of repeated X or Y characters corresponds to a time slice given to that thread. Under Windows XP a time slice is typically in the tens-of-milliseconds region, chosen to be much larger than the CPU overhead of actually switching between threads (which is typically in the few-microseconds region).

On a multicore computer, multithreading is implemented with a mixture of time-slicing and genuine concurrency, where different threads run simultaneously on different CPUs. It is almost certain that some time-slicing will still occur, because the operating system needs to service its own threads as well as those of other applications.

A thread is said to be preempted when its execution is interrupted by an external factor such as time-slicing. In most situations, a thread has no control over when and where it is preempted.

Thread vs. process
All the threads belonging to a single application are logically contained within a process, the operating-system unit under which an application runs.

Threads are similar to processes in some ways: for instance, processes are typically time-sliced with other processes running on the computer in much the same way as the threads within a single C# program. The key difference is that processes are fully isolated from each other, whereas threads share heap memory with the other threads running in the same program. This is what makes threads so useful: one thread can fetch data in the background while another thread presents the data already fetched in the foreground.
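To make that concrete, here's a minimal sketch, using the lock statement introduced earlier (it also needs using System.Collections.Generic). A background thread slowly "reads" data into a shared list on the heap while the main thread reports progress; the class name, the loop counts and the delays are invented purely for illustration:

class SharedHeapDemo {
    static readonly object locker = new object();
    static List<string> lines = new List<string>();   // lives on the heap, visible to both threads

    static void Main() {
        new Thread(ReadInBackground).Start();
        for (int i = 0; i < 5; i++) {                  // the main thread presents whatever has arrived so far
            Thread.Sleep(300);
            lock (locker) Console.WriteLine("Read so far: " + lines.Count + " lines");
        }
    }

    static void ReadInBackground() {
        for (int i = 0; i < 20; i++) {
            Thread.Sleep(100);                          // simulate slow I/O
            lock (locker) lines.Add("line " + i);
        }
    }
}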

When to use multithreading
Multithreaded programs are typically used to perform time-consuming tasks in the background. The main thread keeps running, while a worker thread does the background job. With Windows Forms programs, if the main thread is tied up performing a lengthy operation, keyboard and mouse messages cannot be processed and the program becomes unresponsive. For this reason it's worth running time-consuming tasks on a worker thread even when the main thread shows a modal "Processing... please wait" dialog, in cases where the program can't proceed until the task completes. This avoids the operating system marking the program as "Not Responding", which tempts the user to force-quit the process and lose work. The modal dialog approach also allows a "Cancel" button to be implemented, because the dialog keeps receiving events while the actual task is carried out by the worker thread. The BackgroundWorker class assists with exactly this pattern.
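As a rough sketch of how BackgroundWorker might be used (shown here as a console program rather than a real Windows Forms app; the three-second Thread.Sleep and the result value 42 are invented placeholders for the lengthy task):

using System;
using System.ComponentModel;
using System.Threading;

class Program {
    static void Main() {
        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += delegate(object sender, DoWorkEventArgs e) {
            Thread.Sleep(3000);              // the lengthy task runs on a worker thread
            e.Result = 42;
        };
        worker.RunWorkerCompleted += delegate(object sender, RunWorkerCompletedEventArgs e) {
            // in a Windows Forms app this handler fires back on the UI thread
            Console.WriteLine("Finished, result = " + e.Result);
        };
        worker.RunWorkerAsync();             // start the background work
        Console.WriteLine("Main thread stays responsive...");
        Console.ReadLine();                  // keep the process alive long enough to see the result
    }
}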

In programs without a user interface, such as a Windows Service, multithreading makes particular sense when a task is potentially time-consuming because it is waiting for a response from another computer, such as an application server, a database server, or a client. Completing such a task on a worker thread means the main thread is immediately free to do other things.

Another use of multithreading is in methods that perform intensive calculations. Such a method can run faster on a multicore computer if its workload is divided among several threads (the number of processors can be read from the Environment.ProcessorCount property).
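For instance, here's a rough sketch of dividing a summation across one thread per processor; the data array, the chunking scheme and the names are made up purely for illustration:

using System;
using System.Threading;

class ParallelSum {
    static void Main() {
        int[] data = new int[10000000];
        for (int i = 0; i < data.Length; i++) data[i] = 1;

        int threadCount = Environment.ProcessorCount;   // one worker per processor core
        long[] partialSums = new long[threadCount];     // each worker writes only its own slot
        Thread[] workers = new Thread[threadCount];
        int chunk = data.Length / threadCount;

        for (int t = 0; t < threadCount; t++) {
            int start = t * chunk;
            int end = (t == threadCount - 1) ? data.Length : start + chunk;
            int slot = t;                                // copy the loop variable for the anonymous method
            workers[t] = new Thread(delegate() {
                long sum = 0;
                for (int i = start; i < end; i++) sum += data[i];
                partialSums[slot] = sum;
            });
            workers[t].Start();
        }

        foreach (Thread w in workers) w.Join();          // wait for all workers to finish
        long total = 0;
        foreach (long s in partialSums) total += s;
        Console.WriteLine(total);                        // 10000000
    }
}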

A C# program can become multithreaded in two ways: by explicitly creating and running additional threads, or by using features of the .NET Framework that implicitly create threads, such as the BackgroundWorker class, the thread pool, threading timers, Remoting servers, or Web Services and ASP.NET applications. In the latter cases, there is no choice: a single-threaded ASP.NET web server would not be cool, even if such a thing existed. Fortunately, multithreading is quite routine in application servers; the only real concern is static variables, which need an appropriate locking mechanism.

When not to use multithreading
Multithreading also has disadvantages. The biggest is that it makes programs much more complex. Having multiple threads is not in itself complicated; what is complicated is the interaction between threads, which lengthens development cycles whether or not the interaction is intentional, and brings intermittent, non-reproducible bugs. For this reason, either keep thread interaction in a design simple, or do not use multithreading at all, unless you have a strong appetite for rewriting and debugging.

Multithreading also increases resource and CPU overhead through the cost of allocating and switching threads. In some cases, particularly where heavy I/O is involved, it can be faster to have just one or two worker threads performing tasks in sequence than to have many threads each executing a task at the same time. We implement a producer/consumer queue later, which provides exactly this functionality.
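To give a flavor of the idea ahead of the fuller version developed later, here's a minimal sketch of a single worker thread draining a queue of tasks; the SimpleWorkQueue name and the string "tasks" are invented for illustration:

using System;
using System.Collections.Generic;
using System.Threading;

class SimpleWorkQueue {
    readonly object locker = new object();
    readonly Queue<string> tasks = new Queue<string>();

    public SimpleWorkQueue() { new Thread(Consume).Start(); }   // one worker thread serves the queue

    public void Enqueue(string task) {
        lock (locker) {
            tasks.Enqueue(task);
            Monitor.Pulse(locker);                              // wake the worker if it's waiting
        }
    }

    void Consume() {
        while (true) {
            string task;
            lock (locker) {
                while (tasks.Count == 0) Monitor.Wait(locker);  // block until work arrives
                task = tasks.Dequeue();
            }
            if (task == null) return;                           // a null task signals the worker to exit
            Console.WriteLine("Performing: " + task);
        }
    }
}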

Creating and starting threads
Threads are created with the Thread class, passing in a ThreadStart delegate that indicates where execution should begin. Here is how the ThreadStart delegate is defined:

public delegate void ThreadStart ();
Calling the Start method sets the thread running; the thread continues until the method it invokes returns, at which point the thread ends. Here's an example that creates a ThreadStart delegate using C#'s explicit syntax:

class ThreadTest {
    static void Main() {
        Thread t = new Thread(new ThreadStart(Go));
        t.Start();   // run Go() on a new thread
        Go();        // run Go() on the main thread as well
    }
    static void Go() { Console.WriteLine("hello!"); }
}
In this example, thread t executes Go() at roughly the same time as the main thread calls Go(). The result is two near-simultaneous hellos:

hello!
hello!

A thread can be created more conveniently using C#'s shortcut syntax for instantiating delegates:

static void Main() {
    Thread t = new Thread(Go);   // no need to explicitly use ThreadStart
    t.Start();
    ...
}
static void Go() { ... }
In this case, the ThreadStart delegate is inferred automatically by the compiler. Another shortcut is to use an anonymous method to start the thread:

static void Main() {
    Thread t = new Thread(delegate() { Console.WriteLine("hello!"); });
    t.Start();
}
A thread has an IsAlive property that returns true after its Start() method has been called, up until the thread ends.

Once a thread has finished, it cannot be restarted.
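Both points can be seen in a quick sketch like this (the one-second sleep is arbitrary):

Thread t = new Thread(delegate() { Thread.Sleep(1000); });
Console.WriteLine(t.IsAlive);   // False - not yet started
t.Start();
Console.WriteLine(t.IsAlive);   // True - started and not yet finished
t.Join();                       // wait for the thread to end
Console.WriteLine(t.IsAlive);   // False - finished
t.Start();                      // throws ThreadStateException - a finished thread cannot be restarted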

Passing data to ThreadStart
Suppose, in the example above, we want to better distinguish the output of each thread by having one of them write in upper case. We could achieve this by passing a flag into Go, but we can't use the ThreadStart delegate because it doesn't accept arguments. Fortunately, the .NET Framework defines another version of the delegate, called ParameterizedThreadStart, which accepts a single object argument:

public delegate void ParameterizedThreadStart(object obj);
The previous example looks like this:

class ThreadTest {
    static void Main() {
        Thread t = new Thread(Go);
        t.Start(true);    // == Go(true)
        Go(false);
    }
    static void Go(object upperCase) {
        bool upper = (bool)upperCase;
        Console.WriteLine(upper ? "Hello!" : "hello!");
    }
}
hello!
Hello!

In this example, the compiler automatically infers the ParameterizedThreadStart delegate because the Go method accepts a single object argument. We could just as well have written:

Thread t = new Thread(new ParameterizedThreadStart(Go));
t.Start(true);
A limitation of ParameterizedThreadStart is that it accepts only one argument, and because that argument is of type object, it usually needs to be cast (with value types such as bool being boxed along the way).

An alternative is to use an anonymous method to invoke an ordinary method as follows:

static void Main() {
    Thread t = new Thread(delegate() { WriteText("Hello"); });
    t.Start();
}
static void WriteText(string text) { Console.WriteLine(text); }
The advantage is that the target method (here, WriteText) can accept any number of arguments, and no boxing is involved. However, care is needed with outer variables captured by the anonymous method, as the following shows:

static void Main() {
    string text = "Before";
    Thread t = new Thread(delegate() { WriteText(text); });
    text = "After";
    t.Start();
}
static void WriteText(string text) { Console.WriteLine(text); }
After

Captured outer variables open the door to an odd phenomenon: the thread can unintentionally interact with the caller when an outer variable is modified after the delegate is created. Intentional interaction (usually via fields) is considered enough! Once the thread has started, it's best to treat captured outer variables as read-only, unless one is willing to use appropriate locking.
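A classic way to trip over this is capturing a loop variable. The following sketch shows the surprise and the usual cure; the exact digits printed will vary from run to run:

// Every anonymous method below captures the same variable i, which the loop
// keeps changing, so the output is indeterminate - e.g. 0223557799.
for (int i = 0; i < 10; i++)
    new Thread(delegate() { Console.Write(i); }).Start();

// The cure: copy the value into a variable scoped to the loop body,
// so each thread captures its own snapshot (prints the digits 0-9, in some order).
for (int i = 0; i < 10; i++) {
    int temp = i;
    new Thread(delegate() { Console.Write(temp); }).Start();
}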

Another common technique is to pass an object instance's method to the thread, rather than a static method. The instance's properties can then tell the thread what to do, as in the following rewrite of the original example:

class ThreadTest {
    bool upper;
    static void Main() {
        ThreadTest instance1 = new ThreadTest();
        instance1.upper = true;
        Thread t = new Thread(instance1.Go);
        t.Start();
        ThreadTest instance2 = new ThreadTest();
        instance2.Go();   // main thread - runs with upper == false
    }
    void Go() { Console.WriteLine(upper ? "Hello!" : "hello!"); }
}
Naming threads
A thread can be named via its Name property, which is a great help in debugging: thread names can be printed with Console.WriteLine, and Microsoft Visual Studio displays them in the Debug Location toolbar. A thread's name can be set at any time, but only once; renaming it later throws an exception.

The program's main thread can also be named; in the following example it is accessed via the static CurrentThread property:

class ThreadNaming {
    static void Main() {
        Thread.CurrentThread.Name = "main";
        Thread worker = new Thread(Go);
        worker.Name = "worker";
        worker.Start();
        Go();
    }
    static void Go() {
        Console.WriteLine("Hello from " + Thread.CurrentThread.Name);
    }
}
Hello from main
Hello from worker

Foreground and background threads
By default, threads are foreground threads, meaning any running foreground thread keeps the application alive. C# also supports background threads, which do not keep the application alive on their own: once all foreground threads have ended, the application exits.

Changing a thread from foreground to background does not change its priority or its status within the CPU scheduler in any way.

A thread's IsBackground property controls its foreground/background status, as in the following example:

class PriorityTest {
    static void Main(string[] args) {
        Thread worker = new Thread(delegate() { Console.ReadLine(); });
        if (args.Length > 0) worker.IsBackground = true;
        worker.Start();
    }
}
If the program is called with no arguments, the worker thread is a foreground thread and waits on the ReadLine statement for the user to press Enter. Meanwhile the main thread exits, but the application keeps running because a foreground thread is still alive.

On the other hand, if an argument is passed to Main(), the worker is made a background thread, and the application exits almost immediately as the main thread ends, terminating the ReadLine.

Terminating a background thread this way, with its final actions simply skipped, may not be appropriate. Good practice is to explicitly wait for any background worker threads to finish before exiting the application, possibly with a timeout (usually via Thread.Join). If for some reason a worker thread cannot finish, one can then attempt to abort it and, failing that, abandon the thread, allowing it to die with the process. (Logging the conundrum at that point also makes sense.)
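Here's one sketch of that exit pattern, with an arbitrary five-second timeout and an invented DoBackgroundWork method standing in for the real work:

static void Main() {
    Thread worker = new Thread(DoBackgroundWork);
    worker.IsBackground = true;
    worker.Start();

    // ... the rest of the program runs ...

    // Before exiting, give the background worker a chance to finish cleanly.
    if (!worker.Join(TimeSpan.FromSeconds(5))) {
        // The worker didn't finish in time: log the situation and let it
        // die with the process (being a background thread, it won't hold the app open).
        Console.WriteLine("Worker did not finish - abandoning it.");
    }
}
static void DoBackgroundWork() { /* ... */ }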

Making worker threads background threads is beneficial for exactly this reason: the application always has the final say when it comes to ending itself, rather than being held open by a foreground thread that refuses to die. An abandoned foreground worker thread is particularly insidious in a Windows Forms program, because the application appears to exit when the main thread ends (at least to the user), yet its process keeps running. In the Windows Task Manager it disappears from the Applications tab, but it can still be found on the Processes tab. Unless the user hunts it down and ends it, it continues to consume resources and may prevent a new instance of the program from starting or functioning properly.

A common cause of applications failing to exit properly is the presence of "forgotten" foreground threads.

Thread Priority
A thread's Priority property determines how much execution time it gets relative to the other active threads in the same process, on the following scale:

enum ThreadPriority { Lowest, BelowNormal, Normal, AboveNormal, Highest }
Priority matters only when multiple threads are simultaneously active.
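As a rough illustration (assuming the usual namespaces; on a machine with idle cores both threads may finish almost together, since priority only matters when threads compete for CPU):

static void Main() {
    Thread slow = new Thread(Spin);
    Thread fast = new Thread(Spin);
    slow.Priority = ThreadPriority.Lowest;
    fast.Priority = ThreadPriority.Highest;
    slow.Start("low priority");
    fast.Start("high priority");
}
static void Spin(object name) {
    double junk = 0;
    for (int i = 0; i < 100000000; i++) junk += Math.Sqrt(i);   // busy-work so the threads compete for CPU
    Console.WriteLine((string)name + " finished");
}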

Setting a thread's priority high doesn't mean it can perform real-time work, because it is still limited by the priority of its process. To perform real-time work, the process level must also be elevated using the Process class in the System.Diagnostics namespace, like this (I didn't tell you how to do this):

Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
ProcessPriorityClass.High is actually one notch short of the highest process priority: Realtime. Setting the process priority to Realtime tells the operating system that you never want the process to be preempted. If the program enters an accidental infinite loop you can expect even the operating system to be locked out; nothing short of the power button will save you! For this reason, High is generally considered the highest useful process priority.

If a real-time application has a user interface, raising the process priority can be undesirable, because screen updates then receive excessive CPU time, slowing the entire computer, particularly if the UI is complex. (At the time of writing, the Internet telephony program Skype gets away with doing this, perhaps because its interface is fairly simple.) Lowering the main thread's priority while raising the process priority ensures that the real-time thread isn't preempted by screen redraws, but it doesn't stop the computer from slowing down, because the operating system still allocates excessive CPU to the process as a whole. The ideal solution is to run the real-time work and the user interface in separate processes (with different priorities), communicating via Remoting or shared memory; shared memory requires P/Invoking the Win32 API (search for CreateFileMapping and MapViewOfFile).

Exception handling
Any try/catch/finally blocks in scope when a thread is created are of no relevance to that thread once it starts executing. Consider the following program:

public static void Main() {
    try {
        new Thread(Go).Start();
    }
    catch (Exception ex) {
        // we'll never get here!
        Console.WriteLine("Exception!");
    }
}
static void Go() { throw null; }
The try/catch statement here is useless: the newly created thread will be encumbered with an unhandled NullReferenceException. This behavior makes sense once you consider that each thread has an independent execution path. The remedy is to add an exception handler inside the method the thread enters:

public static void Main() {
    new Thread(Go).Start();
}
static void Go() {
    try {
        ...
        throw null;   // this exception will get caught below
        ...
    }
    catch (Exception ex) {
        // typically log the exception, and/or signal another thread
        // that we've come unstuck
        ...
    }
}
Starting with .NET 2.0, an unhandled exception on any thread shuts down the whole application, which means ignoring the exception is no longer an option. Hence a try/catch block is needed in the entry method of every thread, at least in production applications, to avoid an application crash caused by an unhandled exception. This can be somewhat cumbersome for Windows Forms programmers, who are used to the "global" exception handler, like this:

using System;
using System.Threading;
using System.Windows.Forms;

static class Program {
    static void Main() {
        Application.ThreadException += HandleError;
        Application.Run(new MainForm());
    }
    static void HandleError(object sender, ThreadExceptionEventArgs e) {
        // log the exception, then either exit the application or continue...
    }
}
The Application.ThreadException event fires when an exception is thrown from code that was ultimately called as a result of a Windows message (a keyboard, mouse or "paint" message, for example), in short, nearly all the code in a typical Windows Forms program. While this seems perfect, it lulls one into a false sense of security that all exceptions will be caught by the central handler. Exceptions thrown on worker threads are a good example of exceptions not caught by Application.ThreadException. (So is the code inside the Main method itself, including the main form's constructor, which executes before the Windows message loop begins.)

The .NET Framework provides a lower-level event for global exception handling: AppDomain.UnhandledException. This event fires on any unhandled exception on any thread, in any type of application (with or without a user interface). However, while it offers a good last-resort mechanism for logging untrapped exceptions, it provides no way to prevent the application from shutting down, and no way to suppress the .NET exception dialog.
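A minimal sketch of hooking that event might look like this; the one-second sleep merely keeps the main thread alive long enough for the worker to throw:

using System;
using System.Threading;

static class Program {
    static void Main() {
        AppDomain.CurrentDomain.UnhandledException += delegate(object sender, UnhandledExceptionEventArgs e) {
            // Last-chance logging only: the application will still shut down afterwards.
            Console.WriteLine("Unhandled: " + e.ExceptionObject);
        };
        new Thread(delegate() { throw null; }).Start();
        Thread.Sleep(1000);
    }
}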

