A simple introduction to Go concurrency programming

Source: Internet
Author: User
Tags: mutex

Concurrency is the logical ability to handle multiple tasks at once; parallelism is physically executing multiple tasks at the same time. On a single-core processor, tasks can achieve concurrency by switching between time slices, while parallelism depends on the hardware, typically a multicore processor.

Parallel execution is the ideal outcome of a concurrent design.

Multithreading or multiprocessing is a prerequisite for parallelism, but concurrency can also be achieved on a single thread. Coroutines, for example, implement multitask concurrency on one thread through active (cooperative) switching, and this has its own advantages: the tasks are executed serially in nature and scheduling is under the program's control, so no synchronization between them is needed.

Even multithreading does not guarantee parallel execution. Python, limited by the GIL, is by default only concurrent, not parallel, which is why a "multi-process + coroutine" architecture is often used instead.

In general: use multithreading for distribution and load balancing and to reduce the garbage-collection pressure on a single process; use multiple processes (LWPs) to claim more processor resources; and use coroutines to improve the utilization of each processor time slice.

The keyword go does not execute anything itself; it creates a concurrent task unit. The new task is placed in the runtime's queue, waiting for the scheduler to assign it to a suitable system thread for execution. The current goroutine does not block, does not wait for the new task to start, and the runtime makes no guarantee about the execution order of concurrent tasks.
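A minimal sketch of this behavior: the go statement returns immediately, so without some form of waiting, main may exit before the new task ever runs. The sleep here is only for demonstration; real code should use a channel or sync.WaitGroup.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The go statement only creates the task unit; it returns at once.
	go func() {
		fmt.Println("hello from a goroutine")
	}()

	// Crude wait so the goroutine has a chance to be scheduled.
	time.Sleep(100 * time.Millisecond)
}
```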

Each task unit saves, in addition to the function pointer and call arguments, the stack space needed for execution. A goroutine's custom stack starts at only 2KB, compared to the MB-scale default stacks of system threads, so thousands of concurrent tasks can be created cheaply. The custom stack grows on demand, up to a maximum of about 1GB.

Waiting: the process exits without waiting for concurrent tasks to finish. A channel can be used to block until a task emits an exit signal.

Besides closing the channel, sending a value can also be used to release the block.
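A minimal sketch of the channel-based wait, using close as the exit signal:

```go
package main

import "fmt"

func main() {
	done := make(chan struct{})

	go func() {
		fmt.Println("working...")
		close(done) // closing acts as a broadcast exit signal
	}()

	<-done // block until the goroutine signals completion
}
```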

To wait for multiple tasks to finish, sync.WaitGroup is recommended. Set a counter, and have each goroutine decrement it before exiting; Wait unblocks when the counter reaches 0. Although WaitGroup.Add is implemented with atomic operations, it is recommended to increment the counter outside the goroutines, otherwise Wait may return before some Add calls have executed.
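The pattern above can be sketched as follows; note that Add is called before each goroutine starts, not inside it:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup

	for i := 0; i < 3; i++ {
		wg.Add(1) // increment outside the goroutine
		go func(id int) {
			defer wg.Done() // decrement on exit
			fmt.Println("task", id)
		}(i)
	}

	wg.Wait() // unblocks once the counter reaches 0
}
```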

GOMAXPROCS: the runtime may create many threads, but at any moment only a limited number of them execute concurrent tasks; by default this limit equals the number of processor cores. It can be changed with the runtime.GOMAXPROCS function or the GOMAXPROCS environment variable.

If the argument is less than 1, GOMAXPROCS only returns the current setting and makes no adjustment.

You can use runtime.NumCPU to get the number of CPU cores.

Local storage: a goroutine task cannot be given a priority, has no retrievable ID, has no local storage (TLS), and even its return value is discarded. If a map is used as local storage, it is recommended to synchronize access with a lock, because the runtime performs concurrent read/write checks on maps.

Gosched: pauses the current task and releases the thread to run other tasks. The current task is put back into the queue to wait for the next scheduling round. This function is rarely needed, because the runtime actively issues preemptive scheduling for long-running (about 10ms) tasks. However, the preemption algorithm in the current implementation cannot guarantee that scheduling always succeeds, so active switching still has its use cases.

Goexit: immediately terminates the current task; the runtime ensures that all registered deferred calls are still executed. It does not affect other concurrent tasks, does not raise a panic, and therefore cannot be recovered.
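A minimal sketch showing that deferred calls still run after runtime.Goexit, while the code after the call never executes:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	done := make(chan struct{})

	go func() {
		defer close(done)          // deferred calls still run after Goexit
		defer fmt.Println("cleanup")
		runtime.Goexit()           // terminates this goroutine immediately
		fmt.Println("unreachable") // never executed
	}()

	<-done
}
```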

If Goexit is called in main.main, it terminates the main goroutine but waits for the other tasks to finish; once they do, the process crashes directly, since no goroutine remains.

Wherever it is called, Goexit immediately terminates the entire call stack of the current goroutine, unlike return, which only exits the current function. The standard library function os.Exit terminates the whole process, but without executing any deferred calls.

Channel:

Go does not enforce strict concurrency safety.

Instead, Go encourages the CSP style: channels use communication, rather than shared memory, to achieve concurrency safety.

Besides CSP, another model that avoids data races through message passing is the Actor model.

As the core of CSP, the channel is explicit: both operating parties must know the data type and the specific channel, but need not care about the identity or number of operators at the other end. A party blocks if the other end is not ready or the message has not been processed in time.

The Actor model is transparent: it does not care about the data type or the channel; knowing the recipient's mailbox is enough, and delivery defaults to asynchronous.

A channel is just a queue. In synchronous mode, sender and receiver are paired up, and the data is copied directly from one to the other. If pairing fails, the party is placed in a wait queue until a counterpart appears and wakes it up.

In asynchronous mode, the parties compete for buffer slots instead. A sender requires an empty slot to write into; a receiver requires buffered data to read. When the requirement is not met, the party is likewise added to the wait queue until the other side writes data or frees up a slot.
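The two modes can be sketched side by side; an unbuffered channel is synchronous, a buffered one asynchronous:

```go
package main

import "fmt"

func main() {
	// Synchronous (unbuffered): a send blocks until a receiver pairs up.
	syncCh := make(chan int)
	go func() { syncCh <- 1 }()
	fmt.Println(<-syncCh)

	// Asynchronous (buffered): sends succeed while slots are free.
	asyncCh := make(chan int, 2)
	asyncCh <- 1
	asyncCh <- 2 // does not block; the buffer has capacity 2
	fmt.Println(<-asyncCh, <-asyncCh)
}
```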

Channels are also commonly used for event notification.

In synchronous mode, every operation must have a matching counterpart, otherwise the goroutine blocks forever.

Most of the time, asynchronous channels help improve performance and reduce pairing congestion.

While passing pointers avoids copying data, you must then watch out for the additional concurrency-safety risks of the shared data.

The built-in functions cap and len return the buffer capacity and the current number of buffered elements; for a synchronous channel both return 0, which can be used to tell whether a channel is synchronous or asynchronous.

Data can be received with the ok-idiom or with range; a range loop is slightly more concise. Close the channel promptly with close to notify the receiving end that sending has finished, or a deadlock may result.
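Both receiving styles can be sketched together; note that without the close, the range loop would deadlock:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 3)

	go func() {
		for i := 1; i <= 3; i++ {
			ch <- i
		}
		close(ch) // without this, the range below deadlocks
	}()

	// range exits automatically once the channel is closed and drained.
	for v := range ch {
		fmt.Println(v)
	}

	// The ok-idiom reports whether the channel is still open.
	v, ok := <-ch
	fmt.Println(v, ok) // 0 false
}
```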

Notifications come in kinds. For one-time events, close is the most efficient, with no extra overhead; continuous or multi-kind events can be implemented by passing distinguishing data. You can also use sync.Cond for single (Signal) or broadcast (Broadcast) notifications.

For closed or nil channels, send and receive operations follow these rules:

1. Sending to a closed channel triggers a panic.

2. Receiving from a closed channel returns remaining buffered data or the zero value.

3. A nil channel blocks forever, whether sending or receiving.
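Rule 2 can be demonstrated directly; rules 1 and 3 are left as comments since running them would panic or deadlock:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 1)
	ch <- 42
	close(ch)

	fmt.Println(<-ch) // 42: buffered data is still delivered after close
	v, ok := <-ch
	fmt.Println(v, ok) // 0 false: zero value once drained

	// Sending to a closed channel would panic:
	//   ch <- 1 // panic: send on closed channel
	// Any operation on a nil channel blocks forever:
	//   var nilCh chan int; <-nilCh // deadlock
}
```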

Channels are bidirectional by default and do not distinguish a send end from a receive end. Sometimes, though, restricting the direction of operations yields more rigorous logic.

Although make can create a one-way channel directly, such a channel is meaningless by itself. The usual pattern is to create a bidirectional channel and hand each operating side a type-converted one-way view.
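A minimal sketch of the conversion pattern; the assignments below are implicit conversions to one-way channel types:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 1)

	// One-way views of the same underlying channel.
	var send chan<- int = ch // send-only
	var recv <-chan int = ch // receive-only

	send <- 7
	fmt.Println(<-recv)

	// recv <- 1 // compile error: cannot send on receive-only channel
	// <-send    // compile error: cannot receive from send-only channel
}
```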

To work with multiple channels at the same time, use a select statement, which randomly chooses one of the ready channels for its send or receive operation.

Once a channel's messages have been fully processed, set it to nil; it then blocks forever and is never selected again.

Even cases on the same channel are chosen at random.

When no channel is ready, select executes the default clause, which keeps select from blocking; be careful when this sits in an outer loop, though, or it will spin and waste CPU. default can also be used to supply fallback behavior.
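The nil-channel trick can be sketched as follows: each channel is set to nil once it is exhausted, so select stops considering it and the loop can terminate cleanly.

```go
package main

import "fmt"

func main() {
	a, b := make(chan int), make(chan int)

	go func() { a <- 1; close(a) }()
	go func() { b <- 2; close(b) }()

	for a != nil || b != nil {
		select {
		case v, ok := <-a:
			if !ok {
				a = nil // exhausted: never selected again
				continue
			}
			fmt.Println("a:", v)
		case v, ok := <-b:
			if !ok {
				b = nil
				continue
			}
			fmt.Println("b:", v)
		}
	}
}
```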

A factory method can bind a goroutine to a channel. Since the channel itself is a concurrency-safe queue, it can serve as an ID generator, an object pool, and so on.
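A minimal ID-generator sketch of this factory pattern (newIDGen is an illustrative name, not a standard API); the channel's internal locking makes it safe for many consumers:

```go
package main

import "fmt"

// newIDGen returns a channel that yields unique, increasing IDs.
func newIDGen() <-chan int {
	ids := make(chan int)
	go func() {
		for i := 0; ; i++ {
			ids <- i
		}
	}()
	return ids
}

func main() {
	ids := newIDGen()
	fmt.Println(<-ids, <-ids, <-ids) // 0 1 2
}
```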

You can also use a channel to implement a semaphore.
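A minimal counting-semaphore sketch: the buffer capacity bounds how many tasks hold a slot at once.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	sem := make(chan struct{}, 2) // at most 2 tasks run concurrently
	var wg sync.WaitGroup

	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it
			fmt.Println("running", id)
		}(i)
	}
	wg.Wait()
}
```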

The standard library time provides timeout and tick channel implementations.
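For example, time.After returns a channel that fires once after the given duration, which combines naturally with select for timeouts:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan int)

	select {
	case v := <-ch:
		fmt.Println("got", v)
	case <-time.After(50 * time.Millisecond):
		fmt.Println("timeout") // fires, since nothing ever sends on ch
	}
}
```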

Performance: pack the data destined for a channel to reduce the number of transfers, which improves performance effectively. Internally, the channel queue still uses a lock-based synchronization mechanism, so obtaining more data per operation (batch processing) mitigates the cost of frequent locking.

While this consumes more memory, the performance gain is noticeable. Note that changing the batch from an array to a slice would cause more memory allocations.

Channels can cause goroutine leaks: a goroutine blocked in a send or receive state that is never woken up. The garbage collector does not reclaim such goroutines, so they sleep indefinitely in the wait queue, leaking resources.

Channels are not a replacement for locks; the two have different uses. Channels tend to solve concurrency at the logical, architectural level, while locks protect data within a local scope.

The standard library sync provides mutexes and read-write locks, as well as atomic operations.

When a mutex is embedded as an anonymous field, the related methods must be implemented with pointer receivers, otherwise the lock is copied along with the value and its mutual-exclusion guarantee fails.
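A minimal sketch of the embedded-mutex pattern; with a value receiver, inc would lock a copy of the mutex and the counter would race:

```go
package main

import (
	"fmt"
	"sync"
)

type counter struct {
	sync.Mutex // anonymous field: Lock/Unlock are promoted
	n          int
}

// Pointer receiver: the shared mutex itself is locked, not a copy.
func (c *counter) inc() {
	c.Lock()
	defer c.Unlock()
	c.n++
}

func main() {
	var c counter
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); c.inc() }()
	}
	wg.Wait()
	fmt.Println(c.n) // 100
}
```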

Keep mutex lock granularity as small as possible and release locks early.

Mutexes are not reentrant; locking one twice, even within the same goroutine, causes a deadlock.

Suggestions:

1. Avoid defer Unlock when performance requirements are high.

2. For read-heavy concurrent access, sync.RWMutex performs better.

3. For protecting a single value, try atomic operations.

4. Test rigorously, and enable the data race detector (-race) whenever possible.

