Kotlin Minimalist Tutorial: Chapter 9 Lightweight Threading: Coroutines

Source: Internet
Author: User
Tags: garbage collection, sleep function, thread class, jcenter

Original link: https://github.com/EasyKotlin

Among the common concurrency models, multi-process, multi-threading, and distributed computing are the most widespread, but in recent years a number of languages have added support for coroutine-based concurrency, either as a first-class language feature or as a library. Typical examples include Lua, Python, Perl, Go, and so on, which provide first-class support for coroutines.

Similarly, Kotlin also supports coroutines.

In this chapter we mainly introduce: what coroutines are, examples of using coroutines, the implementation of suspending functions, channels and pipelines, the coroutines library, and so on.

9.1 Introduction to coroutines

From the perspective of hardware development, we have gone from the initial single-core single-CPU, to single-core multi-CPU, to multi-core multi-CPU machines, which seems to be approaching a limit, yet the performance of a single core keeps improving. If we divide programs into IO-intensive applications and CPU-intensive applications, the two have evolved as follows:

IO-intensive applications: multi-process -> multi-threading -> event-driven -> coroutines
CPU-intensive applications: multi-process -> multi-threading

If multi-process corresponds to multiple CPUs, and multi-threading corresponds to multi-core CPUs, then event-driven programming and coroutines correspond to fully exploiting the potential of a single CPU core whose performance keeps improving.

Typical APIs with performance bottlenecks (such as network IO, file IO, CPU- or GPU-intensive tasks, and so on) require callers to block until they complete before moving on to the next step. Later, asynchronous callbacks were used to achieve non-blocking behavior, but asynchronous callback code is not easy to write.

Coroutines provide a way to avoid blocking threads, replacing thread blocking with a simpler, more controllable operation: coroutine suspension.

The main purpose of coroutines is to let complex code that used to be written in the "asynchronous + callback" style be written in a seemingly synchronous way (a further abstraction over thread operations). This lets us organize code logic that was previously scattered across different contexts using a sequential mental model, without having to deal with complex state-synchronization problems.
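To make the contrast concrete, here is a minimal sketch (the query functions are hypothetical stubs invented for illustration, not part of any real API): the callback version splits one piece of logic across nested callbacks, while the coroutine version reads top to bottom.

// Hypothetical stubs, only so the sketch is self-contained:
fun queryUserAsync(id: Long, callback: (String) -> Unit) = callback("user$id")
fun queryAvatarAsync(user: String, callback: (String) -> Unit) = callback("$user.png")
suspend fun queryUser(id: Long): String = "user$id"
suspend fun queryAvatar(user: String): String = "$user.png"

// Asynchronous + callback style: the logic is scattered across nested callbacks
fun loadAvatarWithCallbacks(id: Long, callback: (String) -> Unit) {
    queryUserAsync(id) { user ->
        queryAvatarAsync(user) { avatar ->
            callback(avatar)
        }
    }
}

// Coroutine style: the same logic, written as if it were synchronous
suspend fun loadAvatar(id: Long): String {
    val user = queryUser(id)
    return queryAvatar(user)
}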

The earliest description of coroutines was given by Melvin Conway in 1958: "subroutines who act as the master program" (subroutines that behave like the main program). He later gave the following definition in his doctoral dissertation:

The local data of a coroutine is maintained across successive calls (the values of data local to a coroutine persist between successive calls).

When control leaves the coroutine, its execution is suspended; when control later re-enters the coroutine, it continues only from the place where it was last suspended (the execution of a coroutine is suspended as control leaves it, only to carry on where it left off when control re-enters the coroutine at some later stage).

An implementation of coroutines must maintain a set of local state and ensure that this state is not changed before the coroutine is re-entered, so that it can successfully resume from the previous position.
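Kotlin 1.1's standard library already demonstrates this property with buildSequence from the kotlin.coroutines.experimental package. In the small sketch below (our own illustration, not from the original text), the local variables a and b persist across each suspension at yield, and execution resumes right after the last yield when the next value is requested.

import kotlin.coroutines.experimental.buildSequence

val fibonacci = buildSequence {
    var a = 0
    var b = 1
    while (true) {
        yield(a)          // suspend here; a and b are preserved
        val next = a + b
        a = b
        b = next
    }
}

fun main(args: Array<String>) {
    println(fibonacci.take(10).toList()) // [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
}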

Coroutines can be used to solve many problems, such as nested callbacks in Node.js, and they underlie the concurrency model implementations of Erlang and Golang.

In essence, a coroutine is a lightweight, user-mode thread. It is started by a coroutine builder (the launch coroutine builder).

Below we will learn about coroutines through hands-on code.

9.1.1 Setting up the coroutine project

First, we create a new Kotlin Gradle project. After generating the standard Gradle project, configure the kotlinx-coroutines-core dependency in the build file build.gradle.

Add the dependency:

compile 'org.jetbrains.kotlinx:kotlinx-coroutines-core:0.16'

kotlinx-coroutines also provides the following modules:

compile group: 'org.jetbrains.kotlinx', name: 'kotlinx-coroutines-jdk8', version: '0.16'
compile group: 'org.jetbrains.kotlinx', name: 'kotlinx-coroutines-nio', version: '0.16'
compile group: 'org.jetbrains.kotlinx', name: 'kotlinx-coroutines-reactive', version: '0.16'

We use the latest Kotlin version, 1.1.3-2:

buildscript {
    ext.kotlin_version = '1.1.3-2'
    ...
    dependencies {
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
    }
}

Here, kotlin-gradle-plugin is the plugin that integrates Kotlin with Gradle.

Also, configure the jcenter repository:

repositories {
    jcenter()
}
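Putting these pieces together, a minimal build.gradle for this chapter might look like the following sketch (the overall layout is a typical Kotlin 1.1 Gradle setup and is our assumption, not the book's exact file):

buildscript {
    ext.kotlin_version = '1.1.3-2'
    repositories {
        jcenter()
    }
    dependencies {
        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
    }
}

apply plugin: 'kotlin'

repositories {
    jcenter()
}

dependencies {
    compile "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
    compile 'org.jetbrains.kotlinx:kotlinx-coroutines-core:0.16'
}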
9.1.2 A simple coroutine example

Let's take a look at a simple coroutine example.

Run the following code:

fun firstCoroutineDemo0() {
    launch(CommonPool) {
        delay(3000L, TimeUnit.MILLISECONDS)
        println("Hello,")
    }
    println("world!")
    Thread.sleep(5000L)
}

You will find the output:

world!
Hello,

The above code:

launch(CommonPool) {
    delay(3000L, TimeUnit.MILLISECONDS)
    println("Hello,")
}

is equivalent to:

launch(CommonPool, CoroutineStart.DEFAULT, {
    delay(3000L, TimeUnit.MILLISECONDS)
    println("Hello,")
})
9.1.3 The launch function

This launch function is defined in the kotlinx.coroutines.experimental package:

public fun launch(
    context: CoroutineContext,
    start: CoroutineStart = CoroutineStart.DEFAULT,
    block: suspend CoroutineScope.() -> Unit
): Job {
    val newContext = newCoroutineContext(context)
    val coroutine = if (start.isLazy)
        LazyStandaloneCoroutine(newContext, block) else
        StandaloneCoroutine(newContext, active = true)
    coroutine.initParentJob(context[Job])
    start(block, coroutine, coroutine)
    return coroutine
}

The launch function has three parameters: context, start, and block, described as follows:

Parameter    Description
context      the coroutine context
start        the coroutine start option
block        the code block the coroutine actually executes; it must be a suspend-modified suspending function

The launch function returns a Job. A Job represents a background task created by the coroutine builder and holds a reference to that coroutine. The Job interface actually inherits from the CoroutineContext type. A Job has three states:

State                               isActive    isCompleted
New (optional initial state)        false       false
Active (default initial state)      true        false
Completed (final state)             false       true
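A small sketch (our own illustration, not from the original text) that walks a Job through all three states, using the optional lazy start mode selected by the start parameter; it assumes the same kotlinx-coroutines 0.16 imports as the rest of this chapter:

fun jobStatesDemo() = runBlocking<Unit> {
    // start = CoroutineStart.LAZY creates the coroutine in the New state
    val job = launch(CommonPool, CoroutineStart.LAZY) {
        delay(500L)
    }
    println("New:       isActive=${job.isActive}, isCompleted=${job.isCompleted}") // false, false
    job.start()
    println("Active:    isActive=${job.isActive}, isCompleted=${job.isCompleted}") // true, false
    job.join()
    println("Completed: isActive=${job.isActive}, isCompleted=${job.isCompleted}") // false, true
}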

That is, the launch function starts a new coroutine as a background task without blocking the current thread, and returns a Job object as a reference to that coroutine.

Also, the delay() function used here is similar in purpose to Thread.sleep(), but it is better: it does not block a thread, it only suspends the coroutine itself. While the coroutine is waiting, the thread is returned to the pool; when the wait is over, the coroutine resumes on a free thread in the pool.

9.1.4 CommonPool: the shared thread pool

Let's take a closer look at this piece of code: launch(CommonPool) {...}.

First, CommonPool represents a shared thread pool; its primary role is to dispatch the threads that execute compute-intensive tasks. It is implemented with APIs from the java.util.concurrent package. It first tries to create a java.util.concurrent.ForkJoinPool (ForkJoinPool is an ExecutorService that can execute ForkJoinTasks; it uses a work-stealing scheme: all threads in the pool try to execute subtasks created by other threads, so threads idle less and efficiency is higher). If that is unavailable, it uses java.util.concurrent.Executors to create an ordinary thread pool: Executors.newFixedThreadPool. The relevant code is in kotlinx/coroutines/experimental/CommonPool.kt:

private fun createPool(): ExecutorService {
    val fjpClass = Try { Class.forName("java.util.concurrent.ForkJoinPool") }
        ?: return createPlainPool()
    if (!usePrivatePool) {
        Try { fjpClass.getMethod("commonPool")?.invoke(null) as? ExecutorService }
            ?.let { return it }
    }
    Try { fjpClass.getConstructor(Int::class.java).newInstance(defaultParallelism()) as? ExecutorService }
        ?.let { return it }
    return createPlainPool()
}

private fun createPlainPool(): ExecutorService {
    val threadId = AtomicInteger()
    return Executors.newFixedThreadPool(defaultParallelism()) {
        Thread(it, "CommonPool-worker-${threadId.incrementAndGet()}").apply { isDaemon = true }
    }
}

This CommonPool object is a subtype of CoroutineContext. Its type hierarchy is roughly as follows:
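The original diagram is not reproduced here; paraphrased and simplified from the kotlinx-coroutines 0.16 sources (these are not the complete declarations), the hierarchy is roughly:

// object CommonPool : CoroutineDispatcher()
// abstract class CoroutineDispatcher :
//     AbstractCoroutineContextElement(ContinuationInterceptor), ContinuationInterceptor
// interface ContinuationInterceptor : CoroutineContext.Element
// interface CoroutineContext.Element : CoroutineContext   (Element is nested inside CoroutineContext)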

9.1.5 Suspending functions

The delay(3000L, TimeUnit.MILLISECONDS) call in the code block is a function modified with the suspend keyword; we call such a function a suspending function. Suspending functions can only be called from within coroutine code; ordinary non-coroutine code cannot call them.

A suspending function may only be called by a coroutine or by another suspending function. For example, we can call a suspending function inside coroutine code, as in the following example:

suspend fun runCoroutineDemo() {
    run(CommonPool) {
        delay(3000L, TimeUnit.MILLISECONDS)
        println("suspend,")
    }
    println("runCoroutineDemo!")
    Thread.sleep(5000L)
}

fun callSuspendFun() {
    launch(CommonPool) {
        runCoroutineDemo()
    }
}

If we use Java's Thread class to write code with similar functionality, it might look like this:

fun threadDemo0() {
    Thread({
        Thread.sleep(3000L)
        println("Hello,")
    }).start()

    println("world!")
    Thread.sleep(5000L)
}

The output is also:

world!
Hello,

In addition, we cannot start coroutine code with a thread. For example, the following does not compile:

/**
 * Error example: calling coroutine code from a thread does not compile
 */
fun threadCoroutineDemo() {
    Thread({
        delay(3000L, TimeUnit.MILLISECONDS) // error: suspend functions are only allowed to be called from a coroutine or another suspend function
        println("Hello,")
    })
    println("world!")
    Thread.sleep(5000L)
}
9.2 Bridging blocking and non-blocking

In the example above, we used the non-blocking delay function together with the blocking Thread.sleep function, which makes the code hard to read. Let's implement the same blocking + non-blocking example (without Thread) using pure Kotlin coroutine code.

9.2.1 The runBlocking function

Kotlin provides the runBlocking function for exactly this purpose, for example to wrap the main function:

fun main(args: Array<String>) = runBlocking<Unit> {
    // the main coroutine
    println("${format(Date())}: T0")

    // launch a coroutine in the common thread pool
    launch(CommonPool) {
        println("${format(Date())}: T1")
        delay(3000L)
        println("${format(Date())}: T2 Hello,")
    }
    println("${format(Date())}: T3 world!") // while the child coroutine is delayed, the main coroutine keeps running

    delay(5000L)

    println("${format(Date())}: T4")
}

The output:

14:37:59.640: T0
14:37:59.721: T1
14:37:59.721: T3 world!
14:38:02.763: T2 Hello,
14:38:04.738: T4

We can see that the result is the same as before, but we no longer use Thread.sleep; we only use the non-blocking delay function. If the main function were not declared as = runBlocking<Unit>, we could not call delay(5000L) inside its body.
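If you prefer an ordinary block-bodied main, an equivalent form (a sketch with the same behavior) is to call runBlocking explicitly inside it:

fun main(args: Array<String>) {
    runBlocking<Unit> {
        // inside runBlocking we are in a coroutine, so calling delay is allowed
        delay(5000L)
    }
}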

If the blocked thread is interrupted, runBlocking throws an InterruptedException.
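A small sketch of that behavior (our own illustration, not code from the original text): a worker thread blocks inside runBlocking, and interrupting that thread surfaces as an InterruptedException.

fun interruptRunBlockingDemo() {
    val worker = Thread({
        try {
            runBlocking { delay(10_000L) } // blocks this thread for up to 10 s
        } catch (e: InterruptedException) {
            println("runBlocking was interrupted")
        }
    })
    worker.start()
    Thread.sleep(1000L)
    worker.interrupt() // cancels the coroutine and makes runBlocking throw
    worker.join()
}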

The runBlocking function is not meant to be used as a general-purpose coroutine builder; it is designed mainly to bridge ordinary blocking code and non-blocking code written in suspending style, for example in the main function or in test code:

@RunWith(JUnit4::class)
class RunBlockingTest {

    @Test fun testRunBlocking() = runBlocking<Unit> {
        // here we can call any suspend fun
        launch(CommonPool) {
            delay(3000L)
        }
        delay(5000L)
    }
}
9.3 Waiting for a task to finish executing

Let's look at a piece of code first:

fun firstCoroutineDemo() {
    launch(CommonPool) {
        delay(3000L, TimeUnit.MILLISECONDS)
        println("[firstCoroutineDemo] Hello, 1")
    }

    launch(CommonPool, CoroutineStart.DEFAULT, {
        delay(3000L, TimeUnit.MILLISECONDS)
        println("[firstCoroutineDemo] Hello, 2")
    })
    println("[firstCoroutineDemo] world!")
}

Running this code, we find that only the following is output:

[firstCoroutineDemo] world!

Why is that?

To understand what happens internally when the above code executes, let's print some logs:

fun testJoinCoroutine() = runBlocking<Unit> {
    // Start a coroutine
    val c1 = launch(CommonPool) {
        println("C1 Thread: ${Thread.currentThread()}")
        println("C1 Start")
        delay(3000L)
        println("C1 world! 1")
    }

    val c2 = launch(CommonPool) {
        println("C2 Thread: ${Thread.currentThread()}")
        println("C2 Start")
        delay(5000L)
        println("C2 world! 2")
    }

    println("Main Thread: ${Thread.currentThread()}")
    println("Hello,")
    println("Hi,")
    println("C1 is active: ${c1.isActive}  ${c1.isCompleted}")
    println("C2 is active: ${c2.isActive}  ${c2.isCompleted}")
}

Run again:

C1 Thread: Thread[ForkJoinPool.commonPool-worker-1,5,main]
C1 Start
C2 Thread: Thread[ForkJoinPool.commonPool-worker-2,5,main]
C2 Start
Main Thread: Thread[main,5,main]
Hello,
Hi,
C1 is active: true  false
C2 is active: true  false

As we can see, the C1 and C2 code did start executing, on worker threads from the ForkJoinPool.commonPool thread pool. However, right up to our last print statements, both isCompleted values are still false, which shows that when the main thread ends (and with it the Java process running the main function), our C1 and C2 code has not yet finished.

So we can conclude that the main thread running the main() function needs to wait for our coroutines to finish; otherwise the program ends before Hello, 1 and Hello, 2 are ever printed.

How do we make these two coroutines participate in the main thread's timing? We can use join to make the main thread wait until a coroutine finishes executing, as in the following code:

fun testJoinCoroutine() = runBlocking<Unit> {
    // Start a coroutine
    val c1 = launch(CommonPool) {
        println("C1 Thread: ${Thread.currentThread()}")
        println("C1 Start")
        delay(3000L)
        println("C1 world! 1")
    }

    val c2 = launch(CommonPool) {
        println("C2 Thread: ${Thread.currentThread()}")
        println("C2 Start")
        delay(5000L)
        println("C2 world! 2")
    }

    println("Main Thread: ${Thread.currentThread()}")
    println("Hello,")
    println("C1 is active: ${c1.isActive} isCompleted: ${c1.isCompleted}")
    println("C2 is active: ${c2.isActive} isCompleted: ${c2.isCompleted}")

    c1.join() // the main thread will wait until the child coroutine completes
    println("Hi,")
    println("C1 is active: ${c1.isActive} isCompleted: ${c1.isCompleted}")
    println("C2 is active: ${c2.isActive} isCompleted: ${c2.isCompleted}")

    c2.join() // the main thread will wait until the child coroutine completes
    println("C1 is active: ${c1.isActive} isCompleted: ${c1.isCompleted}")
    println("C2 is active: ${c2.isActive} isCompleted: ${c2.isCompleted}")
}

The output will be:

C1 Thread: Thread[ForkJoinPool.commonPool-worker-1,5,main]
C1 Start
C2 Thread: Thread[ForkJoinPool.commonPool-worker-2,5,main]
C2 Start
Main Thread: Thread[main,5,main]
Hello,
C1 is active: true  isCompleted: false
C2 is active: true  isCompleted: false
C1 world! 1
Hi,
C1 is active: false  isCompleted: true
C2 is active: true  isCompleted: false
C2 world! 2
C1 is active: false  isCompleted: true
C2 is active: false  isCompleted: true

Usually, with good code style, we put independent logic into separate functions. We can refactor the above code as follows:

fun testJoinCoroutine2() = runBlocking<Unit> {
    // Start a coroutine
    val c1 = launch(CommonPool) {
        fc1()
    }

    val c2 = launch(CommonPool) {
        fc2()
    }
    ...
}

private suspend fun fc2() {
    println("C2 Thread: ${Thread.currentThread()}")
    println("C2 Start")
    delay(5000L)
    println("C2 world! 2")
}

private suspend fun fc1() {
    println("C1 Thread: ${Thread.currentThread()}")
    println("C1 Start")
    delay(3000L)
    println("C1 world! 1")
}

As you can see, the fc1 and fc2 functions here are suspend funs.

9.4 Coroutines are lightweight

Run the following code directly:

fun testThread() {
    val jobs = List(100_000) {
        Thread({
            Thread.sleep(1000L)
            print(".")
        })
    }
    jobs.forEach { it.start() }
    jobs.forEach { it.join() }
}

We should see the following error output:

Exception in thread "main" java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:714)
    at com.easy.kotlin.LightWeightCoroutinesDemo.testThread(LightWeightCoroutinesDemo.kt:30)
    at com.easy.kotlin.LightWeightCoroutinesDemoKt.main(LightWeightCoroutinesDemo.kt:40)
...........................................................................................

We started 100,000 threads here and joined them all to print "."; unsurprisingly, we got a java.lang.OutOfMemoryError.

The underlying reason for this error is that we created too many threads, and the number of threads that can be created is limited. In Java, when we create a thread, the virtual machine creates a Thread object in JVM memory and also creates an operating-system thread, and the memory for that system thread does not come from JVM memory but from the remaining memory in the system (MaxProcessMemory - JVMMemory - ReservedOsMemory). The number of threads that can be created is calculated as follows:

Number of threads = (MaxProcessMemory - JVMMemory - ReservedOsMemory) / ThreadStackSize

Among them, the parameters are described as follows:

Parameter            Description
MaxProcessMemory     the maximum memory of the process
JVMMemory            the JVM memory
ReservedOsMemory     reserved operating-system memory
ThreadStackSize      the size of a thread stack
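As a rough illustration (the numbers below are invented for the example, not measurements), with 2 GB of usable process memory, a 1 GB JVM heap, 256 MB reserved for the OS, and a 512 KB thread stack, only about 1,536 threads fit, far fewer than the 100,000 we tried to start:

fun main(args: Array<String>) {
    val maxProcessMemory = 2048L * 1024 * 1024 // 2 GB, assumed
    val jvmMemory        = 1024L * 1024 * 1024 // 1 GB, assumed
    val reservedOsMemory =  256L * 1024 * 1024 // 256 MB, assumed
    val threadStackSize  =  512L * 1024        // 512 KB per thread, assumed

    val maxThreads = (maxProcessMemory - jvmMemory - reservedOsMemory) / threadStackSize
    println("Roughly $maxThreads threads")     // ~1536
}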

We usually work around this problem temporarily either by reducing the thread stack size or by reducing the heap or the initial PermGen allocation.

With coroutines, the situation is completely different. Let's look at coroutine code that implements the same logic:

fun testLightWeightCoroutine() = runBlocking {
    val jobs = List(100_000) {
        // create a lot of coroutines and list their jobs
        launch(CommonPool) {
            delay(1000L)
            print(".")
        }
    }
    jobs.forEach { it.join() } // wait for all jobs to complete
}

Run the above code and we'll see the output:

start: 21:22:28.913
..................... (100,000 dots)
end: 21:22:30.956

The above program executes correctly, in roughly 2 seconds.

9.5 Coroutines vs daemon threads

There are two types of threads in Java: user threads and daemon threads.

A daemon thread is a thread that provides a general service in the background while the program is running, for example the garbage-collection thread; such a thread is not an indispensable part of the program. Therefore, when all non-daemon threads have ended, the program terminates and kills all remaining daemon threads in the process.

Let's take a look at a daemon thread example:

fun testDaemon2() {
    val t = Thread({
        repeat(100) { i ->
            println("I'm sleeping $i ...")
            Thread.sleep(500L)
        }
    })
    t.isDaemon = true // must be set before starting the thread, otherwise: Exception in thread "main" java.lang.IllegalThreadStateException
    t.start()
    Thread.sleep(2000L) // just quit after the delay
}

This code starts a thread and sets it as a daemon thread. Inside the thread, a message is printed repeatedly, 100 times at 500 ms intervals. The external main thread sleeps for 2 seconds.
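For comparison, a coroutine launched in CommonPool behaves much like a daemon thread: if nothing waits for it, the process exits and the remaining iterations are never printed. The following sketch mirrors the thread example above (our own illustration, written in the style of this chapter's examples):

fun testDaemonCoroutine() = runBlocking<Unit> {
    launch(CommonPool) {
        repeat(100) { i ->
            println("I'm sleeping $i ...")
            delay(500L)
        }
    }
    delay(2000L) // just quit after the delay; the coroutine does not keep the process alive
}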
