In-depth Asyncio (11): Graceful Startup and Shutdown

Source: Internet
Author: User
Tags: sigint, signal, signal handler, stream api

Graceful Startup and Shutdown

Most asyncio-based programs are long-running, network-based applications, and handling startup and shutdown correctly for such applications is surprisingly complex.

Startup is relatively simple: the usual practice is to create a task and then call loop.run_forever(), as in the QuickStart example in chapter three.

One exception is starting a listening server, which takes two stages:

    1. Create a coroutine for starting the server, and call run_until_complete() to initialize and start the server itself;
    2. Call loop.run_forever() to run the main part of the application.

Apart from that exception, startup is usually very simple; for the server case, see the official example (the echo server used later in this article).
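As a quick illustration of those two stages, here is a minimal sketch; the handle() callback is just a placeholder that immediately closes the connection.

import asyncio

async def handle(reader, writer):
    writer.close()    # placeholder handler: close the connection right away

loop = asyncio.get_event_loop()
# Stage 1: drive the start_server() coroutine to initialize and start the server
server = loop.run_until_complete(asyncio.start_server(handle, '127.0.0.1', 8888))
# Stage 2: hand control to the event loop for the lifetime of the application
loop.run_forever()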

Shutdown is much more involved. The run_forever() call blocks the main thread; when the loop is stopped during shutdown, the call returns and the code after it executes, and that code must:

    1. Collect all of the task objects that have not yet finished;
    2. Gather them into a group task;
    3. Cancel the group task (which requires catching CancelledError);
    4. Use run_until_complete() to wait for the cancellation to finish.

When writing asynchronous code, beginners constantly find themselves chasing away shutdown error messages such as a task being destroyed while still pending; the main cause is skipping one or more of the steps above. An example illustrates this.

import asyncio

async def f(delay):
    await asyncio.sleep(delay)

loop = asyncio.get_event_loop()
t1 = loop.create_task(f(1))    # task 1 runs for 1 second
t2 = loop.create_task(f(2))    # task 2 runs for 2 seconds
loop.run_until_complete(t1)    # only task 1 is run to completion
loop.close()
λ python3 taskwaring.py
Task was destroyed but it is pending!
task: <Task pending coro=<f() running at taskwaring.py:4> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x0312D6D0>()]>>

This error means that some tasks had not finished when the loop was closed, which is exactly why the canonical shutdown procedure is to collect all of the tasks into a group, cancel them, and wait for the cancellation to complete before closing the loop.
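As a minimal sketch, assuming the same pre-3.8 asyncio API used throughout this article (Task.all_tasks() and an explicit loop), here is the script above corrected with that four-step shutdown:

import asyncio

async def f(delay):
    await asyncio.sleep(delay)

loop = asyncio.get_event_loop()
t1 = loop.create_task(f(1))
t2 = loop.create_task(f(2))
loop.run_until_complete(t1)                               # task 2 is still pending here
pending = asyncio.Task.all_tasks()                        # 1. collect the unfinished tasks
group = asyncio.gather(*pending, return_exceptions=True)  # 2. gather them into a group
group.cancel()                                            # 3. cancel the group
loop.run_until_complete(group)                            # 4. wait for the cancellation to finish
loop.close()                                              # no warning this time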

Let's look at an example that is more detailed than the QuickStart code, this time using the echo server from the official documentation as the server, together with a client, to dig deeper.

from asyncio import (get_event_loop, start_server, CancelledError,
                     StreamReader, StreamWriter, Task, gather)

async def echo(reader: StreamReader, writer: StreamWriter):  # 1
    print('New connection.')
    try:
        while True:  # 2
            data: bytes = await reader.readline()  # 3
            if data in [b'', b'quit']:
                break
            writer.write(data.upper())  # 4
            await writer.drain()
        print('Leaving connection.')
    except CancelledError:  # 5
        writer.write_eof()
        print('Cancelled')
    finally:
        writer.close()

loop = get_event_loop()
coro = start_server(echo, '127.0.0.1', 8888, loop=loop)  # 6
server = loop.run_until_complete(coro)  # 7
try:
    loop.run_forever()  # 8
except KeyboardInterrupt:
    print('Shutting down!')

server.close()  # 9
loop.run_until_complete(server.wait_closed())  # 10
tasks = Task.all_tasks()  # 11
group = gather(*tasks, return_exceptions=True)  # 12
group.cancel()
loop.run_until_complete(group)  # 13
loop.close()
    1. This coroutine function is invoked for every connection that is established; it uses the Stream API;

    2. To keep the connection alive, an infinite loop waits for incoming messages;

    3. Read a line of data sent by the client;

    4. Echo the message back with every character upper-cased;

    5. Cancellation is handled here, performing the cleanup needed when the connection is torn down;

    6. This is where the program starts. start_server() returns a coroutine, which must be driven by run_until_complete();

    7. Run the coroutine to start the TCP server;

    8. The listening part of the program is now running: a coroutine executing echo() is created for every TCP connection to the server, and only a KeyboardInterrupt exception can break out of the loop;

    9. When execution reaches this point, shutdown has begun. The first step is to stop accepting new connections by calling server.close();

    10. The second step is to call server.wait_closed() to close the socket that is still listening for new connections; connections that are already active are not affected;

    11. Now shut down the tasks: collect all tasks that are still pending;

    12. Gather the tasks into a group and call its cancel() method; the return_exceptions argument is discussed below;

    13. Run the group until the cancellations have completed.

One thing to note: if you catch CancelledError inside a coroutine, do not create any new coroutines or tasks inside the exception-handling code. all_tasks() has already been called and cannot see tasks created during the run_until_complete() shutdown phase, so they will never be awaited.
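A minimal sketch of that pitfall; the send_goodbye() coroutine here is hypothetical:

import asyncio

async def send_goodbye():          # hypothetical cleanup coroutine
    await asyncio.sleep(1)

async def worker():
    try:
        await asyncio.sleep(3600)
    except asyncio.CancelledError:
        # BAD: this task is created during the shutdown run_until_complete()
        # phase, after all_tasks() has already been collected, so nothing ever
        # awaits it and it triggers "Task was destroyed but it is pending!" at exit.
        asyncio.get_event_loop().create_task(send_goodbye())
        raise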

What does the return_exceptions=True parameter do?

The gather() method has a keyword parameter that defaults to return_exceptions=False, and leaving exception handling at that default is problematic during shutdown. It is hard to explain in one line, so here is the chain of facts:
1. run_until_complete() drives a future object; during shutdown, it drives the future returned by gather();
2. If that future raises an exception, the exception propagates upwards and the loop stops;
3. If run_until_complete() is driving a group future, any exception raised in any of the group's subtasks propagates upwards, including CancelledError;
4. If some subtasks handle CancelledError and others do not, the unhandled exceptions still stop the loop, which means the loop stops before all the tasks have finished;
5. During shutdown we do not want that behaviour; we want every task in the group to finish as quickly as possible, regardless of whether some of them raise exceptions;
6. With gather(*, return_exceptions=True), the group treats exceptions in its subtasks as return values, so they do not interrupt run_until_complete().

One drawback of capturing exceptions this way is that some exceptions are swallowed inside the group rather than raised, which makes failures harder to notice; you have to inspect the results and do the logging yourself.

import asyncio

async def f(delay):
    await asyncio.sleep(1 / delay)    # passing 0 here is nasty: it raises ZeroDivisionError
    return delay

loop = asyncio.get_event_loop()
for i in range(10):
    loop.create_task(f(i))
pending = asyncio.Task.all_tasks()
group = asyncio.gather(*pending, return_exceptions=True)
results = loop.run_until_complete(group)
print(f'Results: {results}')
loop.close()
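Since failures now come back as ordinary values, a short check over the results is needed to surface them. A sketch that continues the script above, using its results list:

# Inspect the gathered results and report anything that is an exception
for result in results:
    if isinstance(result, Exception):
        print(f'Task failed with: {result!r}')    # real code would log this
    else:
        print(f'Task returned: {result}')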

If the parameter is not set, the exception propagates upwards, the loop stops, and the other tasks never get to finish. Exiting safely is one of the hardest problems in network programming, and that is just as true for asyncio.

Signals

In the previous example we showed how a KeyboardInterrupt is used to exit the loop: it ends the blocking run_forever() call and allows the subsequent code to execute.

A KeyboardInterrupt exception corresponds to the SIGINT signal, but the stop signal most commonly used for network services is actually SIGTERM, which is also the default signal sent by the kill command in a UNIX shell.

On UNIX, the kill command really just sends a signal to a process. Invoked without arguments it sends TERM, which lets the process exit cleanly or choose to ignore the signal. Ignoring it is usually a bad idea, because if the process does not exit, the next step is typically to send KILL to force termination, and then your program ends with no control over its own cleanup.
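As a rough, UNIX-only sketch of what kill does (using a throwaway sleep subprocess), the same signals can be sent from Python with os.kill():

import os
import signal
import subprocess

proc = subprocess.Popen(['sleep', '60'])      # throwaway child process
os.kill(proc.pid, signal.SIGTERM)             # what plain `kill <pid>` sends: a polite request to exit
# os.kill(proc.pid, signal.SIGKILL)           # what `kill -9 <pid>` sends: cannot be caught or ignored
proc.wait()
print('child exited with code', proc.returncode)   # -15 means it was terminated by SIGTERM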

Asyncio has native support for handling process signals, but signal handling in general is a deep topic (not specific to asyncio), so this article does not go into it in depth and only covers a few common signals. Consider the following example:

# shell_signal01.py
import asyncio

async def main():
    # The body of the application; a simple infinite loop stands in for real work
    while True:
        print('<Your app is running>')
        await asyncio.sleep(1)

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.create_task(main())    # as in the earlier examples, add the coroutine to the loop
    try:
        loop.run_forever()
    except KeyboardInterrupt:   # in this example only Ctrl-C stops the loop; then clean up as before
        print('<Got signal: SIGINT, shutting down.>')
    tasks = asyncio.Task.all_tasks()
    group = asyncio.gather(*tasks, return_exceptions=True)
    group.cancel()
    loop.run_until_complete(group)
    loop.close()

That was the simple case; now consider some more demanding requirements:
1. Production code needs to treat both SIGINT and SIGTERM as stop signals;
2. The application's main() coroutine needs to handle CancelledError, and the cleanup code inside that handler takes a short while to run (for example, a batch of network connections has to be closed);
3. Receiving the stop signal several times must not cause problems; after the first stop signal, subsequent signals are ignored.

Asyncio provides APIs with enough granularity to handle all of these requirements.

# shell_signal02.py
import asyncio
from signal import SIGINT, SIGTERM    # import the signal values from the standard library

async def main():
    try:
        while True:
            print('<Your app is running>')
            await asyncio.sleep(1)
    except asyncio.CancelledError:  # 1
        for i in range(3):
            print('<Your app is shutting down...>')
            await asyncio.sleep(1)

def handler(sig):   # 2
    loop.stop()     # 3
    print(f'Got signal: {sig}, shutting down.')
    loop.remove_signal_handler(SIGTERM)    # 4
    loop.add_signal_handler(SIGINT, lambda: None)   # 5

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    for sig in (SIGINT, SIGTERM):   # 6
        loop.add_signal_handler(sig, handler, sig)
    loop.create_task(main())
    loop.run_forever()
    tasks = asyncio.Task.all_tasks()
    group = asyncio.gather(*tasks, return_exceptions=True)
    group.cancel()
    loop.run_until_complete(group)
    loop.close()
    1. The shutdown work is now handled inside the coroutine: when group.cancel() delivers the cancellation during the run_until_complete() phase of closing the loop, main() keeps running for a short while longer;

    2. This is the callback invoked when a signal is received; it is registered on the loop with add_signal_handler();

    3. The first thing the callback does is stop the loop, which lets the shutdown code after run_forever() start executing;

    4. At this point shutdown is already under way, so remove the SIGTERM handler to ignore subsequent stop signals; otherwise another signal would cut the shutdown code short;

    5. The idea for SIGINT is similar, but its handler cannot simply be removed, because the default SIGINT handler is what raises KeyboardInterrupt; instead the SIGINT handler is replaced with a no-op;

    6. Both signals are wired up to the handler() callback here; registering a handler for SIGINT overrides the default KeyboardInterrupt behaviour.

Waiting for the executor during shutdown

The QuickStart chapter had a block of code that made a blocking sleep() call, and the question of what happens when the blocking call runs for longer than the loop was left open. It is time to answer it: without intervention, you get a series of errors.

import time
import asyncio

async def main():
    print(f'{time.ctime()} Hello!')
    await asyncio.sleep(1.0)
    print(f'{time.ctime()} Goodbye!')
    loop.stop()

def blocking():
    time.sleep(1.5)
    print(f"{time.ctime()} Hello from a thread!")

loop = asyncio.get_event_loop()
loop.create_task(main())
loop.run_in_executor(None, blocking)
loop.run_forever()

tasks = asyncio.Task.all_tasks(loop=loop)
group = asyncio.gather(*tasks, return_exceptions=True)
loop.run_until_complete(group)
loop.close()
λ python3 quickstart.py
Sun Sep 30 14:11:57 2018 Hello!
Sun Sep 30 14:11:58 2018 Goodbye!
Sun Sep 30 14:11:59 2018 Hello from a thread!
exception calling callback for <Future at 0x36cff70 state=finished returned NoneType>
Traceback (most recent call last):
    ...
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed

To see what is going on behind run_in_executor(): it returns a future rather than a task, which means asyncio.Task.all_tasks() cannot see it, so the subsequent run_until_complete() does not wait for it to complete.
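A quick way to observe this is a sketch that reuses the loop and blocking() from the script above, run before the loop is closed:

fut = loop.run_in_executor(None, blocking)
print(type(fut))                                   # an asyncio future, not a Task
print(fut in asyncio.Task.all_tasks(loop=loop))    # False: all_tasks() only sees Tasks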

There are three solutions, each with its own trade-offs. They are presented one after another below; each looks at the internals of the event loop from a different angle and considers how the lifetimes of coroutines, threads, and child processes that call one another should be managed.

The first idea is to wrap the executor call inside a coroutine and create a task from that.

# OPTION-A
import time
import asyncio

async def main():
    print(f'{time.ctime()} Hello!')
    await asyncio.sleep(1.0)
    print(f'{time.ctime()} Goodbye!')
    loop.stop()

def blocking():
    time.sleep(2.0)
    print(f"{time.ctime()} Hello from a thread!")

async def run_blocking():  # 1
    await loop.run_in_executor(None, blocking)

loop = asyncio.get_event_loop()
loop.create_task(main())
loop.create_task(run_blocking())  # 2
loop.run_forever()

tasks = asyncio.Task.all_tasks(loop=loop)
group = asyncio.gather(*tasks, return_exceptions=False)
loop.run_until_complete(group)
loop.close()
    1. The idea: run_in_executor() returns a future rather than a task, so all_tasks() cannot capture it, but a future can still be awaited. So a new coroutine is written whose only job is to await the blocking call running in the executor, and that coroutine is added to the loop as a task;

    2. Add this coroutine to the loop in the same way as main().

The code above looks fine, except that task cancellation no longer works. Notice that the group.cancel() call is missing; if you add it back, you get the "Event loop is closed" error again. Even handling CancelledError inside run_blocking() and awaiting the future once more does not help: whatever you do, the task gets cancelled, yet the executor still runs its internal sleep.

The second idea is to collect and cancel only the unfinished tasks, but to also pass the future returned by run_in_executor() into the run_until_complete() call.

# OPTION-B
import time
import asyncio

async def main():
    print(f'{time.ctime()} Hello!')
    await asyncio.sleep(1.0)
    print(f'{time.ctime()} Goodbye!')
    loop.stop()

def blocking():
    time.sleep(2.0)
    print(f"{time.ctime()} Hello from a thread!")

loop = asyncio.get_event_loop()
loop.create_task(main())
future = loop.run_in_executor(None, blocking)   # 1
loop.run_forever()

tasks = asyncio.Task.all_tasks(loop=loop)   # 2
group_tasks = asyncio.gather(*tasks, return_exceptions=True)
group_tasks.cancel()    # cancel the tasks
group = asyncio.gather(group_tasks, future)  # 3
loop.run_until_complete(group)
loop.close()
    1. Keep a reference to the future returned by run_in_executor();

    2. By the time this line runs, the loop has already stopped; collect all the tasks first, and note that the executor's future is not among them;

    3. Create a new group that merges the task group with the executor's future. This way the executor exits normally, while the tasks are still cancelled by the usual cancel().

This solution behaves well at shutdown, but it is still flawed. Collecting every future returned by executor calls throughout the whole program, merging them with the tasks, and then waiting for everything to complete is effective but inconvenient. There is a better variation.

# OPTION-C
import time
import asyncio
from concurrent.futures import ThreadPoolExecutor as Executor

async def main():
    print(f'{time.ctime()} Hello!')
    await asyncio.sleep(1.0)
    print(f'{time.ctime()} Goodbye!')
    loop.stop()

def blocking():
    time.sleep(2.0)
    print(f"{time.ctime()} Hello from a thread!")

loop = asyncio.get_event_loop()
executor = Executor()   # 1
loop.set_default_executor(executor)    # 2
loop.create_task(main())
future = loop.run_in_executor(None, blocking)   # 3
loop.run_forever()

tasks = asyncio.Task.all_tasks(loop=loop)
group = asyncio.gather(*tasks, return_exceptions=True)
group.cancel()
loop.run_until_complete(group)
executor.shutdown(wait=True)    # 4
loop.close()
    1. Create our own executor instance;

    2. Set it as the loop's default executor;

    3. Same as before;

    4. Explicitly wait for all of the executor's futures to finish before the loop is closed, which avoids the "Event loop is closed" error. This is possible because we hold a reference to the executor; asyncio's built-in default executor does not expose an interface for this call.

run_in_executor() can now be called from anywhere in the program, and the program still exits gracefully.
