Erlang OTP Learning: Supervisor [Repost]


Transferred from: http://diaocow.iteye.com/blog/1762895

Today I took a careful look at the supervisor behaviour; here is a summary:

In the supervision tree diagram (the original post had a figure here), the squares represent supervisor processes. A supervisor's job is simple: it looks after its "little brothers" (the child processes) and, when necessary, restarts or terminates a child. The circles represent worker processes, the ones doing the real work. Note in particular that what a supervisor monitors is not necessarily a worker process; it can be another supervisor. This way we can manage processes hierarchically and build a robust, fault-tolerant system.

Now let's see how to create a supervisor:

Like the gen_server and gen_event modules, Erlang splits a supervisor into two decoupled parts: a generic, non-functional part and a functional part. The non-functional part is a module named supervisor (referred to below as the S module); the functional part is a callback module provided by the user. In the callback module you only need to write an init function for the S module to call back; init specifies the three attributes of the supervisor to be created:

1. Restart strategy
A. one_for_one
When a child process dies, its supervisor restarts only that child, without affecting the other children.
B. one_for_all
When a child process dies, its supervisor terminates all the remaining children and then restarts all of them.
C. rest_for_one
When a child process dies, its supervisor terminates only the children that were started after that child, and then restarts the dead child together with those children.
D. simple_one_for_one
The restart behaviour is the same as one_for_one, except that all children are added dynamically and run the same code (details later).

2. Maximum restart frequency
The main purpose of this property is to prevent children from cycling through terminate/restart too frequently: if the restarts exceed this frequency (more than MaxR restarts within MaxT seconds), the supervisor terminates all of its children and then terminates itself. (Based on my test results, the frequency is calculated from the interval between the previous restart and this one.) The init sketch after the child specification section below shows where these values go.

3. Child specification
Plainly speaking, this property tells the supervisor which child processes to monitor, how to start them, how to terminate them, and so on. Its detailed format is:
{Id, StartFunc, Restart, Shutdown, Type, Modules}
Id = term()
StartFunc = {M, F, A}
Restart = permanent | transient | temporary
Shutdown = brutal_kill | integer() > 0 | infinity
Type = worker | supervisor
Modules = [Module] | dynamic
where:

Id uniquely identifies a child process.
StartFunc tells the supervisor how to start the child (that is, which function to call). Note in particular: 1. StartFunc must create a link to the child process (only then can the supervisor monitor the child and perceive its life and death); 2. if the child is created successfully, it must return {ok, Child} or {ok, Child, Info}, where Child is the pid of the child process and Info is ignored by the supervisor (I have tripped over not returning the standard format here).
Restart tells the supervisor whether the child may be restarted after it dies. permanent means it is always restarted (no matter why the child died); temporary means it is never restarted; transient is a bit special: the child is not restarted if it exited for reason normal or shutdown, otherwise it is restarted. (PS: the Restart parameter overrides the restart strategy. For example, if a child's Restart is set to temporary and the supervisor's restart strategy is one_for_all, then when one of the other children dies, the temporary child is terminated and never restarted.)
Shutdown tells the supervisor how to terminate a child when it needs to. brutal_kill, as the name implies, ends the child very rudely and violently: the supervisor internally calls exit(ChildPid, kill); note that an exit with reason kill cannot be trapped, regardless of whether ChildPid is a system process. An integer value is a timeout: when the supervisor wants to end the child it calls exit(ChildPid, shutdown), and if it does not receive the child's exit signal within that time (because the supervisor is linked to the child, it receives an exit signal when the child dies), it calls exit(ChildPid, kill) to terminate the child violently. (Here I suddenly wonder: does this also cause the supervisor to receive an untrappable exit signal?) infinity: when your child process is itself a supervisor, set Shutdown to infinity so that the child (supervisor) has enough time to shut down its own supervision tree.
Type specifies the kind of the child process (worker or supervisor).
Modules: I do not quite understand this parameter yet, so I am setting it aside for now.
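
Putting the three properties together, here is a minimal sketch of what init might return in this classic tuple form (the my_worker name and the 5000 ms shutdown value are only illustrative placeholders):

    %% Restart strategy one_for_one; maximum restart frequency: at most
    %% 3 restarts within 10 seconds, otherwise the supervisor terminates
    %% all children and then itself.
    init(_Args) ->
        {ok, {{one_for_one, 3, 10},
              [{my_worker,                    %% Id
                {my_worker, start_link, []},  %% StartFunc = {M, F, A}
                permanent,                    %% Restart
                5000,                         %% Shutdown (ms to wait after exit(Pid, shutdown))
                worker,                       %% Type
                [my_worker]}]}}.              %% Modules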


Having said all this, let's look at a simple example (the child process is restarted by its supervisor every 5 s):

Worker process
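
The worker code appeared as an image in the original post and did not survive the transfer; the following is a minimal reconstruction based on the description below (the module name tick comes from the supervisor settings further down, the rest is a guess at the original):

    -module(tick).
    -export([start_link/0, loop/0]).

    %% StartFunc called by the supervisor. spawn_link/3 creates the link the
    %% supervisor needs in order to notice when the child dies, and the return
    %% value is in the required {ok, Pid} format.
    start_link() ->
        Pid = spawn_link(?MODULE, loop, []),
        {ok, Pid}.

    %% Print start..., pause 5 s, print quit... and exit normally.
    loop() ->
        io:format("start...~n"),
        timer:sleep(5000),
        io:format("quit...~n").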


The worker process does something simple: it prints a start... message, pauses 5 s, and finally prints a quit... message and exits; its start_link is there for the supervisor to call.

Supervisor
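
The supervisor code was also an image; here is a reconstruction based on the parameter settings listed below (the brutal_kill shutdown value is my assumption, the rest follows the description):

    -module(my_supervisor).
    -behaviour(supervisor).
    -export([init/1]).

    %% one_for_one strategy, maximum restart frequency 1 restart per second,
    %% one permanent worker started via tick:start_link().
    init(_Args) ->
        {ok, {{one_for_one, 1, 1},
              [{tick, {tick, start_link, []},
                permanent, brutal_kill, worker, [tick]}]}}.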


Let's focus on the settings of a few parameters:
Restart strategy of the supervisor: one_for_one
Maximum restart frequency of the supervisor: 1 restart per second
StartFunc of the child process: the start_link function of the tick module, with an empty argument list
Restart property of the child process: permanent (this is the key that lets the child be restarted after it dies)

Let's see how the program works:
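
(The run was shown as a screenshot in the original; roughly it looks like the following in the shell, though the pid and the exact interleaving of the output will differ from run to run.)

    1> supervisor:start_link(my_supervisor, []).
    start...
    {ok,<0.34.0>}
    quit...
    start...
    quit...
    ...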


The first line calls supervisor:start_link(my_supervisor, []) to create a supervisor (my_supervisor is the callback module). If the supervisor is created successfully (the child it monitors is created as well), it returns {ok, SupPid} (for example {ok,<0.34.0>}). We then see the screen keep printing start... quit... in a loop, with a different pid each time. This shows that whenever the supervisor finds that the child has died (for whatever reason, even a normal exit), it restarts the child. (You can try changing the child's Restart from permanent to temporary and see how the run differs.)

At this point we have completed a supervisor example. Simple as it is, it does build a supervision tree. For more details about supervisor, see the following documents:
http://www.erlang.org/doc/design_principles/sup_princ.html
http://www.erlang.org/doc/man/supervisor.html

Finally, let's look at simple_one_for_one. This restart strategy is basically the same as one_for_one (when a child dies, only that child is restarted, without affecting the other children); the only difference is that under simple_one_for_one children can only be added dynamically, and all children run the same code. Let's look at an example (from the official OTP documentation):
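
(The example code, an image in the original, was an init along these lines; the call module name matches the StartFunc mentioned below and the temporary/brutal_kill values follow the official example, but treat this as a reconstruction.)

    init(_Args) ->
        {ok, {{simple_one_for_one, 0, 1},
              [{call, {call, start_link, []},
                temporary, brutal_kill, worker, [call]}]}}.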



Notice that here the StartFunc {call, start_link, []} does not actually start a child process; instead the children must be added dynamically by calling supervisor:start_child(Sup, List), where the first argument Sup says which supervisor to add the child to, and the second argument List is passed along when the child is created (internally apply(M, F, A ++ List) is called).
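
For example (Sup is the supervisor's pid and Arg1 is a placeholder argument), a child is added like this:

    %% Start a new child under the simple_one_for_one supervisor Sup;
    %% internally apply(call, start_link, [] ++ [Arg1]) is called.
    {ok, Pid} = supervisor:start_child(Sup, [Arg1]).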


Well, that's it for supervisor for now; if anything is wrong, please point it out!
