[ECUG Feature Review] "Talk about Cerl: a detailed discussion of the differences between the Go and Erlang concurrency models" by Xu Xiwei (CEO of Qiniu Cloud Storage)


Xu Xiwei: Let me start by introducing ECUG. It started in 2007 in the Pearl River Delta, first in Zhuhai, then Guangzhou and Shenzhen, and in the beginning it was an Erlang community. Around 2010 the name was changed to reflect cloud computing more broadly, and it was no longer limited to Erlang: there were talks on a variety of languages such as Haskell and Scala. In fact there was no restriction at all, as long as a talk shared hands-on experience of back-end development and operations. After we formally renamed it a cloud computing group, its scope expanded a great deal, to the whole country; meetings have been held in Beijing and across the Yangtze River Delta. So we have kept this up for almost 8 years now, nine sessions in total, since there were two sessions in 2007. That is the history of ECUG. This is the first time the conference has run in Nanjing. I went to university in Nanjing, and after thinking it over for a long time this year, we hoped this spark could be ignited in every city, so choosing Nanjing this year has special significance for me.

Now to my topic. Actually I gave a version of this talk at the ECUG meeting in Hangzhou, but I was fairly tactful then: I had already realized the problems with Erlang's programming style, but my understanding of the solution was not yet thorough. So today I return to this topic and discuss in some detail what the difference is between the Go and Erlang concurrency models. Basically, everyone who knows the history of ECUG has been confused about why I moved from Erlang to Go.

The point of this topic is still to talk about concurrency in Go and Erlang. In 2009 I decided to abandon Erlang itself and rebuild the Erlang programming model in C++; that was the origin of Cerl, a network library whose name combines C for C/C++ and Erl for Erlang. The first idea was simply to port Erlang's model to C++, because Erlang programmers were genuinely hard to recruit. Later I found that Erlang's concurrency model was not as comfortable as I had imagined, so there was a Cerl 2.0, a reflection on and improvement of the Erlang model. In the end, the result of that reflection turned out to be exactly the same as Go's concurrency model. So the contrast between Cerl 1.0 and 2.0 can actually be thought of as the contrast between Erlang and Go. Many people ask me why Cerl was never open-sourced. The reason is that I think past that point in time open-sourcing it no longer had much meaning, so I did not want to bother. I was once a die-hard C++ fan, but after encountering Go I developed a very strong wish that things like C++ would exit the stage of history sooner. I once gave a talk on this topic, about Go and Qiniu's history, reflecting on my years of struggle with C++; I will put that talk up for you as supplementary material.

Since Cerl is not open source, how can we understand it? In fact the world has a similar thing: GNU Pth, which I only learned of recently. Along the way I also saw another open-source library, probably open-sourced around 2008; unfortunately I can no longer find the project URL. When I looked at it, it was much the same as Cerl, though not written as completely. But Pth's history goes back very early: it is an open-source project under GNU, started in 1999 and no longer updated since about 2006. It appeared very early, its concurrency model is almost the same as Cerl's, and its completeness is very high after seven years of development, so studying that library is basically equivalent to studying Cerl. But I have reflected on why something as good as Pth never became popular. The first reason is that it was untimely: it appeared too early to get noticed, because the concept of the multi-core era, in my impression, only took hold around 2007, when Erlang was gradually perceived as valuable. The second is that it was not part of the standard library. A library like this is very intrusive: it is not enough that you know it is good; everyone else must realize it too, otherwise libraries written by others are unusable with yours. This intrusiveness and contagiousness means there is no way to really popularize such a library; when it first arises it needs favorable conditions, and Pth had none. This is similar to why GC is hard to popularize in C++: GC is likewise intrusive and contagious. The third reason is flaws in the implementation, the biggest being that its lightweight processes were not truly lightweight.

The core of a lightweight process is not just good performance; more important is small resource consumption, and most of the time keeping that footprint small is actually hard to achieve. Go does better on this point: it has an automatically growing stack, so the initial stack can be as small as 4 KB, and each lightweight process is truly lightweight in its resource footprint. The vast majority of libraries find this hard to do. In Cerl, for example, we could only let you specify how big each lightweight process's stack should be, and it is very difficult for programmers to specify a stack size; that is a great mental burden. So to understand Cerl, studying Pth is good learning material; the second option, I think, is to study the Go runtime directly. In terms of lightweight processes, its bottom layer is the same as Cerl's.

I advocated the lightweight process model very early, from the time I first promoted Erlang. What is the lightweight process model? It is very simple, just two things: first, encourage writing program logic with synchronous IO; second, use as many concurrent processes as needed to increase IO concurrency. This is unlike the asynchronous IO concurrency model, where even a single thread can achieve high concurrency.

The core ideas of all lightweight process concurrency models are the same. First, make each lightweight process occupy as few resources as possible, so that you can create concurrency at the level of millions; the only limit on how many processes you can create is your memory. The smaller each process's footprint, the higher the achievable concurrency. On a server, we all know memory is a precious resource, but in some sense it is also very cheap. The second idea is lighter switching cost, which is why these processes live in user space: a switch is basically in the same order of magnitude as a function call, so the cost is very, very low. An operating-system process switch, by contrast, must at least go from user mode into the kernel and back to user mode.

Let me talk about the implementation principle of the lightweight process model, which quite a few people care about. I have talked less about this before, but today let me say a little about what a lightweight process really is. First, what is a process? In essence, a process is nothing more than a stack plus register state. How does a process switch happen? Save the current process's registers, then load the register state of the new process; that implicitly switches the stack at the same time, because the stack's position is itself maintained in registers (ESP/EBP). That is the concept of a process, and even when the operating system kernel does it for you, it is essentially the same. So all of this can be done in user space in much the same way. It is essentially similar to a function call, because a function call also saves registers, just fewer of them, and it does not switch the stack. So the cost of this switch is basically on the same order as a function call; I have measured it, and it is roughly 10 times the cost of a function call, still within the same order of magnitude. Having introduced the concept of a lightweight process, what is the whole thing physically? The bottom layer is actually threads plus asynchronous IO, and you can think of each thread in the thread pool as a virtual CPU (vCPU). The number of logical lightweight processes (routines) is usually much larger than the number of physical threads; each physical thread runs exactly one routine at a time, and the other routines wait.

There are two kinds of waiting routines: those waiting for IO, which could not make progress even if given the CPU until some IO operation completes; and those not waiting on any precondition, which are eligible for scheduling. When a physical thread (vCPU) gives up its current routine, either because the routine yields voluntarily or because an IO operation triggers a dispatch, a new routine can run on it: the scheduler selects, according to some priority algorithm, one routine from those that are waiting and eligible. So the principle of lightweight process scheduling is this: it is a user-space thread with a non-preemptive scheduling mechanism, and scheduling is mainly triggered by IO operations. An IO operation function is implemented roughly as follows: first initiate an asynchronous IO request; after initiating it, set the routine's state to "waiting for IO completion" and give up the CPU, which triggers the scheduler; the scheduler then checks whether anyone is waiting to be dispatched and, if so, switches to it. When the IO event completes, the IO layer usually invokes a callback function as the completion notification, which is taken over by the scheduler. What does it do? Very simply, it sets the routine that owns this IO operation to "ready" so it can participate in scheduling again. While the state was "waiting for IO", dispatching it would have been pointless, since it could not do anything; "ready" lets the routine participate in scheduling. There is one more situation: a routine actively yields the CPU. In that case the routine's state during the switch remains "ready", and it can be switched back to at any time. These are basically the most fundamental scheduler trigger conditions in non-preemptive scheduling: IO operations, IO completion events, and actively yielding the CPU.

But in fact preemptive scheduling can also be implemented with user-space threads, and the approach is very simple: the scheduler starts a timer, and the timer task checks each routine that is in the executing state; if one has held the CPU for too long, it is forced to give up the CPU. That achieves preemptive scheduling. So even in user space, everything the operating system's process scheduler does can be fully implemented. This is how lightweight processes are implemented.

The next question: what is the difference between Erlang and Go? Aren't they both lightweight process concurrency models? It should be said that their basic philosophies are quite similar, but there are very big differences in the details, not just a little. The main differences: first, their attitudes toward locks differ; second, their attitudes toward asynchronous IO differ; third, and this is a detail, but an important one, their message mechanisms differ.

First, the attitude toward locks. Erlang is very hostile to locks: it believes that with immutable variables, locks can largely be avoided; locks impose a great mental burden, so there should be none. Go's view is that locks do carry a great mental burden, but they cannot really be avoided. Let us first look at this macroscopically: why can locks not be avoided? A server is a shared resource, used by many users rather than owned by one person; once there is concurrency, those concurrent requests are all competing for this shared resource. We know that once state is shared and mutated by multiple parties, there must be a lock, and this does not change with implementation details. Of course this analysis is from a macro viewpoint; I will also get to the technical details of why locks cannot be avoided.

Why does Erlang have no locks? In fact, an Erlang server is a single process, which logically has no concurrency; a process is one execution body. So an Erlang server differs from a Go server: a Go server must be many processes (goroutines) together forming one server, with each request handled by its own goroutine. Erlang is not like that: an Erlang server is a single process. Since it is a single process, all concurrent requests go into the process mailbox (more on process mailboxes shortly), and the server picks messages (request contents) out of the mailbox and processes them one at a time. So a single Erlang server sees no concurrent requests, and this is the root reason it needs no locks, not the fact that its variables are immutable. Everyone knows a single-threaded server needs no locks. Then someone might ask, how does Erlang achieve high concurrency? Two ways: first, an Erlang system runs many such servers, each of which does not interfere with the others, so they can run concurrently. Second, what if a single server wants high concurrency? Erlang's answer to that question is asynchronous IO.

But what trouble does asynchronous IO bring Erlang? The first is that it introduces intermediate state into the server, which is very, very deadly, and which leads me to conclude that once Erlang introduces asynchronous IO, it is actually worse than the orthodox asynchronous IO programming model. Let us see why. First, why are intermediate states introduced? With asynchronous IO, a request that has not yet completed must nevertheless give way to another request, so the server must keep the intermediate state of the unfinished request. Once there is intermediate state, the server's own state is no longer clean: the intermediate state of an individual request is pushed onto the server to maintain, which is a very unreasonable thing. Second, this intermediate state turns the server into a more complex state machine, complex because the server must maintain not just one request's state but the state of every outstanding request. Third, these intermediate states create the demand for locks; I will explain why below. So Erlang tries to avoid locks, but once asynchronous IO appears there is essentially no way to avoid them.

Why can Erlang not avoid locks? As we just said, because of the process mailbox, an Erlang server is a single process (a single execution body), so normally there is no concurrency and hence no need for locks. But once asynchronous IO is introduced, pseudo-concurrency appears. Being a single process, it cannot have true concurrency, but if we regard the Erlang process as a vCPU, then because some requests have not completed, many concurrent requests are effectively running on the same vCPU. It can then happen that one request temporarily occupies a resource that cannot yet be released, and mutually exclusive behavior appears. Once such behavior exists there must be a lock. The lock is not implemented by the operating system but by yourself, perhaps showing up as something like a busy flag, but it is in fact a lock. It has all of a lock's characteristics: forget to release it, and the whole server hangs; its behavior is exactly that of any lock. Some will say, "I have no operating-system lock." True, a single-threaded program has no OS lock, but there is no doubt that our code is in fact locking.

So, on the question of attitude toward locks: Erlang tries to avoid locks, but essentially just throws the lock problem to the user, while Go chooses to accept the fact that locks cannot be avoided.

Now the attitude toward asynchronous IO. Go believes there should be no asynchronous IO code under any circumstances. Erlang, on the other hand, is not very pure from the standpoint of the lightweight process concurrency model: it does not reject asynchronous IO. It is a hybrid, asynchronous IO programming coupled with a lightweight process model, and the result of the mix is that Erlang programming, once asynchronous IO is used, actually carries a greater mental burden than plain asynchronous IO programming.

The last point is a detail, the second concept I mentioned just now: the process mailbox. In Erlang, all messages sent to a process go to its process mailbox, and Erlang provides primitives for sending messages and for receiving them from the mailbox. Go provides communication facilities such as channels, which can be created cheaply and then used for communication between goroutines. By contrast, Go's message mechanism is more lightweight: the message queue and the process are completely separate facilities.

So, how should we understand Go's concurrency model? Is it new? Actually, no. I have said on many occasions that Go's concurrency model is not an innovation at all. Why? Because Go's concurrency model is how we have written programs since the day we had networks: on the first day of writing network programs, we were already writing in the model Go advocates. So where is the problem? Why did everyone eventually abandon the oldest concurrency model? Because OS processes and threads are too heavy, which led people to improve IO concurrency with workarounds, namely today's widely accepted asynchronous IO programming paradigm. The problem with that paradigm is that it greatly aggravates the programmer's mental burden. So Go's pioneering contribution has two points. The first is a return to what is valuable: the oldest concurrent programming model is in fact the best concurrency model. Its problem was the cost of the execution body, so the most important thing Go did was to drive that cost down without limit; you know that recent versions of Go shrank the minimum stack to 4 KB, small enough that many people find it incredible. So Go solved the problem at the implementation level, not by changing the programming paradigm. Go's second contribution is to build the execution body into the language as a standard facility. I said earlier that the Pth library never became popular because this kind of concurrency facility is contagious and mutually exclusive: a system should not have two such facilities, and if you use a different one you will be rejected. This contagiousness demands that the execution body be something standardized. And what era is this? The multicore era has been proclaimed for almost ten years, yet as we can see, few languages implement this as a language built-in standard, and I think that is a great contribution of Go.

Looking back, Go's concurrency model is what this slide lists. It is the oldest concurrency model: modern operating systems, and the operating-system principles everyone learned, use exactly the same concepts as Go. First, this concurrency model involves the concept of the execution body, which in Go is the goroutine; then atomic operations, mutexes, synchronization, and messages; and finally synchronous IO. These are the entire contents of Go's concurrency model.

So the last question: can Go's concurrency model be implemented in Erlang? It is easy to implement Erlang's concurrency model in Go, but would you want to implement Go's concurrency model in Erlang? In principle, no. Processes in Erlang share nothing, and that is the most important basis for its opposition to locks: processes cannot share state, so no locks are needed. But I actually think this is the biggest problem. Why? Because when Erlang receives a request, there is no way to create a child execution body, hand it the specific request, and then stop caring about it. Since Erlang processes share no state, to change the server's state you must go the asynchronous route: do the work, then throw a message back to the server telling it how you changed the state. Changing server state through messages is relatively costly, and it brings many problems. So I think changing state via messages is a bad idea for Erlang; it goes in a big circle without changing anything. Of course, if I had to do Go's concurrency model in Erlang, it would require castrating Erlang: if we make Erlang's servers stateless, we can implement Go's concurrency model. What kind of server is stateless? A PHP server may be the easiest example: it hands all state to external storage services, which maintain it. If Erlang's servers were stateless, Go's concurrency model would become possible, because all state would be modified through external storage. But Erlang programmers would be very sad, and the Erlang language would no longer bring any substantial benefit. So my conclusion is: it is time to give up Erlang.

That is the content of my talk. Thank you all!
