Original: http://stackoverflow.com/questions/4113299/ruby-on-rails-server-options
Apache vs Nginx
Both are web servers that can serve static files. Apache is more popular and has more features; Nginx has fewer features but is smaller and faster. Neither Apache nor Nginx can serve Ruby apps out of the box; you need additional modules or plugins to make that work. Both Apache and Nginx can act as reverse proxies, which means they can forward incoming HTTP requests to another server and then relay that server's response back to the client.
Mongrel and other production app servers vs. WEBrick
Mongrel is an application server implemented in Ruby. Concretely, it: 1. loads the Ruby app inside its own process space; 2. creates a TCP socket so it can communicate with the outside world (e.g. the Internet), listens for HTTP requests on that socket, and forwards the request data to the Ruby app; 3. takes the object the Ruby app returns describing the HTTP response, converts it into actual HTTP response bytes, and writes them back over the socket. Mongrel is no longer maintained; the alternative servers are (a minimal sketch of this request/response contract follows the list):
- Phusion Passenger
- Unicorn
- Thin
- Puma
- Trinidad (JRuby only)
- TorqueBox (JRuby only)
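To make the request/response contract described above concrete, here is a minimal sketch in Rack terms, the interface these app servers speak; nothing in it comes from the original answer, it is purely illustrative.

```ruby
# config.ru -- run it with any of the servers above, e.g. `rackup`,
# `unicorn`, `thin start`, or `puma`.
app = lambda do |env|
  # `env` is a hash describing the parsed HTTP request
  # (method, path, headers, input stream, ...).
  body = "You requested #{env['REQUEST_METHOD']} #{env['PATH_INFO']}\n"

  # The app returns a plain Ruby description of the response;
  # the app server turns it into real HTTP bytes on the socket.
  [200, { 'Content-Type' => 'text/plain' }, [body]]
end

run app
```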
I'll get to how these differ from Mongrel shortly. First, WEBrick versus Mongrel; the differences are as follows:
- WEBrick is not suitable for production. WEBrick is written entirely in Ruby, whereas Mongrel and most other Ruby app servers are mostly Ruby with some parts in C; in particular, their HTTP parsers are written in C for performance.
- WEBrick is slower and less robust, with well-known memory leak problems and HTTP parsing problems.
- Because WEBrick ships with Ruby, it is often used as the default server in development mode, while the other servers require separate installation. Using WEBrick in production is not recommended, although for some reason Heroku chose WEBrick as its default server; they used to use Thin, and I don't know why they switched to WEBrick. (A small sketch of running a Rack app under WEBrick follows below.)
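As a hedged illustration of the point above, this is how the same kind of Rack app can be booted under WEBrick in development (assuming an older Rack < 3.0 setup, where the handler still lives under Rack::Handler):

```ruby
require 'rack'

# A trivial Rack app, fine for local development.
app = proc { |env| [200, { 'Content-Type' => 'text/plain' }, ["hello from WEBrick\n"]] }

# WEBrick ships with Ruby itself, so no extra server gem is needed --
# but remember it is not recommended for production traffic.
Rack::Handler::WEBrick.run(app, Port: 3000)
```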
The app server world
- All current Ruby app servers speak HTTP, but some can be exposed directly to the Internet on port 80 while others cannot.
- Can be exposed directly on port 80: Phusion Passenger, Rainbows.
- Should not be exposed directly: Mongrel, Unicorn, Thin, Puma. These servers must be placed behind a reverse proxy such as Apache or Nginx.
- I don't know Trinidad and TorqueBox well, so I'll just ignore them here.
Why do some servers have to be placed behind a reverse proxy?
- Some servers can handle only one request per process at a time. To handle two requests simultaneously you have to run multiple server instances, all serving the same Ruby app. This set of processes is called an app server cluster (hence Mongrel Cluster, Thin cluster). You then set up Apache or Nginx as a reverse proxy in front of the cluster, and it distributes incoming requests across the application instances. (See the "I/O concurrency model" section for more information.)
- The web server can buffer requests and responses. Some clients send and receive data slowly, and the web server shields the app server from such slow clients: you don't want the app server to sit idle while waiting for a client to finish sending or receiving data. Apache and Nginx are very good at doing many things at once because they are multithreaded or event-based.
- Most app servers can serve static files, but they are not particularly good at it; Apache and Nginx are faster.
- It is a more secure setup to have Apache or Nginx serve static files directly instead of forwarding those requests to the app server. Apache and Nginx are also mature enough to shield the app server from malformed or malicious requests.
Why can some servers be exposed directly to the Internet?
- Phusion Passenger is unlike the other app servers; one of its distinguishing features is that it integrates into another web server, namely Apache or Nginx.
- The author of Rainbows has publicly stated that it is safe to expose it directly to the Internet: he is confident its HTTP parser is very hard to attack. Still, the author provides no warranty and says that you use it at your own risk.
Application server comparison
In this section I'll compare most of the servers mentioned above, but not Phusion Passenger; Phusion Passenger is different enough from the others that I'll give it its own section. I'll also skip Trinidad and TorqueBox because I don't know them well; they only come into play if you use JRuby.
- Mongrel is quite bare-bones. As mentioned earlier, it is single-threaded and multi-process, so in practice it is used as a cluster. It has no process monitoring: if a process in the cluster crashes, it has to be restarted manually, so people use external tools such as Monit or God to watch over Mongrel.
- Unicorn is a fork of Mongrel. It supervises a configurable number of worker processes: if a worker crashes, the master process automatically restarts it. All workers listen on a single shared socket instead of each process using its own socket, which simplifies the reverse proxy configuration. Like Mongrel, it is single-threaded and multi-process. (A minimal configuration sketch follows this list.)
- Thin uses the EventMachine library to implement an evented I/O model. It uses Mongrel's HTTP parser but is not otherwise based on Mongrel. Its cluster mode has no process monitoring, so you need to watch for crashed processes yourself, and each process listens on its own socket rather than sharing one like Unicorn does. In theory Thin's I/O model allows very high concurrency, but in the way most applications use Thin, a single Thin process can handle only one concurrent request, so you still need a cluster. See the "I/O concurrency model" section for more on this peculiarity.
- Puma is also a fork of Mongrel, but unlike Unicorn, Puma is designed to be purely multi-threaded. It currently has no built-in cluster support, and you need to take special care to ensure you can utilize multiple cores. See the "I/O concurrency model" section for more information.
- Rainbows supports multiple concurrency models through the use of different libraries.
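For concreteness, here is a minimal Unicorn configuration sketch (an illustrative unicorn.rb, not taken from the original answer) showing the points above: a fixed pool of single-threaded workers, one shared listen socket, and a master process that respawns dead or stuck workers.

```ruby
# unicorn.rb -- start with: bundle exec unicorn -c unicorn.rb -E production
worker_processes 4                        # one single-threaded worker per core
listen '/tmp/unicorn.sock', backlog: 64   # all workers accept on this shared socket
preload_app true                          # load the app once in the master, then fork
timeout 30                                # master kills and respawns workers stuck > 30s
```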
I/O concurrency model
- Single-threaded multi-process. This is traditionally the most popular I/O model for Ruby app servers, largely because multithreading support in the Ruby ecosystem used to be poor. Each process handles one request at a time, and the web server load-balances across the processes. This model is very robust, and it is hard for developers to introduce concurrency bugs. However, it is only suited to fast, short requests; it is a poor fit for long requests that block on I/O, such as calling an external HTTP API.
- Purely multi-threaded. Now that the Ruby ecosystem has solid multithreading support, this I/O model has become practical. Multithreading allows high I/O concurrency, for both short and long requests. Developers can introduce concurrency bugs, but fortunately most frameworks are designed so that this is unlikely to happen. One thing to note: because of the global interpreter lock (GIL), the MRI Ruby interpreter cannot fully use multiple CPU cores even with many threads; the workaround is to run multiple processes, roughly one per CPU core. JRuby and Rubinius have no GIL, so a single process can fully utilize multiple cores on those interpreters. (A small sketch illustrating the GIL effect follows this list.)
- Hybrid multi-threaded / multi-process. This is implemented by Phusion Passenger Enterprise 4. It lets you easily switch between single-threaded multi-process, purely multi-threaded, and multiple processes each running multiple threads, giving you the best of both worlds.
- Evented. This model is quite different from the ones above. It allows extremely high I/O concurrency and is excellent for long requests, but it requires explicit support from the application and the framework. The major frameworks (Rails and Sinatra) do not support evented code, which is why in practice a Thin process cannot handle more than one request at a time, behaving just like the single-threaded multi-process model. Only specialized frameworks such as Cramp can take full advantage of evented I/O. (An EventMachine sketch of this model follows below.)
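To illustrate the GIL point in the multi-threaded bullet above, here is a small sketch (not from the original answer; it assumes MRI on a POSIX system where `fork` is available, and timings vary by machine): CPU-bound work does not speed up with threads on MRI, while forked processes can spread across cores.

```ruby
require 'benchmark'

def cpu_work
  2_000_000.times { Math.sqrt(12_345.678) }
end

# Sequential baseline.
puts Benchmark.measure { 4.times { cpu_work } }

# Threads: under MRI's GIL only one thread runs Ruby code at a time, so this
# is roughly as slow as the baseline. JRuby/Rubinius (no GIL) can go faster.
puts Benchmark.measure { 4.times.map { Thread.new { cpu_work } }.each(&:join) }

# Processes: each fork has its own interpreter (and its own GIL), so the
# work spreads across CPU cores even on MRI.
puts Benchmark.measure { 4.times { fork { cpu_work } }; Process.waitall }
```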
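And a minimal sketch of the evented model using EventMachine, the library Thin builds on (illustrative only; a real evented framework such as Cramp wraps this reactor for you):

```ruby
require 'eventmachine'   # gem install eventmachine

EM.run do
  # Two "slow requests" are in flight at once on a single thread: the reactor
  # interleaves their completions instead of blocking on either one.
  EM.add_timer(1) { puts 'slow request A finished' }
  EM.add_timer(1) { puts 'slow request B finished' }
  EM.add_timer(1.5) { EM.stop }   # both finish after ~1 second, not ~2
end
```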
[Translation] Common Ruby and Rails servers