[Erlang] General-purpose Erlang optimization settings




These settings are common across multiple projects; there are additional scenario-specific tuning notes that are not described here one by one. -Sunface


First, erl startup parameters:

+K true

Enables kernel poll (epoll on Linux), which greatly improves polling efficiency.

+A 100

Size of the async thread pool, which services blocking operations for certain ports (for example file I/O).

+P 1024000

Maximum number of processes

+Q 65535

Maximum number of ports

+sbt db

Binds schedulers to logical processors; after binding, a scheduler's run queue no longer jumps between CPU threads. Combined with +sub, this lets the CPU load balance while avoiding a large number of migrations.

Note: on Linux it is best to use this option only when a single Erlang VM runs on the machine; if several Erlang VMs run on the system at the same time, it is better to leave it off.

+sub true

Enables scheduler utilization load balancing across CPUs. With false (the default), the VM uses a compaction strategy that concentrates tasks on as few CPU threads as possible until their load is high.

+swct eager

When this option is set to eager, schedulers are woken up more frequently, which can increase CPU utilization.

+spp true

Enables parallel port dispatch queues. Turning this on greatly increases system throughput; turning it off sacrifices throughput for lower latency.

+zdbbl 65536

The distribution buffer busy limit (in kilobytes) for distributed Erlang. When the buffer is full, processes sending messages to a remote node block.
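
Put together, these flags might be combined on a single erl command line (or in a release's vm.args file). This is only a sketch: the node name and the -s entry point (my_app) are placeholders, not part of the original article.

erl +K true +A 100 +P 1024000 +Q 65535 \
    +sbt db +sub true +swct eager +spp true +zdbbl 65536 \
    -name mynode@127.0.0.1 -s my_app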


Second, Erlang process startup (spawn) options

Example: create a new process and register it. This is a globally unique auto-increment ID generator process; because the work cannot be split across multiple processes, the performance of this single process is critical.

First, for performance and functional reasons this process is not a gen_server; second, several of its spawn options can be tuned:

register(num_generator, spawn_opt(?MODULE, init, [],
                                  [{priority, high},
                                   {scheduler, 0},
                                   {min_heap_size, 65536 * 2},
                                   {min_bin_vheap_size, 65536 * 2}])).

Parameter explanation:

1. priority

Erlang uses a fair scheduling policy, so by default every process gets the same run-time slice: 2000 reductions. For this scenario, however, the process should have a higher priority and be scheduled more often, so it is set to high. It could also be set to max, but max is reserved for system processes, so high is used.

2. scheduler

Binds the process to the specified scheduler, preventing the process from being reassigned to other schedulers and reducing CPU switching. Note that this is different from +sbt db: +sbt db prevents a scheduler's run queue from jumping between CPU threads, while the scheduler option prevents this process from being moved to another scheduler when its time slice is switched.

3. min_heap_size

The initial heap size of the process. This is a typical memory-for-CPU trade-off: increasing the initial size can significantly reduce the number of GC runs and memory reallocations.

4. min_bin_vheap_size

The initial binary virtual heap size of the process. When the process handles a lot of binary data, increasing it gives the same kind of benefit as increasing min_heap_size.
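
For context, a minimal sketch of what the registered generator might look like is given below. The init/0 entry point is the one named in the spawn_opt call above; the loop body and message protocol are assumptions for illustration, not from the original article.

init() ->
    loop(0).

%% Serve each {next_id, From} request with a strictly increasing integer.
%% The process state is just the last ID handed out.
loop(N) ->
    receive
        {next_id, From} ->
            From ! {id, N + 1},
            loop(N + 1)
    end.

A caller would then do something like:

num_generator ! {next_id, self()},
receive {id, Id} -> Id end.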



Third, port (socket) tuning

Example: a server listens on a port and accepts client requests. A typical scenario is a web server that targets high throughput and low latency.

Res = gen_tcp:listen(Port, [binary,
                            {reuseaddr, true},
                            {nodelay, true},
                            {delay_send, true},
                            {high_watermark, 64 * 1024},
                            {send_timeout, 30000},
                            {send_timeout_close, true},
                            {keepalive, true}])


Detailed parameters:

binary

After receiving data from the client, handle it as a binary. Binaries are very efficient data structures in Erlang: a binary larger than 64 bytes is stored globally (reference counted), so many operations do not need to copy the data, only a pointer to it; search for "refc binary" for details. Note: using binaries well requires a lot of experience, otherwise memory leaks can occur.
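
One common cause of the memory leaks mentioned above is holding on to a small sub-binary that keeps a large refc binary alive. Below is a small illustrative sketch of the usual workaround using binary:copy/1; the function name extract_header/1 and the 16-byte header size are made up for this example.

%% Without the copy, Header would be a sub-binary that pins the whole
%% large refc binary Packet in memory for as long as Header is kept.
extract_header(Packet) ->
    <<Header:16/binary, _Rest/binary>> = Packet,
    binary:copy(Header).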

reuseaddr:

Allows the system to reuse the address/port. This parameter is important for high-throughput systems. Search: Linux port reuse (SO_REUSEADDR).

nodelay:

Enables TCP_NODELAY on the socket in Linux (disables Nagle's algorithm). Search: tcp_nodelay 40 ms delay.

delay_send:

By default, an Erlang port tries to send a message immediately and, if that fails, queues it to be handled when the scheduler polls the port. If this option is set to true, the port never attempts the direct send and always puts the data in the queue to wait for the poll. Enabling it adds a small amount of message latency in exchange for a large throughput improvement.

high_watermark:

The port's send buffer size. When the buffer is full, the next send blocks until the buffer drops below the low_watermark threshold. If the system is network-I/O intensive, increase this buffer to avoid blocking on send.

send_timeout:

As mentioned under high_watermark, a send can block. If the block lasts longer than this timeout, the send times out and returns immediately, and sending stops.

send_timeout_close:

If send_timeout is set together with send_timeout_close, the socket is closed directly when a send times out. If the sending process is not critical (for example a web user process), it is strongly recommended to enable this option: if a send has timed out after 30 seconds, that client is in serious trouble, and disconnecting is the best practice; otherwise you may run into a lot of strange bugs.

keepalive

Enables TCP-level keepalive on the socket. Whether to turn it on depends on the business: if the same client keeps a persistent connection (as with HTTP/1.1 keep-alive) and continues to issue requests, setting it to true is recommended so that idle connections stay open and repeated TCP handshakes are avoided.
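
For completeness, here is a minimal acceptor sketch built on the listen socket above (Res is expected to match {ok, LSock}); the handle/1 function and the one-process-per-connection structure are assumptions for illustration only.

%% Accept each incoming connection and hand the socket over to a new process.
accept_loop(LSock) ->
    {ok, Sock} = gen_tcp:accept(LSock),
    Pid = spawn(fun() -> handle(Sock) end),  %% handle/1 is application code
    ok = gen_tcp:controlling_process(Sock, Pid),
    accept_loop(LSock).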


Example: the server initiates a large number of HTTP requests. After tuning these parameters, the time needed for the same throughput is 1/3 to 1/2 of what it was before optimization (figures from rigorous testing).

inets:start(),

httpc:set_options([{max_keep_alive_length, 500},
                   {max_sessions, 100},
                   {nodelay, true},
                   {reuseaddr, true}]),

Detailed parameters:

max_keep_alive_length:

The maximum number of requests allowed on the same persistent HTTP connection. The default is 5; beyond that, a new connection is established.

max_sessions:

The maximum number of concurrent HTTP connections to the same target server; raising it significantly increases upstream (request) throughput.

nodelay:

See above

reuseaddr:

See above
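
As a usage sketch to go with the options above: once inets:start/0 and httpc:set_options/1 have run, ordinary httpc:request calls reuse the persistent sessions automatically. The URL here is only a placeholder.

%% Later requests to the same host reuse the keep-alive connections
%% governed by max_sessions and max_keep_alive_length.
{ok, {{_Version, 200, _Reason}, _Headers, Body}} =
    httpc:request(get, {"http://example.com/api", []}, [], []).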



That is all for now; this will be updated in the future. If this article helped you, please leave a message of support; it was typed out word by word, which is hard work ;)
