Socket Communication Between Erlang and PHP

Source: Internet
Author: User
Tags: apc, unpack

Http://unbe.cn/erlang_php_socket_test_01/

Some time ago, I was chatting with a friend in a group about memcached and APC, and the conversation gradually turned to /dev/shm. He said he found /dev/shm very useful for caching. That discussion prompted me to test the read/write performance of memcached, APC, and /dev/shm.

During the tests, I thought of Erlang's built-in ets and its legendary concurrency performance. If I used Erlang + ets to build a key-value cache server like memcached, would it outperform memcached? So I started my experiment: I wrote a socket server in Erlang that supports concurrent connections, and a PHP client to talk to it.

I will set the test results aside for now and put them at the end of the article with some commentary, so as not to dilute the topic of this article.

The experience most worth sharing from this experiment is the socket communication method between Erlang and PHP.

The Erlang and PHP code below does not actually implement any caching; it only demonstrates the communication mechanism. The reason will become clear later in this article.

The Erlang server code has a nice name: mycached. The service is started as follows:

mycached:start(10086, 10).

The first parameter of mycached:start is the port number, and the second is the number of worker processes. After the server starts, N worker processes wait in accept at the same time. I am not sure whether a "thundering herd" problem arises here; I assume that Erlang's internal mechanisms properly handle multiple processes waiting on one socket at the same time. The basis for this assumption is that this server code comes from the official Erlang documentation, whose sample code uses the same pattern.
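This accept pattern is not Erlang-specific. As an illustration (my sketch, not the author's code), the same idea can be written in Python with threads: several workers all block in accept() on one shared listening socket, and the kernel hands each incoming connection to exactly one waiting worker:

```python
import socket
import threading

def start_server(num_workers):
    """Start num_workers threads that all block in accept() on one socket."""
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.bind(("127.0.0.1", 0))       # ephemeral port, loopback only
    lsock.listen(num_workers)

    def worker():
        while True:
            csock, _ = lsock.accept()  # every worker waits here at once
            with csock:
                data = csock.recv(1024)
                csock.sendall(data)    # echo; stands in for process/1

    for _ in range(num_workers):
        threading.Thread(target=worker, daemon=True).start()
    return lsock.getsockname()[1]

port = start_server(4)
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"ping")
    assert c.recv(1024) == b"ping"
```

As with the Erlang version, whether the kernel wakes one waiter or all of them is left to the platform; on Linux, a blocking accept() wakes only one.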

When a worker process accepts a connection from a client, it stays in the loop function processing client requests until the client disconnects.

In fact, you could also spawn a new process to run the loop function and respond to the client, while the worker process immediately returns to the accept state. With only one line of code changed, the server's working mode changes completely. Erlang is amazing.

I also wrote two functions for testing mycached itself. One, called test, executes a single request and prints the returned result; it is mainly for functional testing. The other, called banch, executes batches of requests and prints the elapsed time; it is mainly for stress testing.

Example of calling the test function:

mycached:test("localhost", 10086, 1, "Hello").

The first parameter of the test function is the server address, the second parameter is the server port number, the third parameter is the request type, and the fourth parameter is the request parameter.

Example of calling the banch function:

mycached:banch("localhost", 10086, 1, "Hello", 10, 1000).

The first four parameters of banch are the same as those of test, with two more added: the number of connections and the number of requests per connection. The sample call above opens 10 connections, each issuing 1,000 requests, for a total of 10,000 requests.

The complete code of Erlang is as follows:

-module(mycached).
-export([start/2, server/1, loop/1, test/4, for/3, banch/6, banch_call/6]).

-define(CMD_GET, 1).
-define(CMD_SET, 2).
-define(CMD_DEL, 3).

start(LPort, Num) ->
    case gen_tcp:listen(LPort, [binary, {active, false}, {packet, 2}]) of
        {ok, LSock} ->
            start_servers(LSock, Num),
            {ok, Port} = inet:port(LSock),
            Port;
        {error, Reason} ->
            {error, Reason}
    end.

start_servers(_, 0) ->
    ok;
start_servers(LSock, Num) ->
    spawn(?MODULE, server, [LSock]),
    start_servers(LSock, Num - 1).

server(LSock) ->
    case gen_tcp:accept(LSock) of
        {ok, CSock} ->
            loop(CSock),
            server(LSock);
        Other ->
            io:format("accept returned ~w - goodbye!~n", [Other]),
            ok
    end.

loop(CSock) ->
    inet:setopts(CSock, [{active, once}]),
    receive
        {tcp, CSock, Request} ->
            Response = process(Request),
            Response_Bin = list_to_binary(Response),
            gen_tcp:send(CSock, Response_Bin),
            loop(CSock);
        {tcp_closed, CSock} ->
            io:format("socket ~w closed [~w]~n", [CSock, self()]),
            ok
    end.

process(Request) ->
    try
        %% First byte is the command type; the rest are the parameters.
        %% (The binary pattern here was garbled in this copy of the article
        %% and has been reconstructed from the surrounding code.)
        {<<Type>>, Params} = split_binary(Request, 1),
        case Type of
            ?CMD_GET ->
                "Command: GET";
            ?CMD_SET ->
                "Command: SET";
            ?CMD_DEL ->
                "Command: DEL";
            _ ->
                "Unknown Command"
        end
    catch
        _:E ->
            io:format("process failed: ~w [~w]~n", [E, self()]),
            "Server Error"
    end.

test(Host, Port, Command, Params) ->
    test_call(Host, Port, Command, Params, 1).

banch(Host, Port, Command, Params, Times, RTimes) ->
    {M, _} = timer:tc(?MODULE, banch_call, [Host, Port, Command, Params, Times, RTimes]),
    io:format("Time: ~p micro seconds~n", [M]),
    ok.

banch_call(Host, Port, Command, Params, Times, RTimes) ->
    for(0, Times,
        fun() ->
            test_call(Host, Port, Command, Params, RTimes)
        end),
    ok.

test_call(Host, Port, Command, Params, Times) ->
    {ok, Sock} = gen_tcp:connect(Host, Port, [binary, {active, false}, {packet, 2}]),
    Request = [Command, Params],
    Request_Bin = list_to_binary(Request),
    case Times of
        1 ->
            {ok, Bin} = test_send(Sock, Request_Bin),
            ok = gen_tcp:close(Sock),
            Bin;
        _ ->
            for(0, Times,
                fun() ->
                    {ok, _} = test_send(Sock, Request_Bin)
                end),
            ok = gen_tcp:close(Sock),
            ok
    end.

test_send(Sock, Request_Bin) ->
    ok = gen_tcp:send(Sock, Request_Bin),
    gen_tcp:recv(Sock, 0).

for(To, To, _) ->
    ok;
for(From, To, Callback) ->
    Callback(),
    for(From + 1, To, Callback).

The biggest difficulty on the PHP side is packing requests and unpacking responses. This work centers on PHP's built-in pack and unpack functions.

Mycached's communication is based on Erlang's {packet, 2} mode: Erlang automatically treats the first two bytes of each packet as the length of the payload, so on the Erlang side you do not need to implement packet framing yourself and can focus on parsing business data. PHP is not so lucky. When sending a request you must prepend a two-byte length header, and when receiving the server's response you must first read the two-byte length header and then read that many bytes of payload.
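The framing is simple to state. Here is a small Python sketch (my addition, not part of the original client) of the same wire format: a two-byte big-endian length header, then one command byte, then the payload. struct.pack(">H", ...) plays the role of the hexadecimal header the PHP code builds with pack("H*", ...):

```python
import struct

CMD_GET = 1  # matches the ?CMD_GET macro on the Erlang side

def pack_request(command, data):
    """Frame a request for Erlang's {packet, 2} mode:
    2-byte big-endian length header + command byte + payload."""
    body = bytes([command]) + data
    return struct.pack(">H", len(body)) + body

def unpack_frame(stream):
    """Extract one length-prefixed frame from the start of a byte string."""
    (length,) = struct.unpack(">H", stream[:2])
    return stream[2:2 + length]

frame = pack_request(CMD_GET, b"Hello")
assert frame == b"\x00\x06\x01Hello"   # header 0x0006, then 6 body bytes
assert unpack_frame(frame) == b"\x01Hello"
```

Reading a response works the same way in reverse: read exactly two bytes, decode them as the length, then keep reading until that many payload bytes have arrived, which is what the PHP socket_read_len helper below does.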

Complete PHP client code:

<?php
// Note: the opening lines of this class were lost when the article was
// extracted; the class name and field declarations below are reconstructed.
class MyCached
{
    private $host;
    private $port;
    private $sock;

    public function __construct($host, $port)
    {
        $this->host = $host;
        $this->port = $port;
        $this->sock = @socket_create(AF_INET, SOCK_STREAM, getprotobyname('tcp'));
        if ($this->sock)
        {
            socket_connect($this->sock, $this->host, $this->port);
        }
    }

    public function set($key, $value)
    {
    }

    public function get($key)
    {
        $msg = $this->pack_data(1, $key);
        $sent = @socket_write($this->sock, $msg, strlen($msg));
        if ($sent === FALSE)
        {
            return null;
        }
        $buff = $this->socket_read_len($this->sock, 2, PHP_BINARY_READ);
        $head = unpack("H*", $buff);
        $len = hexdec($head[1]);
        $res = $this->socket_read_len($this->sock, $len, PHP_BINARY_READ);
        return $res;
    }

    public function remove($key)
    {
    }

    public function remove_by_search($key)
    {
    }

    private function pack_data($type, $data)
    {
        $cmd = pack("C*", $type);
        $cmd_len = strlen($cmd);
        $body = pack("A*", $data);
        $body_len = strlen($body);
        $len = $cmd_len + $body_len;
        $head = pack("H*", $this->to_hex_str($len));
        return $head.$cmd.$body;
    }

    private function to_hex_str($num)
    {
        $str = dechex($num);
        $str = str_repeat('0', 4 - strlen($str)).$str;
        return $str;
    }

    private function socket_read_len($socket, $len, $type)
    {
        $offset = 0;
        $socketData = '';
        while ($offset < $len)
        {
            if (($data = @socket_read($socket, $len - $offset, $type)) === false)
            {
                return false;
            }
            $dataLen = strlen($data);
            $offset += $dataLen;
            $socketData .= $data;
            if ($dataLen == 0) { break; }
        }
        return $socketData;
    }
}
?>

The following is the PHP test code:

<?php
// The opening lines were lost in extraction; the include path, client object,
// and loop header below are reconstructed to match the surviving closing
// lines and the 10,000-request figure quoted later in the article.
require 'mycached.php';

$mc = new MyCached("localhost", 10086);

$stime = microtime(true);

for ($i = 0; $i < 10000; $i++)
{
    $mc->get("hello");
}

$etime = microtime(true);

echo "Time: " . ($etime - $stime) . "\n";
?>

Okay, the communication-related code has all been introduced. Now we can talk about the test results.

The test results show that APC reads and writes about five times faster than memcached, but APC cannot share data between processes, so it is not what I am looking for. Memcached and PHP reading/writing /dev/shm perform similarly, with memcached slightly ahead, but /dev/shm has one great advantage: it makes hierarchical management of the cache easy.
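To make the hierarchical-management point concrete, here is a hypothetical Python sketch (not code from this experiment): keys map to nested file paths, so a whole namespace can be dropped by removing one directory. On Linux the base directory would typically sit under /dev/shm; a temporary directory is used here so the sketch runs anywhere:

```python
import os
import tempfile

# Base directory for the cache; on Linux this would be e.g. "/dev/shm/cache".
BASE = tempfile.mkdtemp()

def key_to_path(key):
    # "user/42/profile" -> BASE/user/42/profile
    return os.path.join(BASE, *key.split("/"))

def cache_set(key, value):
    path = key_to_path(key)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(value)

def cache_get(key):
    try:
        with open(key_to_path(key), "rb") as f:
            return f.read()
    except FileNotFoundError:
        return None

cache_set("user/42/profile", b"alice")
assert cache_get("user/42/profile") == b"alice"
assert cache_get("user/42/missing") is None
```

Invalidating everything under a prefix is then a single shutil.rmtree on the corresponding directory, which is the kind of hierarchical operation memcached's flat key space cannot express directly.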

Once I had Erlang and PHP communicating, I decided to test the raw communication performance first. If the communication performance was already unacceptable without any cache-operation logic, there would be no point in fully implementing the cache server.

The test result was really unexpected: the communication performance was far worse than I had imagined. The response time for 10,000 requests fluctuated between 1 s and 2 s (roughly 100-200 microseconds per request), while 10,000 memcached writes take only about 0.3 s, and reads are a little faster still. This is why the code shared above does not actually implement any caching operations.

Maybe I overestimated Erlang's performance beforehand, or maybe my code is not well optimized. In any case, Erlang's development efficiency is worth noting. During the experiment the socket server evolved smoothly: from nothing to a working server, from working to supporting concurrency, from concurrency to supporting testing, and from testing to supporting stress testing.

You may have a question: did I reach the wrong conclusion because the PHP end is too inefficient? Rest assured: I have provided the test code for both the PHP and Erlang ends, and you can run the tests yourself. In fact, the performance overhead on the PHP end is not as large as you might expect.
