Talking about the TCP, HTTP, and WS protocols of the Surging engine and how to deploy it in containers


1. Preface

Distributed systems have become one of the hottest topics in the industry, and distributed frameworks are crowded together: from centralized service governance frameworks, to decentralized distributed service frameworks, to distributed microservice engines, each step is the result of continuous technical accumulation and improvement. Centralized service governance architectures such as ESB, Gateway, and Nginx gateways are still the mainstream structure in most companies, while the microservice frameworks that have become popular in recent years each have their representative works; in .NET there are Orleans and Akka.NET, and you can learn a thing or two about them from the Internet. But do these frameworks really meet a company's needs, and can you build an entire platform on top of them?

It can be said that the next-generation framework should be called a distributed microservice engine, also known as a service mesh. It should be an infrastructure engine that carries and drives the business module services, is responsible for reliable delivery between services, and provides the required network protocols. The Surging service engine is moving toward this idea: internal calls go through RPC, it comes with a complete set of service governance rules, it supports the TCP, HTTP, and WS protocols, and it can be deployed in containers with a customizable engine. Let's see how this is done.

2. Service Engine

The service engine is a dedicated piece of infrastructure for handling reliable communication between services. Services should be deployed independently rather than hosted inside other frameworks; because of this independence, the business team no longer needs to worry about the complexity of service governance and can hand it over to the service engine. For each service instance, the service engine deploys a service process on the same host in a one-to-one manner and takes over all of its external network traffic. With good framework encapsulation, operation and maintenance costs can be kept under control.

2.1 History of evolution

The evolution of Surging from scratch can be divided into three stages:

The first stage: an RPC service governance framework, where communication between services goes through proxies created from interfaces.

The second stage: an RPC service governance framework plus a gateway; services communicate with each other through interface-created proxies or RoutePath access, and external calls go through the gateway.

The third stage: a service engine; services no longer care about communication details and protocols, everything is handed to the engine, and developers only need to focus on implementing the business.

2.2 Architecture

Surging currently provides three communication protocols: TCP, HTTP, and WS. The TCP and HTTP protocols are based on DotNetty, and WS is based on WebSocketCore, a branched version of websocket-sharp that supports .NET Core.

The architecture of the engine is shown in the diagram: through its external-facing network protocols it can connect mobile, web, and IoT applications, and through service discovery it makes RPC remote calls to the internal business services.

3. How to develop a protocol-based business module

3.1 Service interface based on the HTTP and TCP protocols

The interface inherits IServiceKey, and every service interface needs to be marked with the [ServiceBundle("Api/{Service}")] attribute. The code is as follows:

    [Servicebundle ("Api/{service}")]    Public interface Imanagerservice:iservicekey    {        [Command (strategy = strategytype.injection, Shuntstrategy = Addressselectormode.hashalgorithm, executiontimeoutinmilliseconds = 2500, Breakerrequestvolumethreshold = 3, Injection = @ "return 1;", requestcacheenabled = False)]        task<string> SayHello (string name);    

3.2 Service interface based on the WS protocol

The interface inherits IServiceKey and likewise needs to be marked with [ServiceBundle("Api/{Service}")]. For remote calls between WS services and other services, the load shunt strategy needs to be set to the hash algorithm. The code is as follows:

  [Servicebundle ("Api/{service}")]    Public  interface Ichatservice:iservicekey    {        [Command (shuntstrategy= Addressselectormode.hashalgorithm)]        Task SendMessage (string name,string data);    }

3.3 Service implementation based on the HTTP and TCP protocols

The implementation inherits ProxyServiceBase and the business interface IManagerService:

    public class ManagerService : ProxyServiceBase, IManagerService
    {
        public Task<string> SayHello(string name)
        {
            return Task.FromResult($"{name} say: hello");
        }
    }

3.4 Service implementation based on the WS protocol

The implementation inherits WSServiceBase and the business interface IChatService. Note: calls between WS services can only be made as remote calls through a RoutePath; proxy remote calls through interfaces are not supported.

    public class ChatService : WSServiceBase, IChatService
    {
        private static readonly ConcurrentDictionary<string, string> _users = new ConcurrentDictionary<string, string>();
        private static readonly ConcurrentDictionary<string, string> _clients = new ConcurrentDictionary<string, string>();
        private string _name;

        protected override void OnMessage(MessageEventArgs e)
        {
            if (_clients.ContainsKey(ID))
            {
                // Forward the received text to the chat service through a RoutePath remote call.
                Dictionary<string, object> model = new Dictionary<string, object>();
                model.Add("name", _clients[ID]);
                model.Add("data", e.Data);
                var result = ServiceLocator.GetService<IServiceProxyProvider>()
                    .Invoke<object>(model, "Api/Chat/SendMessage").Result;
            }
        }

        protected override void OnOpen()
        {
            // Register the connection under the user name passed in the query string.
            _name = Context.QueryString["name"];
            if (!string.IsNullOrEmpty(_name))
            {
                _clients[ID] = _name;
                _users[_name] = ID;
            }
        }

        public Task SendMessage(string name, string data)
        {
            if (_users.ContainsKey(name))
            {
                this.GetClient().SendTo($"Hello,{name},{data}", _users[name]);
            }
            return Task.CompletedTask;
        }
    }

3.5 The built-in hash shunt locating interface

By invoking the internally provided hash shunt locating interface, requests that carry the same parameter key are routed to the same service provider, as the sketch below shows.
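The following is a minimal sketch, not part of the original article, that reuses the RoutePath call style already shown in ChatService. The namespaces in the using directives and the wrapper class HashShuntSample are assumptions added only to make the sample self-contained; the "name" field is assumed to act as the hash key because SendMessage declares ShuntStrategy = AddressSelectorMode.HashAlgorithm.

    // Sketch only: repeated calls carrying the same key should land on the same provider
    // when the target command uses ShuntStrategy = AddressSelectorMode.HashAlgorithm.
    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Surging.Core.CPlatform.Utilities;   // ServiceLocator (assumed namespace)
    using Surging.Core.ProxyGenerator;        // IServiceProxyProvider (assumed namespace)

    public class HashShuntSample
    {
        public static async Task CallAsync()
        {
            var proxyProvider = ServiceLocator.GetService<IServiceProxyProvider>();
            var model = new Dictionary<string, object>
            {
                ["name"] = "fanly",   // the hash key: the same value is routed to the same instance
                ["data"] = "hello"
            };
            // Two calls with the same "name" hash to the same service provider address.
            await proxyProvider.Invoke<object>(model, "Api/Chat/SendMessage");
            await proxyProvider.Invoke<object>(model, "Api/Chat/SendMessage");
        }
    }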

3.6 Testing based on the WS protocol
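A simple way to exercise the WS service is with a plain WebSocket client. The sketch below uses .NET's ClientWebSocket; the host 127.0.0.1, port 96 (the WSPort setting), and the "Api/Chat" route are assumptions for illustration, while the query-string name and the echoed "Hello,{name},{data}" reply follow the ChatService implementation above.

    // A minimal WS test sketch (assumptions: the engine's WS host listens on
    // 127.0.0.1:96 and the chat service is routed at "Api/Chat").
    using System;
    using System.Net.WebSockets;
    using System.Text;
    using System.Threading;
    using System.Threading.Tasks;

    public class WsTestClient
    {
        public static async Task Main()
        {
            using var client = new ClientWebSocket();
            // ChatService.OnOpen reads the user name from the query string.
            await client.ConnectAsync(new Uri("ws://127.0.0.1:96/Api/Chat?name=fanly"), CancellationToken.None);

            // ChatService.OnMessage forwards the text through the RPC route "Api/Chat/SendMessage".
            var payload = Encoding.UTF8.GetBytes("hello surging");
            await client.SendAsync(new ArraySegment<byte>(payload), WebSocketMessageType.Text, true, CancellationToken.None);

            // SendMessage pushes back "Hello,{name},{data}" to the connection registered under that name.
            var buffer = new byte[1024];
            var result = await client.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
            Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, result.Count));
        }
    }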

4. Containerized deployment

Pull the Surging engine image via Docker; the current version is v0.8.0.2:

    docker pull serviceengine/surging:v0.8.0.2

Start the Surging engine:

    docker run --name surging --env mapping_ip=192.168.249.242 --env mapping_port=93 --env rootpath=/home/fanly --env register_conn=192.168.249.162:8500 --env eventbusconnection=192.168.249.162 --env Surging_Server_IP=0.0.0.0 --env Surging_Server_Port=93 -v /home/fanly:/home/fanly -it -p 93:93 surging


Environment variables

Protocol: can be set to Http, Tcp, Ws, or None. Http, Tcp, and Ws mean that only the corresponding protocol is supported; None means that all protocols are supported.

RootPath: the root directory where business modules are stored, for example /home/fanly

HttpPort: the host port for the HTTP protocol

WSPort: the host port for the WS protocol

UseEngineParts: sets the enabled service engine components. The default is DotNettyModule;NLogModule;MessagePackModule;ConsulModule;HttpProtocolModule;EventBusRabbitMQModule;WSProtocolModule; (Note: if the engine is customized through NuGet packages, this does not need to be configured and can be deleted; just download the required engine components and they are automatically assembled and registered into the service engine.)

IP: the IP inside the container, usually set to 0.0.0.0

Server_Port: the port inside the container

Mapping_IP: the host IP exposed to the outside

Mapping_Port: the host port exposed to the outside

5. Summary

Surging has been under development for more than a year. From originally supporting only RPC remote service access, it has grown into a service engine that supports containerized deployment and the TCP, HTTP, and WS protocols. Watching this evolution has been very interesting; many inspirations only flash by during development. I hope that better design ideas can be brought into Surging in the future, and that Surging keeps getting stronger.

