gRPC Service Discovery & Load Balancing


Building high-availability, high-performance communication services typically relies on mechanisms such as service registration and discovery, load balancing, and fault tolerance. Depending on where the load balancer is implemented, there are usually three solutions:

1. Centralized LB (Proxy Model)

A standalone LB sits between service consumers and service providers, typically a dedicated hardware device such as F5, or a software implementation such as LVS or HAProxy. The LB holds an address mapping table for all services, usually registered through operations configuration. When a service consumer invokes a target service, it sends the request to the LB, which forwards it to a target instance according to some policy, such as round-robin. The LB generally performs health checks and can automatically remove unhealthy service instances. The main issues with this scheme:

    1. Single point of failure: all service-call traffic passes through the LB. As the number of services and calls grows, the LB easily becomes a bottleneck, and an LB failure affects the entire system.

    2. An extra hop is added between consumer and provider, incurring some performance overhead.
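The round-robin forwarding policy mentioned above can be sketched in a few lines of Go. This is a minimal illustration (the `roundRobin` type and the addresses are invented for the example), not the code of any particular LB product:

```go
package main

import (
	"fmt"
	"sync"
)

// roundRobin hands out backend addresses in rotation, the same
// policy a centralized LB applies when forwarding requests.
type roundRobin struct {
	mu    sync.Mutex
	addrs []string
	next  int
}

// Pick returns the next backend address in rotation.
func (r *roundRobin) Pick() string {
	r.mu.Lock()
	defer r.mu.Unlock()
	addr := r.addrs[r.next]
	r.next = (r.next + 1) % len(r.addrs)
	return addr
}

func main() {
	lb := &roundRobin{addrs: []string{"10.0.0.1:50051", "10.0.0.2:50051", "10.0.0.3:50051"}}
	for i := 0; i < 4; i++ {
		fmt.Println(lb.Pick()) // cycles through the three backends, then wraps around
	}
}
```

A health-checking LB would additionally remove an address from `addrs` when its instance stops responding, as described above.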

2. In-process LB (Balancing-aware Client)

To address the shortcomings of the first scheme, this scheme integrates the LB functionality into the service consumer's process, also known as soft load balancing or client-side load balancing. When a service provider starts, it registers its address with the service registry and periodically reports a heartbeat to indicate it is alive, which serves as a health check. When a service consumer accesses a service, its built-in LB component queries the registry, caches and periodically refreshes the target address list, selects an address according to a load-balancing policy, and then calls the target service directly. Because LB and service-discovery capabilities are distributed within each consumer's process and consumers call providers directly, there is no extra hop and performance is better. The main issues with this scheme:

    1. Development cost: the scheme integrates LB logic into the caller's process, so if many different language stacks are in use, a client library must be developed for each, which carries development and maintenance costs.

    2. In production, upgrading the client library requires service callers to modify code and re-release, making upgrades more complex.

3. Independent LB Process (External Load Balancing Service)

This scheme is a compromise proposed to address the shortcomings of the second scheme, and is basically similar to it.
The difference is that the LB and service-discovery functions are moved out of the consumer's process into a separate process on the same host. When one or more services on the host access a target service, they perform service discovery and load balancing through this independent LB process. This is also a distributed scheme with no single point of failure: if an LB process dies, only the service callers on that host are affected, and since caller and LB communicate within the same host, performance remains good. The scheme also simplifies service callers: no client library needs to be developed per language, and LB upgrades require no code changes from callers.
The main problem with this scheme: deployment is more complex, there are more moving parts, and debugging and troubleshooting are less convenient.

gRPC service discovery and load balancing implementation

The open-source gRPC components do not provide service registration and discovery directly, but its design documents describe how to implement it, and the gRPC APIs in the various languages expose name-resolution and load-balancer interfaces for extension.

The basic principle of its implementation:

    1. At startup, the gRPC client issues a name-resolution request to the name server. The name resolves to one or more IP addresses, each marked as either a server address or a load-balancer address, together with a service config indicating which client load-balancing policy to use.

    2. The client instantiates the load-balancing policy. If any address returned by resolution is a load-balancer address, the client uses the grpclb policy; otherwise it uses the policy specified by the service config.

    3. The load-balancing policy creates one sub-channel for each server address.

    4. For each RPC request, the load-balancing policy decides which sub-channel, i.e. which gRPC server, receives the request; when no server is available, the client's request blocks.
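Step 4's blocking behavior can be illustrated with a toy picker whose `Pick` blocks while the sub-channel set is empty and resumes once an address appears. This is a stdlib sketch of the idea (the `picker` type is invented for illustration), not gRPC's actual implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// picker holds the ready sub-channel addresses; Pick blocks while the set is
// empty, mirroring how a gRPC request waits when no server is available.
type picker struct {
	mu    sync.Mutex
	cond  *sync.Cond
	addrs []string
	next  int
}

func newPicker() *picker {
	p := &picker{}
	p.cond = sync.NewCond(&p.mu)
	return p
}

// Add registers a new sub-channel address and wakes any blocked Picks.
func (p *picker) Add(addr string) {
	p.mu.Lock()
	p.addrs = append(p.addrs, addr)
	p.mu.Unlock()
	p.cond.Broadcast()
}

// Pick returns the next address round-robin, blocking until one exists.
func (p *picker) Pick() string {
	p.mu.Lock()
	defer p.mu.Unlock()
	for len(p.addrs) == 0 {
		p.cond.Wait() // block until a server address becomes available
	}
	addr := p.addrs[p.next%len(p.addrs)]
	p.next++
	return addr
}

func main() {
	p := newPicker()
	done := make(chan string)
	go func() { done <- p.Pick() }() // blocks: no servers yet
	p.Add("127.0.0.1:50001")         // a server comes up
	fmt.Println(<-done)              // prints 127.0.0.1:50001
}
```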

Following the design ideas provided by gRPC and based on the in-process LB scheme (the second case above; Alibaba's open-source service framework Dubbo adopts a similar mechanism), combined with a distributed, consistent component (such as ZooKeeper, Consul, or etcd), we can build a workable solution for gRPC service discovery and load balancing. Taking Go as an example, here is a brief walkthrough of the key code based on etcd3:

1) Name resolution implementation: resolver.go

package etcdv3

import (
	"errors"
	"fmt"
	"strings"

	etcd3 "github.com/coreos/etcd/clientv3"
	"google.golang.org/grpc/naming"
)

// resolver is the implementation of grpc.naming.Resolver
type resolver struct {
	serviceName string // service name to resolve
}

// NewResolver returns a resolver with the given service name
func NewResolver(serviceName string) *resolver {
	return &resolver{serviceName: serviceName}
}

// Resolve resolves the service from etcd; target is the dial address of etcd.
// target example: "http://127.0.0.1:2379,http://127.0.0.1:12379,http://127.0.0.1:22379"
func (re *resolver) Resolve(target string) (naming.Watcher, error) {
	if re.serviceName == "" {
		return nil, errors.New("grpclb: no service name provided")
	}

	// generate etcd client
	client, err := etcd3.New(etcd3.Config{
		Endpoints: strings.Split(target, ","),
	})
	if err != nil {
		return nil, fmt.Errorf("grpclb: create etcd3 client failed: %s", err.Error())
	}

	// return watcher
	return &watcher{re: re, client: *client}, nil
}

2) Service discovery implementation: watcher.go

package etcdv3

import (
	"fmt"

	etcd3 "github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/mvcc/mvccpb"
	"golang.org/x/net/context"
	"google.golang.org/grpc/naming"
)

// watcher is the implementation of grpc.naming.Watcher
type watcher struct {
	re            *resolver // re: etcd resolver
	client        etcd3.Client
	isInitialized bool
}

// Close does nothing
func (w *watcher) Close() {
}

// Next returns the updates
func (w *watcher) Next() ([]*naming.Update, error) {
	// prefix is the etcd prefix/value to watch
	prefix := fmt.Sprintf("/%s/%s/", Prefix, w.re.serviceName)

	// check if is initialized
	if !w.isInitialized {
		// query addresses from etcd
		resp, err := w.client.Get(context.Background(), prefix, etcd3.WithPrefix())
		w.isInitialized = true
		if err == nil {
			addrs := extractAddrs(resp)
			// if not empty, return the updates; otherwise fall through to watch
			if l := len(addrs); l != 0 {
				updates := make([]*naming.Update, l)
				for i := range addrs {
					updates[i] = &naming.Update{Op: naming.Add, Addr: addrs[i]}
				}
				return updates, nil
			}
		}
	}

	// generate etcd watcher
	rch := w.client.Watch(context.Background(), prefix, etcd3.WithPrefix())
	for wresp := range rch {
		for _, ev := range wresp.Events {
			switch ev.Type {
			case mvccpb.PUT:
				return []*naming.Update{{Op: naming.Add, Addr: string(ev.Kv.Value)}}, nil
			case mvccpb.DELETE:
				return []*naming.Update{{Op: naming.Delete, Addr: string(ev.Kv.Value)}}, nil
			}
		}
	}
	return nil, nil
}

func extractAddrs(resp *etcd3.GetResponse) []string {
	addrs := []string{}
	if resp == nil || resp.Kvs == nil {
		return addrs
	}
	for i := range resp.Kvs {
		if v := resp.Kvs[i].Value; v != nil {
			addrs = append(addrs, string(v))
		}
	}
	return addrs
}

3) Service registration implementation: register.go

package etcdv3

import (
	"fmt"
	"log"
	"strings"
	"time"

	etcd3 "github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/etcdserver/api/v3rpc/rpctypes"
	"golang.org/x/net/context"
)

// Prefix should start and end with no slash
var Prefix = "etcd3_naming"
var client *etcd3.Client
var serviceKey string

var stopSignal = make(chan bool, 1)

// Register registers the service with etcd and keeps it alive with a TTL lease
func Register(name string, host string, port int, target string, interval time.Duration, ttl int) error {
	serviceValue := fmt.Sprintf("%s:%d", host, port)
	serviceKey = fmt.Sprintf("/%s/%s/%s", Prefix, name, serviceValue)

	// get endpoints for register dial address
	var err error
	client, err = etcd3.New(etcd3.Config{
		Endpoints: strings.Split(target, ","),
	})
	if err != nil {
		return fmt.Errorf("grpclb: create etcd3 client failed: %v", err)
	}

	go func() {
		// invoke self-register with ticker
		ticker := time.NewTicker(interval)
		for {
			// minimum lease TTL is ttl seconds
			resp, _ := client.Grant(context.TODO(), int64(ttl))
			// should get first; if the key does not exist, set it
			_, err := client.Get(context.Background(), serviceKey)
			if err != nil {
				if err == rpctypes.ErrKeyNotFound {
					if _, err := client.Put(context.TODO(), serviceKey, serviceValue, etcd3.WithLease(resp.ID)); err != nil {
						log.Printf("grpclb: set service '%s' with TTL to etcd3 failed: %s", name, err.Error())
					}
				} else {
					log.Printf("grpclb: service '%s' connect to etcd3 failed: %s", name, err.Error())
				}
			} else {
				// refresh the key to notify the watcher
				if _, err := client.Put(context.Background(), serviceKey, serviceValue, etcd3.WithLease(resp.ID)); err != nil {
					log.Printf("grpclb: refresh service '%s' with TTL to etcd3 failed: %s", name, err.Error())
				}
			}
			select {
			case <-stopSignal:
				return
			case <-ticker.C:
			}
		}
	}()

	return nil
}

// UnRegister deletes the registered service from etcd
func UnRegister() error {
	stopSignal <- true
	stopSignal = make(chan bool, 1) // just a hack to avoid multi UnRegister deadlock
	var err error
	if _, err = client.Delete(context.Background(), serviceKey); err != nil {
		log.Printf("grpclb: deregister '%s' failed: %s", serviceKey, err.Error())
	} else {
		log.Printf("grpclb: deregister '%s' ok.", serviceKey)
	}
	return err
}

4) Interface description file: helloworld.proto

syntax = "proto3";

option java_multiple_files = true;
option java_package = "com.midea.jr.test.grpc";
option java_outer_classname = "HelloWorldProto";
option objc_class_prefix = "HLW";

package helloworld;

// The greeting service definition.
service Greeter {
    // Sends a greeting
    rpc SayHello (HelloRequest) returns (HelloReply) {
    }
}

// The request message containing the user's name.
message HelloRequest {
    string name = 1;
}

// The response message containing the greetings
message HelloReply {
    string message = 1;
}

5) Implement the server-side interface: helloworldserver.go

package main

import (
	"flag"
	"fmt"
	"log"
	"net"
	"os"
	"os/signal"
	"syscall"
	"time"

	"golang.org/x/net/context"
	"google.golang.org/grpc"

	pb "com.midea/jr/grpclb/example/pb"
	grpclb "com.midea/jr/grpclb/naming/etcd/v3"
)

var (
	serv = flag.String("service", "hello_service", "service name")
	port = flag.Int("port", 50001, "listening port")
	reg  = flag.String("reg", "http://127.0.0.1:2379", "register etcd address")
)

func main() {
	flag.Parse()

	lis, err := net.Listen("tcp", fmt.Sprintf("0.0.0.0:%d", *port))
	if err != nil {
		panic(err)
	}

	err = grpclb.Register(*serv, "127.0.0.1", *port, *reg, time.Second*10, 15)
	if err != nil {
		panic(err)
	}

	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGTERM, syscall.SIGINT, syscall.SIGKILL, syscall.SIGHUP, syscall.SIGQUIT)
	go func() {
		s := <-ch
		log.Printf("receive signal '%v'", s)
		grpclb.UnRegister()
		os.Exit(1)
	}()

	log.Printf("starting hello service at %d", *port)
	s := grpc.NewServer()
	pb.RegisterGreeterServer(s, &server{})
	s.Serve(lis)
}

// server is used to implement helloworld.GreeterServer.
type server struct{}

// SayHello implements helloworld.GreeterServer
func (s *server) SayHello(ctx context.Context, in *pb.HelloRequest) (*pb.HelloReply, error) {
	fmt.Printf("%v: receive is %s\n", time.Now(), in.Name)
	return &pb.HelloReply{Message: "Hello " + in.Name}, nil
}

6) Implement the client interface: helloworldclient.go

package main

import (
	"flag"
	"fmt"
	"strconv"
	"time"

	"golang.org/x/net/context"
	"google.golang.org/grpc"

	pb "com.midea/jr/grpclb/example/pb"
	grpclb "com.midea/jr/grpclb/naming/etcd/v3"
)

var (
	serv = flag.String("service", "hello_service", "service name")
	reg  = flag.String("reg", "http://127.0.0.1:2379", "register etcd address")
)

func main() {
	flag.Parse()

	r := grpclb.NewResolver(*serv)
	b := grpc.RoundRobin(r)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	conn, err := grpc.DialContext(ctx, *reg, grpc.WithInsecure(), grpc.WithBalancer(b))
	if err != nil {
		panic(err)
	}

	ticker := time.NewTicker(1 * time.Second)
	for t := range ticker.C {
		client := pb.NewGreeterClient(conn)
		resp, err := client.SayHello(context.Background(), &pb.HelloRequest{Name: "world " + strconv.Itoa(t.Second())})
		if err == nil {
			fmt.Printf("%v: reply is %s\n", t, resp.Message)
		}
	}
}

7) Run the test

    1. Run three servers S1, S2, and S3 and one client C, and observe whether each server receives an equal number of requests.

    2. Shut down server S1 and check whether its requests are transferred to the other two servers.

    3. Restart S1 and check whether requests are again distributed evenly across all three servers.

    4. Shut down the etcd3 server and check whether the client still communicates with the servers.
       Communication remains normal, but new servers can no longer register and servers that go offline cannot be removed.

    5. Restart the etcd3 server; servers that went offline are automatically recovered.

    6. Shut down all servers; client requests block.

Reference:

http://www.grpc.io/docs/
https://github.com/grpc/grpc/blob/master/doc/load-balancing.md