Everything has a reason.
I'm switching to a larger font this time, because there is a lot of messy ground to cover. While consulting and helping friends solve problems in their own companies, I kept finding that the problem was in the interfaces. Most of the time, Java SOA governance goes with transparent proxies (such as Dubbo, the SOA governance framework), but that does not look like a good way to handle cross-language scenarios.
Design of the interface
From my point of view, interfaces can be analyzed along several dimensions:
Large-scale systems versus small-scale systems
Interfaces between internal systems versus interfaces exposed to external systems
Interfaces that move large volumes of data versus small volumes
Long-lived (persistent-connection) interfaces versus short-lived (per-request-connection) interfaces
Most of the time, our first questions are how big the system is, how well it needs to scale, and how much of it faces inward versus outward. These are rarely settled conclusions; more often they are decided by business opportunity, team composition, and judgment calls.
The topics that follow are mostly about the design and implementation of both internal and external interfaces.
Implementation of the interface
Things to do
Whether the interface is internal or external, we have to do the following things:
Define the interface's functionality clearly, and note where it duplicates existing functions
Provide an upgrade mechanism so new versions stay compatible with previous data
Estimate the interface's data volume and decide whether a transport compression mechanism is needed
These are the things we have to think about when we start designing and implementing interfaces. But do not assume that once we have thought of them we can rest easy and everything will go as expected. Most of the time the interface ends up like an amoeba: not a neat circle but an irregular polygon.
Internal interface
The internal interface is, simply put, SOA, but SOA itself covers a number of practices, like the Dubbo framework I mentioned earlier. With Dubbo, what we do can be described as business development inside the Dubbo framework: we define an interface and then expose it. It may not feel like interface design at that moment, but in fact we are defining the interface entirely according to Dubbo's conventions, and yes, that is the interface. It seems like internal interfaces are perfectly clear and there is nothing to say, but there is actually a lot. Let me start with the common choices.
Choosing a service discovery scheme
Proactive push: every time the registry changes, it pushes the latest provider list to the consumers of the service
Passive pull: the registry stores every change, and the consumer queries the registry each time it invokes the service
Let me go through these two options one at a time.
The first scheme, proactive push, lets consumers update provider information quickly; calling a service only requires a lookup in a local hash table, and even if the registry goes down, consumers can still invoke the service. Looks good. Now the drawbacks. First, you have to implement a watch-notify mechanism. Some people will say, isn't there ZooKeeper for that? It has the watch mechanism and data redundancy. To which I would ask: when business volume grows, can ZooKeeper's watch mechanism really take the load? Second, server-side load balancing is hard to handle well. Is there a workaround? Have a look at how Dubbo plays with its registry.
The second scheme seems very intuitive, but hitting the registry on every call raises an obvious question: can the registry take that load? Think of a DNS server; this scheme can in fact be implemented with a simple internal DNS. Its benefits are self-evident: load balancing is easy and the whole thing is very simple. The catch is that performance and stability need to be thought through carefully.
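To make the first scheme concrete, here is a minimal consumer-side sketch of the push model, assuming the registry pushes the full provider list on every change; each invocation then only reads a local table, so a registry outage does not block calls. The names (ServiceDirectory, onRegistryPush, pick) are my own illustration, not Dubbo's API.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

// Sketch of the "proactive push" model: the registry pushes the full provider
// list on every change; the consumer only ever reads its local copy.
public class ServiceDirectory {

    // serviceName -> provider addresses ("host:port")
    private final Map<String, List<String>> providers = new ConcurrentHashMap<>();

    // Called by the registry client whenever a watch fires (watch-notify).
    public void onRegistryPush(String serviceName, List<String> latest) {
        providers.put(serviceName, List.copyOf(latest));
    }

    // Called on every invocation: a plain local lookup, so calls keep working
    // even if the registry itself is temporarily down.
    public String pick(String serviceName) {
        List<String> list = providers.get(serviceName);
        if (list == null || list.isEmpty()) {
            throw new IllegalStateException("no provider for " + serviceName);
        }
        // naive client-side load balancing: random choice
        return list.get(ThreadLocalRandom.current().nextInt(list.size()));
    }
}
```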
Transport protocol
What remains to consider is the transport protocol. Why think about the transport protocol at all? The reasons are simple:
The typical amount of data the interface transmits, balanced against your intranet bandwidth
Whether you need to collaborate across languages
How much it intrudes on the business code
Why consider the first point? The registry solves the problem of rapid scaling, but intranet bandwidth is finite after all. As services multiply and call volume grows, we sometimes find that even after adding N more instances of a service, response time has gotten much worse, when on paper it should stay constant. That is usually the moment we look at the monitoring and see the intranet bandwidth already running at full capacity.
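As a rough sketch of that trade-off, the snippet below compresses a payload only when it is big enough for the bandwidth saving to outweigh the CPU cost. The 4 KB threshold is an assumption to be tuned against your own payload sizes and intranet bandwidth, and a real protocol would also need a header flag marking whether the body is compressed.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

// Sketch: gzip the payload only above a size threshold; small payloads cost
// more CPU to compress than the bandwidth they would save.
public final class PayloadCompressor {

    // Assumed threshold; tune it for your own traffic profile.
    private static final int COMPRESS_THRESHOLD_BYTES = 4 * 1024;

    public static byte[] maybeCompress(byte[] payload) throws IOException {
        if (payload.length < COMPRESS_THRESHOLD_BYTES) {
            return payload; // send small payloads as-is
        }
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(out)) {
            gzip.write(payload);
        }
        return out.toByteArray();
    }
}
```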
Why consider the second point? Doesn't a company stick to one back-end language? In fact, I have seen companies whose internal business was complex and whose language use was all over the map. Often we need to look at this question from the perspective of how the company develops, rather than as a purely technical detail.
Why consider the third point? Not intruding on the business means encapsulating the underlying implementation as much as possible, so that business teams have less low-level plumbing to worry about. Many people say this is unfair to business-line engineers and hinders their technical growth. Actually, letting business-line colleagues think more, and more deeply, about how the business evolves is very important. I personally see two kinds of engineers: those who play with algorithms and low-level systems, and those who go deep into the business; each has its own strengths and weaknesses. Reducing intrusion into the business is really about shipping product features faster, getting the product online, and letting the company's business iterate quickly, which is good for everyone.
Interface Upgrade
This is more about how to make different versions of data coexist, and about A/B testing, than about the upgrade itself. Many SOA systems already handle this very well, so I will not waste words on it here. I will say this much: data versioning is not easy, so approach it carefully.
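Just for flavor, here is a hypothetical sketch of version coexistence: every request carries an explicit version number, and the server dispatches to the newest handler that is not newer than the request, so older clients keep working. It is not tied to any particular SOA framework.

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.function.Function;

// Sketch: several interface versions coexist; requests are dispatched to the
// highest registered handler whose version does not exceed the request's.
public class VersionedDispatcher {

    // version -> handler; TreeMap gives us "nearest version at or below" lookups
    private final TreeMap<Integer, Function<String, String>> handlers = new TreeMap<>();

    public void register(int version, Function<String, String> handler) {
        handlers.put(version, handler);
    }

    public String handle(int requestVersion, String requestBody) {
        Map.Entry<Integer, Function<String, String>> match = handlers.floorEntry(requestVersion);
        if (match == null) {
            throw new IllegalArgumentException("unsupported version: " + requestVersion);
        }
        return match.getValue().apply(requestBody);
    }
}
```

The same dispatch point is also a natural place to hang A/B routing, for example sending a chosen share of traffic to the newer handler.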
External interface
When it comes to external interfaces, everyone immediately thinks of REST. Credit the current startup wave, or rather the rise of smartphones and Web 2.0 (or, at bottom, the fact that network bandwidth got better and mobile data got cheaper). But external interfaces are not limited to REST, and nobody seems eager to talk about raw socket interfaces. There is a lot that could be said about external interfaces, but the thinking is basically not very different from internal ones, so I will mainly talk about why you might choose a raw socket interface.
Long connections that nobody wants to face
Many developers, and even whole companies, are reluctant to try this technology. The reasons:
Debugging is complex, so development costs are high
The domestic network environment is messy, which makes the first point worse
Domestic users are sensitive to data usage; if the long-connection heartbeat is not tuned well, the app is easily accused of stealing traffic (see the sketch below)
Protocol design is more complex, which raises the bar for the development team considerably
But are long connections really that hard? Not really. More often the product simply has no use for them; generally only IM-style applications or real-time battle games choose long connections. Of course, occasionally we want to offer some highly interactive features, and if they are only used briefly inside the app, WebSocket is a perfectly reasonable choice (provided it survives China's mighty high-speed rail and the carriers' infrastructure planning, sigh).
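On the heartbeat point from the list above, here is a bare-bones client-side sketch; the interval is the whole trade-off (too frequent drains the user's data plan, too sparse and NATs or carriers drop the idle connection), and the 4-minute figure is an assumption, not a recommendation.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: a client-side heartbeat for a long-lived socket connection.
public class Heartbeat {

    // Hypothetical 1-byte ping frame; the real frame depends on your protocol.
    private static final byte[] PING = {0x00};

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start(Socket socket) {
        // 4-minute interval is an assumption; tune against NAT timeouts and
        // how much mobile data you are willing to spend.
        scheduler.scheduleAtFixedRate(() -> {
            try {
                OutputStream out = socket.getOutputStream();
                out.write(PING);
                out.flush();
            } catch (IOException e) {
                scheduler.shutdown(); // connection is gone; caller should reconnect
            }
        }, 4, 4, TimeUnit.MINUTES);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```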
Protection of the interface
Security
When we expose a lot of external interfaces, we need to consider the security of the data. Why consider security:
The interface carries user data
The interface carries transaction data
And even data you do not want users to know about
The most basic way to protect an interface is SSL/TLS, and on top of that:
Symmetric encryption
Asymmetric encryption
First of all, why encrypt again on top of SSL/TLS? You may have heard of a certain internet security company in Israel; put another way, once a root certificate leaks, SSL/TLS can be man-in-the-middled within minutes. At the same time, some slightly more sophisticated users will script and hammer your interface.
The first approach is simple and easy to use. But the problem is just as obvious: once the secret key leaks or is guessed by a user, the impact is large.
The second approach is slightly more complex to implement and faces the same problem as the first. However, you can run a dedicated key manager that generates public/private key pairs, delivers the public key to the client, and keeps the private key on the server side, which greatly reduces the likelihood of a leak. And even if a user obtains the client's public key, they cannot decrypt data submitted by other users; they can only forge requests.
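A minimal sketch of that second scheme using the JDK's built-in crypto, assuming the key manager has already generated the pair: the client holds only the public key, the server only the private key. Plain RSA is shown for brevity; real payloads are normally protected with a hybrid scheme (an RSA-wrapped AES key), since RSA can only encrypt small blocks.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

// Sketch: client encrypts with the public key, server decrypts with the
// private key; leaking the public key lets an attacker forge requests but
// not read anyone else's data.
public class AsymmetricSketch {

    public static void main(String[] args) throws Exception {
        // Done once by the key manager, never on the client.
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair pair = generator.generateKeyPair();

        // Client side: encrypt with the distributed public key.
        Cipher encrypt = Cipher.getInstance("RSA");
        encrypt.init(Cipher.ENCRYPT_MODE, pair.getPublic());
        byte[] ciphertext = encrypt.doFinal(
                "order=42&user=abc".getBytes(StandardCharsets.UTF_8));

        // Server side: decrypt with the private key it keeps to itself.
        Cipher decrypt = Cipher.getInstance("RSA");
        decrypt.init(Cipher.DECRYPT_MODE, pair.getPrivate());
        System.out.println(new String(decrypt.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}
```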
Protection by volume
Doing this for internal interfaces may seem unnecessary, so why bother? Imagine a scenario: service A has 10 servers, but because of a bug in consumer B's selection algorithm, it always picks service A's first server. You can imagine that server coming under heavy pressure, gradually failing to respond, and finally being marked offline; then consumer B knocks out the second server the same way. At best the impact is slow responses; at worst the whole system avalanches, crashing server after server until the service stops.
So how do we protect our interfaces by volume? Whether internal or external, we can use algorithms such as leaky buckets and token buckets to protect the interface. Externally, we can additionally defend against replay attacks and enforce rate limits by signing the timestamp together with the whole URL.
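As one concrete example, here is a minimal token bucket sketch; the capacity and refill rate are placeholders, and the timestamp-plus-URL signature mentioned above is a separate, complementary check that is not shown here.

```java
// Sketch of a token bucket limiter: tokens refill at a fixed rate up to a
// capacity, and a request is admitted only if a token is available.
public class TokenBucket {

    private final double capacity;
    private final double refillPerSecond;
    private double tokens;
    private long lastRefillNanos;

    public TokenBucket(double capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerSecond = refillPerSecond;
        this.tokens = capacity;
        this.lastRefillNanos = System.nanoTime();
    }

    // e.g. if (!bucket.tryAcquire()) { reject the request (HTTP 429 or a busy error) }
    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        tokens = Math.min(capacity,
                tokens + (now - lastRefillNanos) / 1_000_000_000.0 * refillPerSecond);
        lastRefillNanos = now;
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```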
That is all I wanted to say about interface design for now.