(2) Reactive Streams -- Reactive Spring's "Dao Fa Shu Qi" series


Series index: "Reactive Spring's Dao Fa Shu Qi".
Previously: What is reactive programming?

1.2 Reactive Streams

The previous section left a question open: why not use Java Stream for the data flow? The reason is that Java Stream has limitations when used for reactive programming. In particular, two issues must be faced:

    1. Web applications are I/O-intensive, and blocking on I/O leads to significant performance loss or resource waste. We need an asynchronous, non-blocking, reactive library, but Java Stream is a synchronous API.
    2. Suppose we build a pipeline that delivers changes from the data layer to the front end. The data layer may produce thousands of updates per second, and we obviously do not need to push every one of them to the front end. We therefore need flow control, like the faucet at home whose handle regulates the flow rate, and Java Stream has no good way to control flow.

A data flow with both "asynchronous non-blocking" and "flow control" capabilities is what we call a reactive stream.

Several Java libraries implement the Reactive Streams specification; here is a brief introduction to two of them: RxJava and Reactor.

To introduce RxJava we have to mention ReactiveX (Reactive Extensions, Rx), which originated as an extension of LINQ developed by a team led by Microsoft architect Erik Meijer and was open-sourced in November 2012. Rx is a programming model whose goal is to provide a consistent programming interface that helps developers handle asynchronous data streams more easily. The Rx libraries initially supported .NET, JavaScript, and C++, and in recent years Rx has spread to almost all popular programming languages, including RxJS, RxJava, and more.

Later, some heavyweights in the Java community worked together to develop the Reactive Streams specification. The RxJava team then refactored the 1.x release to produce RxJava 2, which is compatible with the Reactive Streams specification.

Reactor is a project of Pivotal, which also maintains the famous Spring, so Reactor is the first-choice reactive stream library behind Spring's recently launched reactive module WebFlux. Reactor supports the Reactive Streams specification. Compared with RxJava it carries no historical baggage and focuses on server-side reactive development, whereas RxJava leans more toward reactive development on the Android side.

In Java 9, the Reactive Streams specification was incorporated into the JDK, with the corresponding API in java.util.concurrent.Flow.

Spring WebFlux is also the key player behind this series of articles. Since WebFlux chose Reactor as part of its reactive technology stack, we will mainly use Reactor as well; the current version is Reactor 3.

Let's get back to the main line and discuss "asynchronous non-blocking" and "flow control". Note that in this section you need not worry about the details of the Reactor code; just get a "feel" for using a reactive stream.

1.2.1 Asynchronous non-blocking

In today's Internet era, web applications often face the challenges of high concurrency and massive data, so performance has always been a core consideration.

Blocking is one of the performance killers.

From the perspectives of the caller and the service provider, blocking vs. non-blocking and synchronous vs. asynchronous can be understood as follows:

  • Blocking and non-blocking describe the state of the caller. If, after invoking the service provider's method, the caller cannot proceed until the result is returned, the caller is blocked; if the call returns immediately so that subsequent operations can continue, it is non-blocking.
  • Synchronous and asynchronous describe the capability of the service provider. If the service provider can return immediately and notify the caller in some way once processing is complete, the call is asynchronous; if the caller has to actively poll to find out whether processing has finished, the call is synchronous.

For example, Lao Liu bought a washing machine. If he starts it and then stands there waiting until the clothes are done, he is blocked; if he starts it and goes off to watch TV, coming back once in a while to check, he is non-blocking, because he can do something else in the meantime. But since Lao Liu cannot know on his own when the machine finishes, this washing machine works synchronously. Later Lao Liu replaced it with one that plays a tune when the laundry is done, so he no longer has to check every now and then; although the new machine still cannot hand him clean clothes immediately, it notifies him when the work is done, so it works asynchronously.
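
To put these four terms in code, here is a minimal, hypothetical sketch (the washing-machine API is invented for illustration): washAndWait is synchronous and blocking, while washAndNotify returns immediately and reports through a callback, i.e. asynchronous and non-blocking.

public class LaoLiuDemo {

    interface Callback { void onDone(String clothes); }

    // Blocking + synchronous: the caller waits until the cycle ends.
    static String washAndWait() throws InterruptedException {
        Thread.sleep(2000); // the wash cycle
        return "clean clothes";
    }

    // Non-blocking + asynchronous: returns at once, notifies via a callback
    // (the new washing machine that plays a tune when it is done).
    static void washAndNotify(Callback callback) {
        new Thread(() -> {
            try {
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            callback.onDone("clean clothes");
        }).start();
    }

    public static void main(String[] args) throws InterruptedException {
        // Old machine: Lao Liu stands and waits (blocked) for the whole cycle.
        System.out.println("Waited for: " + washAndWait());

        // New machine: start it and go watch TV; the tune (callback) will tell him.
        washAndNotify(clothes -> System.out.println("Notified: " + clothes));
        System.out.println("Lao Liu is watching TV..."); // printed immediately
    }
}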

An HTTP service is essentially resource manipulation, especially since REST emerged. The so-called resources correspond, on the server side, to files and data.

    • On the file side, as Internet infrastructure improves, web applications store, process, and deliver more and more files, including images, audio, and video. Access to files can cause blocking.
    • On the data side, with the advance of big-data technology, Internet companies are increasingly keen to gather information about users' operations, locations, social relationships, and so on. Data is flowing faster and data volumes are growing significantly. Access to data can also cause blocking.
    • In addition, as microservice architectures grow ever more popular, communication between microservices no longer happens through object references and method calls as in "monolithic" applications, but through serialized data sent over the network, so network latency can also cause blocking.
    • Besides I/O blocking, some complex business logic can keep the caller waiting simply because it takes a long time to process.

Most people do not regard blocking as a big problem. Apart from network I/O, reading and writing files and databases feels fast, and many developers have been writing blocking code all along. So let's focus on the I/O blocking problem and get an intuitive sense of its severity.

1.2.1.1 How slow is I/O?

In many cases, at large scales of space and time, the orders of magnitude involved exceed our everyday experience, and our intuition is unreliable.

First, two examples from the spatial dimension:

On the large scale: pictures of the center of the Milky Way shine as densely as a bustling marketplace of stars. But in fact, if every star were shrunk to the size of a grain of sand, the density would be like one stadium containing just one or two grains. When watching sci-fi films I used to worry that a spaceship flying at light speed would be unable to turn or brake in time and would crash into a planet; in reality, hitting anything would be quite hard.

And on the small scale: the nucleus holds almost all of an atom's mass, so one imagines it filling the atom. But in fact, if an atom were enlarged to the size of a stadium, the nucleus would only be as big as a ping-pong ball. Mostly empty space!

Next, from the time dimension:

On the grand scale, if Earth's 4.5-billion-year history were compressed into one year, the entire recorded history of human civilization would occupy only the last few seconds.

Going smaller, one "instant" and another may differ by several orders of magnitude. Let's look at "time through the CPU's eyes" on the micro time scale; you will find that blocking inside your computer is more exaggerated than your intuition suggests.

Time through the CPU's eyes --

The CPU absolutely deserves to be called "the Flash", because it lives by its own clock. The protagonist of our story is one core of a 2.5 GHz CPU. If its world also had the concept of a "second", and one tick of its clock counted as one of its seconds, what would time look like through this CPU's eyes?

The CPU belongs to the computation team of the hardware department. A few close co-workers can just about keep up with its pace:

    • The CPU itself is very quick: it takes only about a second to complete an instruction (a complex action may require several instructions).
    • Fortunately its "personal secretary", the L1 cache, responds fast and can grasp what the CPU means within a few seconds.
    • The "deputy secretary", the L2 cache, needs a dozen or so seconds to "get" the CPU's point, but it is not too dull either.
    • Cooperation with the memory team has become routine: a request to memory usually takes 4-5 minutes of searching (memory addressing), but that is acceptable, since the L1 cache can supply about 80% of the data the CPU wants and the L2 cache handles a large part of the rest, so the delay is not felt too often.

The CPU is a typical workaholic: however many tasks pile up, it works through the night without complaint, but making it wait is like killing it. The other teams it works with (especially the disks and network cards of the I/O team) are comparatively inefficient:

    • About its I/O colleagues the CPU has complained for ages. Asking the SSD for something takes 4-5 days of searching (addressing), and then the data takes a few weeks to be delivered. The mechanical disk is simply outrageous: fetching one piece of data from it takes 10 months of seeking, and reading 1 MB takes 20 months! Why hasn't this employee been laid off?!
    • As for the network card, the CPU knows it is doing its best; after all, 10-gigabit networking is quite expensive. Chatting over the gigabit network with the partners in the same server room is fairly smooth, though sending a 1 KB letter to a CPU friend on another machine still takes seven or eight hours at best. And that 1 KB is mostly envelope upon envelope of wrapping, so the actual message has to be even shorter. Worse, the network card's communication etiquette is elaborate: before each exchange there is a round of "Hello, can you hear me? -- I can hear you, can you hear me? -- I can hear you too, let's begin!" This handshake takes quite a while, but since they cannot talk face to face, there is no other way. That is still the good case; the scariest thing is talking to partners in another city, where delivering a message can take years!

So for the CPU, staying busy with real work is not easy; thankfully the memory team helps by caching data in batches on its way to and from the I/O team, which eases the conflict.

On a linear time bar this can hardly be seen clearly, so let's redraw the chart on a logarithmic scale (figure omitted: access latencies on a log scale). The scale is no longer intuitive: every tick on the horizontal axis is an order of magnitude, and I/O is visibly several orders of magnitude slower than the CPU and memory. This shows how important caching is for web applications facing high concurrency: a higher cache hit ratio means better performance.

(The timing data above comes from http://cizixs.com/2017/01/03/how-slow-is-disk-and-network)

We usually have two ways to deal with the performance loss caused by blocking:

    1. Parallelization: use more threads and more hardware resources;
    2. Asynchrony: improve execution efficiency on top of the existing resources.

1.2.1.2 Solution one: Multithreading

Because its I/O colleagues are such dawdlers, the CPU often has nothing to do but wait for them to come back before it can continue its work.

For example, in the figure (blue for the time the CPU spends executing instructions, gray for the time spent waiting for I/O results), please do not quibble about the proportions: drawing it this way already gives the I/O team a lot of face.

The operating-system department has many gurus; to keep the CPU's work saturated, they designed the multithreaded working mode.

But "multithreading is not a silver bullet", there are some inherent drawbacks, and sometimes difficult to control (see " attached 1"):

    • In a high-concurrency environment, switching between threads consumes CPU resources (on the CPU's time bar, imagine the context-switch time drawn in sepia; under high concurrency the number of threads is large, and the resource cost of context switching becomes significant; moreover, during a switch the CPU performs no business logic or other meaningful computation).
    • Multithreaded development for high-concurrency environments is relatively difficult (it requires mastering thread-synchronization principles and tools, ExecutorService, the fork/join framework, concurrent collections, and atomic classes), and some problems are hard to detect or reproduce (such as instruction reordering);
    • In a high-concurrency environment, more threads mean more memory consumption (the JVM allocates 1 MB of thread stack space per thread by default).

This is not to deny the contribution of multithreading. On the contrary, multithreading plays an important role in high concurrency and is still the mainstream approach in high-concurrency scenarios; before Servlet 3.1, the servlet container allocated a separate thread to each incoming request to process it and respond, as the sketch below illustrates.
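
A rough illustration of that thread-per-request style, as a minimal sketch only: a plain ExecutorService stands in for the servlet container's worker pool, and the request handling is simulated with a sleep.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPerRequestDemo {

    // Stand-in for the container's worker pool: one thread per in-flight request.
    private static final ExecutorService pool = Executors.newFixedThreadPool(200);

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            final int requestId = i;
            pool.execute(() -> handleRequest(requestId));
        }
        pool.shutdown();
    }

    private static void handleRequest(int requestId) {
        try {
            Thread.sleep(100); // simulated blocking I/O: a DB query or remote call
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // While a thread sleeps here it is pinned to this request and serves no other.
    }
}

With 200 pool threads and 100 ms of blocking I/O per request, the pool can serve at most about 2,000 requests per second no matter how idle the CPU is; that is exactly the ceiling the drawbacks above describe.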

As Java iterated through versions, its support for concurrent programming grew ever more powerful. Everyone seemed to agree that multithreading was the natural way to handle high concurrency. And since HTTP is stateless, once sessions are moved into a distributed cache, scaling the web server horizontally is a breeze: when user numbers climb rapidly, just add more servers. Especially since the advent of cloud computing and DevOps, scaling out and back in can even be automated.

All was well until Node.js arrived and brought fresh inspiration to Java web development. After all, in everyone's mind JavaScript was just a language interpreted in the browser with unremarkable performance, seemingly unrelated to the words "server side" and "high concurrency". More surprising still, Node.js needs only a single thread (with multithreading inside the engine) to handle high-concurrency requests. What a shock!

Could Java do the same? The answer is yes! The secret is the same as Node.js's: asynchronous, non-blocking code.

1.2.1.3 Solution two: Non-blocking

Just like Node.js, "asynchronous non-blocking" code can switch between tasks without switching execution threads. Based on the Java language and SDK, we typically have two options:

    1. Callbacks;
    2. CompletableFuture.

1) Non-blocking callbacks

We know that front-end JavaScript code runs on a single thread in the browser, so JavaScript acquired non-blocking abilities very early: calls that take a while to return a result are usually asynchronous. As the saying goes, "children of poor families learn to run the household early".

Our most common example of an asynchronous call is Ajax, e.g. a jQuery-based Ajax call:

$.ajax({
  type: "POST",
  url: "/url/path",
  data: "name=John&location=Boston",
  success: function(msg) {
    alert("Data Saved: " + msg);
  }
});
... // subsequent code

Here we send off a POST request and register a callback for the success event, and then we can go on executing the code that follows; when the response comes back successfully, the registered callback is invoked. OK, perfect, no blocking.

In Java development we also use callbacks from time to time, but with complex logic they lead to "callback hell". What is callback hell?

The classic picture of deeply nested callbacks says it well, and so does the following example (taken from the Reactor 3 Reference Guide). The requirement: find a user's top 5 favorites, and if the user has no favorites, fall back to 5 suggestions.

userService.getFavorites(userId, new Callback<List<String>>() { // <1>
  public void onSuccess(List<String> list) { // <2>
    if (list.isEmpty()) { // <3>
      suggestionService.getSuggestions(new Callback<List<Favorite>>() {
        public void onSuccess(List<Favorite> list) { // <4>
          UiUtils.submitOnUiThread(() -> { // <5>
            list.stream()
                .limit(5)
                .forEach(uiList::show); // <6>
          });
        }

        public void onError(Throwable error) { // <7>
          UiUtils.errorPopup(error);
        }
      });
    } else {
      list.stream() // <8>
          .limit(5)
          .forEach(favId -> favoriteService.getDetails(favId, // <9>
              new Callback<Favorite>() {
                public void onSuccess(Favorite details) {
                  UiUtils.submitOnUiThread(() -> uiList.show(details));
                }

                public void onError(Throwable error) {
                  UiUtils.errorPopup(error);
                }
              }
          ));
    }
  }

  public void onError(Throwable error) {
    UiUtils.errorPopup(error);
  }
});

This is admittedly complex logic, and with several nested callbacks it is hard to read. Even rewritten with lambdas, it would still take many lines of code.

    1. The callback-based service takes an anonymous Callback parameter. Its two methods are invoked after the asynchronous execution succeeds or fails.
    2. The first service gets the list of favorite IDs and then invokes the callback's onSuccess method.
    3. If the list is empty, suggestionService is called.
    4. suggestionService passes a List<Favorite> to a second callback.
    5. Since we are now dealing with the UI, we must make sure the consuming code runs on the UI thread.
    6. A Java 8 Stream limits the number of suggestions to 5, and they are shown in the UI.
    7. At every level we handle errors the same way: show them in a popup.
    8. Back at the favorite-ID level: if the service returned a list, we go to favoriteService to fetch the Favorite objects. Since we want only 5, a stream is used.
    9. Callback again. This time, for each ID, the fetched Favorite object is pushed to the UI on the UI thread.

How would this be written with a reactive stream? Expressed with the Reactor 3 library:

userService.getFavorites(userId) // <1>
           .flatMap(favoriteService::getDetails) // <2>
           .switchIfEmpty(suggestionService.getSuggestions()) // <3>
           .take(5) // <4>
           .publishOn(UiUtils.uiThreadScheduler()) // <5>
           .subscribe(uiList::show, UiUtils::errorPopup); // <6>
    1. We get a flow of the favorite IDs.
    2. We asynchronously convert each ID into a Favorite object (using flatMap); we now have a flow of Favorite.
    3. If the flow of Favorite is empty, we switch to suggestionService.
    4. We care about at most 5 elements of the stream.
    5. Finally, we want the processing to happen on the UI thread.
    6. We trigger the flow by describing, via subscribe, what to do with the final data (show it in the UI list) and with errors (show them in a popup).

What if we want to make sure the favorite IDs are fetched within 800 ms and, on timeout, taken from a cache instead? In callback-based code, that is complicated even to think about. With Reactor 3 it is simple: just add a timeout operator to the processing chain.

Example of adding timeout control in Reactor 3:

userService.getFavorites(userId)
           .timeout(Duration.ofMillis(800)) // <1>
           .onErrorResume(cacheService.cachedFavoritesFor(userId)) // <2>
           .flatMap(favoriteService::getDetails) // <3>
           .switchIfEmpty(suggestionService.getSuggestions())
           .take(5)
           .publishOn(UiUtils.uiThreadScheduler())
           .subscribe(uiList::show, UiUtils::errorPopup);
    1. If the stream emits no value within the 800 ms timeout, an error signal is emitted.
    2. On an error signal, processing falls back to cacheService.
    3. The rest of the processing chain is similar to the previous example.

As you can see, programming with reactive streams not only effectively reduces the amount of code, but also greatly improves its readability.

2) Asynchronous CompletableFuture

CompletableFuture was also added in Java 8. Compared with the original Future, it has two highlights:

    1. Asynchronous callbacks: it provides more than 50 methods, and the ones ending in Async execute asynchronously without blocking;
    2. A declarative style: in CompletableFuture's methods you can more or less sense the same "declarative programming" flavor as in the Reactor code above, e.g. completableFuture.thenApplyAsync(...).thenApplyAsync(...).thenAcceptAsync(...).

For example, when we buy coffee in a cafe, we first get a receipt after ordering. That receipt is a Future: it entitles you to pick up your coffee once it is ready. But Future.get() is still synchronous and blocking: with the receipt you can go chat with friends, but you do not know when your coffee will be ready, and you may have to stand at the counter and wait. CompletableFuture is like a cafe with table service: besides the receipt there is a table marker; after ordering we sit down, and the coffee is brought to us as soon as it is ready.
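
A minimal sketch of the two kinds of "coffee service" (the brewCoffee method is invented for illustration):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CoffeeDemo {

    static String brewCoffee() {
        try { Thread.sleep(1000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "latte";
    }

    public static void main(String[] args) throws Exception {
        ExecutorService barista = Executors.newSingleThreadExecutor();

        // Future: the receipt; get() still blocks at the counter.
        Future<String> receipt = barista.submit(CoffeeDemo::brewCoffee);
        System.out.println("Picked up: " + receipt.get()); // blocks until ready

        // CompletableFuture: table service; the coffee is brought to us when done.
        CompletableFuture.supplyAsync(CoffeeDemo::brewCoffee)
                         .thenAccept(c -> System.out.println("Served: " + c));
        System.out.println("Chatting with friends..."); // runs immediately

        Thread.sleep(1500); // keep the demo alive until delivery
        barista.shutdown();
    }
}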

CompletableFuture is considerably more powerful than Future with callbacks. Let's try to use it to implement a requirement (this example also comes from the Reactor 3 Reference Guide): we first get a list of IDs; then for each ID we fetch "the name and the statistics corresponding to the ID" and combine them into a string, the whole process implemented asynchronously and yielding a list of combined strings.

CompletableFuture<List<String>> ids = ifhIds(); // <1>

CompletableFuture<List<String>> result = ids.thenComposeAsync(l -> { // <2>
    Stream<CompletableFuture<String>> zip =
            l.stream().map(i -> { // <3>
                CompletableFuture<String> nameTask = ifhName(i); // <4>
                CompletableFuture<Integer> statTask = ifhStat(i); // <5>

                return nameTask.thenCombineAsync(statTask,
                        (name, stat) -> "Name " + name + " has stats " + stat); // <6>
            });
    List<CompletableFuture<String>> combinationList = zip.collect(Collectors.toList()); // <7>
    CompletableFuture<String>[] combinationArray =
            combinationList.toArray(new CompletableFuture[combinationList.size()]);

    CompletableFuture<Void> allDone = CompletableFuture.allOf(combinationArray); // <8>
    return allDone.thenApply(v -> combinationList.stream()
            .map(CompletableFuture::join) // <9>
            .collect(Collectors.toList()));
});

List<String> results = result.join(); // <10>
assertThat(results).contains(
        "Name NameJoe has stats 103",
        "Name NameBart has stats 104",
        "Name NameHenry has stats 105",
        "Name NameNicole has stats 106",
        "Name NameABSLAJNFOAJNFOANFANSF has stats 121");
    1. Start with a future that wraps the list of IDs to be fetched and processed.
    2. Compose a further asynchronous processing stage onto that list.
    3. For each element of the list:
    4. Asynchronously get the corresponding name.
    5. Asynchronously get the corresponding statistic.
    6. Combine the two results pairwise.
    7. We now have a list whose elements are futures (the combination tasks, of type CompletableFuture<String>). To execute these tasks, the list has to be converted into an array.
    8. Pass the array to CompletableFuture.allOf, which returns a Future that completes once all of the tasks have completed.
    9. The awkward part is that allOf returns CompletableFuture<Void>, so we traverse the list of futures and collect their results with join() (which does not block here, since allOf guarantees they are all complete).
    10. Once the whole asynchronous pipeline has been triggered, we wait for it to finish and then return the result list.

As you can see, CompletableFuture does its best, but even with all these tricks, working across a collection of futures remains a bit of a struggle. Reactor has many combination operators built in, so the same example can be implemented simply as:

Flux<String> ids = ifhrIds(); // <1>

Flux<String> combinations =
        ids.flatMap(id -> { // <2>
            Mono<String> nameTask = ifhrName(id); // <3>
            Mono<Integer> statTask = ifhrStat(id); // <4>

            return nameTask.zipWith(statTask, // <5>
                    (name, stat) -> "Name " + name + " has stats " + stat);
        });

Mono<List<String>> result = combinations.collectList(); // <6>

List<String> results = result.block(); // <7>
assertThat(results).containsExactly( // <8>
        "Name NameJoe has stats 103",
        "Name NameBart has stats 104",
        "Name NameHenry has stats 105",
        "Name NameNicole has stats 106",
        "Name NameABSLAJNFOAJNFOANFANSF has stats 121");
    1. This time we start from an asynchronously provided sequence of ids (a Flux<String>).
    2. For each element of the sequence, we process it asynchronously (inside flatMap) in two steps:
    3. Get the corresponding name.
    4. Get the corresponding statistic.
    5. Asynchronously combine the two values.
    6. As the values in the sequence become available, they are collected into a List.
    7. While generating the flow, we could keep operating on the Flux asynchronously, combining and subscribing to it, most likely ending up with a Mono. Since this is a test, we block (block()) until the processing ends and then directly return the collected list.
    8. Assert the result.

This feel of a non-blocking data flow reminds me of the classic scene in Let the Bullets Fly: Zhang Mazi, played by Jiang Wen, fires a quick burst of shots at the new county governor's horse-drawn "train". A henchman asks, "Did any hit?" Zhang Mazi replies, "Let the bullets fly for a while." A moment later the reins pulling the train all snap and the horses scatter. Brilliant! If he had stopped after every shot to check whether the last one had hit, how could he look so cool?

As the examples above show, callbacks and CompletableFuture run into similar dilemmas when the logic gets complex, whereas the API provided by Reactor 3 significantly reduces the amount of code, improves readability, and offers some valuable extra capabilities along the way.

1.2.2 Flow control--backpressure

In a reactive stream, the emitter of the data is called the Publisher and the listener is called the Subscriber. We will consistently use the literal translations "publisher" and "subscriber".

Now the question: what if the publisher emits data at a pace different from the speed at which the subscriber processes it? A fast subscriber is no problem, but what if its processing cannot keep up with the rate at which data is emitted?

Without flow control, the subscriber would be overwhelmed by the rapid stream of data from the publisher. It is like an assembly line: if one station works relatively slowly while upstream feeds material quickly, the worker at that station gets swamped and needs a way to tell upstream to slow the feed down.

Likewise, the subscriber needs a mechanism for feeding its demand back upstream:

This mechanism, which lets demand be signaled back upstream, is called backpressure (also translated as "back pressure").

In concrete use, handling backpressure involves different strategies. Two examples to make it easier to grasp:

Example: the buffer strategy

In this strategy, each time the subscriber finishes processing an element, it calls request(1) to request one more element from the publisher. Because the subscriber cannot consume the publisher's data quickly enough, the publisher buffers the elements that have not yet been processed.

This approach is somewhat similar to a message queue: the publisher needs to maintain a queue to buffer the elements not yet processed. It is typically used in scenarios with high requirements on data accuracy, for example when a sudden peak of incoming data must all be saved to the database; the persistence layer acting as subscriber cannot process it that fast, so the publisher has to buffer the data temporarily, as the sketch below shows.
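
A minimal Reactor sketch of the buffer strategy, under assumed names (saveToDatabase stands for the slow persistence step): the subscriber pulls one element at a time with request(1), and onBackpressureBuffer makes the publisher side buffer whatever has not been requested yet.

Flux.interval(Duration.ofMillis(1))              // a fast publisher: one element per millisecond
    .onBackpressureBuffer()                      // buffer elements the subscriber has not requested yet
    .subscribe(new BaseSubscriber<Long>() {
        @Override
        protected void hookOnSubscribe(Subscription subscription) {
            request(1);                          // ask for the first element only
        }

        @Override
        protected void hookOnNext(Long value) {
            saveToDatabase(value);               // hypothetical slow handling
            request(1);                          // one done, request the next
        }
    });

Without the onBackpressureBuffer operator, a source like interval would signal an overflow error as soon as it produced faster than the subscriber requested.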

Example: the drop strategy

In this strategy, the publisher does not buffer the data the subscriber has not kept up with, but drops it directly; when the subscriber requests data, it gets the most recent element from the publisher. Suppose, for example, we are building a monitoring system: the backend produces monitoring data at 10 points per second, while the front-end UI only needs to refresh the reading once per second. Then the backend, as publisher, need not buffer anything; in such a time-sensitive scenario the stale points can simply be discarded.
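
The corresponding sketch for the drop side, again with invented names (refreshDashboard is the once-per-second UI update): onBackpressureLatest keeps only the newest unconsumed element, while the stricter onBackpressureDrop simply discards everything that arrives while there is no demand.

Flux.interval(Duration.ofMillis(100))            // backend emitting ~10 readings per second
    .onBackpressureLatest()                      // keep only the newest unconsumed reading
    .subscribe(new BaseSubscriber<Long>() {
        @Override
        protected void hookOnSubscribe(Subscription subscription) {
            request(1);
        }

        @Override
        protected void hookOnNext(Long reading) {
            refreshDashboard(reading);           // hypothetical UI refresh, ~1 second each
            request(1);                          // readings emitted in between were discarded
        }
    });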

In the hands-on stages that follow, we will also look into the principles behind backpressure.

1.2.3 Summary

These, then, are the two core characteristics of reactive streams: asynchronous non-blocking execution, and flow control based on the "backpressure" mechanism.

With them we get reactive programming, built on this "upgraded" kind of data flow: the reactive stream.

Reactor 3 and RxJava 2 are concrete implementation libraries of reactive streams with the characteristics above.

Reactive programming is often described as an extension of the "Observer" design pattern from object-oriented programming. Reactive streams also resemble the "Iterator" design pattern, with a correspondence like Iterable-Iterator. The main difference is that the Iterator is pull-based, while reactive streams are push-based.

Using an iterator is an imperative programming paradigm, since it is the developer who decides when to fetch the next element. In a reactive stream the corresponding pair of roles is publisher-subscriber (Publisher-Subscriber), and it is the publisher that notifies the subscriber when a new value arrives; this "push" is the key to being reactive. Moreover, the operations applied to the pushed data are expressed declaratively rather than imperatively: the developer defines the processing logic for the data flow by describing a "processing chain".
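
Side by side, a minimal sketch of pull versus push (the data here is just an inline example):

// Pull: imperative; the developer decides when to fetch the next element.
Iterator<String> it = Arrays.asList("a", "b", "c").iterator();
while (it.hasNext()) {
    System.out.println(it.next());
}

// Push: declarative; the publisher delivers each value to the subscriber as it
// arrives, and the processing logic is described up front as a chain.
Flux.just("a", "b", "c")
    .map(String::toUpperCase)
    .subscribe(System.out::println);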

Apologies for rambling through two whole sections with no hands-on work; you are probably tired of just reading, so next let's warm up with some coding together.
