Using deep learning for text generation




AI writing poetry? AI writing fiction? Headlines like these have appeared frequently in recent years and can sound almost magical, so in this post we will explore how this kind of text generation is achieved with deep learning.

The basic strategy for text generation is to use a language model: a probabilistic model that predicts the most likely next word given the preceding text. Since text is sequence data, with strong dependencies between neighboring words, a recurrent neural network (RNN) is the standard choice, and such models are called neural language models. Once a language model has been trained, you feed it an initial piece of text, let it generate one word, append that word to the input, and then predict the next word. Repeating this loop lets you generate text of arbitrary length. For example, given the input "The cat sat on the m", the model should generate the next letter, "a".
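Schematically, the loop looks something like this (a minimal sketch only: predict_next is a hypothetical stand-in for any trained model that returns a probability vector over the vocabulary chars; the actual Keras versions appear later in the post):

import numpy as np

def generate(predict_next, chars, seed, length, maxlen=30):
    """Schematic generation loop: predict a distribution over `chars`,
    pick the most likely character, append it, and slide the window."""
    text = seed
    for _ in range(length):
        probs = predict_next(text[-maxlen:])   # the model only sees the last `maxlen` characters
        text += chars[int(np.argmax(probs))]   # greedy choice here; sampling strategies come next
    return text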


The output of the language model is actually a probability distribution over all the words in the dictionary, and the usual choice is simply the word with the highest probability. In practice, however, there is also a sampling strategy: we may not always want to generate the most probable word. Think of human behavior: someone who always sticks rigidly to the rules, with no variation, quickly becomes boring. The same goes for a language model: if it always emits the most probable word, the output easily turns into something as bland as an official speech script.

Therefore, a sampling strategy is introduced into the generation process: a bit of randomness is added when selecting a word from the probability distribution, so that words which rarely appear together can occasionally be generated, and the resulting text sometimes becomes interesting or even creative. The key to this sampling is a temperature parameter that controls the randomness. Assuming \(p(x_i)\) is the original distribution output by the model, the new distribution after applying the temperature is:
\[p(x_{new}) = \frac{e^{\,\log(p(x_i))\,/\,temperature}}{\sum\limits_i e^{\,\log(p(x_i))\,/\,temperature}} \tag{1.1}\]



The probability distributions obtained at different temperatures are plotted by the code below. The higher the temperature, the more uniform the new distribution, the greater the randomness, and the easier it is to generate unexpected words.

import numpy as np
import matplotlib.pyplot as plt

def sample(p, temperature=1.0):   # define the sampling strategy
    distribution = np.log(p) / temperature
    distribution = np.exp(distribution)
    return distribution / np.sum(distribution)

p = [0.05, 0.2, 0.1, 0.5, 0.15]
for i, t in zip(range(4), [0.1, 0.4, 0.8, 1.5]):
    plt.subplot(2, 2, i + 1)
    plt.bar(np.arange(5), sample(p, t))
    plt.title("temperature = %s" % t, size=16)
    plt.ylim(0, 1)



This post tests three kinds of neural network models, all built with Keras:

    1. One-hot encoding + LSTM
    2. Embedding + bidirectional GRU
    3. Embedding + GRU + conv1d + reverse conv1d





One-hot encoding + LSTM

The training corpus here is Lao She's posthumous novel 正红旗下 (Beneath the Red Banner).

First read the file and vectorize the text: split it into individual characters, then one-hot encode them into a 3-dimensional tensor.

whole = open('正红旗下.txt', encoding='utf-8').read()

maxlen = 30        # sequence length
sentences = []     # the extracted sequences
next_chars = []    # the next character of each sequence (the prediction target)

for i in range(0, len(whole) - maxlen):
    sentences.append(whole[i: i + maxlen])
    next_chars.append(whole[i + maxlen])
print('Total number of sequences extracted:', len(sentences))

chars = sorted(list(set(whole)))   # all unique characters in the corpus, i.e. the dictionary
char_indices = dict((char, chars.index(char)) for char in chars)

x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)   # 3-D tensor (sequences, sequence length, dictionary size)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)           # 2-D tensor (sequences, dictionary size)
for i, sentence in enumerate(sentences):
    for t, char in enumerate(sentence):
        x[i, t, char_indices[char]] = 1.0
    y[i, char_indices[next_chars[i]]] = 1.0


First, take a look at the size of this data:

import sys

print(np.round((sys.getsizeof(x) / 1024 / 1024 / 1024), 2), "GB")
print(x.shape, y.shape)
# 6.11 GB
# (80095, 30, 2667) (80095, 2667)

Only about 80,000 rows of data already take up roughly 6 GB, because of the high-dimensional, sparse representation that one-hot encoding typically produces.
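As a rough back-of-envelope check of that figure (assuming one byte per boolean entry, as with NumPy's bool arrays):

n_sentences, maxlen, vocab = 80095, 30, 2667
print(n_sentences * maxlen * vocab / 1024**3, "GiB")   # ≈ 5.97 GiB, i.e. on the order of the 6 GB reported above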


Next, build the neural network: a single LSTM layer in the middle, followed by a fully connected layer whose softmax outputs the probability of every character in the dictionary:

import keras
from keras import layers

model = keras.models.Sequential()
model.add(layers.LSTM(256, input_shape=(maxlen, len(chars))))
model.add(layers.Dense(len(chars), activation='softmax'))

optimizer = keras.optimizers.RMSprop(lr=1e-3)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
model.fit(x, y, epochs=100, batch_size=1024, verbose=2)


After training for 100 epochs, you can start generating text. The main steps are:

    1. One-hot encode the current text in the same way and use the trained model to obtain the probability distribution over all characters.
    2. Reweight that distribution with the given temperature to obtain a new distribution.
    3. Sample the next character from the new distribution.
    4. Append the new character to the end of the text and remove the first character from it.


The following function converts the original distribution into a new one using the temperature, then draws a random sample from the resulting multinomial distribution and returns the index of the chosen character.

def sample(preds, temperature=1.0):
    if not isinstance(temperature, float) and not isinstance(temperature, int):
        print("temperature must be a number")
        raise TypeError

    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)


Finally, define a text generation function:

def write(model, temperature, word_num, begin_sentence):
    gg = begin_sentence[:30]   # initial text
    print(gg, end='/// ')
    for _ in range(word_num):
        sampled = np.zeros((1, maxlen, len(chars)))
        for t, char in enumerate(gg):
            sampled[0, t, char_indices[char]] = 1.0
        preds = model.predict(sampled, verbose=0)[0]
        if temperature is None:                      # no temperature: pick the most probable character
            next_word = chars[np.argmax(preds)]
        else:                                        # sample after applying the temperature
            next_index = sample(preds, temperature)
            next_word = chars[next_index]

        gg += next_word
        gg = gg[1:]
        sys.stdout.write(next_word)
        sys.stdout.flush()


The initial text is:

begin_sentence = whole[50003: 50100]
print(begin_sentence[:30])
# 一块的红布腰带来。“有这个,我就饿不着!”说完,他赶紧把小褂


Generation without temperature:

write(model, None, 450, begin_sentence)

A piece of red cloth belt to come. "With this, I can't be hungry!" "After saying, he hurriedly put Duijinxiaogua////and buckle well."

"But the ER Mao son saw, and the men and the soldiers saw, not ..." "Yes! 10 smiled with a hearty smile.

"Didn't I fasten the buttons quickly?" Second brother, you are a good person! The officers and soldiers are like you, we are much smoother! Hum
Someday, we will ask the Emperor also to bow! ”

"10," two elder brother took out all the several hanging money to come, "take it, forbid not!" "Good!" "10 took the money

To. "I count!" Keep this account in mind! And so the foreigner all away, I go home farming, hit the grain to you! "His side

Say, count the money on the side. "Four hanging eight!" He stuffed his money in his arms. "Good-bye!" "He went eastward. Two elder brother Catch go,

"Do you know the way?" ”

Ten exponentially refers to the gate of the de-sheng: "Is that not the gates?" Go out of town and talk! ”

10 was gone, and the elder brother was still standing there. Here is the cooler place, with water, with trees, with reeds, also

There is a small Tu shan that is not very high. Second brother but feel more and more hot. He sat again on the stone. The more thought, the more wrong, the more afraid;
The head was sweating again. In any case, a flag soldier should not support the rebel people! He felt he was not shrewd at all, and made
A great mistake! If 10 were caught, what would he do if he came? Not to be beheaded, but also to remove the flag, sent to Xinjiang
or Yunnan Go!


Generation with temperature = 0.5:

write(model, 0.5, 450, begin_sentence)

A piece of red cloth belt to come. "With this, I can't be hungry!" "After saying, he hurriedly put Duijinxiaogua////and buckle well."

"But the ER Mao son saw, and the men and the soldiers saw, not ..." "Yes! 10 smiled with a hearty smile.

"Didn't I fasten the buttons quickly?" Second brother, you are a good person! The officers and soldiers are like you, we are much smoother! Hum
Someday, we will ask the Emperor also to bow! ”

"10," two elder brother took out all the several hanging money to come, "take it, forbid not!" "Good!" "What about the pastor of the Reverend left, Reverend Cow?" In me, not to inquire! "Ten have been set up, often said" Kai-so!

"Not busy?" ”

"Why not?" Who's going to get you old ladies? Our point? "Cattle pastor also feel wine under, and told mum:" Small disciple, what hurried ①, three on one or two big no son! ”

"Well, Hello!" "Father mouth This" conscience law son, and a little eat some of the son bar

—。 His front and one or two elder brother if improvised. The payphone of those who have a big wind. Father Happy
To. "You guys, just use a little bit to get me to eat in your province, and I don't have anything to say.
You! "That's the good gas!" No learning! You see, I am still old white grandmother! I wash, I am a foreigner? ”

"That is not good, I do not understand you old son!" "I'll go back!" To say! ”


The text generated without temperature reads quite conventionally; once temperature is used, the randomness increases and the language jumps around, in something of a stream-of-consciousness style.

"Under the red Flag" is Lao She's posthumous, did not finish drowned suicide, and thus the length is very short. But even so, using One-hot encoding still has a high dimension, and if you use a larger corpus it is easy to explode in memory. So below we use Word embedding to map text to low-dimensional word vectors.





Embedding + bidirectional GRU

The second model differs from the previous one in three ways:

    • The previous example was a character-level language model, where each sequence consists of individual characters. This time we train at the word level, so we first use jieba to segment the text into words (see the short tokenization example after this list).

    • Word embedding replaces one-hot encoding, which saves memory; word embeddings may also express semantics better than one-hot vectors.

    • A bidirectional GRU replaces the LSTM. A bidirectional model uses both the forward and the reversed sequence and then combines the two representations.
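A quick, hypothetical illustration of the difference between character-level and word-level units (the sample sentence is made up, and the exact segmentation depends on jieba's dictionary):

import jieba

text = "今天天气很好"
print(list(text))                            # character-level units: ['今', '天', '天', '气', '很', '好']
print(list(jieba.cut(text, cut_all=False)))  # word-level units, typically ['今天', '天气', '很', '好']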


The training corpus this time is Keigo Higashino's novel 白夜行 (Journey Under the Midnight Sun).

import jieba

whole = open('白夜行.txt', encoding='utf-8').read()
all_words = list(jieba.cut(whole, cut_all=False))   # jieba word segmentation
words = sorted(list(set(all_words)))
word_indices = dict((word, words.index(word)) for word in words)

maxlen = 30
sentences = []
next_word = []
for i in range(0, len(all_words) - maxlen):
    sentences.append(all_words[i: i + maxlen])
    next_word.append(all_words[i + maxlen])
print('Total number of sequences extracted:', len(sentences))

x = np.zeros((len(sentences), maxlen), dtype='float32')   # the Embedding layer takes a 2-D tensor (sequences, sequence length)
y = np.zeros((len(sentences)), dtype='float32')
for i, sentence in enumerate(sentences):
    for t, word in enumerate(sentence):
        x[i, t] = word_indices[word]
    y[i] = word_indices[next_word[i]]


To view the size of the data:

print(np.round((sys.getsizeof(x) / 1024 / 1024 / 1024), 2), "GB")
print(x.shape, y.shape)
# 0.03 GB
# (235805, 30) (235805,)

About 230,000 rows of data take only 0.03 GB, far smaller than with one-hot encoding.


Next, build the network: two bidirectional GRU layers in the middle, followed by a fully connected layer whose softmax outputs the probability of every word:

main_input = layers.Input(shape=(maxlen, ), dtype='float32')
model_1 = layers.Embedding(len(words), 128, input_length=maxlen)(main_input)
model_1 = layers.Bidirectional(layers.GRU(256, return_sequences=True))(model_1)
model_1 = layers.Bidirectional(layers.GRU(128))(model_1)
output = layers.Dense(len(words), activation='softmax')(model_1)

model = keras.models.Model(main_input, output)
optimizer = keras.optimizers.RMSprop(lr=3e-3)
model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer)
model.fit(x, y, epochs=100, batch_size=1024, verbose=2)


Define the text generation function:

def write_2(model, temperature, word_num, begin_sentence):
    gg = begin_sentence[:30]
    print(''.join(gg), end='/// ')
    for _ in range(word_num):
        sampled = np.zeros((1, maxlen))
        for t, char in enumerate(gg):
            sampled[0, t] = word_indices[char]
        preds = model.predict(sampled, verbose=0)[0]
        if temperature is None:
            next_word = words[np.argmax(preds)]
        else:
            next_index = sample(preds, temperature)
            next_word = words[next_index]

        gg.append(next_word)
        gg = gg[1:]
        sys.stdout.write(next_word)
        sys.stdout.flush()


The initial text is:

begin_sentence = whole[50003: 50100]
print(begin_sentence[:30])
begin_sentence = list(jieba.cut(begin_sentence, cut_all=False))
# 且不全力挥杆,先练习击球。
# 最初还有些生涩,但感觉慢慢回来了。打完二十球左右


Generation without temperature:

write_2(model, None, 300, begin_sentence)

And do not play the full swing, first practice batting.

There was some jerky at first, but it came back slowly. After playing 20 or so, he can go back to that direction.
However, such a possibility can make the content between them, which is a kind of sensory information. And when it came to life, it was no wonder they couldn't find the most important opportunity.
The bright division left the hotel, but these said to him: "Would you like to find me?" ”
Well She nodded, and the expression on her face made her more distorted. "I am at the most, and how to say, although I do not care, will be cousin investigation, will be very happy." ”
"But you don't need to." And he often suspects you, and tells you to mention it. ”
"But I don't know his body anymore." ”
"No, I don't want to mean it here." ”
"Well, then I'll thank you for your name. ”
"Well," Kang Lizi nodded, "you just opened the door that day." ”
"It's strange, I don't think you have any plans?" ”
Well The receiver showed such a heavy smile on his mouth. "Well, there's nothing better then." ”
"No, I won't wait for him, I'll tell you." ”
"Since then, I


Generation with temperature = 0.5:

write_2(model, 0.5, 300, begin_sentence)

And do not play the full swing, first practice batting.

There was some jerky at first, but it came back slowly. After playing 20 or so, he will be able to rediscover the corpse at the same time, he also detailed and dark, "Only said, you must have a lot of phone calls?" ”

Well "Friend Yan nodded."

"May I ask ...?" Did she say she was together? ”
Well ”
"That's it." ”
Oh "She said to him again," I was thinking that she did not seem to know Mr. ”
Her question is unknown in the meaning. He listens to you with a nervous face, that's probably it. ”
"Well ..."
Well ”
"There's one more thing. "Snow spike."
"I thought, since I had this feeling, I would think I was OK." ”
Oh "The snow has a wry smile," but I listened to her, and there was a place almost past the lock, and I couldn't tell how many times. ”
Oh ”
"Dancing in the Heart?" Oh, it's her job! Mr. Yicheng, your intuition about Miss Dang Zeshe. ”
"Yes," he was. "That's a ..." said the picture.
"Well ... I think


There are some strange sentences in it:

But, I don't know his body.

Dancing in my Heart? Oh, it's her job!





Embedding + GRU + conv1d + reverse conv1d
    • Convolutional neural networks are generally used on images, mainly because of their ability to extract local features. However, one-dimensional convolutions (Conv1D) turn out to be just as suitable for sequence data: they can extract local patterns from long sequences, which is useful in certain NLP tasks (such as machine translation and question answering). It is also worth mentioning that Conv1D trains much faster than an RNN on sequence data (a small sketch of a Conv1D branch follows this list).

    • Inspired by the bidirectional model in the previous example, this model also uses both the forward and the reversed sequence. The final model combines three branches: a GRU stack on the forward sequence, a Conv1D stack on the reversed sequence, and another Conv1D stack on the forward sequence, concatenated before the output layer.
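As a minimal sketch of what one such Conv1D branch looks like (layer sizes here are illustrative only, not the ones used in the final model below):

from keras import layers, models

vocab_size, seq_len = 5000, 30                    # illustrative values
branch_input = layers.Input(shape=(seq_len,), dtype='float32')
h = layers.Embedding(vocab_size, 64, input_length=seq_len)(branch_input)
h = layers.Conv1D(32, 5, activation='relu')(h)    # extracts local features from windows of 5 tokens
h = layers.MaxPooling1D(2)(h)
h = layers.Conv1D(32, 3, activation='relu')(h)
h = layers.GlobalMaxPooling1D()(h)                # collapses the sequence into a fixed-length vector
branch = models.Model(branch_input, h)
branch.summary()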



The training corpus this time is Journey to the West (西游记).

whole = open('西游记.txt', encoding='utf-8').read()

maxlen = 30   # forward sequence length
revlen = 20   # reverse sequence length
sentences = []
reverse_sentences = []
next_chars = []

for i in range(maxlen, len(whole) - revlen):
    sentences.append(whole[i - maxlen: i])
    reverse_sentences.append(whole[i + 1: i + revlen + 1][::-1])
    next_chars.append(whole[i])
print('Total number of forward sequences extracted:', len(sentences))
print('Total number of reversed sequences extracted:', len(reverse_sentences))

chars = sorted(list(set(whole)))
char_indices = dict((char, chars.index(char)) for char in chars)

x = np.zeros((len(sentences), maxlen), dtype='float32')
reverse_x = np.zeros((len(reverse_sentences), revlen), dtype='float32')
y = np.zeros((len(sentences),), dtype='float32')

for i, sentence in enumerate(sentences):
    for t, char in enumerate(sentence):
        x[i, t] = char_indices[char]
    y[i] = char_indices[next_chars[i]]

for i, reverse_sentence in enumerate(reverse_sentences):
    for t, char in enumerate(reverse_sentence):
        reverse_x[i, t] = char_indices[char]



Build the neural network model:

# Note: some layer sizes were garbled in the source; the embedding, GRU and Conv1D
# sizes below are representative values chosen to match the earlier models.
normal_input = layers.Input(shape=(maxlen,), dtype='float32', name='normal')
model_1 = layers.Embedding(len(chars), 128, input_length=maxlen)(normal_input)
model_1 = layers.GRU(256, return_sequences=True)(model_1)
model_1 = layers.GRU(128)(model_1)

reverse_input = layers.Input(shape=(revlen,), dtype='float32', name='reverse')
model_2 = layers.Embedding(len(chars), 128, input_length=revlen)(reverse_input)
model_2 = layers.Conv1D(64, 5, activation='relu')(model_2)
model_2 = layers.MaxPooling1D(2)(model_2)
model_2 = layers.Conv1D(64, 3, activation='relu')(model_2)
model_2 = layers.GlobalMaxPooling1D()(model_2)

normal_input_2 = layers.Input(shape=(maxlen,), dtype='float32', name='normal_2')
model_3 = layers.Embedding(len(chars), 128, input_length=maxlen)(normal_input_2)
model_3 = layers.Conv1D(64, 7, activation='relu')(model_3)
model_3 = layers.MaxPooling1D(2)(model_3)
model_3 = layers.Conv1D(64, 5, activation='relu')(model_3)
model_3 = layers.GlobalMaxPooling1D()(model_3)

combine = layers.concatenate([model_1, model_2, model_3], axis=-1)
output = layers.Dense(len(chars), activation='softmax')(combine)

model = keras.models.Model([normal_input, reverse_input, normal_input_2], output)
optimizer = keras.optimizers.RMSprop(lr=1e-3)
model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer)
model.fit({'normal': x, 'reverse': reverse_x, 'normal_2': x},
          y, epochs=200, batch_size=1024, verbose=2)


During generation we need to repeatedly remove an element from the tail of the reversed context and insert one at its head, so instead of a list we use deque from the collections module, which supports efficient operations at both ends.
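A minimal standalone illustration of the two deque operations used here:

from collections import deque

d = deque(['a', 'b', 'c'])
d.pop()            # O(1) removal from the right end -> deque(['a', 'b'])
d.appendleft('z')  # O(1) insertion at the left end  -> deque(['z', 'a', 'b'])
print(d)

The generation function for this model: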

from collections import deque

def write_3(model, temperature, word_num, begin_sentence):
    gg = begin_sentence[:30]
    reverse_gg = deque(begin_sentence[31:51][::-1])
    print(gg, end='/// ')
    for _ in range(word_num):
        sampled = np.zeros((1, maxlen))
        reverse_sampled = np.zeros((1, revlen))
        for t, char in enumerate(gg):
            sampled[0, t] = char_indices[char]
        for t, reverse_char in enumerate(reverse_gg):
            reverse_sampled[0, t] = char_indices[reverse_char]
        preds = model.predict({'normal': sampled, 'reverse': reverse_sampled, 'normal_2': sampled},
                              verbose=0)[0]
        if temperature is None:
            next_word = chars[np.argmax(preds)]
        else:
            next_index = sample(preds, temperature)
            next_word = chars[next_index]

        reverse_gg.pop()               # slide the reversed context: drop one character from the right end
        reverse_gg.appendleft(gg[0])   # and push the character leaving the forward window onto the left end
        gg += next_word
        gg = gg[1:]
        sys.stdout.write(next_word)
        sys.stdout.flush()


The initial text is:

begin_sentence = whole[70000: 70100]
print(begin_sentence[:30] + " //" + begin_sentence[30] + "// " + begin_sentence[31:51])
# ,命掌生死簿判官:“急取簿子来,看陛下阳寿天禄该有几何?”崔 //判// 官急转司房,将天下万国国王天禄总簿,先逐


Generation without temperature:

write_3(model, None, 500, begin_sentence)

, Life and Death book Judge: "Hurry to take notebooks to see your Majesty due Time Heaven Lu should have geometry?" Cui///Judge anxious Soul has been sent out of the palace, the waist bowed, into the inside, mixed with Tang Tao: "Babe, you asked me:" Now that is to learn, other than the monk on the back of the mountain, than the gold Hoop Iron bar, become an old demon children. "The Sand Monk Way:" Both so, you two each bosom a mouthful, although wear this brocade cloth straight piecemeal, a meal palladium built a pour body, pour in that hole, call: "Small!" "The Sand Monk Way:" elder brother, this little demon, is not a good person. "That nerd each one to his general appearance, will he a count, drink voice shouted:" There came! "That strange smile:" This monkey is what people are! Where did you come from? "That nerd each bite to hate way:" This monkey! You see that: sneer sneer, can sigh write. "The nerd dared not ask him, but cried," little men! "That nerd is a very good enlightenment:" You This old man, have reason? "The nerd taught them to arrange for the fasting." The elder asked: "Goku, you wait here, this occasion." "The Monk Way:" You this monk, you sit there, waiting for me to hang under the tree for you, only listen to shout shouted: "Elder brother, do not go!" "Diabolism:" I'll go with you. And the monk, with two of Arhat, was in the hole, and heard the Lord not knowing. "Sanzang Way:" I am The Apprentice of the Tang Dynasty monk. "Where did you come from?" cried the When the man looked up, he heard only the sound of the wind, and he was not the man of the Fairy family. Drifting saw a big surprise, dare not to head, and asked: "Who are you?" You go. "That nerd is home already know, urgent turn to come, sand monk others easily may, dare not on sand, dare not high called


Generation with temperature = 0.5:

write_3(model, 0.5, 500, begin_sentence)

, Life and Death book Judge: "Hurry to take notebooks to see your Majesty due Time Heaven Lu should have geometry?" Cui///judge then search for love, holding a kam robe, bad heroic appearance. This Monkey King also dare not long to stop, but will this sentiment on all fetal, how to get disaster still law to go down demon staff! "It is that: the channel of the Golden Fire:" The holy Sage need not dare, waiting for you to hang my disciples. "The nerd took off his hand and taught him to carry it, and it was easy to hear the water ring and get down on his knees." The sage heard this remark, namely Zoonosis teaches: "Mo busy!" MO, Move! Please scattered fire, dead and alive in the interest. "Two people smell words, and fast cloud step and sit." The great saint surprised the way:

"Goku, Baby Baby Baby, no very baby, you come here, is blessed enough." The Goblin took the runner, thought to himself: "That nerd also do not know, to women, I am afraid Ah, the original is black bioasia old people, must be ghosts." It's violet! "Walker laughed:" Nerd! Do not know, you still cry a feeding, we do not have, I also rare, if he will come, I am good to throw flowers, inevitably hit a what? "Walker way:" The weak! No! I didn't hit you, did you see him sitting there? The nerd fell to the ground, and asked me if I had some sperm, and I knew what was a false head? "Eight commandments:" I know, although not good, but not good? I'll go to Huazai mouth. "What the fools say, they must live," cried the mother-in-law, "you see what the men there are, there will come, I speak to you." "Good Monkey King, he drove Cloud head, will be a longitudinal, jump on the peak, the way:" This guy Hugh nonsense! You are there Huazai, you hang us in the hole, we press, the body lifted, tight rope, "The Demon king exultation, namely



From these three examples we can see that the models are able to generate individually meaningful sentences, but as soon as several sentences are read together, the sense of plot or dialogue falls apart. This makes sense: what we are doing here is essentially sampling from a statistical model. A whole book usually contains many similar sentences, but those sentences appear in very different contexts; the model samples from those contexts at random, so the generated text naturally drifts further and further off course.

Statistics-based natural language generation is thus very different from the way humans understand language: the model itself does not understand what the words in context actually mean. For the same reason, though, it occasionally produces unexpected expressions that feel strangely inspired, which is probably what people mean by an "unusual train of thought". Of course, there is another obvious limitation, the small training corpus, but addressing that would first require better hardware.




Full code




Reference:
    • The Beauty of Mathematics
    • Deep Learning with Python




