Adaptive Color Attributes for Real-Time Visual Tracking (translation)

Source: Internet
Author: User
Tags: color representation

This article is a translation of a paper on target tracking, shared for learning and reference.
Adaptive Color Attributes for Real-Time Visual Tracking
Abstract
Visual tracking is a challenging problem in computer vision. Most state-of-the-art visual trackers either rely on intensity information alone or use a simple color representation (such as the three RGB channels) to describe the image. In contrast, for object recognition and detection, sophisticated color features combined with intensity information have been shown to provide excellent performance. Because of the complexity of the tracking problem, the color features used should be computationally efficient, possess some degree of photometric invariance, and at the same time retain high discriminative power.
This article investigates the contribution of color attributes within a tracking-by-detection framework, and our results show that color attributes are highly effective for visual tracking. We further propose an adaptive low-dimensional variant of the color attributes. The proposed approach is evaluated on 41 challenging color sequences, improving the intensity-based baseline by 24% in median distance precision. We also show that our approach outperforms state-of-the-art trackers while running at more than 100 fps.
1. Introduction
Visual target tracking, in which the goal is to locate a target in an image sequence (the target is specified in advance, i.e. given in the first frame), is one of the most challenging problems in computer vision. It plays a vital role in many applications, such as human-computer interaction, video surveillance, and robotics. Tracking is complicated by factors such as illumination changes, occlusion, background clutter, and deformation of the tracked target. This article investigates how color information can be used to reduce the impact of these problems.
Most state-of-the-art trackers rely either on image intensity (grayscale values) or on texture information [7,27,11,5,20]. Although much progress has been made in visual tracking, the use of color information has been limited to simple color space transformations [19,17,18]. In contrast to visual tracking, sophisticated and carefully designed color features have shown excellent results in object detection, but exploiting color for visual tracking is difficult: color measurements can vary significantly over an image sequence due to changes in illumination, shadows, and the geometric relation between camera and target. The robustness of color transformations under such conditions has been evaluated for image classification and action recognition, so we use those existing evaluation criteria to assess color transformations for the tracking problem.
Approaches to visual tracking fall into two categories, called generative methods and discriminative methods. Generative methods search for the image region most similar to the target, based either on template matching or on a subspace model. Discriminative methods aim to distinguish the target from the background, casting tracking as a binary classification problem. Generative methods use only the target's own information, while discriminative methods use both target and background information to find a decision boundary that separates the target; this strategy is employed by many algorithms in the tracking-by-detection framework, which train an online classifier using the target and its surrounding background. In a recent benchmark evaluation of trackers [25], the CSK tracker ranked among the top ten while running at a very high speed. The CSK tracker exploits the fact that the dense sampling strategy, i.e. processing many sub-windows of a frame, can be expressed with circulant matrices. Because of its good performance and speed, our approach builds on the CSK tracker.
Contributions: This article extends the CSK tracker with color attributes, which have obtained excellent results in object recognition due to their balance between photometric invariance and discriminative power [14]. The model update scheme of CSK is suboptimal for multi-channel signals, so we adapt the original update scheme and demonstrate the effectiveness of the new scheme for multi-channel tracking in our experiments. The high dimensionality of the color attributes increases the computational cost, which may limit the tracker's use in real-time applications and robotics. We therefore propose an adaptive dimensionality reduction method that reduces the 11-dimensional color attributes to only 2 dimensions, so that the frame rate of the tracker stays above 100 fps without a significant loss in accuracy. We perform extensive evaluations against other color representations and show that the color attributes give the best results. Finally, we present a comprehensive evaluation on 41 image sequences with excellent results; Figure 1 shows that our approach outperforms other state-of-the-art methods.

2. The CSK Tracker
Our approach is based on the CSK tracker [9], which provided the highest speed among the top ten trackers in a recent evaluation [25]. The CSK tracker learns a kernelized least-squares classifier of the target from a single image patch. The key to its speed is that it exploits the circulant structure that arises from the periodic assumption on the local image patch. We give a brief overview of [9] here.
The classifier is trained with a single M x N grayscale image patch x, centered on the target. All cyclic shifts x_{m,n}, (m,n) in {0, ..., M-1} x {0, ..., N-1}, are used as training samples for the classifier, each labeled with a Gaussian function y(m,n). The classifier is trained by minimizing the structural risk (1):

\epsilon = \sum_{m,n} \left| \langle \phi(x_{m,n}), w \rangle - y(m,n) \right|^2 + \lambda \langle w, w \rangle \qquad (1)
Here phi denotes the mapping to the Hilbert space induced by the kernel kappa, with inner product defined as <phi(f), phi(g)> = kappa(f, g), and lambda >= 0 is a regularization parameter. The cost function (1) is minimized by w = sum_{m,n} a(m,n) phi(x_{m,n}), where the DFT of the coefficients a is given by (2):

A = \frac{Y}{U_x + \lambda} \qquad (2)
Here F denotes the discrete Fourier transform (DFT) operator, and we write the DFT of a quantity with the corresponding uppercase letter, e.g. Y = F(y).
Similarly, U_x = F(u_x), where u_x(m,n) = kappa(x_{m,n}, x) is the output of the kernel function. Equation (2) holds if the kernel is shift invariant, i.e. kappa(f_{m,n}, g_{m,n}) = kappa(f, g) for all m, n, f and g. This is the case for the Gaussian radial basis function (RBF) kernel used by the CSK tracker.
The first step in detection is to crop an M x N grayscale patch z in the new frame. The detection scores are computed as the inverse Fourier transform of the product of A and the transformed kernel output of the example patch z and the learned target appearance, which is a grayscale template learned over multiple frames. The target position in the new frame is estimated by finding the location of the highest score. The work in [9] shows that the kernel outputs can be computed efficiently using the FFT; we refer to [9] for details.
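To make the training and detection steps concrete, the following is a minimal single-channel sketch in Python/NumPy. This is not the authors' code; all names are illustrative, and it assumes the Gaussian RBF kernel and Gaussian labels described above. Later sketches in this article reuse this import.

```python
import numpy as np

def gaussian_kernel_correlation(x, z, sigma):
    # Kernel outputs kappa(z_{m,n}, x) for all cyclic shifts (m, n) at once,
    # computed in the Fourier domain for a Gaussian RBF kernel.
    cross = np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(z))))
    # Squared L2 distance between x and each shift of z; clamp at 0 to guard
    # against round-off, and normalize by the number of pixels.
    dist2 = np.maximum((x ** 2).sum() + (z ** 2).sum() - 2.0 * cross, 0.0)
    return np.exp(-dist2 / (sigma ** 2 * x.size))

def train(x, y, sigma, lam):
    # Eq. (2): classifier coefficients in the Fourier domain, A = Y / (U_x + lambda).
    ux = gaussian_kernel_correlation(x, x, sigma)
    return np.fft.fft2(y) / (np.fft.fft2(ux) + lam)

def detect(A, x_hat, z, sigma):
    # Detection scores F^-1(A * U_z); the argmax gives the estimated target shift.
    uz = gaussian_kernel_correlation(x_hat, z, sigma)
    scores = np.real(np.fft.ifft2(A * np.fft.fft2(uz)))
    return np.unravel_index(np.argmax(scores), scores.shape)
```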
3. Color Visual Tracking
To incorporate color information, we extend the CSK tracker to multi-dimensional color features by defining an appropriate kernel; in particular, we extend the Gaussian RBF kernel to multi-dimensional features. The target template extracted from the image is represented by a function x, where x(m,n) is the D-dimensional feature vector at position (m,n). In the conventional CSK tracker, the grayscale template is preprocessed by multiplication with a Hann window; we apply the same procedure to each feature channel. The final representation is obtained by stacking the luminance and color channels. A sketch of the resulting multi-channel kernel computation is given below.
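Only the cross term of the Gaussian kernel changes in the multi-dimensional case: the per-channel correlations are summed before the inverse transform. Continuing the illustrative sketch from Section 2:

```python
def multichannel_kernel_correlation(x, z, sigma):
    # x, z: (M, N, D) maps of stacked luminance and color channels.
    X = np.fft.fft2(x, axes=(0, 1))
    Z = np.fft.fft2(z, axes=(0, 1))
    # Sum the per-channel correlations, then transform back.
    cross = np.real(np.fft.ifft2((X * np.conj(Z)).sum(axis=2)))
    dist2 = np.maximum((x ** 2).sum() + (z ** 2).sum() - 2.0 * cross, 0.0)
    return np.exp(-dist2 / (sigma ** 2 * x.size))
```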
3.1 Color Attributes for Visual Tracking
The choice of color features is crucial for visual tracking. Recently, color attributes [23] have obtained excellent results in object recognition, object detection, and action recognition [14,13,12]. Here, we investigate them for object tracking. Color attributes, or color names (CN), are linguistic color labels that humans assign to the colors of the world. The linguistic study of Berlin and Kay [3] concluded that the English language contains eleven basic color terms: black, blue, brown, grey, green, orange, pink, purple, red, white, and yellow. In computer vision, color naming is the operation of assigning these linguistic labels to RGB observations. We use the mapping of [23] (learned automatically from images retrieved with Google image search) to map RGB values to a probabilistic 11-dimensional color representation whose eleven entries sum to 1.
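In practice this mapping is a simple table lookup. The sketch below assumes a precomputed table in the spirit of [23], mapping 32 x 32 x 32 quantized RGB cells to the 11 color-name probabilities; the exact table layout is an assumption of this illustration.

```python
def rgb_to_color_names(img, w2c):
    # img: (M, N, 3) uint8 RGB image.
    # w2c: (32*32*32, 11) table of color-name probabilities, indexed by the
    # quantized R, G and B values (256 levels reduced to 32 bins per channel).
    r = img[..., 0].astype(np.int64) // 8
    g = img[..., 1].astype(np.int64) // 8
    b = img[..., 2].astype(np.int64) // 8
    idx = r + 32 * g + 32 * 32 * b
    return w2c[idx]  # (M, N, 11); each pixel's 11 probabilities sum to 1
```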
The conventional CSK tracker normalizes the grayscale values to the interval [-0.5, 0.5]; this counters the distortion caused by the windowing operation and also affects the L2 distance used in the kernel. In the first normalization alternative, we simply subtract 1/11 from each color name bin; since the color name values sum to one, this maps the color names to a 10-dimensional subspace in which the components sum to zero. In the second alternative, we additionally project the color names onto an orthonormal basis of this 10-dimensional subspace, which centers the color attributes and reduces the dimensionality from 11 to 10. The particular choice of orthonormal basis has no effect on the CSK tracker, as discussed in Section 3.3. We found the second technique to give the best performance, and therefore use it to normalize the color attributes. A sketch of this projection follows.
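The orthonormal basis chosen below is one arbitrary choice, which, as noted above, does not affect the tracker:

```python
def normalize_color_names(cn):
    # cn: (M, N, 11) color-name probabilities.
    centered = cn - 1.0 / 11.0              # entries now sum to zero per pixel
    ones = np.full((11, 1), 1.0 / np.sqrt(11.0))
    # Any orthonormal basis of the orthogonal complement of `ones` will do;
    # here we take it from the SVD of the projection onto that complement.
    U, _, _ = np.linalg.svd(np.eye(11) - ones @ ones.T)
    B = U[:, :10]                           # (11, 10) orthonormal basis
    return centered @ B                     # (M, N, 10) normalized features
```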
3.2 Establishing a robust classifier for color features
For visual tracking to be robust to appearance changes, the target model must be updated over time. In the CSK tracker, the model consists of the learned target appearance and the transformed classifier coefficients A, and these are computed from the current appearance only. The tracker then updates the classifier coefficients by ad hoc linear interpolation: A_p = (1 - gamma) A_{p-1} + gamma A, where p is the index of the current frame and gamma is a learning rate parameter. This leads to suboptimal performance, since not all previous frames are used simultaneously to update the current model. In contrast to CSK, the MOSSE tracker [4] employs a more robust update scheme in which all previous frames are taken into account when the current model is computed. However, that scheme applies only to linear kernels and one-dimensional features. Here, we generalize this update scheme to kernelized classifiers and multi-dimensional color features.
To update the classifier, we consider the appearances of the target extracted from all frames, from the first up to the current frame p. The cost function is constructed as the weighted mean squared error over these frames. To keep the training and detection tasks simple, the solution is restricted to a single set of classifier coefficients a, with one constraint per frame j. The total cost is:
\epsilon = \sum_{j=1}^{p} \beta_j \left( \sum_{m,n} \left| \langle \phi(x^j_{m,n}), w_j \rangle - y_j(m,n) \right|^2 + \lambda \langle w_j, w_j \rangle \right), \quad w_j = \sum_{m,n} a(m,n)\, \phi(x^j_{m,n}) \qquad (3)
This cost function is minimized in the Fourier domain by:
A_p = \frac{\sum_{j=1}^{p} \beta_j Y_j \overline{U^j_x}}{\sum_{j=1}^{p} \beta_j U^j_x \left( U^j_x + \lambda \right)} \qquad (4)
As in (2), U^j_x = F(u^j_x) denotes the Fourier transform of the kernel output of frame j. The weights beta_j are set using a learning rate parameter gamma. The complete model is updated with (5): the numerator A^N_p and denominator A^D_p of (4) are updated separately in (5a) and (5b), while the object appearance is updated as in the conventional CSK tracker (5c).
A^N_p = (1 - \gamma) A^N_{p-1} + \gamma Y_p \overline{U^p_x} \qquad (5a)
A^D_p = (1 - \gamma) A^D_{p-1} + \gamma U^p_x \left( U^p_x + \lambda \right) \qquad (5b)
\hat{x}_p = (1 - \gamma) \hat{x}_{p-1} + \gamma x_p \qquad (5c)
This scheme allows the model to be updated without storing all previous appearances: only the current model, consisting of A^N_p, A^D_p and the learned appearance, needs to be kept. The model is updated with (5) in each new frame, which keeps the additional computational cost negligible and does not affect the speed of the tracker. As in the conventional CSK tracker, the learned appearance is used to compute the detection scores in the next frame p + 1. A sketch of this update follows.
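Continuing the earlier sketches, the stored model reduces to the numerator, the denominator, and the learned appearance. Here `yf` denotes the DFT of the fixed label function, and the dictionary keys are illustrative names:

```python
def update(model, x_new, yf, sigma, lam, gamma):
    # Eqs. (5a)-(5c): update numerator, denominator and appearance with
    # learning rate gamma; no previous frames need to be stored.
    ux = multichannel_kernel_correlation(x_new, x_new, sigma)
    uxf = np.fft.fft2(ux)
    model['num'] = (1 - gamma) * model['num'] + gamma * yf * np.conj(uxf)  # (5a)
    model['den'] = (1 - gamma) * model['den'] + gamma * uxf * (uxf + lam)  # (5b)
    model['x']   = (1 - gamma) * model['x']   + gamma * x_new              # (5c)
    model['A'] = model['num'] / model['den']  # coefficients used for detection
    return model
```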
3.3 Low-Dimensional Adaptive Color Attributes
The computation time of the CSK tracker scales linearly with the feature dimension, which is a problem for high-dimensional color features such as the color attributes. We therefore propose an adaptive dimensionality reduction technique that preserves useful information while drastically reducing the number of dimensions, thereby providing a significant speed gain.
We formalize the problem as finding a suitable dimensionality reduction mapping for the current frame p by minimizing a cost function of the form:
\epsilon^{tot}_p = \alpha_p \epsilon^{data}_p + \sum_{j=1}^{p-1} \alpha_j \epsilon^{smooth}_j \qquad (6)
Here epsilon^data_p is a data term that depends only on the current frame, while epsilon^smooth_j is a smoothness term associated with each earlier frame j. The terms are weighted by the constants alpha_j.

To reduce the dimensionality of the appearance, the technique finds a D1 x D2 projection matrix B_p with orthonormal column vectors. The matrix B_p is used to compute the new D2-dimensional feature map of an appearance by the mapping x~(m,n) = B_p^T x(m,n). The data term consists of the reconstruction error of the current appearance:
\epsilon^{data}_p = \frac{1}{MN} \sum_{m,n} \left\| \hat{x}_p(m,n) - B_p B_p^T \hat{x}_p(m,n) \right\|^2 \qquad (7)
Minimizing the data term (7) corresponds to performing principal component analysis (PCA) on the current appearance. However, updating the projection matrix using (7) alone degrades the quality of the target model, since the classifier coefficients learned in previous frames become outdated.
To obtain a robust projection matrix, we add the smoothness terms in (6). Let B_j be the projection matrix computed for an earlier frame j. If the column vectors of the new projection matrix B_p do not span the same feature space as those of B_j, a cost is added by the smoothness term. Since the kernel is based on inner products and distances, it is invariant to an orthonormal change of basis; hence only the spanned subspace matters, and when one basis spans the same feature space as another, the particular choice of basis vectors is unimportant. The smoothness term is:
\epsilon^{smooth}_j = \sum_{k=1}^{D_2} \lambda^{(k)}_j \left\| b^{(k)}_j - B_p B_p^T b^{(k)}_j \right\|^2 \qquad (8)
Equation (8) measures the reconstruction error of the earlier basis vectors b^(k)_j in the new basis B_p. Each basis vector is weighted by a constant lambda^(k)_j.
With the data term (7) and the smoothness terms (8), the total cost (6) is minimized under the orthonormality constraint B_p^T B_p = I. This is done by an eigenvalue decomposition of the D1 x D1 matrix R_p = alpha_p C_p + sum_j alpha_j B_j Lambda_j B_j^T, where C_p is the covariance matrix of the current appearance and Lambda_j is the diagonal matrix of the weights lambda^(k)_j. The projection matrix B_p is formed by the D2 eigenvectors with the largest eigenvalues, and the new weights are set from the corresponding eigenvalues. The detailed steps are described in Algorithm 1; a sketch follows below.
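The following sketch illustrates one way to realize this update. The recursive weighting with a single rate `mu` is an assumption of this illustration (the exact weighting scheme follows Algorithm 1 in the original paper):

```python
def update_projection(x_hat, prev, mu, d2=2):
    # Choose a new D1 x D2 projection matrix as the leading eigenvectors of
    # R = mu * C + (1 - mu) * B_prev @ Lam_prev @ B_prev.T  (data + smoothness).
    d1 = x_hat.shape[-1]
    X = x_hat.reshape(-1, d1)
    X = X - X.mean(axis=0)
    C = X.T @ X / X.shape[0]                 # covariance of current appearance
    R = mu * C
    if prev is not None:                     # smoothness term from earlier basis
        B_prev, lam_prev = prev
        R = R + (1 - mu) * (B_prev * lam_prev) @ B_prev.T
    w, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
    B = V[:, -d2:]                           # the d2 leading eigenvectors
    return B, w[-d2:]                        # new basis and its weights

# Features are then projected before the kernel computation:
# x_low = x_high @ B   (an M x N x 2 map, used exactly like the full features)
```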

4. Experiments
Here we present our experimental results. First, we evaluate color features proposed for object recognition on the visual tracking task. Second, we evaluate the proposed learning scheme for color features. Third, we evaluate the adaptive low-dimensional color attributes. Finally, we provide quantitative and attribute-based comparisons with state-of-the-art trackers.
4.1 Experimental Setup

Our approach is implemented in MATLAB. The experiments are performed on an Intel CPU running at 2.66 GHz. We use the same parameter values as in the conventional CSK tracker [9]. The adaptive learning rate parameter μ for the color attributes is fixed to 0.15 for all sequences. Datasets: we use all 35 color sequences from the recent benchmark evaluation of tracking methods [25]. In addition, we use six other color sequences, namely Kitesurf, Shirt, Surfer, Board, Stone, and Panda. The sequences used in our experiments contain challenges such as motion blur, illumination variation, scale variation, heavy occlusion, in-plane and out-of-plane rotation, deformation, background clutter, and low resolution.

Evaluation methodology: To validate performance, we follow the protocol of [25]. Results are reported using three evaluation metrics: center location error (CLE), distance precision (DP), and overlap precision (OP). CLE is the average Euclidean distance between the estimated and ground-truth center positions of the target. DP is the fraction of frames in a sequence in which the center location error is below a certain threshold; we report DP at a threshold of 20 pixels [9,25]. The results are summarized using the median CLE and DP over the 41 sequences. We also report the median speed of the trackers in frames per second (FPS). Using the median provides a robust estimate of overall performance.
We also present precision and success plots [25]. The precision plot shows distance precision over a range of thresholds; trackers are ranked by the DP score at 20 pixels. The success plot shows overlap precision (OP) over a range of thresholds, where OP is defined as the percentage of frames in which the bounding-box overlap exceeds a threshold t in [0, 1]. In the success plot, trackers are ranked by the area under the curve (AUC). Both precision and success plots show the mean scores over all sequences. A sketch of these metric computations follows.
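For reference, the three metrics can be computed as in the sketch below. The inputs are hypothetical arrays, boxes are given as (x, y, w, h), and the 0.5 overlap threshold is one common choice rather than a value stated above:

```python
def center_error(pred_c, gt_c):
    # Per-frame Euclidean distance between estimated and ground-truth centers.
    return np.linalg.norm(pred_c - gt_c, axis=1)

def iou(a, b):
    # Bounding-box overlap (intersection over union) for (x, y, w, h) boxes.
    x1 = np.maximum(a[:, 0], b[:, 0])
    y1 = np.maximum(a[:, 1], b[:, 1])
    x2 = np.minimum(a[:, 0] + a[:, 2], b[:, 0] + b[:, 2])
    y2 = np.minimum(a[:, 1] + a[:, 3], b[:, 1] + b[:, 3])
    inter = np.maximum(x2 - x1, 0) * np.maximum(y2 - y1, 0)
    return inter / (a[:, 2] * a[:, 3] + b[:, 2] * b[:, 3] - inter)

def sequence_metrics(pred_c, gt_c, pred_box, gt_box, dp_thresh=20, op_thresh=0.5):
    err = center_error(pred_c, gt_c)
    cle = err.mean()                          # center location error (pixels)
    dp = 100.0 * (err <= dp_thresh).mean()    # distance precision at 20 px
    op = 100.0 * (iou(pred_box, gt_box) >= op_thresh).mean()  # overlap precision
    return cle, dp, op
```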
4.2 Color Features
In addition to the color-attribute-based tracker, we evaluate a number of other widely used color representations. These color features differ in their degree of photometric invariance and discriminative power; one of them is a biologically inspired color representation.
RGB: As a baseline, we use the standard 3-channel RGB color space.
LAB: The LAB color space is perceptually uniform, meaning that colors at equal distances in the space are also perceptually equally far apart.
YCbCr: YCbCr is approximately perceptually uniform and is commonly used in image compression algorithms.

rg: The rg color channels are the first photometrically invariant representation we consider. They are computed as (r, g) = (R/(R+G+B), G/(R+G+B)) and are invariant to shadow and shading effects (see the code sketch after this list).
HSV: In the HSV color space, H and S are invariant to shadow-shading, and H is additionally invariant to specularities.

Figure 2: Comparison of the original update scheme with our proposed learning method, in median distance precision (DP) (%). Our approach improves the performance for most of the color representations; the best result is obtained using the proposed learning scheme.

Table 2: Comparison of the adaptive color names (CN2) with the color names (CN). We report median DP (%) and median CLE (pixels). Note that CN2 provides a large gain in speed with only a minimal loss of precision.
Opponent: The opponent representation is based on the image transformation

\begin{pmatrix} O1 \\ O2 \\ O3 \end{pmatrix} = \begin{pmatrix} (R - G)/\sqrt{2} \\ (R + G - 2B)/\sqrt{6} \\ (R + G + B)/\sqrt{3} \end{pmatrix}

This representation is invariant to specularities.
C: The C color representation adds photometric invariance with respect to shadow-shading to the opponent descriptor, by normalizing the opponent channels with the intensity.
HUE: The hue is a 36-dimensional histogram representation. To counter the instability of the hue, its histogram entries are weighted by the saturation. The representation is invariant to shadow-shading and specularities.
Opp-angle: The opp-angle is a 36-dimensional histogram representation [22], based on the opponent angle ang^O_x = \arctan(O1_x / O2_x), where the subscript x denotes the spatial derivative. The representation is invariant to shadow-shading, specularities, and blur.
SO: Finally, we consider the biologically inspired descriptor of Zhang et al. [26]. This color representation is based on center-surround filters applied to the opponent color channels.
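To make two of the photometric invariances above concrete, here is a sketch of the rg and opponent transforms in their standard formulations (the constant factors follow the common definition):

```python
def rgb_to_rg(img):
    # Normalized rg channels: invariant to shadow-shading (intensity scaling).
    s = img.sum(axis=2, keepdims=True).astype(np.float64)
    s[s == 0] = 1.0                       # guard against division by zero
    return img[..., :2] / s               # (r, g) = (R, G) / (R + G + B)

def rgb_to_opponent(img):
    # Opponent color space; O1 and O2 are invariant to specular highlights.
    R, G, B = (img[..., i].astype(np.float64) for i in range(3))
    o1 = (R - G) / np.sqrt(2.0)
    o2 = (R + G - 2.0 * B) / np.sqrt(6.0)
    o3 = (R + G + B) / np.sqrt(3.0)       # intensity direction
    return np.stack([o1, o2, o3], axis=-1)
```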

Table 3: A quantitative comparison of our trackers with 15 state-of-the-art methods on 41 sequences. The results are reported as median distance precision (DP) and median center location error (CLE). We also provide the median frames per second (FPS). The best two results are shown in red and blue fonts. The proposed CN and CN2 methods achieve the best performance; note that our CN2 method obtains the best speed while maintaining competitive precision.
4.3 Experiment 1: Evaluation of Color Features
Table 1 shows the results for the color features discussed in Section 4.2. All color representations are normalized appropriately. An intensity channel is added to the color representations that do not already contain one; the intensity channel is computed with MATLAB's rgb2gray function.
The conventional intensity-based CSK tracker obtains a median distance precision (DP) of only 54.5%. The 36-dimensional hue and opp-angle representations give inferior results. The best result is obtained with the 10-dimensional color names (CN), which improve on the conventional CSK tracker by a significant 19.5%. Similarly, the grayscale CSK tracker gives a median center location error (CLE) of 50.3 pixels, while the best result, a median CLE of 16.9 pixels, is again obtained with the color names.
Overall, combining color with intensity improves performance. However, a careful choice of color features is crucial to achieve a significant gain; the best results are obtained with CN.
4.4 Experiment 2: Robust Update Scheme
This experiment shows the impact of the proposed update scheme on multi-channel color features. As before, the color features are combined with the intensity channel. Figure 2 shows the gain in median distance precision obtained with the proposed update scheme. For 9 of the 11 color representations evaluated, the proposed update scheme improves tracking performance. The improvement is especially apparent for the high-dimensional hue and opp-angle representations. Again, the best performance is obtained with CN, for which the new update scheme increases the median DP from 74% to 81.4%.
4.5 Experiment 3: Low-Dimensional Adaptive Color Attributes
As mentioned earlier, computational cost is a critical factor for most real-world applications; low computational cost with no significant loss of accuracy is desirable. To this end, we proposed the low-dimensional adaptive color attributes.

The dimensionality reduction technique introduced in Section 3.3 is used to compress the 10-dimensional color names to 2 dimensions. Table 2 shows the results of the proposed low-dimensional adaptive color attributes (CN2) compared with the color names (CN). The results clearly show that CN2 provides a significant speed gain while remaining competitive in accuracy.

4.6 Comparison with State-of-the-Art Trackers
We compare our approach with 15 different state-of-the-art trackers: CT [27], TLD [11], DFT [20], EDFT [6], ASLA [10], L1APG [2], CSK [9], SCM [28], LOT [18], CPF [19], CXT [5], Frag [1], Struck [7], LSHT [8], and LSST [24]. All trackers are evaluated using their available code or binaries, except LSST, LSHT, and EDFT.

Table 3 shows the comparison with the state-of-the-art tracking methods on the 41 challenging sequences. We also report the median speed in frames per second (FPS). The best two results are shown in red and blue fonts, respectively. Our CN method significantly improves on the baseline grayscale CSK tracker, with a relative reduction of 72% in median CLE. In addition, our CN tracker improves the median DP of the baseline from 54.5% to 81.4%.
Struck, which obtained the best performance in a recent evaluation [25], is also outperformed in our evaluation. Despite its simplicity, our CN tracker outperforms Struck by more than 10% in DP while operating at a more than 7 times higher frame rate. Finally, the results also show that the CN2 tracker further increases the speed (to over 100 FPS) with no significant loss in median precision.

Figure 3 shows the precision and success plots, containing the mean distance precision and overlap precision over all 41 sequences. The values shown in the legends are the mean DP at 20 pixels and the AUC, respectively. For clarity, only the top 10 trackers are shown. In the precision plot, the two best methods are the proposed CN and CN2. At the 20-pixel threshold, our CN method outperforms Struck by 3.5% and the baseline CSK by 14.8% in mean distance precision. It is worth mentioning that the baseline CSK tracker does not estimate scale changes; despite this inherent limitation, our methods provide promising results in overlap precision (success plot) compared with the state-of-the-art methods. Figure 4 shows a frame-by-frame comparison of our CN2 tracker with existing trackers in terms of center location error on 5 example sequences. On these sequences, our approach is more robust than the other trackers.
Robustness to initialization: It is well known that visual trackers can be sensitive to initialization. To evaluate robustness to initialization, we follow the protocol proposed in the benchmark evaluation [25]. Trackers are evaluated by initializing them in different frames (temporal robustness, TRE) and at different positions (spatial robustness, SRE). For SRE, 12 different initializations are evaluated for each sequence, while for TRE each sequence is divided into 20 segments. For the TRE and SRE experiments, we select the top 5 existing trackers from the precision and success plots (Figure 3); Figure 5 shows the comparison of our methods with these 5 selected trackers. In both evaluations, our CN and CN2 trackers obtain the best results.

Figure 6: Precision plots for four different attributes: illumination variation, in-plane rotation, motion blur, and background clutter (best viewed at high resolution). The value in each title indicates the number of videos associated with the respective attribute. The two methods proposed in this paper perform favorably against the state-of-the-art algorithms.

We also evaluated the trackers using the VOT evaluation methodology, which is similar to the TRE criterion. Over the 41 sequences, the average number of tracking failures of our method is lower than that of Struck (1.05).
Attribute-based evaluation: Several factors can affect the performance of a visual tracker. In the recent benchmark evaluation [25], the sequences are annotated with 11 different attributes, namely: illumination variation, scale variation, occlusion, deformation, motion blur, fast motion, in-plane rotation, out-of-plane rotation, out-of-view, background clutter, and low resolution. We compare with the other methods on the 35 sequences annotated with these attributes [25]. Our method outperforms the others on 7 of the 11 attributes: background clutter, motion blur, deformation, illumination variation, in-plane rotation, out-of-plane rotation, and occlusion.
Figure 6 shows precision plots for four of the different attributes; for clarity, only the top 10 trackers are shown in each plot. On sequences with illumination variation, the results provided by CN and CN2 are superior to the existing methods. This is due to the fact that the color attributes possess a degree of photometric invariance while preserving discriminative power. Overall, our trackers provide the best results in a number of scenarios.
5. Conclusion
We proposed the use of color attributes for tracking. We extended the learning scheme of the CSK tracker to multi-channel color features, and additionally proposed a low-dimensional adaptive extension of the color attributes. Several existing trackers provide good accuracy, but at the expense of speed; yet speed is a key factor for many real-world applications, such as robotics and real-time surveillance. Our approach maintains state-of-the-art accuracy while operating at over 100 FPS, which makes it especially suitable for real-time applications.

Although color was often used in the early tracking literature, most recent works rely only on simple color transformations. This article demonstrates the importance of a carefully chosen color transformation, and we hope that this work encourages researchers to regard color as a standard component in the design of trackers.
Acknowledgements: This work has been supported by SSF through the project CUAS, by VR through the project ETT, by the Strategic Area for ICT research ELLIIT, and by CADICS.

References

[1] A. Adam, E. Rivlin, and I. Shimshoni. Robust fragments-based tracking using the integral histogram. In CVPR, 2006.
[2] C. Bao, Y. Wu, H. Ling, and H. Ji. Real time robust L1 tracker using accelerated proximal gradient approach. In CVPR, 2012.
[3] B. Berlin and P. Kay. Basic Color Terms: Their Universality and Evolution. UC Press, Berkeley, CA, 1969.
[4] D. S. Bolme, J. R. Beveridge, B. Draper, and Y. M. Lui. Visual object tracking using adaptive correlation filters. In CVPR, 2010.
[5] T. B. Dinh, N. Vo, and G. Medioni. Context tracker: exploring supporters and distracters in unconstrained environments. In CVPR, 2011.
[6] M. Felsberg. Enhanced distribution field tracking using channel representations. In ICCV Workshop, 2013.
[7] S. Hare, A. Saffari, and P. Torr. Struck: structured output tracking with kernels. In ICCV, 2011.
[8] S. He, Q. Yang, R. Lau, J. Wang, and M.-H. Yang. Visual tracking via locality sensitive histograms. In CVPR, 2013.
[9] J. Henriques, R. Caseiro, P. Martins, and J. Batista. Exploiting the circulant structure of tracking-by-detection with kernels. In ECCV, 2012.
[10] X. Jia, H. Lu, and M.-H. Yang. Visual tracking via adaptive structural local sparse appearance model. In CVPR, 2012.
[11] Z. Kalal, J. Matas, and K. Mikolajczyk. P-N learning: bootstrapping binary classifiers by structural constraints. In CVPR, 2010.
[12] F. S. Khan, R. M. Anwer, J. van de Weijer, A. Bagdanov, A. Lopez, and M. Felsberg. Coloring action recognition in still images. IJCV, 105(3):205-221, 2013.
[13] F. S. Khan, R. M. Anwer, J. van de Weijer, A. Bagdanov, M. Vanrell, and A. Lopez. Color attributes for object detection. In CVPR, 2012.
[14] F. S. Khan, J. van de Weijer, and M. Vanrell. Modulating shape features by color attention for object recognition. IJCV, 98(1):49-64, 2012.
[15] J. Kwon and K. M. Lee. Tracking by sampling trackers. In ICCV, 2011.
[16] B. Liu, J. Huang, L. Yang, and C. Kulikowski. Robust tracking using local sparse appearance model and k-selection. In CVPR, 2011.
[17] K. Nummiaro, E. Koller-Meier, and L. J. V. Gool. An adaptive color-based particle filter. IVC, 21(1):99-110, 2003.
[18] S. Oron, A. Bar-Hillel, D. Levi, and S. Avidan. Locally orderless tracking. In CVPR, 2012.
[19] P. Perez, C. Hue, J. Vermaak, and M. Gangnet. Color-based probabilistic tracking. In ECCV, 2002.
[20] L. Sevilla-Lara and E. G. Learned-Miller. Distribution fields for tracking. In CVPR, 2012.
[21] K. van de Sande, T. Gevers, and C. G. Snoek. Evaluating color descriptors for object and scene recognition. PAMI, 32(9):1582-1596, 2010.
[22] J. van de Weijer and C. Schmid. Coloring local feature extraction. In ECCV, 2006.
[23] J. van de Weijer, C. Schmid, J. Verbeek, and D. Larlus. Learning color names for real-world applications. TIP, 18(7):1512-1524, 2009.
[24] D. Wang, H. Lu, and M.-H. Yang. Least soft-threshold squares tracking. In CVPR, 2013.
[25] Y. Wu, J. Lim, and M.-H. Yang. Online object tracking: a benchmark. In CVPR, 2013.
[26] J. Zhang, Y. Barhomi, and T. Serre. A new biologically inspired color image descriptor. In ECCV, 2012.
[27] K. Zhang, L. Zhang, and M. Yang. Real-time compressive tracking. In ECCV, 2012.
[28] W. Zhong, H. Lu, and M.-H. Yang. Robust object tracking via sparsity-based collaborative model. In CVPR, 2012.
