Android & iOS Video Recording Technology Solutions


I have repeatedly wanted to run a technical blog, and always found some excuse to put it off. This time I am determined to write: not for self-marketing, not to expand my network, just to push myself to keep learning. In recent months I will update two tracks regularly, one on short-video processing technology and one of notes on Introduction to Algorithms, which I am taking the opportunity to restudy. My energy is limited and I cannot study too much at once, so there will be only one article per week.

A warning to the many websites out there: this is an original blog post and may not be reproduced without my permission.

Of the mobile video recording solutions I could think of and have tried, there are several:

Scenario One: record with the interface provided by the system SDK.
Disadvantages:

1) You cannot change the video's aspect ratio. Phones generally record at resolutions tied to the screen, and the resolutions the system offers are usually 4:3 or close to it, which amounts to full-screen recording. To guarantee that the picture the user sees while recording matches the resulting video, you have to display the preview full screen. (iOS can crop the video by calling the system interface after recording and regenerating it, but even on an iPhone 6 the user has to wait about 10 seconds for the crop; after all, decoding + processing + encoding is a very time-consuming operation.) Scenarios 3, 4 and 5 solve this.
2) (Android only; iOS does not have this problem.) A video recorded in portrait orientation looks rotated when copied to a PC and played in most players. It is not the player rotating the video: the video itself is stored rotated. By default the phone records in landscape; no matter how you hold it, every frame that goes into the compressed stream is horizontal. So a portrait recording comes out rotated, by an angle that depends on how the user held the camera. Playback on the phone looks normal because Android reads the gravity sensor while recording and writes the angle into the video's header metadata; when you play the file through the interface the Android SDK provides, it parses that angle and rotates during playback. Most desktop players ignore the field; MPlayer and VLC, for example, do not rotate. Any open-source library that parses the container header should be able to read it. Here is FFmpeg code that reads the field:

/* Read a metadata value by key, first from the format context, then from
 * the video stream.  Returns NULL if the key is absent. */
char *get_video_metadata(const char *key, AVFormatContext *ic, AVStream *video_st)
{
    AVDictionaryEntry *entry = NULL;

    if (!ic || !key)
        return NULL;
    entry = av_dict_get(ic->metadata, key, NULL, AV_DICT_IGNORE_SUFFIX);
    if (!entry && video_st)
        entry = av_dict_get(video_st->metadata, key, NULL, AV_DICT_IGNORE_SUFFIX);
    return entry ? entry->value : NULL;
}

/**
 * Read the rotation angle of a video.
 * @param ic: the format context of the video
 * @param video_st: the video AVStream
 * @return angle in [0, 360), i.e. 0, 90, 180 or 270
 */
int get_rotation_byparam(AVFormatContext *ic, AVStream *video_st)
{
    char *value;
    int ret;

    value = get_video_metadata("rotate", ic, video_st);
    if (value == NULL)
        return 0;
    ret = atoi(value);
    if (ret != 90 && ret != 180 && ret != 270)
        ret = 0;
    dmprint("[@metadata] the rotation angle is", ret);
    return ret;
}
In essence this just reads the value stored under the "rotate" metadata key. The header metadata holds a lot more, such as resolution, duration, date, and location. Besides FFmpeg, the mp4info third-party jar on Android can also do this, and it is lightweight, much smaller than FFmpeg. (Scenarios 2, 3, 4 and 5 solve this.)
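For context, here is a minimal usage sketch of the function above, written against the same era of the FFmpeg API this post uses; the wrapper name read_rotation is mine, and error handling is trimmed:

#include <libavformat/avformat.h>

/* Open a file and return its stored rotation angle, or -1 on error. */
int read_rotation(const char *path)
{
    AVFormatContext *ic = NULL;
    AVStream *video_st = NULL;
    int angle = -1;
    int i;

    av_register_all();
    if (avformat_open_input(&ic, path, NULL, NULL) < 0)
        return -1;
    if (avformat_find_stream_info(ic, NULL) >= 0) {
        for (i = 0; i < ic->nb_streams; i++)
            if (ic->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
                video_st = ic->streams[i];
        angle = get_rotation_byparam(ic, video_st);
    }
    avformat_close_input(&ic);
    return angle;
}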

3) The files are too large. On Android the camera resolution can be set via Camera.Parameters.setPreviewSize(w, h), but only if the system actually offers that resolution; it is not freely choosable, so you must query the supported list in advance with getSupportedPictureSizes() and pick the best fit. Some phones support only a few combinations; if I remember correctly, Meizu's Meilan supports just one, 1920*1280. Add the default values of the other parameters, such as the usual 30 fps frame rate, and even though Android defaults to efficient H.264 encoding, video recorded with the system defaults is still very large: on many phones one minute of video approaches a hundred megabytes. If the video has to be uploaded over the network, this approach is basically ruled out. (Use Scenarios 3, 4 and 5 to solve this.)

4) (This problem does not exist on iOS.) No support for segmented recording. Most short-video apps on mobile support pausing and resuming within a single recording, but when recording through the Android system's encapsulated interface, every segment must be redirected to a new output file. (Scenarios 2, 3, 4 and 5 solve this.)

Advantages:
1) Hardware acceleration makes it fast, and the quality is good.
2) Development is easy; there are ready-made interfaces to call.

Implementation:
Off-the-shelf interfaces; there is plenty of information online, as well as official documentation, so I will not repeat it here.

Scenario Two: (solves Android's missing segmented recording and the video rotation problem) system recording interface + mp4parser

When the user taps pause, redirect the output to a new MP4 file, then merge the MP4 segments with mp4parser.

mp4parser is a lightweight jar that can be imported directly into an Android project. The interfaces it provides can split and stitch MP4 files. It does not touch the frame contents; it just re-parses the MP4's boxes, rewrites the PTS values and headers, and then splits or merges. So the speed is acceptable.

Disadvantages:
1) The file-size problem remains.
2) Still limited to the resolutions the system provides.

Advantages:
1) The imported library is small.
2) The speed is good.


Implementation:
The recording and saving code is the same as in Scenario One; here is the MP4-stitching code:
List<String> fileList = new ArrayList<String>();
List<Movie> moviesList = new LinkedList<Movie>();
fileList.add("/1387865774255.mp4");
fileList.add("/1387865800664.mp4");
try {
    for (String file : fileList) {
        moviesList.add(MovieCreator.build(file));
    }
} catch (IOException e) {
    e.printStackTrace();
}

List<Track> videoTracks = new LinkedList<Track>();
List<Track> audioTracks = new LinkedList<Track>();
for (Movie m : moviesList) {
    for (Track t : m.getTracks()) {
        if (t.getHandler().equals("soun")) {
            audioTracks.add(t);
        }
        if (t.getHandler().equals("vide")) {
            videoTracks.add(t);
        }
    }
}

Movie result = new Movie();
try {
    if (audioTracks.size() > 0) {
        result.addTrack(new AppendTrack(audioTracks.toArray(new Track[audioTracks.size()])));
    }
    if (videoTracks.size() > 0) {
        result.addTrack(new AppendTrack(videoTracks.toArray(new Track[videoTracks.size()])));
    }
} catch (IOException e) {
    e.printStackTrace();
}

Container out = new DefaultMp4Builder().build(result);
try {
    FileChannel fc = new RandomAccessFile("output.mp4", "rw").getChannel();
    out.writeContainer(fc);
    fc.close();
} catch (Exception e) {
    e.printStackTrace();
}
moviesList.clear();
fileList.clear();
This part of the code is derived from: http://cstriker1407.info/blog/android-application-development-notes-mp4parser/.
At a quick glance, using this many Movie objects and collections is inefficient; when I have time I will optimize the code myself.

In addition, mp4parser can also rotate a video:

IsoFile isoFile = new IsoFile(getCompleteFilePath(i));
Movie m = new Movie();
List<TrackBox> trackBoxes = isoFile.getMovieBox().getBoxes(TrackBox.class);
for (TrackBox trackBox : trackBoxes) {
    trackBox.getTrackHeaderBox().setMatrix(Matrix.ROTATE_90);
    m.addTrack(new Mp4TrackImpl(trackBox));
}
inMovies[i - 1] = m;


Scenario Three: (not available on iOS) mp4parser + MP4v2 + x264 + system recording

Compress each video frame with x264, record AAC through the system, mux each audio/video pair with MP4v2, then stitch the MP4 segments together with mp4parser.
Disadvantages:

1) Complex to develop; both the UI layer and the NDK layer are cumbersome, and audio/video synchronization is involved.

Advantages:
1) The imported libraries are small.
2) Solves all the disadvantages mentioned in Scenario One.

Implementation:

Step One:
Cross-compile MP4v2 and x264.
Step Two:
On Android, fetch image data from the camera at fixed intervals (if the interval control is poor, the video will appear to stutter). What you get is not YUV420P, so it needs format conversion; and to make the video the user sees match the generated video, you have to mask part of the UI and then crop. The Java layer uses the gravity sensor to determine the camera's orientation and rotates accordingly. The algorithms circulating online are too inefficient; I will post an optimized algorithm for Android later. A naive sketch of the conversion follows.
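For reference, here is a naive sketch of the kind of conversion this step needs, assuming the camera delivers NV21 (the common Android preview format) and the encoder wants I420 rotated 90 degrees clockwise; the function name and layout are mine, and it is exactly the sort of unoptimized per-pixel loop described above:

#include <stdint.h>

/* Convert an NV21 frame (w x h) to I420 while rotating 90 degrees
 * clockwise, so a landscape camera buffer becomes a portrait encoder
 * input (h x w).  Naive per-pixel version; a real one would use NEON. */
void nv21_to_i420_rotate90(const uint8_t *src, uint8_t *dst, int w, int h)
{
    const uint8_t *sy = src;           /* NV21: Y plane, then interleaved V/U */
    const uint8_t *svu = src + w * h;
    uint8_t *dy = dst;                 /* I420: Y plane, then U, then V */
    uint8_t *du = dst + w * h;
    uint8_t *dv = du + w * h / 4;
    int x, y;

    for (y = 0; y < h; y++)            /* rotate the Y plane */
        for (x = 0; x < w; x++)
            dy[x * h + (h - 1 - y)] = sy[y * w + x];

    for (y = 0; y < h / 2; y++) {      /* rotate and de-interleave chroma */
        for (x = 0; x < w / 2; x++) {
            int d = x * (h / 2) + (h / 2 - 1 - y);
            dv[d] = svu[y * w + 2 * x];        /* V comes first in NV21 */
            du[d] = svu[y * w + 2 * x + 1];
        }
    }
}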
Step Three:
Pass each frame to x264 through JNI and encode it there. The key JNI code:
Initialization:
JNIEXPORT jint JNICALL Java_com_dangyutao_test_encode_startEncode(
        JNIEnv *env, jclass clazz, jstring jstr,
        jint w, jint h, jint o, jboolean inv)
{
    /* Remember the recording orientation. */
    if (o != 3 && o != 0) {
        isInversion = inv;
        orientation = o;
    }

    /* Initialize the encoder. */
    WIDTH = w;
    HEIGHT = h;
    yuv_size = w * h * 3 / 2;

    x264_param_t param;
    x264_param_default(&param);
    x264_param_default_preset(&param, "ultrafast", "zerolatency");
    param.i_threads = ENCODE_THREAD;
    param.i_width = WIDTH;
    param.i_height = HEIGHT;
    param.i_fps_num = FPS;
    param.i_fps_den = 1;
    param.i_frame_total = 0;
    param.i_csp = CSP;
    param.i_keyint_min = FPS * 3;
    param.i_keyint_max = FPS * 10;
    param.i_bframe = 30;
    param.i_bframe_bias = 100;
    param.rc.i_qp_min = 25;
    param.rc.i_qp_max = 50;
    /* i_rc_method selects the rate control: CQP (constant QP; the bitrate
     * and quality parameters are ignored and files get large), CRF
     * (constant quality, driven by f_rf_constant), ABR (average bitrate,
     * driven by i_bitrate). */
    param.rc.i_rc_method = X264_RC_CRF;
    param.rc.i_bitrate = 2000000;
    /* Lower values mean less quality loss (clearer picture); default is 23,
     * minimum 0. */
    param.rc.f_rf_constant = 3;
    x264_param_apply_profile(&param, "baseline");
    encoder = x264_encoder_open(&param);

    /* Open the output file descriptor (jstringTostring is the author's
     * helper that converts a jstring to a C string). */
    outf = open(jstringTostring(env, jstr), O_CREAT | O_WRONLY, 0644);
    if (outf < 0) {
        x264_encoder_close(encoder);
        return -1;
    }

    /* Allocate the YUV frame buffer. */
    yuv = (uint8_t *) malloc(WIDTH * HEIGHT * 3 / 2);
    return 0;
}


Adding frame data:

JNIEXPORT jint JNICALL Java_com_dangyutao_test_encode_addDetailFrameByBuff(
        JNIEnv *env, jclass clazz, jbyteArray jb,
        jint nw, jint nh, jint w, jint h, jboolean isFrontCamera)
{
    jbyte *dataPtr = (*env)->GetByteArrayElements(env, jb, NULL);
    uint8_t *buffer = (uint8_t *) dataPtr;

    /* Crop/rotate the camera buffer into the encoder-sized buffer
     * (detailYuvPic is the author's conversion routine). */
    detailYuvPic(buffer, yuv, nw, nh, w, h, isFrontCamera);

    /* Point pic_in's planes into the YUV420P buffer: Y, then U, then V. */
    x264_picture_init(&pic_in);
    uint8_t *yuv_buffer = (uint8_t *) yuv;
    pic_in.img.plane[0] = yuv_buffer;
    pic_in.img.plane[1] = &yuv_buffer[WIDTH * HEIGHT];
    pic_in.img.plane[2] = &yuv_buffer[WIDTH * HEIGHT * 5 / 4];
    pic_in.img.i_plane = 3;
    pic_in.img.i_stride[0] = WIDTH;
    pic_in.img.i_stride[1] = WIDTH / 2;
    pic_in.img.i_stride[2] = WIDTH / 2;
    pic_in.img.i_csp = CSP;

    /* Encode one frame and append its NALs to the output file. */
    x264_nal_t *nals;
    int nnal;
    pic_in.i_pts = i_pts++;
    x264_encoder_encode(encoder, &nals, &nnal, &pic_in, &pic_out);
    for (x264_nal_t *nal = nals; nal < nals + nnal; nal++)
        write(outf, nal->p_payload, nal->i_payload);

    /* Release the Java array without copying back. */
    (*env)->ReleaseByteArrayElements(env, jb, dataPtr, JNI_ABORT);
    return 0;
}
Finishing up:
JNIEXPORT jint JNICALL Java_com_dangyutao_test_encode_finishEncode(
        JNIEnv *env, jclass clazz)
{
    x264_nal_t *nals;
    int nnal;

    /* Flush any frames still delayed inside the encoder. */
    while (x264_encoder_delayed_frames(encoder) > 0) {
        if (x264_encoder_encode(encoder, &nals, &nnal, NULL, &pic_out) < 0)
            break;
        for (x264_nal_t *nal = nals; nal < nals + nnal; nal++)
            write(outf, nal->p_payload, nal->i_payload);
    }

    /* Clean up.  The frame buffer from GetByteArrayElements was already
     * released when each buffer was added. */
    x264_encoder_close(encoder);
    free(yuv);
    close(outf);
    return 0;
}

Step Four: mux each audio/video pair with MP4v2 (a sketch follows the references).
Reference: http://blog.csdn.net/yaorongzhen123/article/details/8467529
http://www.cnblogs.com/lidabo/p/3832634.html;
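The references cover the details; for orientation only, here is a minimal sketch of the MP4v2 calls involved, assuming the mp4v2 2.x API, length-prefixed (not Annex-B) H.264 samples plus raw AAC frames, and caller-supplied sample iterators; every helper name here is a placeholder, not code from this post:

#include <mp4v2/mp4v2.h>

/* Hypothetical iterator supplied by the caller: yields the next encoded
 * sample, returning 0 when the stream is exhausted. */
typedef int (*next_sample_fn)(uint8_t **buf, uint32_t *len);

/* Mux pre-encoded H.264 and AAC samples into one MP4 file.  Assumes a
 * 90 kHz video timescale at fps frames/s and 1024-sample AAC frames;
 * sps/pps come from the video encoder. */
int mux_with_mp4v2(const char *path, int width, int height, int fps,
                   const uint8_t *sps, uint16_t sps_len,
                   const uint8_t *pps, uint16_t pps_len,
                   int sample_rate,
                   next_sample_fn next_video, next_sample_fn next_audio)
{
    uint8_t *buf;
    uint32_t len;

    MP4FileHandle mp4 = MP4Create(path, 0);
    if (mp4 == MP4_INVALID_FILE_HANDLE)
        return -1;
    MP4SetTimeScale(mp4, 90000);

    /* sps[1..3] carry profile_idc, the constraint flags and level_idc. */
    MP4TrackId video = MP4AddH264VideoTrack(mp4, 90000, 90000 / fps,
                                            width, height,
                                            sps[1], sps[2], sps[3],
                                            3 /* 4-byte NAL length field */);
    MP4AddH264SequenceParameterSet(mp4, video, sps, sps_len);
    MP4AddH264PictureParameterSet(mp4, video, pps, pps_len);

    MP4TrackId audio = MP4AddAudioTrack(mp4, sample_rate, 1024,
                                        MP4_MPEG4_AUDIO_TYPE);

    while (next_video(&buf, &len))
        MP4WriteSample(mp4, video, buf, len, MP4_INVALID_DURATION, 0, 1);
    while (next_audio(&buf, &len))
        MP4WriteSample(mp4, audio, buf, len, MP4_INVALID_DURATION, 0, 1);

    MP4Close(mp4, 0);
    return 0;
}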

Step Five: merge the MP4 segments with mp4parser.
The code was given in Scenario Two.


Scenario Four: system recording + FFmpeg compiled and invoked directly (available on both Android and iOS)


FFmpeg's features really are powerful, and with the filter library you can do most video processing tasks, but the library is correspondingly large. Slightly modify the FFmpeg source before compiling and it can be invoked through a very simple command-style interface; the specific commands are documented on the FFmpeg website. The system records the video, and once recording completes FFmpeg does the post-processing: rotation, transcoding, cropping, and so on. A sketch of the pattern follows.
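As an illustration of the pattern, here is a minimal sketch, assuming ffmpeg.c's main() was renamed to ffmpeg_main() when compiling it into the app's library; the wrapper and the exact command line are my choices, not this post's:

/* Assumed entry point: ffmpeg.c's main(), renamed at build time. */
extern int ffmpeg_main(int argc, char **argv);

/* Rotate a finished recording 90 degrees clockwise, re-encoding the
 * video and copying the audio stream, exactly as the ffmpeg CLI would. */
int rotate_recording(const char *in, const char *out)
{
    char *argv[] = {
        "ffmpeg",
        "-i", (char *) in,
        "-vf", "transpose=1",   /* 1 = rotate 90 degrees clockwise */
        "-c:a", "copy",
        (char *) out,
        NULL
    };
    return ffmpeg_main(8, argv);
}

One caveat with this approach: ffmpeg.c's main() calls exit() on errors and keeps global state, which is part of why the source needs slight modification before it can be called repeatedly from one process.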

Advantages:
1) Powerful features.
2) Easy to develop.

Disadvantages:
1) The library is too large.
2) In essence the video is processed with FFmpeg only after recording finishes, so the user has to wait a long time. Testing on a Nexus 4, the wait roughly tracks the video length: transcoding one minute of video takes about one minute (just to give a sense of scale; not a precise figure).

Implementation:
Cross-compile.

Scenario Five: secondary development on top of FFmpeg.
Encode each video frame through FFmpeg's libraries, then merge the audio and video.
Advantages:
1) Powerful features.
2) Faster than transcoding after system recording. Because this is secondary development, FFmpeg can generate the video directly while recording; on a Nexus 4 it is basically zero latency: the moment recording completes, the desired video is ready, with no waiting for the user.
3) Fast filters. Compared with the algorithms I implemented myself, FFmpeg's scale and other filters are very fast. Curious, I dug into the source and found the algorithms are similar, but FFmpeg uses assembly instructions directly, bypassing compiler-generated code to drive the CPU.

Disadvantages:
1) Very difficult to develop; FFmpeg has far more header files than x264.
2) Encoding is less efficient than bare x264. FFmpeg's encoding module also calls x264, but in testing it was noticeably slower than using x264 directly, mainly because FFmpeg's layered architecture keeps initializing each of its many modules, every one of which drags its own context around (just a guess; I have not examined the source).
3) The library is large.
Implementation:

Too much code to list in full; details later. The key flow (a sketch follows):

Initialize the AVCodecContext (parameter details to come later).
Configure the input and output.
Encode: avcodec_encode_video2(temp->pCodecCtx, &encode_pkt, writeFrame, &got_picture);
Close the streams and free the memory.
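As a rough guide, here is a minimal sketch of that flow against the same-era API quoted above (avcodec_encode_video2); the helper names are mine, and error handling is trimmed:

#include <libavcodec/avcodec.h>

/* Open an H.264 encoder context for w x h video at the given frame rate. */
static AVCodecContext *open_h264(int w, int h, int fps)
{
    avcodec_register_all();
    AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    if (!codec)
        return NULL;
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    ctx->width = w;
    ctx->height = h;
    ctx->time_base = (AVRational){1, fps};
    ctx->pix_fmt = AV_PIX_FMT_YUV420P;
    ctx->gop_size = fps * 3;
    if (avcodec_open2(ctx, codec, NULL) < 0)
        return NULL;
    return ctx;
}

/* Feed one frame to the encoder; pass frame = NULL at the end of the
 * stream to flush delayed frames.  Returns 1 if a packet came out. */
static int encode_frame(AVCodecContext *ctx, AVFrame *frame,
                        void (*write_pkt)(AVPacket *))
{
    AVPacket pkt;
    int got_picture = 0;

    av_init_packet(&pkt);
    pkt.data = NULL;   /* let the encoder allocate the buffer */
    pkt.size = 0;
    if (avcodec_encode_video2(ctx, &pkt, frame, &got_picture) < 0)
        return -1;
    if (got_picture) {
        write_pkt(&pkt);
        av_free_packet(&pkt);
    }
    return got_picture;
}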



Scenario Six: FFmpeg + x264


In my experience this scheme works best, but the development difficulty rises with it. There are many ways to build it: the audio can be recorded directly as PCM and compressed with FFmpeg's FAAC support, or recorded as AAC with the system; each video frame is encoded with x264; FFmpeg then merges the audio and video, and can also provide the post-processing. A sketch of the audio leg follows.
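For the audio leg, here is a minimal sketch of compressing 16-bit PCM to AAC with the same-era FFmpeg API, assuming a build with libfaac enabled as described above (substitute the "aac" encoder name otherwise); the helper names are mine:

#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>

/* Open an AAC encoder context for interleaved 16-bit PCM. */
static AVCodecContext *open_aac(int sample_rate, int channels)
{
    avcodec_register_all();
    AVCodec *codec = avcodec_find_encoder_by_name("libfaac");
    if (!codec)
        return NULL;
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    ctx->sample_rate = sample_rate;
    ctx->channels = channels;
    ctx->channel_layout = av_get_default_channel_layout(channels);
    ctx->sample_fmt = AV_SAMPLE_FMT_S16;   /* what libfaac expects */
    ctx->bit_rate = 64000;
    if (avcodec_open2(ctx, codec, NULL) < 0)
        return NULL;
    return ctx;
}

/* Encode one frame's worth of PCM (ctx->frame_size samples per channel,
 * typically 1024 for AAC).  The caller writes and then frees out_pkt
 * with av_free_packet().  Returns 1 if a packet was produced. */
static int encode_pcm_chunk(AVCodecContext *ctx, const int16_t *pcm,
                            AVPacket *out_pkt)
{
    AVFrame *frame = av_frame_alloc();
    int got_packet = 0;

    frame->nb_samples = ctx->frame_size;
    frame->format = ctx->sample_fmt;
    frame->channel_layout = ctx->channel_layout;
    avcodec_fill_audio_frame(frame, ctx->channels, ctx->sample_fmt,
                             (const uint8_t *) pcm,
                             ctx->frame_size * ctx->channels * 2, 1);
    av_init_packet(out_pkt);
    out_pkt->data = NULL;
    out_pkt->size = 0;
    avcodec_encode_audio2(ctx, out_pkt, frame, &got_packet);
    av_frame_free(&frame);
    return got_packet;
}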

Advantages:
1) Addresses all the disadvantages mentioned above.

Disadvantages:
1) Very difficult to develop.
2) The library is large.
Implementation: refer to the scenarios above.
Here is the FFmpeg code that merges the audio and video:


int dm_mux(char *h264file, char *aacfile, char *mp4file, int usefilter)
{
    AVOutputFormat *ofmt = NULL;
    /* Input AVFormatContexts and the output AVFormatContext. */
    AVFormatContext *ifmt_ctx_v = NULL, *ifmt_ctx_a = NULL, *ofmt_ctx = NULL;
    AVPacket pkt;
    AVBitStreamFilterContext *aacbsfc = NULL;
    int ret = 0, i, retu = 0, filter_ret = 0;
    int videoindex_v = -1, videoindex_out = -1;
    int audioindex_a = -1, audioindex_out = -1;
    int frame_index = 0;
    int64_t cur_pts_v = 0, cur_pts_a = 0;

    /* Set the file paths. */
    const char *in_filename_v = h264file;
    const char *in_filename_a = aacfile;
    const char *out_filename = mp4file;

    /* Register before use. */
    av_register_all();

    /* Open the inputs and set up their AVFormatContexts. */
    if ((ret = avformat_open_input(&ifmt_ctx_a, in_filename_a, 0, 0)) < 0) {
        retu = -1;   /* -1 means the audio file failed to open */
        dmprint("open audio file failed", ret);
        goto end;
    }
    if ((ret = avformat_open_input(&ifmt_ctx_v, in_filename_v, 0, 0)) < 0) {
        retu = -2;   /* -2 means the video file failed to open */
        dmprint("open video file failed", ret);
        goto end;
    }
    if ((ret = avformat_find_stream_info(ifmt_ctx_v, 0)) < 0) {
        retu = -3;   /* -3 means getting video info failed */
        dmprint("get video info failed", ret);
        goto end;
    }
    if ((ret = avformat_find_stream_info(ifmt_ctx_a, 0)) < 0) {
        retu = -4;   /* -4 means getting audio info failed */
        dmprint("get audio info failed", ret);
        goto end;
    }

    /* Open the output. */
    avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
    if (!ofmt_ctx) {
        dmprint("open output file failed", ret);
        retu = -5;
        goto end;
    }
    ofmt = ofmt_ctx->oformat;

    /* Find the input video stream and mirror it on the output. */
    for (i = 0; i < ifmt_ctx_v->nb_streams; i++) {
        if (ifmt_ctx_v->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            AVStream *in_stream = ifmt_ctx_v->streams[i];
            AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
            videoindex_v = i;
            if (!out_stream) {
                dmprint_string("failed allocating output stream");
                retu = -6;
                goto end;
            }
            videoindex_out = out_stream->index;
            /* Copy the settings of the AVCodecContext. */
            if (avcodec_copy_context(out_stream->codec, in_stream->codec) < 0) {
                dmprint_string("failed to copy context from input to output stream codec context");
                retu = -7;
                goto end;
            }
            out_stream->codec->codec_tag = 0;
            if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
                out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
            break;
        }
    }
    /* Find the input audio stream and mirror it on the output. */
    for (i = 0; i < ifmt_ctx_a->nb_streams; i++) {
        if (ifmt_ctx_a->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO) {
            AVStream *in_stream = ifmt_ctx_a->streams[i];
            AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
            audioindex_a = i;
            if (!out_stream) {
                dmprint_string("failed allocating output stream");
                retu = -8;
                goto end;
            }
            audioindex_out = out_stream->index;
            if (avcodec_copy_context(out_stream->codec, in_stream->codec) < 0) {
                dmprint_string("failed to copy context from input to output stream codec context");
                retu = -9;
                goto end;
            }
            out_stream->codec->codec_tag = 0;
            if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
                out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
            break;
        }
    }

    /* Open the output file. */
    if (!(ofmt->flags & AVFMT_NOFILE)) {
        if (avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE) < 0) {
            dmprint_string("could not open output file");
            retu = -10;
            goto end;
        }
    }
    /* Write the file header. */
    if (avformat_write_header(ofmt_ctx, NULL) < 0) {
        dmprint_string("error occurred when opening output file");
        retu = -11;
        goto end;
    }

    /* ADTS-packaged AAC must be converted for the MP4 container. */
    if (usefilter)
        aacbsfc = av_bitstream_filter_init("aac_adtstoasc");

    while (is_going) {
        AVFormatContext *ifmt_ctx;
        int stream_index = 0;
        AVStream *in_stream, *out_stream;

        /* Interleave by comparing the current timestamps of both inputs. */
        if (av_compare_ts(cur_pts_v, ifmt_ctx_v->streams[videoindex_v]->time_base,
                          cur_pts_a, ifmt_ctx_a->streams[audioindex_a]->time_base) <= 0) {
            ifmt_ctx = ifmt_ctx_v;
            stream_index = videoindex_out;
            if (av_read_frame(ifmt_ctx, &pkt) >= 0) {
                do {
                    in_stream = ifmt_ctx->streams[pkt.stream_index];
                    out_stream = ofmt_ctx->streams[stream_index];
                    if (pkt.stream_index == videoindex_v) {
                        /* Raw H.264 carries no pts: synthesize one. */
                        if (pkt.pts == AV_NOPTS_VALUE) {
                            AVRational time_base1 = in_stream->time_base;
                            /* Duration between two frames (in AV_TIME_BASE units). */
                            int64_t calc_duration = (double) AV_TIME_BASE /
                                    av_q2d(in_stream->r_frame_rate);
                            pkt.pts = (double) (frame_index * calc_duration) /
                                    (double) (av_q2d(time_base1) * AV_TIME_BASE);
                            pkt.dts = pkt.pts;
                            pkt.duration = (double) calc_duration /
                                    (double) (av_q2d(time_base1) * AV_TIME_BASE);
                            frame_index++;
                        }
                        cur_pts_v = pkt.pts;
                        break;
                    }
                } while (av_read_frame(ifmt_ctx, &pkt) >= 0);
            } else {
                break;
            }
        } else {
            ifmt_ctx = ifmt_ctx_a;
            stream_index = audioindex_out;
            if (av_read_frame(ifmt_ctx, &pkt) >= 0) {
                do {
                    in_stream = ifmt_ctx->streams[pkt.stream_index];
                    out_stream = ofmt_ctx->streams[stream_index];
                    if (pkt.stream_index == audioindex_a) {
                        if (pkt.pts == AV_NOPTS_VALUE) {
                            AVRational time_base1 = in_stream->time_base;
                            int64_t calc_duration = (double) AV_TIME_BASE /
                                    av_q2d(in_stream->r_frame_rate);
                            pkt.pts = (double) (frame_index * calc_duration) /
                                    (double) (av_q2d(time_base1) * AV_TIME_BASE);
                            pkt.dts = pkt.pts;
                            pkt.duration = (double) calc_duration /
                                    (double) (av_q2d(time_base1) * AV_TIME_BASE);
                            frame_index++;
                        }
                        cur_pts_a = pkt.pts;
                        break;
                    }
                } while (av_read_frame(ifmt_ctx, &pkt) >= 0);
            } else {
                break;
            }
        }

        if (usefilter) {
            filter_ret = av_bitstream_filter_filter(aacbsfc, out_stream->codec, NULL,
                                                    &pkt.data, &pkt.size,
                                                    pkt.data, pkt.size, 0);
            if (filter_ret < 0) {
                dmprint_string("failed to use filter");
                retu = -10;
                goto end;
            }
        }

        /* Convert pts/dts into the output stream's time base. */
        pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base,
                                   AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
        pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base,
                                   AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
        pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
        pkt.pos = -1;
        pkt.stream_index = stream_index;

        /* Write the packet. */
        if (av_interleaved_write_frame(ofmt_ctx, &pkt) < 0) {
            av_free_packet(&pkt);
            dmprint_string("error muxing packet");
            break;
        }
        av_packet_unref(&pkt);
    }

    if (is_going)
        av_write_trailer(ofmt_ctx);   /* write the file trailer */
    else
        retu = RET_CLOSE;             /* -77 means closed by the user */

    if (usefilter)
        av_bitstream_filter_close(aacbsfc);
end:
    avformat_close_input(&ifmt_ctx_v);
    avformat_close_input(&ifmt_ctx_a);
    /* Close the output. */
    if (ofmt_ctx && ofmt && !(ofmt->flags & AVFMT_NOFILE))
        avio_close(ofmt_ctx->pb);
    avformat_free_context(ofmt_ctx);
    if (ret < 0 && ret != AVERROR_EOF)
        dmprint_string("error occurred.");
    dmprint("return is", retu);
    return retu;
}
If the audio is packaged as ADTS, you need to apply the aac_adtstoasc filter shown above.


These are the mobile video recording solutions I have tried. If there are others, I hope you will let me know; thank you. You are welcome to get in touch and discuss.

Copyright notice: this is an original post by the author and may not be reproduced without permission.
