WebGL Introductory Tutorial, Part 1: A Six-Color Cube


WebGL is a technology that lets developers drive the GPU from the browser to display graphics. Let's step into the world of WebGL together.

Intended Audience

This series is suitable for developers who have basic JavaScript knowledge.

Preparatory work

We should set up a local web server, or install an IDE with a preview feature. If you have Visual Studio installed: nivk has built a code-hints file for WebGL development, and you can enable WebGL code hints in Visual Studio with the following steps. Open Visual Studio, click Tools, click Options, expand Text Editor, expand JavaScript, expand IntelliSense, click References, switch the Reference Group to "Implicit (Web)", and add a reference to Http://www.nivkgames.com/WebGL-vs-doc.js to the current group. Note that this reference file currently has some small flaws, which nivk will fix soon.

In addition, we should install a WebGL-capable browser; this series will use Chrome throughout.

A complete example of this article can be downloaded here (access password f6b0).

Objective of this article

While learning the basics of WebGL, we'll use what we learn to draw a cube whose six faces each have a different color. In the end we'll get a cube like this:

A Thought Experiment

How should we draw the cube? We can define eight vertices in three-dimensional space, define the order in which the vertices are connected, then command the GPU to draw the shapes in the order we give, painting each face a different color.

So how do we define vertices? Using a coordinate system to define a point is common sense. Let's see how to build a standard three-dimensional Cartesian coordinate system.

WebGL coordinate system

In practice we use a right-handed coordinate system with the origin at the center of the canvas. The x-axis unit length is the distance from the origin to the right edge of the canvas, and the y-axis unit length is defined similarly. You'll notice that the x-axis and y-axis unit lengths are therefore inconsistent; the projection matrix introduced later will solve this problem. For now, let's define the eight vertices of the cube, assuming the axes share the same unit length.

Vertex coordinates

If the cube is centered at the origin of the coordinate system and has side length 2, we get v0 = (1.0, 1.0, 1.0), v1 = (-1.0, 1.0, 1.0), v2 = (-1.0, -1.0, 1.0), v3 = (1.0, -1.0, 1.0), v4 = (1.0, 1.0, -1.0), v5 = (-1.0, 1.0, -1.0), v6 = (-1.0, -1.0, -1.0), v7 = (1.0, -1.0, -1.0).

How to draw polygons

Because WebGL can only draw triangles (plus points and lines), to draw the quadrilateral v0-v1-v2-v3 we draw the triangles v0-v1-v2 and v0-v2-v3. The other faces are handled the same way.
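The split described above can be sketched in a few lines of JavaScript. This is only an illustration: fanTriangulate is a hypothetical helper name, not part of WebGL.

```javascript
// Sketch only: split a convex polygon, given as an ordered list of
// vertex indices, into triangles that all share the first vertex.
function fanTriangulate(polygon) {
    var triangles = [];
    for (var i = 1; i < polygon.length - 1; i++) {
        triangles.push([polygon[0], polygon[i], polygon[i + 1]]);
    }
    return triangles;
}
```

For the face v0-v1-v2-v3 this yields exactly the two triangles v0-v1-v2 and v0-v2-v3.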

Winding Order

For the triangle v0-v1-v2, we can draw it by connecting v0 to v1, then v1 to v2, then v2 back to v0 — a counterclockwise winding order — or by connecting v0 to v2, then v2 to v1, then v1 back to v0 — a clockwise winding order. In practice we use the counterclockwise winding order.

If the vertices wind counterclockwise when viewed from the front of a triangle, then viewed from the back the same vertices appear to wind clockwise.

WebGL uses a triangle's winding order to decide which side is the front. If we use counterclockwise winding, then a face whose vertices wind counterclockwise is a front face and the opposite side is the back. The back is invisible, and we can discard back-facing triangles by enabling WebGL's face-culling feature.
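For intuition only: once a triangle's vertices are projected to 2D, its winding can be read off the sign of its doubled signed area (a cross product of two edges). This is a sketch under the assumption of a y-up coordinate system; isCounterClockwise is a hypothetical helper, since WebGL performs this test internally.

```javascript
// Sketch: winding of a 2D triangle (a, b, c) from the sign of the
// cross product of edges a->b and a->c. Positive means counterclockwise
// in a y-up coordinate system.
function isCounterClockwise(ax, ay, bx, by, cx, cy) {
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax) > 0;
}
```

Projecting the front face onto the xy-plane, v0-v1-v2 comes out counterclockwise while the reversed order v0-v2-v1 comes out clockwise.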

Drawing order

Let's take a look at the drawing order of each vertex of this cube, and note that all the triangles are drawn in a counterclockwise winding order.

(From now on, "drawing triangle v0-v1-v2" means connecting v0 to v1, then v1 to v2, then v2 back to v0, in that counterclockwise winding order.)

The face v0-v1-v2-v3 is composed of triangles v0-v1-v2 and v0-v2-v3.

The face v4-v5-v6-v7 is composed of triangles v4-v6-v5 and v4-v7-v6. (Note that this face points toward the negative z-axis, which makes "counterclockwise" hard to judge directly. Its back points toward the positive z-axis: standing in front of the cube looking down the negative z-axis, we see the back of this face. So we wind it clockwise from our viewpoint, which guarantees the winding is counterclockwise when seen from the face's front.)

The face v4-v5-v1-v0 is composed of triangles v4-v5-v1 and v4-v1-v0.

The face v7-v6-v2-v3 is composed of triangles v7-v2-v6 and v7-v3-v2.

The face v4-v0-v3-v7 is composed of triangles v4-v0-v3 and v4-v3-v7.

The face v5-v1-v2-v6 is composed of triangles v5-v2-v1 and v5-v6-v2.

GPU Processing mode

CPUs consist of a few cores optimized for sequential, serial processing. When a CPU traverses an array, it usually finishes processing the first element before handling the second. GPUs consist of thousands of smaller, more efficient cores designed for parallel workloads. When a GPU traverses an array, it can process all the elements at the same time. We will put all our vertex information into an array and hand it to the GPU for processing.

WebGL Rendering Pipeline

How does the GPU handle the array after it receives it?

Let's look at a simplified model:

Vertex array: Contains the vertex information we want to submit to the GPU.

Vertex shader: a program that processes vertices. The GPU runs the vertex shader on every vertex in parallel. One of the vertex shader's jobs is to produce the vertex's position.

Primitive assembly: after the vertex shader produces vertex positions, the primitive assembly stage joins vertices into triangles (or into line segments, or treats them as points), then checks whether each new shape lies inside the visible region of the canvas. Shapes inside the visible region move on to the next stage; the rest are discarded.

Rasterization: after primitive assembly we have triangles described by vertices; the rasterization stage fills each triangle with pixels. After rasterization, we have triangles described by pixels rather than by vertices.

Fragment shader: a program that processes pixels. The GPU runs the fragment shader in parallel on every pixel produced by rasterization. One of the fragment shader's jobs is to specify each pixel's color.

Depth test: tests pixels' front-to-back relationships. A pixel obscured by another pixel is invisible and is discarded by this test.

Frame buffer: pixels that reach the frame buffer are displayed on the screen.

Primitive assembly, rasterization, and depth testing are all done automatically; what we really care about are the vertex shader and the fragment shader.

Vertex shader

Let's first look at a complete vertex shader:

attribute vec3 aVertexPosition;
attribute vec3 aVertexColor;

uniform mat4 uModelViewMatrix;
uniform mat4 uProjectionMatrix;

varying vec4 vColor;

void main() {
    gl_Position = uProjectionMatrix * uModelViewMatrix * vec4(aVertexPosition, 1.0);
    vColor = vec4(aVertexColor, 1.0);
}

The first thing we notice is that this isn't JavaScript. It's GLSL, a language dedicated to writing OpenGL shaders. Unlike JavaScript, GLSL is strongly typed and compiled.

Starting with the first line: attribute is the storage qualifier, vec3 is the data type, and aVertexPosition is the variable name. The next four declarations follow the same pattern.

So what do attribute, uniform, and varying mean?

attribute marks a variable whose value differs from vertex to vertex each time the vertex shader runs.

uniform marks a variable whose value is the same for every vertex the shader runs on.

varying marks a variable whose value must ultimately be passed from the vertex shader to the fragment shader. Because three vertices define a triangle but many pixels fill one, a varying variable's value is interpolated before being handed to the fragment shader.
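The interpolation can be pictured as a weighted mix of the three vertex values, with non-negative weights that sum to 1 (barycentric coordinates). A minimal sketch for intuition; interpolateVarying is a hypothetical helper, not anything you would call on the GPU.

```javascript
// Sketch: the varying value at one pixel is a weighted average of the
// values at the triangle's three vertices, with weights w0 + w1 + w2 = 1.
function interpolateVarying(v0, v1, v2, w0, w1, w2) {
    return v0.map(function (_, i) {
        return v0[i] * w0 + v1[i] * w1 + v2[i] * w2;
    });
}
```

For example, mixing pure red, green, and blue vertex colors with weights 0.5, 0.25, 0.25 yields [0.5, 0.25, 0.25].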

And what do vec3 and mat4 stand for?

vec3 declares the variable to be a three-dimensional vector.

mat4 declares the variable to be a 4×4 matrix.

So what are these variables going to do?

aVertexPosition will store each vertex's position. Since a vertex coordinate needs only the three components x, y, z, we declare aVertexPosition as vec3. Since vertex positions generally differ from vertex to vertex, we declare it attribute.

aVertexColor will store each vertex's color. Since a color needs only the three components r, g, b (assuming the object is fully opaque), the reasoning is the same as for aVertexPosition.

uModelViewMatrix will store the model-view matrix, and uProjectionMatrix will store the projection matrix. Because 3D transformations require 4×4 matrices, we declare both as mat4. Because we generally transform an entire 3D object at once, these matrices are constant across vertices, so we declare them uniform.

vColor will store color information. The color ultimately travels from the vertex array, through the vertex shader and interpolation, to the fragment shader. We want each face of the cube to have a different color, and this per-vertex value can reach the fragment shader only through a varying variable.

Next comes the main function; anyone familiar with C or similar languages will recognize it as the program's entry point. void indicates that the function returns nothing.

Next we meet gl_Position. We never defined this variable: it is built into the vertex shader and hands the vertex's position to the GPU. vec4(aVertexPosition, 1.0) builds a four-dimensional vector whose fourth component is 1.0. Why four dimensions? Because a 4×4 matrix can only multiply a four-dimensional vector. uProjectionMatrix * uModelViewMatrix * vec4(aVertexPosition, 1.0) transforms the original vertex coordinates by the matrices and stores the result in gl_Position. Why transform at all? We define shapes in three-dimensional coordinates; matrix transforms make it easy to translate, rotate, and scale them, and finally to project the three-dimensional coordinates onto the two-dimensional screen.
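To see what this multiplication does per component, here is a sketch of a 4×4 matrix times a four-component vector in JavaScript. WebGL and gl-matrix store matrices in column-major order, so the element at row r, column c sits at index c * 4 + r; mat4MulVec4 is a hypothetical helper, not a gl-matrix function.

```javascript
// Sketch: multiply a column-major 4x4 matrix (array of 16 numbers)
// by a 4-component vector, as the GPU does for each vertex.
function mat4MulVec4(m, v) {
    var out = [0, 0, 0, 0];
    for (var r = 0; r < 4; r++) {
        for (var c = 0; c < 4; c++) {
            out[r] += m[c * 4 + r] * v[c];
        }
    }
    return out;
}
```

With a translation matrix (last column 2, 3, 4, 1), the point (1, 1, 1, 1) maps to (3, 4, 5, 1).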

Next is vColor = vec4(aVertexColor, 1.0). vec4(aVertexColor, 1.0) builds a four-dimensional vector whose fourth component is alpha: 1.0 is fully opaque, 0.0 fully transparent. We assign this value to vColor, and its interpolated value eventually reaches the fragment shader.

Fragment shader

precision highp float;

varying vec4 vColor;

void main() {
    gl_FragColor = vColor;
}

The first line says the fragment shader uses high-precision floating point. We can change this by replacing highp with mediump (medium precision) or lowp (low precision). Which precision to choose depends on your needs: the higher the precision, the slower the computation and the more memory and power it consumes.

The varying declaration matches the one in the vertex shader; vColor's value comes from the vertex shader, after interpolation.

gl_FragColor is a variable built into the fragment shader; it hands each pixel's color to the GPU. Here we simply assign vColor to gl_FragColor.

Note that attribute variables cannot be declared in a fragment shader.

Compiling and Linking

The GPU does not understand GLSL directly; we must compile the shader source code and link it into a shader program for the GPU to use. Conveniently, we can call the browser's JavaScript API to do the compiling and linking.

Hands-On

Now it's time for some real work. We'll read through the code and explain the details.

Let's look at our HTML structure first:

<!DOCTYPE html>
Because JavaScript has no built-in support for matrix and vector operations, we include the gl-matrix library to simplify them. gl-matrix's GitHub address is https://github.com/toji/gl-matrix.

If you use Chrome, you can comment out promise-0.1.1.js; that file exists only so other browsers fully support the Promise API. (Now is a good time to learn the Promise API: Chrome supports it fully, Firefox is close to full support, and Microsoft introduced promises to WinJS early on, so I believe IE will support it too. A nice Promise API tutorial is available here.)

Get 3D Drawing Context

var gl = webgl.getContext("webgl");

This is almost the same as getting a 2D drawing context, except the parameter changes from "2d" to "webgl".

Initializing shaders

Shader source code is just a string. We'll define a JS variable vertexSource to hold the vertex shader source and a JS variable fragmentSource to hold the fragment shader source. The simplest approach is to assign the source strings directly to these variables; that works, but it hurts readability. Here we use another approach: load the shader source from text files via Ajax, then extract the strings.

function get(url) {
    return new Promise(function (resolve) {
        var xhr = new XMLHttpRequest();
        xhr.onload = function () {
            resolve(this.responseText);
        };
        xhr.open("GET", url);
        xhr.send();
    });
}

Promise.all([get("source.vert"), get("source.frag")]).then(function (sources) {
    var vertexSource = sources[0];
    var fragmentSource = sources[1];
});

source.vert and source.frag are our two text files. If you use Visual Studio, you'll find when debugging that both files fail to load: we need to allow the web server to serve the .vert and .frag formats. You can modify Web.config like this:

<?xml version="1.0"?>
<configuration>
  <system.web>
    <compilation debug="true" targetFramework="4.5.1"/>

After obtaining the source strings, we compile them. For the vertex shader:

var vertexShader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vertexShader, vertexSource);
gl.compileShader(vertexShader);

We create a vertex shader object, attach its source code, and finally compile it.

Fragment shaders are the same process:

var fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fragmentShader, fragmentSource);
gl.compileShader(fragmentShader);

Next we link them to the shader program:

var program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
gl.linkProgram(program);
gl.useProgram(program);

We create a shader program, attach the vertex and fragment shaders to it, link the program, and finally tell WebGL that this is the shader program we intend to use.

Passing in a matrix for a vertex shader

Our screen is two-dimensional, yet it can display three-dimensional shapes. Like perspective in painting, the projection matrix is our "perspective": it handles the transformation from three dimensions to two. What we really care about is building the 3D scene, not how the scene becomes a flat image; gl-matrix does all that dirty work for us.

We also know that viewing angles differ from observer to observer. To describe a 3D scene, we must also specify the observer's position and orientation.

var modelViewMatrix = mat4.create();
mat4.lookAt(modelViewMatrix, [4, 4, 8], [0, 0, 0], [0, 1, 0]);

We create a 4×4 identity matrix to serve as our model-view matrix, describing both the object's transformation and how the observer views it. Since we won't transform the cube itself, we only call the lookAt function to say that our observer stands at (4, 4, 8), looks toward (0, 0, 0), and holds their head up along (0, 1, 0). Note that mat4 here is a JavaScript global introduced by gl-matrix; it is unrelated to the mat4 type in the shader source.

Remember the uModelViewMatrix we declared in the vertex shader? We now have the model-view matrix modelViewMatrix and need to pass it to the shader's uModelViewMatrix. The WebGL API has no way to assign directly to a shader variable; instead we obtain the variable's location and write the value there.

var uModelViewMatrix = gl.getUniformLocation(program, "uModelViewMatrix");
gl.uniformMatrix4fv(uModelViewMatrix, false, modelViewMatrix);

We first call getUniformLocation to obtain the location of the uniform variable uModelViewMatrix, storing it in the JS variable uModelViewMatrix. uniformMatrix4fv then writes the model-view matrix modelViewMatrix into that location. The false argument says the matrix should not be transposed (rows swapped with columns); WebGL requires this parameter to be false.

Next, let's set the projection matrix.

var projectionMatrix = mat4.create();
mat4.perspective(projectionMatrix, Math.PI / 6, webgl.width / webgl.height, 0.1, 100);

We create another 4×4 identity matrix to serve as our projection matrix. Math.PI / 6 sets the field of view to 30°, webgl.width / webgl.height is the viewport's aspect ratio, 0.1 is the distance from the observer to the near clipping plane, and 100 is the distance to the far clipping plane. Shapes outside the frustum bounded by the near and far planes are discarded during primitive assembly.

(In the figure: the angle between the two red lines is the field of view, the blue point is the observer, the orange section is the near plane, and the purple section is the far plane.)

After setting up the projection matrix, we pass it to the vertex shader's uniform variable uProjectionMatrix.

var uProjectionMatrix = gl.getUniformLocation(program, "uProjectionMatrix");
gl.uniformMatrix4fv(uProjectionMatrix, false, projectionMatrix);

This works exactly like passing the model-view matrix earlier.

Initializing buffers

Now to pass in the cube's vertex coordinates and colors. Remember the two attribute variables aVertexPosition and aVertexColor in the vertex shader? aVertexPosition receives vertex positions, and aVertexColor receives vertex colors.

Each face of the cube is composed of two triangles, so the six faces require twelve triangles.

Define an array vertices to hold all the vertex information.

var vertices = [
    // front
     1.0,  1.0,  1.0,    0.0, 0.8, 0.0,
    -1.0,  1.0,  1.0,    0.0, 0.8, 0.0,
    -1.0, -1.0,  1.0,    0.0, 0.8, 0.0,
     1.0, -1.0,  1.0,    0.0, 0.8, 0.0,
    // back
     1.0,  1.0, -1.0,    0.6, 0.9, 0.0,
    -1.0,  1.0, -1.0,    0.6, 0.9, 0.0,
    -1.0, -1.0, -1.0,    0.6, 0.9, 0.0,
     1.0, -1.0, -1.0,    0.6, 0.9, 0.0,
    // top
     1.0,  1.0, -1.0,    1.0, 1.0, 0.0,
    -1.0,  1.0, -1.0,    1.0, 1.0, 0.0,
    -1.0,  1.0,  1.0,    1.0, 1.0, 0.0,
     1.0,  1.0,  1.0,    1.0, 1.0, 0.0,
    // bottom
     1.0, -1.0, -1.0,    1.0, 0.5, 0.0,
    -1.0, -1.0, -1.0,    1.0, 0.5, 0.0,
    -1.0, -1.0,  1.0,    1.0, 0.5, 0.0,
     1.0, -1.0,  1.0,    1.0, 0.5, 0.0,
    // right
     1.0,  1.0, -1.0,    0.9, 0.0, 0.2,
     1.0,  1.0,  1.0,    0.9, 0.0, 0.2,
     1.0, -1.0,  1.0,    0.9, 0.0, 0.2,
     1.0, -1.0, -1.0,    0.9, 0.0, 0.2,
    // left
    -1.0,  1.0, -1.0,    0.6, 0.0, 0.6,
    -1.0,  1.0,  1.0,    0.6, 0.0, 0.6,
    -1.0, -1.0,  1.0,    0.6, 0.0, 0.6,
    -1.0, -1.0, -1.0,    0.6, 0.0, 0.6
];

Each row represents one vertex: the first three elements are its coordinates x, y, z, and the last three are its color r, g, b. Notice that many values in WebGL lie between 0.0 and 1.0 (or -1.0 and 1.0); a uniform value range makes computation easier to optimize. For one face of the cube, the four vertices share the same color but have different positions. For one corner of the cube, the position is the same in all three adjoining faces, but since those faces have different colors, the vertex's color differs in each face. So although ideally only 8 vertices and 6 colors are needed, in practice we must define four separate vertices for each face.
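To make the interleaved layout concrete, here is a sketch of pulling one record apart in plain JavaScript. positionOf and colorOf are hypothetical helpers for illustration only; WebGL itself performs this extraction from the stride and offset we pass to vertexAttribPointer later.

```javascript
// Sketch: each vertex occupies 6 numbers (x, y, z, r, g, b) in the
// interleaved array, so vertex i starts at index i * 6.
function positionOf(vertices, i) {
    return vertices.slice(i * 6, i * 6 + 3);
}

function colorOf(vertices, i) {
    return vertices.slice(i * 6 + 3, i * 6 + 6);
}
```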

The figure shows the three color components in parentheses. Rather than treating a corner as one vertex shared by three faces, think of it as three distinct vertices that happen to coincide.

vertices is just a JS array, and WebGL cannot operate on JS arrays directly; we must convert it into a typed array and load it into a buffer.

var vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);

We create a buffer and bind it to gl.ARRAY_BUFFER; subsequent buffer operations act on the currently bound buffer. This resembles fillStyle in the 2D drawing context: once you assign a color to fillStyle, fill operations use that color until fillStyle changes again.

The bufferData method uploads data to the current buffer. Here we wrap vertices in a Float32Array (a 32-bit floating-point typed array) and pass it in. gl.STATIC_DRAW means we load the data once and then draw with it many times.

With the vertex information in the buffer, WebGL can now operate on it directly. It's time to feed the vertex information into the vertex shader.

var aVertexPosition = gl.getAttribLocation(program, "aVertexPosition");
gl.vertexAttribPointer(aVertexPosition, 3, gl.FLOAT, false, 24, 0);
gl.enableVertexAttribArray(aVertexPosition);

We obtain aVertexPosition's location through getAttribLocation, much as we obtained uModelViewMatrix's. aVertexPosition should receive only position data, but our vertex array interleaves position and color, so we call vertexAttribPointer to describe how to extract the positions. The argument 3 means each vertex's position takes 3 values from the array. gl.FLOAT means the values are floating point. false means our data is already gl.FLOAT in the -1.0 to 1.0 range and needs no normalization. 24 is the stride in bytes from one vertex's position to the next: the first row's position and the second row's position are 6 elements apart, and since the array is stored as a Float32Array each element occupies 4 bytes, so 6 elements span 24 bytes. 0 is the byte offset of the position within each stride; positions come first in each row, so the offset is 0.
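The byte arithmetic can be written out explicitly. This is only an illustration of where the 24 and 0 (and, for the color attribute below, 12) come from; the variable names are made up for the sketch.

```javascript
// Sketch: strides and offsets for vertexAttribPointer are in bytes.
var FLOATS_PER_VERTEX = 6;                            // x, y, z, r, g, b
var BYTES_PER_FLOAT = Float32Array.BYTES_PER_ELEMENT; // 4
var stride = FLOATS_PER_VERTEX * BYTES_PER_FLOAT;     // bytes between consecutive vertices
var positionOffset = 0;                               // positions come first in each record
var colorOffset = 3 * BYTES_PER_FLOAT;                // colors start after x, y, z
```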

enableVertexAttribArray tells WebGL we want to use a vertex array for this attribute. By default a constant vertex value is used, but that won't do here, since every vertex has its own position.

After the vertex position information is passed in, we pass in the vertex color information:

var aVertexColor = gl.getAttribLocation(program, "aVertexColor");
gl.vertexAttribPointer(aVertexColor, 3, gl.FLOAT, false, 24, 12);
gl.enableVertexAttribArray(aVertexColor);
This mirrors how we passed the position information. Our colors use values in the 0.0–1.0 range; if we instead used values in the 0–255 range, we would change vertexAttribPointer's gl.FLOAT argument to gl.UNSIGNED_BYTE and its false argument to true, and WebGL would internally map the color values into the 0.0–1.0 range. Because the first three values in each stride are position data, the color offset is 3 × 4 bytes = 12 bytes.

Once the vertex information is passed, it is time to pass the drawing order information.

var indices = [
     0,  1,  2,    0,  2,  3,   // front
     4,  6,  5,    4,  7,  6,   // back
     8,  9, 10,    8, 10, 11,   // top
    12, 14, 13,   12, 15, 14,   // bottom
    16, 17, 18,   16, 18, 19,   // right
    20, 22, 21,   20, 23, 22    // left
];

We define an array indices whose element value i refers to the i-th vertex (counting from zero) in the vertex array vertices. Every three elements of indices give the drawing order of one triangle: 0, 1, 2 draws triangle v0-v1-v2, and 0, 2, 3 draws triangle v0-v2-v3. Each row of indices therefore represents one face composed of two triangles.
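Conceptually, drawElements will later walk this array three entries at a time, looking each index up in the vertex array. A sketch of that grouping (trianglesFrom is a hypothetical helper, not a WebGL call):

```javascript
// Sketch: group a flat index list into triangles of three indices each.
function trianglesFrom(indices) {
    var tris = [];
    for (var i = 0; i < indices.length; i += 3) {
        tris.push([indices[i], indices[i + 1], indices[i + 2]]);
    }
    return tris;
}
```

Applied to the first row of indices, this yields the two triangles 0-1-2 and 0-2-3 of the front face.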

An array that specifies drawing order like this is called an index array.

Next, we load the index array into the buffer:

var indexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint8Array(indices), gl.STATIC_DRAW);

This resembles loading the vertex data into its buffer, except that the vertex array binds to gl.ARRAY_BUFFER while the index array binds to gl.ELEMENT_ARRAY_BUFFER. Because the index array's elements are small non-negative integers, we wrap indices in an 8-bit unsigned typed array.

Draw

By now, we have all the information we need, just draw it out.

gl.enable(gl.DEPTH_TEST);
gl.enable(gl.CULL_FACE);
gl.clearColor(0.0, 0.0, 0.0, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_BYTE, 0);

We enable depth testing and face culling, set the clear color to black, and clear the color and depth buffers left over from the previous draw. Although we draw only one frame here and clearing changes nothing visible, clearing the buffers before every draw is a good habit.

Finally we call drawElements to draw the graphic. drawElements renders based on WebGL's currently bound index array. The first parameter says what primitive to render, here triangles. The second says how many index elements to use; we pass indices.length because we need them all. The third is the index element type: since we wrapped indices in a Uint8Array, we pass gl.UNSIGNED_BYTE (had we used a Uint16Array, it would be gl.UNSIGNED_SHORT). The last parameter is the starting offset into the index array; we render from the first element, so we pass 0.

Conclusion

After these steps, we finally get a six-color cube. Load the example in a browser, try changing the observer's position or the positions and colors in the vertex array, then refresh to see how the graphic changes.

There are many WebGL details this article doesn't cover, such as lighting and textures; they will be described in subsequent articles in this WebGL series.

Comments and questions are welcome.
