DirectX BASICS (4)

Source: Internet
Author: User
For more information on the underlying theory, see the article on the implementation principle of bump mapping (bump maps).
Bump mapping is a texture-blending technique that creates the appearance of a complex, uneven surface on a 3D object. Ordinary texture mapping can only simulate a relatively smooth surface; it cannot convey bumps and depressions. Bump mapping perturbs the texture coordinates of one texture map using another map that encodes the height variation of the object's surface; the perturbed coordinates are then used for environment mapping, producing the uneven appearance. A bump-mapped surface therefore usually combines three texture maps: the first holds the original surface color of the object; the second, the bump map, holds the height variation of the surface and is used to perturb the coordinates of the third; and the third is an environment map representing the specular or diffuse illumination of the surroundings.
In Direct3D, a bump texture encodes the height differences between adjacent pixels on the object's surface. Each of its texels consists of dU, the height difference to the horizontally adjacent pixel; dV, the height difference to the vertically adjacent pixel; and L, the luminance at that point (some bump-texture pixel formats omit L).
Bump mapping usually uses three texture stages: the object's original texture, the bump texture generated from a height map of the original, and the environment texture, corresponding to stages 0, 1, and 2 of multi-stage texture blending. Setting the color operation of the current texture stage to D3DTOP_BUMPENVMAP or D3DTOP_BUMPENVMAPLUMINANCE marks that stage as a bump texture. For example:
pD3DDevice->SetTexture(1, g_bump_map_texture);
pD3DDevice->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_BUMPENVMAP);
Or
pD3DDevice->SetTexture(1, g_bump_map_texture);
pD3DDevice->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_BUMPENVMAPLUMINANCE);
The texture-stage operations D3DTOP_BUMPENVMAP and D3DTOP_BUMPENVMAPLUMINANCE select two variants of bump mapping. D3DTOP_BUMPENVMAPLUMINANCE indicates that the bump texture contains the luminance value L, which is multiplied into the texture color of the next stage to produce the final output color. D3DTOP_BUMPENVMAP uses a default luminance of 1, meaning the texture color of the next stage is left unchanged.

A volume texture is a set of 3D texels applied to two-dimensional primitives (such as a triangle or a line). It can be used to achieve special effects such as fog and explosions. When a volume texture is applied to a primitive, each vertex needs a triple of texture coordinates. When the primitive is drawn, each interior pixel is filled with color values sampled from texels of the volume texture, much as in two-dimensional texture mapping.
In the flexible vertex format we specify three texture coordinates for each vertex, as shown below:
struct SCustomVertex
{
    float x, y, z;  // position
    float u, v, w;  // three-dimensional texture coordinates
};
#define D3DFVF_CUSTOM_VERTEX (D3DFVF_XYZ | D3DFVF_TEX1 | D3DFVF_TEXCOORDSIZE3(0))

In Direct3D, three-dimensional objects are displayed through mesh models, so the key to displaying an object is generating its mesh. 3D text is no exception: to display 3D text, a mesh model for the text is required. Direct3D provides the utility function D3DXCreateText(), which makes it easy to create a mesh containing specific text:
HDC hdc = CreateCompatibleDC(NULL);
if (hdc == NULL)
    return false;
HFONT hfont = CreateFont(0, 0, 0, 0, FW_BOLD, FALSE, FALSE, FALSE, DEFAULT_CHARSET,
    OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS, DEFAULT_QUALITY,
    DEFAULT_PITCH | FF_DONTCARE, L"Arial");
SelectObject(hdc, hfont);
D3DXCreateText(g_device, hdc, L"3D font", 0.001f, 0.4f, &g_text_mesh, NULL, NULL);
DeleteObject(hfont);
DeleteDC(hdc);
After creating the text mesh, you can draw it with the ID3DXMesh interface function DrawSubset(). Before drawing, an appropriate world matrix must be set: although what is drawn is 3D text, it is in essence a three-dimensional object, so setting a world matrix for it is essential.
When D3DXCreateText() creates a mesh for text, the origin of the mesh is at its lower-left corner, so the text mesh must be translated to appear in the center of the window.

When drawing complex 3D scenes, objects inevitably occlude one another, and to draw such scenes correctly we need depth testing. Unlike opaque objects, translucent objects are drawn in Direct3D with alpha blending. Depth testing simplifies the drawing of complex scenes, while alpha blending makes 3D scenes more complete and realistic.
In complex scenes, many objects must be drawn, and they usually occlude one another: objects far from the viewpoint become invisible or only partially visible behind nearer objects. Direct3D provides depth testing to achieve this effect.
To understand depth testing, you must first understand the depth buffer. The depth buffer is a memory buffer Direct3D uses to store the depth of each pixel drawn to the screen. When Direct3D renders a scene to the target surface, it uses the depth buffer to determine the front-to-back occlusion relationships between the rasterized pixels of the polygons, and ultimately which color value is drawn. That is, Direct3D decides whether to draw the current pixel by comparing its depth with the depth value stored at the corresponding point in the depth buffer. If the depth test passes, the pixel is drawn and its depth replaces the stored value; otherwise the pixel is discarded. The depth buffer usually corresponds to a two-dimensional area the size of the screen.
When a scene is rasterized with depth buffering enabled, a depth test is performed for every point on the render surface. At the start, every value in the depth buffer is set to the maximum depth that can occur in the scene (by clearing the depth/stencil buffer with IDirect3DDevice9::Clear), and the color values on the render surface are set to the background color. Each polygon to be drawn is then tested to see whether its depth at a given point is smaller than the value stored in the depth buffer. If it is, the stored depth is updated and the color at that point on the render surface is replaced by the polygon's color; if the polygon's depth at that point is larger, the next polygon in the list is tested.
To use depth testing in a Direct3D program, a depth buffer must first be created when the Direct3D rendering device is created. The code is as follows:
D3DPRESENT_PARAMETERS d3dpp;
ZeroMemory(&d3dpp, sizeof(d3dpp));
d3dpp.Windowed = TRUE;
d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD;
d3dpp.BackBufferFormat = D3DFMT_UNKNOWN;
d3dpp.EnableAutoDepthStencil = TRUE;    // create a depth buffer managed by Direct3D
d3dpp.AutoDepthStencilFormat = D3DFMT_D16;  // 16 bits of depth per pixel
if (FAILED(g_d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
        D3DCREATE_SOFTWARE_VERTEXPROCESSING, &d3dpp, &g_device)))
{
    return false;
}

After the depth buffer has been created along with the rendering device, call the render-state function IDirect3DDevice9::SetRenderState() with D3DRS_ZENABLE as the first parameter and TRUE as the second to activate depth testing:
g_device->SetRenderState(D3DRS_ZENABLE, TRUE);
Generally the depth test function is set to D3DCMP_LESS, meaning a pixel passes the test, and is drawn, when its depth value is smaller than the corresponding value in the depth buffer. Unoccluded objects are therefore displayed and occluded ones are not. The sample code is as follows:
g_device->SetRenderState(D3DRS_ZFUNC, D3DCMP_LESS);
// In a left-handed coordinate system, a smaller depth value is closer to the viewer.
After the depth test function is set, you must also specify how the depth buffer is updated when the test succeeds: whether to keep the original depth value or replace it with the depth of the current pixel.
g_device->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
This indicates that when the test passes, the corresponding value in the depth buffer is updated with the depth of the current pixel. It is the most common setting and also the default.

Alpha Blending Principle
By defining an alpha value that indicates an object's transparency, together with a formula for combining colors, the color of the object being drawn can be blended with the color already in the color buffer, producing a translucent effect. Direct3D computes alpha blending as follows:
Color = (RGBsrc * Ksrc) OP (RGBdst * Kdst)
Here Color is the color value after alpha blending. RGBsrc is the source color, the color of the primitive being drawn; Ksrc is the source blend factor, usually assigned the alpha value expressing the degree of transparency, though it can be any value of the D3DBLEND enumeration, and it is multiplied with RGBsrc. RGBdst is the destination color, the color currently in the color buffer, and Kdst is the destination blend factor, again any D3DBLEND value, multiplied with RGBdst. OP is the operation combining the source term and the color-buffer term; by default OP is D3DBLENDOP_ADD, which adds the two results.
In graphics programs, the most common use of alpha blending assigns Ksrc = D3DBLEND_SRCALPHA, the alpha value of the pixel being drawn, and Kdst = D3DBLEND_INVSRCALPHA, one minus that alpha value, with OP left as the default addition. The alpha blending formula then becomes:
Color = (RGBsrc * Ksrc) + (RGBdst * Kdst)
These settings simulate the appearance of most translucent objects well.

Enabling Alpha Blending
To draw a translucent object, first activate Direct3D's alpha-blending operation: call the render-state function IDirect3DDevice9::SetRenderState() with D3DRS_ALPHABLENDENABLE as the first parameter and TRUE as the second. The code is as follows:
g_device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
Because alpha blending mixes the color of the pixel being drawn with the color already in the color buffer, before drawing a translucent object you must make sure that everything behind it has already been drawn: draw the opaque objects first, then the translucent ones.
(Alpha blending has strict requirements on drawing order!)

The alpha source blend factor is usually set to D3DBLEND_SRCALPHA, the alpha value of the pixel currently being drawn, and the destination blend factor to D3DBLEND_INVSRCALPHA, one minus that alpha value. Where does the alpha value of the current pixel come from? If neither materials nor textures are used, it comes from the alpha component of each vertex color. If lighting and materials are used, it comes from the object's surface material; and if the surface is textured, the texture contributes to the alpha value as well.
If the program specifies the color of each vertex directly, you can give each vertex color an alpha value, either declared when the vertex is defined or modified dynamically at run time. Given the vertex alphas, the alpha of every pixel of the rendered object is determined by those values and the object's shading mode. In flat shading mode, every pixel of a polygon takes the alpha value of the polygon's first vertex. In Gouraud shading mode, the alpha of each pixel on a polygon is obtained by linear interpolation of the alpha values of the polygon's vertices.

Material Alpha
The vertex alpha described above applies when lighting and materials are not used. If lights and materials are applied to the objects in a scene, without textures, then the alpha value of each vertex depends on the alpha component of the diffuse color in the material properties and the alpha component of the light color; the vertex alpha is produced by the lighting calculation. The vertex lighting algorithm computes red, green, blue, and alpha separately, and the alpha result of the lighting calculation becomes the vertex's alpha value. With the vertex alphas known, the alpha of each pixel is then derived according to the shading mode.

Texture Alpha
When a texture is applied to the object's surface, the alpha value of a pixel is the value produced by texture alpha blending, so it depends on the texture alpha-blending operation, which determines whether the resulting alpha comes from the material, from the texture, or from some combination of the two. The pixel alpha is computed as follows: first the vertex alpha is obtained, either specified directly or computed by lighting; the vertex alphas are then interpolated according to the shading mode; finally, that result is combined with the alpha sampled from the texture according to the texture alpha-blending operation, yielding the alpha value of each pixel.

When you look at other objects through something highly transparent, for example nearly transparent glass, you may feel that the glass is not there at all. When rendering such a scene in a 3D graphics program, pixels of such highly transparent objects can simply be skipped, which increases rendering speed. This is achieved with alpha testing.
The alpha test decides whether to draw a pixel based on whether the pixel satisfies the alpha test condition, that is, whether it has reached a certain degree of transparency; a graphics program can use the alpha test to effectively mask out certain pixels. Unlike alpha blending, the alpha test does not mix the current pixel's color with the color already in the color buffer: a pixel is either drawn completely opaque or discarded completely. Because no read from the color buffer and no color blending are needed, alpha testing is faster than alpha blending.
The alpha test is enabled by activating the render state D3DRS_ALPHATESTENABLE. The sample code is as follows:
g_device->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
The render state D3DRS_ALPHAREF sets the alpha test reference value. The alpha test function compares the alpha value of the pixel currently being drawn with this reference value; if the comparison yields true, the pixel passes the test and is drawn, otherwise it is discarded. The reference value ranges from 0x00000000 to 0x000000FF.
The render state D3DRS_ALPHAFUNC sets the alpha test function, which belongs to the D3DCMPFUNC enumeration and defaults to D3DCMP_ALWAYS. The following code sets the test function to D3DCMP_GREATER, meaning the test yields true when the pixel's alpha value is greater than the reference value:
g_device->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
g_device->SetRenderState(D3DRS_ALPHAREF, 0x00000081);
g_device->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATER);
Suppose the alpha value of blue glass is the floating-point value 0.5f, which corresponds to the hexadecimal byte 0x80. That is smaller than the alpha test reference value 0x81 set in the program, and since the alpha test function is D3DCMP_GREATER, the blue glass is not drawn.

After calling the D3DXCreateMesh() function to create a mesh object, you still need to load model data into it. Because loading model data is complicated, this function is rarely called directly; it is wrapped inside the Direct3D extended utility library functions (for example, D3DXLoadMeshFromX()), which create the mesh object and load the model data internally. A complex 3D model is really a collection of polygons, so we first need to obtain the polygons that make up the model. The utility function D3DXLoadMeshFromX() extracts the polygon information (vertex coordinates, colors, normal vectors, and texture coordinates) from a .X file and generates a mesh model. Its ID3DXBuffer output parameters exist for convenient data handling: a single ID3DXBuffer can hold many kinds of Direct3D data, such as vertex coordinates, materials, and textures, instead of requiring a separate interface type for each kind of data. Use ID3DXBuffer::GetBufferPointer() to obtain the data in the buffer and ID3DXBuffer::GetBufferSize() to obtain its size.
A 3D mesh model is usually composed of several sub-models, and a material and texture are usually set for each sub-model when the model is created. Because the sub-models may therefore use different materials and textures, a Direct3D program needs to store the materials and textures of all sub-models separately; and for the same reason, each sub-model must be rendered individually when the 3D model is drawn.

Rendering a Mesh Model
The mesh interface ID3DXMesh is in essence a collection of vertex buffers for a 3D object. It wraps operations such as creating vertex buffers, defining flexible vertex formats, and drawing vertex buffers into a single COM object, which greatly simplifies drawing 3D objects. For a 3D object represented by an ID3DXMesh, you can traverse its vertex buffers and render them yourself according to the corresponding vertex format, or simply call its interface function ID3DXMesh::DrawSubset() to draw it.

When Direct3D renders a primitive, the primitive is mapped onto the two-dimensional screen through coordinate transformations. If the primitive is textured, Direct3D uses the texture to generate the color of each pixel in the primitive's two-dimensional screen image. For every pixel of that image, a color must be obtained from the texture; the process of obtaining a color from the texture for each pixel is called texture filtering.

Lighting Model
In the real world, light reaches the eye after multiple reflections off object surfaces. At each reflection, the surface absorbs some of the light, scatters some of it diffusely in random directions, and the rest travels on to the next surface or to the eye. Reproducing this behavior of light faithfully requires a ray-tracing algorithm. Although ray tracing can create extremely lifelike scenes resembling what we observe in nature, no program can yet perform these computations in real time. Considering the needs of real-time rendering, Direct3D uses a simpler lighting model with four components: ambient light, diffuse light, specular light, and emissive light. Together they solve the lighting problem in 3D graphics programs flexibly and efficiently.

A material created with an emissive property does not emit light that other objects in the scene can reflect; that is, its emitted light does not participate in the lighting calculation. To get reflected light, additional light sources must be added to the scene.
The results of the ambient, emissive, and diffuse calculations are output as the vertex's diffuse color, and the result of the specular calculation is output as the vertex's specular color.
A light source emits three colors of light, each of which interacts with the corresponding component of the current material to produce the final color used for rendering: the light source's diffuse color acts on the material's diffuse-reflection property, and the light source's specular color acts on the material's specular-reflection property.
Because the specular computation is expensive, Direct3D does not perform it by default. To obtain specular highlights, first set the Specular member of the D3DLIGHT9 structure and the Specular and Power members of the D3DMATERIAL9 surface-material structure, then activate specular lighting with the following code:
g_device->SetRenderState(D3DRS_SPECULARENABLE, TRUE);
If diffuse or specular lighting is required, the vertex buffer must contain each vertex's normal vector, because Direct3D uses the vertex normals in the lighting calculation.
For lighting calculations, both light sources and materials are indispensable. An object's surface material determines which colors of light it can reflect and how much of that light is reflected. In Direct3D, surface material properties are defined by the D3DMATERIAL9 structure.

Ambient light in a scene has two sources: the global ambient light set through the render state, and the ambient component set on each individual light source. We recommend setting one overall ambient light through the render state and not setting an ambient component on each light source, because within the same scene every object should receive the same ambient light. Setting a single global ambient light through the render state is both convenient and closer to reality.
In Direct3D, light sources and materials are defined independently but interact with each other. Light sources belong to the scene as a whole, materials to individual objects, and together they determine the final rendered result. This is flexible but not easy to control, so light sources and surface materials should be set as realistically as possible. For example, set the light source to white light and give each object's material its true color. Of course, to achieve special effects you may exaggerate in some respects.

If you create a resource with the D3DPOOL_DEFAULT memory pool, Direct3D usually places it in video memory or AGP memory for better performance. However, after the Direct3D device is lost and before IDirect3DDevice9::Reset() is called to restore it, resources created with D3DPOOL_DEFAULT must be released, and they must be recreated after the device is restored.
Resources created with the D3DPOOL_MANAGED flag are called managed resources. Direct3D automatically keeps a backup of each managed resource in system memory; when the device is lost, Direct3D releases these resources automatically, and when the device is restored it recreates them from the system-memory backups, so you do not need to recreate them yourself.
The D3DPOOL_SYSTEMMEM memory pool flag is usually used for resources that the device does not access frequently. These resources reside in system memory, so they are not lost when the device is lost and need not be recreated when the device is restored.
D3DPOOL_DEFAULT and D3DPOOL_MANAGED are the two most commonly used pools; remember how they differ with respect to device loss and recovery. A single resource cannot use more than one memory pool, and once a pool has been chosen for a resource it cannot be changed.

Scene Presentation Overview
Presenting a scene means submitting what has been drawn in the back buffer to the front buffer so that it appears on the screen. The presentation interface functions are a set of methods that control the display-related state of a rendering device.
(1) Front buffer: a rectangular area of memory that the graphics adapter scans out; its contents are shown on the monitor or other output device.
(2) Back buffer: a surface whose contents can be presented to the front buffer.
(3) Swap chain: a collection of back buffers that are presented to the front buffer in sequence. Typically, a full-screen swap chain presents its contents through the flip device driver interface (DDI), while a windowed swap chain presents through the blit DDI.
The front buffer is not directly exposed in the Direct3D API, so applications cannot lock it or render to it. DirectX 9.0 applications have no concept of a primary surface and cannot create one.

Multiple Views in Windowed Mode
A Direct3D device object owns and controls its own swap chain. In addition, an application can call IDirect3DDevice9::CreateAdditionalSwapChain() to create extra swap chains, which are used to present multiple views from the same device. Typically the application creates one swap chain per view, with each swap chain corresponding to a specific view; it renders into the back buffer of each view and then presents each one separately with its Present() method. Note that for any Direct3D device object, only one swap chain at a time can be displayed full screen.

Multi-Monitor Operation
When a device is successfully set to full-screen mode, the Direct3D object that created it takes ownership of all the graphics adapters in the system. This state is called exclusive mode. Exclusive mode means that devices created by any other Direct3D object can neither operate full screen nor allocate resources. Moreover, while an object holds exclusive mode, all devices not in full-screen mode are placed in the lost state. Exclusive mode ends when the last full-screen device of the Direct3D object is switched to windowed mode or destroyed.
While a Direct3D object is in exclusive mode, the other devices fall into two categories. Devices of the first category have the following properties:
(1) They were created by the same Direct3D object that created the full-screen device.
(2) They share the same focus window as the full-screen device.
(3) They represent a different graphics adapter from every full-screen device.
Devices in this category are not lost, so there is no need to worry about whether they can be reset or created; they can even be set to full-screen mode themselves.
A device not in the first category is one created by a different Direct3D object, one that does not share the focus window of the current full-screen device, or one that represents the same adapter as a full-screen device. Such devices cannot be reset and remain in the lost state until the exclusive mode of the current full-screen device is released. Thus a multi-monitor application can place several devices in full-screen mode at once, but those devices must be created by the same Direct3D object, correspond to different physical adapters, and share the same focus window.

Operating on the Depth Buffer
The depth buffer is associated with the device. When an application sets a render target, it needs access to the depth buffer. The depth buffer can be manipulated with the IDirect3DDevice9::GetDepthStencilSurface() and IDirect3DDevice9::SetDepthStencilSurface() functions.

Accessing the Front Buffer
The front buffer can be accessed with the IDirect3DDevice9::GetFrontBufferData() function. This is the only way to take a screen snapshot of an antialiased scene.

Antialiasing
A pixel represents a position in the color buffer, that is, two-dimensional integer coordinates (x, y) on the screen. If a computed pixel position is a floating-point value, it is converted to integer coordinates for display. This rasterization can make an image look jagged; in graphics, this distortion caused by insufficient sampling frequency is called aliasing. Direct3D uses image antialiasing (through multisampling) to reduce the jagged appearance and smooth the edges of the image.

Full-Screen Display
Games usually run in full-screen mode. The key to full-screen display is creating a full-screen rendering device. Creating one is essentially the same as creating a windowed rendering device; the difference is that d3dpp.Windowed is set to FALSE, telling Direct3D that a full-screen rendering device will be created. In addition, the size and format of the back buffer must be specified. This differs from a windowed device, where the back buffer format may be set to D3DFMT_UNKNOWN and the back buffer size may be left at its default; for a full-screen rendering device both must be given explicitly.
D3DPRESENT_PARAMETERS d3dpp;
ZeroMemory(&d3dpp, sizeof(d3dpp));
D3DDISPLAYMODE display_mode;
g_d3d->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &display_mode);
d3dpp.Windowed = FALSE;
d3dpp.SwapEffect = D3DSWAPEFFECT_DISCARD;
d3dpp.BackBufferWidth = display_mode.Width;
d3dpp.BackBufferHeight = display_mode.Height;
d3dpp.BackBufferFormat = display_mode.Format;
if (FAILED(g_d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
        D3DCREATE_SOFTWARE_VERTEXPROCESSING, &d3dpp, &g_device)))
{
    return false;
}
