A simple test of Windows S2D (Part 2) in a vSphere environment

Tags: failover, ReFS, file system

After covering the basic concepts and architecture of S2D, we will now do some specific configuration and testing. The test environment is built on vCenter 6.0 U2, with four virtual machines configured as S2D nodes. Each virtual machine is configured as follows:

OS: Windows Server 2016 Datacenter

4 vCPU & 8GB RAM

4 vNICs

1 x 40GB disk for the OS, plus 2 x 50GB (simulated NVMe PCIe SSD), 2 x 100GB (simulated SSD), 4 x 300GB (HDD)

The idea of this test is to use the simulated NVMe PCIe SSD disks as the read-write cache, with the SSDs and HDDs as the capacity tiers. S2D itself is flexible enough to support all-flash or hybrid disk configurations, depending on how customers weigh overall performance, capacity, and price against their applications. Personally I feel a two-tier disk configuration is appropriate for most real deployments; the three-tier configuration simulated here is mainly to allow more testing and to explore the working mechanism. The Microsoft article below explains the S2D caching principles and best practices and is well worth a read. As long as hardware from the Microsoft certified compatibility list is used, the system automatically takes the highest-performing tier of disks as the cache when S2D is enabled (by default the cache acts as a write-only cache in front of SSD capacity disks, and as a read-write cache in front of HDD capacity disks). When testing on virtual machines, however, the type of a disk and how it is used sometimes has to be specified manually. The steps below include the specific commands for reference.

https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/understand-the-cache
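Once S2D is enabled (step 6 below), you can check from PowerShell how the cache actually came up. A minimal sketch, assuming the WS2016 Get-ClusterS2D cmdlet and its standard output properties:

# Inspect the current cache configuration: overall state plus per-media cache mode
Get-ClusterS2D | Select CacheState,CacheModeSSD,CacheModeHDD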

Now let's move on to the specific configuration steps:

1. We will use a PowerShell command in a later step to specify the 50GB disks' type as SCM. Here, first edit the VMX configuration file for each VM, adding scsiX:Y.virtualSSD = "true" for each of the two 100GB disks so that they are presented as SSDs. Alternatively, open Edit Settings > VM Options > Advanced > Configuration Parameters > Edit Configuration and add the entries directly in the interface shown below (a sketch of the resulting VMX lines follows the screenshot):

[Screenshot: VirtualSSD.png – adding the virtualSSD entry in the VM's configuration parameters]
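For reference, a minimal sketch of what the added VMX lines might look like, assuming the two 100GB disks sit at SCSI IDs 0:2 and 0:3 (the IDs are placeholders; match them to your actual disk layout):

scsi0:2.virtualSSD = "true"
scsi0:3.virtualSSD = "true"

With these set, the guest sees the two disks reported as SSD media.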

2. After installing Windows Server 2016 on the four virtual machines, add the required "File and Storage Services" role and "Failover Clustering" feature, complete basic configuration such as networking, and join the domain. If needed, you can use VM-Host affinity rules to place the virtual machines on separate physical hosts for higher availability. Two of the virtual NICs can be configured as a team, on different network segments, serving the production network and cluster node communication respectively.


3. The system clock setting is an important detail that is easy to overlook when clustering VMs. Once VMware Tools is installed, a virtual machine by default synchronizes its time to the host clock in the following cases: (1) when the VM restarts or resumes from a suspended state; (2) when the VM is vMotioned to another host; (3) when a snapshot is created or restored, or another operation triggers such an action; (4) after the VMware Tools service is restarted. If the host clock is inaccurate, this can cause many problems. It is recommended to turn off VMware Tools clock synchronization for these S2D nodes per VMware KB 1189, and to enable the Windows Time service so the guests synchronize with the domain controller automatically. Better still if you have a precise time server on your network.

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1189
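For reference, a minimal sketch of the settings involved; the VMX options are those listed in KB 1189, and the in-guest commands assume the standard w32tm CLI:

# In each node's VMX file: fully disable VMware Tools time sync (per KB 1189)
tools.syncTime = "0"
time.synchronize.continue = "0"
time.synchronize.restore = "0"
time.synchronize.resume.disk = "0"
time.synchronize.shutdown = "0"
time.synchronize.tools.startup = "0"

Then, inside each guest, point the Windows Time service at the domain hierarchy:

w32tm /config /syncfromflags:domhier /update
Restart-Service w32time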

4. Prepare for cluster creation, for example the quorum witness configuration. Next, in the GUI or in PowerShell run as administrator, enter the following commands to add the Failover Clustering feature to each host (if not already added) and create a new cluster (a validation/quorum sketch follows the commands):

Add-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName xxx.yourdomain.com -Verbose -Credential yourdomain\administrator

New-Cluster -Name xxxx -StaticAddress x.x.x.x -Node node1.yourdomain.com,node2.yourdomain.com,node3.yourdomain.com,node4.yourdomain.com -Verbose
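Before New-Cluster, it is also worth validating the nodes, and afterwards configuring the witness mentioned above. A minimal sketch, assuming a reachable file share for the witness (the share path is a placeholder; a disk or cloud witness works equally well):

# Validate the nodes for S2D before creating the cluster
Test-Cluster -Node node1.yourdomain.com,node2.yourdomain.com,node3.yourdomain.com,node4.yourdomain.com -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

# After New-Cluster: configure a file share witness as the quorum
Set-ClusterQuorum -FileShareWitness \\fileserver\witness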


5. Open the PowerShell ISE on one of the new cluster's nodes and enter a command similar to the following to see the details of the physical disks contributed by all nodes:

Get-PhysicalDisk | Select FriendlyName,SerialNumber,CanPool,OperationalStatus,OperationalDetails,HealthStatus,Usage,Size,BusType,MediaType | FT

Small reminder: if testing on a physical platform, it is best to use the Clear-Disk -RemoveData -RemoveOEM command to wipe all data from the disks on every node that will be used to build the storage pool; note that the corresponding disks must be online for the command to run correctly (a fuller sketch follows the screenshot). The output is similar to the following, with all disks showing RAW in the Partition Style column:

[Screenshot: Clear-DiskResults.png – Get-Disk output with all candidate disks showing a RAW partition style]
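For reference, a minimal wipe sketch run on each node, assuming you want to clear every non-boot, non-system disk; the filter is illustrative, so double-check which disks it matches before running it:

# DANGER: destroys all data on the matched disks
Get-Disk | Where-Object {-not $_.IsBoot -and -not $_.IsSystem -and $_.PartitionStyle -ne "RAW"} | Clear-Disk -RemoveData -RemoveOEM -Confirm:$false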


6. When building S2D in a production environment, make sure all hardware is on Microsoft's official compatibility list. In this test environment, after checking that the number and status of the physical disks look right, you may find that some disk types are not recognized correctly. You can manually specify their disk type by command, but since the media type is a property a disk has within a storage pool, all disks must first be added to a pool before their types can be set manually. Here we enable S2D while temporarily disabling the cache and skipping disk eligibility checks:

Enable-ClusterS2D -CacheState Disabled -AutoConfig:0 -SkipEligibilityChecks


7. Create your own storage pool with the New-StoragePool command; different disks can be placed into different pools at creation time. Here we put all the disks into one storage pool, "MyS2DPool1".

New-StoragePool -StorageSubSystemFriendlyName *Cluster* -FriendlyName MyS2DPool1 -ProvisioningTypeDefault Fixed -PhysicalDisks (Get-PhysicalDisk | ? CanPool -eq $true)

Use the Get-StoragePool command to view its status when finished (a quick one-liner is sketched below), or view it in the Server Manager GUI, shown in the screenshot that follows:
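A quick status check from PowerShell, assuming the pool name used above:

Get-StoragePool MyS2DPool1 | FT FriendlyName,OperationalStatus,HealthStatus,Size,AllocatedSize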

[Screenshot: StoragePoolGUIProperties.png – storage pool properties in Server Manager]

Status in Failover Cluster Manager:

[Screenshot: StoragePoolGUIClusterMgr.png – storage pool status in Failover Cluster Manager]

Below are the physical disk details after joining the storage pool. You can see that apart from the 8 SSD disks, the 300GB HDD and 50GB NVMe PCIe SSD disk types are not recognized:

[Screenshot: Get-PhysicalDiskBeforeDefineMediaType.png – pooled disks with MediaType still Unspecified for the 50GB and 300GB disks]


8. Next, use the following command to designate the unrecognized 50GB disks as the SCM type:

Get-PhysicalDisk | Where {$_.MediaType -eq "Unspecified" -and $_.CanPool} | Set-PhysicalDisk -MediaType SCM

The filter can be combined flexibly, for example selecting disks by size. We then set the 300GB disks' type to HDD with the following command:

Get-PhysicalDisk | Where {$_.Size -eq 322122547200} | Set-PhysicalDisk -MediaType HDD

After completion, the Get-PhysicalDisk output looks like this:

[Screenshot: Get-PhysicalDiskAfterDefineMediaType.png – pooled disks with MediaType now set for every disk]


9. S2D automatically uses the highest-performing tier of disks as the read-write cache when the storage pool is created. In this example a manual adjustment is needed to designate the SCM disks as the cache layer (Usage = Journal):

Get-PhysicalDisk | Where {$_.MediaType -eq "SCM"} | Set-PhysicalDisk -Usage Journal

When finished, use the Get-PhysicalDisk command to review the final physical disk status and make sure it is correct.

[Screenshot: Get-PhysicalDiskNVMeAsCache.png – the SCM disks now showing Usage = Journal]


10. Then turn on the S2D cache:

(Get-Cluster).S2DCacheDesiredState = 2

After completion, use Get-ClusterS2D to view its status:

[Screenshot: EnableS2DCache.png – Get-ClusterS2D output showing the cache enabled]

11. Similar to VMware vSAN, S2D also supports different types of fault domains to enhance availability in a production environment. The fault domain types include Node, Rack, Chassis, Site, and so on. Here we create 4 rack-based fault domains and place the 4 nodes into separate fault domains:

1..4 | ForEach-Object {New-ClusterFaultDomain -Name FD0$_ -FaultDomainType Rack}

1..4 | ForEach-Object {Set-ClusterFaultDomain -Name dths2sofsnode$_ -Parent FD0$_}

When done, Get-ClusterFaultDomain shows a result similar to the following:

[Screenshot: Get-ClusterFaultDomain.png – the four rack fault domains with one node under each]
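As an aside, the same fault domain topology can be dumped and edited in one pass as XML; a minimal sketch using the WS2016 Get-/Set-ClusterFaultDomainXML cmdlets:

$xml = Get-ClusterFaultDomainXML        # dump the current fault domain tree as XML
# ...edit $xml here, e.g. add <Rack> elements or Location attributes...
Set-ClusterFaultDomainXML -XML $xml     # write the edited tree back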


12. Creating a virtual disk (storage space) is simple and can be done through the Server Manager GUI or via PowerShell. Note that the GUI exposes few customizable options and its wizard completes in a few simple steps, whereas PowerShell gives much more flexibility. For example, below we use the New-VirtualDisk command to create a VD named "TestShrink1", 20GB in size with a dual parity layout.

New-VirtualDisk -StoragePoolFriendlyName MyS2DPool1 -FriendlyName TestShrink1 -Size 20GB -ResiliencySettingName Parity -PhysicalDiskRedundancy 2

The options for creating a VD inside Server Manager are as follows:

[Screenshot: GUINewVirtualDisk.png – the New Virtual Disk wizard in Server Manager]


When you are finished, you can view its status in Server Manager, or you can use the following command to view it in PowerShell:

Get-VirtualDisk -FriendlyName TestShrink1 | Select *

The output is as follows; you can see that the write cache for the volume is 1GB by default:

[Screenshot: Get-VirtualDiskDetails.png – full property list for TestShrink1, with a 1GB write cache]

Below is the information seen in the Failover Cluster Manager interface. The VD can be initialized from Disk Management on its owner node, formatted as NTFS or ReFS, and then converted to CSV in the Cluster Manager (a PowerShell sketch of these steps follows the screenshot).

[Screenshot: ShrinkDiskClusterMgr.png – the new disk in Failover Cluster Manager]
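For reference, a minimal PowerShell equivalent of those GUI steps, run on the owner node; the cluster disk resource name at the end is an assumption, so read the real one from Get-ClusterResource first:

# Initialize and format the new VD, then add it as a CSV
Get-VirtualDisk -FriendlyName TestShrink1 | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem ReFS -NewFileSystemLabel TestShrink1

Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Virtual Disk (TestShrink1)"   # resource name is an assumption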

13. We can also do this in one step, creating a ReFS volume "MyVol3" directly in PowerShell as a CSV, with a two-copy mirror layout:

New-Volume -StoragePoolFriendlyName MyS2DPool1 -FriendlyName MyVol3 -FileSystem CSVFS_ReFS -Size 25GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 1

Similarly, use the Get-Volume command to view its details:

[Screenshot: Get-VolumeDetails.png – Get-Volume output for MyVol3]

Below is the status as seen in Failover Cluster Manager:

[Screenshot: MyVol3FailoverMgr.png – MyVol3 as a Cluster Shared Volume in Failover Cluster Manager]

A careful reader may notice that we specified a size of 25GB at creation time, so why does the resulting VD come out at 32GB?

Let's use Get-VirtualDisk to look at the details of the VD behind this volume:

[Screenshot: Get-VirtualDisk-MyVol3.png – VD details for MyVol3, with NumberOfColumns and FaultDomainAwareness highlighted]

Note the parameters in the red box. Here are some important concepts in Storage Spaces:

Slab: the basic allocation unit from which a virtual disk is built. The disks in the storage pool are divided into slabs, which are then allocated to a virtual disk according to the user-defined data protection method (mirror or parity) and presented to the host. The slab size in S2D is 256MB.

Column: can be understood simply as the stripe width, i.e., how many physical disks Storage Spaces stripes across when writing data to the VD. In theory, the more columns, the more disks work simultaneously, and IOPS increase correspondingly. In practice, because of the read-write cache, further testing is needed to quantify the performance difference between column counts.

Interleave: can be understood simply as the stripe depth, i.e., the amount of data that lands on each disk when Storage Spaces writes a stripe to the VD. The default in S2D is 256KB (so with 8 columns, one full stripe per data copy is 8 x 256KB = 2MB).

In S2D, Microsoft recommends not setting the Column and Interleave values manually when creating a VD; the system picks an optimal configuration automatically, and the slab size is not adjustable. The reason the actual size of VD "MyVol3" exceeds the defined size lies in the column choice. As shown, the system automatically configured NumberOfColumns = 8 for the two-copy mirror "MyVol3". My guess is that each copy of the data is striped across 8 disks, and because space on each disk is pre-allocated in 256MB slabs, some extra space is inevitably allocated. The smaller the size defined when creating the VD, the more obvious this effect: I tried creating a 1GB two-way mirror VD and its actual size came out as 8GB, yet for a large VD there is no obvious over-allocation. In a real production environment the impact should therefore be small. Details for a 1TB VD:

[Screenshot: 1TBVD.png – details of a 1TB VD showing no obvious over-allocation]

In addition, the system automatically configured the "FaultDomainAwareness" value of VD "MyVol3" as "StorageScaleUnit", i.e., the basic fault domain unit for scaling. In a real production environment, fault domains should be defined according to site conditions to improve fault tolerance. The fault domain type currently offers five choices: "PhysicalDisk", "StorageChassis", "StorageEnclosure", "StorageRack" and "StorageScaleUnit". Using the "StorageRack" fault domains we created earlier, we can create a VD with a command similar to the following so that each copy of the data is pinned to its own fault domain. In this example we create a new VD named "3MirrorVD8" whose fault domain type is the StorageRack defined above, limiting the NumberOfColumns value to 4 so it does not exceed the number of HDDs per node:

New-VirtualDisk -StoragePoolFriendlyName MyS2DPool1 -FriendlyName 3MirrorVD8 -Size 10GB -ResiliencySettingName Mirror -NumberOfDataCopies 3 -FaultDomainAwareness StorageRack -NumberOfColumns 4

[Screenshot: 3MirrorVD8Details.png – VD details showing three data copies with StorageRack fault domain awareness]


14. Through PowerShell it is also easy to pin a VD to a particular type of disk, depending on business performance needs. For example, let's create a mirror volume "SSDVol1" and pin it to the SSD disks:

New-Volume -StoragePoolFriendlyName MyS2DPool1 -FriendlyName SSDVol1 -FileSystem CSVFS_ReFS -MediaType SSD -Size 15GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 1

Below is the detail seen with Get-VirtualDisk; you can see that the disks behind the volume are SSDs. In the same way, we can pin large-capacity VDs with low performance requirements directly onto the HDDs.

[Screenshot: Get-VirtualDisk-SSDVol1.png – SSDVol1 backed entirely by SSD disks]

We can easily expand the volume online in Server Manager or with the following command:

Resize-VirtualDisk -FriendlyName SSDVol1 -Size 25GB -Verbose

After completion, the VD's file system can be expanded online from Disk Management on the SSDVol1 volume's owner node (a PowerShell sketch appears after the reminder below):

[Screenshot: ExtendSSDVol1DiskMgr.png – extending the volume in Disk Management]

Small reminder: S2D VDs currently support only online expansion, not shrinking. In addition, an S2D storage pool in a cluster supports only fixed-provisioned VDs; thin provisioning, which allocates space according to actual use, is not supported.
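The file system extension itself can also be scripted instead of using Disk Management; a minimal sketch, assuming it runs on the owner node and that the data partition is partition number 2 (verify both assumptions first):

$part = Get-VirtualDisk -FriendlyName SSDVol1 | Get-Disk | Get-Partition | Where PartitionNumber -eq 2
$max = ($part | Get-PartitionSupportedSize).SizeMax   # largest size the partition can grow to
$part | Resize-Partition -Size $max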


15. Since there are two types of disks acting as capacity tiers in the storage pool, we next try to create a volume that spans multiple storage tiers (a Multi-Resilient Volume), which is only available with S2D in Windows Server 2016 and only on the ReFS file system. The purpose of this volume type is to automatically balance the placement of hot and cold data to optimize the performance of applications on it while conserving high-tier disk space: data is first written, mirrored, to a pre-defined mirror tier (SSD), and "cooled" data is later rotated to the parity tier (HDD) as needed, freeing SSD space for the hot data that really needs the performance. The official Microsoft documentation sums up the VD resiliency layout types well:

[Screenshot: ResilientTypes.png – summary table of S2D resiliency layout types from the Microsoft documentation]


Next we define the mirror tier and the parity tier in the storage pool. The mirror tier is named "Perf" and uses a 2-copy mirror layout, which has a small write penalty and better performance; the parity tier is named "Cap" and uses the more space-efficient dual parity layout (similar to RAID 6) while still providing good protection. The specific commands are as follows:

New-StorageTier -StoragePoolFriendlyName MyS2DPool1 -FriendlyName Perf -MediaType SSD -ResiliencySettingName Mirror -PhysicalDiskRedundancy 1

New-StorageTier -StoragePoolFriendlyName MyS2DPool1 -FriendlyName Cap -MediaType HDD -ResiliencySettingName Parity -PhysicalDiskRedundancy 2

The output results are as follows:

[Screenshot: CreateStorageTiers.png – output of the two New-StorageTier commands]

Below we create a volume named "MRVol1" with the following command; the volume size is 60GB, with a 10GB mirror portion and a 50GB parity portion:

New-Volume -StoragePoolFriendlyName MyS2DPool1 -FriendlyName MRVol1 -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Perf,Cap -StorageTierSizes 10GB,50GB -Verbose

Using Microsoft's script tool "Show-PrettyVolume" (linked at the end of this article), the volume looks like this:

[Screenshot: ShowPreVol-MRVol1.png – Show-PrettyVolume output for MRVol1]

You can also use the following command to view the volume's detailed configuration on the two storage tiers:

Get-VirtualDisk MRVol1 | Get-StorageTier | Select *

[Screenshot: MRVol1StorageTiersDetails.png – per-tier details for MRVol1]

As for what ratio of mirror-tier to parity-tier capacity is appropriate when creating an MRV, there is no hard rule. The most recently written hot data is kept on the SSDs, so the mirror tier should be sized against how much new data the business running on the volume generates per day. Also, once the mirror tier is more than about 60% full relative to its defined size, data movement to the parity tier is triggered, so it is advisable to size the mirror tier generously to avoid the extra system load of excessive data movement. Based on Microsoft's best practices, it is advisable to reserve roughly twice the size of the hot data for the mirror tier, and to define the whole volume about 20% larger than the required capacity. In any case, as in the earlier steps, you can easily expand either tier through PowerShell; for example, the following commands grow the two tiers by 10GB each. Afterwards, open the Disk Management tool on the volume's owner node and extend its file system, which we will not repeat here.

Get-VirtualDisk -FriendlyName MRVol1 | Get-StorageTier | ? FriendlyName -eq MRVol1_Perf | Resize-StorageTier -Size 20GB

Get-VirtualDisk -FriendlyName MRVol1 | Get-StorageTier | ? FriendlyName -eq MRVol1_Cap | Resize-StorageTier -Size 60GB


16. Finally, the multi-resilient volume can be optimized on demand with the following command. The system also configures a corresponding data-movement task by default, which can be found in Task Scheduler, as shown below. Users can adjust its start time according to business needs.

Optimize-Volume -FileSystemLabel MRVol1 -TierOptimize -Verbose

[Screenshot: TierOptimizeScheduler.png – the storage tiers optimization task in Task Scheduler]
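The task can also be inspected from PowerShell; a minimal sketch, assuming the default WS2016 task path for storage tiers management:

Get-ScheduledTask -TaskPath "\Microsoft\Windows\Storage Tiers Management\" | Select TaskName,State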

As the steps above show, configuring and using S2D is relatively easy, and it is flexible enough to meet the needs of different usage scenarios; administrators do, however, need to be familiar with the PowerShell commands. Microsoft product experts have shared some useful S2D scripts on the Internet, linked below; after downloading, a few simple modifications for your own environment are all that is needed.

Scripts to view the usage status of storage pools and volumes:

http://cosmosdarwin.com/Show-PrettyPool.ps1

http://cosmosdarwin.com/Show-PrettyVolume.ps1

A script to completely clear an S2D configuration:

https://gallery.technet.microsoft.com/scriptcenter/Completely-Clearing-an-ab745947


This article is from the "Servers in the Cloud" blog; please retain the source: http://yddfwq.blog.51cto.com/4016432/1935117
