MySQL Learning Notes Summary

Source: Internet
Author: User

Slow SQL: a query whose execution time exceeds a given threshold is called a slow query.
How do I record slow SQL in MySQL?
A: You can add the following settings to my.cnf:

[mysqld]
; Enable the slow query log (long_query_time is in seconds)
log-slow-queries
; Log queries taking longer than 5 seconds
long_query_time = 5
; Log queries that don't use indexes even if they take less than long_query_time
; (MySQL 4.1 and newer only)
log-queries-not-using-indexes

These three settings record any query that takes more than 5 seconds to execute, as well as queries that use no index.
MySQL log categories:
1. Error log: records MySQL errors.
2. Binary log (binlog): records, in binary form, the queries that modify data.
3. mysql-bin.index: records the absolute paths of all binary logs, so that every MySQL thread can locate the binary log files it needs.
4. Slow query log: records slow SQL in a simple text format that any text editor can view. It records when each statement was executed, how long it took, and which user executed it.
5. InnoDB redo log: records all physical changes and transaction information made by InnoDB, to guarantee transaction safety.
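The slow query log's plain-text format makes it easy to post-process. Below is a minimal parsing sketch in Python; the sample entry layout is an assumption based on typical slow-log output (header lines vary between MySQL versions), not something taken from this document.

```python
import re

# Hypothetical slow-log entry in a typical layout: comment headers
# with timings, followed by the SQL statement itself.
ENTRY = """\
# Time: 2024-01-01T00:00:05
# User@Host: app[app] @ localhost []
# Query_time: 5.3  Lock_time: 0.01 Rows_sent: 1  Rows_examined: 100000
SELECT * FROM orders WHERE note LIKE '%late%';"""

def parse_slow_entry(text):
    """Extract the user, timings, and SQL text from one log entry."""
    timing = re.search(r"Query_time:\s*([\d.]+)\s+Lock_time:\s*([\d.]+)", text)
    user = re.search(r"User@Host:\s*(\S+)", text)
    sql = text.strip().splitlines()[-1]  # the statement is the last line
    return {
        "user": user.group(1),
        "query_time": float(timing.group(1)),
        "lock_time": float(timing.group(2)),
        "sql": sql,
    }

entry = parse_slow_entry(ENTRY)
print(entry["query_time"])  # 5.3
```

A real parser would also handle multi-line statements and multiple entries per file; this only illustrates the kind of per-entry information the log carries.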

The MySQL architecture can be divided into two layers: the SQL layer and the storage engine layer.
The SQL layer contains multiple submodules:
1. Initialization module
As the name implies, the initialization module performs the various initialization operations when MySQL Server starts: initializing the buffer and cache structures and requesting their memory, initializing the system variables, initializing the storage engine settings, and so on.
2. Core API
The core API module provides highly efficient implementations of low-level operations: various underlying data structures, special-purpose algorithms, string handling, number handling, small-file I/O, formatted output, and, most importantly, memory management. All of the core API source code is concentrated under the mysys and strings folders; interested readers can study it there.
3. Network interaction module
The underlying network interaction module abstracts the APIs used for low-level network interaction and implements the receiving and sending of network data, making this layer easy for other modules to call and easy to maintain. All of its source code lives under the vio folder.
4. Client & Server interaction protocol module
Any client/server software system has its own information interaction protocol, and MySQL is no exception. This module implements all of the protocols used when interacting with MySQL. These protocols are, of course, built on existing OS and network protocols such as TCP/IP and Unix sockets.
5. User module
The user module implements user login/connection authorization and user privilege management. Like MySQL's gatekeeper, it decides whether to "open the door" to each visitor.
6. Access control module
Once a visitor is in, can they do whatever they like? For security reasons, certainly not. The access control module monitors each guest's every move and grants different guests different privileges. Its job is to control each user's access to data according to the authorization information kept by the user module and the constraints defined in the database itself. Together, the user module and the access control module make up the security management of the MySQL database system.
7. Connection management, connection threads, and thread management
The connection management module listens for requests to MySQL Server, accepts connection requests, and forwards them all to the thread management module. Each client connected to MySQL Server is assigned (or given a newly created) connection thread to serve it exclusively. The connection thread's main job is communication between server and client: accepting the client's command requests and delivering the server's result information. The thread management module maintains these connection threads, including thread creation, thread caching, and so on.
8. Query parsing and forwarding module
In MySQL, every command sent from the client to the server is customarily called a query. Inside MySQL Server, once a connection thread receives a client query, it hands it to the module dedicated to classifying queries and forwarding them to the appropriate handlers: the query parsing and forwarding module. Its main task is to analyze the query's syntax and semantics, classify it by operation type, and forward it accordingly.
9. Query cache module
The query cache module is a very important module in MySQL. Its main function is to cache, in memory, the result sets of SELECT queries submitted by clients, keyed by a hash of the query. Whenever data changes in any base table the query depends on, MySQL automatically invalidates that query's cache entry. In applications with a very high read-to-write ratio, the query cache brings a significant performance improvement; of course, its memory consumption is also very large.
10. Query optimizer module
The query optimizer, as the name suggests, optimizes client queries: based on the query statement and statistical information about the database, it analyzes a series of candidate plans, arrives at an optimal strategy, and tells the subsequent stages how to obtain the query's results.
11. Table change management module
The table change management module is mainly responsible for processing DML and DDL queries such as UPDATE, DELETE, INSERT, CREATE TABLE, and ALTER TABLE.
12. Table maintenance module
Table state checking, error repair, and optimization and analysis are all the table maintenance module's job.
13. System status management module
The system status management module returns status data to clients that request the system's state. The results of commands DBAs commonly use, such as SHOW STATUS and SHOW VARIABLES, are returned by this module.
14. Table manager
From its name this module is easily confused with the table change and table maintenance modules, but its function is completely different. As you know, every MySQL table has a table definition file, the *.frm file. The table manager's main tasks are maintaining these files and maintaining a cache whose main contents are each table's structure information. It also manages table-level locks.
15. Logging module
The logging module is responsible for the logical-layer logs of the whole system, including the error log, the binary log, and the slow query log.
16. Replication module
The replication module can be divided into a master module and a slave module. The master module is mainly responsible for reading the binary log in a replication environment and interacting with the I/O thread on the slave side. The slave module does a bit more than the master module, mainly embodied in two threads: the I/O thread, which requests and receives the binary log from the master and writes it to the local relay log; and the SQL thread, which reads log events from the relay log, parses them into statements that can execute correctly on the slave and produce exactly the same results as on the master, and then executes them on the slave.
17. Storage engine interface module
The storage engine interface module can be said to be the most distinctive feature of the MySQL database. Among current database products, essentially only MySQL implements pluggable management of its underlying data storage engines. This module is really just an abstract class, but it is precisely its successful abstraction over different kinds of data processing that makes today's pluggable storage engine feature possible.

MySQL performance tuning: how to monitor:

1. SET profiling=1 turns on performance monitoring (this command is unavailable in some MySQL versions).
2. Then execute the SQL.
3. SHOW PROFILES shows the time the system took to execute each SQL statement.
4. SHOW PROFILE CPU, BLOCK IO FOR QUERY n (this numeric ID is the ordinal of the entry in the SHOW PROFILES output).

MySQL storage engines use three levels of locking: row-level locking, page-level locking, and table-level locking.
In MySQL, the non-transactional storage engines such as MyISAM, MEMORY, and CSV use table-level locking; row-level locking is used mainly by the InnoDB and NDB Cluster storage engines; page-level locking is used mainly by the BerkeleyDB storage engine.

The priority between read requests and write requests waiting in MyISAM's queues is determined mainly by the following rules:
1. Apart from READ_HIGH_PRIORITY read locks, write locks in the pending write-lock queue can block all other read lock requests;
2. READ_HIGH_PRIORITY read lock requests can block all write locks in the pending write-lock queue;
3. Apart from WRITE write locks, the other kinds of write locks in the pending write-lock queue have lower priority than read locks.

Once a MyISAM write lock appears in the current write-lock queue, it blocks all other lock requests, except in the following cases:
1. Where the storage engine permits it, a WRITE_CONCURRENT_INSERT write lock request may be allowed;
2. When the write lock is WRITE_ALLOW_WRITE, all read and write lock requests except WRITE_ONLY are allowed;
3. When the write lock is WRITE_ALLOW_READ, all read lock requests except READ_NO_INSERT are allowed;
4. When the write lock is WRITE_DELAYED, all read lock requests except READ_NO_INSERT are allowed;
5. When the write lock is WRITE_CONCURRENT_INSERT, all read lock requests except READ_NO_INSERT are allowed.

Row-level locking considerations for InnoDB:
(a) Make all data retrieval go through an index wherever possible, to prevent InnoDB from escalating to table-level locking because it cannot lock through an index key;
(b) Design indexes sensibly, so that when InnoDB locks index keys it can narrow the lock range as precisely as possible, avoiding unnecessary locking that affects the execution of other queries;
(c) Minimize range-based retrieval filter conditions, to avoid locking records that should not be locked as a negative side effect of gap locks;
(d) Try to control transaction size, reducing the amount of resources locked and the length of time locks are held;
(e) Where the business environment permits, use a lower transaction isolation level, to minimize the extra cost MySQL pays for the isolation level.
How to view table-level locking information in MyISAM:
A: SHOW STATUS LIKE '%table_locks%'
Table_locks_immediate: the number of times a table lock was acquired immediately.
Table_locks_waited: the number of times table-level lock contention caused a wait.
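The ratio of these two counters gives a quick contention check. A minimal sketch of the arithmetic in Python, using hypothetical counter values (the real numbers come from SHOW STATUS):

```python
def table_lock_contention(immediate, waited):
    """Fraction of table-lock requests that had to wait.

    A ratio much above a few percent suggests serious MyISAM
    table-lock contention.
    """
    total = immediate + waited
    return waited / total if total else 0.0

# Hypothetical values for Table_locks_immediate and Table_locks_waited
ratio = table_lock_contention(100000, 500)
print(f"{ratio:.2%}")  # about 0.5% of lock requests waited
```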

How to view InnoDB row-level locking information:
A: SHOW STATUS LIKE '%innodb_row_lock%'
The InnoDB row-level lock status variables record not only the number of lock waits but also the total lock-wait time, the average and maximum wait lengths, plus one non-cumulative value showing the number of waits currently in progress. The status values are as follows:
Innodb_row_lock_current_waits: the number of waits currently in progress;
Innodb_row_lock_time: total lock-wait time since system startup;
Innodb_row_lock_time_avg: average time spent per wait;
Innodb_row_lock_time_max: the longest single wait since system startup;
Innodb_row_lock_waits: the total number of waits since system startup;
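The average figure is simply the cumulative wait time divided by the wait count. A small sketch with hypothetical counter values, mirroring how Innodb_row_lock_time_avg relates to the other counters:

```python
def row_lock_stats(total_time_ms, waits, max_ms):
    """Derive the average wait from the cumulative counters, the way
    Innodb_row_lock_time_avg = Innodb_row_lock_time / Innodb_row_lock_waits
    (times in milliseconds)."""
    avg = total_time_ms / waits if waits else 0.0
    return {"avg_ms": avg, "max_ms": max_ms, "waits": waits}

# Hypothetical values as read from SHOW STATUS LIKE '%innodb_row_lock%'
stats = row_lock_stats(total_time_ms=12000, waits=300, max_ms=450)
print(stats["avg_ms"])  # 40.0
```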

mysqlslap is an official stress-testing tool supplied with MySQL. Here are some of its more important parameters:
--defaults-file: configuration file location
--concurrency: number of concurrent clients
--engine: storage engine
--iterations: number of test iterations
--socket: socket file location
Automated tests:
--auto-generate-sql: automatically generate test SQL
--auto-generate-sql-load-type: type of test SQL; the types are mixed, update, write, key, and read
--number-of-queries: total number of SQL statements executed
--number-int-cols: number of INT columns in the table
--number-char-cols: number of CHAR columns in the table
For example:
shell> mysqlslap --defaults-file=/u01/mysql1/mysql/my.cnf --concurrency=50,100 --iterations=1 --number-int-cols=4 --auto-generate-sql --auto-generate-sql-load-type=write --engine=myisam --number-of-queries=200 -S /tmp/mysql1.sock
Benchmark
Running for engine MyISAM
Average number of seconds to run all queries: 0.016 seconds
Minimum number of seconds to run all queries: 0.016 seconds
Maximum number of seconds to run all queries: 0.016 seconds
Number of clients running queries: 50
Average number of queries per client: 4
Benchmark
Running for engine MyISAM
Average number of seconds to run all queries: 0.265 seconds
Minimum number of seconds to run all queries: 0.265 seconds
Maximum number of seconds to run all queries: 0.265 seconds
Number of clients running queries: 100
Average number of queries per client: 2
To run a test against a specific database:
--create-schema: specify the database name
--query: specify the SQL statement, or point it at a file containing SQL
For example:
shell> mysqlslap --defaults-file=/u01/mysql1/mysql/my.cnf --concurrency=25,50 --iterations=1 --create-schema=test --query=/u01/test.sql -S /tmp/mysql1.sock
Benchmark
Average number of seconds to run all queries: 0.018 seconds
Minimum number of seconds to run all queries: 0.018 seconds
Maximum number of seconds to run all queries: 0.018 seconds
Number of clients running queries: 25
Average number of queries per client: 1
Benchmark
Average number of seconds to run all queries: 0.011 seconds
Minimum number of seconds to run all queries: 0.011 seconds
Maximum number of seconds to run all queries: 0.011 seconds
Number of clients running queries: 50
Average number of queries per client: 1

Index usage restrictions in MySQL:
1. The total length of an index key in the MyISAM storage engine cannot exceed 1000 bytes;
2. BLOB and TEXT columns can only have prefix indexes;
3. MySQL does not currently support function-based indexes;
4. MySQL cannot use an index with a not-equal comparison (!= or <>);
5. If the filter column goes through a function operation (such as ABS(column)), MySQL cannot use an index;
6. In a JOIN statement, MySQL cannot use an index when the join condition's column types are inconsistent;
7. With LIKE, if the pattern starts with a wildcard ('%abc...'), MySQL cannot use an index;
8. MySQL cannot use a hash index for a non-equality query;
MySQL can use two algorithms to sort data:
1. Take, from the rows matching the filter conditions, only the sort field and a row pointer that locates the row data directly; perform the actual sort in the sort buffer; then, using the sorted row pointers, fetch the other columns the client requested and return the rows to the client.
2. Take, from the rows matching the filter conditions, the sort field together with all the other columns the client requested; keep the columns that do not take part in the sort in a separate memory area; sort the sort fields plus row pointer information in the sort buffer; finally, match the sorted row pointers against the row pointers kept in the memory area to pick up the other columns, and return the rows to the client in order.
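The difference between the two algorithms can be sketched with toy rows in Python. This is an illustrative model, not MySQL code: the first function sorts only (sort key, row pointer) pairs and re-reads the remaining columns afterwards (the classic two-pass sort), while the second sorts complete rows in one pass at a higher memory cost per row:

```python
# Illustrative rows: (row_id, sort_key, other_column)
ROWS = [(1, "b", "x"), (2, "a", "y"), (3, "c", "z")]

def two_pass_sort(rows):
    """Algorithm 1: put only (sort_key, row_id) in the sort buffer,
    then fetch the remaining columns by row id after sorting."""
    by_id = {rid: other for rid, _, other in rows}
    pairs = sorted((key, rid) for rid, key, _ in rows)
    return [(rid, key, by_id[rid]) for key, rid in pairs]  # second read

def single_pass_sort(rows):
    """Algorithm 2: put all requested columns in the sort buffer and
    sort once; no second read, but more sort-buffer memory per row."""
    return sorted(rows, key=lambda r: r[1])

print(two_pass_sort(ROWS) == single_pass_sort(ROWS))  # True
```

Both produce the same ordering; the trade-off is the extra read pass versus the larger sort-buffer footprint.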

An explanation of the information MySQL's EXPLAIN presents:
id: the serial number of the query in the execution plan chosen by the query optimizer;
select_type: the type of query used, mainly the following:
◇DEPENDENT SUBQUERY: the first SELECT of the inner layer of a subquery, depending on the result set of the outer query;
◇DEPENDENT UNION: a UNION inside a subquery; all SELECTs from the second onward in the UNION also depend on the outer query's result set;
◇PRIMARY: the outermost query when subqueries are present (note: this does not mean a primary-key query);
◇SIMPLE: any query that contains no subquery or UNION;
◇SUBQUERY: the first SELECT of a subquery's inner layer, whose result does not depend on the outer query's result set;
◇UNCACHEABLE SUBQUERY: a subquery whose result set cannot be cached;
◇UNION: all SELECTs from the second onward in a UNION statement; the first SELECT is PRIMARY;
◇UNION RESULT: the result of a UNION;
table: the name of the table in the database accessed in this step;
type: how the table is accessed, mainly the following types:
◇ALL: full table scan;
◇const: constant read; at most one row matches, and because it is a constant it only needs to be read once;
◇eq_ref: exactly one matching row, usually accessed through a primary key or unique key index;
◇fulltext: access through a full-text index;
◇index: full index scan;
◇index_merge: two (or more) indexes are used in the query; their results are merged before the table data is read;
◇index_subquery: the field (or combination of fields) returned by a subquery is covered by an index (or index combination), but not by a primary key or unique index;
◇range: index range scan;
◇ref: in a JOIN statement, the table is looked up through an index reference from the driving table;
◇ref_or_null: differs from ref only in adding a NULL-value lookup on top of the index reference lookup;
◇system: a system table with only one row of data;
◇unique_subquery: the field (or combination of fields) returned by a subquery is a primary key or unique constraint;
possible_keys: the indexes the query could use. If there is no usable index, NULL is shown; this is important for index tuning;
key: the index the MySQL query optimizer selected from possible_keys;
key_len: the length of the chosen index key;
ref: what the rows are filtered by (via key): a constant (const) or a column of another table (in a join);
rows: the number of result rows the MySQL query optimizer estimates from the statistics the system has collected;
Extra: additional details about each step of the query, mainly the following:
◇Distinct: looking for distinct values, so once MySQL finds the first matching row it stops searching for that value and moves on to the other values;
◇Full scan on NULL key: an optimization used in subqueries when a NULL value is encountered that cannot be reached through the index;
◇Impossible WHERE noticed after reading const tables: from the statistics collected for the const tables, the MySQL query optimizer determined that no results are possible;
◇No tables: the query uses FROM DUAL or contains no FROM clause at all;
◇Not exists: an optimization in which the MySQL query optimizer restructures certain LEFT JOIN queries to partially reduce the number of data accesses;
◇Range checked for each record (index map: N): as the official MySQL manual describes it, when the MySQL query optimizer finds no good index to use, but finds that once the column values from the preceding tables are known some index might be partially usable, then for each row combination from the preceding tables it checks whether a range or index_merge access method can be used to retrieve the rows;
◇Select tables optimized away: when certain aggregate functions, such as MIN() or MAX(), access an indexed field, the MySQL query optimizer can locate the needed row directly through the index in one step and complete the whole query; the precondition is that the query contains no GROUP BY operation;
◇Using filesort: the query contains an ORDER BY that cannot be completed using an index, so the MySQL query optimizer has to choose an appropriate sorting algorithm to implement it;
◇Using index: the needed data can be obtained entirely from the index, without going to the table to fetch data;
◇Using index for group-by: data access is the same as Using index, where the needed data only requires reading the index; when the query has a GROUP BY or DISTINCT clause and the grouping fields are also in the index, Extra shows Using index for group-by;
◇Using temporary: MySQL must use a temporary table for some operation; most common with GROUP BY and ORDER BY;
◇Using where: appears when we are not reading all the rows of the table, or when the needed data cannot be obtained purely through the index;
◇Using where with pushed condition: appears only with the NDB Cluster storage engine, and only when the condition pushdown optimization feature is enabled; the controlling parameter is engine_condition_pushdown.

What is a loose index scan?
A: When MySQL uses an index scan to implement GROUP BY, it does not always need to scan all the index keys that match the conditions to complete the operation; a scan that can skip within the index is called a loose index scan.
To implement GROUP BY with a loose index scan, at least the following conditions must be met:
The GROUP BY fields must be in contiguous leading positions of the same index;
While GROUP BY is used, only the MIN and MAX aggregate functions can be used;
If the index references fields other than the GROUP BY fields, those fields must appear as constants;

Why is a loose index scan highly efficient?
A: Because when there is no WHERE clause, which would otherwise mean a full index scan, the number of keys a loose index scan needs to read equals the number of groups, which is usually far smaller than the number of keys that actually exist. When the WHERE clause contains range or equality expressions, a loose index scan finds the first key of each group that satisfies the range conditions, and again reads the minimum possible number of keys.

What is a compact index scan?
A: A compact index scan differs from a loose index scan in that it needs to read all the index keys that match the conditions while scanning, and then completes the GROUP BY operation on the data it has read.
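The loose-versus-compact difference can be modelled with a toy ordered index in Python. This is an illustrative simulation (not MySQL internals): for GROUP BY with MIN(), the loose scan reads one key per group, while the compact scan reads every matching key:

```python
from itertools import groupby

# Toy index entries, already in index order: (group_key, value)
INDEX = [("a", 1), ("a", 2), ("a", 3), ("b", 4), ("b", 5), ("c", 6)]

def loose_scan_min(index):
    """Loose index scan: jump to the first entry of each group, so the
    number of keys read equals the number of groups."""
    reads = 0
    result = {}
    for key, grp in groupby(index, key=lambda e: e[0]):
        first = next(iter(grp))   # read only the first key of the group
        reads += 1
        result[key] = first[1]    # MIN(value), since the index is ordered
    return result, reads

def compact_scan_min(index):
    """Compact index scan: read every matching key, then group."""
    result = {}
    for key, value in index:      # every entry is read
        result.setdefault(key, value)
    return result, len(index)

print(loose_scan_min(INDEX))    # ({'a': 1, 'b': 4, 'c': 6}, 3)
print(compact_scan_min(INDEX))  # ({'a': 1, 'b': 4, 'c': 6}, 6)
```

Same answer, half the key reads here; the gap grows with the number of rows per group.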

For MySQL's handling of GROUP BY, there are two optimization ideas:
1. Let MySQL use an index to complete the GROUP BY operation wherever possible; a loose index scan is of course the best case. Where the system permits, we can achieve this either by adjusting the indexes or by adjusting the query;

2. When an index cannot be used to complete GROUP BY, a temporary table and filesort are needed, so we must have a large enough sort_buffer_size for MySQL's sorting, and we should try not to run GROUP BY on large result sets: once the temporary table exceeds its size limit and its data goes to disk, the performance of the sorting and grouping operation drops by an order of magnitude;

DISTINCT works on a similar principle to GROUP BY and can also use a loose index scan.

MySQL schema design optimization notes:
1. Moderate redundancy
2. Vertical splitting of large fields
3. Horizontal splitting of large tables

Time field types: TIMESTAMP occupies 4 bytes, DATETIME occupies 8 bytes, and DATE occupies 3 bytes; but TIMESTAMP can only record times from 1970 onward, while DATETIME and DATE can start from the year 1000.
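The 1970 floor follows from TIMESTAMP storing seconds since the Unix epoch in 4 bytes, which also gives it a ceiling in January 2038. A quick check of both bounds in Python:

```python
from datetime import datetime, timezone

# A signed 32-bit second counter, as a 4-byte TIMESTAMP uses, runs out in 2038.
MAX_32BIT_SECONDS = 2**31 - 1

limit = datetime.fromtimestamp(MAX_32BIT_SECONDS, tz=timezone.utc)
print(limit.year)  # 2038

def fits_in_timestamp(dt):
    """Check whether a UTC datetime is representable as a 4-byte TIMESTAMP."""
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    seconds = (dt - epoch).total_seconds()
    return 0 <= seconds <= MAX_32BIT_SECONDS

print(fits_in_timestamp(datetime(2040, 1, 1, tzinfo=timezone.utc)))  # False
```

DATETIME avoids both limits because it stores the calendar value itself rather than an epoch offset.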

MySQL binlog optimization:

Binlog-related parameters and optimization strategy
First, let's look at the binlog-related parameters, obtained by executing the command below. It also shows Innodb_locks_unsafe_for_binlog, the InnoDB storage engine's own binlog-related parameter:
mysql> SHOW VARIABLES LIKE '%binlog%';
+--------------------------------+------------+
| variable_name | Value |
+--------------------------------+------------+
| Binlog_cache_size | 1048576 |
| Innodb_locks_unsafe_for_binlog | Off |
| Max_binlog_cache_size | 4294967295 |
| Max_binlog_size | 1073741824 |
| Sync_binlog | 0 |
+--------------------------------+------------+
binlog_cache_size: the size of the cache that holds a transaction's binary-log SQL statements. The binary log cache is memory allocated to each client, provided the server supports a transactional storage engine and the binary log is enabled (the --log-bin option); note that each client can be allocated a binlog cache of the set size. If your system often runs multi-statement transactions, you can try increasing this value for better performance. We can use two MySQL status variables to judge the current binlog_cache_size situation: Binlog_cache_use and Binlog_cache_disk_use.

max_binlog_cache_size: corresponds to binlog_cache_size, but represents the maximum cache memory the binlog may use. When a transaction needs more than max_binlog_cache_size allows, the system reports the error "Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage".
max_binlog_size: the maximum binlog file size, generally set to 512M or 1G, and it should not exceed 1G. This size does not control the binlog file size very strictly: when the binlog is near its end and a large transaction arrives, the system, to guarantee the transaction's integrity, cannot switch log files; it can only write all of the transaction's SQL into the current log until the transaction ends. This differs somewhat from Oracle's redo log, because Oracle's redo log records physical positions in the data files along with redo and undo information, so whether one transaction stays in a single log file is not critical for Oracle. What MySQL records in the binlog is the database's logical change information, which MySQL calls events: essentially the DML and other query statements that change the database.

sync_binlog: this parameter is critical for the MySQL system; it affects not only the performance cost of the binlog but also the integrity of data in MySQL. Its settings are described below:
sync_binlog=0: when a transaction commits, MySQL issues no fsync-style disk-synchronization instruction to flush the binlog_cache contents to disk; it lets the filesystem decide when to synchronize, or synchronizes once the cache is full.
sync_binlog=n: after every n transaction commits, MySQL issues an fsync-style disk-synchronization instruction that forces the binlog_cache data to disk. MySQL's default is sync_binlog=0, i.e. no forced disk flushing at all, which gives the best performance but the greatest risk: once the system crashes, all binlog information in binlog_cache is lost. Setting it to 1 is the safest but costs the most performance, because then even a system crash loses at most the one unfinished transaction in binlog_cache, with no material effect on actual data. From past experience and related testing, for systems with highly concurrent transactions, the write performance gap between sync_binlog=0 and sync_binlog=1 can be fivefold or more.
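The Binlog_cache_use / Binlog_cache_disk_use pair mentioned above can be turned into a simple spill ratio: how often a transaction's binlog events overflowed binlog_cache_size and went to a temporary file on disk. A sketch with hypothetical counter values:

```python
def binlog_cache_disk_ratio(cache_use, cache_disk_use):
    """Fraction of transactions whose binlog events overflowed
    binlog_cache_size and spilled to a temporary disk file.

    cache_use mirrors Binlog_cache_use (transactions that used the cache),
    cache_disk_use mirrors Binlog_cache_disk_use (those that spilled).
    """
    return cache_disk_use / cache_use if cache_use else 0.0

# Hypothetical values; a persistently high ratio suggests raising
# binlog_cache_size (within max_binlog_cache_size).
print(binlog_cache_disk_ratio(cache_use=10000, cache_disk_use=250))  # 0.025
```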

Negative impacts of the MySQL query cache:
(a) The resource cost of hashing the query statement and performing the hash lookup. With the query cache enabled, every SELECT-type query that reaches MySQL must first be hashed and then looked up in the cache. Although the hash algorithm is efficient and the lookup has been optimized enough that a single query consumes only a tiny amount of resources, when we handle thousands of queries per second the resulting CPU consumption can no longer be completely ignored.
(b) Query cache invalidation. If tables change frequently, the query cache becomes very inefficient. Table changes here mean not only changes to the data in the table, but any change to the structure or indexes. In other words, every result we cache in the query cache may soon be wiped out because data in the underlying tables changed, after which a new identical query cannot use the previous cache.
(c) The query cache caches result sets, not data pages, which means the same record may be cached multiple times, over-consuming memory. Of course, one can limit the query cache's size; we can indeed do so, but then cache entries are easily evicted for lack of memory, lowering the hit rate.
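The hash-keyed caching and table-change invalidation described above can be sketched as a toy model in Python. This is purely illustrative of the mechanism, not MySQL's implementation; class and method names are made up:

```python
from hashlib import sha1

class QueryCacheSketch:
    """Toy model of the query cache: result sets keyed by a hash of the
    query text, all invalidated whenever a referenced table changes."""

    def __init__(self):
        self.cache = {}      # query hash -> result set
        self.by_table = {}   # table name -> set of query hashes using it

    def put(self, query, tables, result):
        h = sha1(query.encode()).hexdigest()
        self.cache[h] = result
        for t in tables:
            self.by_table.setdefault(t, set()).add(h)

    def get(self, query):
        return self.cache.get(sha1(query.encode()).hexdigest())

    def invalidate_table(self, table):
        """Any write to the table throws away every cached query using it."""
        for h in self.by_table.pop(table, set()):
            self.cache.pop(h, None)

qc = QueryCacheSketch()
qc.put("SELECT * FROM t1", ["t1"], [(1,)])
print(qc.get("SELECT * FROM t1"))  # [(1,)]
qc.invalidate_table("t1")          # simulate any write to t1
print(qc.get("SELECT * FROM t1"))  # None
```

The model makes point (b) concrete: one write to a table discards every cached result that touched it, however expensive those results were to compute.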

In applications using short connections, thread_cache_size should be set relatively large; it should not be smaller than the application's actual number of concurrent requests to the database.

Through analysis of the settings and the current status, we may find that the thread_cache_size setting is large enough, even far beyond what the system needs, and can reduce it appropriately, for example to 8 or 16. From the two status values Connections and Threads_created we can also calculate the thread cache hit rate for new connections, that is, the proportion of connections served by threads from the thread cache out of the total connections the system received:
Threads_cache_hit = (Connections - Threads_created) / Connections * 100%
Generally speaking, after the system has run stably for a while, our thread cache hit rate should stay at around 90% or even higher. In the environment above, the thread cache hit rate is basically normal.
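The hit-rate formula above is straightforward to compute from the two counters. A sketch in Python with hypothetical counter values:

```python
def thread_cache_hit(connections, threads_created):
    """Threads_cache_hit = (Connections - Threads_created) / Connections * 100,
    i.e. the percentage of new connections served from the thread cache."""
    if connections == 0:
        return 0.0
    return (connections - threads_created) / connections * 100

# Hypothetical values as read from SHOW STATUS
rate = thread_cache_hit(connections=127, threads_created=6)
print(f"{rate:.1f}%")  # roughly 95%, comfortably above the 90% guideline
```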

How to view the number of tables MySQL has open:
mysql> SHOW STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   | 6     |
+---------------+-------+

MySQL buffer considerations
join_buffer_size and sort_buffer_size are per-thread buffer sizes, not buffers shared by the whole system.

Suppose MySQL runs on a dedicated host with 8GB of total physical memory and a maximum of 500 connections, and also uses the MyISAM storage engine. How should the overall memory be allocated?
Memory allocation is as follows:
(a) system use: assume 800MB is reserved;
(b) thread-exclusive memory: approximately 2GB = 500 × (1MB + 1MB + 1MB + 512KB + 512KB), composed roughly as follows:
sort_buffer_size: 1MB
join_buffer_size: 1MB
read_buffer_size: 1MB
read_rnd_buffer_size: 512KB
thread_stack: 512KB
(c) MyISAM key cache: assume about 1.5GB;
(d) maximum memory available for the InnoDB buffer pool: 8GB - 800MB - 2GB - 1.5GB = 3.7GB;
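The budget above is simple arithmetic, and is worth scripting when you vary max_connections or the per-thread buffers. A sketch in Python using the same figures (note the exact result is slightly above 3.7GB because 500 × 4MB is a bit less than the 2GB the rounded calculation uses):

```python
MB, GB = 1024**2, 1024**3

def innodb_buffer_budget(total, os_reserved, per_thread, max_connections, key_cache):
    """Memory left for the buffer pool after the OS reservation,
    the per-connection buffers, and the MyISAM key cache."""
    thread_total = per_thread * max_connections
    return total - os_reserved - thread_total - key_cache

# sort_buffer + join_buffer + read_buffer + read_rnd_buffer + thread_stack
per_thread = 1 * MB + 1 * MB + 1 * MB + 512 * 1024 + 512 * 1024  # 4MB

budget = innodb_buffer_budget(8 * GB, 800 * MB, per_thread, 500, int(1.5 * GB))
print(round(budget / GB, 2))  # 3.77 -- the document rounds this to 3.7GB
```

This assumes every connection actually allocates its full buffers at once, which is a worst case; in practice the buffers are allocated on demand.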
