MySQL Database Notes and Interview Questions


Database-related

1. InnoDB logs

InnoDB has several kinds of logs, and two concepts need to be clearly distinguished: logical logs and physical logs.

  • 1.1 Logical logs
    Logs that record operation information are logical logs.
    For example, for an insert, the undo logical log has roughly this format:
    <Ti, Qj, delete, U>, where Ti is the transaction id, Qj is the unique identifier of the operated data item, and U is the undo information.

    The undo log always records the inverse operation:
    1) An insert operation records a delete logical log.
    2) A delete operation records an insert logical log.
    3) An update operation records the opposite update, which changes the row back to its pre-modification state.

  • 1.2 Physical logs
    Logs that record new-value and old-value information are called physical logs, e.g. <Ti, Qj, V>, where V is the value written to data item Qj.

    Binary logs are typical logical logs, while transaction logs (redo logs) are physical logs. What are their differences?

  • 1.3 Log types
    Error log: records error, warning, and informational messages.
    Query log: records all database requests, whether or not they execute correctly.
    Slow query log: given a configured threshold, records all SQL statements whose running time exceeds that threshold.
    Binary log: records all operations that change the database.
    Other types include the relay log and the transaction log.

  • 1.4 Conclusions
    1. The redo log ensures the atomicity and durability of transactions (physical log).
    2. The undo log ensures transaction consistency; InnoDB's MVCC (multi-version concurrency control) is also implemented using the undo log (logical log).
    3. The redo log contains checkpoints for efficient data recovery.
    4. Physical logs record page-level modification details, while logical logs record operation statements. Recovery from physical logs is faster than from logical logs.

2. Principles of transactions

The role of a transaction: A transaction converts a database from a consistent state to another consistent state.

The transaction mechanism is generally summarized by the ACID properties: atomicity (A), consistency (C), isolation (I), and durability (D).

2.1 Transaction isolation is implemented by storage engine locks

Database transactions may suffer from problems such as dirty reads, non-repeatable reads, and phantom reads.
1) Dirty read: a transaction's uncommitted modifications are seen by other transactions.
2) Non-repeatable read: two identical SQL statements in the same transaction may read different content, because changes committed by other transactions between the two reads make the data inconsistent.
3) Phantom read: the same transaction suddenly sees rows it did not see before. It is similar to a non-repeatable read, but is caused by newly inserted rows rather than modified rows.

InnoDB provides four isolation levels to control how data is isolated between transactions.
Unlike MyISAM, which uses table-level locks, InnoDB uses finer-grained row-level locks to improve concurrency. InnoDB locks are implemented by locking index entries: if the query condition uses the primary key, the primary key is locked; if it uses a secondary index, the corresponding index entries are locked first and then the corresponding primary key entries (which may cause deadlocks); if no index can be used, the entire table is locked.

Four isolation levels (a sketch follows this list):
1) READ UNCOMMITTED
Modifications in a transaction are visible to other transactions even before they are committed, allowing dirty reads.
2) READ COMMITTED
A transaction can only "see" changes made by already-committed transactions. This level still allows non-repeatable reads.
3) REPEATABLE READ
This level guarantees that reading the same records multiple times within one transaction yields consistent results. In theory, however, it still cannot prevent phantom reads.
Phantom read: while one transaction reads records within a range, another transaction inserts new records into that range; when the first transaction reads the range again, phantom rows appear.
Phantom reads are normally solved only by a higher isolation level, but MySQL differs from other databases here: it also solves this problem at the REPEATABLE READ level.
MySQL's REPEATABLE READ level therefore solves both non-repeatable reads and phantom reads.
Oracle, by contrast, may need the SERIALIZABLE isolation level to solve the phantom read problem.
MySQL's default isolation level is REPEATABLE READ.
4) SERIALIZABLE
Forcing transactions to execute serially avoids all three problems: dirty reads, non-repeatable reads, and phantom reads.
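As a minimal sketch, the current level can be inspected and changed per session (the variable is named transaction_isolation in MySQL 8.0; older versions use tx_isolation):

    SELECT @@transaction_isolation;   -- show the session's current level
    SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;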

2.2 Implementation of atomicity and Durability

The transaction log is also called the redo log; it ensures the atomicity and durability of transactions.
Redo replays the page modifications of committed transactions. The redo log is a physical log: it records physical modification operations on pages.

When committing a transaction, two things actually happen:
I. The InnoDB storage engine writes the transaction to the log buffer, and the log buffer flushes the transaction to the transaction log on disk.
II. The InnoDB storage engine writes the transaction's modified pages to the buffer pool.

A question arises here: the transaction log is also written to disk, so why does it not require the doublewrite technique?
Because the transaction log block size equals the disk sector size (512 bytes), writes to the transaction log are atomic by themselves, and doublewrite is unnecessary.

The redo log buffer is composed of 512-byte log blocks. Each log block consists of a log header (12 bytes), log content (492 bytes), and a log tail (8 bytes).
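A sketch of inspecting the related server settings (both are standard MySQL system variables):

    SHOW VARIABLES LIKE 'innodb_log_buffer_size';         -- size of the redo log buffer
    SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit'; -- when the buffer is flushed to the log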

2.3 Implementation of consistency

The undo log ensures transaction consistency: undo rolls a row record back to a specific version. The undo log is a logical log, recorded per row.
Undo logs are stored in undo segments inside the database, and the undo segments live in the shared tablespace.
Undo restores the database only logically to its original state.

In addition to rollback, undo logs implement MVCC (multi-version concurrency control): when a row being read is locked by another transaction, the row is restored to a previous version through undo, implementing non-locking reads.

The MyISAM engine does not support transactions; the InnoDB and BDB engines do.
3. What is the use of indexes?
  • Purpose: an index is a disk structure associated with a table or view that speeds up row retrieval. An index contains keys built from one or more columns of the table or view. These keys are stored in a structure (a B-tree) that lets the database find the rows associated with a key value quickly and efficiently.

  • Well-designed indexes reduce disk I/O operations and consume fewer system resources, thus improving query performance.

  • In general, you should create indexes on columns such as these (a sketch follows this list):
    Columns that are frequently searched, to speed up searches;
    The primary key column, to enforce uniqueness and organize the data arrangement of the table;
    Columns frequently used in joins, mostly foreign keys, to speed up joins;
    Columns frequently searched by range, because an index is sorted and the specified range is contiguous;
    Columns that frequently need sorting, because the index's own ordering speeds up sort queries;
    Columns that frequently appear in WHERE clauses, to speed up condition evaluation.
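    A hypothetical table illustrating these guidelines (all names are made up):

    CREATE TABLE orders (
      id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- unique, organizes the table
      customer_id INT UNSIGNED NOT NULL,   -- foreign-key column used in joins
      created_at  DATETIME NOT NULL,       -- range-searched and sorted column
      INDEX idx_customer (customer_id),
      INDEX idx_created  (created_at)
    ) ENGINE=InnoDB;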

  • Disadvantages of indexes:
    First, creating and maintaining indexes takes time, and this time grows with the data volume.
    Second, indexes occupy physical space beyond the data table itself; a clustered index requires even more space.
    Third, when rows are added, deleted, or modified, the indexes must be maintained dynamically, which slows down data maintenance.

4. Database Optimization
  • Temporary tables are created in the following situations (temporary tables cost performance; see the sketch after this list):
    1. If the GROUP BY column has no index, an internal temporary table is created.
    2. If ORDER BY uses different columns than GROUP BY, or in a multi-table join where the GROUP BY columns are not from the first table, a temporary table is created.
    3. When DISTINCT and ORDER BY are used together, a temporary table may be created.
    4. If SQL_SMALL_RESULT is used, MySQL uses an in-memory temporary table, unless something in the query forces a temporary table onto disk.
    5. UNION merge queries use temporary tables.
    6. Some views use temporary tables, for example views created with TEMPTABLE or containing UNION or aggregation. To check whether a query uses a temporary table, run EXPLAIN and look for "Using temporary" in the Extra column.
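    A sketch of that check, assuming a hypothetical table t with an unindexed status column:

    EXPLAIN SELECT status, COUNT(*) FROM t GROUP BY status;
    -- If the Extra column shows "Using temporary", an internal temporary table is used.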

  • Table design: split the table structure so that the core fields all have fixed-length types such as INT, CHAR, and ENUM, and move non-core fields, or TEXT and very long VARCHAR fields, into a separate table.
    Index design: reasonable indexes reduce the number of internal temporary tables.
    Statement writing: unreasonable statements cause large data transfers and the use of internal temporary tables.

  • Table optimization and column type selection
    Table optimization (a schema sketch follows this list):
    1: Separate fixed-length and variable-length fields.
    For example, id INT occupies 4 bytes and CHAR(4) occupies 4 characters; both are fixed length, as are the time types: each value occupies a fixed number of bytes.
    Core, frequently used fields should be fixed length and placed in one table.
    VARCHAR, TEXT, BLOB, and other variable-length fields are better placed in a separate table, associated with the core table by the primary key.
    2: Separate frequently used fields from infrequently used fields.
    Analyze the site's actual business and each field's query patterns, and split out the fields with low query frequency.
    3: Add redundant fields where appropriate.
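    A hypothetical vertical split along these lines (table and column names are made up):

    -- Core table: fixed-length, frequently used fields only
    CREATE TABLE article (
      id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      author_id  INT UNSIGNED NOT NULL,
      created_at DATETIME NOT NULL
    ) ENGINE=InnoDB;

    -- Companion table: variable-length body, joined by the same id
    CREATE TABLE article_body (
      article_id INT UNSIGNED NOT NULL PRIMARY KEY,
      body       TEXT NOT NULL
    ) ENGINE=InnoDB;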

  • Column selection principles:
    1: Field type priority: integer > date/time > enum/char > varchar > blob.

    Column characteristics:
    Integer: fixed length, no locale and no character set differences.
    Date/time: fixed length, fast to compute, and space-saving; but considering time zones, conditions like WHERE ... > '2017-10-12' are less convenient to write.
    Enum: stored internally as an integer, but needs a lookup against its value set. Varchar is variable length and must consider character set conversion and collation when sorting, so it is slower; compared with char, it also stores a length prefix, which costs one extra step. Text/blob cannot use in-memory temporary tables.

    Appendix: on choosing date/time types, see the clear discussion at http://www.xaprb.com/blog/2014/01/30/timestamps-in-mysql/

    2: Don't over-allocate (e.g. smallint vs int, varchar(N)).
    Reason: oversized fields waste memory and hurt speed.
    Take age as an example: TINYINT UNSIGNED NOT NULL can store up to 255, which is enough; using INT wastes three bytes. Likewise, varchar(10) and varchar(300) store the same content, but varchar(300) consumes more memory during table join queries.

    3: Avoid NULL whenever possible.
    Reason: NULL is not index-friendly; it requires special marker bytes, and each row occupies one extra byte of disk space.

    Notes on ENUM columns
    1: An enum column is stored as an integer.
    2: Joining enum columns to enum columns is fastest.
    3: Joining enum columns to (var)char columns is weaker, because a conversion is needed, which takes time.
    4: The advantage is that where a char value would be long, enum stays a fixed-length integer; the larger the queried data volume, the more obvious enum's advantage.
    5: Joining enum to char/varchar is slower than enum-to-enum or char-to-char because of the conversion, but it is still sometimes used: when the data volume is very large, it saves IO.

  • SQL statement optimization (a sketch follows this list)
    1) Try to avoid the != and <> operators in the WHERE clause; otherwise the engine abandons the index and does a full table scan.
    2) Try to avoid NULL checks on fields in the WHERE clause; otherwise the engine abandons the index and does a full table scan, for example:
    SELECT id FROM t WHERE num IS NULL
    Instead, give num a default value of 0, make sure the num column contains no NULLs, and query like this:
    SELECT id FROM t WHERE num = 0
    3) Using EXISTS instead of IN is often a good choice.
    4) Prefer the WHERE clause over the HAVING clause, because HAVING filters the result set only after all records are retrieved.
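    A sketch of item 3, using two hypothetical tables a and b:

    SELECT num FROM a WHERE num IN (SELECT num FROM b);
    -- can often be rewritten as:
    SELECT num FROM a WHERE EXISTS (SELECT 1 FROM b WHERE b.num = a.num);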

  • Meaning of each item displayed by EXPLAIN:
    select_type
    The type of each SELECT clause in the query.
    type
    How MySQL finds the required rows in the table, also known as the "access type".
    possible_keys
    Which indexes MySQL could use to find rows in the table. If an index exists on a field involved in the query, it is listed, but it is not necessarily used.
    key
    The index MySQL actually uses in the query. NULL if no index is used.
    key_len
    The number of bytes used in the index; this column lets you calculate the length of the index used in the query.
    ref
    The join matching condition: which columns or constants are used to look up values in the index column.
    Extra
    Additional information that does not fit in the other columns but is very important.

  • Meaning and application scenarios of profiles:
    Profiling shows an SQL statement's execution time, along with CPU/memory usage, system locks, table locks, and so on. A sketch follows.
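    A sketch of the classic profiling commands (SHOW PROFILE is deprecated in recent MySQL versions in favor of the Performance Schema, but it still illustrates the idea; the query is hypothetical):

    SET profiling = 1;
    SELECT COUNT(*) FROM t;
    SHOW PROFILES;                            -- recent statements with durations
    SHOW PROFILE CPU, BLOCK IO FOR QUERY 1;   -- resource breakdown for query 1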
Index optimization policy
  • 1 Index type

    1.1 B-tree indexes
    Note: although they are all called "btree" indexes, they are broadly balanced trees, and implementations differ slightly between engines. Strictly speaking, the NDB engine uses T-trees, while MyISAM and InnoDB use B-trees by default. Abstractly, a btree index can be understood as a "sorted structure for fast lookup".

    1.2 hash Index
    In MEMORY tables, the default index type is hash. The theoretical time complexity of a hash lookup is O(1). A sketch follows the Q&A below.

    Q: Since hash lookup is so efficient, why not use hash indexes everywhere?
    A:
    1) The result of the hash function is random: if data is placed on disk by hash, then, for example with primary key id, rows with adjacent ids end up scattered randomly across the disk.
    2) Range queries cannot be optimized.
    3) Left-prefix lookups cannot be used. For example, in a B-tree, an index on a column with value 'helloworld' naturally also serves the left-prefix lookup xx = 'hello'; but hash('helloworld') and hash('hello') have no relationship, so a hash index cannot do this.
    4) Sorting cannot be optimized.
    5) Rows must always be fetched: the data location obtained through the hash index must be followed back to the table to retrieve the data.
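    A sketch of an explicit hash index on a MEMORY table (the table and columns are made up; the USING HASH syntax is standard MySQL):

    CREATE TABLE session_cache (
      token   CHAR(32) NOT NULL,
      user_id INT UNSIGNED NOT NULL,
      INDEX USING HASH (token)   -- the default for MEMORY tables anyway
    ) ENGINE=MEMORY;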

  • 2. Common Mistakes in btree Indexes

    2.1 "Add an index to each column used in the WHERE condition"
    For example, WHERE cat_id = 3 AND price > 100 (query items in category 3 priced above 100 yuan).
    Mistake: adding separate indexes on cat_id and on price.
    Why it fails: only one of the independent indexes, cat_id or price, can be used, because a single table access generally uses only one index.

    2.2 "After a multi-column index is created, it helps whichever column you query."
    Incorrect: a multi-column index is used only when the query satisfies the leftmost-prefix requirement.

  • Which index a query uses after a multi-column index is created (a worked sketch follows this list):

    For ease of understanding, imagine columns A, B, and C each as a plank, and the river as 30 meters wide.
    A fully used column index is a 10-meter plank;
    a LIKE left-prefix or a range condition is only a 6-meter plank.
    Splice the planks yourself to judge whether the index carries you across the river.
    In the example WHERE a = 3 AND b > 10 AND c = 7:
    Plank A is 10 meters long: the index on column a plays its role.
    Plank B connects normally to plank A, so the index on column b also plays its role; but b > 10 is a range, so plank B is only 6 meters.
    Plank B falls short, so plank C cannot connect: the index on column c plays no role.
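    The same example in SQL, assuming a hypothetical table goods with a composite index:

    ALTER TABLE goods ADD INDEX idx_abc (a, b, c);
    -- WHERE a = 3 AND b > 10 AND c = 7:
    --   a: used (equality on the leftmost column)
    --   b: used (range on the next column)
    --   c: not used (the range on b cuts off the prefix)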

Index application examples:

  • In InnoDB, the primary index file stores the row data directly; this is called a clustered index. Secondary indexes store references to the primary key.
    In MyISAM, both the primary index and secondary indexes point to the physical row (its disk location).

    Notes on InnoDB:
    1: The primary key index stores both the index value and the row data in its leaves.
    2: If there is no primary key, a unique key is used as the primary key.
    3: If there is no unique key either, the system generates an internal rowid as the primary key.
    4: Because the primary key index structure stores both the primary key value and the row data, this structure is called a "clustered index".

  • Clustered Index

    Advantage: a lookup by primary key needs no extra row fetch (the data sits under the primary key node).
    Disadvantage: inserting out-of-order values causes frequent page splits.
    The primary key of a clustered index should be a continuously increasing value rather than a random one (do not use random strings or UUIDs); otherwise, many page splits and page moves occur.

  • High-performance index Policy

    For InnoDB, because row data lives under the nodes, node splits are slow.
    For an InnoDB primary key, prefer an integer, and an incrementing integer.
    Inserting out-of-order values causes page splits and hurts speed.

  • Index coverage:

    Index covering means that the queried columns are all part of the index, so the query is answered from the index file alone without going back to the table on disk for row data. Such queries are very fast; this is called a "covering index". A sketch follows.
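    A sketch, reusing the hypothetical idx_abc (a, b, c) index from above:

    EXPLAIN SELECT a, b FROM goods WHERE a = 3;
    -- Extra shows "Using index": the query is answered by the index alone.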

  • Ideal Index

    1: High query frequency; 2: high selectivity; 3: small length; 4: covers common query fields as much as possible.

    Notes:
    The index length directly affects the index file size, the speed of inserts, deletes, and updates, and (through memory usage) the query speed. For a column's values, take a left prefix of some length to build the index:
    1: The shorter the prefix, the more duplicates, the lower the selectivity, and the worse the index works.
    2: The longer the prefix, the fewer duplicates and the higher the selectivity, but the bigger the cost: slower inserts, deletes, and updates, which indirectly slows queries.

    So we need to strike a balance between selectivity and length.
    Conventional method: try prefixes of different lengths and measure their selectivity:
    SELECT COUNT(DISTINCT LEFT(word, 6)) / COUNT(*) FROM dict;

    For typical applications, index performance is acceptable once the selectivity reaches 0.1.
    Indexing tricks for columns whose left prefix is not selective, such as URL columns:
    http://www.baidu.com
    http://www.zixue.it
    The first 11 characters of every value are identical and not selective. Two techniques solve this:
    1: Store the column content reversed and index that:
    moc.udiab.www//:ptth
    ti.euxiz.www//:ptth
    Now the left prefix is highly selective.
    2: Pseudo-hash index:
    store an additional url_hash column alongside and index it (a sketch follows).
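    A sketch of the pseudo-hash technique, assuming a hypothetical links table (CRC32 is a built-in MySQL function):

    ALTER TABLE links ADD COLUMN url_crc INT UNSIGNED NOT NULL DEFAULT 0,
                      ADD INDEX idx_url_crc (url_crc);
    UPDATE links SET url_crc = CRC32(url);
    -- Query by the short integer hash, re-checking the full url to handle collisions:
    SELECT * FROM links
     WHERE url_crc = CRC32('http://www.baidu.com')
       AND url = 'http://www.baidu.com';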

    Multi-column indexes: when designing a multi-column index, consider both the query frequency of each column and its selectivity.

  • Index and sorting

    Sorted results can be produced in two ways:
    1: With a covering index, the query reads directly from the index, which is already ordered (Using index).
    2: Otherwise, the rows are extracted first into a temporary table, where a filesort happens ("file sort", although the sort may actually take place in memory or on disk).

    Our goal is to get the data already in order: sort by index.

  • Duplicate and redundant Indexes

    Duplicate index: multiple indexes created on the same column (such as age) or on the same columns in the same order (age, school). Duplicate indexes do not help; they only enlarge the index file and slow down updates, so remove them.

    Redundant index: two indexes whose covered columns overlap.
    For example, with columns x and m: index x(x) and index xm(x, m).
    The x and xm indexes overlap on column x; they are called redundant indexes.
    You can even create index mx(m, x); mx and xm are not duplicates because the column order differs.

  • Index fragmentation and Maintenance

    Over long periods of data changes, both index files and data files develop holes and become fragmented.
    We can repair a table with a no-op operation (one that has no substantive effect on the data).
    For example, if the table engine is InnoDB, run ALTER TABLE xxx ENGINE=InnoDB;
    You can also run OPTIMIZE TABLE table_name.

    Note: repairing a table's data and index fragmentation reorganizes all of its data files to realign them.
    If the table has many rows, this is a very resource-intensive operation, so it must not be run too often.

    If a table is updated frequently, repair it weekly or monthly.
    If not, repair it on a longer cycle.

Database-related interview questions

1. Differences between DROP, DELETE, and TRUNCATE

DROP deletes the table itself. TRUNCATE deletes all table data, and subsequent inserts restart the auto-increment id from 1. DELETE also deletes table data, and a WHERE clause can be added. (A contrast sketch follows the list below.)
(1) The DELETE statement deletes one row from the table at a time and records each deletion in the log as part of a transaction, so it can be rolled back. TRUNCATE TABLE deletes all data from the table at once and does not log the individual row deletions, so deleted rows cannot be recovered. TRUNCATE does not activate the table's delete triggers, and it executes quickly.
(2) Space occupied by tables and indexes: after TRUNCATE, the space occupied by the table and its indexes is restored to the initial size, while DELETE does not reduce the space occupied by the table or indexes. DROP releases all space occupied by the table.
(3) In terms of speed, generally: DROP > TRUNCATE > DELETE.
(4) Scope of application: TRUNCATE can only operate on a TABLE; DELETE can operate on a table or a view.
(5) TRUNCATE and DELETE remove only data, while DROP removes the entire table (structure and data).
(6) TRUNCATE, and DELETE without a WHERE clause, delete only the data, not the table structure (definition). A DROP statement also removes the constraints, triggers, and indexes that depend on the table; stored procedures and functions that depend on the table are retained, but their status becomes invalid.
(7) DELETE is DML (data manipulation language); the operation goes into the rollback segment and takes effect only after the transaction commits. If there are corresponding triggers, they fire during execution.
(8) TRUNCATE and DROP are DDL (data definition language); the operation takes effect immediately, the original data is not kept in a rollback segment, and it cannot be rolled back.
(9) Without backups, use DROP and TRUNCATE with caution. To delete some rows, use DELETE with a WHERE clause to limit the scope, and make sure the rollback segment is large enough. To delete a table, use DROP. To keep the table but delete its data, use TRUNCATE if the operation is unrelated to transactions; if it is transaction-related, or you want triggers to fire, use DELETE.
(10) TRUNCATE TABLE is fast and efficient because:
TRUNCATE TABLE is functionally identical to a DELETE statement without a WHERE clause: both delete all rows in the table. But TRUNCATE TABLE is faster and uses fewer system and transaction log resources. DELETE removes one row at a time and records each row in the transaction log; TRUNCATE TABLE removes data by deallocating the data pages used to store it, and only the page deallocations are recorded in the transaction log.
(11) TRUNCATE TABLE deletes all rows in the table, but the table structure and its columns, constraints, and indexes remain unchanged. The counter used for new row ids is reset to the column's seed. To retain the id counter, use DELETE instead. To remove the table definition and its data, use DROP TABLE.
(12) For a table referenced by a FOREIGN KEY constraint, TRUNCATE TABLE cannot be used; use a DELETE statement without a WHERE clause instead. Because TRUNCATE TABLE is not logged row by row, it cannot activate triggers.
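A sketch contrasting the three statements on a hypothetical table t:

    DELETE FROM t WHERE id > 100;  -- DML: row by row, logged, rollback-able, fires triggers
    TRUNCATE TABLE t;              -- DDL: deallocates pages, resets AUTO_INCREMENT, no WHERE
    DROP TABLE t;                  -- DDL: removes both the data and the table definition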

2. Database normal forms

1. First normal form (1NF)

In any relational database, the first normal form (1NF) is the basic requirement of the relational model; a database that does not satisfy 1NF is not a relational database.
First normal form means that every column in a table is an indivisible basic data item: the same column cannot hold multiple values, i.e. an attribute of an entity cannot have multiple values or repeating attributes. If repeating attributes exist, you may need to define a new entity composed of them, with a one-to-many relationship to the original entity. In 1NF, each row of the table contains information about only one instance. In short, the first normal form means no repeating columns.

2. Second normal form (2NF)

The second normal form (2NF) builds on the first: satisfying 2NF requires satisfying 1NF first. 2NF requires that every instance or row in a table be uniquely distinguishable. To achieve this, a column is usually added to store a unique identifier for each instance; this column is called the primary key. 2NF further requires that the attributes of an entity fully depend on the primary key: no attribute may depend on only part of the primary key. If such an attribute exists, it and that part of the primary key should be separated into a new entity, with a one-to-many relationship to the original entity. In short, the second normal form means that non-key attributes are not partially dependent on the primary key.

3. Third normal form (3NF)

The third normal form (3NF) requires satisfying 2NF first. In short, 3NF requires that a table not contain non-key information that is already held in another table. For example, suppose there is a department table holding each department's id (dept_id), name, profile, and so on. Once the employee table lists the department id, the department name, profile, and other department attributes must not be added to the employee table as well. If the department table did not exist, it would likewise have to be built according to 3NF; otherwise there would be a great deal of data redundancy. In short, the third normal form means attributes do not depend on other non-key attributes. (My understanding: it eliminates redundancy.) A schema sketch follows.
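A sketch of that department/employee example (the column names are assumptions):

    CREATE TABLE department (
      dept_id   INT UNSIGNED NOT NULL PRIMARY KEY,
      dept_name VARCHAR(50)  NOT NULL,
      profile   VARCHAR(255) NOT NULL
    );

    CREATE TABLE employee (
      emp_id   INT UNSIGNED NOT NULL PRIMARY KEY,
      emp_name VARCHAR(50)  NOT NULL,
      dept_id  INT UNSIGNED NOT NULL  -- reference only; no dept_name duplicated here
    );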

3. MySQL replication principles and procedures

Basic flow: three threads and how they relate (a command sketch follows).
1. Master: the binlog thread records every statement that changes data into the binlog on the master.
2. Slave: the I/O thread, after START SLAVE is issued, pulls binlog content from the master and writes it into the slave's own relay log.
3. Slave: the SQL thread executes the statements in the relay log.
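A sketch of the slave-side commands behind this flow (the host, user, and log coordinates are placeholders):

    CHANGE MASTER TO
      MASTER_HOST='master.example.com',
      MASTER_USER='repl',
      MASTER_LOG_FILE='mysql-bin.000001',
      MASTER_LOG_POS=4;
    START SLAVE;           -- starts the I/O and SQL threads
    SHOW SLAVE STATUS\G    -- inspect both threads' progress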

4. Differences between myisam and innodb in MySQL, at least 5 points

1) InnoDB supports transactions, while MyISAM does not.
2) InnoDB supports row-level locks, while MyISAM supports table-level locks.
3) InnoDB supports MVCC, while MyISAM does not.
4) InnoDB supports foreign keys, while MyISAM does not.
5) InnoDB does not support full-text indexing, while MyISAM does.

5. Four features of the innodb Engine

Insert buffer, doublewrite, adaptive hash index (AHI), and read ahead.

6. MyISAM vs InnoDB: which runs SELECT COUNT(*) faster, and why?

MyISAM is faster, because MyISAM maintains an internal row counter that can be read directly (this applies to COUNT(*) without a WHERE clause).

7. Differences between varchar and char in MySQL and meanings of 50 in varchar (50)

(1) Difference between varchar and char
char is a fixed-length type, and varchar is a variable-length type.

(2) Meaning of 50 in varchar(50)
At most 50 characters can be stored. Storing 'hello' takes the same space in varchar(50) and varchar(200), but the latter consumes more memory during sorting, because ORDER BY col allocates by the fixed declared length (the same holds for the MEMORY engine).

(3) 20 in int(20) is the display width, not the storage size
The maximum display width is 255. It matters mainly with zero-padding: for example, for a record-id column, inserting values 1 through 10 displays them as 00000000001 through 00000000010 (padded to 11 digits by default); once a value exceeds the display width, it is shown in full. Without the zero-padding attribute, the width has little effect. int(20) still occupies 4 bytes of storage, and the storage range is unchanged.

(4) Why did MySQL design it this way?
For most applications it is meaningless, except for tools that use it to align displayed values; int(1) and int(20) are stored and computed identically.

8. Openness:

Table A has 600 million rows and table B has 300 million rows, joined through an external tid. How do you quickly fetch the 200 matching rows starting at offset 50,000?
1. If A's tid is auto-incrementing and continuous, and B's id is indexed:
SELECT * FROM a, b WHERE a.tid = b.id AND a.tid > 50000 LIMIT 200;

2. If A's tid is not continuous, use a covering index: tid must be either a primary key or a secondary index, and B's id must also be indexed:
SELECT * FROM b, (SELECT tid FROM a LIMIT 50000, 200) a WHERE b.id = a.tid;

 

9. Differences between the MySQL database engines MyISAM and InnoDB

(See question 4 above.)

10. How many triggers are allowed on a MySQL table?

Six triggers are allowed on a MySQL table, as follows (a sketch follows the list):
· BEFORE INSERT
· AFTER INSERT
· BEFORE UPDATE
· AFTER UPDATE
· BEFORE DELETE
· AFTER DELETE
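A sketch of one of these six slots, on hypothetical tables (valid MySQL; a single-statement body needs no BEGIN/END):

    CREATE TRIGGER orders_after_insert
    AFTER INSERT ON orders
    FOR EACH ROW
      UPDATE order_stats SET total = total + 1;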

 
