Database Basics Summary

Source: Internet
Author: User
Tags: joins, Microsoft SQL Server, types of tables, unique ID, MySQL, index

1 Database Basics

1. Data abstraction: physical abstraction, conceptual abstraction, view-level abstraction; internal schema, conceptual schema, external schema

The three levels of abstraction in a database system are:

    • View-level abstraction: abstracts the real world into the external model of the database. Information in the real world is abstracted, from each user's point of view, into multiple logical data structures; each such structure is called a view, describing the data that one user cares about, i.e. one facet of the database. The collection of all views forms the external schema of the database.
    • Conceptual-level abstraction: abstracts the external schemas into the conceptual schema of the database. All the views of the external schemas are integrated into one overall logical structure of the database, the conceptual schema, which is an abstraction of everything all users care about.
    • Physical-level abstraction: abstracts the conceptual schema into the internal schema of the database.

The three database schemas are the external schema, the conceptual schema, and the internal schema:

(1) Schema (conceptual schema)

Definition: also known as the logical schema, it is a description of the logical structure and characteristics of all the data in the database, and is the common data view shared by all users.

Key points:

① A database has only one schema;

② It is the view of the database's data at the logical level;

③ The database schema is based on a particular data model;

④ Defining the schema means defining not only the logical structure of the data (e.g. which data items make up a record, and each item's name, type, and value range), but also the security and integrity requirements on the data, and the relationships among the data.

(2) External schema

Definition: also known as the subschema or user schema, it is a description of the logical structure and characteristics of the local data that database users (both applications and end users) can see and use. It is the data view of a database user, and the logical representation of the data relevant to one application.

Key points:

① A database can have multiple external schemas;

② An external schema is a user view;

③ External schemas are a powerful means of ensuring data security.

(3) Internal schema

Definition: also known as the storage schema, it is a description of the physical structure and storage organization of the data, i.e. how the data is represented inside the database (for example: whether records are stored sequentially, in a B-tree structure, or by hashing; how indexes are organized; whether the data is compressed or encrypted; and the storage record structure of the data).

Key points:

① A database has only one internal schema;

② A table may consist of multiple files, e.g. data files and index files.

This three-level architecture is the means by which a database management system (DBMS) organizes and manages the data in a database effectively.

Its objectives are:

① to reduce data redundancy and enable data sharing;

② to improve access efficiency and performance.
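As a minimal illustration of the three levels, here is a sketch using SQLite through Python's sqlite3 (the table, view, and index names are invented for the example): a base table plays the role of the conceptual schema, a view is one external schema, and an index is an internal-schema detail.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Conceptual schema: the single logical description of all data in the database.
cur.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT, grade REAL)")

# External schema: a user-facing view exposing only the columns one user cares about.
cur.execute("CREATE VIEW student_names AS SELECT id, name FROM student")

# Internal schema: physical details such as how the data is indexed on disk.
cur.execute("CREATE INDEX idx_student_name ON student(name)")

cur.execute("INSERT INTO student VALUES (1, 'Alice', 92.5)")
rows = cur.execute("SELECT * FROM student_names").fetchall()
```

The user querying `student_names` never sees the `grade` column or the index; each level hides the one below it.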

For details, see: http://www.2cto.com/database/201412/360263.html

2. The SQL language includes data definition, data manipulation, and data control

    • Data definition: CREATE TABLE, ALTER TABLE, DROP TABLE, CREATE/DROP INDEX, etc.
    • Data manipulation: SELECT, INSERT, UPDATE, DELETE
    • Data control: GRANT, REVOKE

3. SQL Common commands:

CREATE TABLE student (
    id   NUMBER PRIMARY KEY,
    name VARCHAR2(20) NOT NULL   -- column length assumed; the original was garbled
);                               -- create table

CREATE VIEW view_name AS
    SELECT * FROM table_name;    -- create view

CREATE UNIQUE INDEX index_name ON tablename (col_name);   -- create index

INSERT INTO tablename (column1, column2, ...) VALUES (exp1, exp2, ...);   -- insert
INSERT INTO viewname (column1, column2, ...) VALUES (exp1, exp2, ...);    -- inserting through a view affects the base table

UPDATE tablename SET name = 'Zhang San' WHERE condition;   -- update
DELETE FROM tablename WHERE condition;                     -- delete

GRANT SELECT, DELETE, ... ON object TO user_name [WITH GRANT OPTION];   -- grant privileges
REVOKE privilege_list ON object FROM user_name;                         -- revoke privileges

List the names of the workers and their leaders:

SELECT e.name, s.name
FROM EMPLOYEE e, EMPLOYEE s
WHERE e.supername = s.name;
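The "workers and their leaders" query above is a self-join: the same table appears twice under two aliases. A runnable sketch using SQLite via Python's sqlite3 (the sample data is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employee (name TEXT, supername TEXT)")
cur.executemany("INSERT INTO employee VALUES (?, ?)",
                [("Alice", None), ("Bob", "Alice"), ("Carol", "Alice")])

# Self-join: pair each worker with the employee row of their leader.
pairs = cur.execute(
    "SELECT e.name, s.name FROM employee e, employee s "
    "WHERE e.supername = s.name ORDER BY e.name").fetchall()
```

Alice, who has no leader (NULL supername), does not appear on the worker side, because NULL never satisfies the join condition.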

4. View

A view is a table derived from one or more tables (or other views). A view differs from a table (in contrast to views, a table is sometimes called a base table) in that it is a virtual table: the view's data is not actually stored; only the view's definition is stored in the database. When the view's data is manipulated, the underlying base tables are manipulated according to that definition.

A database stores its data in tables; a table has physical storage space and is where the data really lives. You operate on the data by operating on the table.

A view, in contrast, is not physical: it reorganizes the data of some tables into a desired logical structure through database code.

A view does not create a new table; it merely reorganizes, at the logical level, the data already in the original tables into the structure we need, and it can then be operated on just like a table.
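A small SQLite sketch (via Python's sqlite3, with invented table names) showing that a view stores no data of its own and always reflects the current contents of its base table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
cur.execute("CREATE VIEW big_orders AS "
            "SELECT id, amount FROM orders WHERE amount > 100")
cur.execute("INSERT INTO orders VALUES (1, 50), (2, 250)")

# The view stores nothing itself: it re-runs its definition on each query.
visible = cur.execute("SELECT id FROM big_orders").fetchall()

cur.execute("UPDATE orders SET amount = 300 WHERE id = 1")
visible_after = cur.execute("SELECT id FROM big_orders ORDER BY id").fetchall()
```

Updating the base table immediately changes what the view shows, without touching the view itself.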

For details, see: http://www.w3school.com.cn/sql/sql_view.asp

5. Integrity constraints: entity integrity, referential integrity, user-defined integrity

There are three types of integrity constraint in the relational model: entity integrity, referential integrity, and user-defined integrity. Entity integrity rules constrain the values of the primary attributes (the primary key) of a relation, i.e. they restrict the domain of the primary attributes. Referential integrity rules define the constraints between a referencing relation and a referenced relation: they restrict the domain of the foreign-key attributes of the referencing relation, requiring each foreign-key value to be either null or an existing primary-key value of the referenced relation. User-defined integrity is a constraint specific to a particular database; it reflects the semantic requirements that the data in a particular application must satisfy, and is determined by the application environment. For example, a bank may require account numbers to be greater than or equal to 100000 and less than 999999. User-defined integrity is therefore typically a constraint on attributes other than the primary-key and foreign-key attributes, i.e. on the domains of the remaining attributes.

The entity integrity rule is: if an attribute (or set of attributes) A is a primary attribute of a basic relation R, then A cannot take a null value. A null value is a value that is "unknown" or "does not exist".

The entity integrity rules are described below:

    • (1) The entity integrity rule applies to basic relations. A basic table usually corresponds to an entity set in the real world.
    • (2) Entities in the real world are distinguishable, i.e. each has some unique identifier.
    • (3) Correspondingly, in the relational model the primary key serves as that unique identifier.
    • (4) The attributes in the primary key are the primary attributes and cannot be null. If a primary attribute were null, there would be an unidentifiable, indistinguishable entity, which contradicts point (2). That is why this rule is called entity integrity.

User-defined integrity constraints:

Depending on the application environment, different relational database systems often require special constraints. User-defined integrity is a constraint on a specific database; it reflects the semantic requirements that the data in a particular application must satisfy.
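All three kinds of constraint can be demonstrated in SQLite through Python's sqlite3 (the schema is invented for the example; note that SQLite needs PRAGMA foreign_keys = ON before it enforces referential integrity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite disables FK checks by default
cur = conn.cursor()
cur.execute("""CREATE TABLE account (
    acct_no INTEGER PRIMARY KEY
            CHECK (acct_no BETWEEN 100000 AND 999999),   -- user-defined integrity
    owner   TEXT NOT NULL)""")
cur.execute("""CREATE TABLE transfer (
    id      INTEGER PRIMARY KEY,
    acct_no INTEGER NOT NULL REFERENCES account(acct_no))  -- referential integrity""")
cur.execute("INSERT INTO account VALUES (100001, 'Alice')")

failed = []
for label, stmt in [
    ("entity",       "INSERT INTO account VALUES (100001, 'Bob')"),  # duplicate primary key
    ("user-defined", "INSERT INTO account VALUES (99, 'Carol')"),    # CHECK out of range
    ("referential",  "INSERT INTO transfer VALUES (1, 555555)"),     # no such account
]:
    try:
        cur.execute(stmt)
    except sqlite3.IntegrityError:
        failed.append(label)
```

Each of the three offending statements raises an IntegrityError, one per kind of constraint.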

6. The three normal forms

    • 1NF: every attribute is atomic (indivisible). The emphasis is on the atomicity of columns: a column cannot be split into several other columns.
    • 2NF: a relation R is in 2NF if it is in 1NF and every non-primary attribute is fully functionally dependent on the key of R. For example, SLC(sid#, courseid#, sname, grade) is not in 2NF, because sname depends on only part of the key. 2NF has two parts: the table must be in 1NF and have a primary key, and every column not in the primary key must depend on the whole primary key, not just part of it.
    • 3NF: R is in 3NF if it is in 2NF and no non-key attribute transitively depends on any candidate key. That is, every non-primary-key column must depend directly on the primary key, with no transitive dependency: it must not happen that non-key column A depends on non-key column B while B depends on the primary key.
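A sketch in Python of the 2NF decomposition hinted at by the SLC example: sname depends only on sid, part of the key (sid, courseid), so it is split out into its own table (the data is invented):

```python
# Unnormalized rows: sname depends only on sid, not on the full key (sid, courseid),
# so it is repeated once per course the student takes.
slc = [
    (1, "math", "Alice", 90),
    (1, "cs",   "Alice", 85),
    (2, "math", "Bob",   78),
]

# Decompose into 2NF: student(sid, sname) + score(sid, courseid, grade).
student = {sid: sname for sid, _, sname, _ in slc}
score = [(sid, cid, grade) for sid, cid, _, grade in slc]
```

After the split, each student's name is stored exactly once, eliminating the update anomaly.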

7. The ER (Entity-Relationship) model

    • Definition: the ER model, also called the entity-relationship model, is an important analysis model for database design.
    • Entity: an objective thing distinguishable in the application; a data-set object. An entity can be a person, a file, a course: anything that has its own attributes, a set of meaningful data.
    • Relationship: entities are not isolated; they are connected to each other. For example, students and courses are related, and the relationship itself carries attributes, such as the grade: the data generated by the interaction between two entities becomes the attributes of their relationship.

8. The role of indexes

Why create an index? Because creating an index can greatly improve the performance of the system.

    • First, a unique index guarantees the uniqueness of each row of data in a table.
    • Second, an index can greatly speed up data retrieval, which is the main reason for creating indexes.
    • Third, indexes can speed up joins between tables, which is particularly valuable for enforcing referential integrity.
    • Also, when queries use grouping and sorting clauses, an index can significantly reduce the time spent grouping and sorting.
    • Finally, with indexes in place, the query optimizer can exploit them during query processing to improve system performance.

One might ask: since indexes have so many advantages, why not create an index on every column of every table? The idea has some merit but is one-sided. Despite their many advantages, adding an index to every column is unwise, because indexes also have significant downsides.

    • First, creating and maintaining indexes takes time, and that time grows with the amount of data.
    • Second, indexes occupy physical space: besides the space for the table's data, each index takes additional space, and a clustered index takes even more.
    • Third, whenever data in the table is inserted, deleted, or modified, the indexes must be maintained as well, which slows down data maintenance.
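The retrieval speedup can be observed directly in SQLite (via Python's sqlite3): before an index exists, the planner does a full table scan; afterwards, an index search. Table and index names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER, val TEXT)")
cur.executemany("INSERT INTO t VALUES (?, ?)", [(i, str(i)) for i in range(1000)])

# Without an index the query plan is a full table scan.
plan_before = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE id = 500").fetchall()

cur.execute("CREATE INDEX idx_t_id ON t(id)")
plan_after = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE id = 500").fetchall()

scan = plan_before[0][-1]    # e.g. "SCAN t"
search = plan_after[0][-1]   # e.g. "SEARCH t USING INDEX idx_t_id (id=?)"
```

The plan text changes from a SCAN to a SEARCH using the new index, which is exactly the retrieval speedup described above.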

For details, see:

    • http://blog.csdn.net/pang040328/article/details/4164874
    • Http://www.cnblogs.com/huangye-dream/archive/2013/03/13/2957049.html

9. Transactions

A transaction is a series of database operations; it is the basic logical unit of a database application, and the basic unit of concurrency control. A transaction is a sequence of operations that are either all executed or none executed: an indivisible unit of work.

    • ① Atomicity: all elements of a transaction are committed or rolled back as a whole; the elements of a transaction are indivisible, and the transaction is one complete operation.
    • ② Consistency: when a transaction completes, the data must be consistent; that is, the stored data is in a consistent state both before the transaction begins and after it ends, and no data is lost or corrupted.
    • ③ Isolation: concurrent transactions that modify data are isolated from each other; a transaction must be independent and must not depend on or affect other transactions in any way.
    • ④ Durability: after a transaction completes, its effect on the system is permanent; the modification persists even if a system failure occurs, because the database has actually been modified.
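A sketch of atomicity using SQLite via Python's sqlite3: a transfer that violates a constraint is rolled back as a whole, leaving both balances untouched (the schema is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE account "
            "(name TEXT PRIMARY KEY, balance INTEGER CHECK (balance >= 0))")
cur.executemany("INSERT INTO account VALUES (?, ?)", [("A", 100), ("B", 0)])
conn.commit()

# A transfer is one transaction: both updates commit, or neither does.
try:
    cur.execute("UPDATE account SET balance = balance - 200 WHERE name = 'A'")
    cur.execute("UPDATE account SET balance = balance + 200 WHERE name = 'B'")
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()   # atomicity: any partial update is undone

balances = dict(cur.execute("SELECT name, balance FROM account").fetchall())
```

The debit would drive A's balance negative, the CHECK constraint rejects it, and the rollback leaves the accounts exactly as they were before the transfer began.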

10. Locks: shared locks and exclusive locks

    • Shared lock (S lock): if transaction T places a shared lock on data item A, other transactions can only place shared locks on A, not exclusive locks, until all shared locks have been released. A transaction holding a shared lock can only read the data, not modify it.
    • Exclusive lock (X lock): if transaction T places an exclusive lock on data item A, no other transaction can place any type of lock on A until the lock is released at the end of the transaction. The transaction holding the exclusive lock can both read and modify the data.

Two-phase locking protocol: phase 1 is the growing (locking) phase; phase 2 is the shrinking (unlocking) phase.
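A toy Python sketch of the two-phase locking rule (not a real lock manager): once a transaction releases its first lock it enters the shrinking phase and may not acquire any more locks.

```python
class Transaction:
    """Minimal two-phase locking sketch: all acquires precede all releases."""

    def __init__(self, name):
        self.name = name
        self.held = []
        self.shrinking = False   # once True, no more locks may be acquired

    def lock(self, resource):
        if self.shrinking:
            raise RuntimeError("2PL violated: cannot lock after first unlock")
        self.held.append(resource)

    def unlock(self, resource):
        self.shrinking = True    # entering phase 2
        self.held.remove(resource)

t = Transaction("T1")
t.lock("A"); t.lock("B")   # phase 1: growing
t.unlock("A")              # phase 2: shrinking
try:
    t.lock("C")            # illegal: growing after shrinking has begun
    violated = False
except RuntimeError:
    violated = True
```

Enforcing this two-phase discipline for every transaction is what guarantees serializable schedules.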

For details, see: http://www.cnblogs.com/ggjucheng/archive/2012/11/14/2770445.html

11. Deadlock and its handling: when transactions wait for each other's data locks in a cycle, a deadlock occurs.

    • ① Deadlock prevention: use a deadlock-prevention protocol that destroys one of the necessary conditions for deadlock, so the system never enters a deadlocked state.
    • ② Deadlock detection and recovery: allow the system to enter a deadlocked state, but periodically check whether a deadlock exists. When one is found, use an appropriate recovery mechanism to free the system from the deadlock.

12. Stored procedures: a stored procedure is a precompiled collection of SQL statements.

A stored procedure (Stored Procedure) is, in a large database system, a set of SQL statements that accomplishes a specific function. It is stored in the database and, after being compiled on first use, does not need to be recompiled on later calls; the user executes it by giving its name and its parameters (if it has any). Stored procedures are important database objects.

The difference between a stored procedure and a function:

    • In general, stored procedures implement somewhat more complex functionality, while functions implement more specific, focused functionality.
    • A stored procedure can return values through output parameters, while a function can only return a value or a table object.
    • A stored procedure is usually executed as an independent unit, while a function can be called as part of a query statement; since a function can return a table object, it can appear after the FROM keyword in a query.
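SQLite has no stored procedures, but Python's sqlite3 can register a user-defined scalar function, which then behaves like the SQL functions described above: callable anywhere an expression may appear in a query (the names here are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE price (item TEXT, cents INTEGER)")
cur.executemany("INSERT INTO price VALUES (?, ?)",
                [("pen", 150), ("book", 1299)])

# Register a user-defined scalar function; like a SQL function, it is
# usable inside a query expression.
conn.create_function("to_dollars", 1, lambda c: c / 100)
rows = cur.execute(
    "SELECT item, to_dollars(cents) FROM price ORDER BY item").fetchall()
```

The function is invoked once per row as part of the SELECT, which is precisely the "function as part of a query statement" behavior a stored procedure cannot offer.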

Advantages of stored procedures:

    • Stored procedures greatly enhance the functionality and flexibility of the SQL language.
    • They help ensure data security and integrity.
    • Through stored procedures, users without direct permissions can access the database indirectly and under control, guaranteeing data security.
    • Stored procedures allow related operations to happen together, maintaining the integrity of the database.
    • Before a stored procedure runs, the database has already parsed and analyzed it and produced an optimized execution plan. This precompilation greatly improves the performance of the SQL statements: a stored procedure is compiled once, kept as part of the database, can be called repeatedly, and runs fast and efficiently.
    • They reduce network traffic, since the procedure runs mainly on the server, reducing pressure on the client.
    • Operational procedures embodying business rules can be placed on the database server for centralized control.
    • Stored procedures can be divided into system stored procedures, extended stored procedures, and user-defined stored procedures.
    • Stored procedures can accept parameters, return output parameters, return single or multiple result sets, and return values; they can return an error reason to the caller. They can contain program flow, logic, and queries against the database, and can encapsulate and hide data logic.

13. Triggers: when a trigger's condition is met, the system automatically executes the trigger body.

Trigger time: BEFORE or AFTER. Trigger events: INSERT, UPDATE, and DELETE. Trigger types: row-level triggers and statement-level triggers.

14. What are the differences between inner joins and outer joins?

An inner join returns only the rows of the two tables that satisfy the join condition; an outer join does not have this restriction.

In an outer join, rows that do not satisfy the join condition are also displayed: only the rows of one of the tables are restricted, not those of the other. There are three kinds: left outer join, right outer join, and full outer join.
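The difference can be seen in a few lines of SQLite via Python's sqlite3 (the tables are invented): the inner join drops the employee with no department, while the left outer join keeps him with NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE dept (id INTEGER, dname TEXT)")
cur.execute("CREATE TABLE emp (name TEXT, dept_id INTEGER)")
cur.executemany("INSERT INTO dept VALUES (?, ?)", [(1, "IT"), (2, "HR")])
cur.executemany("INSERT INTO emp VALUES (?, ?)", [("Alice", 1), ("Bob", None)])

# Inner join: only rows satisfying the join condition in BOTH tables.
inner = cur.execute("SELECT e.name, d.dname FROM emp e "
                    "JOIN dept d ON e.dept_id = d.id").fetchall()

# Left outer join: every row of emp is kept; missing dept columns become NULL.
left = cur.execute("SELECT e.name, d.dname FROM emp e "
                   "LEFT JOIN dept d ON e.dept_id = d.id "
                   "ORDER BY e.name").fetchall()
```

Bob, who has no department, vanishes from the inner join but survives the left outer join with a NULL department name.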

15. Differences between stored procedures and functions

A stored procedure is a user-defined collection of SQL statements that accomplishes a task against particular tables or other objects, and a user can call it by name. A function is usually a method defined by the database that takes parameters and returns a value of some type, and does not usually involve a particular user table.

16. What is a transaction?

A transaction is a series of operations performed as a single logical unit of work. To qualify as a transaction, a logical unit of work must have the four so-called ACID properties: atomicity, consistency, isolation, and durability.

    • Atomicity: a transaction must be an atomic unit of work; either all of its data modifications are performed, or none of them are.
    • Consistency: when a transaction completes, all data must be left in a consistent state. In a relational database, all rules must be applied to the transaction's modifications to maintain the integrity of all data, and all internal data structures, such as B-tree indexes or doubly linked lists, must be correct at the end of the transaction.
    • Isolation: modifications made by concurrent transactions must be isolated from modifications made by any other concurrent transaction. A transaction sees data either in the state it was in before another concurrent transaction modified it, or after that transaction completed; it never sees an intermediate state. This is called serializability, because it is possible to reload the starting data and replay the transactions in sequence to reach the same final state as the original concurrent execution.
    • Durability: after a transaction completes, its effect on the system is permanent; the modification persists even if a system failure occurs.

17. What is the role of cursors? How do I know that the cursor is at the end?

A cursor is used to position within the rows of a result set. By checking the global variable @@FETCH_STATUS you can tell whether the end has been reached: a non-zero value indicates an error or that the cursor has moved past the last row.
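@@FETCH_STATUS is specific to SQL Server; in Python's DB-API the analogous end-of-cursor signal is fetchone() returning None, sketched here with sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (n INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

# Row-by-row iteration; the end is signalled by fetchone() returning None
# (the Python analogue of checking @@FETCH_STATUS after each FETCH).
cur.execute("SELECT n FROM t ORDER BY n")
seen = []
while True:
    row = cur.fetchone()
    if row is None:        # cursor has moved past the last row
        break
    seen.append(row[0])
```

The loop structure mirrors the T-SQL FETCH ... WHILE @@FETCH_STATUS = 0 idiom.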

18. Triggers are divided into pre-triggers (BEFORE) and post-triggers (AFTER). How do they differ? And what is the difference between a statement-level trigger and a row-level trigger?

A pre-trigger (BEFORE) runs before the triggering event occurs, and a post-trigger (AFTER) runs after it. A pre-trigger can usually see both the old and the new field values.
A statement-level trigger fires once per triggering statement (before or after it executes), while a row-level trigger fires once for each row affected by the statement.
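A runnable row-level AFTER INSERT trigger in SQLite (via Python's sqlite3; SQLite triggers are row-level). Inserting two rows fires the trigger body twice (the table names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
cur.execute("CREATE TABLE audit (order_id INTEGER)")

# An AFTER-INSERT, row-level trigger: its body runs once per inserted row.
cur.execute("""CREATE TRIGGER log_order AFTER INSERT ON orders
               FOR EACH ROW
               BEGIN
                   INSERT INTO audit VALUES (NEW.id);
               END""")

cur.execute("INSERT INTO orders VALUES (1, 9.5), (2, 20.0)")
audited = [r[0] for r in cur.execute("SELECT order_id FROM audit ORDER BY order_id")]
```

A single INSERT statement affecting two rows produces two audit entries, which is exactly the row-level firing behavior described above.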

2 MySQL

1. MySQL's storage engines and their differences

Data in MySQL is stored in files (or in memory) using a variety of different techniques. Each technique uses different storage mechanisms, indexing techniques, and locking levels, and ultimately offers different capabilities. By choosing the right technique, you can gain extra speed or functionality and improve your application overall.

    • MyISAM: the engine MySQL provided first. It comes in three variants: static MyISAM, dynamic MyISAM, and compressed MyISAM:
      • Static MyISAM: if every data column in the table has a fixed, predetermined length, the server automatically chooses this table type. Because every record occupies the same amount of space, such tables are efficient to access and update, and easier to recover when data is damaged.
      • Dynamic MyISAM: if the table contains varchar, xxxtext, or xxxblob fields, the server automatically chooses this type. Compared with static MyISAM, such tables take relatively little storage, but because records vary in length, repeated modifications can leave a record's data stored non-contiguously, lowering execution efficiency and producing a lot of fragmentation. Such tables therefore often need defragmenting with the OPTIMIZE TABLE command or an optimization tool.
      • Compressed MyISAM: either of the two types above can be compressed with the myisamchk tool. Compression further reduces the storage consumed, but a compressed table can no longer be modified; and because the data is compressed, rows must be decompressed when read.
        However, no variant of MyISAM currently supports transactions, row-level locking, or foreign key constraints.
    • MyISAM Merge engine: a variant of MyISAM. A merge table combines several structurally identical MyISAM tables into a single virtual table. Often used for logs and data warehouses.
    • InnoDB: the InnoDB table type can be seen as a further development beyond MyISAM; it provides transactions, row-level locking, and foreign key constraints.
    • Memory (Heap): tables of this type live only in RAM. They use hash indexes, so data access is very fast. Because they exist in memory, they are often used for temporary tables.
    • Archive: this type supports only SELECT and INSERT statements and does not support indexes. Often used for logging and aggregate analysis.

Of course, MySQL supports more than just these table types.

For details, see:

    • Http://www.cnblogs.com/lina1006/archive/2011/04/29/2032894.html
    • http://blog.csdn.net/zhangyuan19880606/article/details/51217952

2. Single-column indexes, composite indexes, and primary key indexes

An index is a special kind of file (in InnoDB, indexes are an integral part of the table space) containing reference pointers to all the records in the table. The sole task of an ordinary index (defined with the keyword KEY or INDEX) is to speed up access to the data.

Common MySQL indexes: primary key index, unique index, normal index, full-text index, composite index.

    • PRIMARY KEY (primary key index): ALTER TABLE table_name ADD PRIMARY KEY (column)
    • UNIQUE (unique index): ALTER TABLE table_name ADD UNIQUE (column)
    • INDEX (normal index): ALTER TABLE table_name ADD INDEX index_name (column)
    • FULLTEXT (full-text index): ALTER TABLE table_name ADD FULLTEXT (column)
    • Composite index: ALTER TABLE table_name ADD INDEX index_name (column1, column2, column3)

Differences among MySQL index types:

    • Normal index: the most basic index, with no restrictions.
    • Unique index: like a normal index, except the values of the indexed column must be unique; null values are allowed.
    • Primary key index: a special unique index that does not allow null values.
    • Full-text index: available only on MyISAM tables; generating a full-text index on a large table is time-consuming and space-consuming.
    • Composite index: for greater MySQL efficiency, you can create a composite index, which follows the "leftmost prefix" principle.
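The normal-vs-unique distinction can be demonstrated in SQLite through Python's sqlite3 (MySQL behaves the same way for these two index kinds; the names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (email TEXT, city TEXT)")
cur.execute("CREATE UNIQUE INDEX idx_email ON users(email)")  # unique: no repeats
cur.execute("CREATE INDEX idx_city ON users(city)")           # normal: duplicates allowed

cur.execute("INSERT INTO users VALUES ('a@x.com', 'Hangzhou')")
cur.execute("INSERT INTO users VALUES ('b@x.com', 'Hangzhou')")  # duplicate city is fine
try:
    cur.execute("INSERT INTO users VALUES ('a@x.com', 'Beijing')")  # duplicate email is not
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
count = cur.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

The normal index on city imposes no restriction, while the unique index on email rejects the duplicate row outright.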

3. How to shard tables in MySQL, and how to run conditional paged queries after sharding

When a table reaches millions of rows, searches take more and more time, and with joins involved a query may effectively never finish. The purpose of sharding tables is to reduce the burden on the database and shorten query time.

How to shard tables?

    • Build a MySQL cluster, e.g. using MySQL Cluster, MySQL Proxy, MySQL Replication, DRBD, etc.
    • Anticipate the tables with large data volumes and frequent access, and split them into a number of tables.
    • Use the MERGE storage engine to implement sharding.

How should conditional (paged) queries be handled after sharding?

    1. If the goal is only paging, shard by ID range with continuous IDs: for example the first table holds IDs 1 to 100,000, the second 100,001 to 200,000, and so on; paging then maps directly onto shards.
    2. Otherwise, build an index with Sphinx first and then run the paged query against it.
    3. Degrade the service: only let users query one year of data, or keep only, say, 10 million rows. Keep one table with the current year's data, and every month move the oldest data into the shard tables. If users need to query all the data, make them choose a year first.
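Option 1 (continuous ID ranges) makes shard routing and paging pure arithmetic. A sketch in Python, with an assumed shard size of 100,000 rows and invented table names:

```python
SHARD_SIZE = 100_000   # assumed rows per shard table

def shard_for(record_id: int) -> str:
    """Route a record to its table by ID range (table_0: 1-100000, table_1: ...)."""
    return f"table_{(record_id - 1) // SHARD_SIZE}"

def page_query(page: int, page_size: int = 20) -> str:
    """With contiguous ID ranges, a page maps to one shard plus an offset."""
    first_id = page * page_size + 1
    table = shard_for(first_id)
    offset = (first_id - 1) % SHARD_SIZE
    return f"SELECT * FROM {table} LIMIT {page_size} OFFSET {offset}"
```

Because the IDs are continuous, any page lands entirely in one shard (except at shard boundaries, which a fuller implementation would have to stitch together).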

For details, see: http://blog.51yip.com/mysql/949.html

4. After sharding, how to implement an ID that auto-increments across multiple tables efficiently

Primary keys spanning multiple tables cannot be supported by each table's own auto-increment primary key, because the different tables would generate duplicate primary keys. So other ways of obtaining a primary key ID are needed.

(1) Generate IDs from a dedicated MySQL table

Research on MySQL sharding mentions one method: for an INSERT, first obtain a unique ID from a table created specifically for ID generation, by inserting a record into it and reading back the last inserted ID.

This works, but under high concurrency MySQL's auto_increment can slow the whole database down. With an auto-increment field, MySQL maintains an auto-increment lock; InnoDB keeps a counter in memory to record the auto_increment value, and when a new row is inserted, a table-level lock protects the counter until the insert finishes.

(2) Generate ID via Redis

(3) Queue-based approach

Use a queue service, such as Redis or MemcacheQ, to pre-allocate a batch of IDs into a queue. Each insert operation first takes an ID from the queue; if the insert fails, the ID is put back into the queue. Meanwhile, monitor the queue length and automatically add more IDs when it falls below a threshold.
This way IDs can be allocated according to a plan, and it even has commercial uses, like QQ numbers and various premium IDs: for example, a site's user IDs can allow login by UID, with attractive IDs set aside and the ordinary IDs shuffled and assigned randomly.

(4) Oracle sequence: rely on a third-party Oracle database and obtain IDs with seq.nextval. Advantage: simple to use. Disadvantage: dependence on a third-party Oracle database.

(5) MySQL ID interval isolation: different shard databases use different starting values and step sizes. For example, with 2 MySQL instances, one can generate odd IDs and the other even IDs; or one uses the range 0 to 1 billion and the other 1 to 2 billion. Advantage: uses MySQL's own auto-increment IDs. Disadvantage: higher operations cost, and scaling out requires resetting the step sizes.

(6) Database update + in-memory allocation: maintain a single ID in the database; to get the next batch, run UPDATE ... SET id = id + 100 WHERE id = xx, reserving 100 IDs that are then handed out from memory. Advantage: simple and efficient. Disadvantage: strictly increasing order cannot be guaranteed.
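A minimal in-memory sketch of that segment approach (the database round-trip is simulated by a plain attribute; a real implementation would issue the UPDATE ... id = id + step statement):

```python
class SegmentAllocator:
    """Sketch of 'database update + in-memory allocation': one UPDATE
    id = id + step reserves a whole block of IDs, handed out from memory."""

    def __init__(self, step=100):
        self.step = step
        self.db_value = 0   # stands in for the single row kept in the database
        self.next_id = 1
        self.limit = 0      # no segment reserved yet

    def _reserve_segment(self):
        self.next_id = self.db_value + 1
        self.db_value += self.step   # the one database round-trip per segment
        self.limit = self.db_value

    def allocate(self):
        if self.next_id > self.limit:   # segment exhausted (or first call)
            self._reserve_segment()
        nid = self.next_id
        self.next_id += 1
        return nid

alloc = SegmentAllocator(step=3)
ids = [alloc.allocate() for _ in range(7)]
```

With a step of 3, seven allocations touch the "database" only three times; this is the efficiency gain, at the cost of IDs lost when a process restarts mid-segment.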

For details, see:

    • http://www.ttlsa.com/mysql/mysql-table-to-solve-the-increment-id-scheme/
    • http://blog.csdn.net/u010256841/article/details/56840743

5. MySQL master-slave real-time backup and synchronization configuration, the underlying principle (the slave reads the master's binlog), and read-write splitting.

For details, see:

    • Http://www.cnblogs.com/kyrin/p/5967619.html
    • http://blog.csdn.net/jobjava/article/details/41420417

6. Write SQL statements and SQL optimizations

For details, see:

    • Optimization Analysis of SQL statements
    • Optimizing SQL queries: How to write high-performance SQL statements

7. The data structure behind indexes: the B+ tree

See: Data structure and algorithm principle behind MySQL index

8. Database lock: Row lock, table lock, optimistic lock, pessimistic lock

Lock support by storage engine:

    Engine    Table lock    Row lock    Page lock
    MyISAM    yes           no          no
    BDB       yes           no          yes
    InnoDB    yes           yes         no
    • Table lock: low overhead, fast to acquire, no deadlocks; coarse granularity, highest probability of lock conflicts, lowest concurrency.
    • Row lock: high overhead, slow to acquire, deadlocks possible; finest granularity, lowest probability of lock conflicts, highest concurrency.
    • Page lock: overhead and locking speed between table and row locks; deadlocks possible; granularity between table and row locks; moderate concurrency.

Table locks are better suited to applications that are mostly queries with only a small amount of data updated by index conditions; row locks are better suited to applications with large numbers of concurrent updates to different rows by index, together with concurrent queries.

See: Database locks: Optimistic and pessimistic locks, shared and exclusive locks, row-level and table-level locks

9. Levels of database access-control granularity

Database access control granularity can be divided into 4 levels: database level, table level, record level (row level), and attribute level (field level).

See: Four Characteristics of database transactions

10. Differences between relational and non-relational databases

Relational databases use foreign-key associations to establish relationships between tables. Non-relational databases usually store data in the database as objects, and the relationships between objects are determined by each object's own attributes.

The current mainstream relational databases are Oracle, DB2, Microsoft SQL Server, Microsoft Access, MySQL, and more.

Non-relational databases include NoSQL stores such as Cloudant.

How does NoSQL compare with relational databases?

Advantages:

    • 1) Cost: NoSQL databases are simple to deploy and mostly open source; there is no need to spend heavily on licenses the way there is with Oracle, so they are cheaper than relational databases.
    • 2) Query speed: NoSQL databases typically keep data in an in-memory cache, while relational databases store data on disk, so NoSQL queries are naturally much faster.
    • 3) Storage formats: NoSQL supports key-value, document, picture, and other formats, so it can store basic types as well as objects or collections, while a relational database supports only basic types.
    • 4) Extensibility: relational databases are hard to scale out because of the limitations of mechanisms such as multi-table JOIN queries.

Disadvantages:

    • 1) Tools and information for maintenance are limited, because NoSQL is a new technology and cannot yet be mentioned in the same breath as relational database technology with its more than ten years of history.
    • 2) No SQL support: not supporting an industry standard like SQL imposes learning and usage costs on users.
    • 3) No transaction handling of the kind relational databases provide.

A typical example is the comparison between MySQL and MongoDB.

