MySQL row lock and table lock




A lock is a mechanism by which a computer coordinates concurrent access to a resource by multiple processes or threads. In a database, besides contention for traditional computing resources (CPU, RAM, I/O), the data itself is a resource shared by many users. Guaranteeing the consistency and validity of concurrent data access is one of the problems every database must solve, and lock conflicts are an important factor affecting the performance of concurrent database access. From this perspective, locks are both especially important and especially complex for databases.


Overview
Compared with other databases, MySQL's locking mechanism is relatively simple; its most notable feature is that different storage engines support different locking mechanisms. MySQL locks can be broadly divided into the following 3 types:
    • Table-level locks: low overhead, fast locking, no deadlocks; largest lock granularity, highest probability of lock conflicts, lowest concurrency.
    • Row-level locks: high overhead, slow locking, deadlocks possible; smallest lock granularity, lowest probability of lock conflicts, highest concurrency.
    • Page locks: overhead and locking time fall between table locks and row locks; deadlocks are possible; lock granularity falls between table locks and row locks, and concurrency is moderate.
----------------------------------------------------------------------
MySQL table-level locks: lock modes (MyISAM)
MySQL table-level locks have two modes: table shared read lock (Table Read Lock) and table exclusive write lock (Table Write Lock).
    • A read operation on a MyISAM table does not block other users' read requests to the same table, but it blocks write requests to the same table.
    • A write operation on a MyISAM table blocks other users' read and write operations on the same table.
    • Read and write operations on a MyISAM table, as well as write operations among themselves, are serialized.
When a thread obtains a write lock on a table, only the thread holding the lock can update the table; read and write operations from other threads wait until the lock is released. The compatibility of the two lock modes is shown in the following table.
Table lock compatibility in MySQL
Current lock mode \ Requested lock mode    None    Read lock    Write lock
Read lock                                  Yes     Yes          No
Write lock                                 Yes     No           No
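
As a minimal sketch of the matrix above (the MyISAM table t_myisam and its columns are hypothetical), two sessions show that a read lock lets other sessions read but blocks their writes:

-- Session 1: take a table read lock on a hypothetical MyISAM table
LOCK TABLES t_myisam READ;
SELECT COUNT(*) FROM t_myisam;              -- allowed: session 1 holds a read lock

-- Session 2 (run concurrently):
SELECT COUNT(*) FROM t_myisam;              -- returns immediately: read locks are compatible
UPDATE t_myisam SET val = 1 WHERE id = 1;   -- blocks until session 1 releases the lock

-- Session 1: release the lock, which lets session 2's UPDATE proceed
UNLOCK TABLES;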
It can be seen that a read operation on a MyISAM table does not block other users' read requests to the same table, but it blocks write requests to the same table; a write operation on a MyISAM table blocks other users' read and write requests to the same table. In other words, read and write operations, as well as write and write operations, on a MyISAM table are serialized: when a thread obtains a write lock on a table, only the thread holding the lock can update the table, and read and write operations from other threads wait until the lock is released.

How table locks are acquired
MyISAM automatically acquires read locks on all the tables involved before executing a query statement (SELECT), and automatically acquires write locks on the tables involved before executing an update operation (UPDATE, DELETE, INSERT, etc.). This process requires no user intervention, so users generally do not need to lock MyISAM tables explicitly with the LOCK TABLES command. The explicit locking in the examples here is basically for convenience only; it is not required. Explicitly locking MyISAM tables is generally done to simulate transactional behavior to some extent, that is, to obtain a consistent read of multiple tables at a single point in time. For example, suppose there is an order table, orders, which records the total amount of each order, and an order detail table, order_detail, which records the subtotal for each product in an order. If we need to check whether the totals in these two tables match, we may need to execute the following two SQL statements:

SELECT SUM(total) FROM orders;

SELECT SUM(subtotal) FROM order_detail;

At this point, if you do not lock the two tables first, an incorrect result can be produced, because the order_detail table may change while the first statement is executing. Therefore, the correct approach is:

LOCK TABLES orders READ LOCAL, order_detail READ LOCAL;

SELECT SUM(total) FROM orders;

SELECT SUM(subtotal) FROM order_detail;

UNLOCK TABLES;

Two points in particular need to be noted.
    • The example above adds the READ LOCAL option to LOCK TABLES. This option means that other users are still allowed to insert records at the end of the table while the lock is held, as long as MyISAM's concurrent-insert conditions are met.
    • When explicitly locking tables with LOCK TABLES, all locks involving the tables in question must be obtained at the same time, and MySQL does not support lock escalation. That is, after LOCK TABLES is executed, only the explicitly locked tables can be accessed; unlocked tables cannot be accessed. Also, if only a read lock was taken, only query operations can be performed, not updates. In fact, automatic locking works essentially the same way: MySQL obtains all the locks required by a SQL statement at once. This is why MyISAM tables are deadlock free, as sketched below.
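
A small sketch of the "lock everything at once" rule, and of the alias rule discussed in the next paragraph (the customers table and the total column filter are hypothetical):

-- All tables used afterwards must be locked in one statement.
LOCK TABLES orders READ, order_detail READ;
SELECT SUM(total) FROM orders;                          -- allowed: orders is locked
-- SELECT * FROM customers;                             -- would fail: customers was not locked
UNLOCK TABLES;

-- If later statements refer to a table through an alias, lock it under that alias as well.
LOCK TABLES orders READ, orders AS o READ;
SELECT o.total FROM orders AS o WHERE o.total > 100;    -- uses the locked alias o
SELECT SUM(total) FROM orders;                          -- uses the locked base name
UNLOCK TABLES;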
If a session uses the LOCK TABLES command to add a read lock to the table film_text, that session can query records in the locked table, but an update or an access to other tables produces an error; meanwhile, another session can still query records in the table, but its updates will wait on the lock. When using LOCK TABLES, you must not only lock all the tables that will be used at once; you must also lock the same table as many times as it appears in the SQL statements, under the same aliases the statements use, otherwise an error occurs.

Concurrent inserts
Under certain conditions, MyISAM also supports concurrent queries and inserts. The MyISAM storage engine has a system variable, concurrent_insert, that is specifically designed to control its concurrent-insert behavior; its value can be 0, 1, or 2 (see the sketch after the following list).
    • When concurrent_insert is set to 0, concurrent inserts are not allowed.
    • When concurrent_insert is set to 1, MyISAM allows one process to read a table while another process inserts records at the end of the table, provided the table has no holes (no rows deleted from the middle of the table). This is MySQL's default setting.
    • When concurrent_insert is set to 2, inserting records at the end of the table is allowed concurrently regardless of whether the MyISAM table has holes.
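
A minimal sketch of working with this variable (the table name t_log is hypothetical):

-- Check the current setting
SHOW VARIABLES LIKE 'concurrent_insert';

-- Always allow inserts at the end of MyISAM tables, even when holes exist
SET GLOBAL concurrent_insert = 2;

-- Periodically reclaim the holes left by deleted rows (hypothetical table t_log)
OPTIMIZE TABLE t_log;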
You can use the concurrent-insert feature of the MyISAM storage engine to reduce lock contention between queries and inserts on the same table in an application. For example, setting the concurrent_insert system variable to 2 always allows concurrent inserts, while periodically executing OPTIMIZE TABLE during system idle time reclaims the space fragments and the holes in the middle of the table left by deleted records.

MyISAM lock scheduling
As described earlier, read locks and write locks on a MyISAM table are mutually exclusive, and read and write operations are serialized. So, if one process requests a read lock on a MyISAM table and another process requests a write lock on the same table at the same time, how does MySQL handle it? The answer is that the write process gets the lock first. Moreover, even if the read request reaches the lock waiting queue first and the write request arrives later, the write lock is still inserted ahead of the read request, because MySQL considers write requests generally more important than read requests. This is also why MyISAM tables are not well suited for applications with a large number of both update and query operations: a large number of updates can make it difficult for queries to obtain read locks, possibly blocking them forever, and the situation can sometimes become very bad. Fortunately, we can adjust MyISAM's scheduling behavior with a few settings.
    • By starting the server with the low-priority-updates option, the MyISAM engine gives read requests priority by default.
    • By executing the command SET LOW_PRIORITY_UPDATES=1, the priority of update requests issued by the current connection is lowered.
    • By specifying the LOW_PRIORITY attribute on an INSERT, UPDATE, or DELETE statement, the priority of that statement is lowered. (See the sketch after this list.)
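
A minimal sketch of these knobs, plus the max_write_lock_count compromise discussed in the next paragraph (the table t_stats and its columns are hypothetical):

-- In my.cnf / my.ini, give read requests priority by default:
--   [mysqld]
--   low-priority-updates

-- Lower the priority of updates issued by the current connection only
SET LOW_PRIORITY_UPDATES = 1;

-- Lower the priority of a single statement (hypothetical table t_stats)
INSERT LOW_PRIORITY INTO t_stats (hits) VALUES (1);
UPDATE LOW_PRIORITY t_stats SET hits = hits + 1 WHERE id = 1;

-- After this many write locks on a table, give pending read requests a chance
SET GLOBAL max_write_lock_count = 16;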
Although the three methods above all simply favor either updates or queries, they can still be used to address severe read-lock waits in applications where queries are relatively important (such as a user login system). In addition, MySQL provides a compromise for adjusting read-write contention: setting the system parameter max_write_lock_count to a suitable value. After the write locks on a table have been granted this many times, MySQL temporarily lowers the priority of write requests, giving waiting read requests a chance to obtain the lock.

The write-priority scheduling mechanism and its workarounds have been discussed above. One more point should be emphasized: long-running query operations can also "starve" the writing processes. Applications should therefore try to avoid long-running queries and should not always try to solve a problem with a single SELECT statement, because such seemingly clever SQL is often complex and takes a long time to execute. Where possible, decompose the SQL by using intermediate tables and similar measures so that each step of the query completes in a short time, thereby reducing lock conflicts. If complex queries are unavoidable, try to schedule them during database idle periods; periodic statistics jobs, for example, can be scheduled to run at night.

----------------------------------------------------------------------
InnoDB lock problems
The two biggest differences between InnoDB and MyISAM are: first, InnoDB supports transactions (TRANSACTION); second, InnoDB uses row-level locks. There are many differences between row-level and table-level locks, and the introduction of transactions also brings some new problems.

1. Transactions and their ACID properties
A transaction is a logical processing unit consisting of a group of SQL statements. A transaction has four properties, usually referred to as the ACID properties of a transaction (a small sketch follows the list below).
    • Atomicity: a transaction is an atomic unit of work; either all of its modifications to the data are performed, or none of them are.
    • Consistency: data must be in a consistent state both when a transaction begins and when it completes. This means all relevant data rules must be applied to the transaction's modifications to preserve integrity, and at the end of the transaction all internal data structures (such as B-tree indexes or doubly linked lists) must also be correct.
    • Isolation: the database system provides an isolation mechanism to ensure that transactions execute in an "independent" environment unaffected by external concurrent operations. This means that intermediate states during a transaction are not visible to the outside, and vice versa.
    • Durability: after a transaction completes, its modifications to the data are permanent and are preserved even if a system failure occurs.
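
A minimal sketch of atomicity in practice, assuming a hypothetical InnoDB table account(id, balance): either both updates take effect or neither does.

START TRANSACTION;
UPDATE account SET balance = balance - 100 WHERE id = 1;
UPDATE account SET balance = balance + 100 WHERE id = 2;
COMMIT;        -- both updates become permanent together (durability)
-- or, if anything went wrong before the commit:
-- ROLLBACK;   -- and neither update takes effect (atomicity)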
2. Problems caused by concurrent transactions
Compared with serial processing, concurrent transaction processing can greatly increase the utilization of database resources and the transaction throughput of the database system, and so can support more users. However, concurrent transaction processing also brings some problems, mainly the following.
    • Lost update: when two or more transactions select the same row and then update it based on the value originally selected, a lost-update problem occurs because each transaction is unaware of the others: the last update overwrites the updates made by the other transactions. For example, two editors make electronic copies of the same document. Each editor independently changes its copy and then saves it, overwriting the original document; the editor who saves last overwrites the changes made by the other editor. This problem can be avoided if one editor cannot access the file until the other editor has finished and committed the transaction.
    • Dirty read: a transaction is modifying a record, and before the transaction completes and commits, the record's data is in an inconsistent state. If another transaction then reads the same record without any control, it reads this "dirty" data, and further processing based on it produces uncommitted data dependencies. This phenomenon is vividly called a "dirty read".
    • Non-repeatable read: at some point after reading certain data, a transaction reads the same data again and finds that it has changed, or that some records have been deleted. This phenomenon is called a "non-repeatable read" (a two-session sketch follows this list).
    • Phantom read: a transaction re-reads previously retrieved data using the same query conditions and finds that other transactions have inserted new rows satisfying its query conditions. This phenomenon is called a "phantom read".
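
A hypothetical two-session sketch of a non-repeatable read at the READ COMMITTED level (the table account(id, balance) is assumed):

-- Session A
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
SELECT balance FROM account WHERE id = 1;   -- returns, say, 100

-- Session B (runs while session A's transaction is still open)
UPDATE account SET balance = 200 WHERE id = 1;
COMMIT;

-- Session A, same transaction
SELECT balance FROM account WHERE id = 1;   -- now returns 200: a non-repeatable read
COMMIT;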
3. Transaction isolation levels
Among the problems caused by concurrent transactions, "lost updates" should usually be avoided entirely. However, preventing lost updates cannot be solved by the database transaction controller alone; the application must add the necessary locks to the data being updated, so preventing lost updates is the application's responsibility.

"Dirty reads", "non-repeatable reads" and "phantom reads" are all read-consistency problems, which must be solved by the database through a transaction isolation mechanism. Databases implement transaction isolation in basically two ways. One is to lock the data before it is read, preventing other transactions from modifying it. The other is to generate, without any locks, a consistent snapshot of the data as of the point in time of the request, and to use this snapshot to provide consistent reads at a certain level (statement level or transaction level). From the user's point of view, the database seems to provide multiple versions of the same data, so this technique is called multiversion concurrency control (MVCC or MCC), and such databases are often called multi-version databases.

The stricter a database's transaction isolation, the smaller the concurrency side effects, but the higher the cost, because transaction isolation essentially "serializes" transactions to some degree, which obviously contradicts "concurrency". At the same time, different applications have different requirements for read consistency and transaction isolation; for example, many applications are not sensitive to "non-repeatable reads" or "phantom reads" and may care more about the ability to access data concurrently.

To resolve the contradiction between "isolation" and "concurrency", ISO/ANSI SQL92 defines four transaction isolation levels with different degrees of isolation and different side effects, so that applications can balance "isolation" and "concurrency" by choosing an isolation level that suits their business logic. The four levels are compared in the following table.
Comparison of the four transaction isolation levels
Isolation level               Read data consistency                                        Dirty read    Non-repeatable read    Phantom read
Read uncommitted              Lowest level; only guarantees not reading physically         Yes           Yes                    Yes
                              corrupted data
Read committed                Statement level                                              No            Yes                    Yes
Repeatable read               Transaction level                                            No            No                     Yes
Serializable                  Highest level; transaction level                             No            No                     No
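
A minimal sketch of switching levels in MySQL; the variable used to inspect the level is an assumption about the server version (transaction_isolation on MySQL 5.7.20+/8.0, tx_isolation on older servers):

-- Set the level for the current session only
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Set the default level for subsequent new connections
SET GLOBAL TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- Check the current setting (use @@tx_isolation on older servers)
SELECT @@transaction_isolation;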
Finally, note that each specific database does not necessarily implement all four isolation levels above. For example, Oracle provides only the two standard levels Read Committed and Serializable, plus its own Read Only level. SQL Server, in addition to the four ISO/ANSI SQL92 levels, supports an isolation level called Snapshot, which strictly speaking is a Serializable level implemented with MVCC. MySQL supports all four isolation levels, but with some peculiarities in the implementation; for example, at some isolation levels it uses MVCC consistent reads, but in certain cases it does not.

Checking InnoDB row lock contention
You can analyze row lock contention on the system by examining the Innodb_row_lock status variables:
mysql> SHOW STATUS LIKE 'innodb_row_lock%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| Innodb_row_lock_current_waits | 0     |
| Innodb_row_lock_time          | 0     |
| Innodb_row_lock_time_avg      | 0     |
| Innodb_row_lock_time_max      | 0     |
| Innodb_row_lock_waits         | 0     |
+-------------------------------+-------+
5 rows in set (0.00 sec)
If contention turns out to be severe, for example if the values of Innodb_row_lock_waits and Innodb_row_lock_time_avg are high, you can also enable the InnoDB Monitors to further observe which tables and data rows the lock conflicts occur on, and analyze the reasons for the contention.

InnoDB lock modes and locking methods
InnoDB implements the following two types of row locks.
    • Shared lock (S): allows one transaction to read a row, and prevents other transactions from obtaining an exclusive lock on the same data set.
    • Exclusive lock (X): allows the transaction that obtains it to update the data, and prevents other transactions from obtaining shared read locks or exclusive write locks on the same data set.
In addition, to allow row locks and table locks to coexist and to implement a multi-granularity locking mechanism, InnoDB also has two internally used intention locks (Intention Locks), both of which are table-level locks. Intention shared lock (IS): a transaction intends to place shared locks on some data rows; it must obtain the IS lock on the table before placing a shared lock on a row. Intention exclusive lock (IX): a transaction intends to place exclusive locks on some data rows; it must obtain the IX lock on the table before placing an exclusive lock on a row. InnoDB row lock mode compatibility is listed below.
Current lock mode \ Requested lock mode    X           IX          S           IS
X                                          conflict    conflict    conflict    conflict
IX                                         conflict    compatible  conflict    compatible
S                                          conflict    conflict    compatible  compatible
IS                                         conflict    compatible  compatible  compatible
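
For readers who want to watch these locks being taken, MySQL 8.0 exposes them through performance_schema (on 5.7 the similar, now-removed information_schema.innodb_locks view served this purpose). A minimal sketch, assuming an 8.0 server:

-- Run in another session while a transaction elsewhere is holding locks
SELECT object_schema, object_name, lock_type, lock_mode, lock_status, lock_data
FROM performance_schema.data_locks;
-- Rows with lock_type = TABLE show the IS/IX intention locks;
-- rows with lock_type = RECORD show the S/X row locks themselves.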
If the lock mode requested by a transaction is compatible with the current lock, InnoDB grants the requested lock to the transaction; if the two are incompatible, the transaction waits until the lock is released.

Intention locks are added by InnoDB automatically and require no user intervention. For UPDATE, DELETE and INSERT statements, InnoDB automatically places exclusive locks (X) on the data set involved; for ordinary SELECT statements, InnoDB does not place any locks at all. A transaction can explicitly place shared or exclusive locks on a record set with the following statements.
Shared lock (S): SELECT * FROM table_name WHERE ... LOCK IN SHARE MODE
Exclusive lock (X): SELECT * FROM table_name WHERE ... FOR UPDATE

SELECT ... LOCK IN SHARE MODE places a shared lock and is mainly used when a data dependency requires confirming that a row exists and making sure nobody is updating or deleting it. However, if the current transaction also needs to update the record, this pattern can easily cause deadlocks; applications that need to update a row after locking it should use SELECT ... FOR UPDATE to obtain an exclusive lock instead.

InnoDB row lock implementation
InnoDB row locks are implemented by locking index entries on indexes, which differs from Oracle, where row locks are implemented by locking the corresponding data rows in the data blocks. A consequence of InnoDB's implementation is that InnoDB uses row-level locks only when data is retrieved through index conditions; otherwise InnoDB uses table locks! In practice, pay special attention to this characteristic of InnoDB row locks, otherwise it may result in a large number of lock conflicts that hurt concurrency.

Gap locks (Next-Key locks)
When we retrieve data with a range condition rather than an equality condition and request shared or exclusive locks, InnoDB locks the index entries of the existing records that match the condition; for key values that fall within the condition's range but do not exist, called "gaps", InnoDB locks these "gaps" as well. This is the so-called gap lock (Next-Key lock). For example, if the emp table contains only 101 records whose empid values are 1, 2, ..., 100, 101, then the statement
SELECT * FROM emp WHERE empid > 100 FOR UPDATE
is a range-condition retrieval: InnoDB not only locks the record that matches the condition (empid = 101), it also locks the "gap" for empid values greater than 101, even though such records do not exist.

InnoDB uses gap locks for two purposes. On one hand, they prevent phantom reads so that the relevant isolation-level requirements are met: in the example above, without gap locks, if another transaction inserted any record with empid greater than 100, a phantom read would occur when this transaction executed the same statement again. On the other hand, gap locks satisfy the needs of recovery and replication; the effect of recovery and replication on the locking mechanism, and InnoDB's use of gap locks at different isolation levels, are discussed separately. Clearly, when range conditions are used to retrieve and lock records, InnoDB's locking mechanism blocks concurrent insertion of key values that fall within the range, which often causes severe lock waits.
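
A hypothetical two-session sketch of the gap lock in the emp example above (REPEATABLE READ is assumed, and empid is assumed to be indexed):

-- Session A
START TRANSACTION;
SELECT * FROM emp WHERE empid > 100 FOR UPDATE;   -- locks empid = 101 and the gap above it

-- Session B (runs while session A's transaction is open)
INSERT INTO emp (empid) VALUES (102);             -- blocks: 102 falls in the locked gap
INSERT INTO emp (empid) VALUES (201);             -- also blocks, for the same reason

-- Session A
COMMIT;                                           -- session B's inserts can now proceed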
Therefore, in actual development, especially in applications with many concurrent inserts, we should try to optimize the business logic so that equality conditions are used to access and update data, avoiding range conditions wherever possible.

When to use table locks
For InnoDB tables, row-level locks should be used in the vast majority of cases, since transactions and row locks are usually precisely why we chose InnoDB. Nevertheless, in a few special kinds of transactions, table-level locks can also be considered.
    • The first case is a transaction that needs to update most or all of the data in a relatively large table. With the default row locks, not only does the transaction execute inefficiently, it may also cause other transactions to wait a long time for locks and produce lock conflicts; in this case a table lock can be considered to speed up the transaction.
    • The second case is a transaction that involves multiple tables and is relatively complex, and therefore likely to cause deadlocks and a large number of transaction rollbacks. Here it may also be worth locking all the tables involved in the transaction at once, thereby avoiding deadlocks and reducing the cost the database incurs from transaction rollbacks.
Of course, there should not be too many of these two kinds of transactions in an application; otherwise, MyISAM tables should be considered instead. Under InnoDB, the following two points should be noted when using table locks.

(1) Although LOCK TABLES can place table-level locks on InnoDB tables, it must be noted that the table locks are not managed by the InnoDB storage engine layer but by the MySQL Server layer above it. Only when autocommit=0 and innodb_table_locks=1 (the default setting) does the InnoDB layer know about the table locks added by MySQL, and only then can MySQL Server perceive the row locks added by InnoDB; in this case InnoDB can automatically detect deadlocks involving table-level locks. Otherwise, InnoDB cannot automatically detect and handle such deadlocks.

(2) When using LOCK TABLES on InnoDB tables, note that autocommit must be set to 0, otherwise MySQL does not lock the tables; also, do not call UNLOCK TABLES to release the table locks before the transaction ends, because UNLOCK TABLES implicitly commits the transaction. Conversely, COMMIT or ROLLBACK does not release table-level locks taken with LOCK TABLES; UNLOCK TABLES must be used to release them. The correct pattern is shown in the following statements. For example, if you need to write to table t1 and read from table t2, you can proceed as follows:

SET AUTOCOMMIT=0;

LOCK TABLES t1 WRITE, t2 READ, ...;

[do something with tables t1 and t2 here];

COMMIT;

UNLOCK TABLES;




About deadlocks
As described above, MyISAM table locks are deadlock free, because MyISAM always obtains all the locks it needs at once: either all of them are granted, or it waits. InnoDB, by contrast, acquires locks gradually (except in a transaction consisting of a single SQL statement), which makes deadlocks in InnoDB possible.

After a deadlock occurs, InnoDB generally detects it automatically, makes one transaction release its locks and roll back, and lets the other transaction obtain the locks and continue to completion. However, when external locks or table locks are involved, InnoDB cannot detect the deadlock automatically; this has to be handled by setting the lock wait timeout parameter innodb_lock_wait_timeout. Note that this parameter is not only for resolving deadlocks: under high concurrency, if a large number of transactions are suspended because they cannot immediately obtain the locks they need, this can consume a great deal of resources and cause serious performance problems, or even bring the database down. We can avoid this by setting an appropriate lock wait timeout threshold (a small sketch follows this section).

Generally speaking, deadlocks are an application design problem, and most of them can be avoided by adjusting business processes, database object design, transaction sizes, and the SQL statements that access the database. The following examples illustrate several common ways of dealing with deadlocks.

(1) In an application, if different programs access multiple tables concurrently, try to agree on accessing the tables in the same order; this greatly reduces the chance of deadlocks. If two sessions access two tables in different orders, the chance of deadlock is very high, whereas deadlocks can be avoided if they access the tables in the same order.

(2) When a program processes data in batches, sorting the data beforehand so that each thread processes the records in a fixed order also greatly reduces the likelihood of deadlocks.

(3) In a transaction, if you intend to update a record, request a sufficient lock level from the start, that is, an exclusive lock, rather than first requesting a shared lock and then requesting an exclusive lock when updating; by then another transaction may already hold a lock on the same row, and upgrading the lock can cause a deadlock.

(4) Under the REPEATABLE READ isolation level, if two threads both use SELECT ... FOR UPDATE to place exclusive locks on records matching the same condition, and no matching record exists, both threads will lock successfully. Each program then finds that the record does not yet exist and tries to insert a new one; if both threads do this, a deadlock occurs. In this case, changing the isolation level to READ COMMITTED avoids the problem.

(5) When the isolation level is READ COMMITTED, if two threads both first execute SELECT ... FOR UPDATE to check whether a matching record exists and insert it if it does not, then only one thread can insert successfully and the other ends up in a lock wait. When the first thread commits, the second thread fails with a duplicate-key error; but although it gets an error, it still obtains an exclusive lock. If a third thread then requests an exclusive lock, a deadlock occurs. In this case, either perform the insert directly and catch the duplicate-key exception, or always execute a ROLLBACK to release the exclusive lock obtained when a duplicate-key error is encountered.
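
A minimal sketch of the knobs mentioned above (the timeout value is only an example):

-- Wait at most 10 seconds for a row lock before returning an error
SET GLOBAL innodb_lock_wait_timeout = 10;

-- Inspect the most recently detected deadlock and current lock waits
-- (older servers used the SHOW INNODB STATUS form mentioned in the text)
SHOW ENGINE INNODB STATUS\G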
Although the design and optimization measures above can greatly reduce deadlocks, deadlocks are hard to avoid completely. Therefore, always catching and handling deadlock exceptions in application code is a good programming habit. If a deadlock occurs, you can use the SHOW INNODB STATUS command to determine the cause of the most recent deadlock and how to improve the situation.

--------------------------------------------------------------------------------
Summary
For MyISAM table locks, the main points are:
(1) Shared read locks (S) are compatible with each other, but shared read locks (S) and exclusive write locks (X), as well as exclusive write locks among themselves, are mutually exclusive; in other words, reads and writes are serialized.
(2) Under certain conditions, MyISAM allows queries and inserts to execute concurrently, which can be used to solve lock contention between queries and inserts on the same table in an application.
(3) MyISAM's default lock scheduling is write-first, which is not necessarily suitable for every application. Users can adjust the contention between read and write locks by setting the LOW_PRIORITY_UPDATES parameter, or by specifying the LOW_PRIORITY option on INSERT, UPDATE, and DELETE statements.
(4) Because table locks have a large lock granularity and reads and writes are serialized, a MyISAM table may suffer severe lock waits if there are many update operations; in that case, consider using InnoDB tables to reduce lock conflicts.

For InnoDB tables, the main points are:
(1) InnoDB's row locks are based on indexes; if data is not accessed through an index, InnoDB uses table locks instead.
(2) InnoDB has a gap lock mechanism, and there are specific reasons why InnoDB uses gap locks.
(3) Under different isolation levels, InnoDB's locking mechanism and consistent-read strategy differ.
(4) MySQL recovery and replication also have a significant influence on InnoDB's locking mechanism and consistent-read strategy.
(5) Lock conflicts and even deadlocks are difficult to avoid completely. Once the locking characteristics of InnoDB are understood, users can reduce lock conflicts and deadlocks through design and SQL tuning, including:
    • Use a lower isolation level where the application allows it.
    • Carefully design indexes and access data through indexes wherever possible, so that locks are more precise and the chance of lock conflicts is reduced.
    • Choose a reasonable transaction size; small transactions have a lower probability of lock conflicts.
    • When explicitly locking a record set, it is best to request a sufficient lock level at once. For example, if the data is to be modified, request an exclusive lock directly rather than first requesting a shared lock and then requesting an exclusive lock during the modification, which is prone to deadlock.
    • When different programs access a group of tables, try to agree on accessing the tables in the same order; for a single table, try to access its rows in a fixed order as well. This greatly reduces the chance of deadlock.
    • Try to access data with equality conditions to avoid the impact of gap locks on concurrent inserts.
    • Do not request a lock level higher than actually needed; unless necessary, do not explicitly lock during queries.
    • For certain specific transactions, table locks can be used to speed up processing or to reduce the likelihood of deadlocks.


Reprint: http://www.cnblogs.com/chenqionghe/p/4845693.html#3696935




