With the In-Memory component introduced in Oracle 12c, the Oracle database offers dual-format data storage that supports mixed workloads: traditional row-format storage serves OLTP applications, while column-format storage serves OLAP applications that are dominated by queries. The In-Memory component works together with the other database components and does not require applications to be developed or modified separately, which makes it very convenient for adding real-time analytics on top of the database. This article covers the following aspects of the In-Memory component:
- Basic knowledge of column-based storage
- Data consistency in the In-Memory area
- Accessing data in the In-Memory area
- Fusion of In-Memory and RAC
1. Basic knowledge of column-based storage
1.1 Memory structure
Traditional databases use row-based storage: when a transaction occurs, Oracle operates on one row (or several rows) of data, accessing the data row by row even when the query only needs a few of the columns. This way of storing data is ideal for DML-intensive OLTP applications and is very efficient there. In OLAP systems, however, queries over large amounts of data dominate, and these queries tend to touch only specific columns of a table. In addition, data changes arrive mostly in the form of data loads, meaning that data is rarely modified after it has been loaded into the database; for this access pattern, organizing the data by column is clearly the better choice. Because each storage format has its own strengths and weaknesses, neither one alone satisfies the needs of a database system serving hybrid applications. Oracle therefore introduced a dual-format storage method: on disk (that is, in the data files) and in the database buffer cache, data is stored as rows, while a separate piece of memory, the In-Memory area, holds the data in column format to satisfy OLAP-style queries. One of the main reasons Oracle chose to carve out a separate piece of memory for the column-format data is that OLAP is primarily query-based and its data changes mostly happen through data loads, so Oracle can rely on bulk loading to populate the In-Memory area and keep the data current. Next, we introduce the basics of the In-Memory component from two aspects: the memory structure and the data-loading process.
First, the In-Memory area is separate from the traditional SGA and PGA and consists of two parts: a 1M pool and a 64K pool. The 1M pool holds the column-format data itself, with the IMCU (In-Memory Compression Unit) as its basic storage unit; the 64K pool holds the metadata associated with each IMCU, with the SMU (Snapshot Metadata Unit) as the basic unit of that part of memory. Readers can inspect this memory layout with a query like the one below.
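For example, the two pools can be examined through the V$INMEMORY_AREA view (an illustrative query; the values returned depend on your INMEMORY_SIZE setting):

SQL> SELECT pool, alloc_bytes, used_bytes, populate_status
       FROM v$inmemory_area;

The query returns one row for the 1MB pool and one for the 64KB pool, together with the bytes allocated to and used by each.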
The IMCU is the basic storage unit for columnar data in memory; Oracle tries to keep each IMCU 1M in size, and each IMCU consists of the two parts shown in Figure 1.
Figure 1
The SMU section is primarily used to hold metadata about its IMCU, such as the pointer to the IMCU, the extent ranges and DBA (data block address) ranges the IMCU covers, the journal table pointer, the rowid bitmap, and so on.
1.2 Data load (populate)
Having seen how In-Memory holds data in RAM, let us look at how the data gets loaded into memory. As described above, the data in the data files is stored in row format, so a mechanism is needed to load that data into the In-Memory area and, during the load, to complete the conversion from row format to column format.
First, Oracle supports specifying the INMEMORY attribute on a table, a partition, or a tablespace, which means the attribute applies to physical database objects rather than logical ones. For example, we can use the following statements to set the INMEMORY attribute on database objects:
SQL> ALTER TABLE sales INMEMORY NO MEMCOMPRESS PRIORITY CRITICAL;
SQL> ALTER TABLESPACE ts_data DEFAULT INMEMORY;
SQL> ALTER TABLE sales MODIFY PARTITION sales_201501 INMEMORY;
It should be noted that because the In-Memory component is aimed primarily at OLAP applications, whose operations are mostly queries that often touch only one or a few specific columns of a table, the In-Memory feature also allows loading only some of a table's columns into the In-Memory area, as the sketch below shows.
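For example, the column clause of the INMEMORY syntax can exclude certain columns or compress columns differently (the column names here are illustrative, not from the original example):

SQL> ALTER TABLE sales
       INMEMORY MEMCOMPRESS FOR QUERY (prod_id, amount_sold)
       NO INMEMORY (order_notes);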
Since the size of the In-Memory area is limited, the host's memory resources are limited, and database capacity often exceeds the available memory, Oracle recommends loading tables with high performance requirements into the In-Memory area and keeping tables with lower performance requirements on flash or disk. Of course, if memory is plentiful, the database is small, and the application is mostly query-oriented, you can also consider loading all of the tables into the In-Memory area. Also because of resource constraints, Oracle allows users to set an In-Memory load priority for different tables. The basic principle is that high-priority objects are loaded into the In-Memory area first, and lower-priority objects must wait until the higher-priority objects have been loaded. Oracle provides five In-Memory load priorities; Table 1 describes each priority level.
Table 1

PRIORITY  | Description
----------|----------------------------------------------------------------
CRITICAL  | Populated immediately after the database opens
HIGH      | Populated after all CRITICAL objects, if space remains
MEDIUM    | Populated after all CRITICAL and HIGH objects
LOW       | Populated after all CRITICAL, HIGH, and MEDIUM objects
NONE      | Populated only when the object is first scanned (the default)
In addition, because In-Memory mainly serves query-oriented OLAP or decision-support systems, where most data no longer changes once it has been loaded into the database, compressing the data while loading it clearly saves memory space and can also improve query efficiency (mainly because many of the queried columns contain large numbers of duplicate values). The In-Memory component therefore provides a rich set of compression options, and the user can specify the compression method while specifying the INMEMORY option on an object. Table 2 lists the supported compression levels:

Table 2

NO MEMCOMPRESS
MEMCOMPRESS FOR DML
MEMCOMPRESS FOR QUERY LOW
MEMCOMPRESS FOR QUERY HIGH
MEMCOMPRESS FOR CAPACITY LOW
MEMCOMPRESS FOR CAPACITY HIGH
The compression ratios in the table above increase from top to bottom. The following SQL statement loads the sales table with the highest priority and compresses it using the MEMCOMPRESS FOR QUERY LOW method:

SQL> ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY CRITICAL;
If you need to know the compression ratio each compression option would achieve before specifying one, you can estimate it with the Oracle Compression Advisor (the DBMS_COMPRESSION package), as sketched below.
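A minimal sketch of such an estimate, assuming a SALES table owned by SCOTT and a USERS scratch tablespace (both assumptions), calling DBMS_COMPRESSION.GET_COMPRESSION_RATIO with the in-memory compression constant COMP_INMEMORY_QUERY_LOW; the parameter names follow the 12c documentation:

SET SERVEROUTPUT ON
DECLARE
  l_blkcnt_cmp   PLS_INTEGER;
  l_blkcnt_uncmp PLS_INTEGER;
  l_row_cmp      PLS_INTEGER;
  l_row_uncmp    PLS_INTEGER;
  l_cmp_ratio    NUMBER;
  l_comptype_str VARCHAR2(100);
BEGIN
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => 'USERS',   -- scratch tablespace (assumption)
    ownname        => 'SCOTT',   -- table owner (assumption)
    objname        => 'SALES',
    subobjname     => NULL,
    comptype       => DBMS_COMPRESSION.COMP_INMEMORY_QUERY_LOW,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  -- Print the estimated compression ratio for the chosen option
  DBMS_OUTPUT.PUT_LINE('Estimated ' || l_comptype_str ||
                       ' ratio: ' || l_cmp_ratio);
END;
/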
Finally, the loading process is carried out jointly by the IMCO background process and the worker (W00) processes. After the database starts, or when the INMEMORY option is enabled for some objects, the IMCO process creates load tasks and assigns them to a number of worker processes as needed. Each worker process is responsible for loading part of the data and notifies the IMCO process that the load is complete once all the worker processes have finished loading their portions.
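Population progress can be observed through the V$IM_SEGMENTS view (an illustrative query; the rows returned depend on which objects have been loaded):

SQL> SELECT segment_name, populate_status, bytes_not_populated
       FROM v$im_segments;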
2. Data consistency in the In-Memory area
If our database were read-only, things would be much simpler, because there would be no data-consistency problem. But that is not the case: in most databases, transactions are happening all the time, so data consistency must be guaranteed, and the In-Memory component is no exception. If the data modified by a DML statement has not been loaded into the In-Memory area, the modification is confined to the SGA; but if the modified data has already been loaded into the In-Memory area, a mechanism is needed to keep the data consistent. For example, uncommitted data must not be visible to other sessions, while the session executing the change should see the latest data.
Oracle ensures data consistency through the journal table mechanism. Each IMCU has its own journal table; if the data modified by a DML statement is contained in an IMCU, the modified data is recorded in that journal table, in what we call the private journal. When the transaction commits, the corresponding records in the journal table are marked as the shared journal. This guarantees that a query accessing the IMCU obtains consistent data; if the data the query needs cannot be found in the journal table, Oracle automatically uses the rowid bitmap recorded in the IMCU to locate the corresponding rows in the buffer cache and finds the data that satisfies the query there. Figure 2 depicts the basic relationship between the journal table and the IMCU.
Figure 2
However, if DML statements keep arriving, the journal table grows larger and larger; once most of the data in an IMCU is stale and the new data lives in the journal table, the damage to In-Memory query performance is significant. Oracle therefore defines a threshold that triggers a repopulation process when the ratio of stale data in an IMCU reaches it: the IMCO background process checks at intervals (every 2 minutes by default) whether any IMCU meets the repopulation condition. If a qualifying IMCU is found, the W00 worker processes are notified to repopulate it. Because repopulation is relatively expensive and may affect running statements, Oracle takes a gradual (trickle) approach, processing only a portion of the qualifying IMCUs at a time; the degree can be adjusted through the INMEMORY_TRICKLE_REPOPULATE_SERVERS_PERCENT parameter, as shown below.
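For instance, the share of worker-process time allowed for trickle repopulation could be raised like this (an illustrative value; the appropriate setting depends on your workload):

SQL> ALTER SYSTEM SET inmemory_trickle_repopulate_servers_percent = 5;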
It is hard to quantify how much extra load the journal table puts on the system, because too many factors affect it: the compression method used at load time, the way the data is changed, and how the application accesses the data. Still, some basic principles can minimize the impact of data changes on In-Memory. Since data is loaded into the In-Memory area by extent, the cost of repopulation is high if the changes are randomly distributed across all of a table's extents, as this means a large number of IMCUs must be rebuilt; the cost drops greatly if the changes can be concentrated in a particular range of extents, or if most of the changes are inserts performed via direct-path loads. Another suggestion is to use partitioned tables to hold the data wherever possible, which helps confine changes to specific partitions, and to use NO MEMCOMPRESS or the lightweight MEMCOMPRESS FOR DML method on those partitions (see the sketch below).
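A sketch of this idea, assuming a partitioned ORDERS table with a current partition that still receives DML and a historical partition that is effectively read-only (the table and partition names are illustrative):

SQL> ALTER TABLE orders MODIFY PARTITION orders_2016q1
       INMEMORY MEMCOMPRESS FOR DML;
SQL> ALTER TABLE orders MODIFY PARTITION orders_2015q4
       INMEMORY MEMCOMPRESS FOR QUERY HIGH;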
3. Accessing data in the In-Memory area
3.1 Single-table access
After the data has been loaded into the In-Memory area, it can be accessed through SQL statements. A defining feature of analytic queries is that they care only about certain columns of a table rather than all of them, the values of those columns often contain many duplicates, and the columns used as filter conditions are usually of common data types (for example numeric, string, or date). Oracle's In-Memory component was designed around these characteristics to improve the performance of such analytic query statements.
First, each column in an IMCU carries its own dictionary information and storage index. During the load, the worker process writes a dictionary of the distinct values of each column in the corresponding IMCU and assigns a key value to each row of the column, substituting the key value for the actual value. This saves space and also prepares the data for the CPU's SIMD instructions in later queries. The storage index is a common technique in data warehouses: by recording the maximum and minimum values of a column, it avoids reading large amounts of data that cannot match the filter. The header of each column in the IMCU stores the column's maximum and minimum values within that IMCU, along with their offsets. When querying, Oracle can quickly filter out non-matching data by comparing against the maximum and minimum values, and when a data change affects the storage-index information, it can quickly locate the corresponding position. Note, however, that the storage index does not necessarily apply to every WHERE condition (predicate).
In addition, once the data is in memory, most of the work falls on the CPU, and I/O-related operations are essentially absent (unless part of the table being queried has not yet been loaded into the In-Memory area). Using CPU resources efficiently therefore becomes an important performance factor, and Oracle applies SIMD technology (Single Instruction, Multiple Data) to let the CPU process multiple values with a single instruction. Because the set of operations SIMD supports is limited, this also explains why Oracle creates dictionary information for each column when building an IMCU. Figure 3 illustrates the basic idea of accessing data with SIMD.
Figure 3
In Figure 3, the sales table has been loaded into the In-Memory area; the header of the PROMO_ID column in the IMCU contains the column's dictionary, and the value of each row in the column has been converted to a key value. When the query condition is promo_id=9999, SIMD lets the CPU compare multiple rows of data at a time, which greatly improves query performance.
Finally, we can confirm whether a table is accessed through the In-Memory option by looking for the TABLE ACCESS INMEMORY FULL operation in the execution plan. For example:
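A sketch of such a check against a hypothetical TEST table that has the INMEMORY attribute (the plan output is abbreviated):

SQL> EXPLAIN PLAN FOR SELECT COUNT(*) FROM test;
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

----------------------------------------------------
| Id  | Operation                    | Name        |
----------------------------------------------------
|   0 | SELECT STATEMENT             |             |
|   1 |  SORT AGGREGATE              |             |
|   2 |   TABLE ACCESS INMEMORY FULL | TEST        |
----------------------------------------------------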
3.2 Multi-table joins
In addition to optimizing single-table access, the In-Memory component improves table joins, with two main features: the Bloom filter and In-Memory aggregation.
The Bloom filter should be familiar to most readers. Its main function is to determine whether a piece of data appears in a set, or to find the common elements of large data sets; Oracle has used Bloom filters when processing table joins in some SQL statements since 10g. If the tables involved in a join have the INMEMORY attribute and have been loaded into the In-Memory area, the optimizer first picks one table in the join (usually the smaller one), runs a series of hash functions over the join-condition column, and produces a result bitmap; the data of the other table is then run, in batches, through the same hash functions and compared against that bitmap. The whole process generates no I/O, and SIMD can be used during the comparison, so the introduction of the Bloom filter makes In-Memory even more efficient at processing table joins.
The CBO automatically decides whether to use a Bloom filter when it builds the execution plan; the user does not need to specify it manually. If you see information like the following in the execution plan, a Bloom filter is being used.
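An abbreviated, illustrative plan fragment matching the step numbers discussed below (TEST_SMALL and TEST_BIG come from the discussion; real Id values depend on the full statement):

----------------------------------------------------------
| Id  | Operation                      | Name            |
----------------------------------------------------------
| ... |                                |                 |
|*  3 |   HASH JOIN                    |                 |
|   4 |    JOIN FILTER CREATE          | :BF0000         |
|   5 |     TABLE ACCESS INMEMORY FULL | TEST_SMALL      |
|   6 |    JOIN FILTER USE             | :BF0000         |
|   7 |     TABLE ACCESS INMEMORY FULL | TEST_BIG        |
----------------------------------------------------------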
In the above execution plan:
1. The table TEST_SMALL is accessed in the In-Memory area first (step 5 of the plan), and the Bloom filter (:BF0000) used for the join is then built from it (step 4).
2. The table TEST_BIG is then accessed in the In-Memory area (step 7), and the previously built filter is applied to it (step 6).
3.3 In-Memory aggregation
In data-warehouse applications built on analytic queries, joins across many tables appear frequently alongside simple two-table joins, and they often include grouping and aggregation operations; star queries are a typical example. For this kind of query, Oracle introduced the vector grouping (VECTOR GROUP BY) feature to improve SELECT performance. Vector grouping is a two-phase process:
Phase 1: The CBO finds the dimension tables (the ones with small data volumes) in the query, locates the rows that satisfy the filter conditions on the columns joined to the large fact table, and generates the grouping vectors. The vectors are then combined with the fact-table columns that need to be grouped or aggregated to form a multidimensional array and several temporary tables.
Phase 2: The grouping vectors from the previous phase are applied to the fact table, and the values of the columns to be grouped or aggregated are accumulated into the temporary tables. Finally, the data in the temporary tables is applied to the multidimensional array, and the final grouping or aggregation results are computed.
The construction of the grouping vectors and their comparison against the fact table are all done in memory, and SIMD is used, so the performance of this kind of query can improve greatly. Of course, since the whole operation happens in memory, its demands on memory resources are also large: the process running the query needs enough PGA space. The execution plan below illustrates the use of grouping vectors in a query:
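An abbreviated, illustrative plan fragment consistent with the description that follows (names such as :KV0000 and VW_VT_F486F43F come from the discussion; Id values and the exact shape depend on the real statement):

-----------------------------------------------------------------------
| Id  | Operation                        | Name                       |
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                            |
|   1 |  TEMP TABLE TRANSFORMATION       |                            |
|   2 |   LOAD AS SELECT                 | SYS_TEMP_0FD9D6604_116B7C6 |
|   3 |    KEY VECTOR CREATE BUFFERED    | :KV0000                    |
|   4 |     TABLE ACCESS INMEMORY FULL   | TEST_SMALL_1               |
| ... |   (a similar branch builds :KV0001 from TEST_SMALL_2)         |
| ... |   HASH GROUP BY                  |                            |
| ... |    VIEW                          | VW_VT_F486F43F             |
| ... |     VECTOR GROUP BY              |                            |
| ... |      KEY VECTOR USE              | :KV0000                    |
| ... |       TABLE ACCESS INMEMORY FULL | TEST_BIG                   |
-----------------------------------------------------------------------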
According to the above execution plan:
- First, the tables TEST_SMALL_1 and TEST_SMALL_2 are accessed; they have, of course, already been loaded into the In-Memory area.
- The grouping vectors :KV0000 and :KV0001 are then constructed, and after they are combined with the columns that need to be grouped, the temporary tables (such as SYS_TEMP_0FD9D6604_116B7C6) are created.
- The table TEST_BIG is accessed, after which the grouping vectors are applied to it, and the grouping results begin to be accumulated into the temporary tables.
- The result in the multidimensional array is produced through the view VW_VT_F486F43F, and the final grouping is completed by the HASH GROUP BY step.
4. Fusion of In-Memory and RAC
Like other new Oracle features, the In-Memory feature is compatible with existing database components such as RAC, giving the system high availability and scalability. Because RAC is a typical shared-everything architecture in which the same database is opened on multiple nodes simultaneously, the same database object can be populated into the In-Memory areas of multiple nodes, provided that the database instances on those nodes have an In-Memory area configured (the INMEMORY_SIZE parameter is not 0). Since data can be loaded into multiple nodes, two questions need to be considered:
- Question 1: How is the data distributed across multiple nodes?
- Question 2: Should the data in the In-Memory area be kept redundantly to ensure high availability?
For data distribution, Oracle provides two ways to spread data across nodes: by ranges of the data's rowids, or by the table's partitions (or subpartitions). The first method splits the table's data into several pieces by rowid and loads the pieces evenly onto different nodes; it is better suited to tables whose data distribution is uneven and to scenarios where the application's access to the table is fairly uniform across instances. For example:
SQL> ALTER TABLE test INMEMORY DISTRIBUTE BY ROWID RANGE;
The second approach applies to partitioned tables: Oracle loads each partition into the In-Memory area of a different node according to the partition definition. It is better suited to tables with uniform data distribution where the application's access is fairly uniform across instances, and it fits hash-partitioned tables especially well. For example:

SQL> ALTER TABLE lineorder INMEMORY DISTRIBUTE BY PARTITION;
As for whether data in the In-Memory area should be kept redundantly: on an ordinary RAC database, the data is not duplicated in the In-Memory area, whereas on Exadata the data in the In-Memory area can be made redundant. The reason for this choice is that the private-network configurations of non-Exadata RAC systems vary widely; if redundancy were enabled, then as soon as one instance went down, large amounts of data would have to be shipped across the private network between nodes to restore the redundancy. If private-network performance cannot be guaranteed, such transfers consume a great deal of time and network resources and can lead to serious consequences. Exadata's private network, by contrast, uses InfiniBand together with the high-performance RDS protocol, and its data transfer rate can reach tens of gigabits per second, so the cluster's private network keeps working normally even while handling the large data transfers caused by a node failure.
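On Exadata, this redundancy is requested with the DUPLICATE clause (a sketch; DUPLICATE ALL mirrors the object in every node's In-Memory area, and the clause is ignored on non-Exadata systems):

SQL> ALTER TABLE sales INMEMORY DUPLICATE ALL;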
In addition, because today's hardware-level high-availability technology is very mature, the failure of a database instance or node is mostly a one-off event from which the system recovers quickly. Oracle therefore does not trigger data redistribution immediately upon discovering a failed instance or node; instead it waits for a period of time, giving the problem node or instance a chance to restart and reload its own data. Only after the wait times out do the other nodes trigger redistribution, and the data from the failed node's In-Memory area is redistributed among the surviving nodes. Given this design, it is recommended that when using the In-Memory option on a RAC system, a portion of each node's In-Memory area be set aside so there is still enough space to accommodate redistributed data.
Figures 4 and 5 below illustrate the difference in how data is kept in the In-Memory area between Exadata and non-Exadata environments.
Figure 4 - Exadata environment
Figure 5 - Non-Exadata environment
It is easy to see that, in a RAC environment, no single node holds all of a table's data in its In-Memory area. Therefore, in a RAC environment, Oracle's automatic degree of parallelism (Auto DOP) must be enabled for queries to use In-Memory access against tables loaded into the In-Memory area. It should also be explained that what travels between the instances of such a parallel query is not IMCUs: each node runs the same SQL statement against its own portion of the data and sends its result set to the instance that initiated the statement, where the final result is assembled and returned to the user. For example, on a 4-node RAC database where the sales table has been loaded into the In-Memory area, consider the following query:
SELECT SUM(stock) FROM sales WHERE store_id IN (100, 200, 300) AND order_date = TO_DATE('2016-01-01', 'yyyy-mm-dd');
The CBO first calculates the cost of using an In-Memory scan, and if that cost is the lowest, it chooses to access the sales table in the In-Memory area. Next, Oracle consults the data dictionary to find out which instances the table has been loaded into, launches the corresponding parallel slave processes on those nodes, and sends the query to them. After each instance's parallel processes run their part of the SQL statement, the resulting values are sent to the instance that initiated the query, which produces the final value and returns it to the client. Throughout the process, what is passed between instances is not IMCUs but aggregated values, so a large amount of private-network data traffic is avoided.
This concludes the author's brief introduction to the Oracle 12c In-Memory component. I hope it is helpful to readers who develop on the Oracle database and want to build applications that use the In-Memory component.
Source: Oracle 12c column storage (In-Memory theory), http://blog.sina.com.cn/s/blog_74a7d3390102wegl.html