Oracle R&D skills

Source: Internet
Author: User

Oracle R&D skills are summarized in this record. If you have any suggestions, please leave a message to supplement them.

Contents:
1. Oracle elementary skills
1.1 High-performance SQL optimization
1.2 Table design skills
2. Oracle advanced skills
2.1 Massive-scale design skills
2.2 Architecture design from the DBA perspective

1. Oracle elementary skills

1.1 High-performance SQL optimization

Optimizing data reads involves a wide range of factors: not only the physical side (I/O, storage type, hardware, network environment) but also the logical side (DBMS environment settings, SQL types, execution plans, index types, and the column order of composite indexes). Here we discuss data-read optimization only from the programmer's perspective. The database environment is where SQL runs, and the goal is to use it efficiently: reduce the resource consumption of each SQL statement, and reduce the number of times SQL is executed. To reduce the resource consumption of a single statement, we first need to understand what Oracle does when it executes one:

1. Create a cursor.
2. Parse the statement: Oracle performs syntax analysis, checks how the SQL is written, validates object definitions and permissions, selects the best execution plan, and loads the statement into the shared SQL area. During parsing, Oracle can share SQL between statements that use bind variables, which reduces parsing work, so variable binding is an optimization point.
3. Describe the query result set: its data types, field names, and lengths.
4. Define the query output: specify the location, size, and data type of the receiving variable for each queried column. If necessary, Oracle converts data types implicitly.
5. Bind variables.
6. Parallelize: operations such as creating indexes, creating tables with subqueries, and operating on partitioned tables can run as parallel statements.
By consuming more resources, parallel execution completes the SQL sooner.
7. Execute: the statement starts running. This phase can be optimized with batch processing.
8. Fetch: the queried rows are fetched and the result set is returned. This phase can also be optimized with batch (array) fetching.
9. Close the cursor.

From the execution process analyzed above, we can reduce the resource consumption of a single SQL statement in the following ways:
- Bind variables, so that SQL statements are shared.
- Ensure the optimal execution plan is selected; the plan determines the table join method and the data access path. Data reads fall into sequential and random scans, and given the physical characteristics of disks, random scans greatly hurt read performance; optimizing SQL largely means turning random scans into sequential range scans.
- Exploit partial range scans: within a range scan, Oracle can intelligently read only part of the data rather than all of it, so execution stays fast regardless of the size of the underlying data range.
- Reduce sorting: use indexes instead of sorts wherever possible, and create suitable composite indexes.
- Limit the size of the query result set as much as possible.

To reduce the number of SQL executions:
- Bind variables, to reduce SQL parsing.
- Optimize the business logic, to reduce how often SQL runs.

1.2 Table design skills

In a DB system we need to know what types of tables exist and the basic optimization ideas for each. We classify tables as follows:
1) Tables with a small amount of data.
2) Large and medium-sized tables holding reference data.
3) Large and medium-sized tables recording business activity.
4) Large tables used for storage.

1) Definition of a table with a small amount of data: a single I/O can read the entire table into memory.
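The bind-variable point (steps 2 and 5) and the batch-processing point (steps 7 and 8) from section 1.1 can be sketched briefly. Python's built-in sqlite3 module stands in for an Oracle driver here so the sketch is runnable; Oracle drivers such as python-oracledb use the same parameterized style, and the table and data are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# Batch execution with bind variables: one SQL text, many bound rows.
# Because the statement text never changes, the server can reuse the
# parsed statement instead of hard-parsing a new literal every time.
conn.executemany("INSERT INTO users (id, name) VALUES (?, ?)",
                 [(i, f"user{i}") for i in range(1, 1001)])

# Shared SELECT: only the bound id differs between executions.
sql = "SELECT name FROM users WHERE id = ?"
first = conn.execute(sql, (1,)).fetchone()[0]

# Array fetching: pull rows in chunks of 100 instead of one by one.
cur = conn.execute("SELECT id FROM users ORDER BY id")
batches = 0
while True:
    rows = cur.fetchmany(100)
    if not rows:
        break
    batches += 1
print(first, batches)  # user1 10
```

The key property is that the SQL text stays identical across executions, so the parse cost is paid once rather than once per value.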
In other words, the number of blocks in the table is smaller than db_file_multiblock_read_count. Such tables are generally dictionary tables and are rarely updated, so you can consider IOT (index-organized) tables, cluster tables (parent and child tables stored together), or heap tables. They are typically placed on the inner side of a nested loop and are read many times, so they are worth tuning. Optimization methods: pctfree, cache, indexes, and a separate database shard.

2) Definition of large and medium-sized reference tables: they mainly store data about business entities such as actors, subjects, and purposes, for example a user information table. Features: the stored data volume is large; access is mostly random reads and small-range scans; data is generally read by primary key or through a table join, on the inner side of a loop; few rows are inserted and most access is SELECT, so we usually create many indexes on such tables. Optimization methods: create suitable indexes, partitions, and clusters.

3) Definition of large and medium-sized tables recording business activity: they store the transaction activity data of the business, and their volume grows over time. Features: because the analysis dimensions are rich, the read patterns are varied; the data volume is very large and grows quickly; such tables usually sit on the outer side of a loop. Sometimes the data range cannot be narrowed through a single column, so composite indexes are often used. Optimization methods: create appropriate indexes, partitions, clusters, and data layers.

4) Definition of a large table used for storage: it stores and manages data such as logs. Features: a huge and ever-growing data volume, and a high insertion cost. Optimization methods: pctfree, partitioning, and separate database sharding.

2. Oracle advanced skills

2.1 Massive-scale design skills

"Massive" here has two meanings: a large amount of data, and a high execution frequency.
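One way to realize the "data layers" idea for the business-activity and log tables in section 1.2 is to keep only fresh rows in the active table and move cold rows to an archive layer. The sketch below uses SQLite and an invented events table purely for illustration; in Oracle the same effect is usually achieved with range partitioning on a date column.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events         (id INTEGER PRIMARY KEY, day INTEGER, payload TEXT);
CREATE TABLE events_archive (id INTEGER PRIMARY KEY, day INTEGER, payload TEXT);
INSERT INTO events VALUES (1, 100, 'old'), (2, 101, 'old'), (3, 200, 'new');
""")

# Periodic job: move rows older than the cutoff into the archive layer,
# keeping the hot table small so its scans and indexes stay cheap.
cutoff = 150
conn.execute("INSERT INTO events_archive SELECT * FROM events WHERE day < ?",
             (cutoff,))
conn.execute("DELETE FROM events WHERE day < ?", (cutoff,))

active = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
archived = conn.execute("SELECT COUNT(*) FROM events_archive").fetchone()[0]
print(active, archived)  # 1 2
```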
For an OLTP system like ours, each SQL statement is in fact interested in only a small amount of data. As long as every SQL statement processes only the data it is interested in, and every table stores fresh data, performance can be kept high; this depends on both table design and SQL writing. For example, in our trader back end some SQL may be interested in a lot of data; if possible, consider database sharding so that such individual businesses do not affect the stability of the overall system.

The data-volume problem can be attacked from the following directions:
- Partitioned tables: split a table transparently to the application.
- Split tables along multiple dimensions: a large table can be split by time or by function, with corresponding routing rules. If a single server still cannot meet the requirements after table splitting, consider database sharding.
- Intermediate tables: when tables with large amounts of data must be joined, and the business permits, create an intermediate table and serve the results from it directly.

The high execution frequency can be addressed as follows:
- Data caching: cache the table data persistently at the cache layer.
- Simplify or optimize the SQL that reads and writes the database: reducing the resource consumption of a single SQL statement also reduces its response time.
- Database sharding: use sharding to spread the load.

2.2 Architecture design from the DBA perspective

The DB is the part of a system most prone to bottlenecks. Considering this at the beginning of the design costs far less than discovering a DB bottleneck later and then solving it; sometimes a DB bottleneck cannot be solved at all and the whole system has to be redeveloped. From experience in practice, DBAs propose the following points to supplement system architecture design from the DBA perspective. When we face huge traffic and data volumes, the system should be kept simple and small.
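The intermediate-table idea from section 2.1 can be sketched as follows: a join and aggregation over large tables is materialized once (for example by a periodic job), and hot queries read the result table directly. SQLite again stands in for the RDBMS; the tables and figures are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, amount REAL);
CREATE TABLE users  (id INTEGER PRIMARY KEY, region TEXT);
INSERT INTO users  VALUES (1, 'east'), (2, 'west');
INSERT INTO orders VALUES (10, 1, 5.0), (11, 1, 7.5), (12, 2, 3.0);
""")

# Build the intermediate table once, so hot queries no longer pay
# for the join and aggregation on every execution.
conn.executescript("""
CREATE TABLE region_sales AS
SELECT u.region AS region, SUM(o.amount) AS total
FROM orders o JOIN users u ON o.user_id = u.id
GROUP BY u.region;
""")

# Hot path: a single-table read instead of a join.
rows = dict(conn.execute("SELECT region, total FROM region_sales"))
print(rows)  # {'east': 12.5, 'west': 3.0}
```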
Define the system's design scale clearly: the business scale the system supports, the capacity it supports, and whether it needs to expand. The system's most valuable resources are CPU, memory, I/O, and network; I/O matters most, because it is the short board and the easiest place to hit a bottleneck, so it should be the benchmark when designing the system. For example, when designing an OLTP system, refer to the following formulas:

r_iops = PV / (24 * 3600) * Dpv * Lvdpv * X * (1 - Hit) * Rwrate
w_iops = PV / (24 * 3600) * Dpv * Lvdpv * X * (1 - Hit) * (1 - Rwrate)

Description:
- PV: page views per day.
- Dpv: dynamic PV rate, the percentage of dynamic PVs in daily PVs.
- Lvdpv: logical reads generated by each dynamic PV.
- Hit: cache hit rate.
- Rwrate: the system's read ratio (the read share of read/write traffic).

These formulas answer questions such as: how many I/Os does a system supporting 10 million PVs need, and how many disks are required to provide those I/Os? Reference IOPS values for common hard disks when planning data storage capacity:
- 10,000 rpm SAS: 113 IOPS
- 15,000 rpm SAS: 146-156 IOPS
- 5,400 rpm SATA: 65 IOPS
- 7,200 rpm SATA: 71 IOPS
- 15,000 rpm FC: 150 IOPS

Monitoring: efficient and precise monitoring of performance and faults lets you avoid most problems in advance and prepare contingency plans for emergencies, so that a sudden failure in one module, whether from load or from misoperation, does not cascade. Core business can be protected by reducing coupling between modules or by giving each module a start/stop switch. Preventing a data-push avalanche: when designing the DB architecture, consider how to prevent the performance jitter, or even an access avalanche, that pushed data can cause. Examples from earlier designs include a high-performance KV database in front of the DB and a load-shifting design in the application layer.
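The capacity formulas above are easy to put into code. The original leaves X undefined; here it is treated as a peak-load factor, and all the sample figures (30% dynamic PVs, 10 logical reads per dynamic PV, peak factor 3, 90% cache hit rate, 90% reads) are assumptions chosen only to show the arithmetic.

```python
import math

def iops(pv_per_day, dpv_rate, lvdpv, peak_factor, hit_rate, read_share):
    """Apply PV/(24*3600) * Dpv * Lvdpv * X * (1 - Hit) and split the
    resulting physical I/O rate into read and write IOPS."""
    base = (pv_per_day / (24 * 3600) * dpv_rate * lvdpv
            * peak_factor * (1 - hit_rate))
    return base * read_share, base * (1 - read_share)

# Assumed figures for a system serving 10 million PVs per day.
r_iops, w_iops = iops(10_000_000, 0.30, 10, 3, 0.90, 0.90)
print(round(r_iops, 2), round(w_iops, 2))  # 93.75 10.42

# Rough disk count for the reads, at ~113 IOPS per 10,000 rpm SAS disk.
print(math.ceil(r_iops / 113))
```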
Load shifting: when the access volume suddenly increases, the abnormal peak has a great impact on system performance and can even drag the system down. A load-shifting design avoids this problem and lets the system process requests in its best performance state: for example, use a queue to absorb the incoming requests and process them gently, and use database sessions to control the maximum number of requests the database handles concurrently, so the database works at optimal performance. A NoSQL database is very efficient in KV scenarios; to use it effectively and reduce access to the RDBMS storage behind it, have the RDBMS actively push changed data to the NoSQL database. The application reads NoSQL first, and on a timeout falls back to reading the back-end RDBMS.
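The push-and-fallback pattern described above can be sketched in a few lines. Plain dicts stand in for the NoSQL KV store and the RDBMS; in a real deployment these would be network services and the fallback would be triggered by a miss or a timeout.

```python
# Stand-ins for the real stores: `kv` plays the NoSQL cache,
# `rdbms` plays the backing relational database.
kv = {}
rdbms = {"user:1": "alice"}

def rdbms_write(key, value):
    """Write path: the RDBMS is authoritative and actively pushes
    the changed row into the KV store."""
    rdbms[key] = value
    kv[key] = value  # push-on-change keeps the cache fresh

def read(key):
    """Read path: try the KV store first; on a miss (or, in a real
    system, a timeout) fall back to the RDBMS and repopulate."""
    if key in kv:
        return kv[key]
    value = rdbms.get(key)
    if value is not None:
        kv[key] = value
    return value

print(read("user:1"))   # miss -> falls back to RDBMS -> 'alice'
rdbms_write("user:2", "bob")
print(read("user:2"))   # served from the KV store -> 'bob'
```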
