Oracle large data volume query: a practical analysis

Scenario: a 50-million-row extraction from an Oracle table that is still receiving inserts at a rate of several hundred rows per minute. The table is range-partitioned by time, one partition per month from 201101 to 201512, has no indexes, and has 14 columns averaging 30 bytes each. Test server: Xeon 5650, 32 CPU cores, Windows Server 2003, 16 GB physical memory; test tool: PL/SQL Developer.

1. Initial query:
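For reference, a table set up as described (range-partitioned by month on gpstime, no indexes) could be created roughly as below. The actual DDL is not given in the article, so the column types and the omitted columns are assumptions; the partition naming follows the GPSHISTORY201111 name used in the queries later on.

```sql
-- Hypothetical DDL matching the description; column types are guesses.
CREATE TABLE t_gps_record (
    id         NUMBER,
    carcode    VARCHAR2(20),
    longtitude NUMBER,            -- spelled as in the original queries
    latitude   NUMBER,
    velocity   NUMBER,
    gpstime    DATE,
    isonline   NUMBER(1)
    -- ...the remaining 7 columns are omitted here
)
PARTITION BY RANGE (gpstime) (
    PARTITION gpshistory201101 VALUES LESS THAN (TO_DATE('2011-02-01', 'YYYY-MM-DD')),
    PARTITION gpshistory201102 VALUES LESS THAN (TO_DATE('2011-03-01', 'YYYY-MM-DD')),
    -- ...one partition per month through 2015...
    PARTITION gpshistory201512 VALUES LESS THAN (TO_DATE('2016-01-01', 'YYYY-MM-DD'))
);
```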
string.Format(@"
select *
  from (select r.id, r.carcode, r.longtitude, r.latitude, r.velocity, r.gpstime, r.isonline
          from t_gps_record r
         where id in (select min(id)
                        from t_gps_record r
                       where carcode = '{0}'
                       group by to_char(gpstime, 'yyyy-MM-dd HH24:mi'))
           and carcode = '{0}'
           and gpstime > (select nvl((select max(gpstime) from t_gps_carposition where carcode = '{0}'),
                                     (select min(gpstime) from t_gps_record where carcode = '{0}'))
                            from dual)
         order by gpstime asc)
 where rownum <= 200", row["carcode"].ToString());
Fetching the first 200 rows took 2 minutes 16 seconds; fetching just 20 rows still took 2 minutes 14 seconds. The runtime is essentially independent of the number of rows returned. 2. Next, I hard-coded the minimum time instead of computing it in a subquery:
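A runtime that does not change with the row count suggests a full table scan: with no indexes, every variant has to read the same data. One way to confirm this is with EXPLAIN PLAN; a sketch, with the predicate value as a placeholder:

```sql
EXPLAIN PLAN FOR
SELECT MIN(id)
  FROM t_gps_record
 WHERE carcode = 'XXX'
 GROUP BY TO_CHAR(gpstime, 'yyyy-MM-dd HH24:mi');

-- The plan would be expected to show TABLE ACCESS FULL across all
-- partitions (PARTITION RANGE ALL) when no pruning predicate exists.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```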
string.Format(@"
select *
  from (select r.id, r.carcode, r.longtitude, r.latitude, r.velocity, r.gpstime, r.isonline
          from t_gps_record r
         where id in (select min(id)
                        from t_gps_record r
                       where carcode = '{0}'
                       group by to_char(gpstime, 'yyyy-MM-dd HH24:mi'))
           and carcode = '{0}'
           and gpstime > to_date('2011-11-1 00:00:00', 'yyyy-mm-dd HH24:mi:ss')
         order by gpstime asc)
 where rownum <= 200", row["carcode"].ToString());
The query time dropped to 1 minute 34 seconds. 3. Query with redundant points removed (one point per car per minute), over a one-hour window and without the per-car filter:
select r.id, r.carcode, r.longtitude, r.latitude, r.velocity, r.gpstime, r.isonline
  from t_gps_record r
 where id in (select min(id)
                from t_gps_record r
               group by carcode, to_char(gpstime, 'yyyy-MM-dd HH24:mi'))
   and gpstime >= to_date('2011-11-1 9:00:00', 'yyyy-mm-dd HH24:mi:ss')
   and gpstime <= to_date('2011-11-1 9:59:59', 'yyyy-mm-dd HH24:mi:ss')
 order by gpstime asc
Query time: 3 minutes 29 seconds, 1,426 rows in total. 4. The same query with the partition named explicitly:
select r.id, r.carcode, r.longtitude, r.latitude, r.velocity, r.gpstime, r.isonline
  from t_gps_record r
 where id in (select min(id)
                from t_gps_record partition (GPSHISTORY201111) r
               group by carcode, to_char(gpstime, 'yyyy-MM-dd HH24:mi'))
   and gpstime >= to_date('2011-11-1 9:00:00', 'yyyy-mm-dd HH24:mi:ss')
   and gpstime <= to_date('2011-11-1 9:59:59', 'yyyy-mm-dd HH24:mi:ss')
 order by gpstime asc
With the partition specified, the query takes 17 seconds and returns the same 1,426 rows, an improvement of more than 10x. For tables of this size, creating a partitioned table is essential, and queries should be written so that only the relevant partitions are scanned.
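The explicit PARTITION (...) clause works, but it hard-codes the partition name into the SQL. An alternative worth trying (a sketch, not tested against this table) is to repeat the gpstime predicate inside the min(id) subquery, so that the optimizer can prune to the 201111 partition on its own:

```sql
SELECT r.id, r.carcode, r.longtitude, r.latitude, r.velocity, r.gpstime, r.isonline
  FROM t_gps_record r
 WHERE r.id IN (
         SELECT MIN(id)
           FROM t_gps_record
          -- Same date range as the outer query: the predicate on the
          -- partition key lets Oracle prune to GPSHISTORY201111
          -- without naming the partition explicitly.
          WHERE gpstime >= TO_DATE('2011-11-1 9:00:00', 'yyyy-mm-dd HH24:mi:ss')
            AND gpstime <= TO_DATE('2011-11-1 9:59:59', 'yyyy-mm-dd HH24:mi:ss')
          GROUP BY carcode, TO_CHAR(gpstime, 'yyyy-MM-dd HH24:mi'))
   AND r.gpstime >= TO_DATE('2011-11-1 9:00:00', 'yyyy-mm-dd HH24:mi:ss')
   AND r.gpstime <= TO_DATE('2011-11-1 9:59:59', 'yyyy-mm-dd HH24:mi:ss')
 ORDER BY r.gpstime ASC;
```

This keeps the query portable across months: changing the date range automatically changes which partition is scanned.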