Yesterday I saw a colleague using the PL/SQL Developer tool to load text data into Oracle tables. There were two text files, one with about 300,000 records and one with about 70,000. During the import, every error record popped up a dialog that had to be confirmed, so he used a screen-automation tool (a "screen wizard") to click through them automatically. Even so, loading the 70,000-record file alone took about 10 minutes.
Seeing this, I offered to help speed up the import. Two options came to mind:
Option 1: Oracle's SQLLDR command
Option 2: Oracle external tables
I chose option 1 because the text files contain error records.
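For comparison, option 2 would look roughly like the following. This is a hypothetical sketch (the directory name and column layout are assumptions, mirroring the target table described later), shown only to illustrate the alternative; with an external table the bad rows still have to be handled, e.g. via REJECT LIMIT:

```sql
-- Hypothetical sketch of option 2 (directory and names are assumptions).
-- An external table maps the flat file directly onto a queryable table.
CREATE DIRECTORY data_dir AS '/home/oracle';

CREATE TABLE stat_user_stay_info_ext (
  jobid             VARCHAR2(64),
  appkey            VARCHAR2(200),
  phone_softversion VARCHAR2(32),
  dim_type          VARCHAR2(64),
  dim_code          VARCHAR2(64),
  time_peroid       VARCHAR2(4),
  stat_date         VARCHAR2(500),
  indicator         VARCHAR2(200),
  stat_time         VARCHAR2(500),
  value             NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    SKIP 1
    FIELDS TERMINATED BY ','
  )
  LOCATION ('lottu.txt')
)
REJECT LIMIT UNLIMITED;
```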
The text file is formatted as follows. It has 76,761 lines; the first line is a header row, so there are 76,760 data records:
$ wc -l lottu.txt
76761 lottu.txt
$ head lottu.txt
stat_user_stay_info.rowkey,stat_user_stay_info.appkey,stat_user_stay_info.phone_softversion,stat_user_stay_info.dim_type,stat_user_stay_info.dim_code,stat_user_stay_info.time_peroid,stat_user_stay_info.stat_date,stat_user_stay_info.indicator,stat_user_stay_info.stat_time,stat_user_stay_info.value
3a00997_7c34d20170108,307A5C626E6C2F6472636E6E6A2F736460656473,2.14.0,cpid,blf1298_12243_001,1,20170105,stay3day,20170109102339,1
3a00997_bf86b20170108,307A5C626E6C2F6472636E6E6A2F736460656473,2.13.0,cpid,blp1375_13621_001,1,20170105,stay3day,20170109102339,7
3a00e87_4b11a20170126,337A5C626E6C2F6472636E6E6A2F736460656473,1.4.0,cpid,all,1,20170123,stay3day,20170127095931,6
3a0129a_6575220170118,307A5C626E6C2F6460726E742F716D7472,all,cpid,bsf1389_10917_001,1,20170116,stay2day,20170119094145,1
3a0183b_5764a20170202,307A5C626E6C2F6472636E6E6A2F736460656473,1.91,cpid,blf1298_12523_001,1,20170128,stay5day,20170203094327,1
3a01b9b_54b4720170123,307A5C626E6C2F6472636E6E6A2F736460656473,2.13.0,cpid,blp1375_13641_001,1,20170122,stay1day,20170124102457,3
3a0230d_7464120170126,307A5C626E6C2F6460726E742F606F65736E686569646D716473,all,cpid,bsp1405_13363_001,1,20170122,stay4day,20170127100446,18
3a02bed_3ea3320170206,307A5C626E6C2F6472636E6E6A2F736460656473,2.15.0,cpid,blp1375_14217_001,1,20170130,stay7day,20170207135438,1
3a03fe3_4c5fe20170119,307A5C21626E6C2F6472776865646E21,all,cpid,bvf1328_10885_001,1,20170116,stay3day,20170120093733,1
The target table is defined as follows:
SQL> desc STAT_USER_STAY_INFO1;
Name Null? Type
----------------------------------------- -------- ----------------------------
JOBID VARCHAR2(64)
APPKEY NOT NULL VARCHAR2(200)
PHONE_SOFTVERSION NOT NULL VARCHAR2(32)
DIM_TYPE NOT NULL VARCHAR2(64)
DIM_CODE NOT NULL VARCHAR2(64)
TIME_PEROID NOT NULL VARCHAR2(4)
STAT_DATE NOT NULL VARCHAR2(500)
INDICATOR NOT NULL VARCHAR2(200)
STAT_TIME VARCHAR2(500)
VALUE NOT NULL NUMBER
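The SQLLDR command below references /home/oracle/stay_info.ctl. The original control file is not shown, but given the data format and the table definition above, it would look roughly like this (a hypothetical reconstruction; the column list and options are assumptions):

```text
-- Hypothetical reconstruction of stay_info.ctl (not the original file).
OPTIONS (SKIP=1)            -- skip the header line
LOAD DATA
INFILE '/home/oracle/lottu.txt'
APPEND
INTO TABLE stat_user_stay_info1
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(jobid, appkey, phone_softversion, dim_type, dim_code,
 time_peroid, stat_date, indicator, stat_time, value)
```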
I ran the SQLLDR command, but the result was unexpected: only about 55,000 records were imported.
sqlldr lottu/li0924 control=/home/oracle/stay_info.ctl log=/home/oracle/stay_info.log bad=/home/oracle/stay_info.bad
Viewing the log file (trimmed here for space):
.......
Record 55076: Rejected - Error on table STAT_USER_STAY_INFO1, column DIM_CODE.
ORA-01400: cannot insert NULL into ("LOTTU"."STAT_USER_STAY_INFO1"."DIM_CODE")
MAXIMUM ERROR COUNT EXCEEDED - Above statistics reflect partial run.
Table STAT_USER_STAY_INFO1:
55025 Rows successfully loaded.
51 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Space allocated for bind array: 165120 bytes(64 rows)
Read buffer bytes: 1048576
Total logical records skipped: 0
Total logical records read: 55105
Total logical records rejected: 51
Total logical records discarded: 0
Run began on Fri Feb 24 10:51:02 2017
Run ended on Fri Feb 24 10:51:09 2017
Elapsed time was: 00:00:06.87
CPU time was: 00:00:00.46
The log reports only 55,025 rows successfully loaded, plus 51 rejected; that falls far short of the file's 76,761 lines.
The log does not lie: querying the table confirms exactly 55,025 records.
SQL> select count(*) from STAT_USER_STAY_INFO1;
COUNT(*)
----------
55025
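Incidentally, rejections like the ORA-01400 above can be anticipated by scanning the file for rows that would violate the NOT NULL constraints before loading. A minimal sketch with awk, assuming DIM_CODE is the 5th comma-separated field as in the header shown earlier (a tiny sample file is created here so the command is self-contained; in practice point it at the real lottu.txt):

```shell
# Pre-check: count data rows whose 5th field (dim_code) is empty;
# each such row would be rejected with ORA-01400 during the load.
# A small stand-in sample is written first; skip the header with NR > 1.
printf 'rowkey,appkey,ver,dim_type,dim_code,rest\n%s\n%s\n' \
  'a,b,1.0,cpid,blf_001,1' 'a,b,1.0,cpid,,1' > sample.txt
awk -F',' 'NR > 1 && $5 == "" { n++ } END { print n+0 " empty dim_code row(s)" }' sample.txt
```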
Strange: the file has 76,761 lines, so why did Oracle stop at around 55,000 records and never process the remaining 20,000-plus?
In fact, Oracle gave the answer right there in the log file; I had simply overlooked the line "MAXIMUM ERROR COUNT EXCEEDED - Above statistics reflect partial run."
It means the maximum number of tolerated errors was exceeded.
With that clear, let's look at the SQLLDR command's usage:
$ sqlldr
SQL*Loader: Release 11.2.0.1.0 - Production on Fri Feb 24 11:00:08 2017
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Usage: SQLLDR keyword=value [,keyword=value,...]
Valid Keywords:
userid -- ORACLE username/password
control -- control file name
log -- log file name
bad -- bad file name
data -- data file name
discard -- discard file name
discardmax -- number of discards to allow (Default all)
skip -- number of logical records to skip (Default 0)
load -- number of logical records to load (Default all)
errors -- number of errors to allow (Default 50)
rows -- number of rows in conventional path bind array or between direct path data saves
(Default: Conventional path 64, Direct path all)
bindsize -- size of conventional path bind array in bytes (Default 256000)
silent -- suppress messages during run (header,feedback,errors,discards,partitions)
direct -- use direct path (Default FALSE)
parfile -- parameter file: name of file that contains parameter specifications
parallel -- do parallel load (Default FALSE)
file -- file to allocate extents from
skip_unusable_indexes -- disallow/allow unusable indexes or index partitions (Default FALSE)
skip_index_maintenance -- do not maintain indexes, mark affected indexes as unusable (Default FALSE)
commit_discontinued -- commit loaded rows when load is discontinued (Default FALSE)
readsize -- size of read buffer (Default 1048576)
external_table -- use external table for load; NOT_USED, GENERATE_ONLY, EXECUTE (Default NOT_USED)
columnarrayrows -- number of rows for direct path column array (Default 5000)
streamsize -- size of direct path stream buffer in bytes (Default 256000)
multithreading -- use multithreading in direct path
resumable -- enable or disable resumable for current session (Default FALSE)
resumable_name -- text string to help identify resumable statement
resumable_timeout -- wait time (in seconds) for RESUMABLE (Default 7200)
date_cache -- size (in entries) of date conversion cache (Default 1000)
no_index_errors -- abort load on any index errors (Default FALSE)
PLEASE NOTE: Command-line parameters may be specified either by
position or by keywords. An example of the former case is 'sqlldr
scott/tiger foo'; an example of the latter is 'sqlldr control=foo
userid=scott/tiger'. One may specify parameters by position before
but not after parameters specified by keywords. For example,
'sqlldr scott/tiger control=foo logfile=log' is allowed, but
'sqlldr scott/tiger control=foo log' is not, even though the
position of the parameter 'log' is correct.
Note the line "errors -- number of errors to allow (Default 50)". The file contains 51 bad records, so SQL*Loader hit the default limit of 50 tolerated errors and aborted the load partway through; no surprise there.
The fix is simply to raise the errors parameter on the SQLLDR command line:
sqlldr lottu/li0924 control=/home/oracle/stay_info.ctl log=/home/oracle/stay_info.log bad=/home/oracle/stay_info.bad errors=1000
This time the entire load completed in about 20 seconds. Seeing that, my colleague decisively switched to this method.
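After re-running the load, the log can be checked to confirm how many rows actually went in. A small sketch (the real log lives at /home/oracle/stay_info.log; a stand-in log line with an illustrative number is written here so the snippet is self-contained):

```shell
# Extract the "Rows successfully loaded" count from a SQL*Loader log.
# A stand-in line is created first; in practice point awk at the real
# /home/oracle/stay_info.log produced by the load.
printf '  76710 Rows successfully loaded.\n' > stay_info.log
awk '/Rows successfully loaded/ { print $1 }' stay_info.log
```

The rejected rows themselves land in the bad file given on the command line, so `wc -l` on stay_info.bad shows how many records still need manual cleanup.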