We currently use the community edition of Infobright, which does not support DML; data can only be imported with LOAD DATA.
If the data contains special control characters, the import frequently fails with errors, which is annoying. There are two coping strategies:
1. Set up a reject file before importing. By setting @BH_REJECT_FILE_PATH and @BH_ABORT_ON_COUNT, a limited number of failed records can be ignored during the import and saved to the specified file.
The code is as follows:
/** abort the load once the number of rejected rows reaches @BH_ABORT_ON_COUNT **/
SET @BH_REJECT_FILE_PATH = '/tmp/reject_file';
SET @BH_ABORT_ON_COUNT = 10;
Setting @BH_ABORT_ON_COUNT to -1 means the load is never aborted, no matter how many rows are rejected.
You can also set the @BH_ABORT_ON_THRESHOLD option, which specifies the maximum fraction of rows allowed to be rejected. Its value is a decimal, e.g. @BH_ABORT_ON_THRESHOLD = 0.03 (i.e. 3%).
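As a rough sketch of how these two limits interact, here is an illustrative Python function (this is not Infobright internals; the function name and defaults are made up for the example, mirroring the session variables above):

```python
def should_abort(rejected, total, abort_on_count=10, abort_on_threshold=None):
    """Return True when a load should abort.

    abort_on_count = -1 means never abort on a fixed count;
    abort_on_threshold is a fraction, e.g. 0.03 for 3%.
    """
    # A percentage threshold, when set, takes over from the fixed count
    if abort_on_threshold is not None and total > 0:
        return rejected / total > abort_on_threshold
    if abort_on_count == -1:
        return False
    return rejected >= abort_on_count

print(should_abort(10, 1000))                           # count limit of 10 reached -> True
print(should_abort(10, 1000, abort_on_count=-1))        # -1: never abort -> False
print(should_abort(20, 1000, abort_on_threshold=0.03))  # 2% rejected < 3% -> False
```

With the threshold form, 10 rejected rows out of only 100 loaded (10%) would abort, while the same 10 rejects in a million-row load would not.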
2. Specify terminators at export time. In addition, when exporting data you can set field and line terminators and configure an escape character (such as \), for example:
The code is as follows:
SELECT fields_list INTO OUTFILE '/tmp/outfile.csv' FIELDS TERMINATED BY '||' ESCAPED BY '\\' LINES TERMINATED BY '\r\n' FROM mytable;
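To see what the exported rows look like under these settings, here is a hypothetical Python helper that joins fields with '||', doubles backslashes, and ends each line with '\r\n' (a simplification for illustration; real INTO OUTFILE escaping covers more characters than just the backslash):

```python
FIELD_SEP = "||"   # fields terminated by '||'
LINE_SEP = "\r\n"  # lines terminated by '\r\n'

def format_row(fields):
    """Join one record the way the export above would:
    backslashes doubled, '||' between fields, CRLF at the end."""
    escaped = [f.replace("\\", "\\\\") for f in fields]
    return FIELD_SEP.join(escaped) + LINE_SEP

print(format_row(["a", "b"]) == "a||b\r\n")  # -> True
```

The point of '||' as a field separator is that a bare comma inside a value no longer breaks the record apart.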
3. Alternatively, set the line terminator to another special marker, for example: SELECT fields_list ... INTO OUTFILE '/tmp/outfile.csv' FIELDS TERMINATED BY '||' ESCAPED BY '\\' LINES TERMINATED BY '$$$$$\r\n' FROM mytable; Of course, in this case no actual data value may contain the string "$$$$$\r\n", otherwise it will be treated as a line break.
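The risk mentioned above can be shown in a few lines of illustrative Python: splitting an export on the custom terminator works until a value itself contains the marker:

```python
LINE_SEP = "$$$$$\r\n"  # the custom line terminator from the example above

# A clean export splits back into the original records
exported = "row1col||x" + LINE_SEP + "row2col||y" + LINE_SEP
rows = exported.split(LINE_SEP)[:-1]  # drop the trailing empty piece
print(rows)  # -> ['row1col||x', 'row2col||y']

# If a value itself contains the marker, the record is split in two
bad = "val$$$$$\r\nue||x" + LINE_SEP
print(bad.split(LINE_SEP)[:-1])  # -> ['val', 'ue||x']
```

This is why a long, unlikely marker such as "$$$$$\r\n" is chosen: the more distinctive it is, the smaller the chance a real value collides with it.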