When developing a system that processes large volumes of highly random data (such as network traffic), we always face the problem of verifying that the data is correct.
This is especially true when the system is not backed by a database but still involves many conditional queries: we often need the raw data for statistics and re-querying.
Comparing values inside the system by hand makes it difficult to confirm correctness, and the flexibility of the query conditions makes manual calculation miserable.
A better approach is to import the data into a database system and use SQL statements to verify its correctness.
The following code reads several data files into SQLite, creating the corresponding database file. On the test system the user's permissions were
limited, so the Perl-related libraries had to be placed under the user's own directory.
For example, my directory layout is as follows:
dongq@dongq_lap ~/workspace/test/perl $ tree imp_exp/
imp_exp
|-- DBD                --- DBD::SQLite Perl library code, compiled and copied here
|   |-- SQLite.pm
|   '-- getsqlite.pl
|-- auto               --- compiled SQLite shared-library code, copied here
|   '-- DBD
|       '-- SQLite
|           |-- SQLite.bs
|           '-- SQLite.so
|-- createdb.pl        --- script code
|-- dbitrace.log
|-- do.clear
|-- dump.db            --- generated database file
'-- test.dat           --- imported data

4 directories, 9 files
dongq@dongq_lap ~/workspace/test/perl $
The Perl code is as follows:
#!/usr/bin/env perl
#*************************************
# createdb.pl dump1.txt dump2.txt ...
#*************************************
use strict;
use DBI;
use lib qw(./);   # location of the locally copied DBI/DBD libraries

die "Usage: ./createdb.pl dump1.txt dump2.txt ...\n" unless $#ARGV >= 0;

# Create a database connection
my $dbh = DBI->connect("dbi:SQLite:dbname=./dump.db")
    or die $DBI::errstr;

# Create the data table
$dbh->do("CREATE TABLE arp_record (
    ar_hrd INTEGER, ar_pro INTEGER, ar_hln INTEGER, ar_pln INTEGER,
    ar_op INTEGER, ar_sha VARCHAR(17), ar_spa INTEGER,
    ar_tha VARCHAR(17), ar_tpa INTEGER)");

# Create indexes on the columns that are queried most often
$dbh->do("CREATE INDEX idx_ar_sha ON arp_record (ar_sha)");
$dbh->do("CREATE INDEX idx_ar_spa ON arp_record (ar_spa)");
$dbh->do("CREATE INDEX idx_ar_tha ON arp_record (ar_tha)");
$dbh->do("CREATE INDEX idx_ar_tpa ON arp_record (ar_tpa)");

# Prepare the insert statement once; bind values row by row
my $insert = $dbh->prepare(
    q{INSERT INTO arp_record VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)});

# Detailed debugging information
DBI->trace(1, 'dbitrace.log');

# Batch all inserts in a single transaction for speed
$dbh->{AutoCommit} = 0;

foreach my $input_name (@ARGV) {
    open(my $csv, '<', $input_name) or die "can't open file $input_name: $!";
    while (<$csv>) {
        chomp;
        my ($ar_hrd, $ar_pro, $ar_hln, $ar_pln, $ar_op,
            $ar_sha, $ar_spa, $ar_tha, $ar_tpa) = split /,/;
        # Placeholders quote values themselves; no $dbh->quote() is needed
        $insert->execute($ar_hrd, $ar_pro, $ar_hln, $ar_pln, $ar_op,
                         $ar_sha, $ar_spa, $ar_tha, $ar_tpa)
            or die $dbh->errstr;
    }
    close($csv);
}

$dbh->commit or die $dbh->errstr;
$dbh->disconnect;
exit;
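Once dump.db exists, verification is just ordinary SQL. A few example queries as a sketch (the filter values are hypothetical, assuming the usual ARP opcode convention where 2 means a reply; adjust the conditions to your own data):

```sql
-- Total number of imported records (should match the line count of the input files)
SELECT COUNT(*) FROM arp_record;

-- How many ARP replies each sender hardware address produced
SELECT ar_sha, COUNT(*) FROM arp_record WHERE ar_op = 2 GROUP BY ar_sha;

-- Sender addresses claiming more than one protocol address (possible spoofing)
SELECT ar_sha, COUNT(DISTINCT ar_spa) AS n
FROM arp_record GROUP BY ar_sha HAVING n > 1;
```

These can be run directly in the sqlite3 command-line shell (sqlite3 dump.db) or from the same Perl script via $dbh->selectall_arrayref(...).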
Haha, examples like the one above can be found all over the Internet :)
Reference: "Programming the Perl DBI"