Today I came across a MySQL time field that was defined as an int:
`senddata` int (10) unsigned NOT NULL DEFAULT '0';
I then found an article online suggesting that when a time field participates in comparisons, int works better: first, the field can be compared directly as a number, with no conversion needed on retrieval; second, it is more efficient, as the tests below show.
In short: using int as the data type is more efficient.
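To illustrate the "no conversion" point, here is a sketch of the two query shapes (table and column names follow the test tables created below; the dates are illustrative):

```sql
-- int column: the WHERE clause compares plain integers, and an index on
-- start_time can be used directly with no type conversion on retrieval.
SELECT * FROM date_test
WHERE start_time > UNIX_TIMESTAMP('2009-01-01 00:00:00');

-- DATETIME column: the comparison is done on the temporal type instead.
SELECT * FROM date_test3
WHERE start_time > '2009-01-01 00:00:00';
```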
Environment:
Windows XP
PHP Version 5.2.9
MySQL Server 5.1
The first step: create table date_test (variable-length rows, int time)
CREATE TABLE `test`.`date_test` (
`id` INT NOT NULL AUTO_INCREMENT,
`start_time` INT NOT NULL,
`some_content` VARCHAR(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE = InnoDB;
The second step: create table date_test2 (fixed-length rows, int time)
CREATE TABLE `test`.`date_test2` (
`id` INT NOT NULL AUTO_INCREMENT,
`start_time` INT NOT NULL,
`some_content` CHAR(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE = InnoDB;
The third step: create table date_test3 (varchar, datetime time)
CREATE TABLE `test`.`date_test3` (
`id` INT NOT NULL AUTO_INCREMENT,
`start_time` DATETIME NOT NULL,
`some_content` VARCHAR(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE = InnoDB;
The fourth step: create table date_test4 (char, datetime time)
CREATE TABLE `test`.`date_test4` (
`id` INT NOT NULL AUTO_INCREMENT,
`start_time` DATETIME NOT NULL,
`some_content` CHAR(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE = InnoDB;
OK, now we start the test. The environment is PHP. First, insert one million rows into each table, in 200 batches of 5,000 rows per batch.
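Each batch can be issued as a single multi-row INSERT, which is far faster than 5,000 individual statements. A sketch (the timestamps and content values are illustrative, not the actual test data):

```sql
-- One batch of the load; repeat 200 times for 1,000,000 rows per table.
INSERT INTO `test`.`date_test` (`start_time`, `some_content`) VALUES
  (1243826400, 'row 1'),
  (1243826401, 'row 2'),
  (1243826402, 'row 3');
  -- ... up to 5,000 value tuples per statement
```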
Table 1 insert log: page runtime 26.5997889042 seconds. An interesting side note: SELECT COUNT(id) FROM `date_test` WHERE 1 returns exactly 1,000,000, but the row count MySQL reports for a direct SELECT * FROM `date_test` is 1,000,374. I later learned this is only an approximate value (see MySQL FAQ 3.11).
Table 2 insert log: page runtime 62.3908278942 seconds; the approximate row count here is 1,000,066.
Table 3 insert log: page runtime 30.2576560974 seconds; the approximate row count is 1,000,224.
Table 4 insert log: page runtime 67.5393900871 seconds; the approximate row count is 1,000,073.
Now add an index on the start_time field of each of the four tables.
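The indexes can be added like this (the index name `idx_start_time` is my choice, not from the original test):

```sql
ALTER TABLE `test`.`date_test`  ADD INDEX `idx_start_time` (`start_time`);
ALTER TABLE `test`.`date_test2` ADD INDEX `idx_start_time` (`start_time`);
ALTER TABLE `test`.`date_test3` ADD INDEX `idx_start_time` (`start_time`);
ALTER TABLE `test`.`date_test4` ADD INDEX `idx_start_time` (`start_time`);
```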
Update test: update 100 records in each of the four tables and record the time:
Table 1: page runtime 2.62180089951 seconds (variable-length, int time)
Table 2: page runtime 2.5475358963 seconds (fixed-length, int time)
Table 3: page runtime 2.45077300072 seconds (varchar, datetime time)
Table 4: page runtime 2.82798409462 seconds (char, datetime time)
Read test 1: select 100 random records from each table, using the primary key id as the query condition, and record the time:
Table 1: page runtime 0.382651090622 seconds (variable-length, int time)
Table 2: page runtime 0.542181015015 seconds (fixed-length, int time)
Table 3: page runtime 0.334048032761 seconds (varchar, datetime time)
Table 4: page runtime 0.506206989288 seconds (char, datetime time)
Read test 2: select 10 random records from each table, using start_time as the query condition, and record the time:
Table 1: page runtime 30.1972880363 seconds (variable-length, int time)
Table 2: page runtime 65.1926910877 seconds (fixed-length, int time)
Table 3: page runtime 39.7210869789 seconds (varchar, datetime time)
Table 4: page runtime 70.4632740021 seconds (char, datetime time)
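For a range query like this, it is worth confirming that the new start_time index is actually used. A sketch (the timestamp literal is illustrative):

```sql
-- EXPLAIN shows the chosen index in the `key` column; a range scan on
-- idx_start_time means the int comparison is served from the index.
EXPLAIN SELECT * FROM `test`.`date_test`
WHERE `start_time` > 1243826400
LIMIT 10;
```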
Because the data volume here is relatively modest, we take even small differences in these numbers as meaningful.
Conclusion:
With a large amount of data, if there are many queries of the form SELECT * FROM table WHERE time > XX, replacing datetime with int in MySQL is worthwhile.
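When switching a datetime column to int, MySQL's built-in conversion functions cover both directions, so application code can keep working with readable dates. A sketch (table and values are illustrative):

```sql
-- Store: convert a human-readable datetime to a Unix timestamp int.
INSERT INTO `test`.`date_test` (`start_time`, `some_content`)
VALUES (UNIX_TIMESTAMP('2009-06-01 12:00:00'), 'example');

-- Read: convert the stored int back to a datetime for display, while the
-- WHERE clause still compares plain integers and can use the index.
SELECT FROM_UNIXTIME(`start_time`) AS start_time, `some_content`
FROM `test`.`date_test`
WHERE `start_time` > UNIX_TIMESTAMP('2009-06-01 00:00:00');
```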