Efficiency Comparison Between Reading and Writing Files and Reading and Writing Databases
Note 1: Because the database read goes through a simple wrapper function called twice, the file read is likewise changed to two nested calls. The database record has ID 1, sits in the first position in the table, and has a unique index.
Note 2: Two cases are tested: a 4 KB text field and an integer field.
```php
set_time_limit(0);

function fnGet($filename)
{
    $content = file_get_contents($filename);
    return $content;
}

function fnGetContent($filename)
{
    $content = fnGet($filename);
    return $content;
}

$times = 100000;

echo 'Database query results:<br>';

// ---------------------------------
$begin = fnGetMicroTime();
for ($i = 0; $i < $times; $i++)
{
    $res = $dbcon->mydb_query("SELECT log_Content FROM blog WHERE log_ID = '1'");
    $row = $dbcon->mydb_fetch_row($res);
    $content = $row[0];
}
echo 'fetch_row ' . $times . ' times: ' . (fnGetMicroTime() - $begin) . ' seconds<br>';

// ---------------------------------
$begin = fnGetMicroTime();
for ($i = 0; $i < $times; $i++)
{
    $res = $dbcon->mydb_query("SELECT log_Content FROM blog WHERE log_ID = '1'");
    $row = $dbcon->mydb_fetch_array($res);
    $content = $row['log_Content'];
}
echo 'fetch_array ' . $times . ' times: ' . (fnGetMicroTime() - $begin) . ' seconds<br>';

// ---------------------------------
$begin = fnGetMicroTime();
for ($i = 0; $i < $times; $i++)
{
    $res = $dbcon->mydb_query("SELECT log_Content FROM blog WHERE log_ID = '1'");
    $row = $dbcon->mydb_fetch_object($res);
    $content = $row->log_Content;
}
echo 'fetch_object ' . $times . ' times: ' . (fnGetMicroTime() - $begin) . ' seconds<br>';

// ---------------------------------
$dbcon->mydb_free_results();
$dbcon->mydb_disconnect();

fnWriteCache('test.txt', $content);

echo 'Direct file read test results:<br>';

// ---------------------------------
$begin = fnGetMicroTime();
for ($i = 0; $i < $times; $i++)
{
    $content = fnGetContent('test.txt');
}
echo 'file_get_contents direct read ' . $times . ' times: ' . (fnGetMicroTime() - $begin) . ' seconds<br>';

// ---------------------------------
$begin = fnGetMicroTime();
for ($i = 0; $i < $times; $i++)
{
    $fname = 'test.txt';
    if (file_exists($fname))
    {
        $fp = fopen($fname, "r");                   // flock($fp, LOCK_EX);
        $file_data = fread($fp, filesize($fname));  // rewind($fp);
        fclose($fp);
    }
    $content = fnGetContent('test.txt');
}
echo 'fopen direct read ' . $times . ' times: ' . (fnGetMicroTime() - $begin) . ' seconds<br>';
```
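Two helpers in the listing, fnGetMicroTime() and fnWriteCache(), are never defined in the original. A minimal sketch of plausible implementations, assuming the obvious semantics (the names come from the listing; the bodies are my assumption, not the author's code):

```php
<?php
// Hypothetical implementation of fnGetMicroTime(): current time as a
// float number of seconds, with microsecond precision.
function fnGetMicroTime()
{
    return microtime(true);
}

// Hypothetical implementation of fnWriteCache(): dump content into a
// cache file, with an exclusive lock so concurrent writers do not
// interleave partial writes.
function fnWriteCache($filename, $content)
{
    return file_put_contents($filename, $content, LOCK_EX);
}
```

With these two definitions (plus a working $dbcon wrapper object), the listing above runs as-is.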
Query results for 4 KB data:
fetch_row 100000 times: 16.737720012665 seconds
fetch_array 100000 times: 16.661195993423 seconds
fetch_object 100000 times: 16.775065898895 seconds
Direct file read test results:
file_get_contents direct read 100000 times: 5.4631857872009 seconds
fopen direct read 100000 times: 11.463611125946 seconds

Query results for integer ID:
fetch_row 100000 times: 12.812072038651 seconds
fetch_array 100000 times: 12.667390108109 seconds
fetch_object 100000 times: 12.988099098206 seconds
Direct file read test results:
file_get_contents direct read 100000 times: 5.6616430282593 seconds
fopen direct read 100000 times: 11.542816877365 seconds
Test conclusions:
1. Reading a file directly is more efficient than querying the database, and that is before counting the time spent connecting and disconnecting.
2. The larger the content read per call, the more obvious the advantage of reading the file directly (file read time grows only slightly, which is related to how contiguously the file is stored and to the cluster size). This is the opposite of what Tian Yuan expected, and suggests MySQL does extra work when reading larger fields: between the integer and 4 KB tests the query time rose by nearly 30%, while the plain file read barely changed.
3. Writing files versus INSERT can be inferred without testing; the database will only fare worse.
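Rather than inferring, the write side could be benchmarked the same way. A sketch of the file half, under the same 4 KB payload (the iteration count is reduced here to keep the run short; the commented database line assumes the same mydb_query wrapper and blog table as above):

```php
<?php
// Write-side counterpart to the read benchmark: time repeated
// writes of a 4 KB payload to the same file.
$times   = 10000;                 // reduced from 100000 for a quick run
$content = str_repeat('x', 4096); // 4 KB payload, matching the read test

$begin = microtime(true);
for ($i = 0; $i < $times; $i++) {
    // LOCK_EX guards against interleaved writes from concurrent requests.
    file_put_contents('test.txt', $content, LOCK_EX);
}
echo 'file_put_contents ' . $times . ' times: '
   . (microtime(true) - $begin) . " seconds\n";

// The database side would repeat an UPDATE through the same wrapper, e.g.:
// $dbcon->mydb_query("UPDATE blog SET log_Content = '...' WHERE log_ID = '1'");
```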
4. For a small configuration file that does not need database features, storing it as a standalone file is more suitable than creating a dedicated table or record. Large binary content such as images or music files is likewise better kept on disk, with only index information, such as the path or a thumbnail, stored in the database.
5. If you only need to read a file in PHP, file_get_contents is more efficient than the fopen/fread/fclose sequence, and that is not counting the file_exists check, which itself costs about 3 seconds over the 100000 iterations.
6. fetch_row and fetch_object are presumably derived from fetch_array (I have not read the PHP source code); judging purely by execution time, fetch_array is the most efficient of the three, which is the opposite of what is usually claimed online.
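The three mydb_fetch_* wrappers presumably map onto the standard fetch styles, which differ only in the shape of the returned row. The same three styles exist in PDO, shown here against an in-memory SQLite table so the example runs without a MySQL server (the table and column names mirror the benchmark; the mapping to the mysqli equivalents is noted in comments):

```php
<?php
// Demonstrate the three row shapes (numeric, numeric+associative, object)
// using PDO with an in-memory SQLite table; no MySQL server needed.
$pdo = new PDO('sqlite::memory:');
$pdo->exec("CREATE TABLE blog (log_ID INTEGER PRIMARY KEY, log_Content TEXT)");
$pdo->exec("INSERT INTO blog (log_ID, log_Content) VALUES (1, 'hello')");

$sql = "SELECT log_Content FROM blog WHERE log_ID = 1";

$row = $pdo->query($sql)->fetch(PDO::FETCH_NUM);   // like mysqli fetch_row
echo $row[0], "\n";                                // hello

$row = $pdo->query($sql)->fetch(PDO::FETCH_BOTH);  // like mysqli fetch_array
echo $row['log_Content'], "\n";                    // hello

$obj = $pdo->query($sql)->fetch(PDO::FETCH_OBJ);   // like mysqli fetch_object
echo $obj->log_Content, "\n";                      // hello
```

Whether fetch_row and fetch_object are internally built on fetch_array is an implementation detail; the shapes above are all a caller can observe.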
In fact, before running this test, personal experience would probably have predicted the result; having finished it, though, things feel much clearer. Assuming comparable program efficiency in the critical path and no caching layer, no kind of data can be read or written without touching the disk: whatever the MySQL process does internally, it must ultimately read its "files" on disk (that is, the record storage area). The premise of all this, of course, is read-only content, with no sorting or searching operations involved.