For the past few days I have been busy with this optimization work and have had no time for anything else. After the rush, we finally won the project. Rather than handling the problem in application code, I decided to let the MySQL database handle it, using nothing but plain MySQL.
Problem 1: optimize business execution speed on MySQL Community 5.5+ to improve overall system efficiency.
Test server hardware: Intel Core i5, 8 GB RAM, 7200 rpm hard drive, Windows 7 Professional.
| Table name | Read frequency | Write frequency | Minimum continuous read speed | Minimum continuous write speed | Estimated data volume |
| --- | --- | --- | --- | --- | --- |
| C .... | Frequent | Not frequent | 30 rows / 150 ms | 100 rows/s | Millions of rows |
| D... | Frequent | Not frequent | 30 rows / 150 ms | 100 rows/s | Millions of rows |
| De ..... | Frequent | Not frequent | 30 rows / 150 ms | 10 rows/s | 10 million rows |
| St ...... | Frequent | Not frequent | 30 rows / 150 ms | 100 rows/s | Millions of rows |
| S ...... | Frequent | Frequent | 30 rows / 150 ms | 100 rows/s | Millions of rows |
| PR... | Very frequent | Not frequent | 1000-5000 rows/s | 100 rows/s | Millions of rows per table |
| De ....... | Very frequent | Frequent | 1000-5000 rows/s | 1000 rows/s | Millions of rows per table |
| De ...... | Very frequent | Frequent | 1000-5000 rows/s | 1000 rows/s | Millions of rows per table |
What comes to mind here is table splitting, which is nothing more than vertical and horizontal partitioning of a table. I ran into a similar problem in an earlier project: a table needed more than 800 columns, which was more than a single MySQL table could hold. In an initial test I stored the extra fields as JSON strings in the database, but that added parsing loops during processing and, worse, made exporting to an Excel file painfully slow. In the end, nine sub-tables solved the problem. I had also wanted to try object serialization, saving serialized objects into the database and deserializing them on read, but since the sub-tables already met the requirements, that approach was never tested. The current requirement, however, does not call for vertical splitting; does it need horizontal splitting? Test first: every idea has to be validated by testing.
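As a concrete illustration of the horizontal split mentioned above, here is a minimal routing sketch. The table name `detail`, the `id`/`payload` columns, and the modulo routing rule are illustrative assumptions, not the project's real schema; only the count of nine sub-tables comes from the text.

```python
# Minimal sketch of horizontal sharding: route each record to one of
# nine sub-tables by the modulo of its primary key.
SHARD_COUNT = 9  # the article settled on nine sub-tables


def shard_table(base_name: str, record_id: int) -> str:
    """Pick the sub-table for a record by id modulo the shard count."""
    return f"{base_name}_{record_id % SHARD_COUNT}"


def insert_sql(base_name: str, record_id: int, payload: str) -> str:
    """Build an INSERT statement aimed at the correct sub-table."""
    table = shard_table(base_name, record_id)
    return f"INSERT INTO {table} (id, payload) VALUES ({record_id}, '{payload}')"


print(shard_table("detail", 12345))  # detail_6
```

The catch, as the text notes, is that every paging query, delete, and insert must then be shard-aware, which is why replication was preferred here.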
Sometimes LIMIT needs to be used carefully in MySQL. As long as queries go through the primary key index, there should be no problem querying millions of rows, even without further SQL tuning.
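One concrete way LIMIT bites at this scale is deep paging: `LIMIT offset, size` forces MySQL to scan and discard `offset` rows first. A common workaround, sketched below with a hypothetical table `big_table`, is to "seek" from the last primary key seen instead:

```python
# Sketch: deep-offset pagination vs. seek-based pagination.
# Table and column names are illustrative, not the project's schema.


def page_by_offset(page: int, size: int = 30) -> str:
    # Slow at high page numbers: MySQL scans and discards page*size rows.
    return f"SELECT id, name FROM big_table ORDER BY id LIMIT {page * size}, {size}"


def page_by_seek(last_id: int, size: int = 30) -> str:
    # Fast at any depth: the primary-key index jumps straight past last_id.
    return (f"SELECT id, name FROM big_table "
            f"WHERE id > {last_id} ORDER BY id LIMIT {size}")


print(page_by_seek(1000000))
```

The seek form stays fast on a million-row table because it never touches the skipped rows, at the cost of only supporting "next page" style navigation.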
Insertion is still a problem, though. Is table sharding needed? At the scale of only a few million rows, sharding can still be avoided, and with sharding every paging query, delete, and insert becomes more complicated. So I prefer MySQL data replication instead: use two MySQL servers, with IP addresses IP1 and ip2, take IP1 as the master and ip2 as the slave, and adopt one-way replication, where the master holds the authoritative data and the slave actively pulls changes from the master. The optimization work can start once tomorrow's test passes.
Inserting data takes 2.822 seconds once the table exceeds a million rows.
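A minimal configuration sketch of the one-way replication described above, assuming MySQL 5.5; the `server-id` values and the database name `mydb` are hypothetical placeholders:

```ini
# --- master (IP1) my.cnf ---
[mysqld]
server-id    = 1
log-bin      = mysql-bin       # binary log the slave reads from
binlog-do-db = mydb            # hypothetical database name

# --- slave (ip2) my.cnf ---
[mysqld]
server-id    = 2
relay-log    = mysql-relay-bin
read-only    = 1               # writes go to the master only
```

After restarting both servers, replication is pointed at the master on the slave with a `CHANGE MASTER TO MASTER_HOST='IP1', ...` statement (supplying the replication user credentials and binlog coordinates) followed by `START SLAVE`.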
Problem 2: the requirements for uploading large files are as follows.
| File size | Action | Maximum memory usage | Minimum intranet transfer rate (802.11g) |
| --- | --- | --- | --- |
| 50 MB | Upload | 5 MB | 1 MB/s |
| 500 MB | Upload | 5 MB | 1 MB/s |
| 5 GB | Upload | 5 MB | 1 MB/s |
| 10 GB | Upload | 5 MB | 1 MB/s |
| 50 MB | Download | 5 MB | 1 MB/s |
| 500 MB | Download | 5 MB | 1 MB/s |
| 5 GB | Download | 5 MB | 1 MB/s |
| 10 GB | Download | 5 MB | 1 MB/s |
I once built a video website where the maximum upload size was 4 GB; in this case the maximum is 10 GB.
To increase the transfer rate for large files, the target file is divided into N pieces, and N threads transmit them over TCP/IP. Before transferring a file, the client checks whether a temporary file group already exists for it on the server; if so, it finds each file's breakpoint position, resumes transfer from there, and merges the files afterwards. This asynchronous split transfer uses the then-popular Flash + Ajax stack; both ActionScript and HTML5 support socket communication.
Implementation process:
1. The client sends a file-start signal (the file name) to the server.
2. The server checks whether temporary files for that file exist (for example, when transferring file.txt, the server checks for its temporary files, where i denotes the i-th temporary file). If they exist, it sends the breakpoint position of every temporary file to the client.
3. The client obtains the file size and divides the file into N blocks (500 MB each by default, configurable by parameter) to speed up transfer. Based on the breakpoint positions returned by the server (if any), each thread seeks to its breakpoint and resumes the transfer. If no temporary files exist on the server, the file is transferred from the beginning.
4. When all threads have finished, the server merges all the temporary files.
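The steps above can be sketched as follows, simulated on the local filesystem rather than over sockets. The chunk size is shrunk to a few bytes for the demo (the text uses 500 MB per block), and all file and directory names are hypothetical:

```python
# Sketch of the split / report-breakpoints / merge scheme, local-only.
import os

CHUNK_SIZE = 8  # bytes here; the text's production default is 500 MB


def split(path: str, out_dir: str) -> list:
    """Step 3: split the source file into numbered chunk files."""
    os.makedirs(out_dir, exist_ok=True)
    chunks = []
    with open(path, "rb") as f:
        i = 0
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            chunk_path = os.path.join(out_dir, f"part{i}")
            with open(chunk_path, "wb") as c:
                c.write(data)
            chunks.append(chunk_path)
            i += 1
    return chunks


def breakpoints(out_dir: str) -> dict:
    """Step 2: report how many bytes of each chunk have already arrived."""
    return {name: os.path.getsize(os.path.join(out_dir, name))
            for name in os.listdir(out_dir)}


def merge(out_dir: str, dest: str) -> None:
    """Step 4: concatenate the chunks back into one file, in numeric order."""
    names = sorted(os.listdir(out_dir), key=lambda n: int(n[4:]))
    with open(dest, "wb") as d:
        for name in names:
            with open(os.path.join(out_dir, name), "rb") as c:
                d.write(c.read())


# Demo: a 20-byte "file" splits into three chunks and merges back intact.
with open("demo.bin", "wb") as f:
    f.write(b"0123456789abcdefghij")
split("demo.bin", "demo_chunks")
merge("demo_chunks", "demo_merged.bin")
assert open("demo_merged.bin", "rb").read() == b"0123456789abcdefghij"
```

In the real protocol each chunk would be a socket stream, and a thread resumes by seeking the source file to the breakpoint offset `breakpoints()` reports before sending the remainder.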
The large-file upload problem has since been implemented with Silverlight; in testing it reached about 5-6 MB per second.