When a user registers with the system, the system randomly generates a short URL such as xx.xx.xx/abcde1 for editing content (where abcde1 is the short-URL identifier, called the coding).
Now I want to count the access records for every coding.
There is an access_log table. Whenever a coding short URL is accessed, a uvmark (visitor identity) is first computed by an algorithm from the UA string, the IP address, and the effective time window. If the table already contains the same uvmark, the same person has visited more than once, so instead of inserting a new record, the number field of the existing row is incremented by 1. If the uvmark is not present, a new record is inserted (the record includes the coding, the device, the OS, the browser, the access time, etc.).
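A minimal sketch of that insert-or-increment logic, using SQLite for illustration. The schema and the record_visit helper are assumptions: the question only names the coding, uvmark, and number columns, so the other column names here are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE access_log (
        coding      TEXT NOT NULL,            -- short-URL identifier
        uvmark      TEXT NOT NULL,            -- visitor identity from UA + IP + time window
        device      TEXT,
        browser     TEXT,
        access_time TEXT,
        number      INTEGER NOT NULL DEFAULT 1  -- visit count for this visitor
    )
""")

def record_visit(coding, uvmark, device, browser, access_time):
    # Same visitor again: bump the per-visitor counter instead of inserting.
    cur = conn.execute(
        "UPDATE access_log SET number = number + 1 WHERE coding = ? AND uvmark = ?",
        (coding, uvmark))
    if cur.rowcount == 0:
        # First time this uvmark is seen for this coding: insert a fresh row.
        conn.execute(
            "INSERT INTO access_log (coding, uvmark, device, browser, access_time) "
            "VALUES (?, ?, ?, ?, ?)",
            (coding, uvmark, device, browser, access_time))

record_visit("abcde1", "uv-001", "mobile", "Chrome", "2015-08-01 09:10")
record_visit("abcde1", "uv-001", "mobile", "Chrome", "2015-08-01 09:40")  # repeat visitor
record_visit("abcde1", "uv-002", "desktop", "Firefox", "2015-08-01 10:05")
```

The UPDATE-then-INSERT pair keeps the sketch portable; on a database with upsert support the two statements could be one INSERT … ON CONFLICT.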
But as visits accumulate, the table already holds a lot of data: nearly 90 million rows, growing by about 2 million per day. Counting visits means a large scan, e.g. a time-range query such as: SELECT number FROM access_log WHERE coding = xxxx AND access_time BETWEEN time_start AND time_end
UV is the number of rows returned, and PV is the sum of their number values (further broken down by region, environment, etc.), which is inefficient. If a short URL averages 20,000 visits per day, counting its traffic for the last month takes more than 50 seconds.
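The UV/PV computation described above can be sketched as follows (SQLite for illustration; access_time as a column name is an assumption). Without a composite index on (coding, access_time), a query like this scans every matching row in the 90-million-row table, which is where the 50-second cost comes from.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE access_log (coding TEXT, access_time TEXT, number INTEGER)")
conn.executemany("INSERT INTO access_log VALUES (?, ?, ?)", [
    ("abcde1", "2015-07-03 09:10", 3),
    ("abcde1", "2015-07-15 11:20", 1),
    ("abcde1", "2015-08-02 08:00", 2),  # outside the July window
])

# UV = number of matching rows (distinct visitors),
# PV = sum of the per-visitor counters (total page views).
uv, pv = conn.execute("""
    SELECT COUNT(*), COALESCE(SUM(number), 0)
    FROM access_log
    WHERE coding = ? AND access_time BETWEEN ? AND ?
""", ("abcde1", "2015-07-01 00:00", "2015-07-31 23:59")).fetchone()
```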
Take a random coding's access statistics as an example (screenshot not included).
Is there a problem with my design? Where could it be optimized? How do services like Baidu Analytics design their databases? They feel very fast.
Replies (solutions)
Aggregate the data daily; for past days, read the precomputed summary directly instead of recomputing from the raw data.
But one module shows real-time statistics: within a day, data is available every half hour. With daily summaries, how would I get the 24 hourly stats for a day 30 days ago?
Do the stats from 30 days ago need to be real-time?
Obviously not!
Apart from today's data, which is still changing, the data for any past day will never change again (it is over).
So all you have to do is store the statistical results according to your reporting scheme.
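The advice above, "store the statistical results", amounts to a nightly rollup job that writes one summary row per coding per day. A sketch under assumed names (daily_summary, rollup_day, and the column names are all hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE access_log (coding TEXT, access_time TEXT, number INTEGER);
    -- One row per coding per day; past days never change, so each is written once.
    CREATE TABLE daily_summary (
        coding TEXT, day TEXT, uv INTEGER, pv INTEGER,
        PRIMARY KEY (coding, day)
    );
    INSERT INTO access_log VALUES
        ('abcde1', '2015-07-03 09:10', 3),
        ('abcde1', '2015-07-03 18:00', 1),
        ('abcde1', '2015-07-04 10:30', 2);
""")

def rollup_day(day):
    # Run once after midnight: collapse one day's raw rows into summary rows.
    conn.execute("""
        INSERT INTO daily_summary (coding, day, uv, pv)
        SELECT coding, ?, COUNT(*), SUM(number)
        FROM access_log
        WHERE access_time BETWEEN ? || ' 00:00' AND ? || ' 23:59'
        GROUP BY coding
    """, (day, day, day))

rollup_day("2015-07-03")
rollup_day("2015-07-04")
# A month of traffic is now a sum over at most 31 summary rows
# instead of a scan over millions of raw rows.
```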
Then I need to know, say, the traffic from 9:00 to 12:00 on August 1.
That's fine: aggregate by the hour, which is only 24 records per day.
You could even aggregate by the minute, or by the second, and it would still be much faster than going back to the raw data.
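Answering the "9:00 to 12:00 on August 1" question from hourly rollups then becomes a sum over three summary rows. A sketch with a hypothetical hourly_summary table (names assumed, data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One row per coding per hour: at most 24 rows per coding per day.
    CREATE TABLE hourly_summary (
        coding TEXT, day TEXT, hour INTEGER, uv INTEGER, pv INTEGER,
        PRIMARY KEY (coding, day, hour)
    );
    INSERT INTO hourly_summary VALUES
        ('abcde1', '2015-08-01',  9, 10, 25),
        ('abcde1', '2015-08-01', 10,  8, 12),
        ('abcde1', '2015-08-01', 11,  5,  9),
        ('abcde1', '2015-08-01', 14,  7, 11);
""")

# Traffic from 9:00 to 12:00 = the hourly rows for hours 9, 10, and 11.
uv, pv = conn.execute("""
    SELECT SUM(uv), SUM(pv) FROM hourly_summary
    WHERE coding = ? AND day = ? AND hour >= 9 AND hour < 12
""", ("abcde1", "2015-08-01")).fetchone()
```

Note one caveat: summing the uv column gives the sum of per-hour uniques, which can double-count a visitor who appears in more than one hour.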
Much obliged. One more question: what about the other daily-summary dimensions, such as regional statistics? Now there are both a time condition and a region condition.
One row is saved per city.
How do I summarize this data?
How do you calculate this data now?
Wouldn't simply adding the counts (10 + 10) do?
The first case is a single condition (time): I can save one row of PV/UV per short URL per hour.
The second is multi-conditional (time plus region): if I summarize by the hour, I need to save N rows per short URL per hour, one per region. Is that right?
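That per-hour-per-region scheme can be sketched as one summary table keyed on (coding, day, hour, region); either dimension is then answered by summing over the other. Table and column names here are assumptions, and the figures are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- N rows per coding per hour, one per region that had traffic.
    CREATE TABLE hourly_region_summary (
        coding TEXT, day TEXT, hour INTEGER, region TEXT, uv INTEGER, pv INTEGER,
        PRIMARY KEY (coding, day, hour, region)
    );
    INSERT INTO hourly_region_summary VALUES
        ('abcde1', '2015-08-01',  9, 'Suzhou',   4, 10),
        ('abcde1', '2015-08-01',  9, 'Shanghai', 6, 15),
        ('abcde1', '2015-08-01', 10, 'Suzhou',   3,  5);
""")

# Time-only question: collapse the region dimension for that hour.
pv_9 = conn.execute("""
    SELECT SUM(pv) FROM hourly_region_summary
    WHERE coding = 'abcde1' AND day = '2015-08-01' AND hour = 9
""").fetchone()[0]

# Region question: filter on region and sum over the hours of the day.
pv_suzhou = conn.execute("""
    SELECT SUM(pv) FROM hourly_region_summary
    WHERE coding = 'abcde1' AND day = '2015-08-01' AND region = 'Suzhou'
""").fetchone()[0]
```

The same caveat as with hourly UV applies: summing uv across regions counts per-region uniques, not globally unique visitors.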
Take Suzhou in #6 as an example: the traffic figure 519 is the number of visits to date, so tomorrow's figure will be 519 + N.
Then that's not a problem, is it?
Will today's 519 change tomorrow? Obviously it won't.