Before the company opened several of its systems to customers, the data platform team built a unified Kafka message channel, and the operations side wants to understand how customers use each system.
To leave headroom for the business's growing performance requirements, Storm is used to process the tracking (buried-point) events that each application uploads to Kafka and to aggregate the results into MySQL.
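As a minimal sketch of how such a topology might be wired, assuming Storm 1.x with the storm-kafka connector (the `ParseBolt` and `MysqlWriterBolt` classes are hypothetical and sketched further below, and the ZooKeeper address and topic name are placeholders):

```java
import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

public class EventStatTopology {
    public static void main(String[] args) throws Exception {
        // Consume the unified event topic; connection details are placeholders.
        SpoutConfig spoutConfig = new SpoutConfig(
                new ZkHosts("zk1:2181,zk2:2181"), "app-events", "/app-events", "event-stat");
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout(spoutConfig), 2);
        // ParseBolt (sketched below) turns the raw JSON into one tuple per event tag.
        builder.setBolt("parse", new ParseBolt(), 4).shuffleGrouping("kafka-spout");
        // Group by the aggregation key so all tuples for one counter reach the same task.
        builder.setBolt("mysql", new MysqlWriterBolt(), 4)
               .fieldsGrouping("parse", new Fields("account", "eventType", "eventTag"));

        Config conf = new Config();
        conf.setNumWorkers(2);
        StormSubmitter.submitTopology("event-stat", conf, builder.createTopology());
    }
}
```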
Tracking events are reported in JSON; a typical report looks like this:
{ "account": "001", "AccountName": "Wang Treasure", "Subaccount": "001", "Subaccountname": "caller001", "timestamp": 1474625187000, "EventType": "Phone" , " Eventtags ": [ { " name ":" Incoming ", " value ": 1 }, { "name": "Missed", "value": 1 }, { "name": "edited", "value": 1 } ]}
Storm eventually aggregates the events into MySQL in the following format:
| Account | Account_name | Subaccount | Subaccount_name | Event_type | Event_tag | Start_time | End_time | Count |
|---------|---------------|------------|-----------------|------------|-----------|--------------------|--------------------|-------|
| 001 | Wang Treasure | | | Phone | Incoming | 2016/9/23 18:00:00 | 2016/9/23 18:59:59 | 53 |
| 001 | Wang Treasure | | | Phone | Missed | 2016/9/23 18:00:00 | 2016/9/23 18:59:59 | 53 |
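On the writing side, one simple approach (a sketch under assumptions, not the team's actual code) is to align each event to its hour bucket and let MySQL accumulate the count with an upsert, which keeps the bolt stateless across restarts. The connection settings and the `event_stat` table name are assumptions, and the sub-account columns are again omitted:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;
import java.util.Map;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Tuple;

public class MysqlWriterBolt extends BaseBasicBolt {
    // Let MySQL do the counting: an existing (key, hour) row is incremented.
    private static final String UPSERT =
        "INSERT INTO event_stat " +
        "(account, account_name, event_type, event_tag, start_time, end_time, `count`) " +
        "VALUES (?, ?, ?, ?, ?, ?, ?) " +
        "ON DUPLICATE KEY UPDATE `count` = `count` + VALUES(`count`)";

    private transient Connection conn;

    @Override
    public void prepare(Map stormConf, TopologyContext context) {
        try {
            // Connection settings are placeholders.
            conn = DriverManager.getConnection("jdbc:mysql://db:3306/stats", "user", "pass");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        long ts = input.getLongByField("timestamp");
        long hour = ts - ts % 3600000L;   // align to the hour bucket (epoch millis, i.e. UTC hours)
        try (PreparedStatement ps = conn.prepareStatement(UPSERT)) {
            ps.setString(1, input.getStringByField("account"));
            ps.setString(2, input.getStringByField("accountName"));
            ps.setString(3, input.getStringByField("eventType"));
            ps.setString(4, input.getStringByField("eventTag"));
            ps.setTimestamp(5, new Timestamp(hour));
            ps.setTimestamp(6, new Timestamp(hour + 3599999L));
            ps.setInt(7, input.getIntegerByField("value"));
            ps.executeUpdate();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) { }
}
```

A production version would batch writes or pre-aggregate in memory before touching MySQL; one upsert per tuple is only acceptable at modest event rates.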
A Web layer wraps SQL statements to serve the various business scenarios, such as how often an account's events occurred over time, how many events occurred across all accounts in a period, or which time periods saw a spike in a given event.
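As an illustration of the first scenario, fetching one account's hourly "Missed" counts over a day might look like the following; the `event_stat` table and its columns follow the layout above and are assumptions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

public class QueryExample {
    public static void main(String[] args) throws Exception {
        // Hourly "Missed" counts for account 001 on 2016/9/23.
        String sql = "SELECT start_time, `count` FROM event_stat "
                   + "WHERE account = ? AND event_type = ? AND event_tag = ? "
                   + "AND start_time >= ? AND start_time < ? "
                   + "ORDER BY start_time";
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://db:3306/stats", "user", "pass");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, "001");
            ps.setString(2, "Phone");
            ps.setString(3, "Missed");
            ps.setTimestamp(4, Timestamp.valueOf("2016-09-23 00:00:00"));
            ps.setTimestamp(5, Timestamp.valueOf("2016-09-24 00:00:00"));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getTimestamp("start_time") + " -> " + rs.getInt("count"));
                }
            }
        }
    }
}
```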
Before that, we had to decide where the final results would be stored; the candidates were Redis, HBase, MongoDB, and MySQL. After estimating that a year of data would come to fewer than 10 million rows, MySQL handles that order of magnitude with ease.
The result is a real-time statistics system built on Storm, Kafka, and MySQL.