PHP Lazy Loading and Iterator Usage
Recently I changed jobs, and my new company builds website-creation software. The template system of our site builder ran into a problem: it kept reporting that memory usage exceeded the maximum allowed. The simplest way to solve a memory problem is to raise memory_limit in php.ini, but as the number of templates grows, the memory required will eventually exceed any upper limit, so the boss handed the problem to me to see how I would solve it.

When I took on the problem, I first studied the original code and analyzed the main cause. The problem page itself was very simple: a template gallery showing 16 templates per page, with pagination. The backend processing was not so simple. The template data is not read from a database; instead, a serialized string is fetched from the server and unserialized into an array holding the information for all templates. That array is then cached (subsequent requests read the cache), the entries are filtered, and 16 templates are picked out for display.

The cause of the problem is then obvious: the number of templates is huge, and although only 16 are needed per page, every template is loaded into memory. Having found the problem, how to solve it? I asked my colleagues for their opinions, and they offered plenty: move the templates into a database, move pagination and the other operations onto our backend servers, and so on. But every one of those solutions required major changes to our backend, and changes that large introduce too many new problems. What to do?

1. Lazy loading. Later it struck me that the root of the problem was that all the data was unserialized and loaded at once; what we needed was to read from the file only the data actually required. With my Java experience, the first thing that came to mind was SAX-style XML processing, but I am not familiar with XML handling in PHP, so that option was out. Still, it pointed me in the right direction. Many PHP memory problems arise because developers preload all the data they might need into memory (an array) for convenience and only then process it, so the memory consumed is far larger than what is actually needed. The cure is to load data into memory only at the moment it is used, that is, lazy loading. The culprit here is clearly serialization: PHP's serialize mechanism is a preloading strategy and does not support lazy loading. We needed a serialization format that supports lazy reading, fetching the content of only one template at a time, and CSV quickly came to mind. CSV supports exactly this kind of row-by-row reading and requires no major changes to the original system's mechanics, so we converted the system's serialization to CSV.

2. The Iterator interface. Reading CSV means traversing with a while loop, whereas the original system traversed arrays with foreach, and because the code has many filter conditions, switching loop styles would touch a lot of code. Worse, the original arrays used key => value pairs, while CSV columns are only addressable by the numeric indexes 0, 1, 2, and so on. To minimize changes to the original code, we first arranged the CSV so that its first row holds the array keys and the actual template values start from the second row, each position corresponding to a key one to one; this also leaves room for future extensibility. A sketch of this reading scheme follows.
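To make this concrete, here is a minimal sketch of the row-by-row reading, assuming a hypothetical templates.csv whose first row holds the keys; the real file name, fields, and filter conditions belong to the original system:

```php
<?php
// Hypothetical file: the first row is the header (the array keys),
// every later row is one template.
$fp = fopen('templates.csv', 'r');
if ($fp === false) {
    die('cannot open template file');
}

$keys = fgetcsv($fp);                     // header row, e.g. id, name, category

$page = [];
while (($row = fgetcsv($fp)) !== false) {
    if (count($row) !== count($keys)) {   // skip blank or malformed lines
        continue;
    }
    // Only one template is held in memory at a time.
    $template = array_combine($keys, $row);

    // ... apply the original filter conditions on $template here ...
    $page[] = $template;
    if (count($page) >= 16) {             // one page's worth: stop early
        break;
    }
}
fclose($fp);
```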
To stay as close as possible to the foreach style of the original code, we used the Iterator interface to encapsulate the CSV operations and merge the CSV columns back into key => value form (a sketch of such a wrapper is given at the end of this post). With that in place, the cost of migrating the code was almost negligible. A further advantage is that the CSV file is much smaller than the serialized file. Some may worry about performance: we tested with the existing 2,000 templates, and a full traversal takes less than 1 second; even when the template count grows to 20,000 in the future, traversal should take no more than 2 seconds, which is acceptable. So the problem was solved.

Recently, our home-grown simple ORM framework also reported that memory exceeded the maximum, because it too loads every database row into a memory array. So, holding the spear of lazy loading in one hand and the shield of Iterator in the other, I march into the war of problem-solving once again.
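As promised above, here is a minimal sketch of such an Iterator wrapper, written with PHP 8 method signatures; the class name, file name, and fields are illustrative, not the original system's:

```php
<?php
// Wraps a CSV file in an Iterator so existing foreach-based code
// keeps working unchanged while only one row lives in memory.
class CsvTemplateIterator implements Iterator
{
    private $fp;
    private $keys;              // column names from the first CSV row
    private $current = null;    // current template as key => value
    private $index = -1;

    public function __construct(string $file)
    {
        $this->fp = fopen($file, 'r');
        $this->keys = fgetcsv($this->fp);   // header row
    }

    public function rewind(): void
    {
        rewind($this->fp);
        fgetcsv($this->fp);     // skip the header row
        $this->index = -1;
        $this->next();          // load the first template row
    }

    public function valid(): bool
    {
        return $this->current !== null;
    }

    public function current(): mixed
    {
        return $this->current;
    }

    public function key(): mixed
    {
        return $this->index;
    }

    public function next(): void
    {
        $row = fgetcsv($this->fp);
        // Map the numeric CSV columns back to the original key => value form;
        // treat blank or malformed lines as end of data.
        $this->current = (is_array($row) && count($row) === count($this->keys))
            ? array_combine($this->keys, $row)
            : null;
        $this->index++;
    }

    public function __destruct()
    {
        if (is_resource($this->fp)) {
            fclose($this->fp);
        }
    }
}

// Usage: the old loop style survives intact; memory stays flat
// no matter how many templates the file holds.
foreach (new CsvTemplateIterator('templates.csv') as $template) {
    // filter conditions and 16-per-page pagination go here, as before
}
```

Because the wrapper satisfies the Iterator contract, the existing foreach loops and filter conditions need no modification; only the data source behind them changes.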