How do you count the frequency of each word in a 2 GB file? A beginner runs into this problem: even after raising the memory limit, the script still reports "Allowed memory size of xxxx bytes exhausted". Counting the total number of lines or characters in the test file works fine. How can this be optimized?
ini_set("memory_limit", "-1");function calcWordFrequence($sFilePatch){$aWordsInFile = array();$aOneLineWords = array();$sOneLineWords = "";$fp = fopen($sFilePatch,"r");while(!feof($fp)){$sOneLineWords = fgets($fp);$aOneLineWords = str_word_count($sOneLineWords,1);foreach($aOneLineWords as $v){array_push($aWordsInFile, $v);}}fclose($fp);$aRes = array_count_values($aWordsInFile);arsort($aRes);return $aRes;}echo calcWordFrequence("2013.mp4");
Replies to the discussion (solutions)
This problem cannot be solved this way: loading a 2 GB file into memory will exhaust the RAM of an ordinary machine almost as soon as it boots. Design the storage in a distributed way.
Is there a way, in code, to split the file into several parts and do the statistics in batches, or at least to output only the word with the highest frequency?
Use the split command to cut the file into smaller files.
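For an all-PHP variant of the same batch idea, here is a minimal sketch, assuming the input is plain text: it reads the file 1 MB at a time with fread(), folds each batch into a running count table, and carries a word that was cut at a batch boundary over to the next read. The function name calcWordFrequenceChunked, the 1 MB batch size, and the carry handling are illustrative assumptions, not from the replies.

// Sketch: batch word counting in pure PHP; only the count table stays in memory.
function calcWordFrequenceChunked($sFilePath)
{
    $aCounts = array();
    $sCarry  = "";                                   // partial word cut off at a batch boundary
    $fp = fopen($sFilePath, "r");
    while (!feof($fp)) {
        $sChunk = $sCarry . fread($fp, 1048576);     // one 1 MB batch (assumed size)
        $sCarry = "";
        if (!feof($fp)) {
            $iLastSpace = strrpos($sChunk, " ");
            if ($iLastSpace !== false) {
                $sCarry = substr($sChunk, $iLastSpace + 1);  // save the trailing fragment for the next batch
                $sChunk = substr($sChunk, 0, $iLastSpace);
            }
        }
        foreach (str_word_count($sChunk, 1) as $sWord) {
            $aCounts[$sWord] = isset($aCounts[$sWord]) ? $aCounts[$sWord] + 1 : 1;
        }
    }
    fclose($fp);
    arsort($aCounts);                                // highest frequency first
    return $aCounts;
}

Because only $aCounts grows, memory use is bounded by the number of distinct words rather than by the file size.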
Only text files have the concept of lines.
The 2013.mp4 file you tested is obviously not a text file.
If the file contains no \n line breaks (and an mp4 will not), your $sOneLineWords = fgets($fp); call tries to read the rest of the file as a single "line". Give fgets a length argument and memory consumption will drop.
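For illustration, the capped read would look like this (the 4096-byte limit is an arbitrary example value, not from the reply):

// Read at most 4095 bytes per call, even if no \n is found,
// so a newline-free file can no longer arrive as one giant string.
$sOneLineWords = fgets($fp, 4096);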
If it is a text file such as a log, you can use PHP's SplFileObject class to work through large files. I used it to analyze nginx access logs of more than 5 GB.
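A minimal sketch of that approach, assuming a plain-text log as input; the function name and the DROP_NEW_LINE flag choice are assumptions, not from the reply.

// Sketch: stream a large text file line by line with SplFileObject,
// keeping only the running word counts in memory.
function calcWordFrequenceSpl($sFilePath)
{
    $aCounts = array();
    $oFile = new SplFileObject($sFilePath, "r");
    $oFile->setFlags(SplFileObject::DROP_NEW_LINE);  // strip the trailing \n from each line
    foreach ($oFile as $sLine) {                     // one line in memory at a time
        foreach (str_word_count($sLine, 1) as $sWord) {
            $aCounts[$sWord] = isset($aCounts[$sWord]) ? $aCounts[$sWord] + 1 : 1;
        }
    }
    arsort($aCounts);
    return $aCounts;
}

Unlike the original calcWordFrequence, this never builds a list of every word in the file, so it avoids the "Allowed memory size exhausted" error on large inputs.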