A text file has 1.1 million recorded lines; find the 10 values that are duplicated most often.
Sample text:
098
123
234
789
......
234
678
654
123
Seeking Ideas
Replies (solutions)
Import it into a database table and then use SQL to do the statistics. I don't know whether that is feasible; you can try it.
That is certainly possible, but it is probably not the solution the asker wanted. They want to handle it in PHP, with an algorithm:
explode() // split the text into an array of lines
array_count_values() // count the repetitions of each value
arsort() // sort by count, descending, and take the result
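The three steps above can be sketched end to end. This is a minimal illustration only, assuming the whole file fits in memory; here the thread's sample values are inlined as a string in place of the real file:

```php
<?php
// The sample lines from the question, inlined as one string (placeholder data).
$text = "098\n123\n234\n789\n234\n678\n654\n123";

$lines  = explode("\n", trim($text));        // split into an array of lines
$counts = array_count_values($lines);        // count how often each value occurs
arsort($counts);                             // sort by count, descending
$top = array_keys(array_slice($counts, 0, 10, true)); // up to the 10 most frequent values
print_r($top);
```

Note that array_slice needs preserve_keys = true here, because the interesting data (the values) lives in the keys after array_count_values.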
You could process the text in chunks and accumulate the results; reading it all in one go would probably exhaust memory...
Your chunked approach sounds reasonable; can you elaborate on it?
$fp = fopen('file', 'r');
$res = [];
while ($buf = fgets($fp)) {
    $res[trim($buf)]++;
}
fclose($fp);
arsort($res);
$res = array_keys(array_slice($res, 0, 10));
print_r($res);
With only a million short records, this is effectively no different from the algorithm below:
$a = array_map('trim', file('file'));
$res = array_count_values($a);
arsort($res);
$res = array_keys(array_slice($res, 0, 10));
print_r($res);
Bulk INSERT the rows into a database table, then use SQL's GROUP BY and ORDER BY to get the top 10.
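The SQL route might look like the sketch below, using an in-memory SQLite database via PDO for illustration; the table name vals, column v, and the inlined sample rows are assumptions, not from the thread:

```php
<?php
// Sketch of the SQL approach with an in-memory SQLite database.
// Table `vals` and column `v` are placeholder names.
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE vals (v TEXT)');

// Bulk insert inside one transaction (sample data stands in for the 1.1M-line file).
$pdo->beginTransaction();
$stmt = $pdo->prepare('INSERT INTO vals (v) VALUES (?)');
foreach (["098", "123", "234", "789", "234", "678", "654", "123"] as $line) {
    $stmt->execute([$line]);
}
$pdo->commit();

// GROUP BY counts each distinct value; ORDER BY + LIMIT keeps the 10 most frequent.
$rows = $pdo->query(
    'SELECT v, COUNT(*) AS n FROM vals GROUP BY v ORDER BY n DESC LIMIT 10'
)->fetchAll(PDO::FETCH_ASSOC);
print_r($rows);
```

Wrapping the inserts in a single transaction matters at this scale: one commit per row would be far slower than one commit for the whole batch.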