My computer holds far too much stuff (more than 300 GB, plus a lot more on an external hard disk, with plenty of duplication between the computer and the disk), and I have long wanted to clean it up. The job takes almost no skill but a surprising amount of stamina, so nobody ever gets around to doing it by hand. It had to be programmed.
Using .NET for something this small felt like overkill. I had written file-searching programs in Perl before and they were always very simple, so I chose Perl, though I still took a detour along the way.
My initial idea was to traverse all the files first and then compare them.

The traversal:
use File::DirWalk;
use File::Basename;

my $dw = File::DirWalk->new;
my @files;

$dw->onFile(
    sub {
        my ($file) = @_;
        push @files, {
            "name"     => basename($file),
            "dir"      => dirname($file),
            "fullname" => $file,
        };
        return File::DirWalk::SUCCESS;
    }
);
$dw->walk('D:/old/perl');
Then I tried comparing in a loop. If a file has only two copies this is tolerable, but when there are several duplicates of the same file you have to mark the ones already handled, or walk the list multiple times, which I found troublesome.
my $cmp_sub = sub {
    $_[0]{"name"} cmp $_[1]{"name"}
};

# Sort first
my @newfiles = sort { $cmp_sub->($a, $b) } @files;
while (@newfiles) {
    print $#newfiles . "\n";
    my $item = pop @newfiles;
    my $idx  = custom_list_search($cmp_sub, $item, \@newfiles);
    if ($idx != -1) {
        print $item->{"fullname"} . "\n";
        print $newfiles[$idx]{"fullname"} . "\n";
    }
    print "\n";
}
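The `custom_list_search` above is my own helper, not shown in the post. A minimal sketch of what it could look like, assuming a linear scan that uses the same comparator and returns -1 on a miss (the name and signature come from the call site; the body is my assumption):

```perl
use strict;
use warnings;

# Hypothetical implementation of custom_list_search: return the index of
# the first element the comparator considers equal to $item, or -1.
sub custom_list_search {
    my ($cmp_sub, $item, $list_ref) = @_;
    for my $idx (0 .. $#$list_ref) {
        return $idx if $cmp_sub->($item, $list_ref->[$idx]) == 0;
    }
    return -1;    # not found
}

# Usage with the same name-based comparator as in the post:
my $cmp_sub = sub { $_[0]{"name"} cmp $_[1]{"name"} };
my @files = (
    { "name" => "a.pl", "fullname" => 'D:/x/a.pl' },
    { "name" => "b.pl", "fullname" => 'D:/y/b.pl' },
);
my $idx = custom_list_search($cmp_sub, { "name" => "b.pl" }, \@files);
print "$idx\n";    # prints 1
```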
That was even more troublesome. I kept feeling this problem ought to fall out of a Perl data structure naturally, and then it hit me: isn't this exactly what a hash is for? Use the file name as the key, and an array of paths as the value. Like this:
use Data::Dumper;

my %files;
$dw->onFile(
    sub {
        my ($file) = @_;
        # paths is an array of paths: every file seen is pushed onto the
        # list under its base name, so paths for the same file name
        # collect together automatically.
        push @{ $files{ basename($file) }->{"paths"} }, $file;
        return File::DirWalk::SUCCESS;
    }
);

my $htrace;
open $htrace, '>', 'trace.txt';
select $htrace;
$dw->walk('D:/old/perl');
print Dumper(\%files);
close $htrace;
And that is all of it: the whole job is done by that one push statement.

The final hash looks like the dump below. Wherever a file name appears with more than one path, that file exists in multiple places, and the rest of the work is easy :)

Perl really is a wizard!!!
$VAR1 = {
    'getelementchain.pl' => {
        'paths' => [
            'D:\old\Perl\getdataelefromasn.1\Copy (3) of getelementchain.pl'
        ]
    },
    'rand.pl' => {
        'paths' => [
            'D:\old\Perl\advancedperl\closure\rand.pl'
        ]
    },
    'get all ATS core faults.pl' => {
        'paths' => [
            'D:\old\Perl\coding ereview add functionbanner\get all ATS core faults.pl',
            'D:\old\Perl\get all ATS core faults.pl'
        ]
    },
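The "easy" remaining step can be sketched like this: a name whose paths array holds more than one entry is a duplicate. Here %files is a small hand-built stand-in for the real scan result, using entries from the dump above:

```perl
use strict;
use warnings;

# Stand-in for the hash produced by the directory walk above.
my %files = (
    'rand.pl' => {
        'paths' => ['D:\old\Perl\advancedperl\closure\rand.pl'],
    },
    'get all ATS core faults.pl' => {
        'paths' => [
            'D:\old\Perl\coding ereview add functionbanner\get all ATS core faults.pl',
            'D:\old\Perl\get all ATS core faults.pl',
        ],
    },
);

# A file name is duplicated when its paths array has more than one entry.
for my $name (sort keys %files) {
    my $paths = $files{$name}{'paths'};
    next unless @$paths > 1;
    print "$name\n";
    print "    $_\n" for @$paths;
}
```

This prints only 'get all ATS core faults.pl' with its two locations; 'rand.pl' is skipped because it occurs once.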