Perl scripts to remove duplicates (repeated rows and repeated array fields) _ Application Tips

Source: Internet
Author: User
Tags: foreach, hash, perl, script

If there is such a sequence:
1 2
1 2
2 1
1 3
1 4
1 5
4 1
We need to get the following results:
1 3
1 5
2 1
4 1
That is, exact duplicate rows are removed, and when a pair appears in both orders only one order is kept. The following Perl scripts implement this.

Code One:


#!/bin/perl
use strict;
use warnings;
my $filename;
my %hash;
my @information;
my $key1;
my $key2;
print "Please input the file name, like f:\\perl\\data.txt\n";
chomp($filename = <STDIN>);
open(IN, "$filename") || die("Can not open");
# First pass: record each distinct pair as a key in a two-level hash
while (<IN>)
{
    chomp;
    @information = split /\s+/, $_;
    if (exists $hash{$information[0]}{$information[1]})
    {
        next;
    }
    else
    {
        $hash{$information[0]}{$information[1]} = 'A';
    }
}
close IN;
open(IN, "$filename") || die("Can not open");
# Second pass: if the reversed pair also exists, delete the forward pair
while (<IN>)
{
    @information = split /\s+/, $_;
    if (exists $hash{$information[1]}{$information[0]})
    {
        delete $hash{$information[0]}{$information[1]};
    }
    else
    {
        next;
    }
}
close IN;
open(OUT, ">f:\\a_b_result.txt") || die("Can not open");
foreach $key1 (sort { $a <=> $b } keys %hash)
{
    foreach $key2 (sort { $a <=> $b } keys %{$hash{$key1}})
    {
        print OUT "$key1 $key2\n";
    }
}
close OUT;


Code Two:

Suppose you have a 10 GB file containing many duplicate rows, and you need to collapse the duplicates so that each distinct row appears only once. What can we do?
cat data | sort | uniq > new_data   # This works, but on a file that size it can take hours to produce a result.
Here is a small Perl tool that does the same job. The principle is very simple: build a hash with each row's content as the key, and the number of times the row occurs as the value. The script is as follows:


#!/usr/bin/perl
# Author: caojiangfeng
# Date: 2011-09-28
# Version: 1.0
use warnings;
use strict;

my %hash;
my $script = $0;    # Get the script name

sub usage
{
    printf("Usage:\n");
    printf("perl $script <source_file> <dest_file>\n");
}

# If the number of parameters is less than 2, exit the script
if ($#ARGV + 1 < 2) {
    &usage;
    exit 0;
}

my $source_file = $ARGV[0];    # File whose duplicate rows need removing
my $dest_file   = $ARGV[1];    # File after duplicate rows are removed

open(FILE,   "<$source_file") or die "Cannot open file $!\n";
open(SORTED, ">$dest_file")   or die "Cannot open file $!\n";

while (defined(my $line = <FILE>))
{
    chomp($line);
    $hash{$line}++;    # Count the occurrences of each row
    # print "$line, $hash{$line}\n";
}

foreach my $k (keys %hash) {
    print SORTED "$k, $hash{$k}\n";    # Print each row and its occurrence count to the destination file
}
close(FILE);
close(SORTED);

Code Three:

Deleting duplicate elements from an array with a Perl script


#!/usr/bin/perl
use strict;
use warnings;
my %hash;
my @array = (1..10, 5, 20, 2, 3, 4, 5, 5);
# grep keeps the elements that match the condition:
# ++$hash{$_} < 2 is true only the first time an element is seen
@array = grep { ++$hash{$_} < 2 } @array;
print join(" ", @array);
print "\n";
