See http://www.jb51.net/article/20575.htm for more information
But in my tests, RegEx is only about twice as fast. We were still not satisfied: our website filters against a large dirty-word list, which hurts efficiency, so after some thought I wrote the algorithm below. I tested it on my own machine, using the dirty-word library from the original article and a test string of length 0x19c, over 1000 loops: plain text search took 1933.47 ms, RegEx took 1216.719 ms, and my algorithm took only 244.125 ms.
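For context, the two baselines being compared can be sketched as follows. This is a Python sketch, not the article's C# benchmark code, and the word list is a made-up placeholder (the original uses a dirty-word library read from a file):

```python
import re

# Hypothetical word list; the original benchmark uses a dirty-word library file.
BAD_WORDS = ["foo", "bar", "baz"]

# One alternation pattern, compiled once, scans the text in a single pass.
PATTERN = re.compile("|".join(map(re.escape, BAD_WORDS)))

def filter_by_search(text):
    # Plain text search: one replace pass over the text per dirty word.
    for word in BAD_WORDS:
        text = text.replace(word, "***")
    return text

def filter_by_regex(text):
    # Regex alternation: a single pass regardless of how many words there are.
    return PATTERN.sub("***", text)
```

The regex approach wins mainly because it avoids one full pass per word, but both still inspect every character against the whole pattern, which is what the algorithm below improves on.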
Update: added a BitArray to check quickly whether a character can start any dirty word at all. The total time dropped from 244 ms to 34 ms.
The main algorithm is as follows:
The code is as follows:

private static Dictionary<string, object> dic = new Dictionary<string, object>();
private static BitArray fastCheck = new BitArray(char.MaxValue);
private static int maxLength = 0;

static void Prepare()
{
    string[] badwords = LoadBadWords(); // placeholder: the original reads the word list from a file

    foreach (string word in badwords)
    {
        if (!dic.ContainsKey(word))
        {
            dic.Add(word, null);
            maxLength = Math.Max(maxLength, word.Length);
            fastCheck[word[0]] = true; // mark the first character of each dirty word
        }
    }
}
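The same preparation step can be mirrored outside C#. Here is a Python sketch, where a plain boolean list stands in for the BitArray and the word list is again a made-up placeholder:

```python
# Placeholder word list; the original loads its dirty-word library from a file.
bad_words = ["foo", "fool", "bar"]

dic = set()                      # plays the role of the Dictionary's key set
fast_check = [False] * 0x10000   # one flag per UTF-16 code unit, like BitArray(char.MaxValue)
max_length = 0

for word in bad_words:
    if word not in dic:
        dic.add(word)
        max_length = max(max_length, len(word))
        fast_check[ord(word[0])] = True  # mark the first character of each dirty word
```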
The code is as follows:

int index = 0;
StringBuilder sb = new StringBuilder(target); // target is the text being filtered
while (index < target.Length)
{
    // Skip ahead past characters that cannot start any dirty word
    if (!fastCheck[target[index]])
    {
        while (index < target.Length - 1 && !fastCheck[target[++index]]) ;
    }
    // Try each possible dirty-word length starting at this position
    for (int j = 1; j <= Math.Min(maxLength, target.Length - index); j++)
    {
        string sub = target.Substring(index, j);
        if (dic.ContainsKey(sub))
        {
            sb.Replace(sub, "***", index, j);
            index += j - 1;
            break;
        }
    }
    index++;
}
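Putting the pieces together, the scan loop can be sketched end-to-end in Python. The word list is a placeholder, and I mask matches with one '*' per character rather than a fixed "***" so that positions in the output buffer stay aligned with the input; otherwise the skip-ahead and match logic mirror the C# above:

```python
def filter_text(target, dic, fast_check, max_length):
    """Replace every dirty word in target with '*' characters."""
    sb = list(target)
    index = 0
    while index < len(target):
        # Skip ahead past characters that cannot start any dirty word.
        if not fast_check[ord(target[index])]:
            while index < len(target) - 1:
                index += 1
                if fast_check[ord(target[index])]:
                    break
        # Try each possible dirty-word length starting at this position.
        for j in range(1, min(max_length, len(target) - index) + 1):
            if target[index:index + j] in dic:
                sb[index:index + j] = "*" * j  # equal-length mask keeps indices aligned
                index += j - 1
                break
        index += 1
    return "".join(sb)
```

The inner skip loop is what the BitArray update buys: stretches of text containing no possible first character of a dirty word are passed over with one bit test per character, without ever taking a substring.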