Using JavaScript to crack CAPTCHAs

Recently, a Greasemonkey script developed by Shaun Friedle showed that plain JavaScript can easily defeat the CAPTCHA on the Megaupload site. If you do not believe it, go to http://herecomethelizards.co.uk/mu_captcha/ and try it yourself!

Now, the CAPTCHA used by Megaupload has been defeated by the script above. To be honest, the CAPTCHA here is not particularly well designed, but what is more interesting is how the script works:

1. The HTML5 Canvas getImageData API is used to obtain pixel data from the CAPTCHA image. With Canvas, we can not only embed an image into a canvas, but also extract its pixels again afterwards (a minimal sketch follows this list).

2. The script contains a neural network implemented entirely in JavaScript.

3. After the pixel data has been extracted from the image with Canvas, it is fed into the neural network, and a simple form of optical character recognition is used to determine which characters appear in the CAPTCHA.
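
To make the first point concrete, here is a minimal sketch of reading a CAPTCHA's pixels with Canvas. The element id and variable setup are made up for illustration; they are not taken from the script itself.

// Minimal sketch, not the script's actual code: grab an <img> (here assumed
// to have the id "captcha-img"), draw it into a canvas, and read the pixels.
var img = document.getElementById("captcha-img");
var canvas = document.createElement("canvas");
canvas.width = img.width;
canvas.height = img.height;
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0);
// image_data.data is a flat RGBA array, four entries per pixel
var image_data = ctx.getImageData(0, 0, canvas.width, canvas.height);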

By reading the source code, we can not only understand how the script works, but also see how the CAPTCHA itself is constructed. As noted above, the CAPTCHA used here is not very complicated: each one consists of three characters, each character is drawn in a different color, only 26 characters are used, and all characters share the same font.

The first step is to copy the CAPTCHA onto a canvas and convert it into a grayscale image.

function convert_grey(image_data) {
    for (var x = 0; x < image_data.width; x++) {
        for (var y = 0; y < image_data.height; y++) {
            var i = x * 4 + y * 4 * image_data.width;
            var luma = Math.floor(image_data.data[i] * 299 / 1000 +
                image_data.data[i + 1] * 587 / 1000 +
                image_data.data[i + 2] * 114 / 1000);
            image_data.data[i] = luma;
            image_data.data[i + 1] = luma;
            image_data.data[i + 2] = luma;
            image_data.data[i + 3] = 255;
        }
    }
}


Next, the canvas is split into three separate pixel matrices, each containing one character. This step is easy to implement because each character is drawn in its own color, so the characters can be separated simply by filtering on color.

filter(image_data[0], 105);
filter(image_data[1], 120);
filter(image_data[2], 135);

function filter(image_data, color) {
    for (var x = 0; x < image_data.width; x++) {
        for (var y = 0; y < image_data.height; y++) {
            var i = x * 4 + y * 4 * image_data.width;
            // Turn all pixels of the given color to white
            if (image_data.data[i] == color) {
                image_data.data[i] = 255;
                image_data.data[i + 1] = 255;
                image_data.data[i + 2] = 255;
            // ... and everything else to black
            } else {
                image_data.data[i] = 0;
                image_data.data[i + 1] = 0;
                image_data.data[i + 2] = 0;
            }
        }
    }
}


Finally, any stray noise pixels are removed. To do this, the script looks for white (matched) pixels that have black (unmatched) pixels directly above and below them, and resets those matched pixels to black.

var i = x * 4 + y * 4 * image_data.width;
var above = x * 4 + (y - 1) * 4 * image_data.width;
var below = x * 4 + (y + 1) * 4 * image_data.width;

// A white pixel with black pixels directly above and below it is noise:
// reset it to black.
if (image_data.data[i] == 255 &&
    image_data.data[above] == 0 &&
    image_data.data[below] == 0) {
    image_data.data[i] = 0;
    image_data.data[i + 1] = 0;
    image_data.data[i + 2] = 0;
}


We now have a rough image of each character, but before it is loaded into the neural network, the script performs some additional edge detection. It looks for the leftmost, rightmost, topmost, and bottommost pixels of the character, turns them into a bounding rectangle, and copies that rectangle onto a 20×25 pixel matrix.

cropped_canvas.getContext("2d").fillRect(0, 0, 20, 25);
var edges = find_edges(image_data[i]);
cropped_canvas.getContext("2d").drawImage(canvas, edges[0], edges[1],
    edges[2] - edges[0], edges[3] - edges[1], 0, 0,
    edges[2] - edges[0], edges[3] - edges[1]);
image_data[i] = cropped_canvas.getContext("2d").getImageData(0, 0,
    cropped_canvas.width, cropped_canvas.height);
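
The find_edges helper is not included in the excerpt above. Based on the description (scan for the leftmost, rightmost, topmost, and bottommost white pixels), a plausible reconstruction might look like the following; this is an assumption, not the author's original code.

// Hypothetical reconstruction of find_edges: return the bounding box of the
// white (matched) pixels as [left, top, right, bottom].
function find_edges(image_data) {
    var left = -1, top = -1, right = -1, bottom = -1;
    for (var x = 0; x < image_data.width; x++) {
        for (var y = 0; y < image_data.height; y++) {
            var i = x * 4 + y * 4 * image_data.width;
            if (image_data.data[i] == 255) {
                if (left == -1 || x < left) left = x;
                if (top == -1 || y < top) top = y;
                if (right == -1 || x > right) right = x;
                if (bottom == -1 || y > bottom) bottom = y;
            }
        }
    }
    return [left, top, right, bottom];
}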


What do we get after all of this processing? A 20×25 matrix containing a single character, drawn in black and white. Great!

The character is then simplified further. The script strategically samples a number of points from the matrix, and these points act as optical sensors that feed the neural network. For example, one sensor might look at the pixel at position 9×6 and record whether it is on or off. The script extracts a whole series of such states (far fewer than the total number of pixels in the 20×25 matrix; only 64 are used) and sends them to the neural network.
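
As an illustration of the idea (the coordinates below are invented; the script uses its own pre-chosen list of 64 sensor points), sampling on/off states at fixed positions could look like this:

// Illustrative sketch only: sample a fixed set of sensor points from the
// 20x25 matrix and record whether each one is lit (white).
function extract_states(image_data, sensors) {
    var states = [];
    for (var s = 0; s < sensors.length; s++) {
        var x = sensors[s][0], y = sensors[s][1];
        var i = x * 4 + y * 4 * image_data.width;
        states.push(image_data.data[i] == 255);
    }
    return states;
}

// The real script uses 64 such points; these three are made up.
var sensors = [[9, 6], [10, 12], [4, 20]];
var states = extract_states(image_data[0], sensors);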

You may ask: why not just compare pixels directly? Is a neural network really necessary? The point is to handle ambiguous cases. If you have tried the demo above, you will have found that direct pixel comparison is somewhat more error-prone than going through the neural network, although the difference is small. Still, it is fair to say that for most users direct pixel comparison would be good enough.
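
For comparison, the naive "direct pixel comparison" mentioned above could be sketched as follows: score each candidate letter by counting how many sensor states match a stored template and pick the best. This is only an illustration of the alternative, not code from the script; templates is assumed to map each letter to an array of 64 booleans.

// Naive alternative (sketch): straight comparison against per-letter templates.
function best_match(states, templates) {
    var best = null, best_score = -1;
    for (var letter in templates) {
        var score = 0;
        for (var i = 0; i < states.length; i++) {
            if (states[i] == templates[letter][i]) score++;
        }
        if (score > best_score) {
            best_score = score;
            best = letter;
        }
    }
    return best;
}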

The next step is to try to guess the letter. The neural network takes as input the 64 boolean values obtained from one of the character images, together with a set of pre-computed data. One of the key ideas behind neural networks is that the expected results are known in advance, so the network can be trained against them: the author of the script ran it many times, collected a set of best-performing scores, and used them to derive the values that produce good guesses. Beyond that, the scores themselves have no special meaning.

Once the neural network has processed the 64 boolean values for one letter of the CAPTCHA, it compares the result against the pre-computed alphabet and assigns each letter a matching score. (The final result might look something like: 98% likely to be A, 36% likely to be B, and so on.)
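
The network itself is not reproduced here, but as a rough sketch of how such per-letter scores could be produced, a single-layer model with pre-computed per-letter weights is shown below. The architecture and the weights/bias structures are assumptions for illustration, not necessarily what the script actually uses.

// Rough sketch, not the script's actual network: feed the 64 boolean inputs
// through pre-computed weights and return a 0..1 score per letter.
// weights[letter] is assumed to be an array of 64 numbers, bias[letter] a number.
function score_letters(states, weights, bias) {
    var scores = {};
    for (var letter in weights) {
        var sum = bias[letter];
        for (var i = 0; i < states.length; i++) {
            sum += (states[i] ? 1 : 0) * weights[letter][i];
        }
        // squash the activation into a 0..1 confidence with a sigmoid
        scores[letter] = 1 / (1 + Math.exp(-sum));
    }
    return scores;
}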

When all three letters of the CAPTCHA have been processed, the final result is displayed. Note that the script is not 100% accurate (I do not know whether skipping the initial step of reducing each letter to a rectangle would improve the scoring), but it is already quite good, at least for the purpose at hand. And everything runs in the browser, using nothing but standard client-side technology!

Note that this script is a special case. The technique may work well against other simple CAPTCHAs, but complicated ones will be much harder to handle (especially with this kind of purely client-side analysis). I hope this project inspires more people to build even more impressive things, because the potential here is enormous.
