Leaving aside whether a browser's built-in HTTP component supports binary data streams at all, JavaScript itself has no ability to process binary data. Sharp readers may point out that VBScript can be used. True: since VBScript, IE, and ActiveX are all Microsoft products, they combine seamlessly, and IE's HTTP component can indeed read binary data. But only VBScript can consume it, and every other browser is left helpless.
Scripts were, after all, conceived for simple page interactions; handling something as heavyweight as a byte stream was never meant to be their job. Still, as a kind of exploration, we can have some fun with it. To be clear up front: there is no way for JS to read data in a truly binary format, but we can simulate one and achieve the same effect. Follow along.
Suppose, for example, you want to build a box-pushing (Sokoban) game with 200 levels. The question worth thinking about is: where do you store the map data for those 200 levels? It may be too large to embed directly in the script file, so it goes into a separate file, and then in what format? For a box-pushing game, plain text is actually enough; but for more complex map data you might turn to BASE64 encoding, downloading the data with the client's HTTP component and decoding it there. BASE64 is used often in JS precisely because it depends on nothing in the environment: plain string handling is all it needs.
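As a quick illustration, here is a minimal sketch, assuming a browser that provides the built-in atob() decoder (old IE does not, which is why pure-string JS implementations of BASE64 were so common):

var text = atob("SGVsbG8sIHdvcmxk");     // decodes to "Hello, world"
var codes = [];
for (var i = 0; i < text.length; i++) {
    codes.push(text.charCodeAt(i));      // each decoded byte, 0..255
}
alert(text + " / " + codes.join(","));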
Since there is a BASE64, why can't there be a BASE128 or a BASE256? If a "BASE256" could be implemented, wouldn't that be a genuine binary byte stream? If it were that simple, the method would have spread long ago, and nobody would still be doing so much BASE64 work. A string, after all, is not a byte string; there has to be a difference somewhere. Try reading a binary file as if it were a text file and you will find it quickly: as soon as a character above 127 (0x7F) turns up, an error occurs, and if a 0x00 byte turns up, everything after it is lost. In other words, fewer than half of the 256 byte values are usable.
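A sketch of that failed experiment, assuming IE's XMLHTTP ActiveX component and a hypothetical binary file raw.bin on the same server:

var req = new ActiveXObject("Microsoft.XMLHTTP");
req.open("GET", "raw.bin", false);   // synchronous, for brevity
req.send(null);
// bytes above 0x7F arrive mangled, and a 0x00 byte can cut the string
// short, so the length often comes back smaller than the file itself
alert(req.responseText.length);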
However, don't forget that this test used only the most basic ASCII. That need not be the limit of the powerful XMLHTTP component, so let's see what Unicode characters will do. Type a few characters in Notepad and save them as a Unicode text file, then read the file with XMLHTTP: the string it returns displays exactly what Notepad shows. Open the same file in a hex editor, though, and things look very different. The file begins with the two bytes FF FE, and every character is padded with a zero byte. This is 16-bit Unicode, after all, which covers not just basic ASCII but the scripts of every country: a Chinese character occupies 2 bytes, and an English letter or digit also occupies 2 bytes, with the high byte filled with 0 (note that the high byte comes after the low byte).
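Here is a minimal sketch of that experiment; unicode.txt is a hypothetical file saved from Notepad as "Unicode" (with the FF FE header):

var xhr = new ActiveXObject("Microsoft.XMLHTTP");
xhr.open("GET", "unicode.txt", false);   // synchronous, for brevity
xhr.send(null);
alert(xhr.responseText);                 // shows the same text as Notepad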
If XMLHTTP displays that correctly, it supports Unicode. Now try modifying the file's data to see what happens when values leave the normal range. Change the content to: FFFE 0001 0203 8000 7F00 8100 FF00 FFFF. The XMLHTTP test shows garbled characters, but no error occurs. Inspect the returned string with the charCodeAt(i) and toString(16) methods! After several rounds of testing it turns out that Unicode has none of ASCII's restrictions, with exactly one exception: 0x0000!
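Dumping each character of the returned string in hex, as just described (reusing the xhr object from the previous sketch):

var s = xhr.responseText;
var dump = [];
for (var i = 0; i < s.length; i++) {
    dump.push(s.charCodeAt(i).toString(16));
}
alert(dump.join(" "));   // every 16-bit value survives except 0x0000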
As we all know, 0x00 is the string terminator in ASCII. In the world of Unicode everything is 16 bits, so the terminator becomes 0x0000. A little regrettable, but the goal is now perfectly clear: if we can strip every 0x0000 out of a file and put 0xFEFF in front, JavaScript will be able to read it.
Removing and restoring: you could call that encoding and decoding. The encoding scheme takes some thought. The simplest approach is to record the position of every 0x0000 and then remove it; the client puts each one back based on the recorded positions. Simple, yes, but don't forget that 0x00 bytes, and even 0x0000 pairs, are very common in binary files, so the position records themselves would grow large. That is clearly not great. If we are going to record positions anyway, why must they be the positions of 0x0000? Turn it around: record the character that occurs least often in the file, along with its positions, then replace every 0x0000 with that character. When decoding, if that character shows up at a position that is not in the record, it must be a disguised 0x0000. In fact, in any file under 64 KB there must be at least one character value that never appears at all (think about why: such a file holds fewer than 32,768 16-bit values, while there are 65,536 possible values, so at least one goes unused). Even well above 64 KB, plenty of files still lack some particular value; Unicode offers 65,536 possible characters, and few files use all of them. Unless it is a compressed file with extremely little redundancy, the record will not amount to much.
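Here is a sketch of the encoding side of that idea (my own illustration, not necessarily the original tool's code). It takes an array of 16-bit values and returns the chosen key, the key's genuine positions, and the rewritten data:

function encode(words) {
    // count how often each non-zero 16-bit value occurs
    var freq = [];
    for (var c = 1; c <= 0xFFFF; c++) freq[c] = 0;
    for (var i = 0; i < words.length; i++) {
        if (words[i] !== 0) freq[words[i]]++;  // skip 0x0000: it can never be the key
    }
    // pick the least frequent value as the key (0 occurrences is ideal)
    var key = 1;
    for (var v = 2; v <= 0xFFFF; v++) {
        if (freq[v] < freq[key]) key = v;
    }
    // record genuine key positions, then replace every 0x0000 with the key
    var positions = [], out = [];
    for (var j = 0; j < words.length; j++) {
        if (words[j] === key) positions.push(j);
        out.push(words[j] === 0 ? key : words[j]);
    }
    return { key: key, positions: positions, data: out };
}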
At this point the encoding and decoding ideas are clear; all that remains is to implement them.
As mentioned above, the Unicode character that occurs least often in the source file, possibly never, is what we will call the Key.
First, define the new binary file header format. The format is as follows, with byte offsets on the left:
00 01        0xFEFF, the Unicode file header (required)
02 03        The Key value; 0x0000 is skipped during the search, so the Key is never 0x0000
04 05        Number of Key occurrences + 1; the +1 keeps 0x0000 from appearing at this position
06 07 08 09  Position of the 1st Key, stored in 4 bytes
0A 0B 0C 0D  Position of the 2nd Key
...
             Position of the nth Key
             Then the file's data content...
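A decoder sketch against this header (again my own illustration, with one stated assumption: the 0xFEFF header survives in responseText as character 0; if the component strips it, every index below shifts down by one):

function decode(s) {
    var key   = s.charCodeAt(1);       // character 0 is the 0xFEFF header
    var count = s.charCodeAt(2) - 1;   // stored as occurrences + 1
    var positions = [];
    for (var i = 0; i < count; i++) {
        // each position is 4 bytes, i.e. two 16-bit characters (high, low)
        var hi = s.charCodeAt(3 + i * 2);
        var lo = s.charCodeAt(4 + i * 2);
        positions[i] = hi * 0x10000 + lo;
    }
    var start = 3 + count * 2;         // where the data content begins
    var out = [];
    for (var j = start; j < s.length; j++) {
        var c = s.charCodeAt(j);
        if (c === key) {
            // a key at a recorded position is genuine; anywhere else it
            // stands in for a removed 0x0000
            var genuine = false;
            for (var k = 0; k < count; k++) {
                if (positions[k] === j - start) genuine = true;
            }
            if (!genuine) c = 0;
        }
        out.push(c);                   // one 16-bit value (two bytes) per element
    }
    return out;
}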