ASP.NET URL encoding and decoding


For example, if a Url passes parameters as key=value pairs in the query string, with pairs separated by the & symbol (such as /s?q=abc&ie=utf-8), then a value that itself contains = or & will inevitably cause parsing errors on the server that receives the Url. Therefore, the ambiguous & and = characters must be escaped, that is, encoded.
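
For instance, here is a minimal Javascript sketch (the /s?q= endpoint is just the hypothetical example above) showing how the standard encodeURIComponent function escapes the ambiguous characters before they are placed in the query string:

// The raw value itself contains "=" and "&", which would confuse query-string parsing.
var rawValue = "a=b&c";

// Percent-encode the value before appending it to the query string.
var url = "/s?q=" + encodeURIComponent(rawValue) + "&ie=utf-8";
// url is now "/s?q=a%3Db%26c&ie=utf-8"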

Another example: the Url encoding format uses ASCII rather than Unicode, which means a Url cannot directly contain any non-ASCII characters, such as Chinese characters. Otherwise, if the character sets used by the client browser and the server differ, Chinese characters may cause problems.

The principle of Url encoding is to use safe characters (printable characters with no special purpose or special meaning) to represent unsafe characters.

Prerequisites: URI stands for Uniform Resource Identifier; a Url is just one type of URI. The format of a typical Url is shown below. The Url encoding discussed here actually refers to URI encoding.

foo://example.com:8042/over/there?name=ferret#nose
\_/   \______________/\_________/ \_________/ \__/
scheme   authority       path        query  fragment

Characters to be encoded

RFC 3986 stipulates that a Url may contain only letters (a-z, A-Z), digits (0-9), the 4 special characters - _ . ~, and all reserved characters. RFC 3986 gives detailed recommendations on Url encoding and decoding, points out which characters need to be encoded to avoid changing the Url's semantics, and explains why those characters need to be encoded.

Characters with no corresponding printable character in the US-ASCII character set: only printable characters are allowed in a Url. The control characters in US-ASCII (bytes 0x00-0x1F and 0x7F) cannot appear directly in a Url. Likewise, bytes 0x80-0xFF (as in ISO-8859-1) are not allowed in a Url because they fall outside the byte range defined by US-ASCII.

Reserved characters: a Url can be divided into several components, such as the scheme, host, and path. Some characters (: / ? # [ ] @) are used to separate different components. For example, the colon separates the scheme from the host, / separates the host from the path, and ? separates the path from the query parameters. Other characters (! $ & ' ( ) * + , ; =) are used to delimit items within a component. For example, = represents a key-value pair in the query parameters, and & separates multiple key-value pairs. When ordinary data inside a component contains these special characters, it must be encoded.

RFC 3986 specifies the following characters as reserved characters: ! * ' ( ) ; : @ & = + $ , / ? # [ ]

Unsafe characters: some other characters may cause ambiguity in the parser when placed directly in a Url. These characters are considered unsafe for several reasons:

Space: during Url transmission, while the user is typesetting text, or while a text-processing program handles the Url, irrelevant spaces may be introduced or meaningful spaces may be removed.
Quotation marks and <>: quotation marks and angle brackets are usually used to delimit Urls inside ordinary text.
#: used to indicate a bookmark or anchor point.
%: the percent sign is itself the special character used to encode unsafe characters, so it must be encoded as well.
{ } | \ ^ [ ] ` ~: some gateways or transport proxies will tamper with these characters.
Note that for valid characters in a Url, the encoded and unencoded forms are equivalent; but for the characters mentioned above, leaving them unencoded may change the Url's semantics. Therefore, only ordinary English letters and digits, the special characters $ - _ . + ! * ' ( ), and the reserved characters may appear unencoded in a Url. All other characters must be encoded before they can appear in a Url.
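
To see this equivalence for valid characters, a quick check in a browser console (plain standard Javascript):

// "A" is an unreserved character: "%41" and "A" denote the same thing in a Url.
decodeURIComponent("%41"); // "A"
encodeURIComponent("A");   // "A" - unreserved characters are left unencoded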

However, for historical reasons, some non-standard encoding implementations still exist. For example, although RFC 3986 stipulates that the ~ symbol does not require Url encoding, many old gateways or transport proxies still encode it.

How to encode invalid characters in a Url

Url encoding is also known as percent-encoding because its scheme is very simple: a percent sign followed by two hexadecimal digits (0-9, A-F) representing the value of one byte. The default character set of Url encoding is US-ASCII. For example, the byte for "a" in US-ASCII is 0x61, so its Url encoding is %61; entering http://g.cn/search?q=%61%62%63 in the address bar is actually equivalent to searching for abc on Google. Similarly, the byte for the @ symbol in the ASCII character set is 0x40, so its Url encoding is %40.
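
As a minimal illustrative sketch (not how you would normally do this in practice; the built-in functions discussed later handle it for you), percent-encoding a printable ASCII string by hand looks like this:

// Percent-encode every byte of a printable ASCII string, e.g. "abc" -> "%61%62%63".
// Assumes printable ASCII input (0x20-0x7E), so each code is always two hex digits.
function percentEncodeAscii(str) {
  var out = "";
  for (var i = 0; i < str.length; i++) {
    out += "%" + str.charCodeAt(i).toString(16).toUpperCase();
  }
  return out;
}

percentEncodeAscii("abc"); // "%61%62%63"
percentEncodeAscii("@");   // "%40"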

For non-ASCII characters, a superset of the ASCII character set must be used to turn the character into bytes, and then each byte is percent-encoded. For Unicode characters, the recommendation is to use UTF-8 to obtain the bytes and then percent-encode each byte. For example, "中文" ("Chinese") encoded with UTF-8 yields the bytes 0xE4 0xB8 0xAD 0xE6 0x96 0x87, so after Url encoding it becomes "%E4%B8%AD%E6%96%87".

If a byte corresponds to an unreserved character in the ASCII character set, that byte does not need to be represented with a percent sign. For example, "Url编码" ("Url encoding") produces the UTF-8 bytes 0x55 0x72 0x6C 0xE7 0xBC 0x96 0xE7 0xA0 0x81; because the first three bytes correspond to the unreserved characters "Url" in ASCII, they can be written as those characters directly. The final Url encoding can therefore be simplified to "Url%E7%BC%96%E7%A0%81", although "%55%72%6C%E7%BC%96%E7%A0%81" is also valid.
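
In Javascript, encodeURIComponent already applies this rule: it UTF-8-encodes non-ASCII characters and leaves unreserved ASCII characters as they are. For example:

encodeURIComponent("中文");    // "%E4%B8%AD%E6%96%87"
encodeURIComponent("Url编码"); // "Url%E7%BC%96%E7%A0%81"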

Due to historical reasons, some Url encoding implementations do not fully follow this principle, which will be mentioned below.

Differences between escape, encodeURI, and encodeURIComponent in Javascript

Javascript provides three pairs of functions for Url-encoding strings into valid Urls: escape/unescape, encodeURI/decodeURI, and encodeURIComponent/decodeURIComponent. Since decoding is simply the reverse of encoding, only the encoding functions are described here.

The three encoding functions, escape, encodeURI, and encodeURIComponent, all convert unsafe or invalid Url characters into valid Url characters, but they differ in the following ways.

Different safe characters:

The safe characters of the three functions (that is, the characters the function does not encode) are listed below:

escape (69 characters): * / @ + - . _ 0-9 a-z A-Z
encodeURI (82 characters): ! # $ & ' ( ) * + , / : ; = ? @ - . _ ~ 0-9 a-z A-Z
encodeURIComponent (71 characters): ! ' ( ) * - . _ ~ 0-9 a-z A-Z
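
The difference is easy to see by running the three functions on the same string of reserved characters (a small sketch; escape is deprecated and shown only for comparison):

var s = "a=1&b=/?#";
escape(s);             // "a%3D1%26b%3D/%3F%23"
encodeURI(s);          // "a=1&b=/?#"
encodeURIComponent(s); // "a%3D1%26b%3D%2F%3F%23"
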
Different compatibility: the escape function has existed since Javascript 1.0, while the other two functions were introduced in Javascript 1.5. Since Javascript 1.5 is now almost universal, there is no real compatibility problem in using encodeURI and encodeURIComponent.

Different handling of Unicode characters: the three functions encode ASCII characters the same way, as a percent sign plus two hexadecimal digits. For Unicode characters, however, escape uses the form %uxxxx, where xxxx is four hexadecimal digits representing the character. This form has been deprecated by the W3C, although it is still documented in the ECMA-262 standard. encodeURI and encodeURIComponent encode non-ASCII characters with UTF-8 before percent-encoding, which is what the RFC recommends. Therefore, these two functions are preferred over escape.
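
For a non-ASCII string the difference is visible directly (again, escape is shown only for comparison):

escape("中文");    // "%u4E2D%u6587" - deprecated %uxxxx form
encodeURI("中文"); // "%E4%B8%AD%E6%96%87" - UTF-8 bytes, percent-encoded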

Different application scenarios: encodeURI is used to encode a complete URI, while encodeURIComponent is used to encode a single URI component. From the list of safe characters above, you can see that encodeURIComponent encodes a larger range of characters than encodeURI. As mentioned earlier, reserved characters are generally used to separate URI components (a URI can be split into several components; see the prerequisites section) or sub-components (such as the delimiters of the query parameters): for example, the colon separates the scheme from the host, and ? separates the path from the query. Since encodeURI operates on a complete URI, these characters have their special purposes in it, so encodeURI does not encode them; otherwise the meaning of the URI would change.

A component has its own internal data format, but that data must not contain the reserved characters that separate components; otherwise the component boundaries of the whole URI would be scrambled. Therefore encodeURIComponent, which operates on a single component, needs to encode more characters.
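
The typical pattern is therefore to run encodeURIComponent on each piece of data before assembling the Url, and to use encodeURI only on an already-assembled URI. A minimal sketch (example.com and the q parameter are just illustrative):

var keyword = "1+1=2&more";

// Encode only the component: the separators we add ourselves stay intact.
var url = "http://example.com/search?q=" + encodeURIComponent(keyword);
// "http://example.com/search?q=1%2B1%3D2%26more"

// encodeURI on the assembled string would leave "=", "&" and "+" untouched,
// so the query value would be split incorrectly on the server.
encodeURI("http://example.com/search?q=" + keyword);
// "http://example.com/search?q=1+1=2&more"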

Form submission

When an Html form is submitted, each form field is Url-encoded before being sent. For historical reasons, the form's Url encoding does not comply with the latest standard: for example, a space is encoded not as "%20" but as "+". If the form is submitted with the POST method, the HTTP headers contain a Content-Type header with the value application/x-www-form-urlencoded. Most applications can handle this non-standard Url encoding; however, client-side Javascript has no built-in function that decodes "+" back into a space, so you have to write that conversion yourself (a sketch of such a helper appears after the example below). Also, for non-ASCII characters, the character set used for encoding depends on the character set of the current document. For example, if we add

<meta http-equiv="Content-Type" content="text/html; charset=gb2312" />

then the browser will render the document using gb2312 (note: when this meta tag is not set in an HTML document, the browser chooses a character set automatically based on the current user's preferences, and the user can also force a specific character set for the current site). When the form is submitted, the character set used for Url encoding is gb2312.
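
Because client-side Javascript has no built-in function that maps "+" back to a space, a small helper along these lines is usually written when decoding application/x-www-form-urlencoded data (a sketch, assuming the data was percent-encoded as UTF-8):

// Decode a value taken from an application/x-www-form-urlencoded body or query string.
function decodeFormComponent(value) {
  // Turn the historical "+" space encoding into "%20" first,
  // then let decodeURIComponent handle the percent-encoded bytes.
  return decodeURIComponent(value.replace(/\+/g, "%20"));
}

decodeFormComponent("hello+world%21"); // "hello world!"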

I ran into a very confusing problem while using Aptana (why I mention Aptana will become clear below): when I used encodeURI, the encoded result was very different from what I expected. Here is my sample code:

The code is as follows:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=gb2312" />
</head>
<body>
<script type="text/javascript">
document.write(encodeURI("中文"));
</script>
</body>
</html>

The output is %E6%B6%93%EE%85%9F%E6%9E%83. This is obviously not the result of Url encoding with the UTF-8 character set (searching for "中文" on Google shows %E4%B8%AD%E6%96%87 in the Url).

At first I suspected that encodeURI was somehow related to the page encoding, but I found that under normal circumstances even using gb2312 for Url encoding would not give this result. I finally discovered that the problem was an inconsistency between the character set used to store the page file and the character set declared in the meta tag. Aptana's editor uses UTF-8 by default, which means the file was actually saved as UTF-8. But because the meta tag declares gb2312, the browser parses the document as gb2312, and the string "中文" naturally gets garbled: its UTF-8 bytes are 0xE4 0xB8 0xAD 0xE6 0x96 0x87, and when the browser decodes those 6 bytes as gb2312 it obtains three different characters (one Chinese character occupies two bytes in GBK), the first of which is "涓" (juān). When these three characters are passed to encodeURI, the result is %E6%B6%93%EE%85%9F%E6%9E%83. This also shows that encodeURI always uses UTF-8 and is not affected by the page character set.

Different browsers handle Urls containing Chinese characters differently. For example, in IE, if the advanced setting "Always send URLs as UTF-8" is checked, the Chinese characters in the path portion of the Url are Url-encoded with UTF-8 before being sent to the server, while the Chinese characters in the query parameters are Url-encoded with the default character set. To ensure maximum interoperability, it is recommended to explicitly Url-encode every component placed into a Url with a specified character set, rather than relying on the browser's default behavior.

In addition, many HTTP monitoring tools and browser address bars automatically decode the Url once (using the UTF-8 character set) when displaying it. This is why, when you search for Chinese text on Google in Firefox, the Url shown in the address bar contains Chinese characters: the original Url actually sent to the server is encoded, as you can see by reading location.href with Javascript. Do not be misled by these illusions when studying Url encoding and decoding.
