In web development there are many places where users need to enter rich text, yet the submitted content must be safe and must not open the door to XSS vulnerabilities. The most common defense is whitelist filtering.
The usual approach is to build a whitelist of allowed tags and their corresponding attributes, parse the HTML the user submitted, and look up each parsed tag and attribute in the whitelist: whatever matches is kept, and whatever does not match is removed. The whitelist has three layers: allowed tags > allowed attributes > allowed attribute values.

Take the img tag as an example. If a blog permits inserting images, the img tag must be allowed. img has many attributes, such as src, alt, onerror and onload; src is required, alt cannot execute any JavaScript, but onerror and onload can. Therefore only the src and alt attributes are kept. The src attribute itself accepts many kinds of values, for example links starting with http://, pseudo-protocol URLs starting with javascript:, and data URLs starting with data:. Since javascript: and data: values can trigger XSS in some browsers, those two kinds are disallowed and only image links starting with http:// are accepted, which means attribute values must be matched as well. A tag is considered safe only when it passes all three layers, and none of the three layers may be skipped when designing a whitelist.
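The third layer, value matching, is where javascript: and data: URLs get rejected. A minimal sketch of such a check (the rule table and function name here are illustrative, not part of the final code below):

```python
import re

# Hypothetical rule table for the img tag: attribute -> allowed value pattern
IMG_RULES = {'src': r'^http://', 'alt': r'.*'}

def attr_allowed(tag_rules, name, value):
    # An attribute survives only if it is whitelisted AND its value matches
    pattern = tag_rules.get(name)
    return pattern is not None and re.search(pattern, value) is not None

print(attr_allowed(IMG_RULES, 'src', 'http://example.com/a.png'))  # True
print(attr_allowed(IMG_RULES, 'src', 'javascript:alert(1)'))       # False
print(attr_allowed(IMG_RULES, 'onerror', 'alert(1)'))              # False
```

Because the pattern is anchored with ^, a javascript: or data: prefix can never match ^http://, and attributes like onerror fail at the second layer before the value is even inspected.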
Python has many third-party HTML parsing libraries; I chose BeautifulSoup here. A Chinese translation of its documentation is available at http://www.crummy.com/software/BeautifulSoup/bs3/documentation.zh.html
The code is implemented as follows:
import re
from BeautifulSoup import BeautifulSoup

regex_cache = {}

def search(text, regex):
    # Compile each pattern only once and cache it
    regexcmp = regex_cache.get(regex)
    if not regexcmp:
        regexcmp = re.compile(regex)
        regex_cache[regex] = regexcmp
    return regexcmp.search(text)

# XSS whitelist: tag -> {attribute: allowed value pattern}
VALID_TAGS = {'h1': {}, 'h2': {}, 'h3': {}, 'h4': {}, 'strong': {}, 'em': {},
              'p': {}, 'ul': {}, 'li': {}, 'br': {},
              'a': {'href': '^http://', 'title': '.*'},
              'img': {'src': '^http://', 'alt': '.*'}}

def parsehtml(html):
    soup = BeautifulSoup(html)
    for tag in soup.findAll(True):
        if tag.name not in VALID_TAGS:
            tag.hidden = True  # drop the tag itself but keep its contents
        else:
            attr_rules = VALID_TAGS[tag.name]
            # Iterate over a copy, since attributes are deleted inside the loop
            for attr_name, attr_value in tag.attrs[:]:
                # Second layer: is the attribute allowed at all?
                if attr_name not in attr_rules:
                    del tag[attr_name]
                    continue

                # Third layer: does the attribute value match its pattern?
                if not search(attr_value, attr_rules[attr_name]):
                    del tag[attr_name]

    return soup.renderContents()
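For environments without BeautifulSoup, roughly the same whitelist walk can be sketched with the standard library's HTMLParser (Python 3 shown). This is a rough stdlib equivalent, not the BeautifulSoup version above; the whitelist here is deliberately shortened and the class name is illustrative:

```python
import re
from html import escape
from html.parser import HTMLParser

# Shortened whitelist for illustration; same three-layer shape as VALID_TAGS
VALID = {'a': {'href': r'^http://', 'title': r'.*'}, 'strong': {}}

class WhitelistFilter(HTMLParser):
    """Rebuilds the input, keeping only whitelisted tags and attributes."""
    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag not in VALID:
            return  # drop the tag itself, keep its text content
        rules = VALID[tag]
        kept = [(n, v) for n, v in attrs
                if n in rules and v is not None and re.search(rules[n], v)]
        attr_text = ''.join(' %s="%s"' % (n, escape(v)) for n, v in kept)
        self.out.append('<%s%s>' % (tag, attr_text))

    def handle_endtag(self, tag):
        if tag in VALID:
            self.out.append('</%s>' % tag)

    def handle_data(self, data):
        self.out.append(escape(data))  # re-escape all text content

def clean(html):
    f = WhitelistFilter()
    f.feed(html)
    f.close()
    return ''.join(f.out)

print(clean('<a href="javascript:alert(1)">Hi</a>'))  # <a>Hi</a>
```

The design choice is the same in both versions: rebuild the output from what the whitelist explicitly allows, instead of trying to strip out what looks dangerous.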
Let's test it with a piece of HTML:
if __name__ == '__main__':
    text = '''
    <a href='javascript:alert(1);'>Hello</a>
    <a href='http://www.baidu.com'<script>alert(1);</script>' title='sddasadsd'/>
    '''
    print parsehtml(text)
Filter result:
<a>Hello</a>
<a href="http://www.baidu.com">alert(1);' title='sddasadsd'/&gt;
</a>
The result looks good: the javascript: link is stripped, and the injected script tag is reduced to harmless escaped text.