This article is about my love-hate relationship with JSON. C++ is risky; use it with caution.
The relevant code in this article is: http://download.csdn.net/detail/baihacker/7862785
The test data is not included, because it comes from project development and has to be kept confidential.
When I first read about JSON, the simple syntax and easy extensibility made it a delight to use. Most of our interaction with the backend was basically JSON. The project used a third-party library, SimpleJSON, which was also pleasant to work with. But I still got burned a few times:
1. Forgetting that a parsed JSONValue must be deleted;
2. Numbers in SimpleJSON are represented as double, but the server sends 64-bit integers, so I made a small change to preserve the original numeric literal when parsing. [Using double to represent integers is fine in general up to 53 bits. Rough derivation: in IEEE 754's 64-bit format, removing the 1-bit sign and the 11-bit exponent leaves a 52-bit mantissa; 2^52 is 4503599627370496, roughly 4.5*10^15. References: http://zh.wikipedia.org/zh-cn/IEEE_754 and http://bolt.xunlei.com/faq.html]
3. Watch out for how the JSON library handles a comma after the last element of an array (e.g. [1, 2, 3,]);
4. Whatever the encoding of the JSON string itself, the library did not handle characters in \uXXXX escape form correctly when parsing. (I remember jsoncpp had this problem, and we later used the Value implementation from Chromium's base.) If this isn't agreed on with the server side, it can become yet another trap. (This one, though, came after my faith had already been shaken.)
But none of these pitfalls could shake my faith in JSON, until one day my mentor handed me a 1685366-byte piece of JSON data that took me more than 10 seconds (on a laptop) to parse with SimpleJSON. That made me unable to look JSON in the eye for the next few years. Whenever someone used JSON, I would say: "I have a 2 MB JSON file that will make your parser crawl, more than 10 seconds!" Whenever I needed to handle formatted content, I turned to the embrace of XML, and the library I knew best was rapidxml. But rapidxml has its own pitfalls:
1. rapidxml parses the original document in place, modifying it, so it is best to copy the input before parsing.
2. When building a document, most operations work on pointers; you cannot point nodes at values you new up casually or at temporary variables. Set up a dedicated storage area, keep all the strings in it while the document is being built, and reclaim it in one go after serialization is complete.
3. It is best to use a stringstream when serializing, otherwise it is noticeably slower.
4. You need to catch parse exceptions.
5. It is best to handle only UTF-8 strings. If you use UTF-16, put a Chinese character such as "一" in a node attribute or a text node and try it; you will see the problem immediately.
Later still, out of laziness, I came to prefer the HTTP-header-like format: a key, then a ":", then a value, with different key-value pairs separated by "\r\n". You can write a parser for data like that in no time.
--------------------------------------------------------------------------
Then yesterday, the day after old Luo's debate, I thought: I should benchmark the JSON libraries myself. The contenders: SimpleJSON, jsoncpp, libjson, rapidjson. The challenge target is the legendary 1685366-byte JSON file, and only parsing the file is measured.
These libraries have mostly moved to GitHub by now. For example, when SimpleJSON moved to GitHub I had to download the code on master, since its old home at mjpa.co.uk returns file-not-found; comparing the code, though, master looks usable directly as a release. Build-wise, most of them carry relatively little maintenance "black magic" [reference: http://blog.jobbole.com/76216/], are fairly humane, and are easy to compile. libjson is slightly darker: you have to disable its C interface yourself, and debug builds require defining a macro. jsoncpp is gray: it wants scons, which I never got working, but a VS project can be found under makefiles and upgraded.
Being lazy, I wrote test code that is only just usable. Roughly: one function reads the file and writes UTF-8 and UTF-16 versions into two global variables, and I defined:
void simple_json_test(const char* utf8, const wchar_t* utf16);
void rapid_json_test(const char* utf8, const wchar_t* utf16);
void json_cpp_test(const char* utf8, const wchar_t* utf16);
void lib_json_test(const char* utf8, const wchar_t* utf16);
These functions are called from the main function, timed with the clock() function, and the program is recompiled each time the test function is changed. Each test body adds up to 11 lines at most; some even include the code that frees resources. The results of three runs each:
simplejson: 56, 62, 57
rapidjson: 16, 16, 15
jsoncpp: 60, 45, 68
libjson: 9, 9, 16
The results restored my faith in JSON, but I could not understand why SimpleJSON differed so much from before. I dug out the older SimpleJSON library and ran it again. The result:
simplejson_old: 6893, 6826, 6844
I did not study the VS performance analysis carefully, and a quick code review showed the changes were not large, but eventually I found the problem.
There is a function in JSON.h:
// Simple function to check a string 's' has at least 'n' characters
static inline bool simplejson_wcsnlen(const wchar_t *s, size_t n) {
    if (s == 0) return false;
    const wchar_t *save = s;
    while (n-- > 0) {
        if (*(save++) == 0) return false;
    }
    return true;
}
It is referenced at JSON.cpp line 181 and JSONValue.cpp lines 62 and 70, and the corresponding code in the older version is:
// We need 5 chars (4 hex + the 'u') or its not valid
if (wcslen(*data) < 5)
    ...
// Is it a boolean?
else if ((wcslen(*data) >= 4 && wcsncasecmp(*data, L"true", 4) == 0) ||
         (wcslen(*data) >= 5 && wcsncasecmp(*data, L"false", 5) == 0))
    ...
// Is it a null?
else if (wcslen(*data) >= 4 && wcsncasecmp(*data, L"null", 4) == 0)
If you change these three places in the old version to the new version, the problem is resolved. And once you compare them, the cause is obvious: wcslen(*data) scans all the way to the end of the remaining input just to check a lower bound, so every check costs time proportional to the rest of the document; called on every token, that makes the whole parse quadratic in the document size. The new simplejson_wcsnlen looks at no more than n characters.