Users upload JSON data to our server; we need to parse it and then upload it to the cloud. There are two opinions on our team:
The first is to validate the JSON on the server before uploading, including the data that will be sent to the cloud after parsing. Because the JSON is deeply nested, we would have to check for empty values at every layer of parsing and raise an error as soon as an empty value appears.
The second is to assume the data the user uploads is legitimate: just catch exceptions at the outermost layer so the thread never hangs, and do no checking or validation at all in the inner parsing and processing code. Each approach has pros and cons. How is this situation generally handled?
Reply content:
Invalid data must be blocked at the first gate.
First, you cannot assume that processing failure (a thrown exception) means the data is wrong. The two are not necessarily connected: there will always be data that violates the business rules yet never triggers a fatal error in the program.
Of course, you could validate the data gradually as the transaction proceeds. But that leads to two problems:
- Delayed feedback: the user cannot immediately learn whether the data is legal. Users will always probe the API in unexpected ways; when they make a mistake they want immediate feedback, not to wait while the server churns for ages before reporting a failure.
- Wasted work: business processing that is half done has to be deliberately discarded, wasting server resources. For illegal data it is better not to open the transaction at all.
So please insist on validating user input the moment you receive it. For a RESTful-style API, consider returning the appropriate HTTP error status code together with a JSON-formatted error message body, so the user gets accurate feedback.
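A minimal sketch of this "fail fast at the first gate" idea, assuming a hypothetical payload with `device` and `records` fields (the field names are illustrative, not from the original question):

```javascript
// Validate the payload up front and return a machine-readable error list,
// instead of discovering problems halfway through the transaction.
function validateUpload(body) {
  let data;
  try {
    data = JSON.parse(body);
  } catch (e) {
    return { ok: false, errors: ['body is not valid JSON'] };
  }
  const errors = [];
  if (data === null || typeof data !== 'object' || Array.isArray(data)) {
    errors.push('top-level value must be an object');
  } else {
    // Hypothetical required fields for illustration:
    if (typeof data.device !== 'string') errors.push('"device" must be a string');
    if (!Array.isArray(data.records)) errors.push('"records" must be an array');
  }
  return errors.length ? { ok: false, errors } : { ok: true, data };
}
```

In an HTTP handler, a failed check would map to a 400 response whose body is the JSON-encoded `errors` array; only payloads that pass ever reach the cloud-upload step.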
Off-topic:
For a pure API interface this matters less, but for web form interfaces Node.js has a huge advantage: the front-end and back-end data-validation code can be reused directly!
If you have a typical PHP backend, you can approximate this. I suggest defining a unified JSON check table that specifies which constraints each data field requires (required, email, number, etc.), then writing shared code on both the front end and the back end that interprets that same check table to do the validation.
What you should avoid most is hand-writing two copies of validation code with the same purpose, one in plain JS and one in PHP; keeping two identical-purpose copies in sync will drive you mad.
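A hedged sketch of that "single check table" idea: one declarative map of field names to constraints, interpreted by the same function in the browser and on the Node.js server. The field names and constraint set here are hypothetical.

```javascript
// One shared table: which constraints each field requires.
const checkTable = {
  email: { required: true, type: 'email' },
  age:   { required: false, type: 'number' },
};

// One shared interpreter, usable on both front end and back end.
function check(data, table) {
  const errors = [];
  for (const [field, rules] of Object.entries(table)) {
    const value = data[field];
    if (value === undefined || value === null || value === '') {
      if (rules.required) errors.push(`${field} is required`);
      continue; // optional and absent: nothing more to check
    }
    if (rules.type === 'email' && !/^[^@\s]+@[^@\s]+$/.test(String(value))) {
      errors.push(`${field} must be an email address`);
    }
    if (rules.type === 'number' && typeof value !== 'number') {
      errors.push(`${field} must be a number`);
    }
  }
  return errors;
}
```

Because the table is plain data, a PHP backend could consume the same JSON table with its own small interpreter, so the constraints themselves are still written only once.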
It depends on how you define data that can be empty. If correct data is allowed to contain empty values, then do null checks and error handling at each layer; if a null definitely means the data is wrong, then just catch the exception at the outer layer.
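The two strategies in that answer, sketched with a hypothetical nested payload (`user.address.city` is illustrative):

```javascript
// Strategy A: empty values are legal data, so check each layer explicitly
// and fall back to a default instead of failing.
function extractCityA(data) {
  if (!data || !data.user || !data.user.address) return null;
  return data.user.address.city ?? null;
}

// Strategy B: empty values always mean bad data, so parse optimistically;
// any missing layer throws a TypeError that one outer catch turns into a
// rejection of the whole payload.
function extractCityB(data) {
  try {
    return data.user.address.city;
  } catch (e) {
    return null; // single outer catch: the payload is simply invalid
  }
}
```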
1. Before uploading, check whether the key fields are empty.
2. Validate everything again on the server side.
3. When displaying or using the data, handle exceptions.
Is there really a necessary connection between exceptions and threads hanging? If a data check can hang a thread, that only shows the program's exception-handling mechanism is poorly built. Many people like to catch an exception and just dump the stack trace instead of actually handling it; that kind of code drives me crazy every time I see it. When something is thrown, you should analyze the cause of the throw, perhaps even revisit the requirements, rather than fool yourself. The same goes for writing `if () {...}` with no `else` at all: even if you don't log in the `else` branch, at least print something so people know where the problem arose, instead of relying on single-stepping through the IDE debugger. (Off-topic and wordy.) I suspect that when the OP says "handle the error directly as soon as something is empty", the so-called handling is simply to kill the thread... which is embarrassing.
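A sketch of the point above: catching an exception should record enough context to diagnose the cause and then return a structured failure, so the worker keeps running. The job shape and field names here are hypothetical.

```javascript
// Handle, don't hang: log the cause plus the offending input, then
// report a structured failure instead of letting the thread die.
function processJob(job) {
  try {
    return { ok: true, result: JSON.parse(job.payload) };
  } catch (err) {
    // Record why it failed and a preview of what caused it,
    // rather than dumping a bare stack trace and moving on.
    console.error(`job ${job.id} failed: ${err.message}`,
                  { payloadPreview: String(job.payload).slice(0, 80) });
    return { ok: false, error: err.message };
  }
}
```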
As for the second option: I don't know whether the OP is assuming the client is benevolent or is the client themselves, but for a programmer the assumption that "the data users upload is legitimate" is the deadliest one there is, in both design and coding, because you simply cannot anticipate everything a client will do. In fact this supports the point above: the OP's exception handling is not robust, because the code considers only normal inputs and not abnormal ones. When designing or coding, it is best not to assume what the client does; if you must assume something, produce the evidence that supports the assumption. If you assume the uploaded data is legitimate, ask yourself whether you could convince your boss or your customer that the assumption holds (will the customer guarantee every input is legal?). The defensible version of the assumption is: the data has been strictly verified during upload, is basically guaranteed to conform to the format (sorry, accidents still happen), and therefore the data obtained by downstream code is legitimate. And how do you prove the downstream data is legal? By producing the verification procedure. Of course, a verification procedure can itself be lax or incomplete, but that is another question; after all, someone could always ask how you prove the accuracy of the verification procedure itself.
Finally, from a security standpoint, the data should at least be validated. Validated data can at least filter out known potential hazards.
For the OP's problem: unless the uploaded data is unimportant, validate it.