Just this morning, my system's network request latency ranged from 544 milliseconds to 6937 milliseconds, and that was on a network interface that was already active. It can take an additional 10 seconds or so if the interface has to wake up from power-saving mode. So to provide a good user experience, the app needs to plan for at least 10 seconds of network latency.
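As a minimal sketch of what "planning for 10+ seconds" can mean in practice (the concrete timeout values here are my own assumptions, not from the original text), the session configuration can be given generous timeouts and allowed to wait for connectivity instead of failing as soon as the radio is asleep:

```swift
import Foundation

// Minimal sketch: a URLSession configured to tolerate 10+ seconds of
// radio wake-up and network latency. The specific numbers are assumptions.
let configuration = URLSessionConfiguration.default
configuration.timeoutIntervalForRequest = 30    // per-request timeout, in seconds
configuration.timeoutIntervalForResource = 120  // overall timeout for the whole transfer
configuration.waitsForConnectivity = true       // wait for the radio instead of failing immediately

let session = URLSession(configuration: configuration)
```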
If the app sends a user authentication request and only lets the user past the login screen once it succeeds, 7 seconds may already have passed. If the app then has to send a second request for the user's information, the user can be blocked on the login screen for up to 14 seconds. So I try to break up requests that would otherwise have to happen one after the other in a fixed order, as sketched below.
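Here is a hedged sketch of that idea: instead of waiting for the authentication response before asking for the user's information (7 s + 7 s), issue both requests concurrently where the API allows it. The endpoints and return types are hypothetical, used only for illustration:

```swift
import Foundation

// Sketch: fire the authentication request and the user-info request at the
// same time instead of serially, so the total wait is roughly one round trip.
func loadLoginData(session: URLSession) async throws -> (Data, Data) {
    // Hypothetical endpoints, not from the original article.
    let authURL = URL(string: "https://api.example.com/auth")!
    let profileURL = URL(string: "https://api.example.com/profile")!

    // Both requests start immediately and run concurrently.
    async let auth = session.data(from: authURL)
    async let profile = session.data(from: profileURL)

    let (authData, _) = try await auth
    let (profileData, _) = try await profile
    return (authData, profileData)
}
```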
In practice, I send multiple requests at the same time, especially requests that do not consume much bandwidth but have high responsiveness requirements. Depending on the application, this can use HTTP pipelining, or a queue mechanism that manages several in-flight requests at once. It is important to note that we cannot guarantee the server processes requests in the order they were sent: the server might receive a request to delete a piece of information before it receives the request that creates it. In a recent app, we attached a sequence number to every request, and the server keeps a queue of the requests it has received. If a request in the middle of the sequence has not arrived yet, the server does not process anything after it, which guarantees the requests are handled in order. A sketch of this idea follows.
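The following is a hedged sketch of that sequence-number scheme; the type and method names are mine, not the original project's. Each request carries a monotonically increasing sequence number, and the receiving side buffers out-of-order arrivals until the gap is filled, so processing stays in send order even though delivery order is not guaranteed:

```swift
import Foundation

// Buffers requests by sequence number and only releases them in order.
struct SequencedReceiver<Request> {
    private var nextExpected: Int = 0
    private var buffer: [Int: Request] = [:]

    // Accept a request with its sequence number; return every request that
    // is now ready to process, in order. Requests that arrive after a
    // missing sequence number stay buffered until the gap is filled.
    mutating func receive(_ request: Request, sequence: Int) -> [Request] {
        buffer[sequence] = request
        var ready: [Request] = []
        while let next = buffer.removeValue(forKey: nextExpected) {
            ready.append(next)
            nextExpected += 1
        }
        return ready
    }
}
```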
Finally, consider that the user may switch to another page or another task because of the long latency. A common practice is to cancel a request as soon as the view controller that issued it is destroyed or disappears from the screen. In general this works well. However, impatient users will keep switching back and forth between pages, so requests get cancelled and reissued and every page ends up taking a long time to load. That is why I prefer to use a controller without a view to manage requests and update the app's model layer, as in the sketch below.
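Below is a minimal sketch of such a "controller without a view": a long-lived object, independent of any view controller's lifecycle, that owns the request and writes the result into the model layer. The `ProfileLoader` and `ProfileStore` names and the endpoint are assumptions for illustration, not the article's actual code:

```swift
import Foundation

// A non-view controller that owns network requests. Because it is not tied
// to a screen, the request keeps running even if the view controller that
// triggered it has already been dismissed.
final class ProfileLoader {
    private let session: URLSession
    private let store: ProfileStore

    init(session: URLSession = .shared, store: ProfileStore) {
        self.session = session
        self.store = store
    }

    func refreshProfile() {
        let url = URL(string: "https://api.example.com/profile")! // hypothetical endpoint
        let task = session.dataTask(with: url) { [weak self] data, _, _ in
            guard let data = data else { return }
            // Update the model layer; views observe the model, not the request.
            self?.store.update(with: data)
        }
        task.resume()
    }
}

// Placeholder model-layer store; in a real app this might be Core Data,
// a cache, or an observable model object.
final class ProfileStore {
    private(set) var profileData: Data?
    func update(with data: Data) { profileData = data }
}
```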
Testing: Creating a test environment for different network delays
In the previous section, I mentioned using Charles to set up a local proxy to test different network conditions. Charles can also be configured to add network latency, so you can test how the app behaves with delays between the device and the server.
Original English article: http://blog.carbonfive.com/2010/11/17/fallacy-2-latency-is-zero/
Rethinking app design through the "8 Fallacies of Distributed Computing", Fallacy 2: Latency is zero