Android Network Programming (4): Analyzing Volley from Its Source Code
1. Volley Structure
Volley's work is split across three kinds of threads: the main thread, the cache dispatch thread, and the network dispatch threads. A request is first added to the cache queue; if a matching cached result is found, it is read and parsed directly and the response is delivered back to the main thread. If nothing usable is found in the cache, the request is moved to the network queue, an HTTP request is sent, the response is parsed and written to the cache, and the result is delivered back to the main thread.
2. Start with RequestQueue
Before you can use Volley, you must first create a RequestQueue:
RequestQueue mQueue = Volley.newRequestQueue(getApplicationContext());
This is the entry point for Volley operations. Let's look at newRequestQueue():
public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, (HttpStack) null);
}

public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    return newRequestQueue(context, stack, -1);
}
The two overloads chain together, and the final call is:
public static RequestQueue newRequestQueue(Context context, HttpStack stack, int maxDiskCacheBytes) {
    File cacheDir = new File(context.getCacheDir(), "volley");
    String userAgent = "volley/0";
    try {
        String network = context.getPackageName();
        PackageInfo queue = context.getPackageManager().getPackageInfo(network, 0);
        userAgent = network + "/" + queue.versionCode;
    } catch (NameNotFoundException var7) {
    }
    if (stack == null) {
        if (VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }
    BasicNetwork network1 = new BasicNetwork((HttpStack) stack);
    RequestQueue queue1;
    if (maxDiskCacheBytes <= -1) {
        queue1 = new RequestQueue(new DiskBasedCache(cacheDir), network1);
    } else {
        queue1 = new RequestQueue(new DiskBasedCache(cacheDir, maxDiskCacheBytes), network1);
    }
    queue1.start();
    return queue1;
}
If the Android version is 2.3 (API level 9) or higher, the HurlStack based on HttpURLConnection is used; otherwise the HttpClientStack based on HttpClient is used. Note that this default only applies when stack is null; the two-argument overload shown earlier lets the caller supply an HttpStack directly, as in the sketch below.
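A minimal sketch of passing a stack explicitly, simply reusing Volley's own HurlStack (any other HttpStack implementation could be substituted):

// Sketch: supplying an explicit HttpStack, which bypasses the null check
// and SDK version selection shown above.
RequestQueue queue = Volley.newRequestQueue(getApplicationContext(), new HurlStack());

Whichever stack is used, newRequestQueue() finishes by constructing the RequestQueue and calling its start() method: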
public void start() {
    this.stop();
    this.mCacheDispatcher = new CacheDispatcher(this.mCacheQueue, this.mNetworkQueue, this.mCache, this.mDelivery);
    this.mCacheDispatcher.start();
    for (int i = 0; i < this.mDispatchers.length; ++i) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(this.mNetworkQueue, this.mNetwork, this.mCache, this.mDelivery);
        this.mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
CacheDispatcher is the cache dispatch thread, and start() launches it once. NetworkDispatcher is the network dispatch thread, and its start() method is called in a loop: mDispatchers has a length of 4 by default, so four network dispatch threads are started. In other words, five threads run in the background waiting for requests to arrive. Next we create a request and call RequestQueue's add() method:
public <T> Request<T> add(Request<T> request) {
    request.setRequestQueue(this);
    synchronized (this.mCurrentRequests) {
        this.mCurrentRequests.add(request);
    }
    request.setSequence(this.getSequenceNumber());
    request.addMarker("add-to-queue");
    // If the request is not cacheable, add it to the network request queue directly.
    if (!request.shouldCache()) {
        this.mNetworkQueue.add(request);
        return request;
    } else {
        synchronized (this.mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            // If the same request has been issued before and has not yet returned a result,
            // park this one in mWaitingRequests instead of sending a duplicate request.
            if (this.mWaitingRequests.containsKey(cacheKey)) {
                Queue<Request<?>> stagedRequests = this.mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                this.mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Otherwise add the request to the cache queue mCacheQueue, and record its key
                // in mWaitingRequests so the next identical request can be detected as a duplicate.
                this.mWaitingRequests.put(cacheKey, null);
                this.mCacheQueue.add(request);
            }
            return request;
        }
    }
}
request.shouldCache() determines whether a request may be cached, and by default it returns true. If the request is not cacheable, it is added directly to the network request queue. If it is cacheable, add() checks whether an identical request is already in flight without a result: if so, the new request is parked in the mWaitingRequests map instead of being issued again; if not, the request is added to the cache queue mCacheQueue and its key is recorded in mWaitingRequests so the next identical request can be detected as a duplicate. (A request can also opt out of caching entirely, as sketched below.)
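A minimal sketch of opting out of the cache; here request stands for any Request subclass instance and mQueue is the queue created earlier (both names are assumptions for illustration):

// Sketch: disabling the cache for a single request, so add() pushes it
// straight onto the network queue instead of the cache queue.
request.setShouldCache(false);
mQueue.add(request);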
From the above we can see that RequestQueue's add() method performs no network or cache work itself. Once a request has been placed on the network queue or the cache queue, the cache dispatch thread and the network dispatch threads, which keep polling their queues in the background, pick it up and execute it. The dispatcher count itself is only a constructor parameter, as the sketch below shows.
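Because the number of network dispatchers is simply a RequestQueue constructor argument, a queue can also be assembled by hand when four threads are not the right number. A minimal sketch using the same DiskBasedCache, BasicNetwork, and HurlStack classes that newRequestQueue() uses; context stands for any Context, and the pool size of 2 is an arbitrary example:

// Sketch: building a RequestQueue manually with 2 network dispatch threads
// instead of the default 4.
File cacheDir = new File(context.getCacheDir(), "volley");
Network network = new BasicNetwork(new HurlStack());
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 2);
queue.start();

With the queue running in the background, let's look at the cache dispatch thread first.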
3. CacheDispatcher: cache scheduling thread
The run() method of CacheDispatcher:
public void run() {
    if (DEBUG) {
        VolleyLog.v("start new dispatcher");
    }
    // Run the dispatcher at background priority (THREAD_PRIORITY_BACKGROUND == 10).
    Process.setThreadPriority(10);
    this.mCache.initialize();
    while (true) {
        try {
            // Take a request from the cache queue; this blocks until one is available.
            final Request<?> request = this.mCacheQueue.take();
            request.addMarker("cache-queue-take");
            // If the request has been canceled, finish it without doing any work.
            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }
            // Check whether there is a cached response for this request.
            Cache.Entry entry = this.mCache.get(request.getCacheKey());
            if (entry == null) {
                // Cache miss: hand the request over to the network queue.
                request.addMarker("cache-miss");
                this.mNetworkQueue.put(request);
            } else if (!entry.isExpired()) {
                // Cache hit that has not expired: parse the cached data and deliver it to the main thread.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");
                if (!entry.refreshNeeded()) {
                    this.mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired entry: deliver the cached response immediately,
                    // but also send the request to the network for a refresh.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);
                    response.intermediate = true;
                    this.mDelivery.postResponse(request, response, new Runnable() {
                        public void run() {
                            try {
                                CacheDispatcher.this.mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                            }
                        }
                    });
                }
            } else {
                // Cache hit, but the entry has fully expired: go to the network.
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                this.mNetworkQueue.put(request);
            }
        } catch (InterruptedException e) {
            if (this.mQuit) {
                return;
            }
        }
    }
}

static {
    DEBUG = VolleyLog.DEBUG;
}
The logic of run() boils down to this: take a request from the cache queue and check whether it has been canceled. If it has not, look up a cached response for it. If there is no cached entry, or the entry has expired, the request is handed over to the network queue; if there is a fresh entry, it is parsed and delivered back to the main thread (an entry that merely needs refreshing is delivered immediately as an intermediate response while the request is also re-sent to the network queue). Both the cache and network dispatchers check isCanceled() before doing any real work, which is what makes cancelling a request cheap; a short sketch follows.
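A minimal sketch of cancellation by tag; request and mQueue are the same assumed names as in the earlier sketch, and the tag value is arbitrary:

// Sketch: tagging requests and cancelling them in bulk. Because both dispatchers
// call isCanceled() before doing real work, cancelled requests are dropped cheaply.
request.setTag("profile-screen");
mQueue.add(request);
// Later, e.g. in onStop():
mQueue.cancelAll("profile-screen");

Next, let's look at the network dispatch thread.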
4. NetworkDispatcher: network scheduling thread
The run() method of NetworkDispatcher:
public void run() {
    // Run the dispatcher at background priority (THREAD_PRIORITY_BACKGROUND == 10).
    Process.setThreadPriority(10);
    Request<?> request;
    while (true) {
        long startTimeMs = SystemClock.elapsedRealtime();
        try {
            // Take a request from the network queue; this blocks until one is available.
            request = this.mQueue.take();
        } catch (InterruptedException e) {
            if (this.mQuit) {
                return;
            }
            continue;
        }
        try {
            request.addMarker("network-queue-take");
            // If the request has been canceled, finish it without touching the network.
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }
            this.addTrafficStatsTag(request);
            // Perform the actual HTTP request.
            NetworkResponse networkResponse = this.mNetwork.performRequest(request);
            request.addMarker("network-http-complete");
            // A 304 for a request whose response has already been delivered needs no re-delivery.
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");
            // Write the parsed response to the cache if the request allows it.
            if (request.shouldCache() && response.cacheEntry != null) {
                this.mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }
            request.markDelivered();
            this.mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            this.parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            VolleyError volleyError = new VolleyError(e);
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            this.mDelivery.postError(request, volleyError);
        }
    }
}
The network dispatch thread likewise takes a request from its queue and checks whether it has been canceled; if not, it performs the network request, and the response is eventually delivered back to the main thread. The actual network call is this.mNetwork.performRequest(request); mNetwork is declared as the Network interface, and its implementation here is BasicNetwork. Let's take a look at BasicNetwork's performRequest() method:
public NetworkResponse performRequest(Request<?> request) throws VolleyError {
    long requestStart = SystemClock.elapsedRealtime();
    while (true) {
        HttpResponse httpResponse = null;
        byte[] responseContents = null;
        Map<String, String> responseHeaders = Collections.emptyMap();
        try {
            // Attach cache validation headers (If-None-Match / If-Modified-Since) if there is a cache entry.
            Map<String, String> headers = new HashMap<String, String>();
            this.addCacheHeaders(headers, request.getCacheEntry());
            // Delegate the actual HTTP call to the HttpStack implementation.
            httpResponse = this.mHttpStack.performRequest(request, headers);
            StatusLine statusLine = httpResponse.getStatusLine();
            int statusCode = statusLine.getStatusCode();
            responseHeaders = convertHeaders(httpResponse.getAllHeaders());
            // 304 Not Modified: combine the cached entry with the freshly returned headers.
            if (statusCode == 304) {
                Cache.Entry entry = request.getCacheEntry();
                if (entry == null) {
                    return new NetworkResponse(304, null, responseHeaders, true,
                            SystemClock.elapsedRealtime() - requestStart);
                }
                entry.responseHeaders.putAll(responseHeaders);
                return new NetworkResponse(304, entry.data, entry.responseHeaders, true,
                        SystemClock.elapsedRealtime() - requestStart);
            }
            // ... omitted
From the above we can see that the real HTTP work is delegated to mHttpStack.performRequest(), and that different NetworkResponse objects are returned depending on the response status code. HttpStack is again an interface, with the two implementations mentioned earlier: HurlStack and HttpClientStack. Now back in NetworkDispatcher: after the network request completes, the parsed response is written to the cache, and if it is successful this.mDelivery.postResponse(request, response) is called to deliver it back to the main thread. Let's take a look at ExecutorDelivery's postResponse() method:
public void postResponse(Request request, Response response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    this.mResponsePoster.execute(new ExecutorDelivery.ResponseDeliveryRunnable(request, response, runnable));
}
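mResponsePoster is an Executor that hands each ResponseDeliveryRunnable to a Handler. As far as I can tell from the Volley source, RequestQueue creates its default delivery with a Handler bound to the main looper, which is what moves the callback onto the main thread; a sketch of that wiring (an assumption, it is not shown in the listing above):

// Sketch (assumed from RequestQueue's constructor): the default ResponseDelivery
// posts results through a main-looper Handler, so the callbacks run on the UI thread.
ResponseDelivery delivery = new ExecutorDelivery(new Handler(Looper.getMainLooper()));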
Let's see what is done in ResponseDeliveryRunnable:
private class ResponseDeliveryRunnable implements Runnable {
    private final Request mRequest;
    private final Response mResponse;
    private final Runnable mRunnable;

    public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
        this.mRequest = request;
        this.mResponse = response;
        this.mRunnable = runnable;
    }

    public void run() {
        // A canceled request is finished without delivering anything.
        if (this.mRequest.isCanceled()) {
            this.mRequest.finish("canceled-at-delivery");
        } else {
            // Hand the parsed result or the error to the request's listener.
            if (this.mResponse.isSuccess()) {
                this.mRequest.deliverResponse(this.mResponse.result);
            } else {
                this.mRequest.deliverError(this.mResponse.error);
            }
            // An intermediate response (e.g. a soft-expired cache hit) keeps the request alive.
            if (this.mResponse.intermediate) {
                this.mRequest.addMarker("intermediate-response");
            } else {
                this.mRequest.finish("done");
            }
            if (this.mRunnable != null) {
                this.mRunnable.run();
            }
        }
    }
}
On a successful response, this.mRequest.deliverResponse(this.mResponse.result) is called. deliverResponse() is one of the abstract methods that every subclass of Request must implement. Let's take a look at the StringRequest source code:
public class StringRequest extends Request<String> {
    private final Listener<String> mListener;

    public StringRequest(int method, String url, Listener<String> listener, ErrorListener errorListener) {
        super(method, url, errorListener);
        this.mListener = listener;
    }

    public StringRequest(String url, Listener<String> listener, ErrorListener errorListener) {
        this(0, url, listener, errorListener); // 0 == Method.GET
    }

    protected void deliverResponse(String response) {
        this.mListener.onResponse(response);
    }
    // ... omitted
}
In deliverResponse(), this.mListener.onResponse(response) is called, which hands the parsed response to the listener's onResponse() callback. A typical network request with StringRequest is therefore written as follows:
RequestQueue mQueue = Volley.newRequestQueue(getApplicationContext());
StringRequest mStringRequest = new StringRequest(Request.Method.GET, "http://www.baidu.com",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                Log.i("wangshu", response);
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.e("wangshu", error.getMessage(), error);
            }
        });
// Add the request to the request queue.
mQueue.add(mStringRequest);
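As the walk-through shows, a custom request type only needs to implement the two methods the dispatchers and the delivery rely on: parseNetworkResponse(), which runs on a dispatcher thread, and deliverResponse(), which ExecutorDelivery runs on the main thread. A minimal sketch of a hypothetical request that just reports the size of the response body (the class name and listener field are made up for illustration):

// Sketch: a hypothetical Request subclass, showing which thread each override runs on.
public class ContentLengthRequest extends Request<Integer> {
    private final Response.Listener<Integer> mListener;

    public ContentLengthRequest(String url, Response.Listener<Integer> listener,
                                Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        this.mListener = listener;
    }

    @Override
    protected Response<Integer> parseNetworkResponse(NetworkResponse response) {
        // Runs on the network (or cache) dispatch thread.
        return Response.success(response.data.length,
                HttpHeaderParser.parseCacheHeaders(response));
    }

    @Override
    protected void deliverResponse(Integer response) {
        // Runs on the main thread via ExecutorDelivery.
        mListener.onResponse(response);
    }
}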
That covers the general flow of a Volley request from creation to delivery, and concludes this look at the Volley source code.