Android Advanced - Volley - 2. RequestQueue & NetworkDispatcher



 

 

Using Volley is simple; the process consists of two steps (a usage sketch follows the list):

1. Create a request queue: RequestQueue queue = Volley.newRequestQueue(context);

2. Create a request XXRequest and add it to the queue: queue.add(xxRequest);
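As a concrete illustration, here is a minimal usage sketch, assuming a StringRequest from the Volley toolbox and a hypothetical URL:

// Minimal usage sketch. Classes used: com.android.volley.{Request, RequestQueue, Response, VolleyError},
// com.android.volley.toolbox.{Volley, StringRequest}, android.util.Log. The URL is hypothetical.
RequestQueue queue = Volley.newRequestQueue(context);

StringRequest request = new StringRequest(Request.Method.GET,
        "http://example.com/data",
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // Runs on the main thread once the delivery posts the result.
                Log.d("Volley", "Got response: " + response);
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.e("Volley", "Request failed", error);
            }
        });

queue.add(request);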

1. Volley.newRequestQueue()

Once a queue is created, you only need to add requests of any type to it and they are processed automatically. Now let's take a look at what Volley.newRequestQueue() actually does:

public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }

    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            // Prior to Gingerbread, HttpUrlConnection was unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }

    Network network = new BasicNetwork(stack);

    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();

    return queue;
}

The core code in this function is:

RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
queue.start();

The two arguments passed to the RequestQueue constructor here (the DiskBasedCache and the Network) handle cache requests and network requests respectively; more on them later. For now, let's look at queue.start():

public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.

    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}

From the code we can see that queue.start() starts mCacheDispatcher and the NetworkDispatchers, i.e. the threads responsible for processing cache requests and network requests.

 

In summary, Volley.newRequestQueue() creates a new request queue and starts the cache-request processing thread mCacheDispatcher and the network-request processing threads (the NetworkDispatchers).
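Since newRequestQueue() just wires these pieces together, an application that needs more control can do the wiring itself and tear it down later. A minimal hedged sketch, assuming the RequestQueue(Cache, Network, int threadPoolSize) constructor and the stop() method of this Volley version (the cache directory name is hypothetical):

// Manual wiring of the same pieces newRequestQueue() creates. Classes used: java.io.File,
// com.android.volley.{Cache, Network, RequestQueue}, com.android.volley.toolbox.{BasicNetwork, DiskBasedCache, HurlStack}.
File cacheDir = new File(context.getCacheDir(), "my-volley-cache");   // hypothetical directory name

Cache cache = new DiskBasedCache(cacheDir, 5 * 1024 * 1024);          // the cache parameter, 5 MB on disk
Network network = new BasicNetwork(new HurlStack());                  // the network parameter

RequestQueue queue = new RequestQueue(cache, network, 2);             // 2 NetworkDispatcher threads
queue.start();                                                        // starts CacheDispatcher + the NetworkDispatchers

// ... add requests as usual ...

queue.stop();   // quits the dispatcher threads that start() created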

 

2. NetworkDispatcher & CacheDispatcher

NetworkDispatcher is used to process network requests. Its constructor is:

public NetworkDispatcher(BlockingQueue<Request<?>> queue,
        Network network, Cache cache,
        ResponseDelivery delivery) {
    mQueue = queue;
    mNetwork = network;
    mCache = cache;
    mDelivery = delivery;
}

Now let's look back at the start() function of RequestQueue (RequestQueue.java, start()), where:

NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork, mCache, mDelivery);

Here, mNetworkQueue is declared and initialized in RequestQueue (RequestQueue.java):

private final PriorityBlockingQueue<Request<?>> mNetworkQueue =
        new PriorityBlockingQueue<Request<?>>();

This is a BlockingQueue of requests, used to hold pending network requests. (RequestQueue also declares and initializes an mCacheQueue, similar to mNetworkQueue, which holds cache requests.)
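Since the dispatcher's behaviour hinges on this queue being a blocking, priority-ordered structure, here is a small self-contained sketch of PriorityBlockingQueue itself: take() blocks while the queue is empty, and elements come out in priority order rather than insertion order. The Job class is hypothetical, standing in for Request<?> (which provides the same behaviour through its compareTo()):

import java.util.concurrent.PriorityBlockingQueue;

// Hypothetical element type, purely for illustration.
class Job implements Comparable<Job> {
    final int priority;   // smaller value = comes out of the queue first
    final String name;
    Job(int priority, String name) { this.priority = priority; this.name = name; }
    @Override public int compareTo(Job other) { return Integer.compare(this.priority, other.priority); }
}

public class BlockingQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        final PriorityBlockingQueue<Job> queue = new PriorityBlockingQueue<Job>();
        queue.add(new Job(2, "low-priority request"));
        queue.add(new Job(1, "high-priority request"));

        // A consumer thread that, like NetworkDispatcher.run(), blocks on take() when the queue is empty.
        Thread dispatcher = new Thread(new Runnable() {
            @Override public void run() {
                try {
                    while (true) {
                        Job job = queue.take();   // blocks here until an element is available
                        System.out.println("processing " + job.name);   // the high-priority job comes out first
                    }
                } catch (InterruptedException e) {
                    // interrupted: time to quit, like the mQuit check in NetworkDispatcher
                }
            }
        });
        dispatcher.start();

        Thread.sleep(500);       // let the two queued jobs drain; the thread then blocks in take()
        dispatcher.interrupt();  // wake it up so the demo can exit
    }
}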

Now let's take a look at what is executed in NetworkDispatcher's run() function:

public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    Request<?> request;
    while (true) {
        try {
            // Take a request from the queue.
            request = mQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }

        try {
            request.addMarker("network-queue-take");

            // If the request was cancelled already, do not perform the
            // network request.
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }

            // Tag the request (if API >= 14)
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH) {
                TrafficStats.setThreadStatsTag(request.getTrafficStatsTag());
            }

            // Perform the network request.
            NetworkResponse networkResponse = mNetwork.performRequest(request);
            request.addMarker("network-http-complete");

            // If the server returned 304 AND we delivered a response already,
            // we're done -- don't deliver a second identical response.
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }

            // Parse the response here on the worker thread.
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");

            // Write to cache if applicable.
            // TODO: Only update cache metadata instead of entire record for 304s.
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }

            // Post the response back.
            request.markDelivered();
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            mDelivery.postError(request, new VolleyError(e));
        }
    }
}

The body of while (true) runs in a loop. The general flow: take a request from mQueue (which is RequestQueue's mNetworkQueue). Note that mNetworkQueue is of type PriorityBlockingQueue, i.e. a blocking queue with priority ordering; when the queue is empty, the thread blocks at mQueue.take() until a new request arrives.

Next comes if (mQuit) { return; }: when quit() is called, mQuit is set to true and the thread exits. This is not important here, so you can leave it alone for now.

The code then checks whether the request has been cancelled. If it has, the loop continues and a new request is taken from mNetworkQueue. If it has not, the dispatcher runs NetworkResponse networkResponse = mNetwork.performRequest(request); (the API >= 14 traffic-tagging check just before it, and a few other details, are skipped here; follow the main logic first). mNetwork is a Network interface, and its performRequest function executes the request and returns a NetworkResponse. Its prototype is (Network.java):

public NetworkResponse performRequest(Request<?> request) throws VolleyError;
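Any class implementing this interface can be handed to the dispatcher. Below is a hedged sketch of a hypothetical implementation (a canned "fake" network, e.g. for tests), assuming the NetworkResponse(statusCode, data, headers, notModified) constructor present in this version of Volley; BasicNetwork is the real implementation Volley wires in:

import java.util.HashMap;
import java.util.Map;

import com.android.volley.Network;
import com.android.volley.NetworkResponse;
import com.android.volley.Request;
import com.android.volley.VolleyError;

// Hypothetical Network implementation that always returns a canned JSON body,
// showing how the interface is consumed by NetworkDispatcher.
public class FakeNetwork implements Network {
    @Override
    public NetworkResponse performRequest(Request<?> request) throws VolleyError {
        byte[] body = "{\"ok\":true}".getBytes();
        Map<String, String> headers = new HashMap<String, String>();
        headers.put("Content-Type", "application/json");
        return new NetworkResponse(200, body, headers, false);
    }
}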

The mNetwork used by NetworkDispatcher is passed in through its constructor, in RequestQueue.start():

NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork, mCache, mDelivery);

And the mNetwork of RequestQueue is set in its constructor, with the Network created in Volley.newRequestQueue():

Network network = new BasicNetwork(stack);
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);

The request then parses the returned NetworkResponse into a Response<?>. Next, the dispatcher decides whether to cache the data, and finally runs:

request.markDelivered();   // sets mResponseDelivered to true; ignore this for now
mDelivery.postResponse(request, response);

The postResponse call hands the result back to the main thread.
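From the application side, the two hooks the dispatcher exercises on a Request are parseNetworkResponse() (called on the worker thread) and, via the delivery, deliverResponse() (called on the main thread). Here is a hedged sketch of a custom request overriding both; the PlainTextRequest class itself is illustrative (it is essentially what the toolbox StringRequest already does), while the overridden methods and the Response/HttpHeaderParser helpers are Volley's own:

import java.io.UnsupportedEncodingException;

import com.android.volley.NetworkResponse;
import com.android.volley.ParseError;
import com.android.volley.Request;
import com.android.volley.Response;
import com.android.volley.toolbox.HttpHeaderParser;

// Illustrative custom request that delivers the response body as a String.
public class PlainTextRequest extends Request<String> {
    private final Response.Listener<String> mListener;

    public PlainTextRequest(String url, Response.Listener<String> listener,
                            Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mListener = listener;
    }

    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        // Runs on the NetworkDispatcher thread, right after mNetwork.performRequest() returns.
        try {
            String parsed = new String(response.data,
                    HttpHeaderParser.parseCharset(response.headers));
            return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
        } catch (UnsupportedEncodingException e) {
            return Response.error(new ParseError(e));
        }
    }

    @Override
    protected void deliverResponse(String response) {
        // Called on the main thread after mDelivery.postResponse().
        mListener.onResponse(response);
    }
}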

CacheDispatcher is used to process cache requests. Since we assume no caching is used here, it is not analyzed in this article.

 

 

 

To sum up, we have learned about RequestQueue and NetworkDispatcher:

1. What Volley.newRequestQueue() does: it initializes the RequestQueue, and the subsequent queue.start() call starts the mCacheDispatcher thread and the NetworkDispatcher threads (one per slot in mDispatchers).

2. The NetworkDispatcher class: once its thread is started, it keeps taking Requests from the network request queue mQueue until the thread is stopped, and completes each Request through the Network interface. Network executes the request and returns a NetworkResponse; the request parses that NetworkResponse into a Response; and finally the result is delivered through mDelivery. The classes used here but not yet analyzed are Network, NetworkResponse, Response, and mDelivery. To be continued.








