Volley Source Code Analysis (III) -- RequestQueue, the Core Class of Volley

Source: Internet
Author: User

The previous article showed the internal structure of Request<T>. For that class, Volley keeps the focus on the request itself: the main job of a request is to obtain a response, parse that response according to its own needs, and then call back the listener on the main thread. How the response is actually fetched, and how the threads are started, is none of the request's business.

As mentioned earlier, a request is placed into a queue where it waits to be picked up by a worker thread. RequestQueue, the core class of Volley, can be seen as the bridge connecting requests and responses. In addition, as a collection, RequestQueue gives us a single place to manage all requests in a unified way, a design idea worth learning from.

Android itself also uses queue-based designs; a typical example is the Handler/Looper/Message mechanism. For details, you can refer to blog.csdn.net/crazy__chen/article/details/44889479

Let's look at the RequestQueue class from the source code. First, of course, the fields:

/**
 * A request dispatch queue with a thread pool of dispatchers.
 *
 * Calling {@link #add(Request)} will enqueue the given Request for dispatch,
 * resolving from either cache or network on a worker thread, and then delivering
 * a parsed response on the main thread.
 */
public class RequestQueue {

    /** Callback interface for completed requests. */
    public static interface RequestFinishedListener<T> {
        /** Called when a request has finished processing. */
        public void onRequestFinished(Request<T> request);
    }

    /** Used for generating monotonically-increasing sequence numbers for requests. */
    private AtomicInteger mSequenceGenerator = new AtomicInteger();

    /**
     * Staging area for requests that already have a duplicate request in flight.
     * <ul>
     *     <li>containsKey(cacheKey) indicates that there is a request in flight for the given
     *         cache key.</li>
     *     <li>get(cacheKey) returns waiting requests for the given cache key. The in-flight
     *         request is <em>not</em> contained in that list. Is null if no requests are
     *         staged.</li>
     * </ul>
     */
    private final Map<String, Queue<Request<?>>> mWaitingRequests =
            new HashMap<String, Queue<Request<?>>>();

    /**
     * The set of all requests currently being processed by this RequestQueue. A Request
     * will be in this set if it is waiting in any queue or currently being processed by
     * any dispatcher.
     */
    private final Set<Request<?>> mCurrentRequests = new HashSet<Request<?>>();

    /** The cache triage queue. */
    private final PriorityBlockingQueue<Request<?>> mCacheQueue =
            new PriorityBlockingQueue<Request<?>>();

    /** The queue of requests that are actually going out to the network. */
    private final PriorityBlockingQueue<Request<?>> mNetworkQueue =
            new PriorityBlockingQueue<Request<?>>();

    /** Number of network request dispatcher threads to start. */
    private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;

    /** Cache interface for retrieving and storing responses. */
    private final Cache mCache;

    /** Network interface for performing requests. */
    private final Network mNetwork;

    /** Response delivery mechanism. */
    private final ResponseDelivery mDelivery;

    /** The network dispatchers. */
    private NetworkDispatcher[] mDispatchers;

    /** The cache dispatcher. */
    private CacheDispatcher mCacheDispatcher;

    /** Listeners that are notified when a request has finished. */
    private List<RequestFinishedListener> mFinishedListeners =
            new ArrayList<RequestFinishedListener>();

There are quite a few fields, and the class is fairly coupled. I will pick out the important ones; for now you only need to remember what each field is for, not how it is implemented.

1. First look at List<RequestFinishedListener> mFinishedListeners, the finished-request listener list. The listeners it holds listen on the RequestQueue as a whole, not on a single request: each time a request in the RequestQueue completes, every listener in the list is called back. This is one embodiment of RequestQueue's unified management.

2. AtomicInteger mSequenceGenerator is an atomic class; anyone familiar with Java multithreading will know it exists for thread safety. If you are not, just treat it as a thread-safe int, used here to generate monotonically increasing sequence numbers for the requests added to the queue.
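A minimal plain-Java sketch (independent of Volley; the class name is made up for illustration) of how an AtomicInteger hands out thread-safe, monotonically increasing sequence numbers, mirroring what RequestQueue.getSequenceNumber() does:

```java
import java.util.concurrent.atomic.AtomicInteger;

class SequenceDemo {
    private static final AtomicInteger sequenceGenerator = new AtomicInteger();

    // Each call returns a unique, increasing number, even when invoked
    // concurrently from multiple threads: incrementAndGet() is atomic.
    static int getSequenceNumber() {
        return sequenceGenerator.incrementAndGet();
    }

    public static void main(String[] args) {
        System.out.println(getSequenceNumber()); // 1
        System.out.println(getSequenceNumber()); // 2
        System.out.println(getSequenceNumber()); // 3
    }
}
```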

3. PriorityBlockingQueue<Request<?>> mCacheQueue is the cache queue, holding requests that should be answered from the cache. It is thread-safe and blocking: when the queue is empty and a thread tries to take a request from it, the thread blocks.

4. PriorityBlockingQueue<Request<?>> mNetworkQueue is the network queue, holding requests that are about to go out over the network; it behaves the same way.
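A small runnable sketch of the blocking behavior described above (plain Java, not Volley code): the consumer thread parks inside take() on an empty PriorityBlockingQueue until a producer adds an element, which is exactly how the dispatcher threads wait on mCacheQueue and mNetworkQueue.

```java
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.TimeUnit;

class BlockingQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        final PriorityBlockingQueue<Integer> queue = new PriorityBlockingQueue<>();

        // Consumer: take() blocks while the queue is empty.
        Thread consumer = new Thread(() -> {
            try {
                int item = queue.take(); // blocks until something is added
                System.out.println("took " + item);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        TimeUnit.MILLISECONDS.sleep(100); // consumer is now parked in take()
        queue.add(5);                     // unblocks the consumer
        consumer.join();
    }
}
```

Note that the queue is also a priority queue: elements come out in their natural (or comparator) order, which is how Volley honors request priorities.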

5. CacheDispatcher mCacheDispatcher is the cache dispatcher. It extends Thread, so it is essentially a thread; once started, it enters an endless loop, constantly taking requests from the mCacheQueue cache queue and then looking up their results in the Cache.

6. NetworkDispatcher[] mDispatchers is the array of network dispatchers. Each of them extends Thread, so they are essentially multiple threads; once started, they enter endless loops, constantly taking requests from the mNetworkQueue network queue and then fetching data over the Network.

7. Set<Request<?>> mCurrentRequests records all requests currently owned by the queue, whether waiting in mCacheQueue or mNetworkQueue or being dispatched, for unified management.

8. Cache mCache is the cache object; in object-oriented fashion, the cache is modeled as an entity.

9. Network mNetwork is the network object; likewise, the network is modeled as an entity.

10. ResponseDelivery mDelivery is the delivery object, responsible for delivering each response back to its corresponding request. As mentioned before, it exists mainly to reduce coupling and to make it possible to touch the UI from the main thread.

11. Map<String, Queue<Request<?>>> mWaitingRequests is the waiting map that groups duplicate requests; every Queue in it holds identical requests. Why do we need this map? The key of the map is the cache key of the request, which is essentially its URL. If we issue multiple requests for the same URL, i.e. for the same resource, Volley puts these duplicates into one queue and stores that queue in the map under the key.

Because these requests are identical, their results are identical too. So we only need to actually fetch the result of one request; every other identical request can then take it from the cache.

So the waiting map works like this: when one of the requests gets its response, the whole waiting queue for that key is moved into the cache queue mCacheQueue, so that those requests go to the Cache for their results.

This is how Volley handles duplicate requests.
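The staging idea above can be sketched in a few lines of plain Java. This is a deliberately simplified model, not Volley's actual code (requests are just strings, and there is no dispatcher removing entries from the cache queue): the first request for a key is dispatched and the key is marked in flight; duplicates are parked; when the first finishes, the parked ones are released into the cache queue.

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

class StagingDemo {
    // cacheKey -> parked duplicate requests; a null value means exactly one
    // request for that key is in flight with no duplicates parked yet.
    static final Map<String, Queue<String>> waiting = new HashMap<>();
    static final Queue<String> cacheQueue = new LinkedList<>();

    // Simplified add(): mirrors the cacheable branch of RequestQueue.add().
    static void add(String cacheKey, String request) {
        if (waiting.containsKey(cacheKey)) {
            // A request with the same key is already in flight: park this one.
            Queue<String> staged = waiting.get(cacheKey);
            if (staged == null) {
                staged = new LinkedList<>();
            }
            staged.add(request);
            waiting.put(cacheKey, staged);
        } else {
            // First request for this key: mark it in flight and dispatch it.
            waiting.put(cacheKey, null);
            cacheQueue.add(request);
        }
    }

    // Simplified finish(): release parked duplicates to the cache queue,
    // where they will be answered from the now-primed cache.
    static void finish(String cacheKey) {
        Queue<String> staged = waiting.remove(cacheKey);
        if (staged != null) {
            cacheQueue.addAll(staged);
        }
    }

    public static void main(String[] args) {
        add("GET:/user", "r1"); // dispatched
        add("GET:/user", "r2"); // parked
        add("GET:/user", "r3"); // parked
        System.out.println(cacheQueue); // [r1]
        finish("GET:/user");
        System.out.println(cacheQueue); // [r1, r2, r3]
    }
}
```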


Understanding the fields above is actually enough to understand the role of the RequestQueue class. Combining them, let's look at the flowchart.


OK, let's start with the constructors.

/**
 * Creates the worker pool. Processing will not begin until {@link #start()} is called.
 *
 * @param cache A Cache to use for persisting responses to disk
 * @param network A Network interface for performing HTTP requests
 * @param threadPoolSize Number of network dispatcher threads to create
 * @param delivery A ResponseDelivery interface for posting responses and errors
 */
public RequestQueue(Cache cache, Network network, int threadPoolSize,
        ResponseDelivery delivery) {
    mCache = cache;          // cache for persisting responses to disk
    mNetwork = network;      // network interface for executing HTTP requests
    mDispatchers = new NetworkDispatcher[threadPoolSize]; // dispatcher array sized by the pool
    mDelivery = delivery;    // delivery interface for responses and errors
}

/**
 * Creates the worker pool. Processing will not begin until {@link #start()} is called.
 *
 * @param cache A Cache to use for persisting responses to disk
 * @param network A Network interface for performing HTTP requests
 * @param threadPoolSize Number of network dispatcher threads to create
 */
public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}
For RequestQueue, the required parameters are the cache, the network, the delivery object, and the number of network threads.

Matching these against the fields above, you can see that all of them are supplied from outside. Referring back to the opening article of this series, they are created inside the Volley class, while externally we create and start the queue through the Volley.newRequestQueue() method.


Then look at the start() method, which starts the queue:

/**
 * Starts the dispatchers in this queue.
 */
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();
    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}

As you can see, "starting the queue" means creating the CacheDispatcher cache dispatcher and the mDispatchers[] network dispatcher array. From the earlier introduction we know these are threads, so inside start() what actually happens is that their start() methods are called. In other words, the essence of starting a RequestQueue is starting these dispatchers; once started, they enter their endless loops and constantly pull requests from the queues to fetch data.

Since the number of dispatchers is limited (set by the threadPoolSize parameter we pass to the constructor), the number of threads executing data requests in Volley is limited too. This avoids the overhead of creating threads over and over, but an ill-chosen value can also reduce efficiency.

So the right threadPoolSize differs from application to application; you should determine this value for your own project by testing under realistic conditions.
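As a starting point for such tuning, one common heuristic (an assumption of mine, not Volley's rule: Volley itself simply defaults to 4 via DEFAULT_NETWORK_THREAD_POOL_SIZE) is to scale the dispatcher count with the CPU count, clamped to a sane range for I/O-bound network work:

```java
class PoolSizeDemo {
    // Hypothetical heuristic: 2x the CPU count, clamped to [2, 8].
    // Network dispatchers spend most of their time blocked on I/O,
    // so more threads than cores can still pay off.
    static int suggestedPoolSize() {
        int cpus = Runtime.getRuntime().availableProcessors();
        return Math.max(2, Math.min(cpus * 2, 8));
    }

    public static void main(String[] args) {
        System.out.println("suggested dispatchers: " + suggestedPoolSize());
    }
}
```

Treat the result only as a first guess to measure against, not a substitute for profiling.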


When we are done, let's see how RequestQueue shuts down.

/**
 * Stops the cache and network dispatchers.
 */
public void stop() {
    if (mCacheDispatcher != null) {
        mCacheDispatcher.quit();
    }
    for (int i = 0; i < mDispatchers.length; i++) {
        if (mDispatchers[i] != null) {
            mDispatchers[i].quit();
        }
    }
}
By contrast, the essence of stop() is to shut down all the dispatchers by calling their quit() methods. What quit() does is easy to guess: it raises the dispatcher's internal quit flag (and interrupts the thread) so that the while loop exits.
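A minimal sketch of that shutdown pattern in plain Java (illustrative only; Volley's real dispatchers additionally do cache lookups or network calls inside the loop): the dispatcher blocks in take(), and quit() raises a volatile flag then interrupts the thread so a parked take() wakes up immediately.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class DispatcherDemo extends Thread {
    private final BlockingQueue<String> queue;
    private volatile boolean quit = false;

    DispatcherDemo(BlockingQueue<String> queue) {
        this.queue = queue;
    }

    // Mirrors the dispatchers' quit(): raise the flag, then interrupt so a
    // thread parked in take() wakes up instead of blocking forever.
    public void quit() {
        quit = true;
        interrupt();
    }

    @Override
    public void run() {
        while (true) {
            String request;
            try {
                request = queue.take(); // blocks until a request arrives
            } catch (InterruptedException e) {
                if (quit) {
                    return; // stop looping only when quit() was called
                }
                continue;
            }
            System.out.println("processing " + request);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> q = new LinkedBlockingQueue<>();
        DispatcherDemo d = new DispatcherDemo(q);
        d.start();
        q.put("r1");
        Thread.sleep(100);
        d.quit();
        d.join();
    }
}
```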


Then look at the add() method, the all-important way a request joins the queue:

/**
 * Adds a Request to the dispatch queue.
 * @param request The request to service
 * @return The passed-in request
 */
public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // The request uses the cache:
    // insert it into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey,
            // indicating there is now a request in flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}
For any request, add() first puts it into mCurrentRequests, which again serves the unified management of requests.

Then shouldCache() is called to decide whether the request should be served from the cache or go over the network. If it is a network request, it is added to mNetworkQueue and the method returns.

If the request should use the cache, we check whether mWaitingRequests already has an identical request in flight; if so, the new request is parked in mWaitingRequests.

If not, the request is added to mCacheQueue for a cache lookup.


So far, we know that the dispatchers take requests from the queues, but not yet how the requests are actually executed. This also reflects the soundness of Volley's design: responsibilities are distributed through composition, and each class has a relatively single responsibility.

We mentioned that one of the important roles of RequestQueue is unified management of requests. In practice, this so-called management is mostly about finishing requests; let's look at those methods.

/**
 * Called from {@link Request#finish(String)}, indicating that processing of the given
 * request has finished.
 *
 * <p>Releases waiting requests for <code>request.getCacheKey()</code> if
 * <code>request.shouldCache()</code>.</p>
 */
public <T> void finish(Request<T> request) {
    // Remove from the set of requests currently being processed.
    synchronized (mCurrentRequests) {
        mCurrentRequests.remove(request);
    }
    synchronized (mFinishedListeners) {
        // Call back every finished-request listener.
        for (RequestFinishedListener<T> listener : mFinishedListeners) {
            listener.onRequestFinished(request);
        }
    }

    if (request.shouldCache()) {
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            // Remove and release the waiting queue for this cache key.
            Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
            if (waitingRequests != null) {
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
                            waitingRequests.size(), cacheKey);
                }
                // Process all queued up requests. They won't be considered as in flight, but
                // that's not a problem as the cache has been primed by 'request'.
                mCacheQueue.addAll(waitingRequests);
            }
        }
    }
}

finish() indicates that a particular request has completed; only the finished request is passed in, and it is then removed from each collection.

Note that after a request completes, all identical requests waiting in mWaitingRequests are added to the mCacheQueue cache queue. That means they fetch their results from the cache, which avoids the overhead of repeating identical network requests. This is one of the highlights of Volley.


Next let's look at the cancellation methods.

/**
 * A simple predicate or filter interface for requests, for use by
 * {@link RequestQueue#cancelAll(RequestFilter)}.
 */
public interface RequestFilter {
    public boolean apply(Request<?> request);
}

/**
 * Cancels all requests in this queue for which the given filter applies.
 * @param filter The filtering function to use
 */
public void cancelAll(RequestFilter filter) {
    synchronized (mCurrentRequests) {
        for (Request<?> request : mCurrentRequests) {
            if (filter.apply(request)) {
                request.cancel();
            }
        }
    }
}

/**
 * Cancels all requests in this queue with the given tag. Tag must be non-null
 * and equality is by identity.
 */
public void cancelAll(final Object tag) {
    if (tag == null) {
        throw new IllegalArgumentException("Cannot cancelAll with a null tag");
    }
    cancelAll(new RequestFilter() {
        @Override
        public boolean apply(Request<?> request) {
            return request.getTag() == tag;
        }
    });
}

This design is quite ingenious: to make cancellation flexible, a RequestFilter interface lets callers define their own cancellation rule.

In cancelAll(RequestFilter filter), we pass in a filter and can cancel exactly the kinds of requests we want, much like using a FileFilter when traversing files.

In the same spirit, Volley also provides a concrete implementation, cancelAll(final Object tag), which cancels requests by tag; here we finally see the use of the mTag field in the Request<T> class.
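The filter-plus-tag pattern can be reproduced in a few lines of plain Java. This is an illustrative model, not Volley's code (Req, ReqFilter, and the canceled flag are made-up stand-ins for Request, RequestFilter, and Request.cancel()):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class CancelDemo {
    // Stand-in for Request, keeping only the tag.
    static class Req {
        final Object tag;
        boolean canceled = false;
        Req(Object tag) { this.tag = tag; }
    }

    // Mirrors RequestQueue.RequestFilter.
    interface ReqFilter {
        boolean apply(Req request);
    }

    // Mirrors cancelAll(RequestFilter): cancel everything the filter matches.
    static void cancelAll(List<Req> current, ReqFilter filter) {
        for (Req r : current) {
            if (filter.apply(r)) {
                r.canceled = true;
            }
        }
    }

    // Mirrors cancelAll(Object tag): identity comparison, built on the filter.
    static void cancelAll(List<Req> current, final Object tag) {
        cancelAll(current, new ReqFilter() {
            @Override
            public boolean apply(Req request) {
                return request.tag == tag;
            }
        });
    }

    public static void main(String[] args) {
        Object activityTag = new Object();
        List<Req> current = new ArrayList<>(Arrays.asList(
                new Req(activityTag), new Req("other"), new Req(activityTag)));
        cancelAll(current, activityTag);
        int canceled = 0;
        for (Req r : current) if (r.canceled) canceled++;
        System.out.println(canceled + " canceled"); // 2 canceled
    }
}
```

Note how the tag overload is just a prepackaged filter, which is why tagging requests per Activity makes it easy to cancel them all in onStop().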

It can be said that Volley embodies the beauty of design patterns everywhere.


OK, that covers RequestQueue and its overall structure. The remaining question is how CacheDispatcher and NetworkDispatcher take requests out of the queues and execute them. But those concerns are not tightly bound to the queue itself: the actual execution of the work is handed over to those two classes, which, in a word, again reflects the single responsibility principle.

The next article will analyze the implementation of these two classes.


