Volley is an HTTP library that makes networking for Android apps easier and, most importantly, faster. Volley's source is available in the open AOSP repository.
Volley has the following advantages:
- Automatic scheduling of network requests.
- Multiple concurrent network connections.
- Transparent disk and memory caching of responses, with standard HTTP cache coherence (cache consistency).
- Support for request prioritization.
- A cancellation API: you can cancel a single request, or cancel whole blocks of requests in the queue.
- Ease of customization of the framework, for example for retry and backoff.
- Strong ordering, which makes it easy to load network data asynchronously and populate the UI correctly.
- Debugging and tracing tools.
Volley excels at the RPC-type operations used to populate a UI, such as fetching a page of search results. It integrates easily with any protocol and can deliver the result of an operation as a raw string, an image, or JSON. By providing this commonly needed functionality out of the box, Volley frees us from writing boilerplate so we can focus on the app's own logic.
Volley is not suitable for downloading large files, because it holds all responses in memory during parsing. For large download operations, consider using DownloadManager instead.
The core Volley code is hosted in frameworks/volley in the AOSP repository, with related tools under toolbox. The easiest way to add Volley to a project is to clone the repository and set it up as a library project.

Use the following command to clone the repository:

git clone https://android.googlesource.com/platform/frameworks/volley

Then import the downloaded source into your project as an Android library project.
Now let's dissect Volley's Java source code.
RequestQueue
When using Volley, you first need a RequestQueue object. It accepts request tasks of all kinds; usually you obtain a default queue by calling Volley.newRequestQueue(). We start with this method; here is its source:
```java
public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, null);
}

public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }

    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            // Prior to Gingerbread, HttpUrlConnection was unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }

    Network network = new BasicNetwork(stack);

    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();

    return queue;
}
```
newRequestQueue(context) calls its overload newRequestQueue(context, null). This method first obtains the cache directory from the context and builds the userAgent string. It then checks whether stack is null; from the call above we know that by default stack == null, so a new stack object is created. Which one depends on the system version: when the SDK version is 9 or higher the stack is a HurlStack, otherwise it is an HttpClientStack. The difference is that HurlStack uses HttpURLConnection for network communication, while HttpClientStack uses HttpClient. With the stack in hand, a BasicNetwork object is created, which, as we can guess, handles the actual network requests. Then a new RequestQueue is created, which is the request queue finally returned to us. Its constructor takes two parameters: the first is a DiskBasedCache object, which, as the name suggests, is a disk cache rooted at the cacheDir obtained at the start of the method; the second is the network object just created. Finally, queue.start() starts the request queue.
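The version branch above can be illustrated with a small standalone sketch. The stub interface and classes below are hypothetical stand-ins (the real HurlStack and HttpClientStack need the Android runtime); only the selection logic mirrors newRequestQueue():

```java
// Standalone sketch of newRequestQueue's stack selection. The names mirror
// Volley's classes, but these stubs are hypothetical stand-ins for illustration.
interface HttpStack {
    String name();
}

class HurlStack implements HttpStack {          // in Volley: wraps HttpURLConnection
    public String name() { return "HurlStack"; }
}

class HttpClientStack implements HttpStack {    // in Volley: wraps Apache HttpClient
    public String name() { return "HttpClientStack"; }
}

public class StackSelection {
    // Mirrors the check in Volley.newRequestQueue: Gingerbread (API 9) and
    // later get HttpURLConnection; older releases get HttpClient.
    static HttpStack chooseStack(int sdkInt) {
        if (sdkInt >= 9) {
            return new HurlStack();
        }
        return new HttpClientStack();
    }

    public static void main(String[] args) {
        System.out.println(chooseStack(19).name()); // HurlStack
        System.out.println(chooseStack(8).name());  // HttpClientStack
    }
}
```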
Before analyzing start(), let's look at some key internal fields of RequestQueue and how it is constructed:
```java
// Repeated requests will be staged in this collection.
private final Map<String, Queue<Request>> mWaitingRequests =
        new HashMap<String, Queue<Request>>();

// The set of all request tasks currently being processed.
private final Set<Request> mCurrentRequests = new HashSet<Request>();

// Queue of cacheable tasks.
private final PriorityBlockingQueue<Request> mCacheQueue =
        new PriorityBlockingQueue<Request>();

// Queue of network request tasks.
private final PriorityBlockingQueue<Request> mNetworkQueue =
        new PriorityBlockingQueue<Request>();

// Default thread pool size.
private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;

// For storing and retrieving response data.
private final Cache mCache;

// For performing network requests.
private final Network mNetwork;

// For delivering response data.
private final ResponseDelivery mDelivery;

// Network request dispatchers.
private NetworkDispatcher[] mDispatchers;

// Cache dispatcher.
private CacheDispatcher mCacheDispatcher;

public RequestQueue(Cache cache, Network network) {
    this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
}

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

public RequestQueue(Cache cache, Network network, int threadPoolSize,
        ResponseDelivery delivery) {
    mCache = cache;
    mNetwork = network;
    mDispatchers = new NetworkDispatcher[threadPoolSize];
    mDelivery = delivery;
}
```
RequestQueue has more than one constructor, and they all eventually call the last one. There, mCache and mNetwork are set to the DiskBasedCache and BasicNetwork passed in from newRequestQueue. mDispatchers is the array of network request dispatchers, with a default size of 4 (DEFAULT_NETWORK_THREAD_POOL_SIZE). mDelivery is set to new ExecutorDelivery(new Handler(Looper.getMainLooper())) and is used to deliver response data, as described later. As you can see, we can in fact build a customized RequestQueue rather than relying on the default one from newRequestQueue.
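The two task queues are PriorityBlockingQueues because Volley's Request implements Comparable, ordering requests by priority and then by the sequence number assigned in add(). The following self-contained sketch (FakeRequest is a hypothetical stand-in, not Volley's Request) shows the ordering the dispatchers see:

```java
import java.util.concurrent.PriorityBlockingQueue;

// Sketch of why mCacheQueue/mNetworkQueue are PriorityBlockingQueues.
// FakeRequest is a hypothetical stand-in for Volley's Comparable Request.
class FakeRequest implements Comparable<FakeRequest> {
    final int priority;   // higher value = more urgent (simplified)
    final int sequence;   // FIFO tiebreaker within the same priority
    final String tag;

    FakeRequest(int priority, int sequence, String tag) {
        this.priority = priority;
        this.sequence = sequence;
        this.tag = tag;
    }

    @Override
    public int compareTo(FakeRequest other) {
        // Higher priority first; equal priorities fall back to insertion order.
        return priority == other.priority
                ? Integer.compare(sequence, other.sequence)
                : Integer.compare(other.priority, priority);
    }
}

public class QueueOrdering {
    // Drains three queued requests and reports the order they come out in.
    static String takeOrder() {
        PriorityBlockingQueue<FakeRequest> queue = new PriorityBlockingQueue<>();
        queue.add(new FakeRequest(0, 1, "low-first"));
        queue.add(new FakeRequest(1, 2, "high"));
        queue.add(new FakeRequest(0, 3, "low-second"));
        StringBuilder order = new StringBuilder();
        try {
            for (int i = 0; i < 3; i++) {
                if (order.length() > 0) order.append(',');
                // take() blocks when the queue is empty, which is exactly what
                // the dispatcher threads rely on in their while(true) loops.
                order.append(queue.take().tag);
            }
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return order.toString();
    }

    public static void main(String[] args) {
        System.out.println(takeOrder()); // high,low-first,low-second
    }
}
```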
Now let's look at how the start() method starts the request queue:
```java
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
```
The code is simple and does two things: first, create and start one CacheDispatcher; second, create and start four NetworkDispatchers. "Starting the request queue" therefore means handing tasks over to the cache dispatcher and the network request dispatchers.
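The dispatcher model that start() sets up is several worker threads all blocking on one shared queue, so whichever thread is free picks up the next task. Here is a minimal self-contained sketch of that pattern (names are illustrative, not Volley's):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the worker model behind start(): N threads compete for tasks on
// a single shared blocking queue, like Volley's four NetworkDispatchers.
public class DispatcherSketch {
    // Starts 'workers' dispatcher threads, enqueues 'tasks' jobs, and returns
    // how many jobs were processed once all of them complete.
    static int runTasks(int workers, int tasks) {
        BlockingQueue<Runnable> networkQueue = new LinkedBlockingQueue<>();
        AtomicInteger processed = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(tasks);

        for (int i = 0; i < workers; i++) {
            Thread dispatcher = new Thread(() -> {
                while (true) {
                    try {
                        networkQueue.take().run(); // blocks until work arrives
                    } catch (InterruptedException e) {
                        return; // quit signal, like mQuit in Volley's dispatchers
                    }
                }
            });
            dispatcher.setDaemon(true);
            dispatcher.start();
        }

        for (int i = 0; i < tasks; i++) {
            networkQueue.add(() -> {
                processed.incrementAndGet();
                done.countDown();
            });
        }
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed.get();
    }

    public static void main(String[] args) {
        // Four "network dispatchers", like DEFAULT_NETWORK_THREAD_POOL_SIZE.
        System.out.println(runTasks(4, 8)); // 8
    }
}
```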
That leaves one question: how does a request task join the request queue? By calling the add() method. Let's look at how it is handled internally:
```java
public Request add(Request request) {
    // Tag the request as belonging to this queue and add it to the set of
    // current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a
            // request in flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}
```
This method is a bit long, but its logic is not complicated. First the task is added to mCurrentRequests. Then it checks whether the request should be cached: if not, the request goes straight into the network queue mNetworkQueue and is returned. By default all requests are cacheable; you can call setShouldCache(boolean shouldCache) to change that. Every cacheable request is added to the cache queue mCacheQueue, but mWaitingRequests is checked first so that duplicate in-flight requests are staged rather than issued twice.
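The mWaitingRequests staging trick is worth isolating. The sketch below reproduces just that logic with requests reduced to Strings (a simplified stand-in, not Volley's types): the first request for a cache key is actually enqueued, duplicates are parked, and finishing the in-flight request releases them:

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

// Sketch of the mWaitingRequests staging logic in RequestQueue.add(), with
// requests simplified to Strings. Not Volley's code; same shape of logic.
public class WaitingRequests {
    private final Map<String, Queue<String>> waiting = new HashMap<>();

    /** Returns true if the request should actually be enqueued, false if staged. */
    public boolean add(String cacheKey, String request) {
        if (waiting.containsKey(cacheKey)) {
            // A request with this key is already in flight; queue up behind it.
            Queue<String> staged = waiting.get(cacheKey);
            if (staged == null) {
                staged = new LinkedList<>();
            }
            staged.add(request);
            waiting.put(cacheKey, staged);
            return false;
        }
        // A null queue marks this key as having a request in flight.
        waiting.put(cacheKey, null);
        return true;
    }

    /** Called when the in-flight request finishes: releases staged duplicates. */
    public Queue<String> finish(String cacheKey) {
        Queue<String> staged = waiting.remove(cacheKey);
        return staged == null ? new LinkedList<>() : staged;
    }

    public static void main(String[] args) {
        WaitingRequests wr = new WaitingRequests();
        System.out.println(wr.add("GET:/user/1", "a"));      // true  -> enqueue
        System.out.println(wr.add("GET:/user/1", "b"));      // false -> staged
        System.out.println(wr.finish("GET:/user/1").size()); // 1 duplicate released
    }
}
```

In Volley itself, Request.finish() performs the release step, handing the staged duplicates the freshly written cache entry instead of hitting the network again.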
Dispatcher
After RequestQueue calls start(), request tasks are handed to the CacheDispatcher and the NetworkDispatchers. Both extend Thread; they are background workers responsible for fetching data from the cache and from the network, respectively.
CacheDispatcher
CacheDispatcher continuously takes tasks from mCacheQueue. Here is its run() method:
```java
@Override
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

    // Make a blocking call to initialize the cache.
    mCache.initialize();

    while (true) {
        try {
            // Get a request from the cache triage queue, blocking until
            // at least one is available.
            final Request request = mCacheQueue.take();
            request.addMarker("cache-queue-take");

            // If the request has been canceled, don't bother dispatching it.
            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }

            // Attempt to retrieve this item from cache.
            Cache.Entry entry = mCache.get(request.getCacheKey());
            if (entry == null) {
                request.addMarker("cache-miss");
                // Cache miss; send off to the network dispatcher.
                mNetworkQueue.put(request);
                continue;
            }

            // If it is completely expired, just send it to the network.
            if (entry.isExpired()) {
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                mNetworkQueue.put(request);
                continue;
            }

            // We have a cache hit; parse its data for delivery back to the request.
            request.addMarker("cache-hit");
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
            request.addMarker("cache-hit-parsed");

            if (!entry.refreshNeeded()) {
                // Completely unexpired cache hit. Just deliver the response.
                mDelivery.postResponse(request, response);
            } else {
                // Soft-expired cache hit. We can deliver the cached response,
                // but we need to also send the request to the network for
                // refreshing.
                request.addMarker("cache-hit-refresh-needed");
                request.setCacheEntry(entry);

                // Mark the response as intermediate.
                response.intermediate = true;

                // Post the intermediate response back to the user and have
                // the delivery then forward the request along to the network.
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            mNetworkQueue.put(request);
                        } catch (InterruptedException e) {
                            // Not much we can do about this.
                        }
                    }
                });
            }
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }
    }
}
```
run() first calls mCache.initialize() to initialize the cache, then enters an infinite while(true) loop. Each iteration takes a task from the cache queue and checks whether it has been canceled; if so, request.finish("cache-discard-canceled") is called and the loop restarts. Otherwise it looks up cached data for the task. If no cached data exists, the task joins the network request queue and the loop restarts. If a cache entry is found, its expiry is checked: an expired entry also sends the task to the network queue; otherwise the request's parseNetworkResponse() parses the cached response. The final step checks the entry's freshness: if no refresh is needed, mDelivery.postResponse(request, response) delivers the response directly; if a refresh is needed, the cached response is delivered as an intermediate result and the task still joins mNetworkQueue for revalidation.
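The three-way decision above can be condensed into a small sketch. FakeEntry is a hypothetical stand-in for Volley's Cache.Entry, whose real isExpired()/refreshNeeded() likewise compare a hard expiry (ttl) and a soft expiry (softTtl) against the current time:

```java
// Sketch of the cache triage decision in CacheDispatcher.run().
// FakeEntry is a simplified stand-in for Volley's Cache.Entry.
class FakeEntry {
    final long ttl;      // hard expiry timestamp (ms)
    final long softTtl;  // soft expiry timestamp (ms)

    FakeEntry(long ttl, long softTtl) {
        this.ttl = ttl;
        this.softTtl = softTtl;
    }

    boolean isExpired(long now)     { return ttl < now; }
    boolean refreshNeeded(long now) { return softTtl < now; }
}

public class CacheTriage {
    /** Returns what the dispatcher does with a cached entry at time 'now'. */
    static String triage(FakeEntry entry, long now) {
        if (entry == null) return "cache-miss";            // send to network
        if (entry.isExpired(now)) return "cache-hit-expired"; // send to network
        if (!entry.refreshNeeded(now)) return "cache-hit"; // deliver directly
        return "cache-hit-refresh-needed"; // deliver intermediate, then revalidate
    }

    public static void main(String[] args) {
        long now = 1_000;
        System.out.println(triage(null, now));                    // cache-miss
        System.out.println(triage(new FakeEntry(500, 400), now)); // cache-hit-expired
        System.out.println(triage(new FakeEntry(2_000, 1_500), now)); // cache-hit
        System.out.println(triage(new FakeEntry(2_000, 800), now));   // cache-hit-refresh-needed
    }
}
```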
The logic above is not very complex, though the description is somewhat winding; the following diagram helps to understand it:
NetworkDispatcher
CacheDispatcher looks up cached response data for a task; if the task is not cached, or its cache entry has expired, the task is handed to a NetworkDispatcher, which continuously takes tasks from the network request queue and executes them. Here is its run() method:
```java
@Override
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    Request request;
    while (true) {
        try {
            // Take a request from the queue.
            request = mQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it was time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }

        try {
            request.addMarker("network-queue-take");

            // If the request was cancelled already, do not perform the network request.
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }

            // Tag the request (if API >= 14)
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH) {
                TrafficStats.setThreadStatsTag(request.getTrafficStatsTag());
            }

            // Perform the network request.
            NetworkResponse networkResponse = mNetwork.performRequest(request);
            request.addMarker("network-http-complete");

            // If the server returned 304 AND we delivered a response already,
            // we're done -- don't deliver a second identical response.
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }

            // Parse the response here on the worker thread.
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");

            // Write to cache if applicable.
            // TODO: Only update cache metadata instead of entire record for 304s.
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }

            // Post the response back.
            request.markDelivered();
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            mDelivery.postError(request, new VolleyError(e));
        }
    }
}
```
As you can see, run() is again an infinite loop. It takes a task from the queue and checks whether it has been canceled. If not, mNetwork.performRequest(request) fetches the response data. If the response is a 304 and a response has already been delivered for this task, then this was a freshness-revalidation request from CacheDispatcher and nothing new needs delivering, so the loop restarts. Otherwise the response data is parsed and, if the request should be cached, written to the cache. Finally, mDelivery.postResponse(request, response) delivers the response. The following diagram shows the flow of this method:
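The 304 short-circuit is subtle enough to deserve a tiny sketch: a Not Modified response is suppressed only when an intermediate (soft-expired cache) response was already delivered for the same request:

```java
// Sketch of the 304 decision in NetworkDispatcher.run(). A second, identical
// delivery is suppressed only if one already happened for this request.
public class NotModifiedCheck {
    static boolean shouldDeliver(boolean notModified, boolean alreadyDelivered) {
        // When this returns false, Volley calls request.finish("not-modified")
        // instead of delivering the response again.
        return !(notModified && alreadyDelivered);
    }

    public static void main(String[] args) {
        System.out.println(shouldDeliver(true, true));  // false: identical response suppressed
        System.out.println(shouldDeliver(true, false)); // true: cached data, first delivery
        System.out.println(shouldDeliver(false, true)); // true: fresh data replaces intermediate
    }
}
```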
Delivery
In CacheDispatcher and NetworkDispatcher, once data has been obtained for a task it is delivered via mDelivery.postResponse(request, response). We know the dispatchers are separate threads, so the data must somehow be handed to the main thread; let's see how delivery does it.
mDelivery is an ExecutorDelivery; here is the source of its postResponse() method:
```java
@Override
public void postResponse(Request<?> request, Response<?> response) {
    postResponse(request, response, null);
}

@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
    request.markDelivered();
    request.addMarker("post-response");
    mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}
```
As the code shows, delivery ultimately happens by calling mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable)). The mResponsePoster here is an Executor object:
```java
private final Executor mResponsePoster;

public ExecutorDelivery(final Handler handler) {
    // Make an Executor that just wraps the handler.
    mResponsePoster = new Executor() {
        @Override
        public void execute(Runnable command) {
            handler.post(command);
        }
    };
}
```
Executor is the root interface of the thread pool framework and has a single execute() method; mResponsePoster implements it by posting the Runnable with a Handler. In postResponse(), the request and response are wrapped in a ResponseDeliveryRunnable, which is a Runnable, so the response data is delivered through that handler. Where does the handler come from? It was mentioned in the RequestQueue section: mDelivery is set to new ExecutorDelivery(new Handler(Looper.getMainLooper())). Because the Handler is created with Looper.getMainLooper(), it is attached to the main thread's message loop, and the data is thereby delivered to the main thread.
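The trick can be reproduced off-device. In the sketch below, MainLoop is a hypothetical stand-in for Handler plus Looper: its post() queues a message and drain() plays the role of the main thread's message loop. The Executor does exactly what ExecutorDelivery's does, except it posts to our fake loop instead of a real Handler:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Executor;

// Sketch of ExecutorDelivery's wrapper: an Executor whose execute() merely
// hands the Runnable to a "main loop" rather than running it on the calling
// thread. MainLoop is a hypothetical stand-in for Handler + Looper.
public class DeliverySketch {
    static class MainLoop {
        private final Queue<Runnable> messages = new ArrayDeque<>();

        void post(Runnable r) { messages.add(r); } // like Handler.post()

        void drain() {                              // like the main thread's Looper
            Runnable r;
            while ((r = messages.poll()) != null) {
                r.run();
            }
        }
    }

    static String demo() {
        MainLoop mainLoop = new MainLoop();

        // The wrapper ExecutorDelivery builds: execute() == handler.post().
        Executor responsePoster = mainLoop::post;

        StringBuilder log = new StringBuilder();
        responsePoster.execute(() -> log.append("response-delivered"));

        // Nothing has run yet: the "main thread" hasn't processed its queue.
        String before = "before=" + log.length();
        mainLoop.drain();
        return before + ",after=" + log;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // before=0,after=response-delivered
    }
}
```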
Summary
That is the basic principle of Volley. A final diagram summarizes its whole running flow: