Analyzing the Android Volley Library Workflow from Its Source Code


Volley has been officially included in AOSP and has gradually become the networking framework officially recommended for Android.

Class Abstraction
Abstraction of the HTTP protocol
Request
As the name implies, this encapsulates a request. Because Volley lets you specify a request's priority, Request implements Comparable so that requests can be ordered in the task queue and higher-priority requests are executed first.
NetworkResponse
Encapsulates an HTTP response: the status code, the returned data, and so on.
Response
Encapsulates the result returned to the caller. It is simpler than NetworkResponse and contains only three things: the data, the error (if any), and the cache entry.
Network
An abstraction of the HTTP client: it accepts a Request and returns a NetworkResponse.

Deserialization Abstraction
Deserialization turns the bytes received from the network into a Java object. Volley implements different deserialization behavior by extending the Request class, for example JsonRequest and StringRequest, and we can customize the request pipeline further by writing our own Request subclasses.

Request Workflow Abstraction

RequestQueue
Manages the requests; internally it keeps four collections:
a) All current requests: every request added through RequestQueue.add() is put here and removed when the request finishes.
b) Waiting requests. This is a small optimization Volley makes. Imagine we send three identical requests; the underlying layer does not actually need to perform three network requests, only one. So Request1 is scheduled for execution after add(), and when Request2 and Request3 are added while Request1 has not finished, they only need to wait for Request1's result.
c) The cache queue, for requests that first need to look up the cache.
d) The network queue, for requests that need to go out over the network.

NetworkDispatcher
The thread that executes network requests: it takes a request from the network queue and executes it. By default Volley runs four such threads.

CacheDispatcher
The thread that performs cache lookups: it takes a request from the cache queue and looks up that request's local cache entry. Volley uses only one thread for cache work.
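To see where these threads come from, here is a rough sketch of what RequestQueue.start() does, condensed from the Volley source (the stop-before-start housekeeping is simplified):

// Rough sketch of RequestQueue.start(): one cache dispatcher plus a small
// pool of network dispatchers (DEFAULT_NETWORK_THREAD_POOL_SIZE is 4).
public void start() {
    stop();  // make sure any previously started dispatchers are stopped

    // One thread drains the cache queue...
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // ...and four (by default) threads drain the network queue.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher =
                new NetworkDispatcher(mNetworkQueue, mNetwork, mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}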

ResponseDelivery
The result distributor: it delivers the outcome of a request's execution back to the caller.

Cache
Encapsulates the cache: it mainly defines how cache entries are stored and retrieved, keyed by Request.getCacheKey().
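For reference, the cache abstraction is small. The sketch below is a trimmed version of Volley's Cache interface showing only the members discussed in this article (the full interface also has invalidate, remove, and clear):

import java.util.Map;

// Trimmed sketch of Volley's Cache abstraction.
public interface Cache {
    Entry get(String key);             // looked up with Request.getCacheKey()
    void put(String key, Entry entry);
    void initialize();

    class Entry {
        public byte[] data;                          // cached response body
        public String etag;                          // ETag used for conditional (304) requests
        public long ttl;                             // hard expiration time, in ms
        public long softTtl;                         // soft expiration time, in ms
        public Map<String, String> responseHeaders;  // headers cached alongside the body

        public boolean isExpired()     { return this.ttl < System.currentTimeMillis(); }
        public boolean refreshNeeded() { return this.softTtl < System.currentTimeMillis(); }
    }
}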

Submit Request
Volley submits a request to the task queue through RequestQueue.add(request):
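A typical call site looks roughly like this (a minimal usage sketch; context, the URL, and the listener bodies are placeholders):

// Minimal usage sketch: build a queue and submit a request.
RequestQueue queue = Volley.newRequestQueue(context);

StringRequest request = new StringRequest(Request.Method.GET,
        "https://example.com/api",   // placeholder URL
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // use the result (delivered on the main thread)
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                // handle the error
            }
        });

queue.add(request);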

After a request has been submitted, there are several places it can be put (a condensed sketch of the add() flow follows the list):

1. Set<Request<?>> mCurrentRequests, the set of all current requests. Every request passed to add() ends up here.
2. PriorityBlockingQueue<Request<?>> mNetworkQueue, the network queue. If a request does not need caching, add() puts it straight onto the network queue.
3. PriorityBlockingQueue<Request<?>> mCacheQueue, the cache queue. If a request should be cached and the RequestQueue currently holds no request with the same getCacheKey() (which we can treat as the same request), it joins the cache queue and is handled by the cache worker thread.
4. Map<String, Queue<Request<?>>> mWaitingRequests, the waiting map. If an identical request is already being processed by the RequestQueue, this one only needs to be parked in the waiting map until the earlier request's result has been handled.
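Putting the four containers together, add() looks roughly like the sketch below (condensed from the Volley source; synchronization, sequence numbers, and the finish-time bookkeeping are omitted):

// Condensed sketch of RequestQueue.add().
public <T> Request<T> add(Request<T> request) {
    request.setRequestQueue(this);
    mCurrentRequests.add(request);                    // 1) track every in-flight request

    if (!request.shouldCache()) {
        mNetworkQueue.add(request);                   // 2) no caching: straight to the network
        return request;
    }

    String cacheKey = request.getCacheKey();
    if (mWaitingRequests.containsKey(cacheKey)) {
        // 4) an identical request is already in flight: park this one until it finishes
        Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
        if (stagedRequests == null) {
            stagedRequests = new LinkedList<Request<?>>();
        }
        stagedRequests.add(request);
        mWaitingRequests.put(cacheKey, stagedRequests);
    } else {
        // 3) first request with this cache key: let the cache thread triage it
        mWaitingRequests.put(cacheKey, null);
        mCacheQueue.add(request);
    }
    return request;
}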

That is how Volley commits a task to the queue. Now consider request priority: you may have noticed that the two queues tasks are executed from are PriorityBlockingQueues, and that Request implements Comparable. Look at this method:

@Override
public int compareTo(Request<T> other) {
    Priority left = this.getPriority();
    Priority right = other.getPriority();

    // mSequence is the request's sequence number, assigned from a counter when it is added.
    // Requests with higher priority sort first; equal priorities fall back to FIFO order.
    return left == right ?
            this.mSequence - other.mSequence :
            right.ordinal() - left.ordinal();
}

So when a worker thread (NetworkDispatcher or CacheDispatcher) takes a task, it naturally starts from the head of the queue.

Priority here only guarantees that one request is taken for processing before another; it does not guarantee that a high-priority request's response comes back before a low-priority one's.
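By default Request.getPriority() returns Priority.NORMAL; to change a request's priority you override that method in a subclass, along the lines of this hypothetical example:

// Hypothetical subclass that raises its priority so it is dequeued earlier.
public class HighPriorityStringRequest extends StringRequest {

    public HighPriorityStringRequest(String url,
                                     Response.Listener<String> listener,
                                     Response.ErrorListener errorListener) {
        super(Method.GET, url, listener, errorListener);
    }

    @Override
    public Priority getPriority() {
        // Available values: LOW, NORMAL, HIGH, IMMEDIATE.
        return Priority.HIGH;
    }
}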

Cache worker thread processing

@Override
public void run() {
    // Initialize the cache.
    mCache.initialize();
    Request<?> request;
    while (true) {
        try {
            // Block until a cache task is available.
            request = mCacheQueue.take();
            // The request has already been canceled.
            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }
            // Cache miss: hand the request over to the network queue.
            Cache.Entry entry = mCache.get(request.getCacheKey());
            if (entry == null) {
                request.addMarker("cache-miss");
                mNetworkQueue.put(request);
                continue;
            }
            // Hard expiration: the cached entry is unusable, go to the network.
            if (entry.isExpired()) {
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                mNetworkQueue.put(request);
                continue;
            }
            // Build a Response from the cached entry.
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
            if (!entry.refreshNeeded()) {
                // Not soft-expired: deliver the cached response directly.
                mDelivery.postResponse(request, response);
            } else {
                // Soft-expired: deliver the cache as an intermediate result,
                // then refresh it from the network.
                request.setCacheEntry(entry);
                response.intermediate = true;
                final Request<?> finalRequest = request;
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            // Once the intermediate result has been delivered,
                            // put the request on the network queue.
                            mNetworkQueue.put(finalRequest);
                        } catch (InterruptedException e) {
                            // Not much we can do about this.
                        }
                    }
                });
            }
        } catch (Exception e) {
            // Swallow the exception and keep the dispatcher running.
        }
    }
}

Here you can see that Volley's cache encapsulation is quite thorough and handles a variety of situations; two points are particularly important:

1. What is taken from the cache is not just the response data; it also includes the headers stored for that request.
2. Hard expiration vs. soft expiration. The cache entry has two fields describing expiration: Cache.Entry.ttl and Cache.Entry.softTtl. What is the difference? If ttl has expired, the cached entry is never used. If softTtl has not expired, the cached data is returned directly. If softTtl has expired but ttl has not, the request returns twice: first with the cached data, then with the result of the network request. That covers a lot of common scenarios: enter a page and show the cached content first, then refresh it; and if the cache is too old, wait for the network data to come back before showing anything. A sketch of forcing a custom soft TTL follows.
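The soft TTL normally comes from the server's cache headers via HttpHeaderParser.parseCacheHeaders(). If you want the cache-then-refresh behavior regardless of what the server says, one option is to override parseNetworkResponse() and set the TTLs yourself, as in this hypothetical sketch (the class name and the 5-minute/1-day values are assumptions; charset handling is omitted):

import com.android.volley.Cache;
import com.android.volley.NetworkResponse;
import com.android.volley.Response;
import com.android.volley.toolbox.HttpHeaderParser;
import com.android.volley.toolbox.StringRequest;

// Hypothetical request that forces its own soft/hard TTLs so the UI quickly
// gets the cached copy and then a refreshed copy from the network.
public class SoftTtlStringRequest extends StringRequest {

    public SoftTtlStringRequest(String url, Response.Listener<String> listener,
                                Response.ErrorListener errorListener) {
        super(Method.GET, url, listener, errorListener);
    }

    @Override
    protected Response<String> parseNetworkResponse(NetworkResponse response) {
        Cache.Entry entry = HttpHeaderParser.parseCacheHeaders(response);
        if (entry != null) {
            long now = System.currentTimeMillis();
            entry.softTtl = now + 5 * 60 * 1000L;        // soft-expire after 5 minutes (arbitrary)
            entry.ttl = now + 24 * 60 * 60 * 1000L;      // hard-expire after one day (arbitrary)
        }
        // Charset handling is omitted for brevity; StringRequest's own
        // implementation parses the charset from the response headers.
        return Response.success(new String(response.data), entry);
    }
}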
NetworkDispatcher
The worker thread that executes network requests. There are four such threads by default, and each one continuously takes tasks from the network queue and executes them.

@Override
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    Request<?> request;
    while (true) {
        long startTimeMs = SystemClock.elapsedRealtime();
        // Release the previous request object to avoid leaking it while mQueue is drained.
        request = null;
        try {
            request = mQueue.take();
        } catch (InterruptedException e) {
            if (mQuit) {
                return;
            }
            continue;
        }
        try {
            request.addMarker("network-queue-take");
            // The request was cancelled while it waited in the queue.
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }
            // The HTTP stack performs the actual network request.
            NetworkResponse networkResponse = mNetwork.performRequest(request);
            request.addMarker("network-http-complete");
            // A soft-expired cache entry triggers a new network request. If the server
            // answers 304, the locally cached data is still valid, and since Volley has
            // already delivered a response for this request there is no need to deliver it again.
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");
            // Update the cache.
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }
            // Deliver the result.
            request.markDelivered();
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            VolleyError volleyError = new VolleyError(e);
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            mDelivery.postError(request, volleyError);
        }
    }
}

Request
Request mainly encapsulates the HTTP-level information of a request: the URL, the request method, the request priority, the retry policy, the caching policy, and so on.

The retry policy is an interesting detail. If a request fails with a timeout exception such as SocketTimeoutException or ConnectTimeoutException, we can configure a RetryPolicy for the request that specifies how many times it will be resent and how the timeout is adjusted after each failure (after the first failure, the timeout for the second attempt can be increased to reduce the chance of failing again).
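With Volley's DefaultRetryPolicy, configuring this on any Request instance looks like the following (the numbers are illustrative, not the defaults):

// Illustrative retry policy: 5 s initial timeout, up to 2 retries,
// doubling the timeout after each failure.
request.setRetryPolicy(new DefaultRetryPolicy(
        5000,    // initial timeout in milliseconds
        2,       // maximum number of retries
        2.0f));  // backoff multiplier applied to the timeout after each failure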

Deserialization
The most important job of Request is to deserialize the content; different subclasses provide different deserialization. For example, if the result is a JSON object we can use JsonObjectRequest, and if it is plain text we can use StringRequest. We can also conveniently customize our own request simply by overriding the Response<T> parseNetworkResponse(NetworkResponse response) method.

The built-in JsonRequest uses org.json for parsing; here we use Gson instead to build a more general request for handling JSON responses:

public class GsonRequest<T> extends Request<T> {
    private static final Gson gson = new Gson();
    private Response.Listener<T> mListener;

    public GsonRequest(String url, Response.ErrorListener errorListener,
                       Response.Listener<T> responseListener) {
        super(url, errorListener);
        this.mListener = responseListener;
    }

    public GsonRequest(int method, String url, Response.ErrorListener errorListener) {
        super(method, url, errorListener);
    }

    @Override
    protected Response<T> parseNetworkResponse(NetworkResponse response) {
        // Let Gson deserialize the response body into the generic type T.
        T parsed = gson.fromJson(
                new InputStreamReader(new ByteArrayInputStream(response.data)), getType());
        return Response.success(parsed, HttpHeaderParser.parseCacheHeaders(response));
    }

    @Override
    protected void deliverResponse(T response) {
        if (mListener != null) {
            mListener.onResponse(response);
        }
    }

    // Walk up the class hierarchy to find the actual type argument T.
    protected Type getType() {
        Type superclass;
        for (superclass = this.getClass().getGenericSuperclass();
             superclass instanceof Class && !superclass.equals(GsonRequest.class);
             superclass = ((Class<?>) superclass).getGenericSuperclass()) {
            // keep climbing until we reach the parameterized GsonRequest
        }
        if (superclass instanceof Class) {
            throw new RuntimeException("Missing type parameter.");
        }
        ParameterizedType parameterized = (ParameterizedType) superclass;
        return parameterized.getActualTypeArguments()[0];
    }
}

ImageRequest
Volley provides ImageRequest specifically for image requests. It mainly deserializes the data stream into a Bitmap, and it also lets you constrain the image's size, quality, and other parameters.

ImageLoader is a utility Volley provides for loading images. It is implemented internally with ImageRequest; its main addition is a memory cache, which you configure by supplying an ImageCache implementation.
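A typical setup pairs ImageLoader with a small in-memory LruCache, roughly like this sketch (context, imageUrl, imageView, the drawable resources, and the cache size of 20 entries are placeholders):

// Sketch: an ImageLoader backed by an LruCache-based memory cache.
RequestQueue queue = Volley.newRequestQueue(context);

ImageLoader imageLoader = new ImageLoader(queue, new ImageLoader.ImageCache() {
    private final LruCache<String, Bitmap> cache = new LruCache<String, Bitmap>(20);

    @Override
    public Bitmap getBitmap(String url) {
        return cache.get(url);
    }

    @Override
    public void putBitmap(String url, Bitmap bitmap) {
        cache.put(url, bitmap);
    }
});

// Load into an ImageView, with placeholder and error drawables.
imageLoader.get(imageUrl,
        ImageLoader.getImageListener(imageView, R.drawable.placeholder, R.drawable.error));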
