[Android] Volley Source Analysis (II): Cache


The cache is a core part of Volley, and Volley puts considerable effort into it. In this chapter we follow the Volley source code to see how Volley handles its caching logic.

Recalling the simple example from the previous article, our entry point is the construction of a request queue. We do not call new directly to construct it; instead we hand control to Volley's static factory.

com.android.volley.toolbox.Volley:

    public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

        Network network = new BasicNetwork(stack);

        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        queue.start();

        return queue;
    }

The HttpStack parameter lets you supply your own HttpStack implementation, for example one based on Apache HttpClient or on HttpURLConnection. If you don't specify one, Volley picks a strategy for you depending on your SDK version. The HttpStack object is then wrapped by a Network object. As we said in the previous section, in order to present a unified network call, Volley bridges the network call, and the bridge interface is Network.

The core of Volley is the cache and the network. Now that both objects have been constructed, we can create the RequestQueue. But why call queue.start()? Let's look at this code first:

    public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.

        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }

In the earlier architecture discussion we said that Volley uses the producer-consumer model, with the consumers running as threads. Once RequestQueue.start() is called, one cache thread and a fixed-size pool of network threads are started. The number of NetworkDispatcher threads is determined by the mDispatchers array, and mDispatchers is assigned in the constructor of RequestQueue:

    public RequestQueue(Cache cache, Network network, int threadPoolSize,
            ResponseDelivery delivery) {
        mCache = cache;
        mNetwork = network;
        mDispatchers = new NetworkDispatcher[threadPoolSize];
        mDelivery = delivery;
    }
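The dispatcher model above is plain producer-consumer on shared blocking queues. The following standalone sketch (class and method names are mine, not Volley's) shows the shape of it: a pool of consumer threads blocking on take(), a producer adding work, and interruption used to quit, mirroring Volley's mQuit handling.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.PriorityBlockingQueue;

public class MiniDispatcherDemo {
    // A consumer thread that takes "requests" (here, plain strings) off a shared queue.
    static class MiniDispatcher extends Thread {
        private final BlockingQueue<String> queue;
        private final CountDownLatch done;

        MiniDispatcher(BlockingQueue<String> queue, CountDownLatch done) {
            this.queue = queue;
            this.done = done;
        }

        @Override
        public void run() {
            try {
                while (true) {
                    queue.take();      // blocks until work is available, like mCacheQueue.take()
                    done.countDown();  // "process" the request
                }
            } catch (InterruptedException e) {
                // Interrupted: time to quit, mirroring Volley's mQuit handling.
            }
        }
    }

    // Starts a pool of dispatchers, feeds n requests, waits until all are consumed.
    public static boolean runDemo(int poolSize, int nRequests) throws InterruptedException {
        BlockingQueue<String> queue = new PriorityBlockingQueue<String>();
        CountDownLatch done = new CountDownLatch(nRequests);
        MiniDispatcher[] dispatchers = new MiniDispatcher[poolSize];
        for (int i = 0; i < poolSize; i++) {
            dispatchers[i] = new MiniDispatcher(queue, done);
            dispatchers[i].start();
        }
        for (int i = 0; i < nRequests; i++) {
            queue.add("request-" + i);
        }
        done.await();
        for (int i = 0; i < poolSize; i++) {
            dispatchers[i].interrupt();
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo(4, 10));
    }
}
```

Note that the consumers never spin: take() parks the thread until a producer adds work, which is exactly why Volley's while(true) loops are cheap.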

See how simple and sensible the Volley code is? By the time RequestQueue.start() returns, the context for our requests has been built; after that, we only need to add a request to the queue:

    public <T> Request<T> add(Request<T> request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this);
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }

        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber());
        request.addMarker("add-to-queue");

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }

        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            System.out.println("request.cacheKey = " + cacheKey);  // debug output added by the author
            if (mWaitingRequests.containsKey(cacheKey)) {
                // There is already a request in flight. Queue up.
                // --- staging: avoid duplicate in-flight requests ---
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);
            }
            return request;
        }
    }
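The staging logic can be modeled in isolation: a map from cache key to a queue of waiting requests, where a null value means "one request in flight, none waiting yet". A sketch with my own names (not Volley's), using strings as stand-ins for requests:

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Queue;

public class StagingDemo {
    // cacheKey -> queue of staged requests; a null value marks an in-flight request.
    private final Map<String, Queue<String>> waiting = new HashMap<String, Queue<String>>();

    // Returns true if the request should be dispatched now, false if it was staged.
    public synchronized boolean add(String cacheKey, String request) {
        if (waiting.containsKey(cacheKey)) {
            // A request with the same key is already in flight: stage this one.
            Queue<String> staged = waiting.get(cacheKey);
            if (staged == null) {
                staged = new LinkedList<String>();
            }
            staged.add(request);
            waiting.put(cacheKey, staged);
            return false;
        }
        // Mark the key as in flight (no waiters yet) and let the caller dispatch it.
        waiting.put(cacheKey, null);
        return true;
    }

    // How many requests are parked behind the in-flight one for this key.
    public synchronized int stagedCount(String cacheKey) {
        Queue<String> staged = waiting.get(cacheKey);
        return staged == null ? 0 : staged.size();
    }

    public static void main(String[] args) {
        StagingDemo s = new StagingDemo();
        System.out.println(s.add("img", "r1"));  // first request for the key: dispatch
        System.out.println(s.add("img", "r2"));  // duplicate in flight: staged
    }
}
```

When the in-flight request finishes, Volley drains the staged queue and hands the cached result to every waiter, so n identical requests cost one network round trip.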

The staging block near the end of add() (highlighted in green in the original post) is the finishing touch: it introduces the concept of staging to avoid duplicate in-flight requests. When we add a request, we also set its RequestQueue; the goal is to let the request remove itself from the queue when it finishes. We can also see a simple state machine here:

    request.addMarker("add-to-queue");

This method is called at various points in a request's lifecycle, which makes troubleshooting easier later. The request is then checked to see whether it should be cached:

        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }
Intuitively, plain text data may not seem to need caching; you can override shouldCache() to control whether your data is cached, and the data type is not restricted. Then, if your request is not staged, it is put into the cache reactor. Let's look at the mCacheQueue object:

    private final PriorityBlockingQueue<Request<?>> mCacheQueue =
            new PriorityBlockingQueue<Request<?>>();

We see that mCacheQueue is essentially a PriorityBlockingQueue, a thread-safe queue whose elements are ordered by priority comparison. ImageRequest, for example, specifies the priority of its requests:

com.android.volley.toolbox.ImageRequest:

    @Override
    public Priority getPriority() {
        return Priority.LOW;
    }
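The PriorityBlockingQueue orders requests by comparing them: Volley's Request implements Comparable, sorting by priority first and falling back to the sequence number for FIFO order within a priority. A standalone sketch of that ordering (class and field names are mine):

```java
import java.util.Arrays;
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityDemo {
    enum Priority { LOW, NORMAL, HIGH, IMMEDIATE }

    static class FakeRequest implements Comparable<FakeRequest> {
        final Priority priority;
        final int sequence; // FIFO tie-breaker within the same priority

        FakeRequest(Priority priority, int sequence) {
            this.priority = priority;
            this.sequence = sequence;
        }

        @Override
        public int compareTo(FakeRequest other) {
            // Higher priority drains first; equal priorities fall back to insertion order.
            if (priority != other.priority) {
                return other.priority.ordinal() - priority.ordinal();
            }
            return sequence - other.sequence;
        }
    }

    // Enqueue out of order, then drain; returns the sequence numbers in drain order.
    public static int[] demoOrder() throws InterruptedException {
        PriorityBlockingQueue<FakeRequest> q = new PriorityBlockingQueue<FakeRequest>();
        q.add(new FakeRequest(Priority.LOW, 1));
        q.add(new FakeRequest(Priority.HIGH, 2));
        q.add(new FakeRequest(Priority.NORMAL, 3));
        q.add(new FakeRequest(Priority.NORMAL, 4));
        int[] out = new int[4];
        for (int i = 0; i < 4; i++) {
            out[i] = q.take().sequence;
        }
        return out;
    }

    public static void main(String[] args) throws InterruptedException {
        // HIGH first, then the two NORMALs in insertion order, then LOW.
        System.out.println(Arrays.toString(demoOrder())); // prints [2, 3, 4, 1]
    }
}
```

The sequence tie-breaker matters because PriorityBlockingQueue makes no FIFO guarantee for elements that compare equal.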

You can specify the priority of your own requests the same way. Now back to the consumer, CacheDispatcher, which extends Thread. Right after construction it calls mCache.initialize(); the purpose of initialization is to load the data that already exists in the cache. The Cache implementation class is DiskBasedCache.java. Let's see how it initializes:

    @Override
    public synchronized void initialize() {
        if (!mRootDirectory.exists()) {
            if (!mRootDirectory.mkdirs()) {
                VolleyLog.e("Unable to create cache dir %s", mRootDirectory.getAbsolutePath());
            }
            return;
        }

        File[] files = mRootDirectory.listFiles();
        if (files == null) {
            return;
        }
        for (File file : files) {
            FileInputStream fis = null;
            try {
                fis = new FileInputStream(file);
                CacheHeader entry = CacheHeader.readHeader(fis);
                entry.size = file.length();
                putEntry(entry.key, entry);
            } catch (IOException e) {
                if (file != null) {
                    file.delete();
                }
            } finally {
                try {
                    if (fis != null) {
                        fis.close();
                    }
                } catch (IOException ignored) {
                }
            }
        }
    }

As we can see, one feature that distinguishes Volley from other caches is its ability to store metadata in a custom data format. Volley writes a header at the beginning of each cache file. This not only provides a degree of protection against corrupt or foreign data, but also stores the metadata cleanly.
        public static CacheHeader readHeader(InputStream is) throws IOException {
            CacheHeader entry = new CacheHeader();
            int magic = readInt(is);
            if (magic != CACHE_MAGIC) {
                // Don't bother deleting, it'll get pruned eventually.
                throw new IOException();
            }
            entry.key = readString(is);
            entry.etag = readString(is);
            if (entry.etag.equals("")) {
                entry.etag = null;
            }
            entry.serverDate = readLong(is);
            entry.ttl = readLong(is);
            entry.softTtl = readLong(is);
            entry.responseHeaders = readStringStringMap(is);
            return entry;
        }

This code tells us a lot: Volley defines its own magic number for its data format, and reads the metadata back according to its own specification.
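The same idea can be imitated with DataOutputStream/DataInputStream: write a magic number first, then the metadata fields, and refuse to read any stream whose magic does not match. A simplified sketch (the magic value, class names, and field set here are mine; Volley uses its own constant and hand-rolled read/write helpers):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class HeaderDemo {
    static final int CACHE_MAGIC = 0x20140623; // arbitrary sentinel for this sketch

    static class Header {
        String key;
        long ttl;
    }

    // Serialize: magic number first, then the metadata fields in a fixed order.
    public static byte[] writeHeader(String key, long ttl) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(CACHE_MAGIC); // guards against foreign or corrupt files
        out.writeUTF(key);
        out.writeLong(ttl);
        out.flush();
        return bos.toByteArray();
    }

    // Deserialize: reject the stream outright if the magic number does not match.
    public static Header readHeader(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        if (in.readInt() != CACHE_MAGIC) {
            throw new IOException("not a cache file");
        }
        Header h = new Header();
        h.key = in.readUTF();
        h.ttl = in.readLong();
        return h;
    }

    public static void main(String[] args) throws IOException {
        Header h = readHeader(writeHeader("http://example.com/a.png", 60000L));
        System.out.println(h.key + " ttl=" + h.ttl);
    }
}
```

The IOException on a bad magic is what lets DiskBasedCache.initialize() simply delete any file it cannot parse, as seen in the catch block above.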

OK, the cache is initialized, and the next step is the core of CacheDispatcher:

        while (true) {
            try {
                // Get a request from the cache triage queue, blocking until
                // at least one is available.
                final Request<?> request = mCacheQueue.take();
                request.addMarker("cache-queue-take");

                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }

                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }

                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");

                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    // Mark the response as intermediate.
                    response.intermediate = true;

                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }
            } catch (InterruptedException e) {
                // We may have been interrupted because it is time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
        }

The thread loops with while(true), but because the queue is a blocking queue, the loop parks when it is empty and does not busy-wait or waste CPU.

Cache.Entry entry = mCache.get(request.getCacheKey()); fetches the entry, and the actual data is only read if the entry exists. This is Volley's lazy loading.

                // Determine whether the entry has expired.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

This code decides, based on freshness, whether the entry must be refetched. Looking back at the code we just saw, the request is constantly marked with a different state in each context, which matters greatly for later maintenance. At the same time, to keep the interface uniform, CacheDispatcher disguises its own result as a NetworkResponse. To the outside, data is fetched the same way no matter where it came from; to the caller it all looks like a network fetch, which is itself part of the point of the DAO pattern.
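The two freshness checks, isExpired() and refreshNeeded(), amount to comparing stored absolute deadlines against the current time: ttl for hard expiry and softTtl for "serve it, but refresh it". A minimal sketch of those semantics (field names follow the CacheHeader fields read earlier; the class is mine):

```java
public class ExpiryDemo {
    static class Entry {
        long ttl;     // absolute time after which the entry is fully expired
        long softTtl; // absolute time after which the entry should be refreshed
    }

    // Fully expired: must go to the network (the cache-hit-expired path).
    public static boolean isExpired(Entry e, long now) {
        return e.ttl < now;
    }

    // Soft-expired: deliver cached data but also refresh (the cache-hit-refresh-needed path).
    public static boolean refreshNeeded(Entry e, long now) {
        return e.softTtl < now;
    }

    public static void main(String[] args) {
        Entry e = new Entry();
        e.softTtl = 1000;
        e.ttl = 2000;
        System.out.println(isExpired(e, 1500));      // prints false: still usable
        System.out.println(refreshNeeded(e, 1500));  // prints true: serve cached, then refresh
    }
}
```

The window between softTtl and ttl is what enables the intermediate response: the user sees stale data immediately while a fresh copy is fetched in the background.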

                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");

The purpose of request.parseNetworkResponse() is to turn the raw response into your own data object. By this point, the data the cache wants to dispatch is fully ready. As we said last time, the request is eventually handed to a delivery object for asynchronous dispatch, which effectively avoids blocking the dispatcher thread during delivery. And since, as noted above, the cache disguises its data as a NetworkResponse before posting, this part is identical to the network path, so I will omit it in the upcoming discussion of NetworkDispatcher. In Volley the implementation class for delivery is:

com.android.volley.ExecutorDelivery:

    public ExecutorDelivery(final Handler handler) {
        // Make an Executor that just wraps the handler.
        mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
    }
We see a Handler in its constructor. If that Handler is a main-thread Handler, then your callbacks run on the UI thread, saving you the trouble of posting messages to the UI thread yourself. The posted data is wrapped into a ResponseDeliveryRunnable command, and that command runs on the same thread as the Handler. With that, the basic flow of CacheDispatcher is complete. ResponseDeliveryRunnable also does some finishing work besides dispatching; readers can explore it on their own.
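The pattern in ExecutorDelivery is simply adapting a post-style API to java.util.concurrent.Executor. Outside Android we can mimic a Handler with a single-threaded queue; in this sketch (names are mine) FakeHandler plays the role of a Handler bound to a Looper thread:

```java
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class DeliveryDemo {
    // Stand-in for Android's Handler: everything posted runs on one dedicated thread,
    // in order, just like a Handler bound to a Looper thread.
    static class FakeHandler {
        private final ExecutorService loop = Executors.newSingleThreadExecutor();

        void post(Runnable r) {
            loop.execute(r);
        }

        void quit() throws InterruptedException {
            loop.shutdown();
            loop.awaitTermination(5, TimeUnit.SECONDS);
        }
    }

    public static int deliver(int nResponses) throws InterruptedException {
        final FakeHandler handler = new FakeHandler();
        // The ExecutorDelivery trick: wrap the handler in an Executor so the rest of
        // the code only depends on the standard Executor interface.
        Executor responsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
        final AtomicInteger delivered = new AtomicInteger();
        for (int i = 0; i < nResponses; i++) {
            responsePoster.execute(new Runnable() {
                @Override
                public void run() {
                    delivered.incrementAndGet(); // stand-in for ResponseDeliveryRunnable
                }
            });
        }
        handler.quit();
        return delivered.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(deliver(5)); // prints 5
    }
}
```

The dispatcher thread only ever calls execute(), so delivery never blocks it; which thread the callbacks land on is decided entirely by the handler passed into the constructor.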




