First, a review of the basic usage
The Volley framework also does quite a bit of work for loading network images and provides several approaches; this article covers using ImageLoader to load network images.
ImageLoader is built internally on ImageRequest. Its constructor takes an ImageCache parameter that provides image caching, and it also deduplicates identical URLs so that repeated requests are not sent.
Here's how to load an image with ImageLoader:
public void displayImg(View view) {
    ImageView imageView = (ImageView) this.findViewById(R.id.image_view);
    RequestQueue mQueue = Volley.newRequestQueue(getApplicationContext());
    ImageLoader imageLoader = new ImageLoader(mQueue, new BitmapCache());
    ImageListener listener = ImageLoader.getImageListener(imageView,
            R.drawable.default_image, R.drawable.default_image);
    imageLoader.get("http://developer.android.com/images/home/aw_dac.png", listener);
    // Specify the maximum width and height allowed for the image:
    // imageLoader.get("http://developer.android.com/images/home/aw_dac.png", listener, 200, 200);
}
After creating an ImageListener with ImageLoader.getImageListener(), you load the network image by passing the listener and the image URL to ImageLoader.get().
The following is a cache class implemented with LruCache:
public class BitmapCache implements ImageCache {

    private LruCache<String, Bitmap> cache;

    public BitmapCache() {
        cache = new LruCache<String, Bitmap>(8 * 1024 * 1024) {
            @Override
            protected int sizeOf(String key, Bitmap bitmap) {
                return bitmap.getRowBytes() * bitmap.getHeight();
            }
        };
    }

    @Override
    public Bitmap getBitmap(String url) {
        return cache.get(url);
    }

    @Override
    public void putBitmap(String url, Bitmap bitmap) {
        cache.put(url, bitmap);
    }
}
Finally, don't forget to declare the network permission in AndroidManifest.xml:
<uses-permission android:name="android.permission.INTERNET"/>
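As a side note, the #VolleyHelper snippets in the source analysis below assume a singleton that holds one RequestQueue and one ImageLoader for the whole app (creating a new RequestQueue on every call, as displayImg() above does, is wasteful). That helper class is not shown in this article; a minimal sketch of what it might look like, with names chosen to match the snippets below:

public class VolleyHelper {
    private static VolleyHelper sInstance;
    private final RequestQueue mReqQueue;
    private final ImageLoader mImageLoader;

    private VolleyHelper(Context context) {
        // One queue and one loader shared by the whole process.
        mReqQueue = Volley.newRequestQueue(context.getApplicationContext());
        mImageLoader = new ImageLoader(mReqQueue, new BitmapCache());
    }

    public static synchronized VolleyHelper init(Context context) {
        if (sInstance == null) {
            sInstance = new VolleyHelper(context);
        }
        return sInstance;
    }

    public static VolleyHelper getInstance() {
        return sInstance;
    }

    public ImageLoader getImageLoader() {
        return mImageLoader;
    }
}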
Second, source code analysis
(i) Initialization of the Volley request queue
mReqQueue = Volley.newRequestQueue(mCtx);
Here is the main flow:
#Volley
public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, null);
}
public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    return newRequestQueue(context, stack, -1);
}
public static RequestQueue newRequestQueue(Context context, HttpStack stack, int maxDiskCacheBytes) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }
    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            // Prior to Gingerbread, HttpURLConnection is unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }
    Network network = new BasicNetwork(stack);
    RequestQueue queue;
    if (maxDiskCacheBytes <= -1) {
        queue = new RequestQueue(new DiskBasedCache(cacheDir), network);  // no maximum size specified
    } else {
        queue = new RequestQueue(new DiskBasedCache(cacheDir, maxDiskCacheBytes), network);  // disk cache size specified
    }
    queue.start();
    return queue;
}
The main work here is initializing the HttpStack: on API level 9 and above it uses HttpURLConnection (HurlStack); below that it uses HttpClient. We won't go into the HTTP-related code here.
A RequestQueue is then constructed, and its start() method is invoked.
Next, look at the RequestQueue constructors:
public RequestQueue(Cache cache, Network network) {
    this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
}

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

public RequestQueue(Cache cache, Network network, int threadPoolSize,
        ResponseDelivery delivery) {
    mCache = cache;
    mNetwork = network;
    mDispatchers = new NetworkDispatcher[threadPoolSize];
    mDelivery = delivery;
}
Initialization mainly fills in four fields: mCache, mNetwork, mDispatchers and mDelivery. The first is the disk cache; the second does the actual HTTP work; the third holds the dispatchers that forward requests; the fourth forwards results back to the UI thread (note the new Handler(Looper.getMainLooper())).
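As a side note on that fourth parameter: the reason results land back on the UI thread is visible in ExecutorDelivery's constructor, which simply wraps the main-looper Handler in an Executor (simplified from the Volley source):

#ExecutorDelivery
public ExecutorDelivery(final Handler handler) {
    // Make an Executor that just posts to the supplied handler,
    // so every delivery runs on that handler's looper (here: the main thread).
    mResponsePoster = new Executor() {
        @Override
        public void execute(Runnable command) {
            handler.post(command);
        }
    };
}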
Next, look at the start() method:
#RequestQueue
/**
 * Starts the dispatchers in this queue.
 */
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
The first thing it does is call stop(), to make sure any previously running dispatchers have exited; these dispatchers are internal threads, and if you're interested you can look at the source to see how Volley handles thread exit (each dispatcher is essentially a while (true) { /* do something */ } loop); a minimal sketch of the exit pattern is shown below.
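A minimal sketch of that exit pattern, simplified from the dispatcher source (a volatile mQuit flag plus an interrupt to wake the blocking take()):

public void quit() {
    mQuit = true;
    interrupt(); // wakes the thread if it is blocked on the queue's take()
}

@Override
public void run() {
    while (true) {
        try {
            final Request<?> request = mQueue.take(); // blocks until a request is available
            // ... process the request ...
        } catch (InterruptedException e) {
            // We may have been interrupted because it is time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }
    }
}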
Next it creates the CacheDispatcher and calls start() on it, then creates the NetworkDispatchers and calls start() on each of them.
These dispatchers are all threads, so several threads have been started here to do the work for us; we'll go through their source below.
That covers the initialization code for Volley itself; next, the initialization of ImageLoader.
(ii) Initialization of ImageLoader
#VolleyHelper
mImageLoader = new ImageLoader(mReqQueue, new ImageCache() {
    private final LruCache<String, Bitmap> mLruCache = new LruCache<String, Bitmap>(
            (int) (Runtime.getRuntime().maxMemory() / 10)) {
        @Override
        protected int sizeOf(String key, Bitmap value) {
            return value.getRowBytes() * value.getHeight();
        }
    };

    @Override
    public void putBitmap(String url, Bitmap bitmap) {
        mLruCache.put(url, bitmap);
    }

    @Override
    public Bitmap getBitmap(String url) {
        return mLruCache.get(url);
    }
});
#ImageLoader
public ImageLoader(RequestQueue queue, ImageCache imageCache) {
    mRequestQueue = queue;
    mCache = imageCache;
}
Very simply, we build an ImageLoader from the RequestQueue we initialized earlier and an LruCache-backed ImageCache.
(iii) Loading images
When we load an image, we call:
#VolleyHelper
getInstance().getImageLoader().get(url, new ImageLoader.ImageListener() { /* ... */ });
Next, look at the get() method:
#ImageLoader
public ImageContainer get(String requestUrl, final ImageListener listener) {
    return get(requestUrl, listener, 0, 0);
}

public ImageContainer get(String requestUrl, ImageListener imageListener,
        int maxWidth, int maxHeight) {
    return get(requestUrl, imageListener, maxWidth, maxHeight, ScaleType.CENTER_INSIDE);
}

public ImageContainer get(String requestUrl, ImageListener imageListener,
        int maxWidth, int maxHeight, ScaleType scaleType) {
    // Only fulfill requests that were initiated from the main thread.
    throwIfNotOnMainThread();

    final String cacheKey = getCacheKey(requestUrl, maxWidth, maxHeight, scaleType);

    // Try to look up the request in the cache of remote images.
    Bitmap cachedBitmap = mCache.getBitmap(cacheKey);
    if (cachedBitmap != null) {
        // Return the cached bitmap.
        ImageContainer container = new ImageContainer(cachedBitmap, requestUrl, null, null);
        imageListener.onResponse(container, true);
        return container;
    }

    // The bitmap did not exist in the cache, fetch it!
    ImageContainer imageContainer = new ImageContainer(null, requestUrl, cacheKey, imageListener);

    // Update the caller to let them know that they should use the default bitmap.
    imageListener.onResponse(imageContainer, true);

    // Check to see if a request is already in-flight.
    BatchedImageRequest request = mInFlightRequests.get(cacheKey);
    if (request != null) {
        // If it is, add this request to the list of listeners.
        request.addContainer(imageContainer);
        return imageContainer;
    }

    // The request is not already in flight. Send the new request to the network and track it.
    Request<Bitmap> newRequest = makeImageRequest(requestUrl, maxWidth, maxHeight, scaleType, cacheKey);
    mRequestQueue.add(newRequest);
    mInFlightRequests.put(cacheKey, new BatchedImageRequest(newRequest, imageContainer));
    return imageContainer;
}
You can see that get() first requires, via throwIfNotOnMainThread(), that it be invoked on the UI thread;
then the cacheKey is computed from the incoming parameters and the memory cache is queried:
=> If the cache entry exists, the result is wrapped in an ImageContainer(cachedBitmap, requestUrl) and ImageListener.onResponse(container, true) is called back directly, where we can set the image.
=> If the cache entry does not exist, an ImageContainer is created with no bitmap and the callback ImageListener.onResponse(imageContainer, true) still fires; the listener is expected to detect this and set the default image (so when implementing your own listener, don't forget to check resp.getBitmap() != null; see the sketch below).
It then checks whether a request for this URL is already in flight; if so, the newly created ImageContainer is added to the existing BatchedImageRequest and the method returns.
If this is a new request, one is created via makeImageRequest() and added to both mRequestQueue and mInFlightRequests; note that a BatchedImageRequest is created in mInFlightRequests to collect all the containers waiting on the same request.
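As mentioned above, a hand-rolled listener needs to handle the no-bitmap callback; a minimal sketch (the imageView field and R.drawable.default_image are illustrative placeholders):

ImageLoader.ImageListener listener = new ImageLoader.ImageListener() {
    @Override
    public void onResponse(ImageLoader.ImageContainer response, boolean isImmediate) {
        if (response.getBitmap() != null) {
            // Memory-cache hit, or the network/disk result arriving later.
            imageView.setImageBitmap(response.getBitmap());
        } else {
            // No bitmap yet: show a placeholder until the real image arrives.
            imageView.setImageResource(R.drawable.default_image);
        }
    }

    @Override
    public void onErrorResponse(VolleyError error) {
        imageView.setImageResource(R.drawable.default_image);
    }
};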
Note that mRequestQueue is an object, not a queue data structure, so let's look at its add() method:
#RequestQueue
public <T> Request<T> add(Request<T> request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);
    synchronized (mCurrentRequests) {
        mCurrentRequests.add(request);
    }

    // Process requests in the order they are added.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // If the request is uncacheable, skip the cache queue and go straight to the network.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request<?>>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}
The request is first added to mCurrentRequests, which holds every request still being processed and mainly exists to provide an entry point for cancellation.
If the request should not be cached, it goes straight onto mNetworkQueue and the method returns (see the usage sketch below).
Otherwise it checks whether a request with the same cache key is already in flight; if so, the new request is staged in mWaitingRequests;
if not, it calls mWaitingRequests.put(cacheKey, null) and mCacheQueue.add(request).
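As a usage aside, the shouldCache() branch above is controlled by Request.setShouldCache(); a small sketch (url and the listeners are placeholders) of forcing a request to bypass the cache queue:

StringRequest request = new StringRequest(url, listener, errorListener);
request.setShouldCache(false); // add() will skip mCacheQueue and go straight to mNetworkQueue
mRequestQueue.add(request);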
OK, so far we've followed the straightforward code, but you might wonder: where is the network request that loads the image actually triggered?
First, recall that when we need to load an image we build it with makeImageRequest() and add the request to the various queues, mainly mCurrentRequests and mCacheQueue.
Then remember the dispatcher threads we started when initializing the RequestQueue: a CacheDispatcher and several NetworkDispatchers.
The network request is actually performed in those threads; let's look at them one at a time.
(iv) CacheDispatcher
First, a quick look at the constructor:
#CacheDispatcher
public CacheDispatcher(
        BlockingQueue<Request<?>> cacheQueue, BlockingQueue<Request<?>> networkQueue,
        Cache cache, ResponseDelivery delivery) {
    mCacheQueue = cacheQueue;
    mNetworkQueue = networkQueue;
    mCache = cache;
    mDelivery = delivery;
}
This is a thread, so the main code must be in run():
#CacheDispatcher
@Override
public void run() {
    if (DEBUG) VolleyLog.v("start new dispatcher");
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

    // Make a blocking call to initialize the cache.
    mCache.initialize();

    while (true) {
        try {
            // Get a request from the cache triage queue, blocking until
            // at least one is available.
            final Request<?> request = mCacheQueue.take();
            request.addMarker("cache-queue-take");

            // If the request has been canceled, don't bother dispatching it.
            if (request.isCanceled()) {
                request.finish("cache-discard-canceled");
                continue;
            }

            // Attempt to retrieve this item from cache.
            Cache.Entry entry = mCache.get(request.getCacheKey());
            if (entry == null) {
                request.addMarker("cache-miss");
                // Cache miss; send off to the network dispatcher.
                mNetworkQueue.put(request);
                continue;
            }

            // If it is completely expired, just send it to the network.
            if (entry.isExpired()) {
                request.addMarker("cache-hit-expired");
                request.setCacheEntry(entry);
                mNetworkQueue.put(request);
                continue;
            }

            // We have a cache hit; parse its data for delivery back to the request.
            request.addMarker("cache-hit");
            Response<?> response = request.parseNetworkResponse(
                    new NetworkResponse(entry.data, entry.responseHeaders));
            request.addMarker("cache-hit-parsed");

            if (!entry.refreshNeeded()) {
                // Completely unexpired cache hit. Just deliver the response.
                mDelivery.postResponse(request, response);
            } else {
                // Soft-expired cache hit. We can deliver the cached response,
                // but we need to also send the request to the network for refreshing.
                request.addMarker("cache-hit-refresh-needed");
                request.setCacheEntry(entry);

                // Mark the response as intermediate.
                response.intermediate = true;

                // Post the intermediate response back to the user and have
                // the delivery then forward the request along to the network.
                mDelivery.postResponse(request, response, new Runnable() {
                    @Override
                    public void run() {
                        try {
                            mNetworkQueue.put(request);
                        } catch (InterruptedException e) {
                            // Not much we can do about this.
                        }
                    }
                });
            }
        } catch (InterruptedException e) {
            // We may have been interrupted because it is time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }
    }
}
OK, first be clear that this cache is the disk cache (the directory is context.getCacheDir()/volley); the memory cache has already been checked inside ImageLoader.
You can see this is an infinite loop that keeps taking requests from mCacheQueue; if a request has been canceled it is finished immediately.
Next the disk cache is consulted:
=> If nothing is found, the request is put onto mNetworkQueue.
=> If the cache entry has expired, the request is put onto mNetworkQueue.
Otherwise we have a usable cache entry. request.parseNetworkResponse() parses the data and responseHeaders taken from the cache; then the soft TTL is checked (refreshNeeded()): if no refresh is needed, the response is forwarded directly via mDelivery.postResponse() back to the UI thread; if a refresh is needed, the cached response is still delivered, and once that delivery completes the request is also put onto mNetworkQueue.
In short: a valid cache entry is forwarded straight to the UI thread; otherwise the request joins the network queue.
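For reference, the two expiration checks used above live on Cache.Entry and are roughly the following (simplified from the Volley source):

#Cache.Entry
public boolean isExpired() {
    return this.ttl < System.currentTimeMillis();      // hard TTL: expired entries go to the network
}

public boolean refreshNeeded() {
    return this.softTtl < System.currentTimeMillis();  // soft TTL: deliver the cached copy, then refresh
}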
Next we'll look at NetworkDispatcher.
(v) NetworkDispatcher
Like CacheDispatcher, it is a thread, and the core code is again in run():
#NetworkDispatcher
// new NetworkDispatcher(mNetworkQueue, mNetwork, mCache, mDelivery)
public NetworkDispatcher(BlockingQueue<Request<?>> queue,
        Network network, Cache cache, ResponseDelivery delivery) {
    mQueue = queue;
    mNetwork = network;
    mCache = cache;
    mDelivery = delivery;
}

@Override
public void run() {
    Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
    while (true) {
        long startTimeMs = SystemClock.elapsedRealtime();
        Request<?> request;
        try {
            // Take a request from the queue.
            request = mQueue.take();
        } catch (InterruptedException e) {
            // We may have been interrupted because it is time to quit.
            if (mQuit) {
                return;
            }
            continue;
        }

        try {
            request.addMarker("network-queue-take");

            // If the request was cancelled already, do not perform the network request.
            if (request.isCanceled()) {
                request.finish("network-discard-cancelled");
                continue;
            }

            addTrafficStatsTag(request);

            // Perform the network request.
            NetworkResponse networkResponse = mNetwork.performRequest(request);
            request.addMarker("network-http-complete");

            // If the server returned 304 AND we delivered a response already,
            // we're done -- don't deliver a second identical response.
            if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                request.finish("not-modified");
                continue;
            }

            // Parse the response here on the worker thread.
            Response<?> response = request.parseNetworkResponse(networkResponse);
            request.addMarker("network-parse-complete");

            // Write to cache if applicable.
            // TODO: Only update cache metadata instead of entire record for 304s.
            if (request.shouldCache() && response.cacheEntry != null) {
                mCache.put(request.getCacheKey(), response.cacheEntry);
                request.addMarker("network-cache-written");
            }

            // Post the response back.
            request.markDelivered();
            mDelivery.postResponse(request, response);
        } catch (VolleyError volleyError) {
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            parseAndDeliverNetworkError(request, volleyError);
        } catch (Exception e) {
            VolleyLog.e(e, "Unhandled exception %s", e.toString());
            VolleyError volleyError = new VolleyError(e);
            volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
            mDelivery.postError(request, volleyError);
        }
    }
}
Before reading the code, think about the expected logic: normally we take a request out, let the Network perform it, and when it completes we write the result to the cache and then forward it.
And that is indeed what happens:
The request is taken out and handed to mNetwork.performRequest(request) to obtain a NetworkResponse, which the request then parses into a Response.
Once we have the Response, we decide whether it should be cached, and cache it if so.
Finally, mDelivery.postResponse(request, response).
OK, it's about what we expected.
With that, the analysis of the core logic of Volley image loading is done. A quick summary:
First the RequestQueue is initialized, which mainly means starting several dispatcher threads that keep reading requests (blocking queues are used, so the threads simply block when there is nothing to do).
When we make a request, a cacheKey is constructed from the URL and the ImageView attributes, and the lookup goes first to the LruCache (the memory cache we build ourselves by implementing the ImageCache interface), then to the disk cache under getCacheDir() (5 MB by default), and only then to the network; a sketch of the cache-key construction is shown right after this summary.
What you can also see is that Volley's image loading has no LIFO strategy, and a downloaded image appears to be read fully into memory before being compressed, which is wasteful for huge images or large files.
It all looks quite simple, but reading through it is still a great help for learning to use the library better and for designing an image-loading library of your own.
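For the cacheKey mentioned in the summary, ImageLoader.getCacheKey() builds it roughly as follows (reconstructed from memory of the Volley source, so treat it as an approximation):

#ImageLoader
private static String getCacheKey(String url, int maxWidth, int maxHeight, ScaleType scaleType) {
    // The requested dimensions and scale type are baked into the key,
    // so the same URL at different sizes gets separate cache entries.
    return new StringBuilder(url.length() + 12).append("#W").append(maxWidth)
            .append("#H").append(maxHeight).append("#S").append(scaleType.ordinal())
            .append(url).toString();
}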
If you are interested, you can also, while reading the source, think about some of the implementation details, such as:
The dispatchers are threads running infinite loops; look at how Volley makes sure they can be shut down.
For the image compression code, look at parseNetworkResponse() in ImageRequest to see how bitmaps are decoded and scaled down (a generic sketch of that decode pattern follows this list).
And so on ...
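That decode in ImageRequest follows the standard two-pass BitmapFactory pattern; the sketch below is not the literal Volley code, just the general idea of how a large byte[] is sampled down instead of being decoded at full size:

static Bitmap decodeSampled(byte[] data, int maxWidth, int maxHeight) {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inJustDecodeBounds = true;               // pass 1: read only the image bounds
    BitmapFactory.decodeByteArray(data, 0, data.length, options);

    int sampleSize = 1;                               // pass 2: pick a power-of-two sample size
    if (maxWidth > 0 && maxHeight > 0) {
        while (options.outWidth / (sampleSize * 2) >= maxWidth
                && options.outHeight / (sampleSize * 2) >= maxHeight) {
            sampleSize *= 2;
        }
    }
    options.inJustDecodeBounds = false;
    options.inSampleSize = sampleSize;
    return BitmapFactory.decodeByteArray(data, 0, data.length, options);
}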
Finally, here is a general flowchart to aid memory: