Learning Volley, Part 3: Source Code Analysis (1)
I. The Volley Architecture Diagram
From the diagram we can roughly guess how Volley works. See the legend in the lower right corner: blue marks the main thread, green the cache thread, and yellow the network threads. The key terms in the diagram are queue (RequestQueue), cache queue, CacheDispatcher, and NetworkDispatcher. The flow can be summarized like this: RequestQueue's add() puts a Request into the cache queue. CacheDispatcher takes the Request from that queue; if a matching result is already stored in the cache, it reads and parses it from the cache and delivers the response back to the main thread. If nothing is found in the cache, the Request is put into the network queue, the HTTP transaction is performed there, and the result of the network request is delivered back to the main thread.
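To make this flow concrete from the caller's side, here is a minimal usage sketch (the URL is a placeholder and 'this' is assumed to be an Activity; the listeners run on the main thread because responses are posted back through a Handler bound to the main Looper, as we will see below):

// A minimal caller-side sketch: create the queue, add a request,
// and receive the parsed response back on the main thread.
RequestQueue queue = Volley.newRequestQueue(this);
StringRequest request = new StringRequest(
        Request.Method.GET,
        "http://example.com/data",                          // placeholder URL
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // Called on the main thread with the parsed result.
                Log.d("Volley", "response: " + response);
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                Log.e("Volley", "request failed", error);
            }
        });
queue.add(request);  // the entry point of the flow described above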
II. Source Code Analysis: From the diagram we can tell that the entry point of the flow is RequestQueue's add() method. Let's start with how the RequestQueue is created.
(1) Using RequestQueue:
RequestQueue mRequestQueue = Volley.newRequestQueue(this);
Let's look at what Volley.newRequestQueue actually does; the Volley class contains only these two methods:
/** Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it. */
public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, null);
}

The real work happens in the overload it delegates to:
/** Default on-disk cache directory. */
private static final String DEFAULT_CACHE_DIR = "volley";

/**
 * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
 *
 * @param context A {@link Context} to use for creating the cache dir.
 * @param stack An {@link HttpStack} to use for the network, or null for default.
 * @return A started {@link RequestQueue} instance.
 */
public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    // Create the cache directory.
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }

    /*
     * As explained in http://blog.csdn.net/guolin_blog/article/details/12452307,
     * HurlStack is implemented with HttpURLConnection, while HttpClientStack is
     * implemented with HttpClient. Before Android 2.3, HttpClient is preferable
     * because it has fewer bugs; from Android 2.3 on, HttpURLConnection is
     * preferable because it is lighter-weight and has a simpler API. That is why
     * the code switches between HurlStack and HttpClientStack here.
     */
    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }

    // Create a Network object backed by the stack.
    Network network = new BasicNetwork(stack);

    // Create the RequestQueue.
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();  // the entry point for the rest of this analysis
    return queue;
}
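Since the stack parameter may be non-null, a caller can also pick the HTTP stack explicitly instead of relying on the SDK version check. A small sketch using HurlStack, which ships with Volley (context is a placeholder):

// Force the HttpURLConnection-based stack regardless of SDK version.
RequestQueue queue = Volley.newRequestQueue(context, new HurlStack());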
Appendix I) A fragment of HurlStack shows that it is built on HttpURLConnection:
private static HttpEntity entityFromConnection(HttpURLConnection connection)
Correspondingly, the HttpClientStack constructor shows that it is built on HttpClient:
public HttpClientStack(HttpClient client) {
    mClient = client;
}

Both classes implement the HttpStack interface:
/** An HTTP stack abstraction. */
public interface HttpStack {
    public HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders)
            throws IOException, AuthFailureError;
}
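Because HttpStack is just this single-method interface, it is also the natural extension point for customizing transport behavior. As a hedged sketch, a hypothetical wrapper (not part of Volley) that injects a common header before delegating to HurlStack could look like this:

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.http.HttpResponse;

import com.android.volley.AuthFailureError;
import com.android.volley.Request;
import com.android.volley.toolbox.HttpStack;
import com.android.volley.toolbox.HurlStack;

/** Hypothetical example: adds a header to every request, then delegates to HurlStack. */
public class HeaderAddingStack implements HttpStack {
    private final HurlStack mDelegate = new HurlStack();

    @Override
    public HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders)
            throws IOException, AuthFailureError {
        Map<String, String> headers = new HashMap<String, String>(additionalHeaders);
        headers.put("X-Client", "volley-demo");  // illustrative header only
        return mDelegate.performRequest(request, headers);
    }
}

It could then be plugged in via Volley.newRequestQueue(context, new HeaderAddingStack()).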
Before Android 2.3 (SDK version 9), HttpURLConnection had quite a few bugs while HttpClient's API was already fairly complete, so HttpClient was the better choice; that is why HttpClientStack is used for SDK versions below 9.
From Android 2.3 onward, HttpURLConnection kept improving: it is lighter-weight, its API is simpler, and its performance keeps being optimized, so the HurlStack built on it is used instead.
Appendix II) A Network object appears here. Its constructors show that it performs network requests over the given stack; it is not central to the main flow, so it can be skimmed.
/** A network performing Volley requests over an {@link HttpStack}. */
public class BasicNetwork implements Network {
    ...
    private static int DEFAULT_POOL_SIZE = 4096;

    protected final HttpStack mHttpStack;
    protected final ByteArrayPool mPool;

    public BasicNetwork(HttpStack httpStack) {
        // If a pool isn't passed in, then build a small default pool that will give us a lot of
        // benefit and not use too much memory.
        this(httpStack, new ByteArrayPool(DEFAULT_POOL_SIZE));
    }

    /**
     * @param httpStack HTTP stack to be used
     * @param pool a buffer pool that improves GC performance in copy operations
     */
    public BasicNetwork(HttpStack httpStack, ByteArrayPool pool) {
        mHttpStack = httpStack;
        mPool = pool;
    }
    ...
}

It simply stores the stack it was given and creates a byte array pool (ByteArrayPool) for reuse when copying response data.
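Note that callers are not limited to the default 4 KB pool; the second constructor accepts a custom one. A hedged sketch (the 8 KB size is arbitrary, chosen only for illustration):

// Build a Network with a larger byte array pool than the 4 KB default.
HttpStack stack = new HurlStack();
Network network = new BasicNetwork(stack, new ByteArrayPool(8 * 1024));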
Appendix III) Back to the important part, RequestQueue. Its constructors:
/** Number of network request dispatcher threads to start. */
private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;

public RequestQueue(Cache cache, Network network) {
    this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
}

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

/**
 * Creates the worker pool. Processing will not begin until {@link #start()} is called.
 *
 * @param cache A Cache to use for persisting responses to disk
 * @param network A Network interface for performing HTTP requests
 * @param threadPoolSize Number of network dispatcher threads to create
 * @param delivery A ResponseDelivery interface for posting responses and errors
 */
public RequestQueue(Cache cache, Network network, int threadPoolSize, ResponseDelivery delivery) {
    mCache = cache;
    mNetwork = network;
    mDispatchers = new NetworkDispatcher[threadPoolSize];
    mDelivery = delivery;
}

This is where another important player from the earlier analysis shows up: NetworkDispatcher. Much like a thread pool, an array of size threadPoolSize is allocated for the dispatchers (note that the dispatcher objects themselves are not created yet). We will not dig into its processing logic for now; for the moment it is enough to know that it is a thread:
public class NetworkDispatcher extends Thread
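Its run() loop is covered in a later part of this series; as a rough, hedged sketch of the idea only (greatly simplified, not the actual Volley source; cancellation and cache writes are omitted), a network dispatcher keeps taking requests from a blocking queue, hands them to the Network, and posts the parsed result through the delivery:

// Conceptual sketch of a network dispatcher loop.
while (true) {
    Request<?> request;
    try {
        request = queue.take();  // blocks until a request is available
    } catch (InterruptedException e) {
        return;  // the dispatcher was asked to quit
    }
    try {
        NetworkResponse networkResponse = network.performRequest(request);
        Response<?> response = request.parseNetworkResponse(networkResponse);
        delivery.postResponse(request, response);  // hands the result back to the main thread
    } catch (VolleyError error) {
        delivery.postError(request, error);
    }
}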
To summarize this first stage, Volley.newRequestQueue() does the following: 1) creates the disk Cache; 2) creates the HttpStack and builds a Network object on top of it; 3) creates the RequestQueue, whose constructor allocates a NetworkDispatcher array of size threadPoolSize (note: no NetworkDispatcher objects are created yet); 4) calls RequestQueue.start().
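Equivalently, those four steps can be written out by hand when more control is needed. A hedged sketch based on the constructors above (context is a placeholder and the 2-thread pool size is arbitrary):

// Manual equivalent of Volley.newRequestQueue(), with a custom thread pool size.
File cacheDir = new File(context.getCacheDir(), "volley");
Network network = new BasicNetwork(new HurlStack());
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 2);
queue.start();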
(2) Starting from the start() method:
1. RequestQueue.start():
/** Starts the dispatchers in this queue. */
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.

    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher =
                new NetworkDispatcher(mNetworkQueue, mNetwork, mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}

/** Stops the cache and network dispatchers. */
public void stop() {
    if (mCacheDispatcher != null) {
        mCacheDispatcher.quit();
    }
    for (int i = 0; i < mDispatchers.length; i++) {
        if (mDispatchers[i] != null) {
            mDispatchers[i].quit();
        }
    }
}

start() is still initialization: it creates one CacheDispatcher thread (it also extends Thread) and threadPoolSize (4 by default) NetworkDispatcher threads. After start(), counting the main thread, six threads are running in total. Looking back at the flow diagram, the blue, green, and yellow threads are now all in place; the green and yellow threads sit in the background waiting for Requests to dispatch. So the real subjects to study next are the two worker threads, CacheDispatcher and NetworkDispatcher. Jumping straight into their source turned out to be a bit hard, so let's first retrace the normal Volley usage flow: after creating the RequestQueue, you create your own Request (covered in the earlier articles) and then hand it over through RequestQueue's add() method.
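One practical note before moving on to add(): since start() spawns these five worker threads, a caller that owns its own queue typically stops it when the queue is no longer needed. A hedged sketch, assuming the queue is held in an Activity field named mRequestQueue (Volley itself does not require this pattern):

@Override
protected void onDestroy() {
    super.onDestroy();
    if (mRequestQueue != null) {
        mRequestQueue.stop();  // quits the cache dispatcher and all network dispatchers
    }
}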
2. Next, RequestQueue.add(), the entry point where the flow in the diagram above starts running:
/**
 * The set of all requests currently being processed by this RequestQueue. A Request
 * will be in this set if it is waiting in any queue or currently being processed by
 * any dispatcher.
 */
private final Set<Request> mCurrentRequests = new HashSet<Request>();

/**
 * Adds a Request to the dispatch queue.
 * @param request The request to service
 * @return The passed-in request
 */
public Request add(Request request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);  // see Appendix I: the request records its RequestQueue
    synchronized (mCurrentRequests) {
        // mCurrentRequests holds all requests owned by this RequestQueue, backed by a HashSet.
        mCurrentRequests.add(request);
    }

    // Initialize the newly added request.
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // See Appendix II: check whether the request may be cached.
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // The request may be cached.
    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {  // see Appendix III
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}
Appendix I) Request.setRequestQueue(): as the name suggests, it records which RequestQueue the Request belongs to; it is a plain setter:
/** The request queue this request is associated with. */
private RequestQueue mRequestQueue;

/**
 * Associates this request with the given queue. The request queue will be notified when this
 * request has finished.
 */
public void setRequestQueue(RequestQueue requestQueue) {
    mRequestQueue = requestQueue;
}
Appendix II) request.shouldCache() tells whether the request may be cached (caching is enabled by default; call setShouldCache(false) to disable it). If caching is not allowed, the request is added straight to mNetworkQueue and add() returns.
/** The queue of requests that are actually going out to the network. */
private final PriorityBlockingQueue<Request> mNetworkQueue =
        new PriorityBlockingQueue<Request>();
Despite its name, RequestQueue is not itself a real queue; the queues that actually hold Requests for the worker threads to take and process are mNetworkQueue (and mCacheQueue), both of type PriorityBlockingQueue.
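Because mNetworkQueue is a PriorityBlockingQueue, requests are taken out according to Request.getPriority() (NORMAL by default). A hedged sketch of how a caller could use both knobs mentioned here; url, listener, errorListener, and queue are placeholders:

// Hypothetical request that skips the cache and jumps ahead of NORMAL-priority requests.
StringRequest request = new StringRequest(Request.Method.GET, url, listener, errorListener) {
    @Override
    public Priority getPriority() {
        return Priority.HIGH;  // dequeued before NORMAL/LOW requests in the PriorityBlockingQueue
    }
};
request.setShouldCache(false);  // go straight to mNetworkQueue (see Appendix II)
queue.add(request);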
Appendix III) mWaitingRequests:
/**
 * Staging area for requests that already have a duplicate request in flight.
 * <ul>
 *     <li>containsKey(cacheKey) indicates that there is a request in flight for the given cache
 *         key.</li>
 *     <li>get(cacheKey) returns waiting requests for the given cache key. The in flight request
 *         is <em>not</em> contained in that list. Is null if no requests are staged.</li>
 * </ul>
 */
private final Map<String, Queue<Request>> mWaitingRequests =
        new HashMap<String, Queue<Request>>();
This field implements the request-staging part of the caching strategy: containsKey(cacheKey) returning true means a request for that cache key is already in flight, and get(cacheKey) returns the waiting requests (a Queue<Request>) for that cache key. The staging workflow for each request is:
1) For every newly added request, get its cache key first.
2) If mWaitingRequests does not yet contain that cache key, put(cacheKey, null) is called; from then on the mere presence of the key marks that a request for this cacheKey is in flight, and the request itself goes on to mCacheQueue.
3) If mWaitingRequests already contains that cache key, get(cacheKey) fetches its queue. If the queue is null, then by step 2 only the single in-flight request exists for this cache key, so a Queue<Request> (a LinkedList here) is created as the map value and the new request is added to it; otherwise the new request is simply appended to the existing queue. Either way the duplicate waits instead of being dispatched again.
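The counterpart that drains this staging area lives in RequestQueue.finish(), which a Request calls when it completes. Roughly, in a hedged paraphrase of that logic rather than the verbatim source, it removes the cache key and moves any staged duplicates onto mCacheQueue, where they will usually be answered by the cache the first request has just primed:

// Paraphrased sketch of RequestQueue.finish(Request): release the cache key and
// let any staged duplicates proceed through the cache queue.
void finish(Request request) {
    synchronized (mCurrentRequests) {
        mCurrentRequests.remove(request);
    }
    if (request.shouldCache()) {
        synchronized (mWaitingRequests) {
            Queue<Request> waiting = mWaitingRequests.remove(request.getCacheKey());
            if (waiting != null) {
                // The cache has been primed by the finished request, so the
                // duplicates can be served by the cache dispatcher.
                mCacheQueue.addAll(waiting);
            }
        }
    }
}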