LruDiskCache Essentials: the disk cache utility class to use

Source: Internet
Author: User
Tags: md5, hash, readable

LruDiskCache is a disk cache class that uses the LRU algorithm. Its purpose is to move the cache storage of LruCache from memory to disk, and it is typically used for caching small files and images.


Here are some of the more important points noted while reading the source:


Get

When fetching cached data, LruDiskCache relies on LinkedHashMap's access-order behavior: the most recently used entries move to the tail of the map, so the least recently used entries are traversed (and evicted) first.
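The eviction behavior can be sketched with a plain LinkedHashMap constructed in access order (TinyLru and its fixed capacity are illustrative names, not part of the cache):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the LRU behavior the cache relies on: a LinkedHashMap
// in access order, evicting the eldest entry once a fixed capacity is exceeded.
class TinyLru<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    TinyLru(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: get() moves an entry to the tail
        this.capacity = capacity;
    }

    @Override protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry at the head
    }
}
```

After `put("a"); put("b"); get("a"); put("c")` on a capacity-2 map, "b" is the least recently used entry and gets evicted, while "a" survives.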

When you need to get cached data, what you actually receive is a Snapshot object (provided the data is valid: the write succeeded, it has not expired, and so on). A Snapshot simply holds the input streams of the cache files; it performs no other logic.

private synchronized Snapshot getByDiskKey(String diskKey) throws IOException {
    checkNotClosed();
    Entry entry = lruEntries.get(diskKey);
    if (entry == null) {
        return null;
    }
    if (!entry.readable) { // the data has not been written successfully yet
        return null;
    }
    // Check time validity
    if (entry.expiryTimestamp < System.currentTimeMillis()) {
        for (int i = 0; i < valueCount; i++) {
            File file = entry.getCleanFile(i);
            if (file.exists() && !file.delete()) {
                throw new IOException("failed to delete " + file);
            }
            size -= entry.lengths[i];
            entry.lengths[i] = 0;
        }
        redundantOpCount++;
        journalWriter.append(DELETE + " " + diskKey + '\n');
        lruEntries.remove(diskKey);
        if (journalRebuildRequired()) {
            executorService.submit(cleanupCallable);
        }
        return null;
    }
    // The same key may correspond to multiple cache files
    FileInputStream[] ins = new FileInputStream[valueCount];
    try {
        for (int i = 0; i < valueCount; i++) {
            ins[i] = new FileInputStream(entry.getCleanFile(i));
        }
    } catch (FileNotFoundException e) {
        // A file must have been deleted manually!
        for (int i = 0; i < valueCount; i++) {
            if (ins[i] != null) {
                IOUtils.closeQuietly(ins[i]);
            } else {
                break;
            }
        }
        return null;
    }
    redundantOpCount++;
    journalWriter.append(READ + " " + diskKey + '\n');
    if (journalRebuildRequired()) {
        executorService.submit(cleanupCallable);
    }
    return new Snapshot(diskKey, entry.sequenceNumber, ins, entry.lengths);
}

Set

When adding cached data, you call the edit method to obtain an Editor object, or null (when the entry is already being edited), and an UPDATE log record is written. Note that this record is not by itself an indication that the cache write succeeded.

Note: diskKey is the MD5 hash of the original key.
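A minimal sketch of that hashing step, assuming a hypothetical KeyHasher helper (the name is illustrative, not part of LruDiskCache):

```java
import java.security.MessageDigest;

// Hypothetical helper showing how a raw key can be turned into a diskKey.
final class KeyHasher {
    static String toDiskKey(String rawKey) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(rawKey.getBytes("UTF-8"));
            // Render the 16-byte digest as a 32-character lowercase hex string.
            StringBuilder sb = new StringBuilder(32);
            for (byte b : digest) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e); // MD5 and UTF-8 are always available
        }
    }
}
```

For example, `toDiskKey("hello")` yields `5d41402abc4b2a76b9719d911017c592`, a fixed-length name that is safe to use on any file system regardless of what characters the original key contained.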

private synchronized Editor editByDiskKey(String diskKey, long expectedSequenceNumber) throws IOException {
    checkNotClosed();
    Entry entry = lruEntries.get(diskKey);
    if (expectedSequenceNumber != ANY_SEQUENCE_NUMBER
            && (entry == null || entry.sequenceNumber != expectedSequenceNumber)) {
        return null; // Snapshot is stale.
    }
    if (entry == null) {
        entry = new Entry(diskKey);
        lruEntries.put(diskKey, entry);
    } else if (entry.currentEditor != null) {
        return null; // Another edit is in progress.
    }
    Editor editor = new Editor(entry);
    entry.currentEditor = editor;
    // Flush the journal before creating files to prevent file leaks.
    journalWriter.write(UPDATE + " " + diskKey + '\n');
    journalWriter.flush();
    return editor;
}

The Editor object is then used to obtain a FaultHidingOutputStream wrapping the file's output stream. When a file operation fails, that object sets its hasErrors flag to true, and this flag is what ultimately determines whether the data insertion is treated as a success.
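A minimal sketch of the fault-hiding idea, assuming the flag and class name described above (the real class may wrap more methods):

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch: swallow IO errors during writes and record them in a flag
// that is checked later, at commit time, instead of throwing immediately.
class FaultHidingOutputStream extends FilterOutputStream {
    boolean hasErrors = false;

    FaultHidingOutputStream(OutputStream out) {
        super(out);
    }

    @Override public void write(int b) {
        try {
            out.write(b);
        } catch (IOException e) {
            hasErrors = true; // remember the failure instead of propagating it
        }
    }

    @Override public void write(byte[] buffer, int offset, int length) {
        try {
            out.write(buffer, offset, length);
        } catch (IOException e) {
            hasErrors = true;
        }
    }

    @Override public void flush() {
        try {
            out.flush();
        } catch (IOException e) {
            hasErrors = true;
        }
    }

    @Override public void close() {
        try {
            out.close();
        } catch (IOException e) {
            hasErrors = true;
        }
    }
}
```

The design choice here is that a failed write should abort the whole cache entry at commit time rather than crash the caller mid-write.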

The code calls entry.getDirtyFile(index), and some readers may wonder why it is called "dirty". It is simply a temporary file: after the data has been written successfully, the file is renamed to become the official ("clean") file.
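The temporary-then-rename pattern can be sketched as follows (the DirtyFileDemo helper and the ".tmp" suffix are hypothetical; the real cache derives its dirty and clean file names from the entry):

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

// Hypothetical sketch of the dirty-file pattern: write to a temporary
// file, then rename it to the final ("clean") file only on success.
final class DirtyFileDemo {
    static File writeThenPublish(File dir, String name, String data) throws IOException {
        File dirty = new File(dir, name + ".tmp"); // temporary "dirty" file
        File clean = new File(dir, name);          // official "clean" file
        try (FileWriter w = new FileWriter(dirty)) {
            w.write(data);
        }
        // Publish: readers only ever see a fully written file.
        if (!dirty.renameTo(clean)) {
            throw new IOException("failed to rename " + dirty + " to " + clean);
        }
        return clean;
    }
}
```

If the write fails partway through, only the dirty file is affected, so a half-written file can never be observed under the clean name.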

readable indicates whether the data for the current entry has already been written; once it has, the entry can no longer be written repeatedly.

public OutputStream newOutputStream(int index) throws IOException {
    synchronized (LruDiskCache.this) {
        if (entry.currentEditor != this) {
            throw new IllegalStateException();
        }
        if (!entry.readable) {
            written[index] = true;
        }
        File dirtyFile = entry.getDirtyFile(index);
        FileOutputStream outputStream;
        try {
            outputStream = new FileOutputStream(dirtyFile);
        } catch (FileNotFoundException e) {
            // Attempt to recreate the cache directory.
            directory.mkdirs();
            try {
                outputStream = new FileOutputStream(dirtyFile);
            } catch (FileNotFoundException e2) {
                // We are unable to recover. Silently eat the writes.
                return NULL_OUTPUT_STREAM;
            }
        }
        return new FaultHidingOutputStream(outputStream);
    }
}

Finally, to publish the new cache entry, the Editor object's commit method must be called.

At commit time a check is made: if the write succeeded, a CLEAN log record is appended (CLEAN simply means the data was inserted successfully):

entry.readable = true;
journalWriter.write(CLEAN + " " + entry.diskKey + " " + EXPIRY_PREFIX + entry.expiryTimestamp + entry.getLengths() + '\n');

Otherwise, dirty-data handling is performed: the dirty file is deleted and a DELETE record is written to the log.

journalWriter.write(DELETE + " " + entry.diskKey + '\n');
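Putting the log records together, the journal for one entry's lifetime might look like the following (a hypothetical sketch: the literal record names are taken from the snippets above, and the expiry and length fields are shown as angle-bracket placeholders because their exact format is not given here):

```
UPDATE 5d41402abc4b2a76b9719d911017c592
CLEAN 5d41402abc4b2a76b9719d911017c592 <EXPIRY_PREFIX><expiryTimestamp> <lengths>
READ 5d41402abc4b2a76b9719d911017c592
DELETE 5d41402abc4b2a76b9719d911017c592
```

When the cache is reopened, replaying these records rebuilds lruEntries; this is also why redundant READ/UPDATE records accumulate and eventually trigger a journal rebuild.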


Delete


When deleting a cache entry, the cache first checks whether the entry is currently being edited; only when it is not does it perform the delete operation and write a DELETE record to the log file.

If the same key corresponds to multiple cache files, all of them are deleted.

private synchronized boolean removeByDiskKey(String diskKey) throws IOException {
    checkNotClosed();
    Entry entry = lruEntries.get(diskKey);
    if (entry == null || entry.currentEditor != null) {
        return false;
    }
    for (int i = 0; i < valueCount; i++) {
        File file = entry.getCleanFile(i);
        if (file.exists() && !file.delete()) {
            throw new IOException("failed to delete " + file);
        }
        size -= entry.lengths[i];
        entry.lengths[i] = 0;
    }
    redundantOpCount++;
    journalWriter.append(DELETE + " " + diskKey + '\n');
    lruEntries.remove(diskKey);
    if (journalRebuildRequired()) {
        executorService.submit(cleanupCallable);
    }
    return true;
}

Other analysis:


At the end of many operations, a cleanup task is submitted to keep the data within a manageable range:

private final Callable<Void> cleanupCallable = new Callable<Void>() {
    public Void call() throws Exception {
        synchronized (LruDiskCache.this) {
            if (journalWriter == null) {
                return null; // Closed.
            }
            trimToSize(); // Evict least recently used entries when the size limit is exceeded.
            if (journalRebuildRequired()) { // More than 2000 redundant log records.
                rebuildJournal(); // Write a new journal from current data, replacing the old log file.
                redundantOpCount = 0; // Reset the counter.
            }
        }
        return null;
    }
};


The journal file is limited in length and cannot grow arbitrarily:

private boolean journalRebuildRequired() {
    final int redundantOpCompactThreshold = 2000;
    return redundantOpCount >= redundantOpCompactThreshold
            /* && redundantOpCount >= lruEntries.size() */;
}

Several operations iterate over valueCount, the number of cache files stored under a single key; for example, an image may be cached in both a large and a small version, and that count is valueCount.


(The code comes from LruDiskCache in xUtils, which is clearly more complete than the DiskLruCache sample on the Android site.)

