High-throughput systems
Consider project scheduling as an analogy: each module of a project can be worked on by several people in parallel, or by one or more people in sequence, but there is always a critical path, and that path determines the project duration. The response time of a system call works the same way: it also has a critical path, and that path determines the overall response time of the system. The critical path is made up of CPU computation, I/O, responses from external systems, and so on.
For a user, the software's performance is perceived as the time from clicking a button or link, or issuing a command, to the moment the system presents the result in the form the user expects. This is what we call response time. A short response time gives a good user experience; note that the experienced response time includes both subjective factors and the objective measured time, and when designing software we need to combine the two to deliver the best experience. For example, when a user queries a large data set, we can present the first batch of data immediately and continue retrieving the rest in the background. The user does not know what we are doing behind the scenes; what the user cares about is the response time of the operation itself.
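As a minimal sketch of this idea, assuming a paged data-access layer (the class and method names below, such as PagedQueryService and fetchPage, are hypothetical): the first page is fetched synchronously and returned at once, while the remaining data is retrieved asynchronously.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative only: return the first page of results immediately and load the rest in the background.
public class PagedQueryService {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    public List<String> queryFirstPage(final String sql, final int pageSize) {
        // The user's perceived response time is determined by this synchronous call only.
        List<String> firstPage = fetchPage(sql, 0, pageSize);
        // The remaining rows are fetched asynchronously; the caller is not blocked on this work.
        Future<List<String>> remaining = pool.submit(new Callable<List<String>>() {
            public List<String> call() {
                return fetchPage(sql, pageSize, Integer.MAX_VALUE);
            }
        });
        // A real application would keep 'remaining' and merge its result in once it completes.
        return firstPage;
    }

    // Placeholder for real data-access code (assumed, not shown here).
    private List<String> fetchPage(String sql, int offset, int limit) {
        return new ArrayList<String>();
    }
}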
When we talk about a system's throughput, we usually mean two factors: QPS (or TPS) and the number of concurrent requests. For every system, each of these values has a practical upper limit under a given access pattern. Once either one reaches its limit, throughput stops growing; if the load keeps increasing, throughput actually drops, because the overloaded system spends its resources on context switching, memory pressure, and other overhead, which in turn degrades the system's response time.
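As a rough rule of thumb, the three quantities are related by throughput ≈ concurrency / average response time. For example, a system serving 100 concurrent requests with an average response time of 200 ms delivers on the order of 100 / 0.2 s = 500 requests per second; pushing concurrency beyond the point where response time starts to climb no longer buys any additional throughput.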
Buffering (buffer)
A buffer is a region of memory used to smooth out the performance difference between the upper and lower layers of an application and thereby improve overall system performance. A funnel is a familiar everyday analogy. When the upper component is faster than the lower one, a buffer effectively reduces the time the upper component spends waiting for the lower one: the upper component does not have to wait until the lower component has actually accepted all of the data before returning, so it finishes its own work sooner, which improves the performance of the system as a whole.
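As a minimal sketch of this structure, assuming a fast producer and a slow consumer, a bounded BlockingQueue can serve as the buffer; the class and method names below are only illustrative. The producer only waits when the buffer is completely full.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative buffer between a fast upper component (producer) and a slow lower component (consumer).
public class BufferedPipeline {
    // The buffer: large enough to absorb bursts, small enough not to waste memory.
    private final BlockingQueue<String> buffer = new ArrayBlockingQueue<String>(1024);

    // Upper component: returns as soon as the record is placed in the buffer (unless the buffer is full).
    public void submit(String record) throws InterruptedException {
        buffer.put(record);
    }

    // Lower component: drains the buffer at its own pace on a separate thread.
    public void startConsumer() {
        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        String record = buffer.take();
                        writeSlowly(record); // e.g. disk or network I/O
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        consumer.setDaemon(true);
        consumer.start();
    }

    private void writeSlowly(String record) {
        // placeholder for the slow lower-layer operation
    }
}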
Buffering using BufferedWriter
BufferedWriter is one example of buffer usage. In general, a buffer should not be too small, because a buffer that is too small cannot do its job; nor should it be too large, because an oversized buffer wastes memory and increases the burden on the GC. Adding buffers to I/O components wherever possible improves performance. Listing 1 shows an example program before a buffer is added.
Listing 1. Sample code before adding a buffer
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.Image;
import javax.swing.JApplet;

public class NoBufferMovingCircle extends JApplet implements Runnable {
    Image screenImage = null;
    Thread thread;
    int x = 5;
    int move = 1;

    public void init() {
        screenImage = createImage(230, 160);
    }

    public void start() {
        if (thread == null) {
            thread = new Thread(this);
            thread.start();
        }
    }

    @Override
    public void run() {
        try {
            System.out.println(x);
            while (true) {
                x += move;
                System.out.println(x);
                if ((x > 105) || (x < 5)) {
                    move *= -1;
                }
                repaint();
                Thread.sleep(10);
            }
        } catch (Exception e) {
        }
    }

    public void drawCircle(Graphics gc) {
        Graphics2D g = (Graphics2D) gc;
        g.setColor(Color.GREEN);
        g.fillRect(0, 0, 200, 100);
        g.setColor(Color.RED);
        g.fillOval(x, 5, 90, 90);
    }

    public void paint(Graphics g) {
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, 200, 100);
        drawCircle(g);
    }
}
This program moves the red ball back and forth, but the visual effect is poor: every refresh redraws the whole picture directly on screen, which takes relatively long, so flicker and white flashes are very noticeable. For a smoother display, a buffer can be added, as shown in Listing 2.
Listing 2. Sample code after adding a buffer
import java.awt.Color;
import java.awt.Graphics;

public class BufferMovingCircle extends NoBufferMovingCircle {
    Graphics doubleBuffer = null; // the buffer

    public void init() {
        super.init();
        doubleBuffer = screenImage.getGraphics();
    }

    // Use a buffer to optimize the original paint() method
    public void paint(Graphics g) {
        doubleBuffer.setColor(Color.WHITE); // draw in memory first
        doubleBuffer.fillRect(0, 0, 200, 100);
        drawCircle(doubleBuffer);
        g.drawImage(screenImage, 0, 0, this); // then copy the finished image to the screen in one step
    }
}
I/O operations with buffers
In addition to NIO, there are two basic ways of using Java for I/O operations:
- Use InputStream and OutputStream-based methods;
- Use Writer and Reader.
Regardless of the way you use file I/O, the performance of I/O can be improved effectively if buffering is used properly.
The following shows the buffering components that can be used with InputStream, OutputStream, Writer, and Reader.
OutputStream - FileOutputStream - BufferedOutputStream
InputStream - FileInputStream - BufferedInputStream
Writer - FileWriter - BufferedWriter
Reader - FileReader - BufferedReader
Wrapping file I/O with a buffer component can effectively improve the performance of file I/O.
Listing 3. Sample code
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;

public class StreamVSBuffer {
    public static void streamMethod() throws IOException {
        try {
            long start = System.currentTimeMillis();
            // Replace with your own file path
            DataOutputStream dos = new DataOutputStream(
                    new FileOutputStream("C://StreamVSBufferTest.txt"));
            for (int i = 0; i < 10000; i++) {
                dos.writeBytes(String.valueOf(i) + "\r\n"); // write data 10,000 times
            }
            dos.close();
            DataInputStream dis = new DataInputStream(
                    new FileInputStream("C://StreamVSBufferTest.txt"));
            while (dis.readLine() != null) {
            }
            dis.close();
            System.out.println(System.currentTimeMillis() - start);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }

    public static void bufferMethod() throws IOException {
        try {
            long start = System.currentTimeMillis();
            // Replace with your own file path
            DataOutputStream dos = new DataOutputStream(new BufferedOutputStream(
                    new FileOutputStream("C://StreamVSBufferTest.txt")));
            for (int i = 0; i < 10000; i++) {
                dos.writeBytes(String.valueOf(i) + "\r\n"); // write data 10,000 times
            }
            dos.close();
            DataInputStream dis = new DataInputStream(new BufferedInputStream(
                    new FileInputStream("C://StreamVSBufferTest.txt")));
            while (dis.readLine() != null) {
            }
            dis.close();
            System.out.println(System.currentTimeMillis() - start);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        try {
            StreamVSBuffer.streamMethod();
            StreamVSBuffer.bufferMethod();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
The result of the run is shown in Listing 4.
Listing 4. Run output
88931
Clearly, the buffered version is much faster than the unbuffered one. Listing 5 shows a similar test for FileWriter and FileReader.
Listing 5. FileWriter and FileReader code
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public class WriterVSBuffer {
    public static void streamMethod() throws IOException {
        try {
            long start = System.currentTimeMillis();
            FileWriter fw = new FileWriter("C://StreamVSBufferTest.txt"); // replace with your own file path
            for (int i = 0; i < 10000; i++) {
                fw.write(String.valueOf(i) + "\r\n"); // write data 10,000 times
            }
            fw.close();
            FileReader fr = new FileReader("C://StreamVSBufferTest.txt");
            while (fr.read() != -1) { // read to the end of the file
            }
            fr.close();
            System.out.println(System.currentTimeMillis() - start);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }

    public static void bufferMethod() throws IOException {
        try {
            long start = System.currentTimeMillis();
            BufferedWriter fw = new BufferedWriter(
                    new FileWriter("C://StreamVSBufferTest.txt")); // replace with your own file path
            for (int i = 0; i < 10000; i++) {
                fw.write(String.valueOf(i) + "\r\n"); // write data 10,000 times
            }
            fw.close();
            BufferedReader fr = new BufferedReader(
                    new FileReader("C://StreamVSBufferTest.txt"));
            while (fr.read() != -1) { // read to the end of the file
            }
            fr.close();
            System.out.println(System.currentTimeMillis() - start);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        try {
            WriterVSBuffer.streamMethod();
            WriterVSBuffer.bufferMethod();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
The output is shown in Listing 6.
Listing 6. Run output
129531
As these examples show, for both reading and writing files, using buffers appropriately can significantly improve file I/O performance and thereby reduce response time for users.
Cache
A cache is also a block of memory used to improve system performance. Its primary role is to stage the results of data processing so that they can be served to subsequent requests. In many cases, producing or fetching a piece of data is expensive, and when that data is requested frequently, recomputing it each time drains CPU resources. A cache saves these hard-won results; when another thread or client asks for the same data, the processing step can be skipped entirely and the cached result returned to the requesting component immediately, which shortens the system's response time.
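Before turning to dedicated caching frameworks, here is a minimal sketch of the idea; the SimpleCache class below is illustrative and keeps everything inside a single JVM.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative in-process cache: expensive results are computed once and then served from memory.
public class SimpleCache<K, V> {

    public interface Loader<K, V> {
        V load(K key); // the expensive computation or query
    }

    private final ConcurrentMap<K, V> store = new ConcurrentHashMap<K, V>();
    private final Loader<K, V> loader;

    public SimpleCache(Loader<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        V value = store.get(key);
        if (value == null) {
            value = loader.load(key);          // slow path: compute or fetch the data
            V previous = store.putIfAbsent(key, value);
            if (previous != null) {
                value = previous;              // another thread got there first; reuse its result
            }
        }
        return value;                          // fast path: served directly from memory
    }
}

A real cache also needs an eviction policy (maximum size, expiry) and, in a cluster, replication; that is exactly what the frameworks discussed next provide on top of this basic idea.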
There are many Java caching frameworks, such as EhCache, OSCache, and JBossCache. EhCache comes from Hibernate and is its default data caching solution; OSCache, from OpenSymphony, can cache arbitrary objects and even fragments of a JSP page or HTTP request; JBossCache, developed by JBoss, is a caching framework that can be used for data sharing across a JBoss cluster.
Taking EHCache as an example, its main characteristics are:
- fast;
- simple;
- multiple caching (eviction) strategies;
- two levels of cached data, memory and disk, so capacity is not a concern;
- cached data can be persisted to disk so that it survives virtual machine restarts;
- distributed caching via RMI, a pluggable API, and so on;
- listener interfaces for the cache and the cache manager;
- support for multiple cache manager instances, and multiple cache regions per instance;
- a ready-made cache implementation for Hibernate.
Because EhCache is an in-process cache, once the application is deployed in a cluster, each node maintains its own cached data. When one node updates the cache, the update is not shared with the other nodes, which not only reduces efficiency but also leaves the nodes with inconsistent data. For example, suppose a web site is deployed as a cluster of two nodes, A and B. If the cache on node A has been updated but the cache on node B has not, users browsing the site may see updated data on one request and stale data on the next. We could pin each user to a single node with Session Sticky routing, but for highly interactive systems, or for systems that are not web based, Session Sticky is clearly not a good fit, so an EhCache clustering solution is needed. Listing 7 shows basic EHCache sample code.
Listing 7. EHCache sample code
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

/**
 * Step 1: create a CacheManager object.
 * Step 2: obtain a Cache object.
 * Step 3: put an Element (a key/value pair) into the Cache object.
 * @author mahaibo
 */
public class EHCacheDemo {
    public static void main(String[] args) {
        // Point to the location of ehcache.xml
        String fileName = "e://1008//workspace//ehcachetest//ehcache.xml";
        CacheManager manager = new CacheManager(fileName);
        // List all cache names
        String names[] = manager.getCacheNames();
        for (int i = 0; i < names.length; i++) {
            System.out.println(names[i]);
        }
        // Obtain a Cache object by name.
        // First way:
        Cache cache = manager.getCache(names[0]);
        // Second way (defaultCache must exist in ehcache.xml; "test" can be any name):
        // Cache cache = new Cache("test", 1, true, false, 5, 2);
        // manager.addCache(cache);

        // Put an Element (key/value pair) into the cache
        cache.put(new Element("key1", "values1"));
        Element element = cache.get("key1");
        System.out.println(element.getValue());
        Object obj = element.getObjectValue();
        System.out.println((String) obj);
        manager.shutdown();
    }
}
Object Reuse
The object reuse pool is one of the most widely used optimization techniques. The core idea is that if instances of a class are requested frequently, you do not have to create a new instance every time; instead, a number of instances are kept in a "pool", and one is taken directly from the pool when needed. The pool itself may be implemented as an array, a linked list, or any collection class. Object pools are used everywhere, the thread pool and the database connection pool being the classic examples. A thread pool holds reusable thread objects: when a task is submitted, the system does not create a new thread but takes an available one from the pool to run it, and when the task finishes the thread is not destroyed but returned to the pool for later reuse. Because creating and destroying threads is expensive, a thread pool noticeably improves performance in systems that schedule threads frequently. A database connection pool is a special kind of object pool that maintains a collection of database connections: when the system needs to access the database it takes a connection from the pool instead of establishing a new one, and when the operation finishes the connection is returned to the pool rather than closed. Since creating and destroying database connections are heavyweight operations, avoiding them is equally important for performance. Widely used connection pool components include C3P0 and Proxool.
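A minimal sketch of a generic object pool, assuming the pooled objects can safely be reused (the ObjectPool class below is illustrative; real systems normally rely on ready-made components such as thread pools and connection pools):

import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative object pool: instances are borrowed and returned instead of being created and destroyed.
public class ObjectPool<T> {

    public interface Factory<T> {
        T create(); // how to build a new instance when the pool is empty
    }

    private final ConcurrentLinkedQueue<T> pool = new ConcurrentLinkedQueue<T>();
    private final Factory<T> factory;

    public ObjectPool(Factory<T> factory, int initialSize) {
        this.factory = factory;
        for (int i = 0; i < initialSize; i++) {
            pool.offer(factory.create()); // pre-populate the pool
        }
    }

    // Borrow an instance; only create a new one if the pool is empty.
    public T borrow() {
        T instance = pool.poll();
        return (instance != null) ? instance : factory.create();
    }

    // Return the instance to the pool instead of discarding it.
    public void giveBack(T instance) {
        pool.offer(instance);
    }
}

The JDK's thread pools (java.util.concurrent.Executors) and connection pools such as C3P0 follow the same borrow-and-return pattern, adding sizing, validation, and idle-timeout handling on top.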
Take C3P0 as an example. It is an open source JDBC connection pool that implements DataSource and JNDI binding and supports the JDBC 3 specification as well as the standard extensions of JDBC 2. Open source projects that use it include Hibernate and Spring. Listing 8 shows a JNDI-style configuration.
Listing 8. Tomcat data source configuration
<Resource name="jdbc/dbsource"
    type="com.mchange.v2.c3p0.ComboPooledDataSource"
    factory="org.apache.naming.factory.BeanFactory"
    user="xxxx" password="xxxx"
    driverClass="oracle.jdbc.driver.OracleDriver"
    jdbcUrl="jdbc:oracle:thin:@192.168.x.x:1521:orcl"
    maxPoolSize="50" minPoolSize="5"
    initialPoolSize="5" acquireIncrement="2"
    maxIdleTime="60" idleConnectionTestPeriod="10"/>
Parameter description:
- idleConnectionTestPeriod: how often (in seconds) C3P0 automatically checks whether the connections in the pool are still valid. After the database restarts, or its process is killed for some reason, C3P0 does not automatically reinitialize the connection pool: the next request to hit an invalid connection fails with an error, the pool is refreshed and the broken connections discarded, and the request after that succeeds. C3P0 currently has no parameter for retrying an already-established connection that has gone bad; it only has a parameter for retrying the acquisition of a new connection after a failure.
- acquireRetryAttempts: the number of times C3P0 retries acquiring a new connection after an acquisition failure.
- acquireIncrement: the number of connections C3P0 creates in one go when the connections in the pool are exhausted; no new connections are created once the number in use has reached maxPoolSize.
- maxIdleTime: by default C3P0 does not close unused connections but returns them to the pool, so the number of connections can keep growing. maxIdleTime (default 0, meaning never expire), in seconds, is the maximum time an idle connection is allowed to live before being discarded.
If you use Spring and your project does not use JNDI, and you do not want to configure the pool through Hibernate, you can configure C3P0 directly as a DataSource bean, as shown in Listing 9.
Listing 9. Spring configuration
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
    <property name="driverClass"><value>oracle.jdbc.driver.OracleDriver</value></property>
    <property name="jdbcUrl"><value>jdbc:oracle:thin:@localhost:1521:test</value></property>
    <property name="user"><value>kay</value></property>
    <property name="password"><value>root</value></property>
    <!-- Minimum number of connections kept in the pool. -->
    <property name="minPoolSize" value="5"/>
    <!-- Maximum number of connections kept in the pool. Default: 15 -->
    <property name="maxPoolSize" value="50"/>
    <!-- Maximum idle time in seconds; a connection unused for this long is discarded. 0 means never. Default: 0 -->
    <property name="maxIdleTime" value="1800"/>
    <!-- Number of connections acquired at a time when the pool is exhausted. Default: 3 -->
    <property name="acquireIncrement" value="3"/>
    <property name="maxStatements" value="1000"/>
    <property name="initialPoolSize" value="5"/>
    <!-- Check idle connections in the pool every 60 seconds. Default: 0 -->
    <property name="idleConnectionTestPeriod" value="60"/>
    <!-- Number of attempts to acquire a new connection after a failure. Default: 30 -->
    <property name="acquireRetryAttempts" value="30"/>
    <property name="breakAfterAcquireFailure" value="true"/>
    <property name="testConnectionOnCheckout" value="false"/>
</bean>
Similar configurations exist for many other environments; readers can search the web for details.
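For completeness, here is a minimal programmatic sketch of the same pool, assuming the C3P0 library is on the classpath; the connection details below are placeholders.

import java.sql.Connection;
import com.mchange.v2.c3p0.ComboPooledDataSource;

// Illustrative programmatic C3P0 setup; values mirror the configurations above and are placeholders.
public class C3P0Demo {
    public static void main(String[] args) throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("oracle.jdbc.driver.OracleDriver");
        ds.setJdbcUrl("jdbc:oracle:thin:@localhost:1521:test");
        ds.setUser("xxxx");
        ds.setPassword("xxxx");
        ds.setMinPoolSize(5);
        ds.setMaxPoolSize(50);
        ds.setAcquireIncrement(2);

        Connection conn = ds.getConnection(); // borrowed from the pool, not newly created
        try {
            // ... use the connection ...
        } finally {
            conn.close(); // returns the connection to the pool rather than physically closing it
        }
        ds.close(); // shut down the pool when the application exits
    }
}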
Changing the computation mode
The best-known change of computation mode is the time-for-space tradeoff, often used on embedded devices or wherever memory or disk space is tight: CPU time is sacrificed to accomplish work that would otherwise require more memory or disk space.
A very simple time-for-space algorithm is swapping the values of two variables a and b. The most common way is to use an intermediate variable, but introducing an extra variable means using more space. The following approach eliminates the intermediate variable and still swaps the values; the cost is a few extra CPU operations.
Listing 10. Sample code
a = a + b;
b = a - b;
a = a - b;
A more useful example is supporting unsigned integers. The Java language does not support unsigned types, so when an unsigned byte is needed, a short is normally used instead, which wastes space. The following code uses bit operations to simulate an unsigned byte: getting and setting the value costs a few extra CPU operations, but the memory requirement is greatly reduced.
Listing 11. Unsigned integer arithmetic
public class UnsignedByte {
    // Interpret a byte as an unsigned number by widening it to a short
    public short getValue(byte i) {
        short li = (short) (i & 0xff);
        return li;
    }

    // Store a short (0-255) into a byte as an unsigned value
    public byte toUnsignedByte(short i) {
        return (byte) (i & 0xff);
    }

    public static void main(String[] args) {
        UnsignedByte ins = new UnsignedByte();
        short[] shorts = new short[256];          // declare a short array
        for (int i = 0; i < shorts.length; i++) { // must not exceed the unsigned byte range
            shorts[i] = (short) i;
        }
        byte[] bytes = new byte[256];             // use a byte array instead of the short array
        for (int i = 0; i < bytes.length; i++) {
            bytes[i] = ins.toUnsignedByte(shorts[i]); // store the short values into the byte array
        }
        for (int i = 0; i < bytes.length; i++) {
            System.out.println(ins.getValue(bytes[i]) + " "); // read the unsigned values back out
        }
    }
}
The output, truncated to the values up to 10, is shown in Listing 12.
Listing 12. Run output
0 1 2 3 4 5 6 7 8 9 10
Conversely, if CPU power is the bottleneck, computation can be sped up by spending extra space, as shown in Listing 13.
Listing 13. Speeding up computation with extra space
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class SpaceSort {
    public static int arrayLen = 1000000;

    public static void main(String[] args) {
        int[] a = new int[arrayLen];
        int[] old = new int[arrayLen];
        Map<Integer, Object> map = new HashMap<Integer, Object>();
        int count = 0;
        while (count < a.length) { // initialize the array with distinct random values
            int value = (int) (Math.random() * arrayLen * 10) + 1;
            if (map.get(value) == null) {
                map.put(value, value);
                a[count] = value;
                count++;
            }
        }
        System.arraycopy(a, 0, old, 0, a.length); // keep a copy of the original data

        long start = System.currentTimeMillis();
        Arrays.sort(a);
        System.out.println("Arrays.sort spend:" + (System.currentTimeMillis() - start) + "ms");

        System.arraycopy(old, 0, a, 0, old.length); // restore the original data
        start = System.currentTimeMillis();
        spaceToTime(a);
        System.out.println("spaceToTime spend:" + (System.currentTimeMillis() - start) + "ms");
    }

    public static void spaceToTime(int[] array) {
        int i = 0;
        int max = array[0];
        int l = array.length;
        for (i = 1; i < l; i++) { // find the maximum value
            if (array[i] > max) {
                max = array[i];
            }
        }
        int[] temp = new int[max + 1]; // the extra space: one slot per possible value
        for (i = 0; i < l; i++) {
            temp[array[i]] = array[i];
        }
        int j = 0;
        int max1 = max + 1;
        for (i = 0; i < max1; i++) { // walk the index space in order to produce the sorted result
            if (temp[i] > 0) {
                array[j++] = temp[i];
            }
        }
    }
}
The method spaceToTime() sorts the array by paying a space cost: it uses the array index itself to represent the magnitude of each value, which avoids comparisons between the numbers entirely. This is a typical example of trading space for time.
Conclusion
There are many techniques for building high-throughput systems, and the author will introduce them in a series of articles rather than attempting to cover everything at once. This article has presented optimizations and recommendations for buffering, caching, object reuse pools, and changes to the computation mode, and has verified the suggestions with concrete code. The author firmly believes that no optimization is universally effective; readers need to choose and experiment according to their own situation.