"Java Program Performance Optimization" Summary

Source: Internet
Author: User
Tags: AOP, bitwise, reflection
Design Optimization

Singleton pattern: lazy loading (via an inner class). Note that reflection and serialization can still break a singleton.
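A minimal sketch of the inner-class lazy-loading approach mentioned above (class and method names are illustrative):

```java
// Lazy-initialized singleton via a static holder class: the JVM loads
// Holder (and creates INSTANCE) only on the first getInstance() call,
// and class loading guarantees thread safety without synchronization.
// As noted above, reflection or serialization can still break this.
public class Singleton {
    private Singleton() { }          // private constructor blocks direct instantiation

    private static class Holder {
        static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // Both calls return the same instance.
        System.out.println(Singleton.getInstance() == Singleton.getInstance());
    }
}
```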
Proxy pattern: also a form of lazy loading. A static proxy involves a subject interface, a real subject, a proxy class, and a client (Main): the client uses the proxy at initialization, and the proxy loads the real subject only when it is actually needed. Dynamic proxies generate the proxy code at runtime: JDK dynamic proxy, CGLIB, Javassist, and ASM.

Dynamic loading process (taking CGLIB as an example): generate the bytecode of a class from the specified callback class and save it in a byte array; call ClassLoader.defineClass via reflection to define the bytecode as a Class; then use reflection to create an instance of that class.

Classic application: AOP. There are several ways to implement AOP:
- Static weaving: the aspect is woven into the target file at compile time.
- JDK dynamic proxy: at runtime, a proxy class is generated for the interface, and the aspect is woven into the proxy class.
- CGLIB and Javassist dynamic bytecode: at runtime, after the target class is loaded, bytecode for a subclass of the target class is built dynamically, and the aspect is woven into that subclass.
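Of these, the JDK dynamic proxy approach can be sketched as follows; the Subject interface and the logging "advice" are illustrative:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// JDK dynamic proxy: at runtime a proxy class is generated for the
// Subject interface, and the aspect (logging here) is woven in by the
// InvocationHandler around the call to the real subject.
public class JdkProxyDemo {
    interface Subject {
        String request();
    }

    static class RealSubject implements Subject {
        public String request() { return "real result"; }
    }

    public static void main(String[] args) {
        final Subject real = new RealSubject();
        Subject proxy = (Subject) Proxy.newProxyInstance(
                Subject.class.getClassLoader(),
                new Class<?>[] { Subject.class },
                new InvocationHandler() {
                    public Object invoke(Object p, Method m, Object[] a) throws Throwable {
                        System.out.println("before " + m.getName()); // advice woven in
                        Object result = m.invoke(real, a);           // delegate to the real subject
                        System.out.println("after " + m.getName());
                        return result;
                    }
                });
        System.out.println(proxy.request());
    }
}
```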
Flyweight pattern: reusing large objects saves memory and creation time. It differs from an object pool: flyweight objects are distinct, each with its own meaning and purpose, while object-pool objects are interchangeable, such as the connections in a database connection pool.
Decorator pattern: functional components and performance components can be separated and then combined as needed. Design principle: use delegation rather than inheritance. Classic examples: OutputStream and InputStream.

Observer pattern.

Common Optimization Components and Methods

Buffer:
A buffer reconciles the performance difference between upper and lower components and is most commonly used to increase I/O speed. Classic example: file-writing with FileWriter versus BufferedWriter; using the buffered writer roughly doubles performance.
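A minimal sketch of the buffered-writer usage described above (file name and line count are arbitrary):

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

// BufferedWriter batches many small writes in an in-memory buffer, so
// the underlying FileWriter sees far fewer, larger write calls.
public class BufferedWriteDemo {
    public static void main(String[] args) throws IOException {
        BufferedWriter writer = new BufferedWriter(new FileWriter("out.txt"));
        for (int i = 0; i < 1000; i++) {
            writer.write("line " + i); // accumulates in the buffer
            writer.newLine();
        }
        writer.close();                // flushes any remaining buffered data
        System.out.println("done");
    }
}
```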
Cache, object reuse (pooling):

Use object pooling only for heavyweight objects; pooling lightweight objects can actually degrade system performance. Apache already provides an object-pooling component, Jakarta Commons Pool.

Parallel replacing serial: data migration. Load balancing. Trading time for space: CPU time versus memory. Trading space for time: caching.

Java Program Optimization

String:
A String consists of a char array, an offset, and a length; the actual string is located within the char array by the offset and length (this describes the layout in older JDKs). Because of this sharing, substring could easily cause memory leaks: the substring kept a reference to the original, possibly much larger, char array. For string concatenation, a direct "+" is automatically compiled into StringBuilder calls, so it is optimized. But when strings are concatenated inside a for loop, the compiler cannot help: each iteration creates a new StringBuilder. So in loops, append to a single StringBuilder explicitly rather than concatenating strings directly.
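The two concatenation forms can be contrasted as follows (loop count is arbitrary):

```java
// Inside a loop, "s += x" compiles to a *new* StringBuilder per
// iteration; reusing one StringBuilder avoids that repeated
// allocation and copying.
public class ConcatDemo {
    public static void main(String[] args) {
        int n = 10000;

        // Slow form: each iteration builds a new StringBuilder and copies s.
        String s = "";
        for (int i = 0; i < n; i++) {
            s += "x";
        }

        // Fast form: one builder, one append per iteration.
        StringBuilder sb = new StringBuilder(n); // pre-size to avoid regrowth
        for (int i = 0; i < n; i++) {
            sb.append("x");
        }

        System.out.println(s.length() == sb.length()); // same result either way
    }
}
```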
List (ArrayList, Vector, LinkedList):

ArrayList and Vector → AbstractList → List.
LinkedList → AbstractSequentialList → AbstractList → List. Both ArrayList and Vector are based on an array, encapsulating the operations on an internal array.
LinkedList uses a circular doubly linked list: predecessor entry (the last element) ← current element (the header) → successor entry (the first element). A plain add() appends directly at the end.
ArrayList:

public boolean add(E e) {
    ensureCapacity(size + 1); // make sure the internal array has room; grows by about 1.5x when full -- performance hinges on this
    elementData[size++] = e;  // place the element at the end of the array
    return true;
}
LinkedList:

private Entry<E> addBefore(E e, Entry<E> entry) {
    // the next three lines are the main performance cost
    Entry<E> newEntry = new Entry<E>(e, entry, entry.previous);
    newEntry.previous.next = newEntry; // point the predecessor's next at the new element
    newEntry.next.previous = newEntry; // point the successor's previous at the new element
    size++;
    modCount++;
    return newEntry;
}
Insertion at a specified index:
ArrayList: every insert copies the array; the smaller the index, the more elements must be shifted and the worse the performance.
LinkedList: insertion performance is the same wherever the element goes.

Deletion at a specified index:
ArrayList: as with insertion, the array must be copied; performance gradually improves from the head toward the tail.
LinkedList: requires traversal; an index near the middle is the worst case, traversing up to half the list's elements.

A foreach loop is turned into an Iterator by the compiler, and the decompiled code shows an extra assignment, so the three loop styles rank: for loop > Iterator > foreach.
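The three loop forms can be sketched like this (list contents are arbitrary); the ranking above applies to index-addressable lists such as ArrayList:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// The three loop forms over an ArrayList. An indexed for loop avoids
// the iterator object entirely; for-each is compiled to an Iterator
// under the hood.
public class LoopDemo {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<Integer>();
        for (int i = 0; i < 5; i++) list.add(i);

        int sumFor = 0;
        for (int i = 0; i < list.size(); i++) {   // indexed for loop
            sumFor += list.get(i);
        }

        int sumIter = 0;
        for (Iterator<Integer> it = list.iterator(); it.hasNext(); ) { // explicit iterator
            sumIter += it.next();
        }

        int sumEach = 0;
        for (int v : list) {                       // for-each: compiled to an Iterator
            sumEach += v;
        }

        System.out.println(sumFor + " " + sumIter + " " + sumEach);
    }
}
```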
Map:

Properties → Hashtable → Dictionary, Map.
HashMap → AbstractMap → Map.
TreeMap → AbstractMap → Map.
LinkedHashMap → HashMap → AbstractMap → Map.

Hashtable does not allow null keys or values; HashMap does. HashMap principle: apply a hash algorithm to the key, then map the hash value to a memory address and fetch the key's data there directly. The underlying data structure is an array, and the memory address is the array's subscript index. Why HashMap is fast:
The hash algorithm is efficient (a native method plus bit operations). Mapping the hash value to a memory address (array index) is efficient (a bitwise AND of the hash value and the array length). The value is then fetched directly at that memory address (array index). The hash and index-lookup source code:
Hash algorithm:

int hash = hash(key.hashCode()); // compute the hash value of the key

public native int hashCode(); // can be overridden -- performance-critical, so an overridden hashCode must minimize collisions

static int hash(int h) { // bit-based spreading (JDK 6)
    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
}
Finding the memory address (array index):

int i = indexFor(hash, table.length);

static int indexFor(int h, int length) {
    return h & (length - 1);
}

Conditions for keeping HashMap fast:
The hashCode() implementation should minimize collisions: with few collisions, HashMap operations are almost random array access. With many collisions, the map degenerates into several linked lists, lookups become list traversals, and performance is poor. A reasonable hashCode can be generated directly with the methods an IDE such as Eclipse provides, or with third-party libraries such as Apache Commons. Capacity parameter: when a HashMap expands, it traverses the whole map and recomputes the position of every entry in the new array, so expansion should be avoided by estimating the approximate capacity at initialization. Load factor = number of elements / internal array size; the default is 0.75, and it should not exceed 1, since that invites collisions. The HashMap table structure is actually an array of linked lists:
Entry1 (each Entry holds key, value, next, and hash)
Entry2
...
EntryN → EntryN1 → EntryN2 (the linked list formed by hash collisions)

LinkedHashMap: a HashMap that maintains element order by adding before and after references to each Entry. Two orderings are available: the order in which elements entered the collection, or the order in which they are accessed. TreeMap: maintains a sorted order based on the keys (determined by a Comparator or Comparable). Internally it is a red-black tree, a balanced search tree that in practice outperforms a strictly balanced binary tree; lookup, insertion, and deletion all run in O(log n) time.
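The access-ordered variant of LinkedHashMap can be sketched as follows (keys are arbitrary); the constructor's third argument selects access order:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LinkedHashMap with accessOrder=true reorders entries on each get(),
// moving the accessed entry to the end of the iteration order -- the
// basis of simple LRU caches.
public class AccessOrderDemo {
    public static void main(String[] args) {
        Map<String, Integer> map =
                new LinkedHashMap<String, Integer>(16, 0.75f, true); // access order
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.get("a"); // "a" moves to the end of the iteration order
        System.out.println(map.keySet());
    }
}
```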
Set:

HashSet, LinkedHashSet, and TreeSet are just wrappers around maps; all operations are delegated to the corresponding internal map object.

NIO:
Unlike stream I/O, NIO is block-based. Its two most important components are the buffer (Buffer) and the channel (Channel). A channel represents the source or destination of buffered data and is the interface for reading data into, or writing data out of, a buffer. Applications cannot read or write a channel directly; they must go through a buffer. A file-copy example with NIO:
public static void nioCopyFile(String resource, String destination) throws IOException {
    FileInputStream fis = new FileInputStream(resource);
    FileOutputStream fos = new FileOutputStream(destination);
    FileChannel readChannel = fis.getChannel();    // channel for reading the file
    FileChannel writeChannel = fos.getChannel();   // channel for writing the file
    ByteBuffer buffer = ByteBuffer.allocate(1024); // data buffer
    while (true) {
        buffer.clear();
        int len = readChannel.read(buffer);        // read a block into the buffer
        if (len == -1) {                           // end of file
            break;
        }
        buffer.flip();                             // switch the buffer to read mode
        writeChannel.write(buffer);
    }
    readChannel.close();
    writeChannel.close();
}
A Buffer has three important parameters: position, capacity, and limit. When flip() is called, the buffer switches from write mode to read mode: the limit is set to the current position and the position is reset to 0.

Performance comparison of the three file-copy approaches: buffer-based I/O is roughly twice as fast as plain stream-based I/O, and mapping the file into memory is faster still, by about an order of magnitude.

Development and Optimization of Parallel Programs

Future pattern (useful when a large method contains some time-consuming smaller methods, which can be handled with this pattern):
The core idea is to remove the main thread's waiting time, so that time which would otherwise be spent waiting can be used to handle other business, making full use of computing resources. An implementation of the Future pattern is built into the JDK's concurrency package; the key is the call() method of the Callable interface, which is overridden to define the business logic.
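A minimal sketch using the JDK's built-in support (the task and its sleep time are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Future pattern with the JDK concurrency package: submit the slow
// work as a Callable, keep doing other business, then block on get()
// only when the result is actually needed.
public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Integer> future = pool.submit(new Callable<Integer>() {
            public Integer call() throws Exception { // business logic goes here
                Thread.sleep(100);                   // simulate slow work
                return 42;
            }
        });
        System.out.println("doing other work while the task runs...");
        System.out.println("result: " + future.get()); // blocks until ready
        pool.shutdown();
    }
}
```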
Master-Worker pattern (can be used, for example, to prepare data before a data migration, with multiple workers collecting data and computing results asynchronously): the Master process is the primary process; it maintains a queue of Worker processes, a queue of subtasks, and a result set. Worker processes take subtasks from the task queue and write the subtasks' results into the result set. The Fork/Join framework (ForkJoinPool) can be used for this.
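A minimal Fork/Join sketch of the split-and-merge idea (the range and threshold are arbitrary):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Master-Worker via ForkJoinPool: a task splits itself into subtasks,
// worker threads compute the partial results, and join() merges them
// -- here, summing the integers 1..1000.
public class ForkJoinDemo {
    static class SumTask extends RecursiveTask<Long> {
        final int from, to;
        SumTask(int from, int to) { this.from = from; this.to = to; }

        protected Long compute() {
            if (to - from <= 100) {               // small enough: compute directly
                long sum = 0;
                for (int i = from; i <= to; i++) sum += i;
                return sum;
            }
            int mid = (from + to) / 2;            // otherwise split in half
            SumTask left = new SumTask(from, mid);
            SumTask right = new SumTask(mid + 1, to);
            left.fork();                           // hand the left half to another worker
            return right.compute() + left.join();  // combine the partial results
        }
    }

    public static void main(String[] args) {
        Long sum = new ForkJoinPool().invoke(new SumTask(1, 1000));
        System.out.println(sum);
    }
}
```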
Producer-Consumer pattern:

Structure: producers → memory buffer → consumers. Producer: submits user requests, extracts user tasks, and places them into the memory buffer.
Consumer: extracts tasks from the memory buffer and processes them.
Memory buffer (BlockingQueue): caches the tasks or data submitted by producers, for consumers to use.
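A minimal sketch with ArrayBlockingQueue as the memory buffer (item count and capacity are arbitrary):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Producer-Consumer with a BlockingQueue as the memory buffer:
// put() blocks when the buffer is full and take() blocks when it is
// empty, so producer and consumer need no explicit locking of their own.
public class ProducerConsumerDemo {
    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<Integer> buffer = new ArrayBlockingQueue<Integer>(10);

        Thread producer = new Thread(new Runnable() {
            public void run() {
                try {
                    for (int i = 1; i <= 5; i++) {
                        buffer.put(i);            // blocks if the buffer is full
                    }
                } catch (InterruptedException ignored) { }
            }
        });

        producer.start();
        int sum = 0;
        for (int i = 0; i < 5; i++) {
            sum += buffer.take();                 // blocks until a task arrives
        }
        producer.join();
        System.out.println("consumed sum = " + sum);
    }
}
```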
