A talk about understanding MySQL InnoDB
Introduction:
The InnoDB engine is an important storage engine for MySQL. Compared with other storage engines, InnoDB supports ACID-compliant transactions (like PostgreSQL) as well as referential integrity (foreign keys). Innobase now distributes InnoDB under a dual-licensing model. Since MySQL 5.5.5, InnoDB has been the default storage engine.
Features:
1. Better transaction support: the four transaction isolation levels and multi-version (MVCC) reads are supported.
2. Row-level locking: implemented through indexes; a full table scan still takes a table-level lock, and watch out for the impact of gap locks.
3. Read-write blocking depends on the transaction isolation level.
4. Very efficient caching: both indexes and data can be cached.
5. The table is stored as a clustered index on the primary key, organized as a balanced (B+) tree.
6. Every secondary index stores the primary key value of the row.
Applicable scenarios:
1. Workloads that need transaction support (see the JDBC sketch after this list).
2. High-concurrency workloads, for which row-level locking adapts well, provided queries are done through indexes.
3. Scenarios where data is updated frequently.
4. High data-consistency requirements.
5. Machines with plenty of memory, where InnoDB's stronger caching can improve memory utilization and reduce disk I/O as much as possible.
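As a minimal illustration of the transaction and locking behavior above, here is a JDBC sketch. The connection URL, credentials, and the account table are hypothetical placeholders; it assumes an InnoDB table with an index on id so the UPDATE locks only the matched row.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class InnoDbTransactionDemo {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection details; adjust URL, user, and password for your environment.
        String url = "jdbc:mysql://localhost:3306/test";
        try (Connection conn = DriverManager.getConnection(url, "root", "password")) {
            conn.setAutoCommit(false);                                            // start an explicit transaction
            conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ); // InnoDB's default level
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE account SET balance = balance - ? WHERE id = ?")) {
                ps.setInt(1, 100);
                ps.setInt(2, 1);
                ps.executeUpdate();   // row-level lock via the index on id
                conn.commit();        // make the change durable (the D in ACID)
            } catch (SQLException e) {
                conn.rollback();      // atomicity: undo the whole transaction on failure
                throw e;
            }
        }
    }
}
```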
Fundamentals of MySQL master-slave synchronization
MySQL supports one-way, asynchronous replication, in which one server acts as the primary (master) server and one or more other servers act as slave servers.
MySQL replication is based on the primary server recording all changes to its databases in the binary log, so binary logging must be enabled on the primary in order to replicate. Each slave receives from the primary the changes that the primary has written to its log.
When a slave connects to the primary, it tells the primary the position in the log up to which it last read successfully. The slave then receives any updates that have happened since that point, applies the same updates locally, and then blocks and waits for the primary to notify it of new updates. Performing a backup on a slave does not interfere with the primary, which can continue to process updates during the backup.
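To see the binary-log coordinates this describes, you can query the servers directly. Below is a minimal JDBC sketch (host, user, and password are placeholders): SHOW MASTER STATUS on the primary reports the binlog file and position it is currently writing, and SHOW SLAVE STATUS on a replica reports how far it has read and applied.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReplicationStatusDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; point this at the primary server.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://primary-host:3306/", "repl_admin", "password");
             Statement st = conn.createStatement();
             // The primary reports which binlog file and position it is currently writing.
             ResultSet rs = st.executeQuery("SHOW MASTER STATUS")) {
            if (rs.next()) {
                System.out.println("binlog file: " + rs.getString("File")
                        + ", position: " + rs.getLong("Position"));
            }
        }
        // On a replica, "SHOW SLAVE STATUS" reports how far it has read and applied the
        // primary's log (columns such as Master_Log_File and Read_Master_Log_Pos).
    }
}
```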
A notable feature of the Java language is its garbage collection mechanism, which everyone knows about, so the concept itself is not introduced here. The focus is on two questions: when does garbage collection start, and what does it do to which objects?
When the GC starts:
All of the collector types are based on generational technology, so it's important to know how the objects are divided by their life cycle.
Young generation: divided into three regions, Eden and two small survivor spaces, the survivor spaces being distinguished by role as "from" and "to". The vast majority of objects are allocated in Eden, and objects that survive a garbage collection move to a survivor space. Most garbage collection takes place in the young generation.
Old generation: stores objects from the young generation that have survived multiple collection cycles; some large allocations may also be placed directly in the old generation.
Permanent generation: stores classes, methods, and their descriptive metadata, and essentially does not produce garbage collection.
With this background, we can answer when GC starts:
After Eden fills up, a Minor GC runs (reclaiming memory from the young-generation space is called a Minor GC). A Full GC runs when the objects being promoted to the old generation are larger than the old generation's remaining space, or, even when they are smaller, when it is forced by the HandlePromotionFailure setting.
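A small sketch that makes this observable, assuming a HotSpot JVM started with a small young generation and GC logging enabled (for example -Xms64m -Xmx64m -Xmn16m -XX:+PrintGCDetails on JDK 8); allocating past Eden's capacity should produce Minor GC entries in the log.

```java
public class MinorGcDemo {
    public static void main(String[] args) {
        // Example flags (JDK 8 HotSpot): -Xms64m -Xmx64m -Xmn16m -XX:+PrintGCDetails
        // Keep allocating 1 MB arrays; once Eden fills up, the JVM performs a Minor GC.
        // Because each reference is dropped immediately, most objects die young and are
        // reclaimed; long-lived objects would instead be promoted to the old generation.
        for (int i = 0; i < 200; i++) {
            byte[] chunk = new byte[1024 * 1024]; // 1 MB, allocated in Eden
        }
        System.out.println("done; check the GC log for Minor GC (Allocation Failure) entries");
    }
}
```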
Which objects are the targets of garbage collection:
Objects that cannot be reached from the GC roots and that, after the first marking and cleanup pass, still have not been resurrected.
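The "resurrection" mentioned here is an object rescuing itself in finalize(). A minimal sketch of that behavior on a typical HotSpot JVM follows; note that System.gc() is only a request, and finalize() runs at most once per object, so the second rescue attempt fails.

```java
public class FinalizeEscapeDemo {
    private static FinalizeEscapeDemo saveHook = null;

    @Override
    protected void finalize() throws Throwable {
        super.finalize();
        System.out.println("finalize() executed");
        saveHook = this; // resurrect: make the object reachable again from a GC root
    }

    public static void main(String[] args) throws InterruptedException {
        saveHook = new FinalizeEscapeDemo();

        // First attempt: the object is unreachable, finalize() runs and rescues it.
        saveHook = null;
        System.gc();
        Thread.sleep(500); // give the low-priority finalizer thread time to run
        System.out.println(saveHook != null ? "still alive" : "collected");

        // Second attempt: finalize() is never called twice, so the object is collected.
        saveHook = null;
        System.gc();
        Thread.sleep(500);
        System.out.println(saveHook != null ? "still alive" : "collected");
    }
}
```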
What it does:
Mainly cleaning up dead objects and reclaiming and tidying memory. The details expand as follows.
Types of garbage collectors:
Serial (serial GC) collector
The Serial collector is a young-generation collector that runs single-threaded and uses the copying algorithm. It must suspend all other worker (user) threads while collecting. It is the default young-generation collector in JVM client mode. In a single-CPU environment, the Serial collector, having no thread-interaction overhead, naturally achieves the highest single-threaded collection efficiency.
ParNew (parallel GC) collector
The ParNew collector is essentially a multithreaded version of the Serial collector; apart from using multiple threads for garbage collection, it behaves the same as the Serial collector.
Parallel Scavenge (parallel collection GC) collector
The Parallel Scavenge collector is also a young-generation collector, also uses the copying algorithm, and is also a parallel multithreaded collector. What distinguishes it is its focus: collectors such as CMS aim to minimize the pause time of user threads during garbage collection, whereas the Parallel Scavenge collector aims for a controllable throughput. Throughput = program run time / (program run time + garbage collection time); if the virtual machine runs for a total of 100 minutes and garbage collection takes 1 minute, the throughput is 99%.
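To make the throughput formula concrete, the standard java.lang.management API reports accumulated collection time and JVM uptime, so an approximate throughput can be computed at runtime. A minimal sketch (which collector names it prints depends on the flags the JVM was started with):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class ThroughputDemo {
    public static void main(String[] args) {
        long uptimeMs = ManagementFactory.getRuntimeMXBean().getUptime();
        long gcTimeMs = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                    + " collections, " + gc.getCollectionTime() + " ms");
            gcTimeMs += gc.getCollectionTime(); // may be -1 if the collector does not report it
        }
        // Throughput = program run time / (program run time + GC time)
        double throughput = (uptimeMs - gcTimeMs) / (double) uptimeMs;
        System.out.printf("approximate throughput: %.2f%%%n", throughput * 100);
    }
}
```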
Serial Old (serial GC) collector
Serial Old is the old-generation version of the Serial collector; it also collects single-threaded, using the mark-compact algorithm. It is mainly used by the virtual machine in client mode.
Parallel Old (parallel GC) collector
Parallel Old is the old-generation version of the Parallel Scavenge collector, using multiple threads and the mark-compact algorithm.
CMS (concurrent GC) collector
The CMS (Concurrent Mark Sweep) collector is a collector that targets the shortest recovery pause time.
The CMS collector is implemented on the mark-sweep algorithm, and the whole collection process is broadly divided into 4 steps:
①. Initial mark (CMS initial mark)
②. Concurrent mark (CMS concurrent mark)
③. Remark (CMS remark)
④. Concurrent sweep (CMS concurrent sweep)
Of these, the initial mark and remark steps still need to pause the other user threads. The initial mark simply marks the objects that the GC Roots can directly reach and is fast; the concurrent marking phase is the GC Roots tracing stage that determines whether objects are alive. The remark phase fixes up the mark records of objects whose marks changed because the user program kept running during concurrent marking; its pause is slightly longer than the initial mark, but much shorter than the concurrent marking phase.
Because the collector threads can work alongside the user threads during the two longest phases, concurrent marking and concurrent sweeping, the CMS collector's memory reclamation is, overall, performed concurrently with the user threads.
The advantages of the CMS collector are concurrent collection and low pauses, but CMS is still far from perfect and has three major shortcomings:
1. The CMS collector is very sensitive to CPU resources. During the concurrent phases, although user threads are not paused, the collector consumes CPU resources, which slows the application down and lowers total throughput. The number of collection threads CMS starts by default is (number of CPUs + 3) / 4.
2. The CMS collector cannot handle floating garbage, and a "Concurrent Mode Failure" may occur, which triggers another Full GC. Because user threads are still running during the concurrent sweep phase, new garbage keeps being generated as the program runs; garbage that appears after the marking pass cannot be handled in the current collection and must wait for the next GC. This garbage is called "floating garbage". Also, because user threads still need to run during collection, enough memory must be reserved for them, so the CMS collector cannot wait until the old generation is almost full before collecting, as other collectors do; it has to keep part of the space in reserve for the program to use while collection runs concurrently.
By default, the CMS collector is activated when 68% of the old generation is in use; the trigger percentage can be set with the -XX:CMSInitiatingOccupancyFraction parameter, reducing the number of collections to improve performance. A "Concurrent Mode Failure" occurs when the memory reserved during a CMS run is not enough for the program's other threads; the virtual machine then starts its fallback plan, temporarily enabling the Serial Old collector to redo the old-generation collection, so the pause is very long.
Therefore, setting -XX:CMSInitiatingOccupancyFraction too high easily leads to "Concurrent Mode Failure" and reduced performance.
3. The last drawback: CMS is based on the mark-sweep algorithm, which leaves a lot of space fragmentation behind after collection. Too much fragmentation causes trouble for object allocation; for example, a large object may not find a contiguous block of free space, and a Full GC has to be triggered early. To address this, the CMS collector provides the -XX:+UseCMSCompactAtFullCollection switch, which adds a defragmentation (compaction) step after a Full GC, and the -XX:CMSFullGCsBeforeCompaction parameter sets how many Full GCs run before one is followed by a defragmentation pass.
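As a rough illustration of the flags just mentioned: one might start a JDK 8 HotSpot JVM with, for example, -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=5 (the numeric values here are only examples, and CMS was removed in JDK 14), and then confirm at runtime which collectors are active:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;

public class CmsConfigDemo {
    public static void main(String[] args) {
        // Print the -XX flags this JVM was actually started with.
        RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
        System.out.println("JVM arguments: " + runtime.getInputArguments());

        // With CMS enabled on JDK 8, the names typically reported are
        // "ParNew" (young generation) and "ConcurrentMarkSweep" (old generation).
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("active collector: " + gc.getName());
        }
    }
}
```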
Garbage collection algorithms:
Reference counting (see the sketch after this list)
Mark-sweep
Copying
Mark-compact
Generational collection
Region-based (partitioned) collection
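As a quick illustration of why HotSpot's collectors use reachability analysis from GC roots rather than plain reference counting: two objects that reference each other but are unreachable from any GC root can still be reclaimed, which pure reference counting cannot do. A minimal sketch (whether and when System.gc() actually collects is up to the JVM; run with -XX:+PrintGCDetails to observe):

```java
public class CyclicReferenceDemo {
    private Object ref;
    private final byte[] payload = new byte[4 * 1024 * 1024]; // make the objects easy to spot in a GC log

    public static void main(String[] args) {
        CyclicReferenceDemo a = new CyclicReferenceDemo();
        CyclicReferenceDemo b = new CyclicReferenceDemo();
        a.ref = b;   // a and b now reference each other
        b.ref = a;

        a = null;    // drop the only GC-root references
        b = null;

        // Under reference counting, a and b would leak (their counts never reach zero),
        // but reachability analysis from the GC roots finds them unreachable,
        // so a collection here can reclaim them.
        System.gc();
    }
}
```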
Class loading process in a virtual machine
Loading:
Obtain the binary byte stream of a class through its fully qualified name, convert the static storage structure represented by that byte stream into the runtime data structures of the method area, and generate a java.lang.Class object in memory that represents the class and serves as the access entry for the class's data in the method area.
Verification:
Ensure that the information contained in the class file's byte stream meets the requirements of the current virtual machine and does not endanger the virtual machine's own security.
Preparation:
Formally allocate memory for class variables (static fields) and set their default initial values.
Resolution:
The process in which the virtual machine replaces symbolic references in the constant pool with direct references.
Initialization:
The last step of class loading; at this stage the Java program code defined in the class really starts to execute.
Using:
The class is used according to the behavior defined by the program code you wrote.
Unloading:
Unloading is handled by the GC and is generally not discussed. A small sketch of the difference between loading and initialization follows.
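The sketch below contrasts loading with initialization; the nested Config class and its fields are made up for illustration, and it assumes the classes live in the default package. ClassLoader.loadClass only loads the class, while Class.forName (with initialize = true) also runs the initialization phase, so the static initializer block fires at a different point.

```java
public class ClassLoadingDemo {

    static class Config {
        static final int DEFAULT = 42;  // compile-time constant: reading it does not force initialization
        static int loadedAt;            // given its default value (0) during the preparation phase
        static {                        // runs during the initialization phase
            loadedAt = (int) (System.currentTimeMillis() % 100000);
            System.out.println("Config initialized");
        }
    }

    public static void main(String[] args) throws Exception {
        ClassLoader cl = ClassLoadingDemo.class.getClassLoader();

        // loadClass() performs loading but NOT initialization,
        // so "Config initialized" is not printed yet.
        Class<?> loadedOnly = cl.loadClass("ClassLoadingDemo$Config");
        System.out.println("loaded: " + loadedOnly.getName());

        // Class.forName(..., true, ...) triggers initialization, so the static block runs here.
        Class.forName("ClassLoadingDemo$Config", true, cl);

        // Accessing a non-constant static field would also trigger initialization (it already ran above).
        System.out.println("loadedAt = " + Config.loadedAt);
    }
}
```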
Tell me about the scope and life cycle of beans in Spring
Scope
Singleton
There is only one shared bean instance in the Spring IoC container; no matter how many beans reference it, they always point to the same object. Singleton is the default scope in Spring.
Prototype
Each time a prototype-scoped bean is obtained from the Spring container, the container creates a new instance of the bean, each with its own properties and state, whereas a singleton has only one object globally.
Request
Within a single HTTP request, the container returns the same bean instance. A new bean is created for each different HTTP request, and the bean is valid only within the current request.
Session
Within a single HTTP session, the container returns the same bean instance. A new instance is created for each different session, and the bean is valid only within the current session.
Global Session:
Within a global HTTP session, the container returns the same bean instance; this scope is only valid when a portlet context is used. (A configuration sketch of the singleton and prototype scopes follows this list.)
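A minimal Java-config sketch of the singleton and prototype scopes; the Counter and AppConfig classes are made up for illustration, and it assumes spring-context is on the classpath.

```java
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;

public class ScopeDemo {

    static class Counter { }   // trivial bean used to compare instances

    @Configuration
    static class AppConfig {
        @Bean
        public Counter singletonCounter() {   // singleton is the default scope
            return new Counter();
        }

        @Bean
        @Scope("prototype")                   // a new instance per lookup
        public Counter prototypeCounter() {
            return new Counter();
        }
    }

    public static void main(String[] args) {
        AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(AppConfig.class);
        // Same object every time for the singleton bean.
        System.out.println(ctx.getBean("singletonCounter") == ctx.getBean("singletonCounter")); // true
        // A fresh object on every lookup for the prototype bean.
        System.out.println(ctx.getBean("prototypeCounter") == ctx.getBean("prototypeCounter")); // false
        ctx.close();
    }
}
```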
Life cycle
Instantiate the bean, which is what we usually call "new".
Configure the instantiated bean according to the Spring context; this is the IoC injection.
If the bean implements the BeanNameAware interface, Spring invokes the setBeanName(String beanId) method it implements, passing in the ID of the bean from the Spring configuration file.
If the bean implements the BeanFactoryAware interface, Spring invokes the setBeanFactory() method it implements, passing in the Spring factory itself (which can be used to obtain other beans).
If the bean implements the ApplicationContextAware interface, Spring invokes the setApplicationContext(ApplicationContext) method, passing in the Spring context.
If a BeanPostProcessor is associated with the bean, its postProcessBeforeInitialization(Object obj, String s) method is called; BeanPostProcessor is often used to modify the bean's content.
If an init-method attribute is configured for the bean in the Spring configuration file, the configured initialization method is invoked automatically.
If a BeanPostProcessor is associated with the bean, its postProcessAfterInitialization(Object obj, String s) method is called; because it runs at the end of bean initialization, it can also be applied to in-memory or caching techniques.
When the bean is no longer needed, it goes through the destruction phase; if the bean implements the DisposableBean interface, its destroy() method is invoked.
Finally, if a destroy-method attribute is configured for this bean in the Spring configuration, the configured destruction method is invoked automatically. (A lifecycle sketch follows this list.)
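A minimal sketch of a few of the callbacks above, written with Java config instead of the XML file the steps describe; the Worker class is made up, and the initMethod/destroyMethod attributes of @Bean play the role of the init-method/destroy-method attributes.

```java
import org.springframework.beans.factory.BeanNameAware;
import org.springframework.beans.factory.DisposableBean;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

public class LifecycleDemo {

    static class Worker implements BeanNameAware, DisposableBean {
        @Override
        public void setBeanName(String name) {   // BeanNameAware callback during initialization
            System.out.println("setBeanName: " + name);
        }
        public void init() {                     // corresponds to init-method in XML
            System.out.println("init-method called");
        }
        @Override
        public void destroy() {                  // DisposableBean callback on container shutdown
            System.out.println("DisposableBean.destroy called");
        }
        public void cleanup() {                  // corresponds to destroy-method in XML
            System.out.println("destroy-method called");
        }
    }

    @Configuration
    static class AppConfig {
        @Bean(initMethod = "init", destroyMethod = "cleanup")
        public Worker worker() {
            return new Worker();
        }
    }

    public static void main(String[] args) {
        AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(AppConfig.class);
        ctx.close();  // triggers DisposableBean.destroy() and then the configured destroy-method
    }
}
```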
Please take 30 minutes to study this article carefully and systematically master these Java interview analysis techniques.