Earlier performance-testing posts mentioned a term that comes up often: connection pooling. This post briefly summarizes the role of the connection pool, and introduces the principles, roles, and advantages of connection pools and thread objects ...
One, Connection pool
1. What is a connection pool? Why do we need it?
Connection pooling lets multiple clients share and reuse cached connection objects that connect to the database.
Opening and closing a database connection is expensive; connection pooling keeps connection objects in a pool, which improves the performance of executing database commands. Multiple client requests can reuse the same connection object. Each time a client request is received,
the pool is searched for an idle connection object. If none is found, either the request is queued, or a new connection object is created in the pool (depending on how many connections already exist in the pool and how many the configuration supports).
Once a request has finished using a connection object, the object is put back into the pool and can then be reassigned to a queued waiting request (which request gets it depends on the scheduling algorithm used).
Because most requests reuse existing connection objects, connection pooling greatly reduces the time spent waiting for a database connection to be created, lowering the average connection time.
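To make the idea concrete, here is a minimal sketch of such a pool in Java. It is not a production pool: PooledConnection is an invented stand-in for a real java.sql.Connection, and the class names are chosen only for illustration.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy stand-in for a real java.sql.Connection.
class PooledConnection {
    private final int id;
    PooledConnection(int id) { this.id = id; }
    int id() { return id; }
}

// Minimal fixed-size pool: connections are pre-created and shared;
// acquire() queues the caller when no connection is idle.
class SimpleConnectionPool {
    private final BlockingQueue<PooledConnection> idle;

    SimpleConnectionPool(int size) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(new PooledConnection(i)); // create connections up front
        }
    }

    // Blocks until an idle connection is available, then hands it out.
    PooledConnection acquire() throws InterruptedException {
        return idle.take();
    }

    // Returns the connection to the pool so the next request can reuse it.
    void release(PooledConnection c) {
        idle.offer(c);
    }

    int idleCount() { return idle.size(); }
}
```

A real pool (for example, one managed by an application server) would add connection validation, acquisition timeouts, and growth up to a configured maximum.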
2. How do I use connection pooling?
Connection pooling is common in network-based enterprise applications, where the application server is responsible for creating connection objects, adding them to the pool, assigning connection objects to requests, reclaiming used connection objects, and putting them back into the pool.
When a network application opens a database connection, the application server pulls a connection object out of the pool; when the connection is closed, the application server puts the used connection object back into the pool.
PS: You can also use the JDBC 1.0/JDBC 2.0 API to obtain physical connections (physical connections) directly, but this is rare and mainly makes sense when the database only needs to be connected once, so no pooling is required.
3. How many connections can the connection pool handle? Who creates/releases the connection?
The maximum number of connections, minimum number of connections, maximum number of idle connections, and so on can all be configured by the server administrator. When the server starts, a fixed number of connection objects (the configured minimum) are created and added to the pool.
When client requests have consumed all of the connection objects, new requests cause new connection objects to be created, added to the pool, and assigned to those requests, up to the configured maximum number of connections.
The server also keeps watching the number of idle connection objects; when it detects more unused connections than the configured limit, it closes the idle connections, which are then garbage collected.
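The sizing rules above can be sketched as follows. This is an illustrative toy, not a real server's implementation: SizedPool and FakeConn are invented names, and a real pool would block or time out instead of returning null when exhausted.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Invented stand-in for a real connection object.
class FakeConn {}

// Sketch of min/max sizing: start with `min` connections, lazily grow
// to `max`, and evict idle connections above the minimum.
class SizedPool {
    private final int min;
    private final int max;
    private final Deque<FakeConn> idle = new ArrayDeque<>();
    private int total; // connections created so far (idle + in use)

    SizedPool(int min, int max) {
        this.min = min;
        this.max = max;
        for (int i = 0; i < min; i++) { idle.push(new FakeConn()); } // pre-create the minimum
        total = min;
    }

    synchronized FakeConn acquire() {
        if (!idle.isEmpty()) return idle.pop();          // reuse an idle connection
        if (total < max) { total++; return new FakeConn(); } // grow on demand
        return null; // pool exhausted: a real pool would queue or time out here
    }

    synchronized void release(FakeConn c) { idle.push(c); }

    // Close idle connections above the configured minimum.
    synchronized int evictIdleAboveMin() {
        int evicted = 0;
        while (idle.size() > min) { idle.pop(); total--; evicted++; }
        return evicted;
    }

    synchronized int totalCount() { return total; }
}
```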
4. Traditional connection pool vs manageable connection pool
Connection pooling is an open concept: any application can adopt it and manage connections in whatever way it wants. The concept covers creating, managing, and maintaining connection objects.
But as the scale of an application grows, it becomes increasingly difficult to manage connections without a robust pooling mechanism.
It is therefore necessary to build a robust, manageable connection pool.
PS: The connection pool content above is adapted from http://www.importnew.com/8179.html
Two, Threads & thread pools, connections & connection pools
Thread: the smallest unit of a program's execution flow; an entity within a process; a relatively independent, schedulable execution unit; the basic unit of scheduling and dispatch by the operating system.
Multithreading means that multiple threads can be created within one process to handle multiple tasks "simultaneously".
Thread pool: can be understood as a buffer. Frequently creating and destroying threads has a cost, so threads can be created in advance and kept alive instead of being destroyed immediately, serving many tasks in turn. This improves efficiency and also prevents the number of threads from growing without bound.
Connection: a link between one endpoint and another.
Connection pool: the same idea as a thread pool, but a connection pool can be built on top of multiple threads, implemented with multiple processes, or even be a single instance.
As an example: a server-side socket can listen for multiple client connections at the same time, so its implementation is a bit like a "connection pool". Clients sending data to the server concurrently over multiple ports can be viewed as multithreading,
and the server may have created n threads waiting to process/analyze the data the clients send at the same time, which is a "thread pool".
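The server-side analogy can be sketched with Java's standard ExecutorService: a fixed pool of worker threads handles many "client requests" without creating one thread per request. RequestServer and the request strings are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// A fixed pool of `workers` threads processes all requests; each request
// is handled by whichever pooled thread is free, then the thread is reused.
class RequestServer {
    static List<String> handleAll(List<String> requests, int workers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<String>> futures = new ArrayList<>();
        for (String req : requests) {
            // Submit the "request" as a task; no new thread is created per request.
            futures.add(pool.submit(() -> "handled:" + req));
        }
        List<String> results = new ArrayList<>();
        for (Future<String> f : futures) {
            results.add(f.get()); // wait for each result in submission order
        }
        pool.shutdown();
        return results;
    }
}
```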
Three, several states of the thread
A thread's state changes under certain conditions. A thread can be in the following states:
1. New state: A new Thread object was created.
2. Ready state (Runnable): after the thread object is created, another thread calls the object's start() method. A thread in this state is in the "runnable thread pool" and becomes eligible to run, waiting only for the CPU;
that is, a thread in the ready state has every resource it needs to run except the CPU.
3. Running state (Running): the thread in the ready state obtains the CPU and executes the program code.
4. Blocked state (Blocked): the thread gives up the CPU for some reason and temporarily stops running. It cannot move to the running state again until it first returns to the ready state.
There are three types of blocking:
①. Waiting block: the running thread executes the wait() method, releasing all the resources it holds, and the JVM places the thread in the "waiting pool". A thread in this state is not awakened automatically;
another thread must call notify() or notifyAll() to wake it.
②. Synchronization block: when a running thread tries to acquire an object's synchronization lock and the lock is held by another thread, the JVM places the thread in the "lock pool".
③. Other blocking: the running thread executes sleep() or join(), or issues an I/O request, and the JVM puts the thread into the blocked state. When sleep() times out, join() observes the awaited thread terminate (or times out),
or the I/O completes, the thread returns to the ready state.
5. Dead state (Dead): the thread finishes run() or exits it because of an exception, ending its life cycle.
The state transition diagram for thread changes is as follows:
PS: Acquiring an object's lock marker means gaining permission to use the object (to enter its critical section). A thread that has obtained all the resources it needs to run enters the ready state and needs only the CPU.
When wait() is called, the thread releases the lock marker it holds, so it must reacquire those resources before it can return to the ready state.
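The lifecycle above can be observed directly through java.lang.Thread.getState(). The sketch below (StateDemo is an invented name) shows a thread moving from NEW, to TIMED_WAITING while it sleeps, to TERMINATED after run() ends; the timing constants are arbitrary.

```java
// Observe three of the states described above via Thread.getState():
// NEW before start(), TIMED_WAITING while blocked in sleep(), TERMINATED after run() ends.
class StateDemo {
    static Thread.State[] observe() throws InterruptedException {
        Thread t = new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException e) { }
        });
        Thread.State before = t.getState();   // NEW: object created, start() not yet called
        t.start();
        Thread.sleep(50);                     // give the thread time to reach sleep()
        Thread.State during = t.getState();   // TIMED_WAITING: blocked inside sleep()
        t.join();                             // wait for run() to finish
        Thread.State after = t.getState();    // TERMINATED: life cycle over
        return new Thread.State[] { before, during, after };
    }
}
```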
The transitions are explained in detail below:
①. There are two ways to create a thread: inherit the Thread class, or implement the Runnable interface. Either way, when we new the object, the thread enters its initial (new) state;
②. When the object's start() method is invoked, the thread enters the ready state;
③. Once ready, when the thread is selected by the operating system and allocated a CPU time slice, it enters the running state;
④. After entering the running state, things get more complicated:
(1) When the run() method or the main() method ends, the thread enters the terminated state;
(2) When a thread calls its own sleep() method or another thread's join() method, it yields the CPU and enters the blocked state. This stops the current thread, but does not release the resources it holds
(that is, after sleep() is called, the thread keeps its "lock marker"). When sleep() ends or join() returns, the thread re-enters the runnable state and waits for the OS to allocate a CPU time slice again.
Typically, sleep() is used while waiting for a resource to become ready: a test finds the condition unmet, so the thread blocks for a period and then retests, until the condition is satisfied.
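The poll-and-sleep pattern just described might look like this sketch (PollingWaiter is an invented name); note that sleep() blocks the current thread without releasing any locks it holds.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Poll-and-sleep: test a condition, sleep briefly when it is not yet met,
// and retest until it is satisfied.
class PollingWaiter {
    static int waitUntil(AtomicBoolean ready, long intervalMs) throws InterruptedException {
        int attempts = 0;
        while (!ready.get()) {        // condition not met yet
            attempts++;
            Thread.sleep(intervalMs); // block without releasing any held locks
        }
        return attempts;              // how many times we had to sleep and retest
    }
}
```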
(3) When a thread calls the yield() method, it gives up the CPU time slice it currently holds and returns to the ready state, competing again with the other ready threads; the OS may then schedule it to run again.
Calling yield() is equivalent to the scheduler deciding that the thread has used enough of its time slice and switching to another thread. yield() only returns the current thread to the runnable state,
so a thread that calls yield() may well be scheduled to run again immediately.
(4) When a thread enters the runnable state (note: not yet running) and finds that the resource it wants to use is synchronized, and it fails to acquire the lock marker, it immediately enters the lock-pool state and waits for the lock marker
(the lock pool may already contain other threads waiting for the same lock; they wait in a first-come-first-served queue). Once the thread acquires the lock marker, it moves to the ready state and waits for the OS to allocate a CPU time slice.
(5) The suspend() and resume() methods: used as a pair, suspend() puts the thread into a blocked state from which it does not recover automatically; the matching resume() must be called for the thread to become runnable again.
Typically, suspend() and resume() are used when waiting for a result from another thread: a test finds the result not yet produced, so the thread is suspended, and when another thread produces the result it calls resume() to restore it. (Note: suspend() and resume() are deprecated in Java because they are deadlock-prone; wait()/notify() is the preferred mechanism.)
(6) The wait() and notify() methods: when a thread calls wait(), it enters the waiting queue (releasing all the resources it holds, unlike the blocked state) and is not awakened automatically;
another thread must call notify() or notifyAll() to wake it. Because notify() wakes only one thread, and we cannot be sure which one, the thread we actually need to wake may not be the one awakened;
in practice, notifyAll() is therefore usually used to wake all the waiting threads. An awakened thread enters the lock pool and waits for the lock marker.
wait() puts the thread into a blocked state, and it comes in two forms:
one takes a timeout in milliseconds as a parameter, the other takes no parameters. With the former, the thread re-enters the executable (ready) state when the matching notify() is called or the timeout elapses; with the latter, the matching notify() must be called.
When wait() is called, the thread releases the "lock marker" it holds, so that other synchronized data in the thread's object can be used by other threads.
Because wait() and notify() operate on the object's lock marker, they must be called inside a synchronized method or synchronized block.
A call from a non-synchronized method or block compiles, but throws IllegalMonitorStateException at run time.
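The wait()/notify() rules above can be sketched as a simple result holder (ResultHolder is an invented name): wait() is called inside a synchronized method while holding the object's lock, releases that lock while waiting, and the waiting thread is woken by notifyAll() from another thread.

```java
// One thread blocks in take() until another thread supplies a result via put().
class ResultHolder {
    private Object result;

    // Called while holding this object's lock (synchronized method).
    synchronized Object take() throws InterruptedException {
        while (result == null) {
            wait(); // releases the lock while waiting; reacquires it when notified
        }
        Object r = result;
        result = null;
        return r;
    }

    synchronized void put(Object r) {
        result = r;
        notifyAll(); // wake waiting threads; they re-enter the lock pool
    }
}
```

The `while` loop around wait() guards against spurious wakeups and against notifyAll() waking a thread whose condition is still unmet.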
PS: The thread states section is reproduced from Open Source China: "Several states of threads".
Performance Testing (ix) connection pooling and threading