When you create an Oracle database, the Database Configuration Assistant (DBCA) wizard offers an option that determines the database's connection mode. In Oracle 9i and 10g there are two modes: a dedicated server connection and a shared server connection. Let's sort out the differences between the two. In dedicated server mode, each time a client connects to the Oracle server, the listener receives the request and spawns a new server process dedicated to servicing that connection. Every client connection therefore gets its own server process, a one-to-one mapping. An important feature of this mode is that the UGA (User Global Area) is stored inside the PGA (Program Global Area), which reflects the fact that each user's session memory is allocated per process.
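For reference, a client can also request a particular mode per connection. The `tnsnames.ora` fragment below is a sketch: the alias, host, and service name are placeholders, but the `SERVER=DEDICATED` clause is the standard way to ask for a dedicated server process even when the instance is configured for shared server.

```
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
      (SERVER = DEDICATED)   # use SHARED to request a shared server
    )
  )
```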
Shared server mode, by contrast, resembles the connection-pool concept commonly used when writing application code. In this mode, a batch of server processes is created when the database instance starts, and these processes are managed as a pool; the initial pool size can be set manually when the database is created. When a client connects, the listener first accepts the request, then hands the client off to a process called a dispatcher. The dispatcher places the client's request on a request queue in the SGA (System Global Area); an idle process from the shared server pool picks the request up and handles it, then places the result on the corresponding response queue in the SGA. The dispatcher reads the result from that queue and returns it to the client. The advantage of this mode is that the number of server processes can be capped, so the server is unlikely to run out of memory because of an excessive number of connections. The drawback is the added complexity: each request must pass through the dispatcher and the request/response queues, which can slow the response to any individual request.
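The dispatcher-and-queue flow described above can be sketched as a small simulation. This is purely illustrative: the names (`dispatcher`, `shared_server`, the queues) are hypothetical stand-ins for Oracle's internal components, not a real Oracle API. A fixed pool of two "server processes" services requests from three "clients" through a shared request queue and per-client response queues.

```python
import queue
import threading

request_queue = queue.Queue()   # stands in for the request queue in the SGA
response_queues = {}            # per-client response queues in the "SGA"
results = []                    # what each client eventually receives

def shared_server(worker_id):
    """One server process from the shared pool: take a request, respond."""
    while True:
        client_id, sql = request_queue.get()
        if client_id is None:   # shutdown sentinel
            break
        response_queues[client_id].put(f"result of {sql!r} via server {worker_id}")

def dispatcher(client_id, sql):
    """Dispatcher: enqueue the client's request, then wait for its response."""
    response_queues[client_id] = queue.Queue()
    request_queue.put((client_id, sql))
    return response_queues[client_id].get()

# Start a fixed pool of two shared servers.
pool = [threading.Thread(target=shared_server, args=(i,)) for i in range(2)]
for t in pool:
    t.start()

# Three clients share the two-process pool.
for cid in ("A", "B", "C"):
    results.append(dispatcher(cid, "SELECT 1 FROM dual"))

# Shut the pool down.
for _ in pool:
    request_queue.put((None, None))
for t in pool:
    t.join()

print(results)
```

Note that the pool size stays at two no matter how many clients arrive, which is exactly the memory-control property the shared server mode provides, at the cost of every request passing through the extra queueing step.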
In short, during development the first mode, dedicated server, is usually the better choice: there is less machinery in the middle, and the number of connections during development is small. In a production environment where multiple applications use a single database at the same time, the second mode may be better, because if 1,000 or 10,000 connection requests arrive at once, a dedicated-server database would have to create that many server processes simultaneously. Of course, the decision should still be based on the actual situation; neither mode is absolutely better than the other.
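Whichever mode you choose, you can check which mode a given session is actually using by querying the `V$SESSION` view: its `SERVER` column shows `DEDICATED` for dedicated connections and `SHARED` (or `NONE` while idle) for shared-server connections. A quick sketch:

```sql
SELECT username, server
  FROM v$session
 WHERE username IS NOT NULL;
```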