The difference between a dedicated Oracle server and a shared server: when you create an Oracle database, you will see this option in the database creation assistant wizard as the database's connection mode. Oracle9i and 10g offer two connection modes, one is the **dedicated server connection (dedicated server)**, the other is the **shared server connection (shared server)**. Next we will go through the differences between the two connection methods.

Dedicated server mode means that each time a client connects to Oracle, the listener on the Oracle server receives the connection request and spawns a new server process to serve that session. Every client connection therefore gets its own process, a one-to-one mapping. An important characteristic of this connection mode is that the UGA (User Global Area) is stored inside the PGA (Program Global Area), which reflects the fact that each user's session memory is allocated per process.

The shared server connection, by contrast, resembles the connection-pool concept commonly used in application programming. In this mode a pool of shared server processes is created when the instance starts, and these processes are managed as a pool; the number of processes in the initial pool can be set manually when the database is created. When a client connects, the listener first accepts the connection request and then hands the connection over to a dispatcher process, which communicates with the client from then on. The dispatcher places the client's requests in a request queue in the SGA (System Global Area); an idle process from the shared server pool picks up the request, processes it, and places the result in the corresponding response queue in the SGA; the dispatcher then reads the result from that queue and returns it to the client. The advantage of this connection mode is that the number of server processes is bounded, so a flood of connections is unlikely to exhaust the server's memory. The trade-off is that the added complexity of the dispatchers and request/response queues can reduce per-request efficiency.

In short, during the development phase the dedicated server mode may be the better choice, because there is less machinery in the middle and the number of connections is usually small during development. In a production environment where multiple applications use one database at the same time, the shared server mode may be better, because if 1,000 or 10,000 requests arrive at once, a database server that tries to establish 10,000 dedicated connections simultaneously will surely not hold up. Of course, the decision depends on the actual situation; neither mode is absolutely better than the other.
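
To make the distinction concrete, here is a minimal sketch (not from the original article) of how a client can ask for one mode or the other through the SERVER clause of an Oracle Net connect descriptor. The host, port, service name, and credentials are placeholders, and the python-oracledb driver is used purely for illustration; shared server mode also requires the instance to be configured for it (DISPATCHERS/SHARED_SERVERS initialization parameters).

```python
# Sketch: requesting dedicated vs. shared server handling via the SERVER
# clause of an Oracle Net connect descriptor. Host, port, service name and
# credentials below are placeholders, not values from the article.
import oracledb  # python-oracledb driver, used here only for illustration


def make_dsn(server_mode: str) -> str:
    """Build a connect descriptor asking for DEDICATED or SHARED mode."""
    return (
        "(DESCRIPTION="
        "(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost.example.com)(PORT=1521))"
        "(CONNECT_DATA=(SERVICE_NAME=orcl)"
        f"(SERVER={server_mode})))"
    )


# Dedicated mode: the listener spawns one server process just for this session.
dedicated_conn = oracledb.connect(
    user="scott", password="tiger", dsn=make_dsn("DEDICATED")
)

# Shared mode: the session is routed through a dispatcher to the shared
# server pool (the instance must have shared server enabled, and driver
# support for shared server may require thick mode / Oracle Client).
shared_conn = oracledb.connect(
    user="scott", password="tiger", dsn=make_dsn("SHARED")
)
```

The same SERVER=DEDICATED / SERVER=SHARED setting can equally be placed in a tnsnames.ora entry, so the choice of mode can be made per connection without changing application code.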