Sybase configuration, sp_configure

Sybase configuration parameters

-- Configure the maximum number of online engines
sp_configure 'max online engines', 4
go
-- Configure the number of engines started at boot
sp_configure 'number of engines at startup', 4
go
-- Configure maximum memory (in 2K pages)
sp_configure 'max memory', 2097151
go
-- Allocate the stored procedure cache
sp_configure 'procedure cache size', 102400
go
-- Configure the default data cache
sp_cacheconfig 'default data cache', '700M'
go
-- Create a 16K large-I/O pool in the default data cache
sp_poolconfig 'default data cache', '200M', '16K'
go
-- Network packet size
sp_configure 'max network packet size', 1024
go
-- Maximum number of user connections
sp_configure 'number of user connections', 500
go
-- Maximum number of open objects
sp_configure 'number of open objects', 9000
go
-- Maximum number of open indexes
sp_configure 'number of open indexes', 10000
go
-- Maximum number of locks
sp_configure 'number of locks', 100000
go
-- Increase additional network memory
-- sp_configure 'additional network memory', 1024
go
-- Lock shared memory
sp_configure 'lock shared memory', 512
go
-- Optimizing tempdb
select dbid, name, segmap
from sysusages, sysdevices
where sysdevices.low <= sysusages.size + vstart
  and sysdevices.high >= sysusages.size + vstart - 1
  and dbid = 2
  and (status = 2 or status = 3)
go
use tempdb
go
sp_dropsegment 'default', tempdb, master
go
sp_dropsegment 'logsegment', tempdb, master
go
select dbid, name, segmap
from sysusages, sysdevices
where sysdevices.low <= sysusages.size + vstart
  and sysdevices.high >= sysusages.size + vstart - 1
  and dbid = 2
  and (status = 2 or status = 3)
go
sp_cacheconfig tempdb_cache, '100M'
go
sp_poolconfig tempdb_cache, '50M', '16K'
go
sp_bindcache 'tempdb_cache', tempdb
go
sp_helpcache tempdb_cache
go
select name, id from syscharsets
go
dbcc traceon(3604)
go
dbcc memusage
go

1 Database installation optimization
1.1 Application databases on raw devices
If you install the database server on a UNIX (UnixWare) operating system, create the devices for the application databases on raw devices. That is, when creating an application database device (such as ivsp, db160 or nap2000) on a raw device, point the file name at the corresponding file under the /dev/dsk/ subdirectory. System database devices (such as master) still point to the file system.
1.2 Patches
After completing the Sybase database installation, note that the original Sybase software must have the latest patches applied; otherwise some inexplicable problems will appear. For the NT version, the patch is a zip file that can simply be extracted into the c:/sybase subdirectory.
1.3 Installing the Sybase online help
After installing Sybase, there is a script under the installation directory at ./scripts/ins_syn_sql. Execute it on the server:
Sybase for UNIX: isql -Usa -P -i ../scripts/ins_syn_sql
Sybase for Windows NT: isql -Usa -P -i /sybase/scripts/ins_syn_sql
Once it has been executed, you can connect to SQL Server from any Sybase client and get online help for any command:
sp_syntax "keyword"
For example, sp_syntax "alter" lists all commands that contain the string "alter".
2 Database configuration optimization
2.1 Optimizing the master database
First enlarge the master device: the initial default is 30M; increase it to 150M. Then enlarge the master database: the default data segment and log segment are 5M each; a 100M data segment and a 50M log segment are recommended.
alter database master on master = 95
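The two steps (enlarge the device, then the database) might look like the following sketch. The sizes follow the recommendation above; note that `disk resize` is ASE 12.5+ syntax and its `size` argument is the amount added, not the new total:

```sql
-- Sketch only: run from the master database as sa; sizes are the recommendations above.
-- Enlarge the master device by 120M, taking the default 30M up to 150M (ASE 12.5+):
disk resize name = 'master', size = '120M'
go
-- Then grow the master database by 95M on the enlarged device,
-- bringing the 5M default data segment to about 100M:
alter database master on master = 95
go
```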
2.2 Optimizing the tempdb database
First create a tempdb device and assign it to the tempdb database. The default tempdb data segment and log segment are 2M each and are created on the master device; a 200M data segment and a 50M log segment, created on the tempdb device, are recommended.
alter database tempdb on tempdb = 200
All SQL Server users share the tempdb database for worktables and temporary tables, so the main bottleneck in tempdb is disk I/O. The solution is to put tempdb on a faster device. In a UNIX environment, put tempdb on a file system rather than a raw device. Because tempdb automatically creates a 2M database on the master device at startup, the best way to reduce contention is to move tempdb off the master device. To achieve this:
1: Start SQL Server in single-user mode: dataserver -dmaster.dat -m
2: Log in as sa
3: Create a dummy database on the file system
4: Delete the references to the existing tempdb database in the sysusages and sysdatabases tables
5: Get the database ID of the dummy database and modify its entries in sysusages and sysdatabases to refer to tempdb
6: Restart the database
Taking the creation of a 200M tempdb on newdevice as an example, the procedure is as follows:
create database newtemp on newdevice = 200
go /* create the new database */
begin tran
go /* start a transaction so that a mistake does not bring down the whole SQL Server */
delete sysusages where dbid = 2
delete sysdatabases where dbid = 2
go /* delete the references to tempdb from the system tables; run this only in single-user mode */
select dbid from sysdatabases where name = 'newtemp'
go /* get the newtemp database ID, assume it is 10 */
update sysusages set dbid = 2 where dbid = 10
update sysdatabases set name = 'tempdb', dbid = 2 where name = 'newtemp'
go /* modify the references: the entries for newtemp now refer to tempdb */
select name, dbid from sysdatabases where name = 'tempdb'
select * from sysusages where dbid = 2
go /* check that the modification is correct; if so commit, otherwise use rollback tran */
commit tran
go /* once the modification succeeds, reboot the system */
This method is valid only for tempdb and cannot be used for other databases, because tempdb is reinitialized each time SQL Server starts.
2.3 Optimizing system parameters
The following parameter values are standard recommendations and can be adjusted to actual conditions.
sp_configure 'total memory', 100000 -- memory for the database; set it according to the machine configuration, half of physical memory is recommended. The unit is a 2K db_block, so the example above is about 200M; the default is 24M.
sp_configure 'lock scheme', 0, 'datarows' -- the system defaults to all-page (table-level) locking; change it to row-level locking
sp_configure 'number of locks', 10000 -- enlarge the maximum number of locks; the default is 5000
(To switch an existing table to row-level locking, execute:
alter table table_name lock datarows)
sp_configure 'procedure cache percent', 30 -- default 20; use a larger value when stored procedures are used heavily, but no more than 30
sp_configure 'number of user connections', 100 -- maximum user connections; the default is 25, and each connection takes about 70K of memory
sp_configure 'number of devices', 20 -- raise the maximum number of device files from the default 15
sp_configure 'number of open databases', 20 -- maximum number of open databases; increase this when several databases run on one server
sp_configure 'max online engines', 2 -- default 1; set it to the actual number of CPUs in the machine
sp_configure 'total data cache size', 60000 -- default 0; 30% of the SQL Server memory is recommended
If Sybase does not start normally after the above parameter changes, check Sybase's error log and reduce the corresponding options in sybase.cfg.
Appendix: allocation of Sybase SQL Server memory
1. SQL Server executable code: 3-4M
2. Static memory used by SQL Server: 2.2-3.25M
3. Memory consumed by user-configurable parameters, for example (version 11.9.2):
Parameter (default value): memory used
user connections (25): 70K each
number of open databases (12): 60K each
number of open objects (500): 1K each
number of open indexes (500): 1K each
number of locks (5000): about 0.1K each
number of database devices (10): 0.5K each
4. The remainder is allocated to:
the procedure cache (set by procedure cache percent, default 20%)
the data cache (by default the 80% that remains after subtracting items 1, 2 and 3)
2.4 Optimizing database options
Method one: select the database in Sybase Central, open its properties, and check the following items on the Options tab:
allow select into/bulk copy
truncate log on checkpoint
checkpoint on recovery
abort transaction on full log
free space accounting
allow nulls by default
auto identity column in non-unique index
Method two: execute the following SQL script from isql (as the sa user). For example:
sp_dboption mydb, "abort tran on log full", true -- terminate the process when the database's log space fills up
sp_dboption mydb, "select into/bulkcopy", true
sp_dboption mydb, "trunc log on chkpt", true
sp_dboption mydb, "no chkpt on recovery", true
sp_dboption mydb, "no free space acctg", true
sp_dboption mydb, "allow nulls by default", true
sp_dboption mydb, "auto identity", true
2.5 Creating a threshold stored procedure
You can modify the following script, or create several threshold stored procedures for different applications and specify the appropriate procedure when you configure each threshold.
create procedure sp_thresholdaction
@dbname varchar(30), @segmentname varchar(30), @free_space int, @status int
as
dump transaction @dbname with no_log
print "LOG DUMP: '%1!' for '%2!' dumped", @segmentname, @dbname
go
2.6 Configuring multiple thresholds
Method one:
In Sybase Central, expand the segments of the corresponding database and double-click logsegment. On the Thresholds page you can set thresholds that clear the log automatically. One of them, the Last Chance threshold, is the system default: when the free space in the system log falls below this value, it is the last chance to clear the log automatically. Its size can be set to about 20% of the total log space.
Add more threshold values as needed.
Method two:
1. Use the following statement to find the log capacity of the database (in pages):
select sum(size) from master..sysusages where dbid = db_id("database_name") and (segmap & 4) = 4
2. Use sp_addthreshold to add a new threshold at about 50% of the log capacity. For example, if the statement above returns 2048:
sp_addthreshold database_name, logsegment, 1024, proc_dump_display
Note: a large transaction can jump straight past the current threshold, so you must add more than one threshold.
Use the command select @@thresh_hysteresis to view the hysteresis value of the database. If the result is, say, 64 pages, set the next threshold at "the most recent threshold - (2 * 64)"; that is, add two smaller thresholds below each threshold you configure.
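Putting method two together, the whole sequence might look like the following sketch. The database name mydb, the 2048-page log and the 64-page hysteresis are assumptions for illustration; proc_dump_display is the threshold procedure named above:

```sql
-- Sketch only: mydb, the log size and the hysteresis value are assumptions.
use mydb
go
select @@thresh_hysteresis                                   -- suppose it returns 64
go
sp_addthreshold mydb, logsegment, 1024, proc_dump_display    -- ~50% of a 2048-page log
go
sp_addthreshold mydb, logsegment, 896, proc_dump_display     -- 1024 - 2*64
go
sp_addthreshold mydb, logsegment, 768, proc_dump_display     -- 896 - 2*64
go
```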
3 Index optimization
On the basis of a good database design, indexes must be used efficiently and maintained regularly. The relevant aspects of indexing are described below.
3.1 Creating an index
Indexes fall into three categories: clustered indexes, nonclustered indexes and covering indexes.
Because an index accelerates queries but slows down data updates, you can improve performance by building the table on one segment and creating its nonclustered indexes on another segment, with the two segments on separate physical devices.
create [unique] [clustered | nonclustered] index index_name on table_name (column_name, ...)
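As a concrete sketch of that layout (the orders table and the segment names data_seg and index_seg are assumptions, not from the original):

```sql
-- Sketch only: table, column and segment names are assumptions.
create table orders (
    order_no int      not null,
    cust_id  int      not null,
    odate    datetime not null
) on data_seg                        -- table on one segment / physical device
go
create nonclustered index idx_orders_cust
on orders (cust_id)
on index_seg                         -- index on another segment / physical device
go
```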
3.2 Rebuilding indexes
As data rows are inserted and deleted and data pages split, some index pages may come to hold only a few rows, which also hurts bulk I/O. Rebuilding nonclustered indexes reduces this fragmentation and maintains the efficiency of large-block I/O. Rebuilding an index is really a reorganization of the B-tree's space. You need to rebuild an index in the following situations:
(1) Data and usage patterns have changed greatly.
(2) The sort order has changed.
(3) A large number of inserts is planned or has been completed.
(4) Queries that use large-block I/O perform more disk reads than expected.
(5) Heavy data modification has left the data pages and index pages under-filled, so space usage exceeds the estimate.
(6) DBCC has detected problems with the index.
When a clustered index is rebuilt, all of the table's nonclustered indexes are rebuilt as well.
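In practice a rebuild is a drop and re-create; the table and index names below are assumptions:

```sql
-- Sketch only: table and index names are assumptions.
-- Rebuilding the clustered index also rebuilds every nonclustered index on the table.
drop index orders.idx_orders_clu
go
create clustered index idx_orders_clu
on orders (order_no)
with sorted_data       -- skips the sort when the rows are already in index order
go
```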
3.3 Updating index statistics
When you create an index on a table that already contains data, SQL Server creates a distribution page that holds two kinds of statistics about the index: the distribution table and the density table. The optimizer uses this page to decide whether the index is useful for a particular query. These statistics are not recomputed dynamically, so when the table's data changes they are likely to become outdated, which misleads the optimizer. You should therefore run the update statistics command in the following situations:
(1) Inserting and deleting data rows has changed the distribution of the data.
(2) Data rows have been added to a table whose data was deleted with truncate table.
(3) The values of indexed columns have been modified.
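For example (the table and index names are assumptions); sp_recompile makes existing stored procedures pick up the new statistics on their next execution:

```sql
-- Sketch only: orders and idx_orders_cust are assumed names.
update statistics orders                    -- refresh statistics for all indexes on the table
go
update statistics orders idx_orders_cust    -- or for one index only
go
sp_recompile orders                         -- recompile dependent stored procedures
go
```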
4 Query optimization
4.1 The NOT IN clause
If you are in the habit of using operations like NOT IN, try to replace them with (NOT) EXISTS.
Example:
Statement 1:
select dname, deptno from dept where deptno not in (select deptno from emp);
Statement 2:
select dname, deptno from dept where not exists (select deptno from emp where dept.deptno = emp.deptno);
Obviously, statement 2 performs much better than statement 1: in statement 1 the subquery does a full table scan of emp, which wastes time, and no index on emp is used because the subquery has no WHERE clause. In statement 2 the subquery on emp is a range scan.
4.2 Bulk queries
In bulk queries, use as few format conversions as possible. For example, use
where a.order_no = b.order_no
rather than
where to_number(substr(a.order_no, instr(a.order_no, '.') - 1)) = to_number(substr(b.order_no, instr(b.order_no, '.') - 1))
When querying large volumes of data you can also use optimizer hints, such as /*+ ordered */. For example:
select /*+ full(emp) */ e.ename from emp e where e.job = 'clerk';
instead of
select e.ename from emp e where e.job || '' = 'clerk';
For systems with large data volumes and complex business functions, Sybase's default parameters simply cannot meet the requirements and must be optimized. Tuning the database system is done on two fronts: adjusting the values of the system's performance parameters, and tuning the application. Application tuning means reasonable SQL and reasonable use of indexes; the following is a summary of tuning the basic performance parameters of a Sybase system.
1. Memory
Memory is the most important, and the most difficult, area of performance tuning, and it must be tuned. The commonly adjusted parameters are:
sp_configure "max memory", 0, "2600M" -- set to about 75% of shared memory; takes effect after a restart
sp_configure "allocate max shared memory", 1 -- allocate the maximum memory specified by max memory at startup
sp_cacheconfig "default data cache", "1300M" -- set the cache to half of max memory
sp_cacheconfig "default data cache", "cache_partition=2"
sp_configure "procedure cache size", 102400 -- procedure cache, usually 20%-30% of max memory, here 200M; this parameter must be larger when executing SQL in bulk
sp_cacheconfig 'tempdb_cache', '100M', 'mixed' -- create a 100M named cache tempdb_cache for tempdb to use
sp_bindcache 'tempdb_cache', tempdb -- bind tempdb_cache to tempdb
A real headache encountered in practice: the 32-bit Windows version of Sybase can only reach about 3G of memory, and the instance will not start if "default data cache" exceeds about 1500M, even on a server with 16G of memory. So if your project is similar, with the server and operating system supplied by the customer and unchangeable, requesting 4G of memory is enough; any more is wasted.
2. CPU
When the server has more than one CPU, consider using multiple engines. The OS schedules the CPUs automatically in any case; setting this only gives more precise control. Adjust to the actual number of CPUs: with N CPUs, it is generally set to N-1. For example, on my server with 8 CPUs:
sp_configure "max online engines", 7
sp_configure "number of engines at startup", 7
sp_configure "max parallel degree", 1 -- degree of parallelism, greater than or equal to 1
sp_configure "number of worker processes", 7 -- parallel degree * concurrent connections * 1.5
3. Number of connections
Not much to say here; just make sure there are enough. The default is 25, and it can be modified to suit the application.
sp_configure "number of user connections", 600
To query dead processes in the database:
select * from master..syslogshold
4. Locks
The database locking mechanism is actually a fairly complex topic; only a brief account is given here. Sybase has two page-level locking schemes: all-page locking and data-page locking. All-page locking locks both data pages and index pages; data-page locking locks only the data pages, leaving the index pages unlocked.
Sybase supports three lock granularities: table locks, page locks and row locks. Some commonly used tuning commands and strategies are:
sp_configure "number of locks", 50000 -- set the number of locks
Set the number of locks generously; it simply has to be enough. If you need to save space and reduce maintenance, use the all-page locking scheme; if you need speed and have enough space, use the data-page locking scheme.
sp_sysmon "00:10:00", locks -- monitor lock usage over a ten-minute window
When monitoring shows lock contention above 15%, first change the locking scheme of the most heavily locked tables from data-page locks to data-row locks. If many spinlock waits are found, create a separate named cache for the table and partition that cache.
5. I/O
The overall idea of database tuning is to minimize and spread out physical I/O, and to minimize network I/O. Physical I/O is reduced by adding large-I/O buffer pools to named caches, spreading the data across multiple disks, using RAID, and creating segments so that a table spans several disks; this is basically the same as tuning any other database software.
Network I/O is reduced by using larger packets:
sp_configure "default network packet size", 2048 -- set the network transfer packet size (requires a restart)
sp_configure "max network packet size", 2048
6. Device adjustment
There are two major adjustments: first, the data devices and log devices of a business database must be separated, and a dedicated temporary database device added; second, tune tempdb, which is important but easy to overlook.
tempdb is the temporary database in Sybase; it holds intermediate results and temporary tables. Because it is used very frequently and its default size is small, increase the size of its device and of the database itself, put tempdb on the fastest disk available, and create a separate named cache for it:
sp_cacheconfig 'tempdb_cache', '200M', 'mixed' -- create a 200M named cache tempdb_cache for tempdb to use
sp_bindcache 'tempdb_cache', tempdb -- bind tempdb_cache to tempdb

