MS SQL Server 2000 Administrator Manual Series: 30. Microsoft SQL Server Management


30. Microsoft SQL Server Management
Automatic Setting of SQL Server
Database Maintenance Plan
Using appropriate settings and performing routine maintenance tasks are the keys to keeping a server optimized. This chapter describes the dynamic configuration features of SQL Server 2000, which simplify database setup. We will also use the Database Maintenance Plan Wizard to create an automatic maintenance plan, so that the database stays in its best state.
Automatic Setting of SQL Server
SQL Server contains automated functions that reduce the work of configuring and tuning a relational database management system (RDBMS). Because these functions evolved from SQL Server 7.0, users of that version will find them familiar. This chapter describes how these functions operate, how to use them to reduce your administrative workload, and how to disable them when necessary.
Dynamic Memory Management
Dynamic memory management allows SQL Server to allocate memory based on the resources available in the system, dynamically managing the buffer cache and the procedure cache. Because SQL Server supports dynamic memory management, DBAs do not have to control cache sizes manually. In some cases, however, you may need to limit the amount of memory SQL Server can use.
Operation of dynamic memory management
Dynamic memory management works by constantly monitoring the amount of physical memory available in the system. SQL Server grows or shrinks its memory pool (described in the next section) as needed. This works well when the amount of memory used by other programs is fixed. But when other programs' memory usage varies, problems can arise because SQL Server must constantly change its memory allocation.
A computer system used mainly as a SQL Server database server is well suited to dynamic memory management. In such a system, the amount of memory used for work outside SQL Server is relatively fixed, so SQL Server can keep allocating memory to itself to work more efficiently, continuing until no more physical memory is available. If no other process needs more memory, the system stays in this state. If another program does need memory, SQL Server releases the amount the new program requires so that it can run smoothly.
Dynamic memory management is less suitable for a system that also runs other application workloads. As those programs' memory demands change, they must acquire and release memory from time to time; memory usage fluctuates, and SQL Server must constantly allocate and deallocate memory, burdening the system and reducing efficiency. Such a system works better if you allocate a fixed amount of memory to SQL Server manually, or set the maximum and minimum amounts SQL Server may allocate, as described later in this chapter.
Whether you use dynamic or manual memory management, system performance depends on memory being used to best effect. By monitoring SQL Server's memory allocation, you can determine whether memory usage is fluctuating or stable. Use the Windows 2000 Performance Monitor to watch counters such as Total Server Memory (KB) in the SQL Server: Memory Manager object, which shows the amount of memory (in KB) currently consumed by SQL Server.
The Memory Pool
SQL Server dynamically allocates and releases memory in a designated area called the memory pool. The memory pool consists of memory for the following components:
• The buffer cache holds data pages that have been read from the database into memory. The buffer cache usually occupies the majority of the memory pool.
• Connection memory is used by each SQL Server connection. It consists of data structures that track each user's context, including cursor positions, query parameter values, and stored procedure information.
• Data structures hold global information about locks and database descriptors, including the owner of each lock, the lock type, and descriptors for databases and files.
• The log cache holds transaction log records before they are written to the transaction log, and is also used when recently written log records are read back. The log cache improves log-write performance and is separate from the buffer cache.
• The procedure cache holds execution plans for Transact-SQL statements and for stored procedures as they execute.
Because memory allocation is dynamic, under dynamic memory management the memory pool grows and shrinks continuously, and the sizes of the five regions within it also change dynamically. SQL Server controls this itself. For example, if many T-SQL statements must be cached, SQL Server may take memory from the buffer cache and give it to the procedure cache.
Use additional memory
The amount of memory SQL Server can access depends on the Windows operating system. Windows NT Server 4 supports 4 GB of memory, of which 2 GB is for user processes and the other 2 GB is reserved for the system; under NT 4, therefore, the memory SQL Server can allocate is limited to 2 GB. In Windows NT Server 4 Enterprise Edition, however, the virtual memory allocated to each process is 50 percent larger (3 GB), made possible by reducing the system's share to 1 GB. Because the virtual memory available to the process increases, the memory pool can grow to nearly 3 GB. To enable this support in Windows NT 4 Enterprise Edition, add the /3GB flag to the boot line of the boot.ini file, which can be done through the System icon in Control Panel.
With the two high-end editions of Windows 2000, SQL Server 2000 Enterprise Edition can use the Windows 2000 Address Windowing Extensions (AWE) API to access larger memory spaces. Windows 2000 Advanced Server supports nearly 8 GB of memory, and Windows 2000 Datacenter Server supports nearly 64 GB. AWE is supported only on these two operating systems; it is not available on Windows 2000 Professional. (For details, see "Using AWE Memory in Windows 2000" earlier in this book and in SQL Server Books Online.)
Memory setting options
The following SQL Server configuration options relate to memory allocation. You can set them with SQL Server Enterprise Manager or with sp_configure. To view these options with sp_configure, you must first set the show advanced options option to 1.
• awe enabled enables SQL Server to use extended (AWE) memory, described earlier. This option is available only in SQL Server Enterprise Edition and can be set only with sp_configure.
• index create memory limits the amount of memory used for sorting during index creation. The option is self-configuring and in most cases needs no adjustment; however, if you have difficulty creating indexes, consider raising the value above its default.
• max server memory sets the maximum amount of memory SQL Server can allocate to the memory pool. If SQL Server should allocate and release memory dynamically, leave the default. To allocate memory statically (so the amount stays fixed), set this option to the same value as min server memory.
• min memory per query specifies the minimum amount of memory (in KB) allocated for executing a query.
• min server memory sets the minimum amount of memory SQL Server can allocate to the memory pool. Leave the default for dynamic memory allocation. To allocate memory statically, set this option to the same value as max server memory.
• set working set size specifies that the memory allocated to SQL Server cannot be swapped out, even if other processes could use that memory more effectively. Do not use this option when SQL Server manages memory dynamically; use it only when min server memory and max server memory are set to the same value, so that SQL Server allocates a fixed, static amount of memory.
To take advantage of AWE memory, you must run SQL Server 2000 Enterprise Edition on Windows 2000 Advanced Server or Windows 2000 Datacenter Server.
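As a sketch of the static-allocation approach described above (the 1024 MB value is illustrative, not a recommendation), the min and max options can be set to the same value with sp_configure:

```sql
-- Make the advanced memory options visible.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- Illustrative: fix SQL Server's memory allocation at a static 1024 MB
-- by setting min server memory and max server memory to the same value.
EXEC sp_configure 'min server memory', 1024
EXEC sp_configure 'max server memory', 1024
RECONFIGURE
```

Leaving both options at their defaults restores dynamic memory management.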
Other dynamic setting options
SQL Server also has dynamic settings that are unrelated to server memory. If you retain the default values of these options, SQL Server configures them all dynamically. The defaults can be overridden; although this is rarely necessary, you should understand how these options work before setting them manually.
You can set options with SQL Server Enterprise Manager or with sp_configure (not all options can be set through Enterprise Manager). To use sp_configure, open Query Analyzer or an osql session from a command prompt and run the stored procedure with parameters, as follows:
sp_configure 'option name', value
The option name parameter is the name of the option to set, and value is the value to assign. If you run the command without a value, SQL Server returns the current value of the specified option. To view a list of all options and their values, execute sp_configure without any parameters. Some options are considered advanced options; to view and set them with sp_configure, you must first set show advanced options to 1, with the following syntax:
sp_configure 'show advanced options', 1
The options set through Enterprise Manager are not affected by show advanced options.
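A minimal sp_configure session might look like the following sketch; note that a RECONFIGURE statement is needed before a changed value takes effect (the option and value shown are only examples):

```sql
-- View the current and configured values of an option.
EXEC sp_configure 'recovery interval'
-- Change the value, then apply the change.
EXEC sp_configure 'recovery interval', 10
RECONFIGURE
```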
In Enterprise Manager, right-click the server and choose Properties from the shortcut menu to open the SQL Server Properties (Configure) window, shown in Figure 30-1.

Figure 30-1 General tab of the Properties window in Enterprise Manager
Some dynamic options can be accessed on the tabs of this window. The following sections describe the dynamic options not related to memory, noting whether each can be set in Enterprise Manager and, if so, where to find it.
Lock options
SQL Server dynamically sets the number of locks used in the system based on current needs. By setting the locks option, you can limit the amount of memory SQL Server uses for locks. The default value 0 allows SQL Server to allocate and deallocate lock structures dynamically as the system requires, using at most 40 percent of its memory for locks. Leave locks at its default of 0 and let SQL Server allocate locks as needed. This is an advanced option and can be set only with sp_configure.
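If you ever need to inspect or reset the locks option, a sketch with sp_configure (because locks is an advanced option, show advanced options must first be 1):

```sql
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- 0 (the default) lets SQL Server allocate and deallocate locks dynamically.
EXEC sp_configure 'locks', 0
RECONFIGURE
```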
Recovery interval options
The recovery interval option indicates the maximum number of minutes per database that SQL Server needs to recover the database. (See "Automatic Checkpoints" later in this chapter.) The time SQL Server needs for recovery depends on when the last checkpoint occurred, so SQL Server uses the recovery interval value to dynamically determine when to execute automatic checkpoints.
For example, whenever SQL Server is shut down cleanly, a checkpoint is executed in every database, so recovery at the next startup takes very little time. But if SQL Server stops unexpectedly (because of a power failure or other error), at restart it must roll back transactions that were uncommitted and roll forward transactions that were committed but not yet written to disk when SQL Server went down. If the last checkpoint in a database ran just before the failure, recovery is quick; if it ran long before the failure, recovery takes longer.
SQL Server decides how often to execute checkpoints based on the recovery interval setting. For example, if recovery interval is set to 5, SQL Server executes checkpoints in each database often enough that recovery after a failure will take about 5 minutes. The default recovery interval is 0, which means SQL Server sets the interval automatically; with this default, recovery takes less than 1 minute and a checkpoint runs roughly once a minute. In many cases such frequent checkpoints reduce performance, so in most situations you should increase the recovery interval value to reduce the number of checkpoints. The right value depends on your business needs, namely how long users can afford to wait for recovery after a system failure. A value of 5 to 15 (a recovery time of 5 to 15 minutes) is usually sufficient.
The recovery interval option is an advanced option. You can set it in the Enterprise Manager Properties window: click the Database Settings tab, shown in Figure 30-2, and enter a value in the Recovery interval (min) box.

Figure 30-2 Setting the recovery interval
User connection options
SQL Server sets the number of user connections dynamically, allowing a maximum of 32,767. By setting the user connections option, you can specify the maximum number of user connections allowed to access SQL Server (subject to application and hardware limits). Up to that maximum, connections are still allocated dynamically.
For example, if only 10 users are logged in, 10 user connection objects are allocated. If the maximum is reached and SQL Server needs more connections, an error message notifies you that the maximum number of user connections has been reached.
In most cases you do not need to change the default value of the user connections option. Note that each connection consumes about 40 KB of memory.
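Should you need to cap connections anyway, a sketch (the value 200 is illustrative; 0, the default, means dynamic allocation):

```sql
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- Illustrative cap of 200 concurrent user connections.
EXEC sp_configure 'user connections', 200
RECONFIGURE
-- The new value takes effect after the server is restarted.
```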
You can use SQL Server Query Analyzer and the following T-SQL statement to determine the maximum number of user connections your system allows:
SELECT @@MAX_CONNECTIONS
The user connections option is an advanced option that can be set in Enterprise Manager: click the Connections tab of the server Properties window and enter a value in the Maximum concurrent user connections box, shown in Figure 30-3.

Figure 30-3 Setting user connections
Open objects option
The open objects option is an advanced option and can be set only with sp_configure. It specifies the maximum number of database objects that can be open at the same time; tables, views, stored procedures, triggers, rules, and defaults all count. The default value 0 lets SQL Server adjust the number of open objects dynamically, and we recommend keeping it. If you change it and SQL Server needs more open objects than the setting allows, you will receive an error message saying the allowed number of open objects has been exceeded. Also, each open object consumes some memory, so supporting more open objects requires more physical memory.
Database statistics
Column statistics help improve query performance. SQL Server can collect statistical information about the distribution of values in a table's columns, and the query optimizer uses this information to determine the optimal execution plan for a query. Statistics are collected on two kinds of columns: columns that are part of an index, and non-indexed columns that are used in query predicates. If you leave the database options at their SQL Server defaults, both kinds of statistics are created automatically: index statistics are created when the index is built, and non-indexed column statistics are created when a query needs them (on single columns only, not column combinations, as described in "Create statistics" later in this chapter). When statistics become old (unused for a period of time), SQL Server automatically removes them.
When creating statistics on both indexed and non-indexed columns, SQL Server samples the data in the table rather than reading every row. This reduces the overhead of the task, but in some cases the sample does not accurately represent the data, and the statistics will be less exact.
In Enterprise Manager, you can enable or disable automatic statistics creation for a database. Open the Properties window of the selected database and click the Options tab; the Auto create statistics check box appears there. (Figure 30-4 shows the check box for the distribution database.) This option is selected by default.

Figure 30-4 Properties window of the distribution database
In the database Properties window you will also see the Auto update statistics option. When it is selected, SQL Server automatically updates the statistics on a table's columns when necessary. Statistics must be updated when a large share of the values in a table has changed (through UPDATE, INSERT, or DELETE operations), because the existing statistics are then inaccurate. SQL Server determines automatically when statistics should be updated. If you clear this option or the option to create statistics, you must perform those tasks manually to keep the database running well. The following sections show how to create and update statistics manually.
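These two database options can also be inspected and changed with the sp_dboption system stored procedure; a sketch using the Northwind sample database:

```sql
-- Show the current setting of the auto create statistics option.
EXEC sp_dboption 'Northwind', 'auto create statistics'
-- Turn automatic statistics updates off (or back on with 'true').
EXEC sp_dboption 'Northwind', 'auto update statistics', 'false'
```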
Create statistics
You can use the T-SQL command CREATE STATISTICS to manually create statistics on specific columns of a table. Manually created statistics differ from automatically created ones: manually, you can combine several columns into one set of statistics, producing averages of duplicate and distinct values for the combination. The syntax of CREATE STATISTICS is as follows:
CREATE STATISTICS stats_name
ON table_name (column [, column...])
[WITH [FULLSCAN | SAMPLE size PERCENT]
[, NORECOMPUTE]]
You must provide the statistics name, the table name, and at least one column name. You can specify multiple columns to collect statistics on the combination, but you cannot specify columns of the ntext, text, or image data types. Statistics can be gathered with either a full scan or sampling. A full scan reads every row of the table, so it takes longer than sampling but is more accurate. If you use sampling, you must specify the percentage of data to sample. NORECOMPUTE disables automatic updating of the statistics, so over time they may no longer represent the data.
You may want to create statistics on columns that are used together in queries. For example, you could create statistics on the FirstName and LastName columns of the Employees table in the Northwind database to support searching for employees by first and last name. The T-SQL code looks like this:
CREATE STATISTICS name
ON Northwind..Employees (FirstName, LastName)
WITH FULLSCAN, NORECOMPUTE
This statement computes statistics across all rows of the FirstName and LastName columns and disables automatic recomputation of the statistics.
If you do not want to type CREATE STATISTICS statements for each column, you can use the sp_createstats stored procedure, described in the next section.
The sp_createstats stored procedure creates statistics on all eligible columns in all user tables. Statistics are created only for columns that do not already have them, and each set of statistics covers a single column. The syntax of sp_createstats is as follows:
sp_createstats ['indexonly'] [, 'fullscan'] [, 'norecompute']
The indexonly parameter restricts statistics to columns that participate in an index. The fullscan parameter specifies a full scan of every row rather than a random sample. The norecompute parameter disables automatic updating on the new statistics. The new statistics are named after the columns they cover.
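For example, the following sketch creates statistics on all indexed columns that lack them, using a full scan of each table:

```sql
-- Single-column statistics for every eligible indexed column,
-- computed from every row rather than a sample.
EXEC sp_createstats 'indexonly', 'fullscan'
```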
Update statistics
By default, SQL Server updates statistics automatically. If you turn that option off, you can instead update statistics manually with the UPDATE STATISTICS command, which works on both indexed and non-indexed columns. You may want to write a script that runs UPDATE STATISTICS and schedule it to run regularly as a SQL Server job; this keeps statistics current and helps maintain query performance. (See "Rebuilding Indexes" in Chapter 17 for more about the syntax and options of UPDATE STATISTICS.) To set or remove the automatic update setting for specific statistics, use the sp_autostats stored procedure, described next.
The system stored procedure sp_autostats sets or removes automatic updating for specific statistics. Executing it does not itself update any statistics; rather, it determines whether automatic updates will occur. Call the procedure with one to three parameters: the table name, an optional flag, and an optional statistics name. The flag indicates the automatic update state and is set to ON or OFF. To display the update status of all statistics (index and non-index) on a table, run the command with only the table name. The following command displays the statistics status of the Customers table:
USE Northwind
EXEC sp_autostats 'Customers'
The output lists each statistic's name, whether automatic updating is set to ON or OFF, and when the statistic was last updated. Do not be confused by the Index Name heading on the first output column: it lists all statistics, not just indexes. Unless you have manually turned off updating for a statistic, it will show ON, the SQL Server default.
To turn off automatic updating for all statistics on the Customers table in the Northwind database, use the following commands:
USE Northwind
EXEC sp_autostats 'Customers', 'OFF'
Setting the flag to ON re-enables automatic statistics updates. To change the status of a specific statistic or index statistic, include the statistic or index name. For example, the following command enables automatic statistics updates for the PK_Customers index:
USE Northwind
EXEC sp_autostats 'Customers', 'ON', 'PK_Customers'
The status of all other statistics on the Customers table is unchanged.
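With automatic updating turned off, statistics can be refreshed manually with the UPDATE STATISTICS command; a minimal sketch for the Customers table:

```sql
USE Northwind
-- Recompute all statistics on the table from every row.
UPDATE STATISTICS Customers WITH FULLSCAN
```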
File Growth
SQL Server 2000 can grow data files automatically as needed. This function is convenient because it keeps you from unexpectedly running out of space, but it does not relieve you of monitoring database size or doing occasional capacity planning. Pay attention to how fast your tables grow; you may then decide to regularly delete unneeded data (perhaps expired rows in some tables) to slow that growth. As the number of rows in a table increases, queries take longer and performance drops. You set automatic file growth when you create a database (as described in Chapter 9); here you will learn how to change the growth options of an existing database. Automatic file growth can be set in Enterprise Manager; follow these steps:
1. In the left pane of Enterprise Manager, expand a server and select the Databases folder. Right-click the database you want to modify (here the mydb database is used as an example) and choose Properties from the shortcut menu to open the database Properties window.
2. Click the Data Files tab (shown in Figure 30-5) to view the properties of the database's data files. The options under File properties control the growth of the data files. To enable automatic file growth, select the Automatically grow file check box. If you let a file grow automatically, you should also set a limit so that it cannot grow without bound.
Figure 30-5 Data Files tab of the mydb Properties window
Use the Maximum file size options to set a growth limit: click Restrict file growth and type the maximum size in the spin box. If you choose Unrestricted file growth, you may find the disk subsystem unexpectedly filling up with data, causing performance and operational problems.
The File growth options control how fast a file grows. If you click In megabytes, SQL Server expands a full data file by the specified number of megabytes; if you click By percent, SQL Server expands it by the specified percentage of its current size.
3. Click the Transaction Log tab (shown in Figure 30-6) to set the automatic growth options for the transaction log. These options work the same way as those on the Data Files tab. Here too you should set a limit so that the log file cannot grow without bound.
Figure 30-6 Transaction Log tab of the mydb database Properties window
Automatic file growth is very convenient in many situations, as long as you make sure it cannot unexpectedly consume all the disk space on your system.
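The same growth settings can also be applied with T-SQL; a sketch for the mydb example (the logical file name mydb_data is an assumption, check the real name with sp_helpfile):

```sql
-- Illustrative: grow the data file in 100-MB steps up to a 2-GB cap.
-- The logical file name 'mydb_data' is hypothetical; run sp_helpfile
-- in the database to find the actual one.
ALTER DATABASE mydb
MODIFY FILE (NAME = mydb_data, FILEGROWTH = 100MB, MAXSIZE = 2GB)
```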
Automatic Checkpoints
SQL Server performs checkpoint operations automatically. The checkpoint frequency is calculated automatically from the recovery interval configuration option, which specifies the number of minutes a database may need to recover after a system failure. Checkpoints must occur often enough that recovery time stays below the specified number of minutes. A checkpoint also occurs automatically when SQL Server is shut down with the SHUTDOWN statement or through the Service Control Manager. You can also force a checkpoint manually with the CHECKPOINT statement.
If you want to optimize performance and can tolerate a long recovery time, set recovery interval to a large value, such as 60; if the system fails, automatic recovery will then take up to 60 minutes. Checkpoints cause a burst of disk writes, which takes processing resources away from user transactions and slows response times, which is why running fewer checkpoints often improves overall transaction throughput. Of course, too high a value leads to long downtime after a failure. A typical recovery interval is between 5 and 15 minutes.
The default recovery interval is 0, which lets SQL Server determine the best checkpoint frequency for the system load. With the default, a checkpoint usually runs about every minute. If you notice checkpoints occurring frequently, you may want to adjust the recovery interval setting. To determine whether SQL Server is running too many checkpoints, use SQL Server trace flag 3502, which causes checkpoint information to be written to the SQL Server error log. Note that checkpoints occur in every database.
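A sketch of enabling the trace flag and forcing a checkpoint by hand:

```sql
-- Log every checkpoint to the SQL Server error log (-1 = all connections).
DBCC TRACEON (3502, -1)
-- Force an immediate checkpoint in the current database.
CHECKPOINT
```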
Database Maintenance Plan
A maintenance plan is a set of tasks that SQL Server runs automatically against a database on the schedule you specify. The purpose of a maintenance plan is to automate important administrative tasks so that none is overlooked and the DBA's manual workload is reduced. You can create an individual plan for each database, multiple plans for a single database, or a single plan covering multiple databases.
When creating a maintenance plan, you can schedule the following four main management task types:
• Optimization
• Integrity check
• Full database backup
• Transaction Record backup
Performing these tasks is important for keeping a database fast and recoverable. Which optimization tasks to include depends on your database's performance and usage patterns. Integrity checks are a good way to ensure a sound, consistent database. Regular backups are also necessary so that the database can be restored after a system failure or user error; because backups are so important, you should establish an automated backup policy. We will look at each task type in more detail later in this section.
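For orientation, the four task types correspond roughly to the following T-SQL commands, which a maintenance plan schedules for you (the database, table, and file names are illustrative):

```sql
DBCC DBREINDEX ('Customers')                        -- optimization: rebuild indexes
DBCC CHECKDB ('mydb')                               -- integrity check
BACKUP DATABASE mydb TO DISK = 'C:\backup\mydb.bak' -- full database backup
BACKUP LOG mydb TO DISK = 'C:\backup\mydb_log.bak'  -- transaction log backup
```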
You create a maintenance plan with the Database Maintenance Plan Wizard. In this section you will learn how to use the wizard, how to view the jobs in a maintenance plan, and how to edit a plan.
Use Database Maintenance Plan Wizard to create a maintenance plan
To run the Database Maintenance Plan Wizard, follow these steps:
1. Start the Database Maintenance Plan Wizard in any of the following ways:
o On the Tools menu, choose Database Maintenance Planner.
o Select a database name in the left pane, and click New Maintenance Plan under the Maintenance heading in the right pane. If the Maintenance heading is not visible, check that Taskpad is selected on the View menu; you may need to scroll down to see it.
o Click a database name, choose Wizards from the Tools menu, and in the Select Wizard dialog box expand the Management folder and select Database Maintenance Plan Wizard.
o Expand the server in the left pane, expand the Management folder, right-click Database Maintenance Plans, and choose New Maintenance Plan from the shortcut menu.
o Right-click the database name, point to All Tasks, and choose Maintenance Plan from the shortcut menu.
Once you start the wizard, you will see the welcome screen, shown in Figure 30-7.

Figure 30-7 The Database Maintenance Plan Wizard welcome screen
2. Click Next to go to the database selection screen, shown in Figure 30-8. Select one or more databases for which you want to create a maintenance plan.
3. Click Next to go to the Update Data Optimization Information screen, shown in Figure 30-9. Select the types of optimization for the databases you have chosen:
o Reorganize data and index pages. This option drops and re-creates the indexes on all tables in the database with a specified fill factor (the amount of free space per page), improving update performance. It is not needed for read-only tables. In tables with frequent inserts or updates, the free space on index pages gradually fills up and pages begin to split. Selecting this option re-creates the indexes with free space for future growth, avoiding the delays and fragmentation caused by page splits.

Figure 30-8 The database selection screen
You can choose to re-create the indexes with their original amount of free space, or specify a new percentage of free space to leave on each page. If you set the percentage too high, you risk reducing read performance. If you select this option, you cannot also select the Update statistics used by query optimizer option.
Dropping and re-creating an index takes longer than using DBCC DBREINDEX, discussed in "Rebuilding Indexes" in Chapter 17. You may prefer to create your own job that rebuilds indexes rather than use this option.

Figure 30-9 The Update Data Optimization Information screen
O Update statistics used by query optimizer This option causes SQL Server to re-sample the distribution statistics of every index in the database. The query optimizer uses this information to choose the best execution plan for a query. If you have not changed the default auto update statistics setting (described earlier in this chapter), SQL Server generates statistics automatically, sampling only a portion of the data in each index.
You can use this option to force SQL Server to perform a new sampling with a specified, larger percentage of the data, and to control how often the statistics are updated rather than leaving that decision to SQL Server. The larger the percentage of data sampled, the more accurate the statistics, but the longer SQL Server takes to generate them. This option helps most when the data in the indexed columns has changed substantially. You can use SQL Server Query Analyzer to examine a query's execution plan, determine whether indexes are being used effectively, and decide whether this option is necessary. If you select this option, you cannot also select the Reorganize data and index pages option described previously.
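The same re-sampling can be done manually with the UPDATE STATISTICS statement. A sketch, again assuming the Northwind sample database (table name and sample percentage are example values):

```sql
USE Northwind
GO
-- Re-sample distribution statistics for the Orders table using a
-- larger sample than the automatic default.
UPDATE STATISTICS dbo.Orders WITH SAMPLE 50 PERCENT
GO
-- Or scan every row for the most accurate (but slowest) statistics.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN
GO
```

As the text notes, the trade-off is accuracy against the time SQL Server spends generating the statistics.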
O Remove unused space from database files This option removes unused space from the database files, a process also known as shrinking the files. You can specify how large the database must grow before it is shrunk, and how much free space should remain after shrinking. Once the unused space is removed, the files can be reduced in size — if necessary, even smaller than their originally created size — which frees the disk space for other uses. Removing unused space can also improve efficiency. Shrinking is unnecessary for read-only tables.
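Outside the wizard, the same result can be achieved manually with DBCC SHRINKFILE. A sketch, assuming the Northwind sample database, whose primary data file has the logical name Northwind (the target size is an example value):

```sql
USE Northwind
GO
-- Shrink the primary data file toward a 40 MB target size,
-- releasing the freed space back to the operating system.
DBCC SHRINKFILE (Northwind, 40)
GO
```

The first argument is the file's logical name (visible via sp_helpfile), and the second is the target size in megabytes.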
You can click the Change button to edit the schedule for these tasks and specify when they run, as shown in Figure 30-10. Schedule these tasks for periods of low system usage (such as nights or weekends), because they can take a long time to complete and may slow response times for users.
4. Click Next to go to the database integrity check screen, as shown in Figure 30-11. On this screen you choose whether to perform integrity checks. The integrity check runs the DBCC CHECKDB command to verify the allocation and structural integrity of the tables and, if you select the index option, the indexes. You can choose whether to include indexes in the check, whether SQL Server should attempt to repair minor problems it finds (recommended), and whether the integrity checks should be performed before each backup. If you choose to check before backups and a problem is found, the backup is not performed. Click Change to set when these tasks run. An integrity check can take hours, depending on the size of your database, so be sure to schedule it for times of low database usage. Checks should be performed regularly, perhaps weekly or monthly, or before database backups.
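Run manually, the same check looks like the following sketch (Northwind is used as an example database):

```sql
-- Check the allocation and structural integrity of all objects
-- in the Northwind database.
DBCC CHECKDB ('Northwind')
GO
-- To skip the nonclustered index checks (faster, but less
-- thorough), add the NOINDEX option.
DBCC CHECKDB ('Northwind', NOINDEX)
GO
```

The NOINDEX variant corresponds to clearing the wizard's index-check option; it shortens the run at the cost of not validating nonclustered indexes.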
Figure 30-10 The Edit Recurring Job Schedule dialog box

Figure 30-11 The database integrity check screen
5. Click Next to go to the database backup plan screen, as shown in Figure 30-12. You can choose whether to create an automatic backup plan (recommended). Selecting Back up the database as part of the maintenance plan creates automatic backups (Chapter 1 describes backups in detail). You can direct SQL Server to verify the integrity of the backup when it completes; SQL Server then confirms that the backup completed successfully and that all of it is readable. You can also specify whether the backup should be stored on tape or on disk. Click Change to set when the backups run.
6. Click Next to go to the backup disk directory screen, as shown in Figure 30-13. This screen appears only if you specified backing up to disk on the previous screen; if you chose tape, it does not appear. You can specify the location of the backup files or use the default backup directory. If you are backing up more than one database (such as master, model, and msdb), you can place each database's backups in its own subdirectory to keep the backup files organized. You can also choose to automatically delete backup files older than a certain age to free disk space, and you can specify the backup file extension.
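The disk backup the wizard schedules, including the verification step, can be expressed in T-SQL as in the following sketch (the database name and file path are example values):

```sql
-- Full database backup to a disk file; INIT overwrites any
-- previous backup sets in the file.
BACKUP DATABASE Northwind
TO DISK = 'C:\MSSQL\BACKUP\Northwind_full.bak'
WITH INIT
GO
-- Confirm that the backup completed and is readable, without
-- actually restoring it.
RESTORE VERIFYONLY
FROM DISK = 'C:\MSSQL\BACKUP\Northwind_full.bak'
GO
```

RESTORE VERIFYONLY corresponds to the wizard's option to verify the integrity of the backup on completion.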
Figure 30-12 specify the database backup plan

Figure 30-13 specify the backup disk directory
7. Click Next to go to the transaction log backup plan screen, as shown in Figure 30-14. This screen is similar to the database backup plan screen shown in Figure 30-12, but its options create transaction log backups. Transaction log backups should be performed between database backups: they capture all the changes made since the last backup, and so allow you to restore the database to a point in time between database backups.
Figure 30-14 The transaction log backup plan screen
If you chose to store the log backups on disk, the next screen lets you specify the disk directory for the transaction log backup files.
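A transaction log backup between full backups can be sketched as follows (again with example names and paths; the database must not be using the Simple recovery model, or log truncation will prevent log backups):

```sql
-- Back up the transaction log; NOINIT appends this backup set to
-- any existing sets in the file, preserving the backup chain.
BACKUP LOG Northwind
TO DISK = 'C:\MSSQL\BACKUP\Northwind_log.bak'
WITH NOINIT
GO
```

Restoring the full backup followed by the log backups in sequence is what allows recovery to a point in time between database backups.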
8. Click Next to go to the report generation screen, as shown in Figure 30-15. This screen provides options for creating a report that contains the results of the maintenance plan tasks. You can select where the report is stored, delete reports older than a certain age, and have the report e-mailed to a specified address.
9. Click Next to go to the maintenance plan history screen (Figure 30-16). You can choose whether to write the maintenance history to a database table on the local server, and you can limit the number of rows kept. You can also write the history to a remote server and limit its size there.
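Assuming the history is logged to the local server, it lands in a table in msdb and can be reviewed directly. A sketch (the column list is abbreviated for illustration):

```sql
-- Show the most recent maintenance plan activity recorded on the
-- local server.
SELECT plan_name, activity, succeeded, end_time, message
FROM msdb.dbo.sysdbmaintplan_history
ORDER BY end_time DESC
GO
```

The succeeded column and the message text make this a quick way to spot a maintenance task that has started failing.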
Figure 30-15 generate a report

Figure 30-16 maintenance plan history
10. Click Next to go to the final wizard screen, as shown in Figure 30-17. This screen shows a summary of the maintenance plan. The plan has a default name, but you can type a new name in the Plan name text box. Review the summary; if you want to change any option, go back and modify it. When the plan is correct, click Finish.
Figure 30-17 Completing the Database Maintenance Plan Wizard
Viewing the Jobs in a Maintenance Plan
In the maintenance plan example, the wizard created a job for each of the four task categories. To view the list of jobs and their schedules, open the Management folder in the left pane of Enterprise Manager, expand SQL Server Agent, and select Jobs, as shown in Figure 30-18.

Figure 30-18 Jobs created by the maintenance plan
Editing a Maintenance Plan
To edit a maintenance plan, select Database Maintenance Plans under the Management folder in the left pane of Enterprise Manager, and then double-click the plan name in the right pane (you may have to scroll to find it). The Database Maintenance Plan dialog box appears, as shown in Figure 30-19.
The General tab lets you specify which databases the maintenance plan applies to. The other tabs let you change the settings you chose in the original Database Maintenance Plan Wizard. Click OK when you have finished modifying the plan; the plan takes effect immediately under its new settings.
SQL Server Agent must be running for the jobs in an automatic maintenance plan to run on schedule. For more information, see Chapter 31.

Figure 30-19 General tab in the "Database Maintenance Plan" dialog box
In this chapter, you have learned about the dynamic configuration features of SQL Server 2000, which help reduce the DBA's workload. You have also learned how to create database maintenance plans that run administrative tasks automatically. The next chapter shows you how to use SQL Server Agent to define jobs and alerts; by setting up jobs and alerts, you can automate administrative tasks.
