Procedure for generating a Statspack report from Oracle

1. Telnet to the remote database server
CMD> telnet <IP address of the remote database server>
2. Switch to the oracle user
DQXXDBS01:/> su - oracle
3. Connect to the database as SYSDBA
$ sqlplus "/ as sysdba"
4. Check whether the job queue is available (a job can be used to collect Statspack snapshot data automatically)
SQL> show parameter job_queue_processes
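If job_queue_processes is 0, database jobs will not run and spauto.sql cannot schedule automatic snapshots. A minimal sketch of enabling the job queue (the value 10 is only an example; choose a value suitable for your system):
SQL> alter system set job_queue_processes = 10;  -- enable the job queue so dbms_job can run the snapshot job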
5. Check whether timed statistics are enabled (this parameter controls collection of timing and operating system statistics)
SQL> show parameter timed_statistics
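If timed_statistics is false, the report will contain little or no timing information. A sketch of enabling it at the instance level (it can also be set per session with alter session):
SQL> alter system set timed_statistics = true;  -- collect timing information for waits and statistics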
6. Before creating the tablespace, check where the existing data files are located:
SQL> select file_name from dba_data_files;

7. Check which directory has enough free space and decide where to place the data file for the perfstat tablespace.
$ df -g
8. Create the tablespace:
SQL> create tablespace perfstat datafile '/oracle/product/10.2.0/db_1/dbs/perfstat.dbf' size 500M;

-- This tablespace stores the data collected by the Statspack snapshots.
9. Run the script that creates the Statspack objects
SQL> @/oracle/product/10.2.0/db_1/rdbms/admin/spcreate.sql -- "/oracle/product/10.2.0/db_1/rdbms/admin/" is the script directory
The script prompts for perfstat_password, default_tablespace, and temporary_tablespace.
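An illustrative exchange with the script's prompts (the values below are examples only; use your own password and the tablespace created above):
Enter value for perfstat_password: perfstat
Enter value for default_tablespace: perfstat
Enter value for temporary_tablespace: temp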
10. Test whether a database snapshot can be generated successfully
SQL> execute statspack.snap
11. Generate a snapshot
SQL> execute statspack.snap
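To confirm that both snapshots were recorded, you can query the snapshot table (assuming you are connected as perfstat or can otherwise see its tables):
SQL> select snap_id, snap_time from stats$snapshot order by snap_id;  -- each row is one snapshot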
12. Generate the Statspack report for the period between two snapshots
SQL> @/oracle/product/10.2.0/db_1/rdbms/admin/spreport.sql
Enter the start and end snapshots for the report. For example, if snapshots were taken at two points in time, enter begin_snap: 1 and end_snap: 2, then enter a name for the report file (for example, 1.txt).
-- Note: If no report location is specified, the report is written to the directory from which sqlplus was started.
13. Use a job to generate Statspack snapshots periodically (every hour by default)
SQL> @/oracle/product/10.2.0/db_1/rdbms/admin/spauto.sql
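spauto.sql submits a dbms_job that takes a snapshot every hour. If a different interval is needed, the job it created can be altered; a sketch assuming the job number is the one shown by the user_jobs query in the notes below (41 here is only a placeholder), changing the interval to 30 minutes:
SQL> execute dbms_job.interval(41, 'trunc(sysdate+1/48,''MI'')');  -- 41 is a placeholder job number; 1/48 of a day is 30 minutes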
14. Delete historical data
SQL> select max(snap_id) from stats$snapshot; -- find the current snapshot IDs; deleting rows from stats$snapshot also deletes the corresponding rows in the other stats$ tables
SQL> delete from stats$snapshot where snap_id < 3;
You can also run the sptrunc.sql script to truncate these statistics tables directly.
Note:
1. After generating the required Statspack reports, remember to remove the job. To view the jobs:
SQL> select job, log_user, priv_user, last_date, next_date, interval from user_jobs;
2. Remove the job that generates Statspack snapshots (41 is the job number returned by the query above):
SQL> execute dbms_job.remove(41);
3. If an error occurs when running the spcreate.sql script, run the spdrop.sql script to drop the created objects, then run spcreate.sql again.
4. After running spcreate.sql, check the generated .lis files for installation details (they are written to the directory from which sqlplus was started).
Other related scripts
1. spuexp.par: a parameter file for exporting the Statspack data to a dump file with exp, as shown below:
exp userid=perfstat/fyzh parfile=spuexp.par
Note: The spuexp.par file must be in the current directory of the session running exp. For example, if the session starts in C:\Documents and Settings\Administrator, spuexp.par must be in that directory; otherwise exp reports "LRM-00109: could not open parameter file 'spuexp.par'".
2. sprepsql.sql: generates a SQL report for a given SQL hash value.
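A usage sketch for sprepsql.sql (the path assumes the same ORACLE_HOME used above; the script prompts for a begin snapshot, an end snapshot, the hash value of the SQL statement of interest, and a report name):
SQL> @/oracle/product/10.2.0/db_1/rdbms/admin/sprepsql.sql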
Adjusting the Statspack collection thresholds
Statspack has two types of collection options:
Level: controls the types of data collected.
Threshold: sets the thresholds above which SQL statements are collected.
1. Snapshot level
Statspack has three snapshot levels; the default is 5.
A. Level 0: general performance statistics, including wait events, system events, system statistics, rollback segment statistics, row cache, SGA, sessions, locks, and buffer pool statistics.
B. Level 5: adds SQL statements. In addition to everything in level 0, SQL statements are collected and the results are recorded in stats$sql_summary.
C. Level 10: adds child latch statistics. Includes everything in level 5, with the child latch data stored in stats$latch_children. Use this level with caution; it is recommended only under the guidance of Oracle Support.
You can use the statspack package to change the default level:
SQL> execute statspack.snap(i_snap_level => 0, i_modify_parameter => 'true');
If you only want to change the collection level for this snapshot, omit the i_modify_parameter parameter:
SQL> execute statspack.snap(i_snap_level => 0);
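To check the level currently stored for the instance, you can query the parameter table (a read-only check against the perfstat schema):
SQL> select snap_level from stats$statspack_parameter;  -- the level used when i_snap_level is not specified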
2. Snapshot thresholds
The snapshot thresholds apply only to the SQL statements captured in the stats$sql_summary table.
Because each snapshot collects a lot of data, and each row represents one SQL statement present in the database at the time of the snapshot, stats$sql_summary quickly becomes the largest table in Statspack.
The thresholds are stored in the stats$statspack_parameter table:
executions_th: the number of times the SQL statement has been executed (default 100)
disk_reads_th: the number of disk reads performed by the SQL statement (default 1000)
parse_calls_th: the number of parse calls performed by the SQL statement (default 1000)
buffer_gets_th: the number of buffer gets performed by the SQL statement (default 10000)
A SQL statement is captured when any one of these thresholds is exceeded.
Call statspack.modify_statspack_parameter to change the default thresholds:
SQL> execute statspack.modify_statspack_parameter(i_buffer_gets_th => 100000, i_disk_reads_th => 100000);
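To verify the thresholds currently in effect, a query against the same parameter table (column names as listed above):
SQL> select executions_th, disk_reads_th, parse_calls_th, buffer_gets_th from stats$statspack_parameter;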
