Preface
When talking about tracking, many people think of scenes from police films. Similarly, "tracking" is everywhere in our SQL Server database. If we make good use of tracking techniques, we can perform targeted analysis for specific scenarios and find enough evidence to "solve the case."
A few simple application scenarios:
- Why did the online production database suddenly go down?
- Why are hundreds of data tables missing?
- Why was a system that had just been patched hacked again?
- Why are newly added information tables frequently lost?
- Why did a table field suddenly change?
- Are the anonymous access attempts behind these events actually made by humans?
- Is the sudden jump in incremental data legitimate or not?
- What is behind the explosive growth of hundreds of megabytes of logs: a bug in the application, or careless users?
In this article, let's use database tracking (SQL Trace) to go inside the database and see how it works.
I believe that anyone who has used SQL Server has, to some degree, used the SQL Profiler tool, a graphical tool built on top of SQL Trace. Let's get straight to the topic.
1. View the system's Default Trace information
As a good database tracking tool, Trace has been integrated into the system as a built-in feature since SQL Server 2005 and is enabled by default. Of course, we can also turn it off manually; it lives among the sp_configure configuration parameters. We can view it with the following statement:
SELECT * FROM sys.configurations WHERE configuration_id = 1568
We can also use the following statement to find the record of this trace:
SELECT * FROM sys.traces
If it is not enabled, you can use the following statement to enable or disable it.
-- Enable the Default Trace
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'default trace enabled', 1;
GO
RECONFIGURE;
GO
-- Check whether it is enabled
EXEC sp_configure 'default trace enabled';
GO
-- Disable the Default Trace
sp_configure 'default trace enabled', 0;
GO
RECONFIGURE;
GO
sp_configure 'show advanced options', 0;
GO
RECONFIGURE;
GO
Run the following command to find the default path of the file to be tracked:
SELECT * FROM ::fn_trace_getinfo(0)
The preceding command returns a result set; the meaning of each value (property) is as follows:
First: the value 2 indicates rolling trace files;
Second: the path of the trace file currently in use. Based on this, we can find the other trace files; by default they are in the same directory;
Third: the maximum size of a rolling file (unit: MB). When this size is reached, a new rolling file is created;
Fourth: the stop time of the trace. NULL means there is no fixed stop time;
Fifth: the status of the current trace: 0 = stopped, 1 = running.
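As a shortcut, the second property (the current trace file path) can also be read straight from the sys.traces catalog view shown earlier; a minimal sketch:

```sql
-- Read the current default trace file path directly from sys.traces
SELECT [path]
FROM sys.traces
WHERE is_default = 1;
```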
Find the directory and view the files:
By default, the system provides five trace files, each 20 MB by default. SQL Server maintains these five files by itself: when the instance restarts or a file reaches its maximum size, a new file is generated and the earliest trace file is deleted, updating in a rolling manner.
Run the following command to view the content in the tracking file:
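The original command is not shown above; a minimal sketch of what it would look like, using fn_trace_gettable together with the path reported by sys.traces (the variable name is mine):

```sql
-- Read the default trace file as a table; DEFAULT means
-- "also read the subsequent rollover files"
DECLARE @path NVARCHAR(260);
SELECT @path = [path] FROM sys.traces WHERE is_default = 1;

SELECT * FROM fn_trace_gettable(@path, DEFAULT);
```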
The default trace file provides a wide range of trace information, from which we can find the login user, the operations performed, and so on. The above is only part of the information; with the following statement we can do our own processing and extract more useful information.
-- Obtain the first 100 rows of execution content in the trace file
SELECT TOP 100
    gt.[HostName],
    gt.[ServerName],
    gt.[DatabaseName],
    gt.[SPID],
    gt.[ObjectName],
    gt.[ObjectType]     [ObjectTypeID],
    sv.[subclass_name]  [ObjectType],
    e.[category_id]     [CategoryID],
    c.[name]            [Category],
    gt.[EventClass]     [EventID],
    e.[name]            [EventName],
    gt.[LoginName],
    gt.[ApplicationName],
    gt.[StartTime],
    gt.[TextData]
FROM fn_trace_gettable('<trace file path>', DEFAULT) gt
LEFT JOIN sys.trace_subclass_values sv
    ON gt.[EventClass] = sv.[trace_event_id]
   AND sv.[subclass_value] = gt.[ObjectType]
INNER JOIN sys.trace_events e
    ON gt.[EventClass] = e.[trace_event_id]
INNER JOIN sys.trace_categories c
    ON e.[category_id] = c.[category_id]
WHERE gt.[SPID] > 50                       -- SPIDs of 50 or less are used by the system
  AND gt.[DatabaseName] = '<DatabaseName>' -- filter by database name
  AND gt.[ObjectName] = '<ObjectName>'     -- filter by object name
  AND e.[category_id] = 5                  -- category 5 = Objects, 8 = Security
  AND e.[trace_event_id] = 46
      -- 46 = Object:Created, 47 = Object:Deleted,
      -- 93 = Log File Auto Grow, 164 = Object:Altered,
      -- 20 = Audit Login Failed
ORDER BY [StartTime] DESC
I created a table as a test. Through the tracking above, we can see the record of this action. With different filters, we can query the change information of a table in the tracked database, including: 46 Object:Created, 47 Object:Deleted, 93 Log File Auto Grow, 164 Object:Altered, and 20 Audit Login Failed.
In a production environment, the categories above are the commonly used ones, and some problems can be pinned down with sufficient evidence. For example, if a colleague deletes data from the database and refuses to admit it, the trace provides the proof; likewise, the Audit Login Failed information records a user's failed attempts to log on to the database, and if such failures happen on a large scale, we need to guard against hacker attacks.
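As a sketch of the "guard against attacks" case, the failed-logon events (event 20) can be aggregated from the default trace like this (the variable name and grouping threshold are illustrative):

```sql
-- Count failed logons per host/login from the default trace
DECLARE @path NVARCHAR(260);
SELECT @path = [path] FROM sys.traces WHERE is_default = 1;

SELECT gt.[HostName],
       gt.[LoginName],
       COUNT(*) AS FailedAttempts
FROM fn_trace_gettable(@path, DEFAULT) gt
WHERE gt.[EventClass] = 20        -- 20 = Audit Login Failed
GROUP BY gt.[HostName], gt.[LoginName]
HAVING COUNT(*) > 10              -- illustrative threshold for suspicion
ORDER BY FailedAttempts DESC;
```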
Of course, we can also use the Profiler tool that comes with SQL Server to open and view the content of the trace file.
Everyone is familiar with this graphical tool; the trace file can be opened in it directly and filtered.
This method seems good, but it has its own shortcomings. Let's look at them:
1. These five files are updated in a rolling manner; each file is at most 20 MB by default, and no interface is provided to change this. Once the files fill up, the oldest content is deleted, so content from too long ago cannot be found;