A SQL Server database has three file types: primary data files (MDF), secondary data files (NDF), and log files (LDF). The log file contains all the log records needed to recover the database. SQL Server always writes to the log file (LDF) first; the corresponding changes to the data file (MDF) are flushed later and can lag behind. The speed of log writes therefore determines, to a large extent, how many write transactions SQL Server can sustain. So how large is an LDF write?
To find out how large SQL Server's log writes are, we can use the tool Process Monitor.
Here a filter is configured so that only SQL Server log writes are captured, as shown in the screenshot:
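The original screenshot of the filter is not reproduced here; a typical Process Monitor filter for this purpose would look roughly like the following (the exact values are assumptions and should be adjusted for your instance):

Process Name    is           sqlservr.exe    then Include
Operation       is           WriteFile       then Include
Path            ends with    .ldf            then Include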
The results in an OLTP production environment are analyzed as follows:
From the captured trace you can see that the writes to this userdb.ldf are sequential and that the write size is mostly 1,024 bytes.
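As a rough cross-check (not part of the original walkthrough), the average log-write size can also be estimated from inside SQL Server with sys.dm_io_virtual_file_stats. Note that this only gives an average since the last restart, not the per-write distribution that Process Monitor shows:

-- Average log-write size per database log file since the last SQL Server restart
SELECT  DB_NAME(vfs.database_id) AS database_name,
        mf.name AS logical_file_name,
        vfs.num_of_writes,
        vfs.num_of_bytes_written,
        vfs.num_of_bytes_written * 1.0 / NULLIF(vfs.num_of_writes, 0) AS avg_write_size_bytes
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN    sys.master_files AS mf
        ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
WHERE   mf.type_desc = 'LOG'
ORDER BY avg_write_size_bytes DESC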
Given these results, how should we choose disks and RAID for the LDF write workload? It is recommended to use the SQLIO tool to benchmark single-threaded sequential 1 KB writes, which simulates log writes; this will be tested in a later article.
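As a sketch of what such a test might look like (the test file path, duration, and exact switches here are assumptions, and the detailed benchmark is left to the later article), a single-threaded sequential 1 KB write run with SQLIO could be invoked along these lines:

REM Sketch only: 1 KB sequential writes, 1 thread, 1 outstanding I/O, 60 seconds,
REM reporting latency; assumes D:\testfile.dat has been created beforehand
REM (for example via a SQLIO parameter file)
sqlio -kW -t1 -o1 -fsequential -b1 -s60 -LS D:\testfile.dat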
So must SQL Server log writes to the LDF always be 1,024 bytes? Not necessarily; the following results were captured on a different OLTP system:
And here are the results for tempdb:
So how large are the log writes on your own system? You can measure them yourself with Process Monitor.
Capture the writes with Process Monitor and save the collected data as Logfile.csv.
The following statements are then used for analysis:
USE tempdb
GO

-- Table to hold the Process Monitor export
CREATE TABLE [dbo].[Logfile]
(
    [Time of Day]  [nvarchar](MAX) NULL,
    [Process Name] [nvarchar](MAX) NULL,
    [PID]          [nvarchar](MAX) NULL,
    [Operation]    [nvarchar](MAX) NULL,
    [Path]         [nvarchar](MAX) NULL,
    [Result]       [nvarchar](MAX) NULL,
    [Detail]       [nvarchar](MAX) NULL
) ON [PRIMARY]
GO

-- Load the CSV exported from Process Monitor (skip the header row)
BULK INSERT [Logfile]
FROM 'D:\Tmp\Logfile.CSV'
WITH (FIELDTERMINATOR = ',', FIRSTROW = 2)

-- Inspect the raw trace rows (the database name is masked in the output)
SELECT  [Time of Day],
        [Process Name],
        PID,
        Operation,
        REPLACE(PATH, 'DBname', 'myDB') AS Path,
        Result,
        Detail
FROM    [Logfile]
--WHERE PATH LIKE '%DBname%'

-- Extract the write size (the 'Length:' value) from the Detail column
SELECT  SUBSTRING(detail, CHARINDEX('Length:', detail) + 7,
                  CHARINDEX('Flags', detail) - CHARINDEX('Length:', detail) - 13) AS size
INTO    #
FROM    [Logfile]
WHERE   PATH LIKE '%HighEndSeekerDB%'

-- Distribution of write sizes: count and percentage per size
SELECT  size AS [SIZE (Byte)],
        COUNT(*) AS CNT,
        LTRIM(CAST(COUNT(*) * 1.0 / (SELECT COUNT(*) FROM #) * 100 AS NUMERIC(18, 4))) + '%' AS ratio
FROM    #
GROUP BY size
--ORDER BY COUNT(*) DESC
--ORDER BY CAST(REPLACE(size, ',', '') AS BIGINT)
ORDER BY ratio DESC
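To make the SUBSTRING arithmetic easier to follow, here is a small self-contained check against a sample Detail value. The Detail text below is only an illustration of Process Monitor's typical WriteFile detail format, not a row from the captured trace:

-- Illustrative Detail value (assumed format, not taken from the actual trace)
DECLARE @detail NVARCHAR(MAX) =
    N'Offset: 8,192, Length: 1,024, I/O Flags: Non-cached, Write Through, Priority: Normal';

-- Same expression as in the analysis query above:
-- start just after 'Length:' and stop before 'Flags', trimming the trailing ', I/O '
SELECT SUBSTRING(@detail, CHARINDEX('Length:', @detail) + 7,
                 CHARINDEX('Flags', @detail) - CHARINDEX('Length:', @detail) - 13) AS size
-- Returns ' 1,024', still containing the thousands separator, which is why the
-- commented-out ORDER BY uses REPLACE(size, ',', '') before casting to BIGINT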