Possible causes: operations such as statistics gathering and sorting are being performed too heavily or too often. Solution: reduce those operations or optimize the statements.
In Enterprise Manager, open the tempdb database properties, and in the [Transaction log] file settings check the box in front of [Automatically grow file]. Also make sure the disk holding the log file has enough free space; if it does not, back the log up and move it to another location.
[Note: replace tempdb with your own database name.]
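Before changing anything, it can help to check how full each log actually is. A quick check using standard DBCC commands (the logical log file name templog below is the usual default for tempdb and is only an assumption; verify it with SELECT name FROM tempdb..sysfiles):
--Show log size and percent used for every database
DBCC SQLPERF (LOGSPACE)
--Enable automatic growth for the tempdb log file (10% growth is just an example value)
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILEGROWTH = 10%)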
1. Empty the log:
DUMP TRANSACTION database_name WITH NO_LOG
2. Shrink the database files (if they are not shrunk, the files will not get smaller):
Enterprise Manager--right-click the database you want to shrink--All Tasks--Shrink Database--Shrink Files
--Select the log file--for the shrink action choose "Shrink file to XX MB"; the dialog shows the minimum size (in MB) it can shrink to, enter that number and click OK
--Select the data file--for the shrink action choose "Shrink file to XX MB"; again enter the minimum size the dialog allows and click OK (a T-SQL version is sketched just below)
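The same two steps can also be run from Query Analyzer. A minimal sketch, assuming a database named pubs whose log file has the logical name pubs_log (check the real name with SELECT name FROM sysfiles):
--Empty the log
DUMP TRANSACTION pubs WITH NO_LOG
--Shrink the log file of the current database
USE pubs
DBCC SHRINKFILE (pubs_log)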
First, here is a more involved method for shrinking the log and the database files:
1. Empty the log:
DUMP TRANSACTION database_name WITH NO_LOG
2. Truncate the transaction log:
BACKUP LOG database_name WITH NO_LOG
3. Shrink the database files (if they are not shrunk, the files will not get smaller):
Enterprise Manager--right-click the database you want to shrink--All Tasks--Shrink Database--Shrink Files
--Select the log file--for the shrink action choose "Shrink file to XX MB"; the dialog shows the minimum size (in MB) it can shrink to, enter that number and click OK
--Select the data file--for the shrink action choose "Shrink file to XX MB"; again enter the minimum size the dialog allows and click OK
You can also do this with SQL statements:
--Shrink the database
DBCC SHRINKDATABASE (database_name)
--Shrink a specific data file; 1 is the file ID, which you can look up with: SELECT * FROM sysfiles
DBCC SHRINKFILE (1)
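DBCC SHRINKFILE also accepts the logical file name and a target size in MB, which is often easier to read than the file number. A hedged example, assuming a log file with the logical name pubs_log that you want shrunk to about 2 MB:
DBCC SHRINKFILE (pubs_log, 2)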
4. To shrink the log file as much as possible (on SQL Server 7.0 this step can only be done in Query Analyzer):
A. Detach the database:
Enterprise Manager--Server--Databases--right-click--Detach Database
B. Delete the log file in My Computer
C. Attach the database:
Enterprise Manager--Server--Databases--right-click--Attach Database
This method creates a new log file only a little over 500 KB in size.
Or with code:
The following example detaches pubs and then attaches the pubs data file to the current server.
A. Detach:
EXEC sp_detach_db @dbname = 'pubs'
B. Delete the log file
C. Attach:
EXEC sp_attach_single_file_db @dbname = 'pubs',
@physname = 'C:\Program Files\Microsoft SQL Server\MSSQL\Data\pubs.mdf'
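After attaching, you can confirm that a new, small log file was created. A quick check (pubs here simply matches the example above):
USE pubs
EXEC sp_helpfile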
5. To have the database shrink automatically in the future, set the following:
Enterprise Manager--Server--right-click the database--Properties--Options--check "Auto shrink"
--Or set it with a SQL statement:
EXEC sp_dboption 'database_name', 'autoshrink', 'TRUE'
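To verify that the option took effect, you can query the database property (DATABASEPROPERTYEX is available from SQL Server 2000; 'pubs' is only an example name):
SELECT DATABASEPROPERTYEX ('pubs', 'IsAutoShrink')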
6. If you want to keep the log from growing too large:
Enterprise Manager--Server--right-click the database--Properties--Transaction Log
--Restrict file growth to X MB (X is the maximum size you allow for the log file)
--Or with a SQL statement:
ALTER DATABASE database_name MODIFY FILE (NAME = logical_file_name, MAXSIZE = 20)
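As a concrete illustration of the statement above, assuming the log's logical file name is pubs_log and you want to cap it at 200 MB (both names and the size are placeholders, not values from this article):
ALTER DATABASE pubs MODIFY FILE (NAME = pubs_log, MAXSIZE = 200MB)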
Special notes:
Follow the steps in order; do not do a later step without completing the earlier ones.
Otherwise you may damage your database.
Steps 4 and 6 are generally not recommended.
Step 4 is unsafe; it can corrupt the database or cause data loss.
With step 6, once the log reaches its size limit, subsequent database operations will fail, and they will not succeed again until the log is cleared.
In addition, here is a simpler method that I have tried and recommend (a T-SQL version follows the three steps):
1. Right-click the database--Properties--Options--set the recovery model to Simple
2. Right-click the database--All Tasks--Shrink Database
3. Right-click the database--Properties--Options--set the recovery model back to Bulk-Logged
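If you prefer to run the simpler method from Query Analyzer instead of Enterprise Manager, a sketch using SQL Server 2000 syntax (pubs is only an example database name):
--1. Switch to the Simple recovery model
ALTER DATABASE pubs SET RECOVERY SIMPLE
--2. Shrink the database
DBCC SHRINKDATABASE (pubs)
--3. Switch the recovery model back to Bulk-Logged
ALTER DATABASE pubs SET RECOVERY BULK_LOGGED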