Hi everyone, it has been a long time since I wrote anything. Lately I have been playing StarCraft on Haofang and getting beaten so badly my own mother wouldn't recognize me. Forget it, I'm coming back to databases; at least there is some future here, and I can barely scrape together a little self-confidence!!!
Doing backups is like buying insurance. If you don't do it, every day feels scary; if you do it, the system runs along smoothly with a tenacious ability to regenerate, and meanwhile everyone ignores your existence. You almost want something to go wrong so you can play the savior, so everyone finally realizes you are valuable and that your salary should be xxx. So personally I feel that doing backups is a rather thankless job!!!
Microsoft SQL Server's backup mechanism is much more powerful than Sybase's. First, Sybase does not support partial recovery of a database, while SQL Server's file and filegroup backups handle this well. Sybase also struggles with the classic "accidentally deleted records" problem, because LOAD TRAN does not support specifying a point in time or a named transaction, and Sybase has no recovery models, so there is no intuitive way to match a recovery mechanism to different business requirements. I will explain these advantages of SQL Server in the text below (or perhaps in the next article).
Recovery models:
SQL Server has three recovery models: simple recovery, full recovery, and bulk-logged recovery. Each has its own characteristics and suits different enterprise backup requirements, and you can switch between them, which is also a place where it is relatively easy to go wrong.
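For reference, a minimal sketch of how you might check and switch the model (assuming a database named chj733, the one used in the examples later in this article):

-- check the current recovery model of the assumed database chj733
SELECT DATABASEPROPERTYEX('chj733', 'Recovery') AS RecoveryModel
GO
-- switch between models (this switching is exactly where the mistakes described later happen)
ALTER DATABASE chj733 SET RECOVERY SIMPLE
GO
ALTER DATABASE chj733 SET RECOVERY FULL
GO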
Simple recovery:
As its name suggests, this recovery model is the simplest. It only supports full database backups and differential backups, and it is also good at saving log space, because the log is automatically truncated at checkpoints, so the log file hardly ever grows (though it can still balloon while a large transaction is being committed). Because of the limited backup options, its risk index is also very high: if the database is damaged, you can only revert to the last full backup or the last differential backup and are powerless for the transactions after that; more precisely, the inactive portion of the transaction log has already been deleted.
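As an illustration, a small sketch under the simple model (the database name and backup paths are assumed, following the ones used later):

ALTER DATABASE chj733 SET RECOVERY SIMPLE
GO
BACKUP DATABASE chj733 TO DISK = 'e:\chj733_full.bck' WITH INIT          -- full backup: allowed
GO
BACKUP DATABASE chj733 TO DISK = 'e:\chj733_diff.bck' WITH DIFFERENTIAL  -- differential backup: allowed
GO
-- a log backup is rejected under this model, since the log is truncated at checkpoints:
-- BACKUP LOG chj733 TO DISK = 'e:\chj733_log.bck'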
Bulk-Logged Recovery:
This model looks a lot like the full recovery model discussed later, but there are still many differences. It supports full, differential, and log backups, and it provides a performance optimization for operations such as SELECT INTO, BULK INSERT, and bcp. Under the full recovery model the log records the details of those operations; under this model the log only records that such an operation happened and the details are omitted, so the operations run much faster and log space is saved. But a problem also appears: because the log contains minimally logged changes with the details ignored, the bulk-logged recovery model does not support restoring to a specific point in time, that is, the STOPAT clause cannot be specified in RESTORE LOG. The statements and error message follow:
/* Bulk-logged recovery model
RESTORE DATABASE chj733 FROM DISK = 'e:\chj733_dat.bck' WITH NORECOVERY
GO
RESTORE LOG chj733 FROM DISK = 'e:\chj733_log.bck' WITH FILE = 1, STOPAT = '2004-11-09 15:06:04.810', RECOVERY
GO

Processed 2016 pages that belong to the file 'chj733' of the database 'chj733' (located on file 1).
Processed 1 pages that belong to the file 'chj733_log' of the database 'chj733' (located on file 1).
The RESTORE DATABASE operation successfully processed 2017 pages and took 3.687 seconds (4.479 MB/sec).
Server: Msg 4327, Level 16, State 1, Line 1
The logs in this backup set contain minimal log-record changes. No point-in-time recovery is allowed. Restore rolls forward to the end of the log without restoring the database.
Processed 7152 pages that belong to the file 'chj733' of the database 'chj733' (located on file 1).
Processed 1736 pages that belong to the file 'chj733_log' of the database 'chj733' (located on file 1).
The RESTORE LOG operation successfully processed 8888 pages and took 6.668 seconds (10.919 MB/sec).
*/
Also, backing up the transaction log in bulk-logged mode requires access to the data files, and after many disasters the data files are exactly what you can no longer access, so you are likely to lose everything since the last log backup. In general, you switch the model to bulk-logged when you are about to load a large amount of data or build a large index, and switch back to the full model when the operation is complete.
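A sketch of that switching pattern (the bulk-loaded table and data file are made-up names, just to show the sequence):

ALTER DATABASE chj733 SET RECOVERY BULK_LOGGED
GO
-- run the bulk operation here, e.g. SELECT INTO, bcp, an index rebuild, or:
-- BULK INSERT chj733.dbo.BigTable FROM 'e:\bigtable.dat'
GO
ALTER DATABASE chj733 SET RECOVERY FULL
GO
-- back up the log right away so the minimally logged work is captured
BACKUP LOG chj733 TO DISK = 'e:\chj733_log.bck'
GO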
Full recovery:
The risk with this model is the smallest. The online help claims it can restore the database to any point in time, theoretically reducing the chance of data loss to zero (which is not quite the case). What the other two models cannot do, it basically can: it supports every backup and restore method, supports specifying a point in time, and the log keeps a detailed record of bulk loading operations such as bcp. These characteristics also cause some problems: the log file overhead is large and you need to dump the log frequently.
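For comparison with the failed attempt above, here is a sketch of the same point-in-time restore under the full model, reusing the same backup files and timestamp; under full recovery the STOPAT clause is accepted:

RESTORE DATABASE chj733 FROM DISK = 'e:\chj733_dat.bck' WITH NORECOVERY
GO
RESTORE LOG chj733 FROM DISK = 'e:\chj733_log.bck'
    WITH FILE = 1, STOPAT = '2004-11-09 15:06:04.810', RECOVERY
GO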
As for the log overhead, since it is the recovery model's doing, we can try loading some data with bcp under each model and watch whether the log file size changes in line with the above. It is best to shrink the log file first; if the log file already has plenty of free space, the data you load will not be enough to make it grow, you will not see the file change, and you will be back here cursing me for talking nonsense!!!
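One way to watch the log, a sketch only (the logical log file name chj733_log is assumed from the restore output above):

DBCC SQLPERF(LOGSPACE)                              -- log size and percent used for every database
GO
BACKUP LOG chj733 TO DISK = 'e:\chj733_log.bck'     -- empty the inactive log first (full model)
DBCC SHRINKFILE (chj733_log)                        -- shrink so later growth is visible
GO
-- load the data with bcp / BULK INSERT under each recovery model, then check again:
DBCC SQLPERF(LOGSPACE)
GO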
Backup types:
Setting file backups aside, SQL Server has three backup types: full backup, differential backup, and log backup. You must first understand where each backup starts, where it ends, and what is inside it.
A full backup is a complete copy of the database: all the objects in the database, the transaction log, and so on.
A differential backup starts after the last full backup and copies all pages modified since then. Note that it starts after the last full backup, so when restoring you only need to apply the most recent differential backup, not all of them (of course you can apply the others too if you insist).
A log backup contains all the transaction records since the last log backup, remember, not since the last full backup. When transaction log backups are applied, all the transactions are rolled forward, the records must be contiguous, and transactions without a commit record are rolled back. A sketch of all three types follows below.
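Here is the promised sketch of the three types, with assumed paths; at restore time only the latest differential and an unbroken chain of log backups are needed:

BACKUP DATABASE chj733 TO DISK = 'e:\chj733_dat.bck' WITH INIT           -- full: everything, including the log
GO
BACKUP DATABASE chj733 TO DISK = 'e:\chj733_diff.bck' WITH DIFFERENTIAL  -- differential: pages changed since the last full backup
GO
BACKUP LOG chj733 TO DISK = 'e:\chj733_log.bck'                          -- log: transactions since the last log backup
GO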
These cliches should be easy to understand, and at the very least you should be able to recite them fluently in an interview. Now let's look at a fairly common mistake.
Question 1:
/*
Server: Message 4305, Level 16, State 1, line 2
The log in this backup set starts at LSN 641000000005900001, which is too late to be applied to the database. An older log backup that contains LSN 641000000005600001 can be restored.
Server: Message 3013, Level 16, State 1, line 2
The RESTORE LOG operation terminated abnormally.
*/
Anyone who has done backups will recognize this message: when applying a full backup + log backup to restore the database, only the database backup can be applied, and the log backup cannot be applied because its LSN is too early or too late. What is this about? An LSN is the unique sequence number of a transaction log record. SQL Server logs every operation against the database on a first-come, first-served basis, and the LSN is the sequence number the system assigns to each record. When log backups are restored, stacking all the backup sets must produce a continuous LSN chain, and that is where the problem lies (if you do not understand these log concepts, read an article on the essentials of the SQL Server log, or see BOL). Inspecting the backup files gives the answer:
RESTORE HEADERONLY FROM DISK = 'e:\chj733_dat.bck'   -- database backup
RESTORE HEADERONLY FROM DISK = 'e:\chj733_log.bck'   -- log backup
Results (I only list a few of the more useful columns):

Position   FirstLSN               LastLSN
1          641000000005400001    641000000005600001    -- data backup
1          641000000005900001    641000000006100001    -- log backup
Position represents the backup set's position within the device. Didn't I say you can back up to the same device multiple times? Each backup generates a backup set, they are numbered in order, and Position lets you locate the backup set you want to apply; it corresponds to the value of WITH FILE in RESTORE. Here I only backed up once, so there is only one set.
FirstLSN and LastLSN identify the starting and ending log sequence numbers of this backup set, respectively.
When you use these two backups to restore the database, the system reads the backup set header information to determine whether the LSN chain is continuous. Clearly the data backup ends at 641000000005600001 while the log backup begins at 641000000005900001; there is a gap in the middle, so every log backup from this one onward is useless, just like a train: if a carriage in front is missing, the ones behind cannot run no matter how good they are. You will also find that sometimes a log backup's FirstLSN is less than the data backup's LastLSN; this is consistent with the statement that a log backup starts from the end of the previous log backup. However, the LastLSN of a log backup cannot be less than the LastLSN of the data backup.
The operation that most easily produces this kind of broken log chain is switching recovery models, from simple to full. Many beginners do exactly this: a few days after building the database they make a full backup, then try a log backup and get an error saying the simple model cannot do log backups, so they switch to the full model and continue with log backups, and the log chain ends up broken. The solution is to run RESTORE HEADERONLY after backing up to check whether the log chain is complete; if it is not, redo a full or differential backup and then continue with log backups. If you only discover the problem after the database has failed, you may as well pack up and leave.
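A sketch of that fix, restarting the chain after switching models (paths assumed as before):

ALTER DATABASE chj733 SET RECOVERY FULL
GO
BACKUP DATABASE chj733 TO DISK = 'e:\chj733_dat.bck' WITH INIT   -- a new full backup restarts the chain
GO
BACKUP LOG chj733 TO DISK = 'e:\chj733_log.bck' WITH INIT        -- log backups can now follow
GO
-- verify: the data backup's LastLSN should meet the log backup's FirstLSN
RESTORE HEADERONLY FROM DISK = 'e:\chj733_dat.bck'
RESTORE HEADERONLY FROM DISK = 'e:\chj733_log.bck'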
This is the kind of thing where the details matter; how would you know the ins and outs without having actually done it? I remember a colleague once told me, "Backup is simple, it's just a few LOAD statements, just remember to run them." I have never said backup is hard, but I am sure that if you asked him to do it, it would be strange if he didn't end up crying for his mom and dad!!! And if you have only run the various statements smoothly against a 50 MB database like pubs, you can tell everyone you know how to write backup statements, but do not say you know how to do backups; with a database of more than 900 GB the situation is completely different. It is like StarCraft: any veteran can tell you how to build units, what the build order is, and what each unit's attack range is, but scouting the enemy quickly, fending off harassment, and at the same time producing a sizable, flexible mix of units to handle all kinds of battles is the hard part.
The final topic:
Some people may not understand the WITH FILE clause after the RESTORE statement, or are confused by the concepts of backup sets, media families, and media sets. Actually you do not need to memorize these concepts; you just need to know what form the backup takes. When backing up to a device you can append the backup or overwrite the previous backup; each time you append, the device holds one more "file", and that file is a backup set, so WITH FILE = 2 should make sense to you now.
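A closing sketch of appended backup sets on one device and picking one with WITH FILE (the device path is made up):

BACKUP DATABASE chj733 TO DISK = 'e:\chj733_all.bck' WITH INIT    -- overwrite the device: backup set 1
GO
BACKUP LOG chj733 TO DISK = 'e:\chj733_all.bck' WITH NOINIT       -- append: backup set 2
GO
RESTORE HEADERONLY FROM DISK = 'e:\chj733_all.bck'                -- the Position column lists the sets
GO
RESTORE DATABASE chj733 FROM DISK = 'e:\chj733_all.bck' WITH FILE = 1, NORECOVERY
RESTORE LOG chj733 FROM DISK = 'e:\chj733_all.bck' WITH FILE = 2, RECOVERY
GO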