SQL Server: a record of stepping on a pit with memory-optimized tables in an AlwaysOn environment
Preface
Recently, memory-optimized tables were used in a database running in our production AlwaysOn environment. After about a week, the monitoring program found a serious problem: the log file of this database was never truncated and its used space kept growing, even though a log backup runs every hour. At the same time, the memory-optimized data file could not be deleted. Below is the process I went through to handle it.
Database: SQL Server 2014 Enterprise Edition (64-bit)
Deleting the object
The tests below use a database named test that is outside the AlwaysOn environment.
I. Create the memory-optimized filegroup and file
--- Create a memory-optimized filegroup
ALTER DATABASE [test] ADD FILEGROUP [test_ag] CONTAINS MEMORY_OPTIMIZED_DATA
GO
--- Create the memory-optimized data file (container)
ALTER DATABASE [test] ADD FILE (NAME = 'test_memory', FILENAME = 'D:\database\memory') TO FILEGROUP [test_ag];
GO
II. Delete the memory-optimized database file
USE [test]
GO
ALTER DATABASE [test] REMOVE FILE [test_memory]
GO
Note: no table has even been created yet, and the database file still cannot be deleted. Next, I tried the deletion method from the official documentation.
III. The official deletion method
Even after a DBCC SHRINKFILE operation has emptied a FILESTREAM container, the database may still need to retain references to the deleted files for various system-maintenance reasons. sp_filestream_force_garbage_collection (Transact-SQL) runs the FILESTREAM garbage collector, which removes these files when it is safe to do so. An ALTER DATABASE ... REMOVE FILE operation cannot remove a FILESTREAM container and returns an error unless the garbage collector has already deleted all files from the container. The recommended procedure for removing a FILESTREAM container is as follows.
1. Run DBCC SHRINKFILE (Transact-SQL) with the EMPTYFILE option to move the active contents of this container to other containers.
USE test;
GO
-- Create a data file and assume it contains data.
ALTER DATABASE test
ADD FILE (
    NAME = Test1data,
    FILENAME = 'D:\database\t1data.ndf',
    SIZE = 5MB
);
GO
-- Empty the data file.
DBCC SHRINKFILE (test_memory, EMPTYFILE);
GO
2. Make sure log backups have been taken under the FULL or BULK_LOGGED recovery model.
3. Make sure the replication Log Reader job is running (if relevant).
The log_reuse_wait_desc status shows that the current database is not waiting on a log backup; I had, of course, already taken log backups.
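You can confirm what the log is waiting on with a query like the following (a minimal sketch; the database name test is the one used in this walkthrough):

```sql
-- Check the recovery model and why the transaction log cannot be truncated
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'test';
```

A value of NOTHING in log_reuse_wait_desc means nothing is blocking truncation; LOG_BACKUP means a log backup is needed.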
4. Run sp_filestream_force_garbage_collection (Transact-SQL) to force the garbage collector to delete any files in this container that are no longer needed.
USE [test]
GO
EXEC sp_filestream_force_garbage_collection @dbname = N'test', @filename = N'test_memory';
5. Run ALTER DATABASE with the REMOVE FILE option to delete the container.
USE [test]
GO
ALTER DATABASE [test] REMOVE FILE [test_memory]
GO
The file still cannot be deleted!
IV. Problem analysis
At first I tried deleting the file in the AlwaysOn environment and got an error saying it could not be deleted on a replica. I then tested in a non-AlwaysOn database, and the file still could not be deleted. Initially I suspected the cause was having created a memory-optimized table, but further testing showed that merely creating the filegroup and file, then trying to delete the file, fails in the same way. I guessed the buffer pool might be holding objects related to the memory-optimized file, but running DBCC DROPCLEANBUFFERS did not clear them and the file still could not be deleted. In the end I had to give up. Since the production environment could not stay in this state, the only remaining option, the one I least wanted, was to script out the table structures, export the data, and rebuild the database.
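For reference, this is the buffer-clearing sequence I attempted, sketched from the description above; neither statement released the file:

```sql
-- Flush dirty pages to disk first, then drop clean pages from the buffer pool
CHECKPOINT;
GO
DBCC DROPCLEANBUFFERS;
GO
-- The container removal still fails after this
ALTER DATABASE [test] REMOVE FILE [test_memory];
GO
```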
Generate scripts to recreate the database
Create a new database while keeping the current one available (rename the current database and give the new one the old name, so the application does not need to change); that way you can switch back quickly if any problem occurs.
The procedure is as follows (performed during an approved maintenance window):
1. Disable all related jobs
2. Disable the application login
At the same time, make sure all in-flight transactions have completed.
ALTER LOGIN [test] DISABLE
GO
USE [master]
GO
-- Set the database to single-user mode and roll back current connections
ALTER DATABASE [test] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- Keep this connection in the database to prevent other users from connecting
USE [test];
GO
3. Run CHECKPOINT to flush all dirty pages
CHECKPOINT
GO
--- Return, for each database, its total buffer pool usage and its dirty-page usage
;WITH CTE1 AS (
    SELECT COUNT(*) * 8 / 1024 AS dirty_cached_size_MB,
           COUNT(*) AS dirty_pages,
           CASE database_id WHEN 32767 THEN 'ResourceDb' ELSE DB_NAME(database_id) END AS database_name
    FROM sys.dm_os_buffer_descriptors
    WHERE is_modified = 1
    GROUP BY DB_NAME(database_id), database_id
),
CTE2 AS (
    SELECT COUNT(*) * 8 / 1024 AS cached_size_MB,
           COUNT(*) AS pages,
           CASE database_id WHEN 32767 THEN 'ResourceDb' ELSE DB_NAME(database_id) END AS database_name
    FROM sys.dm_os_buffer_descriptors
    GROUP BY DB_NAME(database_id), database_id
)
SELECT CTE2.database_name,
       CTE2.cached_size_MB,
       -- CTE2.pages,
       CTE1.dirty_cached_size_MB
       -- CTE1.dirty_pages
FROM CTE1
INNER JOIN CTE2 ON CTE1.database_name = CTE2.database_name
GO
--- Set the database back to multi-user access
ALTER DATABASE [test] SET MULTI_USER;
4. Generate database scripts
5. Rename the old database
Note: if the database belongs to an AlwaysOn availability group, you must first remove it from the availability group; otherwise the database cannot be renamed.
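Removing the database from the availability group can be done on the primary replica with a statement like the following (a sketch; the availability group name AG_test is a placeholder for your own group's name):

```sql
-- Run on the primary replica; secondary copies revert to RESTORING state
ALTER AVAILABILITY GROUP [AG_test] REMOVE DATABASE [test];
```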
/*
1. Disconnect all database connections and prohibit new connections.
2. Disable user logins and set the database to single-user mode.
*/
---- 1. Take the database offline
USE [master]
ALTER DATABASE [test] SET OFFLINE WITH ROLLBACK IMMEDIATE;
---- 2. Manually rename the physical database files, e.g. test.mdf to test_old.mdf
---- 3. Point the database at the renamed files
USE [master]
ALTER DATABASE [test] MODIFY FILE (NAME = test, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\test_old.mdf');
GO
ALTER DATABASE [test] MODIFY FILE (NAME = test_log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\test_old_log.ldf');
GO
---- 4. Bring the database back online
USE [master]
ALTER DATABASE [test] SET ONLINE
---- 5. Rename the logical file names
USE [test]
GO
ALTER DATABASE [test] MODIFY FILE (NAME = N'test', NEWNAME = N'test_old')
GO
ALTER DATABASE [test] MODIFY FILE (NAME = N'test_log', NEWNAME = N'test_old_log')
GO
---- 6. Rename the database
USE [master]
EXEC sp_renamedb N'test', N'test_old';
---- 7. Verify
SELECT * FROM sys.master_files WHERE database_id = DB_ID('test_old');
6. Create a new database and import the script to the new database.
If the script includes both table structures and data, it may be too large for SSMS to execute; in that case, use sqlcmd to run the script. Alternatively, export only the table structures and then synchronize the data with the "Export Data / Import Data" wizard.
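A typical sqlcmd invocation for a large script looks like this (a sketch; the server name, database name, and file paths are placeholders):

```
sqlcmd -S localhost -d test -E -i "D:\script\test_script.sql" -o "D:\script\import_result.log"
```

Here -E uses Windows authentication, -i names the input script, and -o writes the output to a log file you can check for errors afterward.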
Note: if you synchronize data with the "Export Data / Import Data" wizard, check "Enable identity insert".
7. Others
1. If AlwaysOn is in use, add the new database to the availability group.
2. Add the new database to the backup job.
3. Check whether the number of tables in the new and old databases is the same.
4. Configure new database permissions for the login user.
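A quick way to check item 3 above, comparing the user-table counts of the old and new databases (a sketch using the names test_old and test from this walkthrough):

```sql
-- Compare the number of user tables in the old and new databases
SELECT 'test_old' AS db, COUNT(*) AS table_count FROM [test_old].sys.tables
UNION ALL
SELECT 'test', COUNT(*) FROM [test].sys.tables;
```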
Summary
Memory-optimized tables are a new feature introduced in SQL Server 2014, so be cautious with the first release of any new feature, especially in production. We did test before going live, but backup-and-restore testing is easy to overlook when there is no production-like environment. Fortunately, this was a new project with a small data volume and low concurrency, and holiday maintenance was allowed; on a large system, exporting and importing the data, and above all the length of the downtime, would have been a real headache. Having stepped on this pit in production myself, I hope this article helps others avoid it.
Note: in SQL Server 2014, memory-optimized tables cannot be synchronized to secondary replicas in AlwaysOn, which greatly reduces their usefulness. SQL Server 2016 can synchronize them to secondary replicas, so if you need this feature we recommend going straight to 2016.
That is all for this article. I hope it has some reference value for your study or work. If you have any questions, please leave a comment. Thank you for your support.