1. char() is a fixed-length type. For example, char(20) storing 'Microsoft' (nine characters) still occupies the full 20 characters, with trailing spaces as padding; if you are not sure how long the data will be, the padding wastes space.
varchar() is variable-length: storing 'Microsoft' takes only the nine characters, with no padding. nvarchar() is more capable than varchar() because it stores Unicode, so it can actually hold more kinds of characters.
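A quick sketch of the difference (throwaway variables, runnable anywhere):

declare @c char(20), @v varchar(20), @n nvarchar(20)
select @c = 'Microsoft', @v = 'Microsoft', @n = N'Microsoft'
select datalength(@c) as char_bytes,     -- 20: padded to the declared length with spaces
       datalength(@v) as varchar_bytes,  -- 9: only the characters actually stored
       datalength(@n) as nvarchar_bytes  -- 18: Unicode uses two bytes per character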
Methods for querying data of the xml type: the query() method extracts XML from an xml value; the value() method returns a single value from the XML document; the exist() method determines whether a specified node exists in the XML document.
Which of these methods is more efficient? Each method has its own purpose; a sketch of all three follows.
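A minimal sketch of the three methods (the XML document here is made up for illustration):

declare @x xml
set @x = '<book><title>T-SQL</title><price>30</price></book>'
select @x.query('/book/title')              -- query(): returns an XML fragment
select @x.value('(/book/price)[1]', 'int')  -- value(): returns one scalar value
select @x.exist('/book/author')             -- exist(): 1 if the node exists, otherwise 0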
The biggest difference between a stored procedure and a plain T-SQL statement is that the stored procedure's plan is compiled once and cached for reuse, so it does not need to be compiled again later, which is fast; an ad hoc T-SQL statement may be compiled every time it is executed, which wastes a lot of time and is slow.
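A trivial sketch (the procedure name and the dbo.Orders table are hypothetical): after the first execution, the procedure's plan sits in the plan cache and is reused.

create procedure usp_GetOrders @CustomerId int
as
    select * from dbo.Orders where CustomerId = @CustomerId
go
exec usp_GetOrders @CustomerId = 42  -- later executions reuse the cached plan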
2. master: contains the system table set and is the central repository of the entire instance. It maintains login accounts, other databases, file allocation, system configuration settings, disk space, resource consumption, endpoints, and linked servers. It records the initialization information of SQL Server 2005, so it is extremely important to the instance. After creating, modifying, or deleting a user database, changing the server or any database configuration, or modifying or adding a login account, master must be backed up.
model: the template for all new databases in a SQL Server 2005 instance. When the CREATE DATABASE command executes, SQL Server 2005 simply copies the contents of the template database into the new database. If you want every newly created database to contain certain tables, stored procedures, database options, and permissions, you can add those elements to the model database, and they will then be present whenever a new database is created; see the example below.
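For example (the audit table is hypothetical), anything created in model appears in every database created afterwards:

use model
go
create table dbo.AuditLog (Id int identity primary key, LoggedAt datetime default getdate())
go
create database NewDb  -- NewDb is copied from model, so it already contains dbo.AuditLog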
msdb: this database is used by SQL Server Agent to store scheduled jobs and the history of backup and restore operations. It needs to be backed up after automated jobs or DTS packages are modified.
tempdb: an instance-wide temporary workspace in SQL Server 2005, used for all kinds of operations.
resource: a read-only database newly introduced in SQL Server 2005. It contains all the system objects, such as system stored procedures, system extended stored procedures, and system functions.
When the master database is damaged and no usable backup is available, you can use the rebuild-database option of Setup to restore it to the state it had right after the instance was installed. This should be done only after careful consideration, because it wipes out all server-wide configuration, including logins, and everything has to be redone from scratch.
3. Uses of tempdb: 1) storing local and global temporary tables, regardless of database context;
2) work tables for ORDER BY clauses, cursors, GROUP BY clauses, and hash plans;
3) explicitly created temporary objects, such as stored procedures, cursors, tables, and table variables;
4) when snapshot isolation is enabled, recording the row versions of all updates;
5) holding intermediate sort results when an index is created or rebuilt with SORT_IN_TEMPDB.
If tempdb grows abnormally, the likely cause is that some database is using tempdb heavily and creating a bottleneck. This is rare. You can consider installing multiple SQL Server 2005 instances on the same server and splitting the databases across those instances; because each instance has its own tempdb, this effectively spreads the tempdb load.
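To see what is actually consuming tempdb before going that far, a DMV query along these lines helps (the DMV exists from SQL Server 2005 on):

select sum(user_object_reserved_page_count)     as user_object_pages,     -- temp tables, table variables
       sum(internal_object_reserved_page_count) as internal_object_pages, -- sorts, hash work tables
       sum(version_store_reserved_page_count)   as version_store_pages,   -- row versioning
       sum(unallocated_extent_page_count)       as free_pages
from tempdb.sys.dm_db_file_space_usage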
4. Clustered index: the data must be stored in key order.
Nonclustered index: the data can be stored unordered.
Index with included columns: an index that carries non-key columns in addition to the index key.
Indexed view: see question 5.
Full-text index: used for pattern searches in text strings.
XML index: used for fast access to XML data.
Advantage of indexes: fast queries. Disadvantage: indexes take time to create and slow down data modification.
How to create an appropriate index for an SQL statement: the columns used in sorting, aggregate functions, grouping, join queries, and WHERE conditions are the usual candidates for indexing; a sketch follows.
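A sketch of such an index (the table and columns are hypothetical): the WHERE/join column becomes the key, and the other columns the query reads become included columns:

-- Assumed: queries filter on CustomerId and also read OrderDate and Amount
create nonclustered index IX_Orders_CustomerId
on dbo.Orders (CustomerId)
include (OrderDate, Amount)  -- covering the query avoids key lookups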
What should you pay attention to when creating indexes? I was asked this in an interview!!!!!!
Whether an index you create actually gets used depends on how the optimizer weighs it against the other indexes and the clustered index; the optimizer then decides whether to use it, because an index is only a secondary structure and is chosen only by the optimizer. You can use DBCC statistics commands or the index usage statistics to determine which indexes are used and which are not, and improve query performance accordingly; search online for the specifics.
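On SQL Server 2005, one way to check this is the index usage DMV; a sketch (my suggestion, not necessarily the exact command the original text had in mind):

-- How often each index in the current database has been read or written since instance start
select object_name(s.object_id) as table_name, i.name as index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
from sys.dm_db_index_usage_stats s
join sys.indexes i on i.object_id = s.object_id and i.index_id = s.index_id
where s.database_id = db_id()
order by s.user_seeks + s.user_scans + s.user_lookups  -- never-read indexes sort first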
Fragmented indexes can be reorganized or rebuilt;
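The two operations look like this (the index and table names are illustrative):

alter index IX_Orders_CustomerId on dbo.Orders reorganize  -- lightweight, always online
alter index IX_Orders_CustomerId on dbo.Orders rebuild     -- full rebuild; WITH (ONLINE = ON) needs Enterprise Edition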
Optimizing indexes: you usually need to know the application very well. Index tuning often starts from Profiler trace output, handling the SQL statements with the most reads first. Because the primary benefit of an index is reducing the I/O operations needed to fetch data, starting with the statements that perform many reads is usually very effective.
5. You can create an index on a view. The benefit is improved query performance; the disadvantage is that index maintenance overhead increases correspondingly.
Differences: 1) the first index created on the view must be a unique clustered index; 2) the view must be defined with the SCHEMABINDING option, which binds the view to the schema of the underlying base tables; 3) the base tables referenced by the view must be in the same database as the view and have the same owner as the view; 4) tables and user-defined functions referenced by the view must be referenced by two-part names; one-part, three-part, and four-part names are not allowed. You should also know how to create an index on a table.
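A minimal sketch that satisfies those rules (the view and table names are hypothetical, and Amount is assumed NOT NULL):

create view dbo.vOrderTotals
with schemabinding                    -- rule 2: schema binding is required
as
select CustomerId,
       count_big(*) as OrderCount,    -- count_big(*) is required when the view uses group by
       sum(Amount)  as TotalAmount
from dbo.Orders                       -- rule 4: two-part name
group by CustomerId
go
create unique clustered index IX_vOrderTotals on dbo.vOrderTotals (CustomerId)  -- rule 1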
6.
My msdb database does not have all the tables;
sys.all_columns, sys.all_objects, sys.columns;
SET STATISTICS IO
SET STATISTICS TIME
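Typical usage:

set statistics io on    -- reports logical/physical reads per table
set statistics time on  -- reports compile and execution CPU and elapsed time
select count(*) from sys.all_objects  -- any query under test
set statistics io off
set statistics time off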
7. Personally, I would keep the 20 million rows locally first, then export them to the other server and delete them.
9. Usage of temporary tables and table variables
A. If a statement is complex and involves too many joins, consider using temporary tables and table variables to execute it step by step.
B. If you need to use the same subset of a large table's data several times, use a temporary table or table variable to hold that data.
C. If you need to combine data from several tables into one result, you can use temporary tables and table variables to accumulate the data from those tables step by step.
D. In other cases, the use of temporary tables and table variables should be restrained.
E. When choosing between a temporary table and a table variable, the main consideration is the amount of data to be stored; when the volume is large, temporary tables are usually faster.
F. A temporary table can be populated with SELECT INTO or with CREATE TABLE + INSERT INTO. In general, SELECT INTO is much faster, but it locks the tempdb system tables sysobjects, sysindexes, and syscolumns, so in a multi-user concurrent environment it easily blocks other processes. Therefore, in a concurrent system, prefer CREATE TABLE + INSERT; for a single statement over a large data volume, use SELECT INTO (both are sketched after this list).
G. Pay attention to collations. If a temporary table is created with CREATE TABLE and no collation is specified for its character fields, the fields take the default collation of tempdb.
A CTE can play a role similar to a temporary table, but it only packages a complex statement; it does not store the data.
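A sketch of the options from points F and G and the CTE note (dbo.Orders is hypothetical):

-- select into: fast, but briefly locks tempdb system tables while it creates #t1
select * into #t1 from dbo.Orders where OrderDate >= '20090101'

-- create table + insert: friendlier under concurrency; pin the collation explicitly (point G)
create table #t2 (CustomerName varchar(100) collate database_default)
insert into #t2 select CustomerName from dbo.Orders

-- CTE: stores nothing, it just names a complex subquery for the one statement that follows
;with recent as (select * from dbo.Orders where OrderDate >= '20090101')
select CustomerId, count(*) as cnt from recent group by CustomerId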
10. Read uncommitted, read committed (the default), read committed snapshot (a database option), repeatable read, snapshot isolation, and serializable.
Shared, update, exclusive, and intent locks: four kinds.
The new snapshot isolation levels are implemented through a new feature called row versioning, used by both read committed snapshot and snapshot isolation. The two snapshot options differ: read committed snapshot affects locking only at the statement level, while snapshot isolation affects the entire transaction. Both use row versioning to create snapshots of the data they modify: a copy of the data image from before the modification is stored in tempdb, so readers get a consistent snapshot view of the data without blocking writes to the actual table or taking locks on it.
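Both row-versioning options are switched on per database (MyDb is a placeholder):

alter database MyDb set read_committed_snapshot on   -- statement-level versioning (the database option)
alter database MyDb set allow_snapshot_isolation on  -- permits transaction-level snapshot isolation
-- a session then opts in explicitly:
set transaction isolation level snapshot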
11. -- Find the cause of deadlocks and blocking:
use master
go
if exists (select * from dbo.sysobjects
           where id = object_id(N'[dbo].[sp_who_lock]')
           and objectproperty(id, N'IsProcedure') = 1)
    drop procedure [dbo].[sp_who_lock]
go
-- Description: view database blocking and deadlocks
create procedure sp_who_lock
as
begin
    declare @spid int, @bl int,
            @intTransactionCountOnEntry int,
            @intRowcount int,
            @intCountProperties int,
            @intCounter int

    create table #tmp_lock_who (
        id int identity(1, 1),
        spid smallint,
        bl smallint)

    if @@error <> 0 return @@error

    -- Record each blocked session with its blocker; root blockers are inserted with spid 0
    insert into #tmp_lock_who (spid, bl)
    select 0, blocked
    from (select * from sysprocesses where blocked > 0) a
    where not exists (select * from (select * from sysprocesses
                                     where blocked > 0) b
                      where a.blocked = b.spid)
    union
    select spid, blocked from sysprocesses where blocked > 0

    if @@error <> 0 return @@error

    -- Find the number of records in the temporary table
    select @intCountProperties = count(*), @intCounter = 1
    from #tmp_lock_who

    if @@error <> 0 return @@error

    if @intCountProperties = 0
        select 'No blocking and deadlock information' as message

    -- Start of loop
    while @intCounter <= @intCountProperties
    begin
        -- Retrieve the current record
        select @spid = spid, @bl = bl
        from #tmp_lock_who where id = @intCounter

        begin
            if @spid = 0
                select 'The blocking is caused by process spid '
                     + cast(@bl as varchar(10))
                     + '; the SQL it is executing is shown below'
            else
                select 'Process spid ' + cast(@spid as varchar(10))
                     + ' is blocked by process spid ' + cast(@bl as varchar(10))
                     + '; the SQL the blocking process is executing is shown below'
            dbcc inputbuffer (@bl)
        end

        -- Move the loop pointer down
        set @intCounter = @intCounter + 1
    end

    drop table #tmp_lock_who
    return 0
end
go
-- Call it directly whenever needed to find the process and SQL statement causing the deadlock.
exec sp_who_lock
After the problem is found: optimize the statement so the situation does not recur. When tracing with Profiler, look mainly at TextData, ApplicationName, NTUserName, LoginName, CPU, Reads, Writes, Duration (this one is important), and SPID.
12. I don't have much experience with this; better to ask Xiao F or Sunday.
13. The purpose of a partitioned table is to spread the data of a single table across multiple physical locations to improve access performance. In most cases the partitioning is horizontal, based on ranges of values in a particular column, with the data physically placed on multiple filegroups.
A partitioned view horizontally joins the partitioned data from a set of member tables across one or more servers, making the data appear to come from a single table; a partitioned view uses the UNION ALL clause to merge the results of SELECT statements against all the member tables into a single result set.
The main reason for adopting partitioned tables is that different data sets in the same table become easier to manage. Partitioned tables improve manageability mainly in the following ways:
1) Independent backup strategies can be implemented. Different data sets may have different backup requirements.
2) Controllable storage media. Partitioned tables let you choose storage for the data according to how it is accessed.
3) Index management. Besides the table itself, its indexes can also be partitioned, so indexes can be reorganized, optimized, and rebuilt partition by partition, which is faster and less disruptive than maintaining the whole index. Partitioned indexes also keep fragmentation to a minimum.
If the tables behind a partitioned view sit on different servers, or on a multi-processor computer, every table involved in the query can be scanned in parallel, which improves query performance. In addition, maintenance tasks such as rebuilding indexes or backing up tables can be performed faster.
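A minimal partitioning sketch (the boundary dates, filegroup names, and table are all illustrative):

-- Horizontal range partitioning by year; each range lands on its own filegroup
create partition function pfOrderYear (datetime)
as range right for values ('20080101', '20090101')

create partition scheme psOrderYear
as partition pfOrderYear to (FG2007, FG2008, FG2009)

create table dbo.OrdersPartitioned (
    OrderId int not null,
    OrderDate datetime not null
) on psOrderYear (OrderDate)  -- rows are routed by their OrderDate value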
15.
Checkpoint: a SQL Server operation that writes all modified pages in the buffer cache to disk, synchronizing the physical data with the current state of the buffer cache. Unlike the lazy writer, a checkpoint does not put the pages back on the free list. The checkpoint thread also forces all pending transaction log records in the buffer cache to be written to the log files on disk.
Lazy writer: responsible for periodically checking that the free buffer list does not shrink below a certain size (the value depends on the size of the buffer pool). If the free list has dropped below that value, the lazy writer scans the cache, reclaims unused pages, and frees pages whose reference counter is 0, flushing dirty pages to disk first.
DDL triggers: operate at the database or server level and are rarely used.
DML triggers: divided into AFTER and INSTEAD OF triggers, and are created on tables. 1) Triggers are most useful when the features supported by constraints cannot meet the application's requirements; 2) triggers can cascade changes through related tables in the database, although such changes can often be executed more efficiently using cascading referential integrity constraints.
16. If the server you purchased has no cluster-capable hardware, how can high availability be achieved?
In SQL Server 2005 Enterprise Edition, database mirroring is another high-availability solution besides failover clustering. Database mirroring supports automatic failover but does not require cluster-capable hardware, so it provides a low-cost alternative to a failover cluster.
Database mirroring can be implemented on standard hardware, and all of its administration is performed inside SQL Server.
Database mirroring requires multiple SQL Server instances, and these instances should be installed on separate computers to guard against server failure.
Server roles in database mirroring:
Principal server: holds the working copy of the database (the "principal database") and serves client requests. The principal server forwards all transactions to the mirror server before applying them to the principal database.
Mirror server: holds a copy of the principal database (the "mirror database") and applies the transactions forwarded by the principal to keep the mirror database synchronized with the principal database.
Witness server: an optional component of a database mirroring solution. If a witness server exists, it monitors the principal and mirror servers to ensure continuous connectivity and participation in the mirroring session (a process called "quorum"). If the principal loses quorum, the witness assigns the principal role to the mirror server and, when necessary, facilitates automatic failover from the principal to the mirror. A witness server is required for automatic failover, and because its workload is light, a single witness server can support multiple mirroring sessions at the same time.
Database mirroring is appropriate:
when a single database requires redundancy;
when you do not want to invest in cluster-capable hardware;
when you want lower administrative overhead than a failover cluster.
Log shipping is a low-cost method of building a standby server from standard hardware. Log shipping first restores a full backup of the database from the primary server onto the secondary server, then periodically applies the primary server's transaction log backups to the secondary server. Log shipping can be used for user databases, but not for system databases.
Log shipping is a high-availability technique that periodically restores the primary server's transaction logs to the standby server. You can set the log backup interval to match your availability and performance requirements. Besides providing redundancy, the standby server can also serve read-only client queries, reducing the load on the primary server.
This method does not fail over automatically when the primary server goes down: you must manually promote the secondary server to primary and reconfigure all clients to connect to it (the new primary server).
You can optionally create a monitor server that records everything related to log shipping, including the most recent backup and restore operations; the monitor server is kept separate from the primary and secondary servers so that it is not lost when one of them fails. A sketch of the underlying cycle follows.
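Under the covers, log shipping is a repeated backup/copy/restore cycle; in outline (paths and names are illustrative):

-- On the primary, every few minutes:
backup log MyDb to disk = 'D:\logship\MyDb_log.trn'
-- On the secondary, after the file is copied across:
restore log MyDb from disk = 'D:\logship\MyDb_log.trn'
with standby = 'D:\logship\MyDb_undo.dat'  -- STANDBY keeps the copy readable between restores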
I have never used Oracle.
17. Setup steps: http://guobaoguo.blog.163.com/blog/static/1091625820091179247175/
The three modes differ as follows: high availability mode provides the most robust coverage; it consists of a principal, a mirror, and a witness server.
High protection mode consists of a principal and a mirror communicating synchronously.
High performance mode consists of a principal and a mirror communicating asynchronously.
With a synchronous mirror, the mirror must receive the data and confirm that the transaction has been written to the mirror database before it sends an acknowledgement back to the principal; only after that confirmation does the commit complete on the principal and the client move on to its next operation. The key concept is that the client waits until the operation has completed on the remote mirror.
An asynchronous mirror is a "send and forget" approach: the data is sent to the mirror as resources permit, and the client does not wait for an acknowledgement before continuing.
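Once the partners are established, the mode is governed by the SAFETY setting (the server address and database name are illustrative):

-- Run on the mirror first, then on the principal:
alter database MyDb set partner = 'TCP://mirrorhost:5022'
-- Synchronous (high protection; add a witness for automatic failover / high availability):
alter database MyDb set partner safety full
-- Asynchronous (high performance):
alter database MyDb set partner safety off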
Check whether the database log is growing rapidly and handle it appropriately. Beyond that I don't know.
21. SQL Server Profiler is already very powerful; there are also the DMVs (dynamic management views, queried with the corresponding code).
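For example, a DMV query for the costliest cached statements by average CPU:

select top 10
       qs.total_worker_time / qs.execution_count as avg_cpu,
       qs.execution_count,
       st.text
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
order by avg_cpu desc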
22. Check the query performance of the SQL statements; that is, use the execution plan to see which statements the optimizer assigns the largest cost, and handle them appropriately.
23. Experience
24. Perform a full backup every Sunday, a differential backup every night, and a transaction log backup every 15 minutes throughout the day.
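The three backup types in that schedule (the database name and paths are illustrative):

backup database MyDb to disk = 'D:\backup\MyDb_full.bak'                    -- Sunday: full
backup database MyDb to disk = 'D:\backup\MyDb_diff.bak' with differential  -- nightly: differential
backup log MyDb to disk = 'D:\backup\MyDb_log.trn'                          -- every 15 minutes: log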
25. It seems like
26. Detach and attach leaves the original database unavailable during the move, but it is also the fastest method.
29. I have had a similar experience: I once lost more than a million rows of data and my face turned green. Fortunately, it was a test database.
30. Every time I interview at a big company, HR asks this kind of question. I say I will be a senior DBA within two years, even though I only graduated in June.