1. There are 14 disks of 73 GB each in the array, and the database is 200 GB in size including log files. How should the disks be set up (that is, explain how the 14 disks are used)?
This question is about hardware knowledge and physical database deployment.
First, you need to know whether these disks will also store database backup files, and what the read/write performance requirements are, in order to determine the RAID level.
1) If the focus is on performance and you do not need to store database backup files, use RAID 0+1. The usable capacity is 14 * 73 GB * 50% = 511 GB.
2) If the read/write performance requirements are not high and budget is tight, you can consider RAID 5. The usable capacity is 13 * 73 GB = 949 GB (one disk's worth of capacity goes to parity).
How the disks are used comes down to the physical layout of the database files. Note that tempdb, the data files, and the log files should be placed separately to reduce I/O contention. In practice, modern striped arrays usually distribute files automatically, so manual placement matters less and less.
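As a minimal sketch of the file-placement idea (database name, logical file names, and drive letters are hypothetical), data and log files can be put on different drives when the database is created; tempdb would similarly get its own drive:

-- Data and log on different physical drives to reduce I/O contention
CREATE DATABASE SalesDB
ON PRIMARY (NAME = SalesDB_Data, FILENAME = 'E:\data\SalesDB.mdf')
LOG ON     (NAME = SalesDB_Log,  FILENAME = 'F:\log\SalesDB_log.ldf');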
2. There is a two-node server cluster, node1 and node2. A Windows 2000 system patch needs to be installed, and the system must be restarted after patching. How do you install the patch without affecting users (describe the cluster operations in detail)?
The exact steps escape me. Generally: first check which node is currently active, then connect to the other, idle node through its (private) IP address and patch it; stop and restart that node through Cluster Administrator (this can also be done from the command line). Once that node is back up, switch the active workload over to it, then patch and restart the remaining node.
3. Database A must be replicated to database B and to database C. B must be updated every time A is updated, while C only needs to be updated once a day. How would you design the replication strategy?
This examines knowledge of replication.
A -> B
1) If SQL Server replication is used, set up A -> B with transactional replication (synchronous replication).
2) If there are not many tables, you can also write the triggers yourself and use a linked server plus distributed transactions (see the sketch after this answer).
A -> C
1) If SQL Server replication is used, set up A -> C with snapshot replication, performed once at a fixed point in time each day.
2) You can also write a batch script yourself: after A is backed up, transfer the backup over FTP and restore it on C. (Not recommended.)
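A minimal sketch of the trigger + linked server approach for A -> B, assuming a hypothetical table dbo.Orders and a linked server named ServerB already created with sp_addlinkedserver; MSDTC must be running for the distributed transaction to commit:

-- Hypothetical trigger on database A that pushes new rows to database B through a linked server
CREATE TRIGGER trg_Orders_Sync ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET XACT_ABORT ON;  -- make the distributed transaction abort cleanly on error
    INSERT INTO ServerB.SalesDB.dbo.Orders (OrderID, CustomerID, Amount)
    SELECT OrderID, CustomerID, Amount
    FROM inserted;
END
GO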
4. There is an order table with 90 fields and 20 indexes, 15 of which are composite indexes; three of them contain more than 10 index fields. How can it be optimized?
This question is rather poorly posed. Without describing how the table is used (read/write heavy or almost static), how can you be asked to optimize it?!! It would be better to ask about the principles of index distribution and access.
What he wants you to hear is: the three indexes with more than 10 fields make the B-tree very inefficient, so reduce the number of key fields as appropriate. If SQL Server 2005 is used, columns with poor selectivity can be placed in the index's included columns so that the index still covers the queries (a sketch follows). In addition, because of SQL Server's lock escalation, you can consider splitting the table.
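A minimal sketch of a covering index with included columns in SQL Server 2005; the table and column names are assumptions:

-- Keep only the selective columns in the index key; push the rest into INCLUDE
-- so queries can still be answered from the index without key lookups.
CREATE NONCLUSTERED INDEX IX_Orders_Customer_Date
ON dbo.Orders (CustomerID, OrderDate)   -- selective key columns
INCLUDE (Status, Amount, ShipRegion);    -- low-selectivity columns, covered but not part of the key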
5. There is a database some GB in size that grows by 50 MB per day. Users must be able to access it at any time. Design a backup strategy (describe it in detail).
In this situation incremental backups can be used: a full backup every Sunday and incremental backups Monday through Saturday (because the data volume is small, you can consider an incremental backup every 30 minutes). This keeps the performance overhead to a minimum, and if the transaction log is lost, at most 30 minutes of data are lost.
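A minimal sketch of that schedule in T-SQL, using SQL Server's differential and log backups to play the role of the "incremental" backups; the database name and paths are assumptions:

-- Sunday: full backup
BACKUP DATABASE SalesDB TO DISK = 'E:\backup\SalesDB_full.bak' WITH INIT;
-- Monday..Saturday: differential backup (changes since the last full backup)
BACKUP DATABASE SalesDB TO DISK = 'E:\backup\SalesDB_diff.bak' WITH DIFFERENTIAL;
-- Every 30 minutes: transaction log backup, limiting data loss to roughly 30 minutes
BACKUP LOG SalesDB TO DISK = 'E:\backup\SalesDB_log.trn';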
6. You manage 50 databases, and the daily task is to check whether the database jobs have completed. How do you carry out this check?
This one is relatively simple. Create a linked server to each machine, then create a distributed view on the DBA management server. Each time you query this view, the job status on every machine is clear at a glance. The distributed view is written roughly as follows:
CREATE VIEW vw_job
AS
SELECT 'machine1' AS mname, * FROM linkserver1.msdb.dbo.sysjobactivity
UNION ALL
SELECT 'machine2' AS mname, * FROM linkserver2.msdb.dbo.sysjobactivity
UNION ALL
SELECT 'machine3' AS mname, * FROM linkserver3.msdb.dbo.sysjobactivity
...
7. What is the difference between a user-defined function and a stored procedure? In which situations can only a user-defined function be used, and in which situations can only a stored procedure be used?
This tests stored procedure writing experience. Generally, user-defined functions are used where they must be called from within another SQL statement, for example:
SELECT dbo.yourfunc(...) FROM tbl
In cases like this, only a function will do.
Stored procedures are far more powerful than functions. For example, dynamic SQL (sp_executesql) and certain special features are not supported inside UDFs and can only be done in stored procedures.
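A minimal sketch contrasting the two, with hypothetical names; the UDF can be embedded in a SELECT, while the dynamic SQL is only allowed in the stored procedure:

-- Scalar UDF: can be called inline from a query
CREATE FUNCTION dbo.fn_OrderTotal (@OrderID int)
RETURNS money
AS
BEGIN
    RETURN (SELECT SUM(Amount) FROM dbo.OrderDetail WHERE OrderID = @OrderID);
END
GO
SELECT OrderID, dbo.fn_OrderTotal(OrderID) AS Total FROM dbo.Orders;
GO
-- Stored procedure: dynamic SQL via sp_executesql, which is not allowed inside a UDF
CREATE PROCEDURE dbo.usp_CountRows @tbl sysname
AS
BEGIN
    DECLARE @sql nvarchar(200);
    SET @sql = N'SELECT COUNT(*) FROM ' + QUOTENAME(@tbl);
    EXEC sp_executesql @sql;
END
GO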
8. What are the new features of SQL Server 2005? What are the differences from Oracle?
Most of the new features of SQL Server 2005 are borrowed from Oracle.
The following was written by leimin at the time; you can use it as a reference:
I. Database Design
1. Field types.
The introduction of the varchar(max)/nvarchar(max) types greatly improves programming efficiency: you can use ordinary string functions to operate on CLOB-style data, which is a highlight. However, it raises the old question of varchar versus char efficiency. How exactly is varchar(max) data allocated, will it produce large amounts of fragmentation, and does that fragmentation cause efficiency problems? These all need further exploration.
Replacing image with varbinary(max) also makes SQL Server's field types more concise and unified.
The XML field type handles XML data much better. XQuery is genuinely good, though I personally don't like it. (CSDN developers should be quite familiar with it!)
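A minimal sketch of the new types, with hypothetical table and column names; note that ordinary string functions work directly on varchar(max), which was not possible with text:

CREATE TABLE dbo.Doc (
    DocID int IDENTITY PRIMARY KEY,
    Body  varchar(max),    -- replaces text
    Photo varbinary(max),  -- replaces image
    Meta  xml              -- new xml type
);
-- Ordinary string functions now work on the large type
SELECT DocID, LEN(Body), SUBSTRING(Body, 1, 100) FROM dbo.Doc;
-- XQuery against the xml column
SELECT DocID, Meta.value('(/doc/author)[1]', 'varchar(50)') AS Author FROM dbo.Doc;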
2. More flexible cascading options for foreign keys
Most peers are probably reluctant to create foreign keys when designing an OLTP system; they control parent/child data integrity in application code instead. However, foreign keys can still be created in development, debugging, and OLAP environments. The new version adds the SET NULL and SET DEFAULT options, which provide better cascade settings.
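A minimal sketch of the new cascade options, with hypothetical tables:

CREATE TABLE dbo.Customer (CustomerID int PRIMARY KEY);
CREATE TABLE dbo.Orders (
    OrderID    int PRIMARY KEY,
    CustomerID int NULL
        REFERENCES dbo.Customer (CustomerID)
        ON DELETE SET NULL      -- new in 2005: child rows get NULL instead of blocking the delete
        ON UPDATE CASCADE
);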
3. Included (non-key) index columns
This is a good new feature. Although the included columns of an index are not as efficient as the index key columns, they are much more efficient than having to look up the base table. I have run experiments; in my test environment it was roughly 30% more efficient than going back to the table.
4. Persisted computed columns
The original computed column was really just a virtual field: easier to manage, but not much of a performance gain. SQL Server 2005 allows a computed column to be persisted, which improves query performance but adds overhead to INSERT and UPDATE. Use it with caution in OLTP; it can be used broadly in OLAP.
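A minimal sketch, with a hypothetical table:

CREATE TABLE dbo.OrderDetail (
    OrderID   int,
    Qty       int,
    UnitPrice money,
    LineTotal AS (Qty * UnitPrice) PERSISTED  -- stored on disk, recomputed only on INSERT/UPDATE
);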
5. Partitioned tables
Partitioned tables are a highlight! From them you can also see Microsoft's determination to strengthen SQL Server. There is plenty of material, so I will not go into detail here. But it is important to know that in SQL Server 2005 every table is a partitioned table by default, because the engine has to support the sliding-window feature. This is very helpful for handling historical versus real-time data.
However, there is also a problem I ran into while using it. After creating partition function -> partition scheme -> table, if you create a clustered index on the existing partitioned table without explicitly declaring it on the partition scheme, the partitioned table silently becomes a non-partitioned table. This confused me. If the system thinks my non-partition-aligned index cannot support the partitions,
it could at least remind me! Without any warning it simply becomes a non-partitioned table. I don't know whether this is a bug; you can try it yourself.
The efficiency of partitioned tables is naturally a concern. In my experiments, querying (filtering) by the partition column was more efficient than the same statement on a non-partitioned table, while querying by a non-partition column was less efficient than on the non-partitioned table. However, as the data volume grows the gap gradually shrinks and they tend toward equality. (With around 5 million rows the difference was only about 10%.)
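A minimal sketch of the function -> scheme -> table sequence mentioned above, with hypothetical names and boundaries; as noted, the clustered index (or primary key) has to be created on the partition scheme for the table to stay partitioned:

-- Partition function: boundaries split the data by year
CREATE PARTITION FUNCTION pf_OrderDate (datetime)
AS RANGE RIGHT FOR VALUES ('20040101', '20050101');
-- Partition scheme: map every partition to a filegroup (PRIMARY here for simplicity)
CREATE PARTITION SCHEME ps_OrderDate
AS PARTITION pf_OrderDate ALL TO ([PRIMARY]);
-- Table created directly on the scheme
CREATE TABLE dbo.OrdersHist (
    OrderID   int,
    OrderDate datetime,
    Amount    money
) ON ps_OrderDate (OrderDate);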
6. CLR integration
Microsoft publicized CLR heavily, because the database product is finally integrated into the .NET system. At first we were also excited and felt that some concepts of the object database could finally be realized. However, after some experiments we found that once the number of stored procedures or functions using CLR passes a certain threshold, system performance degrades exponentially! This is very dangerous! There may be no problem when only a few are used, but large-scale use causes serious system performance problems!
In fact, an analogy can be made: Oracle and other database products have long supported Java stored procedures and expose a Java pool parameter for users to configure. But how many systems actually use Java stored procedures?! It's not that Oracle applications can't use them, and it's not a performance problem either; otherwise the object-oriented database would have arrived long ago!
We recommend using CLR only in scenarios that are tightly coupled to application complexity or the operating system environment, for example complex algorithms built with heavy use of pointers and advanced data structures, or socket communication with the operating system. Otherwise, be cautious!
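For reference, CLR integration is off by default in 2005 and has to be enabled explicitly before any CLR stored procedure or function can run; a minimal sketch:

-- CLR is disabled by default; enable it at the instance level
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;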
7. Indexed views
Indexed views already existed in 2000. 2005 makes some efficiency improvements, but the schema.viewname (schema-binding) requirement really limits their application, and the many required environment settings and restrictions are a bit off-putting.
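A minimal sketch with hypothetical objects; note the SCHEMABINDING and two-part naming requirements mentioned above, plus the specific SET options the feature demands (and, for this aggregate form, that Amount is assumed non-nullable):

-- The view must be schema-bound and reference objects with two-part names
CREATE VIEW dbo.vw_SalesByCustomer
WITH SCHEMABINDING
AS
SELECT CustomerID,
       COUNT_BIG(*) AS cnt,   -- COUNT_BIG is required for indexed views with GROUP BY
       SUM(Amount)  AS total  -- Amount assumed NOT NULL
FROM dbo.Orders
GROUP BY CustomerID;
GO
-- Materialize it with a unique clustered index
CREATE UNIQUE CLUSTERED INDEX IX_vw_SalesByCustomer
ON dbo.vw_SalesByCustomer (CustomerID);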
8. Statement-level and transaction-level snapshots
Statement-level and transaction-level snapshot isolation finally give SQL Server a breakthrough in concurrency. Personally I think statement-level snapshots are the ones to apply; transaction-level snapshots should be used with caution on highly concurrent systems. If users keep being told that their modification failed and must be retried, they will be driven mad!
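A minimal sketch of enabling the two levels on a hypothetical database:

-- Statement-level snapshot: readers see the last committed version, so writers don't block readers
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;
-- Transaction-level snapshot: allows SET TRANSACTION ISOLATION LEVEL SNAPSHOT,
-- but concurrent update conflicts make the transaction fail and require a retry
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;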
9. Database snapshots
The principle is very simple. It is very helpful for long-running report generation and for protecting against user mistakes. However, compared with Oracle 10g's flashback technology, it is still not fine-grained enough. A pity!
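A minimal sketch, with hypothetical names and paths; one sparse file must be specified per data file of the source database, and the logical name must match the source file's logical name:

-- Point-in-time, read-only snapshot of SalesDB
CREATE DATABASE SalesDB_Snap
ON (NAME = SalesDB_Data, FILENAME = 'E:\snap\SalesDB_Snap.ss')
AS SNAPSHOT OF SalesDB;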
10. Mirroring
Database mirroring can be regarded as SQL Server's Data Guard. But I don't know whether everyone will actually get to use it.
II. Development
1. Ranking functions
The best known is row_number(). This finally ends the history of generating sequence numbers with temporary tables, and SQL Server 2005's row_number is more convenient than Oracle's because the ORDER BY is built in, so it does not need to be wrapped in a subquery as in Oracle. But pay attention to the following example:
SELECT ROW_NUMBER() OVER (ORDER BY aa) AS rn
FROM tbl
ORDER BY bb
The ordering by aa is applied first (to compute the row numbers), and then the result set is ordered by bb.
Some may complain about the built-in ORDER BY. In fact, if you use a ranking function, the ORDER BY is indispensable. If you worry that it will hurt efficiency, create a clustered index on the ORDER BY column; the query plan will then skip the sort operation (because the data is already in order).
2. Top
The TOP value can now be passed in as a parameter, which saves building dynamic SQL strings (see the sketch below).
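A minimal sketch, with a hypothetical table:

DECLARE @n int;
SET @n = 10;
-- 2005 allows a variable or expression in TOP, so no dynamic SQL is needed
SELECT TOP (@n) * FROM dbo.Orders ORDER BY OrderDate DESC;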
3. Apply
It is helpful for recursive, tree-traversal style processing.
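A minimal sketch of CROSS APPLY invoking a hypothetical table-valued function once per outer row:

-- dbo.fn_TopOrders(@CustomerID, @n) is a hypothetical table-valued function
SELECT c.CustomerID, o.OrderID, o.Amount
FROM dbo.Customer AS c
CROSS APPLY dbo.fn_TopOrders(c.CustomerID, 3) AS o;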
4. CTE
Personally I think this is really wonderful! It reads clearly and feels modern.
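A minimal recursive CTE sketch over a hypothetical id/parentid tree table:

WITH tree AS (
    -- anchor: the root nodes
    SELECT id, parentid, name, 0 AS lvl
    FROM dbo.Category
    WHERE parentid IS NULL
    UNION ALL
    -- recursive part: children of the previous level
    SELECT c.id, c.parentid, c.name, t.lvl + 1
    FROM dbo.Category AS c
    JOIN tree AS t ON c.parentid = t.id
)
SELECT * FROM tree;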
5. Try/catch
It replaces the old VB-style error checking after every statement, and it is much more convenient than Oracle's approach.
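A minimal sketch with a hypothetical table:

BEGIN TRY
    BEGIN TRAN;
    UPDATE dbo.Account SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE dbo.Account SET Balance = Balance + 100 WHERE AccountID = 2;
    COMMIT TRAN;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRAN;
    SELECT ERROR_NUMBER() AS err_no, ERROR_MESSAGE() AS err_msg;
END CATCH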
6. PIVOT / UNPIVOT
Personally I am not very keen on these. In addition, the fact that the remaining (third and later) columns implicitly become the grouping columns can easily trip up newcomers.
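Assuming this item refers to the new PIVOT/UNPIVOT operators, a minimal sketch with a hypothetical sales table; note that every column not mentioned in the PIVOT clause is implicitly used for grouping, which is the trap mentioned above:

-- Rows of (SaleYear, Qtr, Amount) become one row per year with a column per quarter
SELECT SaleYear, [1] AS Q1, [2] AS Q2, [3] AS Q3, [4] AS Q4
FROM (SELECT SaleYear, Qtr, Amount FROM dbo.Sales) AS s
PIVOT (SUM(Amount) FOR Qtr IN ([1], [2], [3], [4])) AS p;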
III. DBA management
1. Database-level (DDL) triggers
I remember wanting this feature back when I first started using 2000; unfortunately 2000 didn't have it. I'm very glad there is a solution now.
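A minimal sketch of a database-level DDL trigger:

-- Block (and report) table drops in this database
CREATE TRIGGER trg_no_drop_table
ON DATABASE
FOR DROP_TABLE
AS
BEGIN
    PRINT 'Dropping tables is not allowed in this database.';
    ROLLBACK;
END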
2. More system views and real-time system information
These things are very helpful for DBAs, but the granularity is still not very fine.
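For example, the new dynamic management views; a minimal sketch of watching what is currently running:

-- Currently executing requests, joined to the text of each statement
SELECT r.session_id, r.status, r.wait_type, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t;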
3. Optimizer improvements
I have always felt that SQL Server's optimizer is smarter than Oracle's, and 2005 is much smarter than 2000. (Some statements run faster than the same statement did in 2000, yet the plan shown by SHOWPLAN_TEXT gives no hint as to why; it always seemed strange.)
A forum example:
http://community.csdn.net/Expert/topic/4543/4543718.xml?temp=.405987
4. New Profiler events
This improves Profiler. One thing to note, though: on Windows 2003 you must install SP1 before Profiler will start; otherwise clicking it simply does nothing.
5. sqlcmd
Friends who like the command line will probably feel at home, but the functionality is limited. It suits machines on which SQL Server Management Studio cannot be run.
IV. Regrets
1. Login (resource) control
It is always a pity that resources such as CPU and memory quotas cannot be assigned to a SQL Server login. If your SQL Server grants another user permission to read just a few tables and that user runs join queries in an infinite loop, it puts a heavy load on the whole system. If SQL Server could, like Oracle, allocate a login 5% of CPU and 10% of memory, this weakness would be solved.
2. The physical architecture of the database has not changed.
Both undo and redo information are kept in each database's transaction log, which I personally consider a failing. If you plan for multiple databases when designing the system, the log I/O pressure can be avoided to some extent, but that in turn causes problems for indexed views and other features. Looking at how row-level and transaction-level snapshot data are kept in tempdb, you can feel the awkwardness of the current architecture.
3. There is still no logical backup
Backup can therefore be awkward: it is always uncomfortable not being able to back up just a few tables. Who knows when flexible backup will be addressed.
4. SSIS (DTS) is too complicated
SQL Server's heterogeneous migration features are still my preference. (If you compare SQL Server's linked servers with Oracle's transparent gateway, you will find SQL Server's sp_addlinkedserver / OPENQUERY family for heterogeneous databases much better than Oracle's.)
DTS used to be lightweight and simple. The current SSIS is far more powerful, but it is always too much trouble; just look at the SSIS posts in the forum. The functionality is so powerful that many users cannot figure out how to use it.
What are the differences from Oracle?
This question is rather perverse! I could go on about the differences for a whole day! To start with, the names are different!! :)
9. What qualities should a DBA have? What are your strengths and shortcomings?
Omitted
10. If you want to configure SQL Mail, what software must be installed on the server?
What software is required? Install Outlook Express. SQL Server provides the interface stored procedures, so it is very simple (a sketch follows).
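For reference, once SQL Mail is configured with a MAPI profile, sending mail from T-SQL is a single extended stored procedure call; the recipient address is of course hypothetical:

-- SQL Mail (SQL Server 2000): send a message through the configured MAPI profile
EXEC master.dbo.xp_sendmail
     @recipients = 'dba@example.com',
     @subject    = 'Job status',
     @message    = 'Nightly backup job completed.';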
----------------------------
First, from the perspective of database designers:
1. Partitioned tables are not supported before SQL Server 2005, so when designing the system you must plan for moving data once the volume becomes large.
2. For tree-structured tables, SQL Server has no equivalent of Oracle's CONNECT BY ... START WITH query. When designing such a table it is best to add a treepath field in addition to id and parentid, to avoid recursive loops (see the sketch after this list).
3. Because of SQL Server's lock escalation, tables that receive frequent DML should have as few fields as possible, to reduce the blocking that lock escalation causes!
4. When designing the physical layout, remember that each SQL Server database has its own transaction log (holding both undo and redo information). To reduce I/O contention on the log, you can consider splitting the system across multiple databases (indexed views are an exception, since they cannot span databases). Oracle, by contrast, has a one-to-one correspondence between database and instance (except RAC), and its multiple tablespaces share common undo segments and redo log files.
5. SQL Server offers clustered and nonclustered indexes, while Oracle offers B-tree indexes, bitmap indexes, and function-based indexes.
6. SQL Server's most basic storage unit is the page (8 KB), while Oracle's basic unit is the block, whose size can differ between OLTP and DSS applications (the latter can use larger blocks for scan efficiency), ranging from 2 KB to 32 KB.
7. SQL Server's structure is instance -> multiple databases -> tables, stored procedures, and so on. Oracle's is database = instance (except RAC, where one database is shared by several instances) -> schema (user) / tablespace -> tables, stored procedures, and so on.
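A minimal sketch of the treepath idea from point 2, with a hypothetical category table; the path column makes subtree queries a simple LIKE instead of a recursive walk:

CREATE TABLE dbo.Category (
    id       int PRIMARY KEY,
    parentid int NULL,
    name     varchar(50),
    treepath varchar(900)  -- e.g. '/1/5/12/', maintained on insert/move
);
-- All descendants of node 5 (whose path is '/1/5/'): no recursion needed
SELECT * FROM dbo.Category WHERE treepath LIKE '/1/5/%';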
That's all for now...
-----------------------------
1. There are 14 disks of 73 GB each in the array, and the database is 200 GB in size including log files. How should the disks be set up (that is, explain how the 14 disks are used)?
Answer:
a. If disks are plentiful, use RAID 10: split the 14 disks into two groups of 7, stripe each group as RAID 0 with a capacity of 7 * 73 GB = 511 GB, then mirror the two stripes as RAID 1, for a final capacity of (14/2) * 73 GB = 511 GB. The advantage of RAID 10 is that every write goes to two copies and reads can come from whichever copy responds faster, so it is very efficient, but it is costly.
b. RAID 5: 13 disks in the RAID 5 set and 1 as a hot spare; capacity (14 - 2) * 73 GB = 876 GB. The advantages of RAID 5 are good stability and good cost efficiency.
c. RAID 51: 7 disks as one RAID 5 set and the other 7 as a second RAID 5 set, then mirror the two sets as RAID 1; capacity (14/2 - 1) * 73 GB = 438 GB. Advantage: more stable than RAID 10, with efficiency comparable to RAID 10.
2. There is a two-node server cluster, node1 and node2. A Windows 2000 system patch needs to be installed, and the system must be restarted after patching. How do you install the patch without affecting users (describe the cluster operations in detail)?
Answer:
a. Suppose node1 is online and owns the resources. Patch node2 first and restart it; the online status and resource ownership do not change, so customer service is not affected.
b. After node2 has been patched and restarted, manually fail node1's services and resources over to node2 so that node2 becomes the online node. Then patch node1 and restart it; after switching the services and resources back from node2, node1 operates normally.