SQL Server 2014 installation


Part 1: Planning and completing the SQL Server 2014 installation

1. Planning the system

Before installing SQL Server, the first step should be reasonable planning. The main planning tasks and considerations are:

① Benchmark current workloads

② Estimate workload growth

③ Minimum hardware and software requirements

④ Appropriate storage-system size and I/O requirements

⑤ SQL Server edition

⑥ SQL Server collation, file locations, and tempdb size

⑦ Service account selection

⑧ Database maintenance and backup schedule

⑨ Service levels for minimum uptime and response time

⑩ Disaster recovery strategy

1.1 Hardware Selection

See the official documentation: https://msdn.microsoft.com/zh-cn/library/windows/apps/ms143506(v=sql.120).aspx

① Processor

Processor capability is characterized by clock speed and the number of processors, and several slower processors often outperform a single fast processor. Newer processor models provide multiple cores (multicore processors) in a single physical socket, and multicore processors allow several SQL Server 2014 instances to run as named instances or virtual machines on the same physical server, saving both space and power.

SQL Server 2012 adopted core-based licensing, and this model continues in SQL Server 2014.

② Memory

Memory is a critical resource for database performance. A well-designed system reads data pages from the memory buffer pool as much as possible, making efficient use of memory.

A good starting point when deciding how much memory to provision is to consider the number of data pages for each database hosted on the SQL Server instance, together with query execution statistics such as the minimum, maximum, and average memory used by typical workloads. The goal should be for SQL Server to keep as many data pages in the buffer cache, and as many execution plans in memory, as possible, avoiding the expensive work of reading pages from disk and recompiling plans.
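As a rough starting point, the following T-SQL sketch queries the sys.dm_os_buffer_descriptors DMV to show how many data pages each database currently holds in the buffer pool (each page is 8 KB):

    -- Approximate buffer pool usage per database (8 KB per page).
    SELECT DB_NAME(database_id)  AS database_name,
           COUNT(*)              AS cached_pages,
           COUNT(*) * 8 / 1024   AS cached_mb
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY cached_pages DESC;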

You also need to know the memory limits of the specific SQL Server edition: SQL Server 2014 Enterprise Edition supports up to 2 TB of RAM, the Standard and Business Intelligence editions support 128 GB, and Express supports 1 GB.
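To confirm which edition and patch level a given instance is running, you can query SERVERPROPERTY:

    -- Edition, version, and service-pack level of the current instance.
    SELECT SERVERPROPERTY('Edition')         AS edition,
           SERVERPROPERTY('ProductVersion')  AS product_version,
           SERVERPROPERTY('ProductLevel')    AS product_level;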

SQL Server 2014 introduces a new feature, In-Memory OLTP (online transaction processing), which is restricted to Enterprise Edition. Data can now be stored entirely in memory, reducing the I/O overhead of accessing disk.
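As a sketch (the database name, filegroup name, path, and table below are illustrative, not a prescribed setup), a database needs a MEMORY_OPTIMIZED_DATA filegroup before memory-optimized tables can be created:

    -- Add a memory-optimized filegroup and container to a hypothetical database.
    ALTER DATABASE SalesDB
        ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
    ALTER DATABASE SalesDB
        ADD FILE (NAME = 'imoltp_dir', FILENAME = 'D:\Data\imoltp_dir')
        TO FILEGROUP imoltp_fg;
    GO
    USE SalesDB;
    GO
    -- A durable memory-optimized table with a hash index on its key.
    CREATE TABLE dbo.SessionState (
        SessionId INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Payload   VARBINARY(8000)
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);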

③ Storage System

When planning a storage system, consider the database's requirements for availability, reliability, throughput, and scalability.

To test and verify the performance of your storage system, you need to collect a few key metrics: for example, the maximum number of I/O requests per second (IOPS), throughput (MBps), and I/O latency.

I/O requests per second (IOPS): the number of I/O requests the storage system can process in one second; the higher the value, the better. A single 15K RPM SAS drive typically delivers 150~250 IOPS, while enterprise SSDs and SANs range from 1,000 to 1,000,000 IOPS.

Throughput (MBps): the amount of data the storage system can read and write in one second. The higher the value, the better.

I/O latency (ms): the delay of individual I/O operations. This value should be 0 or as close to 0 as possible.

These metrics can be collected with a number of free tools, such as SQLIO, SQLIOSim, Iometer, and CrystalDiskMark. See: https://msdn.microsoft.com/en-us/library/cc966412.aspx

SQL Server installations primarily use two kinds of storage systems: DAS and SAN.

  Direct-Attached Storage (DAS)

In this type of storage system, the disk drives sit in the server chassis and connect directly to a disk controller. They can also sit externally, cabled directly to a host bus adapter (HBA). No additional devices, such as switches, are required.

The main advantages of DAS are its ease of implementation and low maintenance cost; the main drawback is its limited scalability. Although recent DAS systems have taken on capabilities originally found only in SANs, some limitations remain, such as the number of disk drives and volume sizes that can be scaled and managed, the number of servers that can connect, and the distance between storage units and servers.

Server connectivity and distance are the biggest differences between DAS and SANs. DAS typically requires a direct physical connection between the storage unit and the server, which limits both the number of servers that can be connected and the distance between the storage unit and the server (usually just a few feet).

  Storage Area Network (SAN)

  This is a dedicated network that connects storage devices, which are presented to servers as if they were direct-attached storage. There are two ways to connect to this storage network: through dedicated Fibre Channel (FC) switches, known as fabric switches, or through ordinary Ethernet switches using the iSCSI protocol.

One of the main advantages of a SAN is that it can span large geographic areas using dedicated wide area networks (WANs) and TCP/IP routing. This allows data to be replicated between remote data centers for disaster recovery, among other capabilities.

SANs provide a high level of reliability and scalability for mission-critical data systems.

A properly architected SAN can provide much better throughput than DAS, reduce I/O latency, and handle more disk arrays. The main drawbacks are higher cost and greater difficulty of implementation and maintenance.


  Choosing the Right Storage System

  A key factor is the disk technology used in the storage system and how the disk drives are arranged. Both DAS and SANs are arrays of disk drives, typically configured as storage pools and presented to the server as a single entity.

  Disk drives

  To meet high throughput requirements, it is often necessary to spread read and write operations across a large number of fast disk drives.

Distributing I/O operations means that each disk drive in the group stores only a small portion of the data; in this distributed layout, no single drive contains the complete data set. Consequently, without redundancy, one disk failure means all the data is lost! Reliability must therefore be considered, and you can organize your disks using a redundant array of independent disks (RAID) to meet both throughput and reliability requirements. The following table shows the RAID levels commonly used in SQL Server environments.

RAID level | Description
RAID 0 | Also known as a stripe set or striped volume. Combines two or more disks into a single larger volume. No fault tolerance; fast reads and writes.
RAID 1 | Also known as a mirrored drive. Writes the same data to two drives; no data is lost if one disk fails. Writes are relatively slow. Only half of the raw storage space is usable.
RAID 1+0 | Also known as RAID 10. A stripe set of mirrored sets. Good write performance and fault tolerant. Only half of the raw storage space is usable.
RAID 0+1 | A mirror set of stripe sets. Good write performance; fault tolerance is slightly worse than RAID 10.
RAID 5 | Tolerates the failure of one disk. Writes, with parity, are distributed across all disks. Reads are fast; writes are slower. Some raw storage space is unavailable.
RAID 6 | Tolerates two disk failures. Reads are fast; writes are slower than RAID 5 because the extra parity calculation adds overhead. Some raw storage space is unavailable.

Part 2: Required configuration after installation

1. Memory

Minimum server memory is the amount of memory that SQL Server, once allocated, will not return to the operating system. Note that SQL Server does not allocate the minimum server memory immediately at startup; it allocates memory only as requests arrive, so the memory actually in use can be greater than or less than the minimum value. On the one hand, avoid releasing too much memory to the operating system, which would leave the SQL Server instance short of memory; on the other hand, maximum server memory should never be equal to or greater than the server's total memory, and should generally be set at least 4 GB below the total to leave memory for the operating system.
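A minimal sketch of setting both values with sp_configure (the numbers are illustrative, assuming a hypothetical 32 GB server):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'min server memory (MB)', 4096;
    EXEC sp_configure 'max server memory (MB)', 28672;  -- leaves ~4 GB for the OS
    RECONFIGURE;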


2. Network packet size

The default SQL Server network packet size is 4096 bytes. Increasing this value can improve performance for workloads that run large batch operations and transfer a lot of data. If the server hardware and network infrastructure support Jumbo Frames and they are enabled, it is best to increase the network packet size to 8192 bytes.

Jumbo Frames (sometimes labeled Jumbo MTU or Jumbo Packet) can be viewed and enabled in the advanced properties of the network adapter driver.


To view and modify the network packet size:

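A minimal T-SQL sketch ('network packet size' is an advanced option, so advanced options must be shown first):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'network packet size';        -- view the current value
    EXEC sp_configure 'network packet size', 8192;  -- bytes; pair with Jumbo Frames
    RECONFIGURE;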

3. Instant file Initialization

Each time a database file is created or grows, the operating system fills the new space with zeros before it can be written; all write operations are blocked until the zero-filling completes! To avoid this blocking and waiting, you can enable instant file initialization. There are two ways to do this: (1) add the SQL Server service account to the list of users granted the Perform volume maintenance tasks policy under User Rights Assignment on the server, or (2) use an account that already holds the Perform volume maintenance tasks right (typically an administrator) as the SQL Server service account.

To grant the SQL Server service account the Perform volume maintenance tasks right, type secpol.msc in the Run box to open the Local Security Policy console, then edit the policy under User Rights Assignment.


Alternatively, to change the SQL Server service account, open the Services console (services.msc) and edit the service's Log On properties.

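To verify that instant file initialization is actually in effect, a commonly used (undocumented) trick is to enable trace flags 3004 and 3605, create a throwaway database, and search the error log for zeroing messages; the database name below is hypothetical:

    DBCC TRACEON (3004, 3605, -1);  -- log file-zeroing activity to the error log
    CREATE DATABASE IFI_Test;
    EXEC sp_readerrorlog 0, 1, 'zeroing';
    -- Log files are always zeroed; if no zeroing message appears for the
    -- data (.mdf) file, instant file initialization is enabled.
    DROP DATABASE IFI_Test;
    DBCC TRACEOFF (3004, 3605, -1);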

4. tempdb and user databases

tempdb is used by the following features and operations:

    • Bulk loads with triggers
    • Common table expressions
    • DBCC operations
    • Event notifications
    • Index rebuilds, including SORT_IN_TEMPDB, partitioned index sorts, and online index operations
    • Large object type variables and parameters
    • Multiple Active Result Sets (MARS) operations
    • Query notifications
    • Row versioning
    • Table variables
    • Sort operations
    • Spill operations (sorts and hash operations that spill to disk)

tempdb requires special consideration and planning. For environments where tempdb is heavily used, creating additional tempdb files can significantly improve performance. Depending on the load, consider creating multiple tempdb files in proportion to the number of logical CPUs, so that SQL Server scheduler worker threads can be loosely paired with files. In general, an acceptable ratio of tempdb files to logical CPUs is between 1:2 and 1:4.

The location of tempdb is also important: tempdb files should be separated from database and log files to avoid I/O contention. If you use more than one tempdb file, consider isolating each tempdb file on its own LUN and physical disk. Also set the initial size of tempdb to an appropriate value; the default size may cause expensive file-growth operations. Here is a rough way to estimate the initial tempdb size: query operators in the workload's query plans report row counts and row sizes; multiply the actual or estimated number of rows by the estimated row size to calculate the space each operator requires. Of course, experience and testing are the best guides.
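A sketch of resizing the primary tempdb data file and adding a second one (file names, sizes, growth increments, and the drive path are illustrative assumptions):

    ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev2,
                  FILENAME = 'T:\TempDB\tempdev2.ndf',  -- hypothetical dedicated LUN
                  SIZE = 4096MB,
                  FILEGROWTH = 512MB);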

The initial size of a user database should be large enough to handle the transaction volume expected over a long period, avoiding frequent growth of the database. If auto-grow is enabled, choose a file-growth increment large enough to avoid frequent auto-grow operations.
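For example, a fixed growth increment can be set per file (the database and logical file names are illustrative):

    ALTER DATABASE SalesDB
        MODIFY FILE (NAME = SalesDB_data, FILEGROWTH = 512MB);
    ALTER DATABASE SalesDB
        MODIFY FILE (NAME = SalesDB_log, FILEGROWTH = 256MB);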

5. sa account

You should set a strong password for the sa account and then never use that account; disable the sa account after installation.
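A minimal sketch (the password shown is only a placeholder):

    ALTER LOGIN sa WITH PASSWORD = 'Str0ng!Placeholder#2014';
    ALTER LOGIN sa DISABLE;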

6. TCP/IP Port

By default, SQL Server communicates on TCP port 1433, while named instances are assigned a dynamic port when the service starts. Sometimes, for security reasons, you need to change the default port. This can be done through SQL Server Configuration Manager:

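After changing the port and restarting the service, you can confirm which TCP port the instance is listening on by searching the current error log with the widely used (though undocumented) xp_readerrorlog procedure:

    EXEC xp_readerrorlog 0, 1, N'Server is listening on';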

7. Service Patches and updates

Do not enable automatic updates on a production instance. Before an update is applied to production, all updates should be tested in a controlled test environment.

8. Backup

You must define a backup schedule and a backup storage location for both system and user databases. If encryption is used, back up the encryption keys as well (a T-SQL backup sketch follows the list below).

    • Always create backup files on a shared network drive or backup device, never on the server being backed up.
    • Back up using full and differential (incremental) strategies as appropriate.
    • Define a backup retention policy to avoid storing unnecessary historical backups.
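A minimal sketch of a full backup to a network share, with checksum verification and compression (the share path and database name are illustrative):

    BACKUP DATABASE SalesDB
        TO DISK = '\\BackupHost\SQLBackups\SalesDB_Full.bak'
        WITH CHECKSUM, COMPRESSION, STATS = 10;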
