DAS (Direct-Attached Storage) refers to connecting a storage device directly to a server through a SCSI interface or Fibre Channel.
NAS (Network-Attached Storage) connects storage devices to a group of computers via a standard network topology, such as Ethernet. NAS is a department-level storage approach, aimed at helping workgroups and departments that need to add storage capacity quickly. An engineering team that needs to share large CAD documents is a typical example.
A SAN (Storage Area Network) connects storage to a group of computers via Fibre Channel. It provides multi-host connectivity, but not through a standard network topology.
In the disk storage market, storage is classified according to server type, as follows:

- Closed-system storage: mainly for mainframes, AS/400, and similar servers.
- Open-system storage: for servers running Windows, UNIX, Linux, and other operating systems. It is divided into built-in (internal) storage and external storage.
  - Open-system external storage is divided into direct-attached storage (DAS) and fabric-attached storage (FAS).
  - Open-system networked storage is further divided, according to transport protocol, into network-attached storage (NAS) and storage area networks (SAN).
DAS
Limitations
Direct-attached storage depends on the server host's operating system for data I/O and for storage maintenance and management; data backup and recovery consume server host resources (CPU, system I/O, and so on).
Backup data must flow through the server host to the tape drive (or library) attached to it, and backup typically consumes 20-30% of the host's resources. Many enterprise users therefore run daily backups late at night, or when the business systems are otherwise idle, so as not to affect normal operations. The more data is held in direct-attached storage, the longer backup and recovery take, and the greater the dependence on, and impact upon, the server hardware.
The connection between direct-attached storage and the server host usually uses SCSI, with bandwidths of 10 MB/s, 20 MB/s, 40 MB/s, 80 MB/s, and so on. As server CPUs grow more powerful and the number and capacity of disks in the array keep increasing, the SCSI channel becomes an I/O bottleneck. Moreover, the host's SCSI IDs are a limited resource, so only a limited number of SCSI channel connections can be established.
Any expansion, whether of the direct-attached storage or of the server host itself (for example, growing from one server to a multi-server cluster, or enlarging the storage array's capacity), forces downtime of the business systems.
NAS
NAS products are truly plug-and-play. A NAS device typically supports multiple computing platforms, letting users access the same files through standard network protocols, so it can be used in a mixed UNIX/Windows NT LAN without modification.
Second, the physical placement of a NAS device is flexible. It can sit in a workgroup, close to the application servers in the data center, or in another location entirely, connected to the network by a physical link. Because NAS devices let users reach data over the network without application-server intervention, they reduce CPU overhead on those servers and can significantly improve network performance.
Limitations: NAS does not solve a critical problem it shares with file servers: bandwidth consumption during backup.
SAN
A Storage Area Network (SAN) uses Fibre Channel (FC) technology to connect storage arrays and server hosts through Fibre Channel switches, establishing a dedicated network for data storage. Over more than ten years, SAN has matured into the industry's de facto standard (although each vendor's Fibre Channel switching technology is not identical, so servers and SAN storage have compatibility requirements).
The SAN architecture allows any server to connect to any storage array, so a server can directly access the data it needs regardless of where that data resides. Thanks to its fiber interfaces, a SAN also offers higher bandwidth.
Because a SAN solution separates storage functions from the servers' basic functions, backup operations can run without any need to consider their impact on the overall performance of the network. A SAN solution also simplifies management and centralizes control, especially when all storage devices are clustered together. Finally, fiber interfaces allow connection distances of up to 10 km, which makes it very easy to physically separate the storage and keep it out of the machine room.
NAS: users access data over the TCP/IP protocol, sharing files through industry-standard protocols such as NFS, HTTP, and CIFS.
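As a concrete illustration of this access path (the host name, export path, share name, and mount points below are hypothetical), a Linux client might mount a NAS appliance's NFS and CIFS shares through /etc/fstab entries like:

```
# /etc/fstab sketch -- the NAS appliance owns the file system;
# clients speak a file-sharing protocol (NFS or CIFS) over TCP/IP
nas.example.com:/export/projects  /mnt/projects      nfs   defaults        0 0
//nas.example.com/projects        /mnt/projects-smb  cifs  username=alice  0 0
```

Either way, the file system itself lives on the NAS device; the client only speaks a file-sharing protocol to it.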
SAN: data is accessed through a dedicated Fibre Channel switch, over SCSI or FC-AL interfaces.
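By contrast, a SAN LUN is presented to the host as a raw local block device, and the host lays its own file system on top. As a hypothetical /etc/fstab sketch (the device name, file system type, and mount point are assumptions for illustration):

```
# /etc/fstab sketch -- the FC LUN appears as a local SCSI disk (e.g. /dev/sdb);
# the HOST formats it (mkfs) and manages the file system itself
/dev/sdb  /mnt/san  ext4  defaults  0 0
```

Here the host, not the storage device, owns the ext4 file system, which is exactly the file-system placement difference described next.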
The most essential difference between NAS and SAN is where the file management system resides.
In a SAN fabric, the file management system (FS) remains on each application server, whereas with NAS, each application server uses the same file management system through a network sharing protocol (such as NFS or CIFS). In other words, NAS and SAN storage systems differ in that the NAS has its own file system management.
NAS focuses on applications, users, files, and the data they share. SAN focuses on disks, tapes, and the reliable infrastructure that connects them. A comprehensive solution, managed from the desktop systems through to the centralized storage devices, will in the future be NAS plus SAN.