MooseFS is very good; after half a month of practical use it has proven easy to use and stable, and it handles small files very efficiently.
MogileFS is reportedly well suited to storing images for Web 2.0 applications.
GlusterFS: the marketing seems better than the product itself.
OpenAFS/Coda are quite distinctive systems.
Lustre is complex and efficient, suited to large clusters.
PVFS2 works well with custom applications; the Dawning supercomputer's parallel file system is said to be based on PVFS.
For a general-purpose file system, MooseFS, GlusterFS, and Lustre are the suitable choices.
================================================================
dCache
- Depends on PostgreSQL
XtreemFS
* Server side is implemented in Java
- Performance is not great
CloudStore (KosmosFS)
+ One of the distributed file systems supported as a Hadoop backend
- File metadata is not supported
- kfs_fuse is too slow to be usable
- Many build dependencies, poor documentation, only rudimentary scripts
- Development is inactive
MooseFS
+ Supports file metadata
+ mfsmount is very easy to use
+ Few build dependencies, complete documentation, very good default configuration
+ Prefixing an entry in mfshdd.cfg with * migrates that disk's chunks to other chunk servers, so a chunk server can be retired safely
+ Chunk servers need not use the same file system format or have matching capacities
+ Development is very active
+ Can run as a non-root user
+ Can be expanded online
+ Supports a recycle bin
+ Supports snapshots
- The master server is a single point of failure
- The master server consumes a lot of memory
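The mfshdd.cfg removal mechanism and the mount step above can be sketched as follows; the disk paths and mount point are hypothetical examples, assuming a default MooseFS install where the master is reachable under the name mfsmaster:

```shell
# /etc/mfs/mfshdd.cfg on a chunk server lists one storage path per line.
# Prefixing a path with '*' marks it for removal: its chunks are
# replicated to other chunk servers before the disk is retired.
#
#   /mnt/hdd1
#   */mnt/hdd2      <- marked for safe removal

# Mounting the file system on a client (mfsmount ships with MooseFS):
mfsmount /mnt/mfs -H mfsmaster
```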
MogileFS
- Not suitable as a general-purpose file system; suited to storing static read-only small files such as images
GlusterFS (from its feature list)
+ No single point of failure
+ Supports a recycle bin
+ Modular, layered architecture
- Requires a specific file system format on storage nodes: EXT3/EXT4/ZFS are officially supported, XFS/JFS may work, ReiserFS has been tested (system requirements)
- Must run as root (it relies on trusted.* xattrs, so mounting with the user_xattr option does not help; the official explanation is that glusterfsd needs to create files owned by different users, which requires root privileges)
- Cannot be expanded online (storage nodes cannot be added without unmounting); planned for 3.1
- Distributes storage at whole-file granularity; striped distribution is immature
GFS2
http://sourceware.org/cluster/wiki/DRBD_Cookbook
http://www.smop.co.uk/blog/index.php/2008/02/11/gfs-goodgrief-wheres-the-documentation-file-system/
http://wiki.debian.org/kristian_jerpetjoen
http://longvnit.com/blog/?p=941
http://blog.chinaunix.net/u1/53728/showart_1073271.html (high-availability solution based on Red Hat RHEL5u2 gfs2+iscsi+xen+cluster)
http://www.yubo.org/blog/?p=27 (iscsi+clvm+gfs2+xen+cluster)
http://linux.chinaunix.net/bbs/thread-777867-1-1.html
* Not a distributed file system but a shared-disk cluster file system: it requires some mechanism to share disks between machines plus a lock manager, so it relies on DRBD/iSCSI/CLVM/ddraid/GNBD for disk sharing and on DLM for lock management
- Depends on Red Hat Cluster Suite (Debian: aptitude install redhat-cluster-suite; graphical configuration tools: system-config-cluster, system-config-lvm)
- Suitable for small clusters of around 30 nodes; the larger the cluster, the greater the DLM overhead; the default configuration is for 8 nodes
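The shared-disk-plus-DLM architecture described above shows up directly in how a GFS2 volume is created and mounted; a minimal sketch, assuming a cluster named mycluster and a shared block device at /dev/drbd0 (both names are placeholders, and the cluster suite must already be configured):

```shell
# Create a GFS2 file system using the DLM lock manager.
# -t <clustername>:<fsname> must match the cluster configuration;
# -j 2 allocates one journal per node (two nodes assumed here).
mkfs.gfs2 -p lock_dlm -t mycluster:shared0 -j 2 /dev/drbd0

# Every node then mounts the SAME device; DLM coordinates concurrent
# access, which is why the device must be shared (DRBD/iSCSI/GNBD/...).
mount -t gfs2 /dev/drbd0 /mnt/gfs2
```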
OCFS2
* Oracle's counterpart to GFS, said to perform better than GFS2 (Debian: aptitude install ocfs2-tools; graphical configuration tool: ocfs2console)
- No ACL or flock support; designed purely for the Oracle database
OpenAFS
+ Mature and stable
+ Actively developed; supports Unix/Linux/Mac OS X/Windows
- Performance is not good enough
Coda
* Files are copied from the server to the local machine, so reads and writes are local operations and therefore efficient
* Changes are sent to the server after the file is closed
+ Supports disconnected operation; changes are synced to the server on reconnect
- Caching is per-file rather than per-block, so opening a file must wait for the whole file to be fetched from the server into the local cache
- Concurrent writes cause version conflicts
- Concurrent reads have high latency and require the writing client to close the file first, so it is unsuitable for e.g. tail -f some.log
- A research project, not mature enough for wide use
PVFS2
http://blog.csdn.net/yfw418/archive/2007/07/06/1680930.aspx
* High performance
- No lock mechanism and does not conform to POSIX semantics, so applications must coordinate access themselves; not suitable as a general-purpose file system
(pvfs2-guide chapter 5: PVFS2 User APIs and Semantics)
- Static configuration; cannot be expanded dynamically
Lustre
* Suited to large clusters
+ Very high performance
+ Supports dynamic expansion
- Requires kernel patches; deeply dependent on the Linux kernel and the ext3 file system
Hadoop HDFS
* Writes are cached locally and sent to the server once they reach a certain size (in MB)
- Not suitable as a general-purpose file system
FastDFS
- Usable only through its API; FUSE is not supported
NFSv4 Referrals
+ Simple
- No load balancing or fault tolerance
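A referral is configured on the NFS server in /etc/exports with the refer= option; the paths and the host name fileserver2 below are hypothetical examples:

```shell
# /etc/exports on the referring server: an NFSv4 client that walks
# into /export/data is redirected to /data on fileserver2.
# Note the static target: if fileserver2 is down or overloaded, the
# client has no alternative (hence no load balancing or fault tolerance).
/export/data  *(ro,refer=/data@fileserver2)
```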
NFSv4.1 pNFS
- Not yet widely adopted
spNFS
* An implementation of pNFS on Linux
Ceph (http://ceph.newdream.net/)
- In early development; unstable
- Depends on Btrfs