Testing the FastDFS distributed storage module for nginx


Looking over the recent FastDFS updates, I noticed that the fastdfs-nginx-module_v1.01.tar.gz nginx module has been released, so I tried it out on a test machine today. We are considering using it to replace Lustre, which is a waste of resources for our use case.

Environment:

storage1: 192.168.6.100
storage2: 192.168.6.101
tracker:  192.168.6.102

1. Download and install FastDFS on each machine

# download
$> wget http://fastdfs.googlecode.com/files/FastDFS_v2.04.tar.gz
# decompress
$> tar zxvf FastDFS_v2.04.tar.gz
$> cd FastDFS
# Because the fastdfs module is added to nginx, FastDFS itself does not need HTTP support,
# so there is no need to uncomment #WITH_HTTPD=1; just compile directly.
$> ./make.sh
$> ./make.sh install
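If the build succeeds, make.sh installs the FastDFS binaries under /usr/local/bin and the sample configuration files under /etc/fdfs, which are the paths the later steps assume. A quick sanity check on each machine (just a sketch):

$> ls /usr/local/bin/fdfs_trackerd /usr/local/bin/fdfs_storaged
$> ls /etc/fdfs/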

2. Modify the configuration files of the tracker and storage servers.
# Modify tracker.conf

tracker $> vim /etc/fdfs/tracker.conf
disabled = false                    # whether this config file is disabled
# bind_addr = 192.168.6.102        # IP address to bind to
port = 22122                        # service port
connect_timeout = 30                # connection timeout, in seconds
network_timeout = 60                # network timeout of the tracker server, in seconds
base_path = /home/yangzi            # base directory; data (storage server information) and logs are kept here
max_connections = 256               # maximum number of connections
work_threads = 4                    # number of worker threads, usually set to the number of CPUs
store_lookup = 2                    # how the group (volume) for an upload is chosen:
                                    # 0: round robin; 1: a specified group; 2: load balancing (the group with the most free space)
                                    # if the application specifies the group itself, this parameter is bypassed
store_group = group1                # only takes effect when store_lookup = 1 (a group name is specified);
                                    # must be a group that exists in the system, otherwise it is ignored
store_server = 0                    # which storage server in the group handles the upload (that server becomes the
                                    # source of the file and pushes it to the other servers in the same group):
                                    # 0: round robin; 1: the first server ordered by IP (smallest IP); 2: by priority
                                    # (the upload priority is set on the storage server via upload_priority)
store_path = 0                      # which store path (directory) on the storage server receives the upload; a storage
                                    # server can have several base paths for files (think of them as separate disks):
                                    # 0: round robin over the paths; 2: the path with the most free space
                                    # (note: free space changes over time, so the chosen path may change too)
download_server = 0                 # which storage server serves downloads:
                                    # 0: round robin, any storage server that has the file; 1: the source storage server
                                    # (the one the file was originally uploaded to)
reserved_storage_space = 4GB        # space reserved on each storage server for the system or other applications;
                                    # within a group the server with the least free space sets the limit, because every
                                    # server in the group keeps a full copy of the files
log_level = info                    # log level
run_by_group =                      # OS group that runs FastDFS
run_by_user =                       # OS user that runs FastDFS
allow_hosts = *                     # IP range allowed to connect to this tracker server
                                    # (applies to all connections, including clients and storage servers)
sync_log_buff_interval = 10         # interval, in seconds, for flushing log buffers to disk
                                    # note: tracker logs are written to memory first, not straight to disk
check_active_interval = 120         # interval, in seconds, for checking whether storage servers are alive;
                                    # storage servers send heartbeats periodically, and if none arrives within this
                                    # interval the server is considered offline, so this value must be larger than the
                                    # storage heartbeat interval, usually 2-3 times larger
thread_stack_size = 64KB            # thread stack size; the FastDFS server uses a thread-per-connection model and the
                                    # tracker thread stack should not be smaller than 64KB;
                                    # a larger stack means each thread uses more system resources, so to run more threads
                                    # (max_connections in V1.x, work_threads in V2.0) this value can be reduced
storage_ip_changed_auto_adjust = true   # whether the cluster adjusts automatically when a storage server's IP changes;
                                        # the adjustment only completes after the storage server process restarts
storage_sync_file_max_delay = 86400 # introduced in V2.0: maximum delay for synchronizing a file between storage servers, default 1 day
storage_sync_file_max_time = 300    # introduced in V2.0: maximum time allowed to synchronize one file, default 300s (5 minutes)
http.disabled = true                # disable the built-in HTTP service (HTTP support was not compiled in anyway, since
                                    # WITH_HTTPD was left commented out)
http.server_port = 80               # HTTP service port
# the following parameters only matter when the HTTP service is enabled
http.check_alive_interval = 30
http.check_alive_type = tcp
http.check_alive_uri = /status.html
http.need_find_content_type = true
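Most of these values can stay at their defaults; only a few actually matter for this test. A quick way to double-check the ones the rest of the setup depends on (a sketch, assuming the file sits at /etc/fdfs/tracker.conf as above):

tracker $> grep -E '^(port|base_path|store_lookup|http.disabled)' /etc/fdfs/tracker.conf
port = 22122
base_path = /home/yangzi
store_lookup = 2
http.disabled = true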

# Modify storage.conf on both storage servers

storage $> vim /etc/fdfs/storage.conf
disabled = false                    # whether this config file is disabled
group_name = group1                 # name of the storage group (volume)
bind_addr = 192.168.6.100           # IP address to bind to; on the other storage server this is 192.168.6.101
client_bind = true                  # bind_addr is normally for the server side; this only takes effect when bind_addr is set
port = 23000                        # storage service port
connect_timeout = 30                # connection timeout (for the socket connect call), in seconds
network_timeout = 60                # storage server network timeout, in seconds
heart_beat_interval = 30            # heartbeat interval, in seconds
stat_report_interval = 60           # interval, in seconds, for reporting the remaining disk space to the tracker server
base_path = /home/eric              # base directory; the root directory must exist, the subdirectories (data for the
                                    # files, logs for the log files) are created automatically
max_connections = 256               # maximum number of connections
buff_size = 256KB                   # buffer size of a queue node
work_threads = 4                    # number of worker threads
disk_rw_separated = true            # whether disk read and write I/O are separated (the default)
disk_reader_threads = 1             # number of read threads per store path, default 1
disk_writer_threads = 1             # number of write threads per store path, default 1
sync_wait_msec = 200                # when synchronizing files, if nothing can be read from the binlog, sleep this many
                                    # milliseconds before reading again; 0 means do not sleep and retry immediately
sync_interval = 0                   # interval, in milliseconds, between finishing one file and starting the next;
                                    # 0 means no sleep, synchronize the next file directly
sync_start_time = 00:00
sync_end_time = 23:59               # time window for synchronization (all day by default); usually used to keep
                                    # synchronization away from peak hours, as any SA will understand
write_mark_file_freq = 500          # how often the storage mark file is flushed to disk
store_path_count = 1                # a storage server can store files on several paths (e.g. several disks); this is the
                                    # number of base paths for files, usually just one
store_path0 = /home/eric            # the store paths, indexed from 0: store_path0, store_path1, ... up to
                                    # store_path_count-1; if store_path0 is not set it defaults to base_path
subdir_count_per_path = 32          # FastDFS uses two levels of subdirectories under each store path; this is the number
                                    # of subdirectories per level
tracker_server = 192.168.6.102:22122    # the tracker server from the environment above
log_level = info                    # log level
run_by_group =                      # OS group that runs the storage server
run_by_user =                       # OS user that runs the storage server
allow_hosts = *                     # IP range allowed to connect
file_distribute_path_mode = 0       # how files are distributed over the data directories:
                                    # 0: round robin; 1: random
file_distribute_rotate_count = 100  # only takes effect when file_distribute_path_mode = 0 (round robin): once a directory
                                    # holds this many files, subsequent uploads go to the next directory
fsync_after_written_bytes = 0       # when writing large files, call fsync after every N bytes to force the content to
                                    # disk; 0 means never call fsync explicitly
sync_log_buff_interval = 10         # interval, in seconds, for flushing log buffers to disk
sync_binlog_buff_interval = 60      # interval, in seconds, for flushing the binlog (update operation log) to disk
sync_stat_file_interval = 300       # interval, in seconds, for flushing the storage stat file to disk
thread_stack_size = 512KB           # thread stack size; the FastDFS server uses a thread-per-connection model, and a
                                    # larger stack means each thread uses more system resources
upload_priority = 10                # priority of this storage server as an upload source; can be negative, and the
                                    # smaller the value the higher the priority
if_alias_prefix =
check_file_duplicate = 0            # whether to check if an uploaded file already exists; if it does, the content is not
                                    # stored again and a symbolic link is created instead to save disk space; requires
                                    # FastDHT. 1: check, 0: do not check (we do not use FastDHT, so 0)
key_namespace = FastDFS             # namespace in FastDHT, used when check_file_duplicate is 1 (yes/true/on also work)
keep_alive = 0                      # whether to keep persistent connections to the FastDHT servers
# HTTP settings
http.disabled = true
http.domain_name =
http.server_port = 80
http.trunk_size = 256KB
http.need_find_content_type = true
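As with the tracker, only a handful of these settings have to line up with the rest of the setup: the group name, the base/store path that nginx will later serve, and the tracker address. A quick check on each storage server (a sketch):

storage $> grep -E '^(group_name|port|base_path|store_path0|tracker_server)' /etc/fdfs/storage.conf
group_name = group1
port = 23000
base_path = /home/eric
store_path0 = /home/eric
tracker_server = 192.168.6.102:22122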

3. Create the base_path directories for the tracker and the storage servers

# tracker
tracker $> mkdir -p /home/yangzi
# storage (on both storage servers)
storage $> mkdir -p /home/eric

4. Download nginx and the fastdfs-nginx-module on the storage servers. For example, I downloaded them on 192.168.6.100.

storage $> wget http://www.nginx.org/download/nginx-0.8.53.tar.gz
storage $> svn export http://fastdfs-nginx-module.googlecode.com/svn/trunk/ fastdfs-nginx-module-read-only

5. Compile and install nginx with the fastdfs-nginx-module

storage $> tar zxvf nginx-0.8.53.tar.gz
storage $> cd nginx-0.8.53
storage $> ./configure --prefix=/usr/local/nginx --add-module=/root/fastdfs-nginx-module-read-only/src
storage $> make
storage $> make install
# copy mod_fastdfs.conf to /etc/fdfs/
storage $> cp /root/fastdfs-nginx-module-read-only/src/mod_fastdfs.conf /etc/fdfs/
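The module reads /etc/fdfs/mod_fastdfs.conf when nginx starts, so the copied file should be edited to point at this environment before going further. The exact parameter set depends on the module version, so treat the lines below as a sketch based on the sample file shipped with the module and check the names against the copy in /etc/fdfs/:

storage $> vim /etc/fdfs/mod_fastdfs.conf
# values assumed to match the setup above; parameter names follow the bundled sample file and may differ between versions
base_path = /home/eric
tracker_server = 192.168.6.102:22122
storage_server_port = 23000
group_name = group1
store_path_count = 1
store_path0 = /home/eric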

6. Modify the nginx configuration file to add the following location block

storage $> vim /usr/local/nginx/conf/nginx.conf

# Add
location /M00 {
    alias /home/eric/data;
    ngx_fastdfs_module;
}
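Before starting anything, the edited configuration can be verified with nginx's built-in syntax check:

storage $> /usr/local/nginx/sbin/nginx -t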

7. Create a symbolic link in the storage data directory

storage $> ln -s /home/eric/data /home/eric/data/M00
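A quick look at the link confirms that it points back into the data directory, so a path containing M00 resolves to the same files:

storage $> ls -ld /home/eric/data/M00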

8. Start the tracker, both storage daemons, and nginx

# start the tracker
tracker $> /usr/local/bin/fdfs_trackerd /etc/fdfs/tracker.conf
# start storage1
storage $> /usr/local/bin/fdfs_storaged /etc/fdfs/storage.conf
# start storage2
storage2 $> /usr/local/bin/fdfs_storaged /etc/fdfs/storage.conf
# start nginx on the storage servers
storage $> /usr/local/nginx/sbin/nginx
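To confirm everything came up, check that the tracker (port 22122), the storage daemons (port 23000) and nginx (port 80) are listening; a sketch, assuming netstat is available on the test machines:

tracker $> netstat -lntp | grep 22122
storage $> netstat -lntp | grep -E '23000|:80'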

9. Upload a file to test.
# Modify the client configuration file

storage $> vim /etc/fdfs/client.conf
connect_timeout = 30
network_timeout = 60
base_path = /home/yangzi
tracker_server = 192.168.6.102:22122
log_level = info
# the following parameter does not matter, since the HTTP service is not used anyway
http.tracker_server_port = 80
storage $> vim a.html
test FastDFS!
storage $> /usr/local/bin/fdfs_test /etc/fdfs/client.conf upload a.html
This is FastDFS client test program v2.04
Copyright (C) 2008, Happy Fish / YuQing
FastDFS may be copied only under the terms of the GNU General
Public License V3, which may be found in the FastDFS source kit.
Please visit the FastDFS Home Page http://www.csource.org/
for more detail.

base_path=/home/yangzi, connect_timeout=30, network_timeout=60, tracker_server_count=1, anti_steal_token=0, anti_steal_secret_key length=0

tracker_query_storage_store_list_without_group:
        server 1. group_name=group1, ip_addr=192.168.6.100, port=23000

group_name=group1, ip_addr=192.168.6.100, port=23000
storage_upload_by_filename
group_name=group1, remote_filename=M00/00/00/wKgGvEz3Y9MAAAAAAAAADigvbpc73.html
source ip address: 192.168.6.100
file timestamp=2010-12-02 17:16:03
file size=14
file crc32=674197143
file url: http://192.168.6.100/group1/M00/00/00/wKgGvEz3Y9MAAAAAAAAADigvbpc73.html
storage_upload_slave_by_filename
group_name=group1, remote_filename=M00/00/00/wKgGvEz3Y9MAAAAAAAAADigvbpc73_big.html
source ip address: 192.168.6.100
file timestamp=2010-12-02 17:16:03
file size=14
file crc32=674197143
file url: http://192.168.6.100/group1/M00/00/00/wKgGvEz3Y9MAAAAAAAAADigvbpc73_big.html

Finally, open the file URL in a browser to confirm that nginx serves the uploaded file.
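The same check can be done from the command line. Note that with the location /M00 block used above, the group name may not be part of the nginx path, so the /group1 prefix in the URL printed by fdfs_test may need to be dropped; a sketch reusing the file name from the test output:

$> curl http://192.168.6.100/M00/00/00/wKgGvEz3Y9MAAAAAAAAADigvbpc73.html

If it returns the contents of a.html (the 14-byte "test FastDFS!" file), nginx and the module are serving the uploaded file correctly.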

This article is from the "linuxer" blog; please be sure to keep this source: http://deidara.blog.51cto.com/400447/440175
