Distributed File Storage FastDFS (VII): FastDFS Configuration Files Explained


When configuring FastDFS, modifying the configuration files is a very important step, and understanding the meaning of each item in them matters even more. I therefore referred to an excellent forum post and organized the explanations of the configuration files below. The original post is at http://bbs.chinaunix.net/thread-1941456-1-1.html; because it was written for an earlier version, I have updated it for the current version. The original post also did not cover client.conf, so I add that here.

The annotated configuration files can be downloaded here: http://download.csdn.net/detail/xingjiarong/9445515

1. tracker.conf

# Is this config file disabled? false means enabled, true means disabled.
# is this config file disabled
# false for enabled
# true for disabled
disabled=false

# Whether to bind an IP. bind_addr= is followed by the IP address to bind
# (often used for servers with multiple IPs where only one should serve).
# Leave it empty to bind all addresses of this host (usually fine).
# bind an address of this host
# empty for bind all addresses of this host
bind_addr=

# The tracker server port.
port=22122

# Connection timeout for the socket connect() call, in seconds.
# Default value is 30s.
connect_timeout=30

# Network timeout of the tracker server, in seconds. When sending or receiving
# data, if the data cannot be sent or received within this period, the network
# communication is considered failed. Default value is 30s.
network_timeout=60

# The base path to store data and log files. The root directory must already
# exist; subdirectories are created automatically.
# Directory and file structure of the tracker server:
# ${base_path}
#   |__data
#   |     |__storage_groups.dat: group information
#   |     |__storage_servers.dat: storage server list
#   |__logs
#         |__trackerd.log: tracker server log file
base_path=/home/yuqing/fastdfs

# In the data files storage_groups.dat and storage_servers.dat, records are
# separated by a newline character (\n) and fields by a Latin comma (,).
# Fields in storage_groups.dat:
#   1. group_name: group name
#   2. storage_port: storage server port number
# storage_servers.dat records information about each storage server; its fields are:
#   1. group_name: group name
#   2. ip_addr: IP address
#   3. status: status
#   4. sync_src_ip_addr: source server that synchronizes existing data files to this storage server
#   5. sync_until_timestamp: time up to which existing data files have been synchronized (Unix timestamp)
#   6. stat.total_upload_count: number of uploaded files
#   7. stat.success_upload_count: number of successfully uploaded files
#   8. stat.total_set_meta_count: number of metadata changes
#   9. stat.success_set_meta_count: number of successful metadata changes
#  10. stat.total_delete_count: number of deleted files
#  11. stat.success_delete_count: number of successfully deleted files
#  12. stat.total_download_count: number of downloaded files
#  13. stat.success_download_count: number of successfully downloaded files
#  14. stat.total_get_meta_count: number of metadata reads
#  15. stat.success_get_meta_count: number of successful metadata reads
#  16. stat.last_source_update: last source update time (update operations from clients)
#  17. stat.last_sync_update: last sync update time (update operations synced from other storage servers)
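Purely as an illustration of this newline/comma record format (this is not FastDFS code, and the sample content is hypothetical), a minimal Python sketch to parse storage_groups.dat-style records could look like this:

import io

# Hypothetical file content: one record per line, fields separated by commas.
sample = "group1,23000\ngroup2,23000\n"

def parse_groups(text):
    groups = []
    for line in io.StringIO(text):
        line = line.strip()
        if not line:
            continue
        group_name, storage_port = line.split(",")
        groups.append({"group_name": group_name, "storage_port": int(storage_port)})
    return groups

print(parse_groups(sample))
# [{'group_name': 'group1', 'storage_port': 23000}, {'group_name': 'group2', 'storage_port': 23000}]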
# Maximum number of connections the server supports.
# For v1.x a connection is served by one thread, so this equals the number of
# worker threads; for v2.x the maximum number of connections and the number of
# worker threads are independent.
max_connections=256

# Worker thread count, should be <= max_connections.
# Introduced in V2.0; usually set to the number of CPUs. Default value is 4.
work_threads=4

# How to select the group (volume) for uploads:
# 0: round robin
# 1: specify a group
# 2: load balance (select the group/volume with the most free space)
# If the application layer already specifies a fixed group for the upload, this
# parameter is bypassed.
store_lookup=2

# Which group to upload files to. When store_lookup is set to 1, this must be
# set to a group name that exists in the system; with the other upload modes
# this parameter has no effect.
store_group=group2

# Which storage server to upload to (after a file is uploaded, that storage
# server becomes the source server for the file, and it pushes the file to the
# other storage servers in the same group for synchronization):
# 0: round robin (default)
# 1: the first server ordered by IP address (the smallest IP)
# 2: the first server ordered by priority (upload priority is set on the
#    storage server via the parameter upload_priority)
store_server=0

# Which path (disk or mount point) of the storage server to upload to. A storage
# server can have multiple base paths (think of them as multiple disks) that hold files.
# 0: round robin, storing files across the directories in turn
# 2: load balance, select the path with the most free space
#    (note: free disk space changes over time, so the chosen directory or disk may vary)
store_path=0

# Which storage server to use as the download server:
# 0: round robin, any storage server holding the file may serve the download
# 1: the source storage server, i.e. the server the file was originally uploaded to
#    (see store_server above for how the source server is determined)
download_server=0

# Storage space reserved on each storage server for the system or other
# applications. You can use an absolute value or a percentage (the percentage
# form is supported since V4).
# If the free (available) space of any storage server in a group is
# <= reserved_storage_space, no file can be uploaded to this group.
# (Servers in a group back each other up, so as soon as one server in the group
# reaches this threshold, the threshold takes effect for the whole group.)
# Valid units:
#   G or g for gigabyte (GB)
#   M or m for megabyte (MB)
#   K or k for kilobyte (KB)
#   no unit for byte (B)
#   XX.XX% for a ratio, e.g. reserved_storage_space = 10%
reserved_storage_space = 10%
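To make the reservation rule concrete, here is a small Python sketch (illustrative arithmetic only, not FastDFS code; the free-space figures are hypothetical) applying the check described above:

def group_writable(free_bytes_per_server, total_bytes, reserved="10%"):
    """Return True if files may still be uploaded to the group.

    Per the rule above, a group stops accepting uploads as soon as ANY of its
    storage servers has free space <= reserved_storage_space.
    """
    if reserved.endswith("%"):
        reserved_bytes = total_bytes * float(reserved[:-1]) / 100.0
    else:
        reserved_bytes = float(reserved)  # absolute value in bytes (unit suffixes omitted here)
    return all(free > reserved_bytes for free in free_bytes_per_server)

TB, GB = 1024**4, 1024**3
# Hypothetical group of two servers with 1 TB disks, one of them down to 80 GB free:
print(group_writable([500 * GB, 80 * GB], total_bytes=1 * TB))   # False: 80 GB <= 10% of 1 TB
print(group_writable([500 * GB, 200 * GB], total_bytes=1 * TB))  # True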
# Log level. Standard log levels as in syslog, case insensitive. Value list:
#   emerg   for emergency
#   alert
#   crit    for critical
#   error
#   warn    for warning
#   notice
#   info
#   debug
log_level=info

# Unix group name to run this program as (leave empty to run as the group of
# the current user, i.e. the user that starts the process).
run_by_group=

# Unix username to run this program as (leave empty to run as the current user,
# i.e. the user that starts the process).
run_by_user=

# IP range allowed to connect to this tracker server (affects all types of
# connections, including clients and storage servers).
# allow_hosts can occur more than once; host can be a hostname or IP address,
# and "*" means match all IP addresses. Ranges are allowed, for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
allow_hosts=*

# Interval, in seconds, at which buffered log information is synced (flushed)
# to disk. Note: the tracker server log is not written to disk immediately; it
# is buffered in memory first. Default value is 10 seconds.
sync_log_buff_interval = 10

# Interval, in seconds, for detecting whether storage servers are alive.
# Storage servers periodically send heartbeats to the tracker server; if the
# tracker server has not received a heartbeat from a storage server within
# check_active_interval, it considers that storage server offline. Therefore
# this value must be greater than the heartbeat interval configured on the
# storage server, typically 2 or 3 times that interval.
check_active_interval = 120

# Thread stack size. The FastDFS server side is multi-threaded; the tracker
# server thread stack should be no less than 64KB. The larger the stack, the
# more system resources each thread occupies. If you want to start more threads
# (v1.x: max_connections, v2.0: work_threads), you can reduce this value.
thread_stack_size = 64KB

# Whether the cluster adjusts automatically when a storage server's IP address
# changes. Note: the adjustment only happens when the storage server process
# restarts. Default value is true.
storage_ip_changed_auto_adjust = true

# Introduced in V2.0. Maximum delay, in seconds, for synchronizing files between
# storage servers; the default is one day (86400). Adjust it to your situation.
# Note: this parameter does not affect the file synchronization process itself;
# it is only used as an (empirical) threshold when downloading files, to judge
# whether a file has finished synchronizing.
storage_sync_file_max_delay = 86400
# Introduced in V2.0. The maximum time, in seconds, a storage server needs to
# synchronize one file; default is 300 (5 minutes).
# Note: this parameter does not affect the file synchronization process itself;
# it is only used as an (empirical) threshold to judge whether the current file
# has finished synchronizing.
storage_sync_file_max_time = 300

# Introduced in V3.0. Whether to use trunk files to merge-store small files.
# Default value is false.
use_trunk_file = false

# Introduced in V3.0. The minimum number of bytes allocated per trunk file slot.
# For example, even if a file is only 16 bytes, the system still allocates
# slot_min_size bytes for it. Should be <= 4KB; default value is 256 bytes.
slot_min_size = 256

# Introduced in V3.0. Only files with size <= this value are merged into trunk
# files; a file larger than this value is saved directly as its own file (i.e.
# merged storage is not used). Should be > slot_min_size; default value is 16MB.
slot_max_size = 16MB

# Introduced in V3.0. Size of each trunk file used for merged storage, at least
# 4MB; default value is 64MB. Setting it too large is not recommended.
trunk_file_size = 64MB

# Whether to create trunk files in advance. Only when this is true do the three
# trunk_create_file_* parameters below take effect. Default value is false.
# Since V3.06.
trunk_create_file_advance = false

# Base time for creating trunk files in advance; 02:00 means the first creation
# happens at 2 o'clock in the morning. Time format: HH:MM. Since V3.06.
trunk_create_file_time_base = 02:00

# Interval, in seconds, between pre-creations of trunk files. If you only want
# to create them once per day, set this to 86400 (one day). Since V3.06.
trunk_create_file_interval = 86400
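The following Python sketch (illustrative only, based on the descriptions above rather than on FastDFS source code) shows how slot_min_size and slot_max_size interact when a file is stored with merged storage enabled:

def plan_storage(file_size, slot_min_size=256, slot_max_size=16 * 1024**2):
    """Decide, per the rules above, how a file would be stored.

    Files <= slot_max_size are merged into a trunk file and occupy a slot of at
    least slot_min_size bytes; larger files are stored as ordinary standalone files.
    """
    if file_size > slot_max_size:
        return ("standalone file", file_size)
    slot = max(file_size, slot_min_size)
    return ("trunk slot", slot)

print(plan_storage(16))             # ('trunk slot', 256)  -- padded up to slot_min_size
print(plan_storage(5 * 1024**2))    # ('trunk slot', 5242880)
print(plan_storage(100 * 1024**2))  # ('standalone file', 104857600)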
# Threshold of free trunk space when creating trunk files in advance. For
# example, if this parameter is 20G and the currently free trunk space is 4GB,
# only 16GB worth of trunk files needs to be created. While the free trunk file
# space is less than this threshold, trunk files will be created.
# Default value is 0. Since V3.06.
trunk_create_file_space_threshold = 20G

# Whether to check if free trunk space is already occupied when trunk free
# space is loaded at startup; occupied space entries are then ignored.
# Default value is false. Since V3.09.
# Notice: setting this parameter to true slows down the loading of the trunk
# space data at startup. Set it to true only when necessary.
trunk_init_check_occupying = false

# Whether to ignore storage_trunk.dat and unconditionally reload free trunk
# space information from the trunk binlog. By default FastDFS loads the free
# trunk space from the snapshot file storage_trunk.dat; the first line of that
# file records the trunk binlog offset, and loading then continues from that
# offset in the binlog.
# Default value is false. Since V3.10.
# Set it to true once when upgrading from a version older than V3.10.
trunk_init_reload_from_binlog = false

# Whether to use a server ID instead of the IP address as the storage server
# identity. Default value is false. Since V4.00.
use_storage_id = false

# This parameter only needs to be set when use_storage_id is true. The file
# specifies the group name, server ID and corresponding IP address; see the
# configuration example in the source directory: conf/storage_ids.conf.
# The path can be relative or absolute. Since V4.00.
storage_ids_filename = storage_ids.conf

# ID type of the storage server used in file names; there are two kinds, ip and
# id. Only valid when use_storage_id is set to true.
#   ip: the IP address of the storage server
#   id: the server ID of the storage server
# Default value is ip. Since V4.03.
id_type_in_filename = ip

# Whether slave files are stored using symbolic links. If set to true, a slave
# file occupies two entries: the original file and a symbolic link pointing to it.
# Default value is false. Since V4.01.
store_slave_file_use_link = false

# Whether to rotate the error log periodically; only daily rotation is supported.
# Default value is false. Since V4.02.
rotate_error_log = false

# Time of day at which the error log is rotated; only valid when
# rotate_error_log is true. Time format Hour:Minute, hour from 0 to 23, minute
# from 0 to 59. Default value is 00:00. Since V4.02.
error_log_rotate_time=00:00

# Rotate the error log by size: 0 means never rotate by file size; otherwise the
# log rotates to a new file once it exceeds this size. Default value is 0.
# Since V4.02.
rotate_error_log_size = 0

# Whether to use a connection pool. Default value is false. Since V4.05.
use_connection_pool = false

# Connections whose idle time exceeds this value (in seconds) are closed
# automatically. Default value is 3600. Since V4.05.
connection_pool_max_idle_time = 3600

# HTTP port used to provide HTTP service on this tracker server.
http.server_port=8080

# Interval, in seconds, for checking whether the storage HTTP servers are still
# alive; a value <= 0 means never check. Default value is 30.
http.check_alive_interval=30

# How to check that a storage HTTP server is alive; there are two types, tcp and http:
#   tcp:  only connect to the storage server's HTTP port, without sending a
#         request or reading the response
#   http: the storage server's check-alive URL must return an HTTP status code
# Default value is tcp.
http.check_alive_type=tcp

# URI/URL used for the storage HTTP server alive check.
# Note: the storage server's embedded HTTP server supports the URI /status.html.
http.check_alive_uri=/status.html
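As an illustration of the two check types, here is a small Python sketch (not FastDFS code; the host and port below are placeholders) that performs a tcp-style and an http-style liveness check against a storage server's embedded HTTP server:

import socket
import urllib.request

# Placeholder address of a storage server's embedded HTTP server.
STORAGE_HOST = "192.168.209.122"
STORAGE_HTTP_PORT = 8888

def check_alive_tcp(host, port, timeout=5):
    """tcp mode: only try to connect to the HTTP port, no request is sent."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_alive_http(host, port, uri="/status.html", timeout=5):
    """http mode: the check-alive URI must answer with a successful HTTP status."""
    try:
        with urllib.request.urlopen("http://%s:%d%s" % (host, port, uri), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

print(check_alive_tcp(STORAGE_HOST, STORAGE_HTTP_PORT))
print(check_alive_http(STORAGE_HOST, STORAGE_HTTP_PORT))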

2. storage.conf

# Is this config file disabled? false means enabled, true means disabled.
disabled=false

# The name of the group (volume) this storage server belongs to.
group_name=group1

# Whether to bind an IP. bind_addr= is followed by the IP address to bind
# (often used for servers with multiple IPs where only one should serve).
# Leave it empty to bind all addresses of this host (usually fine).
bind_addr=

# bind_addr above is for the server side; this parameter only takes effect when
# bind_addr is specified. It controls whether this storage server, acting as a
# client when connecting to other servers (the tracker server, other storage
# servers), also binds to bind_addr:
#   true:  bind the address configured by bind_addr
#   false: bind any address of this host
client_bind=true

# The storage server port.
port=23000

# Connection timeout for the socket connect() call, in seconds.
# Default value is 30s.
connect_timeout=30

# Network timeout of the storage server, in seconds. When sending or receiving
# data, if the data cannot be sent or received within this period, the network
# communication is considered failed. Default value is 30s.
network_timeout=60

# Heartbeat interval, in seconds (the storage server actively sends heartbeats
# to the tracker server).
heart_beat_interval=30

# Interval, in seconds, at which the storage server reports remaining disk
# space to the tracker server.
stat_report_interval=60

# The base path to store data and log files. The root directory must already
# exist; subdirectories are created automatically.
# (Note: this is not where uploaded files are stored; it used to be in earlier
# versions, but that changed in later versions.)
base_path=/home/yuqing/fastdfs

# Maximum number of concurrent connections the server supports when providing
# service. Default value is 256; the larger max_connections, the more memory is used.
max_connections=256

# Introduced in V2.0. Buffer size per queue node used to receive/send data; the
# memory consumed by the work queue = buff_size * max_connections. A larger
# value improves overall system performance, but the memory consumed should not
# exceed the physical memory of the system; on 32-bit systems, keep the memory
# used below 3GB. This parameter must be larger than 8KB; default value is 64KB.
# Since V2.00.
buff_size = 256KB

# Worker thread count; worker threads handle network IO and the count should be
# <= max_connections. Default value is 4. Since V2.00.
work_threads=4

# Introduced in V2.0. Whether disk IO reads and writes are separated; they are
# separated by default.
#   false: mixed reads and writes
#   true:  separated reads and writes
# Default value is true. Since V2.00.
disk_rw_separated = true
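As a quick sanity check on the buff_size comment above, here is a tiny Python sketch (illustrative arithmetic only) estimating the work-queue memory for the values shown:

# Work-queue memory = buff_size * max_connections (per the comment above).
buff_size = 256 * 1024        # 256KB
max_connections = 256

work_queue_bytes = buff_size * max_connections
print(work_queue_bytes / (1024 ** 2), "MB")   # 64.0 MB for the values above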
# Introduced in V2.0. Number of read threads per single storage path; default
# value is 1.
# Read threads in the system = disk_reader_threads * store_path_count
# With mixed reads and writes, IO threads in the system =
#   (disk_reader_threads + disk_writer_threads) * store_path_count
# For mixed read/write this parameter can be 0. Since V2.00.
disk_reader_threads = 1

# Introduced in V2.0. Number of write threads per single storage path; default
# value is 1.
# Write threads in the system = disk_writer_threads * store_path_count
# With mixed reads and writes, IO threads in the system =
#   (disk_reader_threads + disk_writer_threads) * store_path_count
# For mixed read/write this parameter can be 0. Since V2.00.
disk_writer_threads = 1
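A minimal worked example of those two formulas (plain arithmetic, not FastDFS code):

disk_reader_threads = 1
disk_writer_threads = 1
store_path_count = 1          # see store_path_count below

# Separated read/write (disk_rw_separated = true):
read_threads = disk_reader_threads * store_path_count
write_threads = disk_writer_threads * store_path_count

# Mixed read/write (disk_rw_separated = false):
mixed_io_threads = (disk_reader_threads + disk_writer_threads) * store_path_count

print(read_threads, write_threads, mixed_io_threads)   # 1 1 2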
# When synchronizing files, if no entry to be synchronized can be read from the
# binlog, sleep this many milliseconds before reading again; 0 means no sleep,
# retry immediately. Because of CPU consumption, setting it to 0 is not
# recommended. If you want synchronization to run as fast as possible, set this
# parameter to a small value such as 10ms. Must be > 0; default value is 200ms.
sync_wait_msec=50

# Time interval, in milliseconds, between finishing the synchronization of one
# file and starting the next; 0 means no sleep, synchronize the next file
# directly (never call usleep).
sync_interval=0

# The next two parameters are explained together: the time window of the day
# during which synchronization is allowed (the default is the whole day).
# Typically used to avoid synchronizing during peak hours; sysadmins will
# understand. Time format Hour:Minute, hour from 0 to 23, minute from 0 to 59.
sync_start_time=00:00
sync_end_time=23:59

# After synchronizing this many files, sync the storage mark files to disk.
# Note: if the mark file content has not changed, it is not synced.
# Default value is 500.
write_mark_file_freq=500
# A storage server supports multiple paths (e.g. disks) for storing files. This
# configures the number of base paths used to store files; usually there is
# only one directory. Default value is 1.
store_path_count=1

# Configure each store_path entry one by one; the index is 0-based. Note the
# numbering of the entries: store_path0, store_path1, ..., and you need to
# configure entries 0 through store_path_count - 1.
# If store_path0 is not configured, it takes the same value as base_path.
# The paths must already exist.
store_path0=/home/yuqing/fastdfs
#store_path1=/home/yuqing/fastdfs2
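For example, a hypothetical storage server with two disks, following the numbering rule above, would be configured like this:

store_path_count=2
store_path0=/home/yuqing/fastdfs
store_path1=/home/yuqing/fastdfs2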
# FastDFS stores files using a two-level directory structure. This configures
# the number of directories used to store files (you can see this mechanism by
# looking at the directories where stored files end up). If this parameter is N
# (e.g. 256), the storage server automatically creates N * N subdirectories
# when it runs for the first time.
# subdir_count_per_path * subdir_count_per_path directories will be auto-created
# under each store_path (disk); the value can be 1 to 256, default value is 256.
subdir_count_per_path=256

# List of tracker servers; remember to include the port (the storage server
# actively connects to the tracker servers). When there are multiple tracker
# servers, write one tracker_server line per tracker server.
# tracker_server can occur more than once; the format is "host:port", where
# host can be a hostname or an IP address.
tracker_server=192.168.209.121:22122

# Log level. Standard log levels as in syslog, case insensitive. Value list:
#   emerg   for emergency
#   alert
#   crit    for critical
#   error
#   warn    for warning
#   notice
#   info
#   debug
log_level=info
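As noted for tracker_server above, with more than one tracker each server gets its own line; a hypothetical two-tracker setup would look like this:

tracker_server=192.168.209.121:22122
tracker_server=192.168.209.122:22122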
