Distributed File Storage FastDFS (VII): The FastDFS Configuration Files


When configuring FastDFS, it is important to understand what each item in the configuration files means before you change it, so I consulted an expert's post and collated the explanations below. The original post is at http://bbs.chinaunix.net/thread-1941456-1-1.html. Because it was written against an earlier version, I have updated the notes for the current version; the original post also did not cover client.conf, which I have added here.

One, tracker.conf

# Whether this config file is disabled: false means enabled, true means disabled.
disabled=false

# The address to bind (commonly used on hosts with multiple IPs where only one should serve).
# Leave it empty to bind all addresses of this host (usually fine).
bind_addr=

# The tracker server port.
port=22122

# Connect timeout in seconds for the socket connect() call. Default is 30s.
connect_timeout=30

# Network timeout in seconds for the tracker server. If no data can be sent or received within
# this time, the communication is considered failed. Default is 30s.
network_timeout=60

# The base path to store data and log files. The root directory must already exist;
# subdirectories are created automatically.
base_path=/home/yuqing/fastdfs

# Directory and file structure on the tracker server:
# ${base_path}
#   |__data
#   |    |__storage_groups.dat: group information
#   |    |__storage_servers.dat: storage server list
#   |__logs
#        |__trackerd.log: tracker server log file
#
# Records in storage_groups.dat and storage_servers.dat are separated by newlines (\n),
# and fields within a record are separated by commas (,).
#
# Fields in storage_groups.dat:
#   1. group_name: group name
#   2. storage_port: storage server port
#
# Fields in storage_servers.dat (one record per storage server):
#   1.  group_name: the group it belongs to
#   2.  ip_addr: IP address
#   3.  status: status
#   4.  sync_src_ip_addr: the source server from which existing data files are synchronized to this storage server
#   5.  sync_until_timestamp: the time (Unix timestamp) up to which existing data files have been synchronized
#   6.  stat.total_upload_count: number of uploaded files
#   7.  stat.success_upload_count: number of successfully uploaded files
#   8.  stat.total_set_meta_count: number of metadata changes
#   9.  stat.success_set_meta_count: number of successful metadata changes
#   10. stat.total_delete_count: number of deleted files
#   11. stat.success_delete_count: number of successfully deleted files
#   12. stat.total_download_count: number of downloaded files
#   13. stat.success_download_count: number of successfully downloaded files
#   14. stat.total_get_meta_count: number of metadata reads
#   15. stat.success_get_meta_count: number of successful metadata reads
#   16. stat.last_source_update: last source update time (an update coming from a client)
#   17. stat.last_sync_update: last synchronization update time (an update synchronized from another storage server)
# The maximum number of connections the server supports when providing service.
# For v1.x each connection is served by one thread, so this is also the worker thread count;
# for v2.x the maximum number of connections and the number of worker threads are independent.
max_connections=256

# Worker thread count, should be <= max_connections. Introduced in v2.0; usually set to the
# number of CPUs. Default is 4.
work_threads=4

# How to select the group (volume) to upload files to:
#   0: round robin
#   1: specify a group
#   2: load balance (select the group with the most free space)
# If the application layer specifies the group to upload to, this parameter is bypassed.
store_lookup=2

# Which group to upload to. Only effective when store_lookup is set to 1, and in that case it
# must be set to a group name that exists in the system; with any other upload method it is
# ignored.
store_group=group2

# Which storage server within the group to upload to. After a file is uploaded, that server
# becomes the source server for the file and pushes it to the other storage servers in the group.
#   0: round robin (default)
#   1: the first server ordered by IP address (the smallest IP)
#   2: the first server ordered by priority (upload_priority, set on the storage server; the
#      minimal value wins)
store_server=0

# Which path (disk or mount point) on the storage server to upload to. A storage server can have
# multiple base paths (think: multiple disks) for storing files.
#   0: round robin across the paths
#   2: load balance (select the path with the most free space; note that free space is dynamic,
#      so the chosen path or disk may change over time)
store_path=0

# Which storage server to download files from:
#   0: round robin (any storage server in the group may serve the file)
#   1: the source storage server the file was uploaded to (see store_server above for how the
#      source server is determined)
download_server=0
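For instance, if every upload should land in one fixed group, store_lookup and store_group work together; a minimal sketch, reusing the sample group name group2 from this file:

# always upload to an explicitly named group
store_lookup=1
store_group=group2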
# Storage space reserved on the storage servers so that the system or other applications still
# have room. The value can be an absolute size or a percentage (percentages are supported since V4).
# If the free (available) space of any storage server in a group drops to or below
# reserved_storage_space, no more files can be uploaded to that group. (Because the servers in a
# group mirror each other, the smallest free space in the group is what counts: as soon as one
# server in the group hits this limit, the limit takes effect for the whole group.)
# Accepted units:
#   G or g for gigabytes (GB)
#   M or m for megabytes (MB)
#   K or k for kilobytes (KB)
#   no unit for bytes (B)
#   XX.XX% for a ratio, e.g. reserved_storage_space = 10%
reserved_storage_space = 10%

# Log level. Standard syslog levels, case insensitive:
#   emerg for emergency, alert, crit for critical, error, warn for warning, notice, info, debug
log_level=info
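A short sketch of the value forms reserved_storage_space accepts (the 20G figure is purely illustrative):

# keep at least 10% of the disk free on every server in the group
reserved_storage_space = 10%
# or reserve an absolute amount instead, e.g. 20 gigabytes
#reserved_storage_space = 20G

With a 1 TB disk and the 10% setting, uploads to the group stop once free space on any server in that group falls to roughly 100 GB.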
# The Unix group name the FastDFS processes run as. If not set (empty), the processes run with
# the group of the user who starts them.
run_by_group=

# The Unix username the FastDFS processes run as. If not set (empty), the processes run as the
# user who starts them.
run_by_user=

# Which hosts are allowed to connect to this tracker server. This applies to all connection
# types, including clients and storage servers. allow_hosts can occur more than once; a host can
# be a hostname or an IP address, "*" matches all addresses, and ranges are supported, e.g.:
#   allow_hosts=10.0.1.[1-15,20]
#   allow_hosts=host[01-08,20-25].domain.com
allow_hosts=*

# Interval in seconds for flushing the log buffer to disk. Note that the tracker server's log is
# not written to disk immediately; it is buffered in memory first. Default is 10 seconds.
sync_log_buff_interval = 10

# Interval in seconds for checking whether storage servers are still alive. Storage servers send
# heartbeats to the tracker server periodically; if the tracker receives no heartbeat from a
# storage server within check_active_interval, it considers that storage server offline. This
# value must therefore be greater than the heartbeat interval configured on the storage servers,
# typically two or three times the storage server's heart_beat_interval.
check_active_interval = 120

# Thread stack size; should be >= 64KB. The FastDFS server is multi-threaded, and the larger the
# stack, the more system resources each thread consumes. If you want to start more threads
# (max_connections in v1.x, work_threads in v2.0), you can reduce this value appropriately.
# Default is 64KB.
thread_stack_size = 64KB

# Whether the cluster adjusts automatically when a storage server's IP address changes.
# Note: the adjustment only completes after the storage server process restarts. Default is true.
storage_ip_changed_auto_adjust = true

# Introduced in V2.0. Maximum delay in seconds for synchronizing files between storage servers;
# default is 86400 seconds (one day). Adjust it to your situation. Note: this parameter does not
# affect the synchronization process itself; it is only a threshold (an empirical value) used
# when a file is downloaded to decide whether the file has already been synchronized.
storage_sync_file_max_delay = 86400
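As the note on check_active_interval above says, the tracker's check interval must exceed the storage servers' heartbeat interval; with the sample values used in this article the margin is a comfortable 4x:

# tracker.conf
check_active_interval = 120
# storage.conf (see section Two below)
heart_beat_interval=30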
# Introduced in V2.0. The maximum time in seconds a storage server needs to synchronize a single
# file; default is 300 seconds (5 minutes). Note: this parameter does not affect the
# synchronization process itself; it is only used as a threshold (an empirical value) to decide
# whether the current file is still being synchronized.
storage_sync_file_max_time = 300

# Introduced in V3.0. Whether to use trunk files to merge-store small files; default is false
# (off).
use_trunk_file = false

# Introduced in V3.0. The minimum number of bytes (slot size) allocated in a trunk file. Even if
# a file is only 16 bytes, the system still allocates slot_min_size bytes for it. Should be
# <= 4KB; default is 256 bytes.
slot_min_size = 256
# Introduced in V3.0. Only files whose size is <= this value are merge-stored; a file larger than
# this value is stored directly as an individual file (no merge storage). Should be
# > slot_min_size; default is 16MB.
slot_max_size = 16MB

# Introduced in V3.0. The size of each trunk file used for merge storage; at least 4MB, default
# is 64MB. Setting it too large is not recommended.
trunk_file_size = 64MB

# Whether to create trunk files in advance. Only when this is true do the following three
# trunk_create_file_* parameters take effect. Default is false; since V3.06.
trunk_create_file_advance = false

# The base time of day for creating trunk files in advance; 02:00 means the first creation run of
# the day starts at 2 a.m. Time format is HH:MM; default is 02:00; since V3.06.
trunk_create_file_time_base = 02:00

# The interval in seconds between trunk file creation runs. To create only once per day, set it
# to 86400 (one day). Since V3.06.
trunk_create_file_interval = 86400
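Putting the merge-storage knobs together, a hedged sketch of what enabling trunk storage for small files might look like (the values are only illustrative; keep the constraints noted above: slot_min_size <= 4KB, slot_max_size > slot_min_size, trunk_file_size >= 4MB):

# merge-store files of 16MB or less into 64MB trunk files
use_trunk_file = true
slot_min_size = 256
slot_max_size = 16MB
trunk_file_size = 64MB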
# The free-trunk-space threshold used when creating trunk files in advance. For example, if this
# is 20G and the current free trunk space is 4GB, only 16GB worth of trunk files need to be
# created. Default is 0; since V3.06.
trunk_create_file_space_threshold = 20G

# Whether to check, when trunk space is initialized, if the free space is already occupied.
# Note: setting this to true slows down the loading of trunk spaces at startup; set it to true
# only when necessary. Default is false; since V3.09.
trunk_init_check_occupying = false

# Whether to load the trunk free-space information unconditionally from the trunk binlog. By
# default FastDFS loads the trunk free space from the snapshot file storage_trunk.dat, whose
# first line records the offset into the trunk binlog, and then continues loading from that
# binlog offset. Set this to true once when upgrading from a version older than V3.10.
# Default is false; since V3.10.
trunk_init_reload_from_binlog = false

# Whether to use a server ID instead of the IP address as the storage server identity.
# Default is false; since V4.00.
use_storage_id = false

# Only meaningful when use_storage_id is set to true. The file that defines the group name,
# server ID and corresponding IP address of each storage server; see the example
# conf/storage_ids.conf in the source directory. The path can be relative or absolute.
# Since V4.00.
storage_ids_filename = storage_ids.conf

# The type of storage server identifier used in file names: ip (the IP address of the storage
# server) or id (the server ID of the storage server). Only valid when use_storage_id is set to
# true. Default is ip; since V4.03.
id_type_in_filename = ip

# Whether to store slave files using symbolic links. If set to true, a slave file occupies two
# files: the original file and a symbolic link pointing to it. Default is false; since V4.01.
store_slave_file_use_link = false

# Whether to rotate the error log periodically; currently only daily rotation is supported.
# Default is false; since V4.02.
rotate_error_log = false

# The time of day at which to rotate the error log; only valid when rotate_error_log is true.
# Format is Hour:Minute, hour 0-23, minute 0-59. Default is 00:00; since V4.02.
error_log_rotate_time=00:00
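For the storage_ids.conf file referenced by storage_ids_filename above, a hedged sketch of the usual layout (one line per storage server: server ID, group name, address; the exact column layout can vary by version, and the IDs and addresses here are made up):

# <server id>  <group name>  <ip address>
100001   group1   192.168.209.121
100002   group1   192.168.209.122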
# Rotate the error log when the log file exceeds this size; 0 means never rotate by file size,
# otherwise when the error log reaches this size it is rotated to a new file. Default is 0;
# since V4.02.
rotate_error_log_size = 0

# Whether to use a connection pool. Default is false; since V4.05.
use_connection_pool = false

# Connections whose idle time exceeds this value (in seconds) are closed automatically.
# Default is 3600; since V4.05.
connection_pool_max_idle_time = 3600

# The HTTP port this tracker server provides its HTTP service on.
http.server_port=8080

# Interval in seconds for checking whether the storage HTTP servers are still alive; a value
# <= 0 means never check. Default is 30.
http.check_alive_interval=30

# How to check whether a storage HTTP server is alive: tcp or http.
#   tcp:  only connect to the storage server's HTTP port, without sending a request or reading a
#         response
#   http: the storage server's check-alive URL must return HTTP status 200
# Default is tcp.
http.check_alive_type=tcp

# The URI/URL used for the HTTP alive check. Note: the storage server's embedded HTTP server
# supports the URI /status.html.
http.check_alive_uri=/status.html
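Pulling the most commonly changed items together, a minimal tracker.conf sketch might look like the following (the paths and values simply repeat the samples used above; anything not listed keeps its default):

disabled=false
bind_addr=
port=22122
base_path=/home/yuqing/fastdfs
max_connections=256
work_threads=4
store_lookup=2
store_server=0
store_path=0
download_server=0
reserved_storage_space = 10%
log_level=info
http.server_port=8080

After confirming that base_path exists, the tracker daemon is started against this file (typically fdfs_trackerd /etc/fdfs/tracker.conf).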
Two, storage.conf
# Whether this config file is disabled: false means enabled, true means disabled.
disabled=false

# The name of the group (volume) this storage server belongs to.
group_name=group1

# The address to bind (used when the server has multiple IPs but only one should serve).
# Leave it empty to bind all addresses of this host (usually fine).
bind_addr=

# bind_addr above normally applies to this host acting as a server; client_bind is only
# meaningful when bind_addr is specified. It controls whether this storage server, when it
# connects to other servers as a client (the tracker server, other storage servers), also binds
# bind_addr.
#   true:  bind the address configured by bind_addr
#   false: bind any address of this host
client_bind=true

# The storage server port.
port=23000

# Connect timeout in seconds for the socket connect() call. Default is 30s.
connect_timeout=30

# Network timeout in seconds for the storage server. If data cannot be sent or received within
# this time, the communication is considered failed. Default is 30s.
network_timeout=60

# Heartbeat interval in seconds (the storage server actively heartbeats to the tracker server).
heart_beat_interval=30

# Interval in seconds at which the storage server reports its remaining disk space to the
# tracker server.
stat_report_interval=60

# The base path to store data and log files. The root directory must exist; subdirectories are
# created automatically. (Note: this is no longer where uploaded files are stored; it was in
# early versions, but that changed in a later release.)
base_path=/home/yuqing/fastdfs

# The maximum number of concurrent connections the server supports when providing service;
# a larger value means more memory is used. Default is 256.
max_connections=256
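For a storage host with several network interfaces where only one address should be used, bind_addr and client_bind above work together; a hedged sketch (the address is made up):

# serve on this interface only, and make outgoing connections from it as well
bind_addr=192.168.209.122
client_bind=true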
# Introduced in V2.0. The buffer size of each queue node used to receive/send data. The memory
# consumed by the work queue = buff_size * max_connections, so a larger value improves overall
# system performance but costs memory. Keep the total within physical memory, and on 32-bit
# systems make sure the memory used does not exceed 3GB. Must be more than 8KB; default is 64KB;
# since V2.00.
buff_size = 256KB

# Number of worker threads used to process network IO; should be <= max_connections.
# Default is 4; since V2.00.
work_threads=4

# Introduced in V2.0. Whether disk reads and writes are separated:
#   false: mixed read and write
#   true:  separated read and write
# Default is true; since V2.00.
disk_rw_separated = true

# Introduced in V2.0. Number of disk reader threads per store base path; default is 1.
# With read/write separation, read threads in the system = disk_reader_threads * store_path_count;
# with mixed read/write, read/write threads in the system =
# (disk_reader_threads + disk_writer_threads) * store_path_count, and this parameter can be 0.
# Since V2.00.
disk_reader_threads = 1

# Introduced in V2.0. Number of disk writer threads per store base path; default is 1.
# With read/write separation, write threads in the system = disk_writer_threads * store_path_count;
# with mixed read/write, read/write threads in the system =
# (disk_reader_threads + disk_writer_threads) * store_path_count, and this parameter can be 0.
# Since V2.00.
disk_writer_threads = 1
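A quick sanity check of the work-queue memory formula above, using the sample values in this file:

# buff_size * max_connections = 256KB * 256 connections = 64MB of queue buffers
buff_size = 256KB
max_connections=256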
# When synchronizing files, if no file to sync can be read from the binlog, sleep this many
# milliseconds before reading again. 0 means do not sleep and retry immediately, which is not
# recommended because of the CPU cost. If you want synchronization to happen as quickly as
# possible, set this to a small value such as 10ms. Must be > 0; default is 200ms.
sync_wait_msec=50

# After synchronizing one file, sleep this many milliseconds before synchronizing the next file;
# 0 means no sleep (never call usleep), i.e. synchronize the next file immediately.
sync_interval=0

# The next two parameters go together: the time window of the day during which synchronization
# is allowed (by default, all day). They are generally used to keep synchronization away from
# peak hours; any SA will understand why. Time format is Hour:Minute; hour 0-23, minute 0-59.
sync_start_time=00:00
sync_end_time=23:59

# Write storage's mark file to disk after synchronizing this many files.
# Note: if the content of the mark file has not changed, it is not written.
write_mark_file_freq=500
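For example, to confine synchronization to off-peak night hours, the window above could be narrowed like this (the times are illustrative):

# only synchronize between 1 a.m. and 6 a.m.
sync_start_time=01:00
sync_end_time=06:00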
# A storage server can store files on multiple paths (e.g. multiple disks or mount points).
# This sets how many base paths are used for storing files; usually there is only one.
# Default is 1.
store_path_count=1

# Configure each store_path here; the index is 0-based, so the entries are store_path0,
# store_path1, ..., and everything from 0 to store_path_count-1 must be configured.
# If store_path0 is not configured, it defaults to the same path as base_path. The paths must
# already exist.
store_path0=/home/yuqing/fastdfs
#store_path1=/home/yuqing/fastdfs2

# FastDFS stores files in a two-level directory tree; this sets the number of subdirectories per
# level under each store_path (you will recognize it from the directories your files end up in).
# If set to N (for example 256), the storage server automatically creates N * N subdirectories
# under each store_path (disk) on its first run. The value can be 1 to 256; default is 256.
subdir_count_per_path=256

# The tracker server to report to (again, remember that it is the storage server that actively
# connects to the tracker server). When there are multiple tracker servers, write one
# tracker_server line per server. The format is host:port, where host can be a hostname or an
# IP address.
tracker_server=192.168.209.121:22122
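To make the two-level layout controlled by subdir_count_per_path concrete, here is a sketch of what you would typically see under a store path with the default of 256 (hexadecimal directory names, only a fragment shown):

${store_path0}/data/00/00
${store_path0}/data/00/01
  ...
${store_path0}/data/FF/FF    # 256 * 256 = 65536 leaf directories in total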
# Log level. Standard syslog levels, case insensitive:
#   emerg for emergency, alert, crit for critical, error, warn for warning, notice, info, debug
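To wrap up section Two, a minimal storage.conf sketch covering the items most installations change (the paths and addresses repeat the samples used above; anything not listed keeps its default, and the log_level value here mirrors the tracker.conf example):

disabled=false
group_name=group1
bind_addr=
port=23000
base_path=/home/yuqing/fastdfs
max_connections=256
work_threads=4
store_path_count=1
store_path0=/home/yuqing/fastdfs
subdir_count_per_path=256
tracker_server=192.168.209.121:22122
log_level=info

The storage daemon is then started against this file (typically fdfs_storaged /etc/fdfs/storage.conf); it registers with the tracker_server listed above and creates the data subdirectories on its first run.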
