Ceph configuration parameters (1)

1. POOL, PG AND CRUSH CONFIG REFERENCE
Configuration segment: [global]. Format example: osd pool default pg num = 250

Maximum number of PGs per storage pool: mon max pool pg num
Number of seconds between PG creations on the same OSD Daemon: mon pg create interval
Number of seconds after which a PG can be considered stuck: mon pg stuck threshold
PG bits per Ceph OSD Daemon: osd pg bits
PGP bits per Ceph OSD Daemon: osd pgp bits
Note: pg num and pgp num are the same in most cases; they differ only while a PG is being split into several PGs.
Bucket type used when a CRUSH rule uses chooseleaf: osd crush chooseleaf type
Default CRUSH ruleset used when creating a replicated storage pool: osd pool default crush replicated ruleset
Size of the stripes into which objects are divided in an erasure-coded pool: osd pool erasure code stripe width
Number of replicas: osd pool default size
Minimum number of replicas (Note: if fewer replicas than this are written, Ceph will not acknowledge the write to the client): osd pool default min size
Number of PGs: osd pool default pg num
Number of PGPs: osd pool default pgp num
Flags for newly created storage pools: osd pool default flags
Maximum number of PGs returned by a pgls listing: osd max pgls
Minimum number of PG log entries kept when trimming the PG log: osd min pg log entries
Maximum number of seconds an OSD waits for a client to replay a request: osd default data pool replay window
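For reference, a minimal [global] snippet using several of the parameters above might look like the following sketch. The values are illustrative assumptions only, not recommendations; PG counts and replica counts must be sized for the actual cluster.

    [global]
    # Default replication for newly created pools
    osd pool default size = 3
    osd pool default min size = 2
    # Default PG / PGP counts for newly created pools
    osd pool default pg num = 128
    osd pool default pgp num = 128
    # Place replicas on different hosts (bucket type 1 = host) when chooseleaf is used
    osd crush chooseleaf type = 1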
2. JOURNAL CONFIG REFERENCE
The journal serves two purposes:
(1) Speed: small random writes go to the journal first, are merged into sequential I/O, and are then flushed to the backing file system; storing the journal on an SSD is recommended.
(2) Consistency: to keep operations transactional, each operation is recorded in the journal before it is applied to the file system. Periodically the Ceph OSD Daemon stops write operations, synchronizes the journal with the file system, and trims the journal so the space can be reused. After a failure, the Ceph OSD Daemon replays operations from the journal starting at the last synchronized point.

Enable direct I/O to the journal (the journal is usually stored on a block device, either inside the OSD's space or on a separate SSD): journal dio
Enable asynchronous writes to the journal (only meaningful when journal dio is true): journal aio
Block-align journal writes (set this when the preceding two parameters are set): journal block align
Maximum number of bytes the journal writes at a time: journal max write bytes
Maximum number of entries the journal writes at a time: journal max write entries
Maximum number of operations allowed in the queue at any one time: journal queue max ops
Maximum number of bytes allowed in the queue at any one time (such defaults are written as bit shifts, e.g. 10 << 20 = 10 MiB): journal queue max bytes
Minimum alignment size: journal align min size
Zero-fill the whole journal file when it is created: journal zero on create

3. MESSAGING CONFIG REFERENCE
Disable the Nagle algorithm on messenger TCP sessions: ms tcp nodelay
Initial wait time before reconnecting after an error: ms initial backoff
Maximum wait time before reconnecting after an error: ms max backoff
Disable CRC checks (can improve performance when the CPU is the bottleneck): ms nocrc
Debug setting (do not configure): ms die on bad msg
Maximum number of bytes of messages waiting to be dispatched: ms dispatch throttle bytes
Bind the daemon to an IPv6 address: ms bind ipv6
Stack size debug setting (do not configure): ms rwthread stack bytes
Number of seconds after which an idle connection is closed: ms tcp read timeout
Debug setting (do not configure): ms inject socket failures

4. GENERAL CONFIG REFERENCE
File system ID, one per cluster: fsid
Admin socket path (/var/run/ceph/$cluster-$name.asok): admin socket
mon, osd and mds PID file (/var/run/$cluster/$type.$id.pid): pid file
Directory the daemon changes into once it is running: chdir
Maximum number of open files (prevents file descriptors from being used up): max open files
Install handlers for fatal signals (for example SEGV and ABRT) so a useful log message is produced: fatal signal handlers
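As an illustration only, journal and messenger options live in the same ceph.conf as everything else; a sketch combining a few of the parameters above could look like the block below. The values are examples rather than tuned recommendations.

    [osd]
    # Direct, asynchronous, block-aligned journal writes
    journal dio = true
    journal aio = true
    journal block align = true

    [global]
    # Disable Nagle on messenger TCP sessions
    ms tcp nodelay = true
    # Close idle connections after 900 seconds
    ms tcp read timeout = 900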
5. FILESTORE CONFIG REFERENCE
Enable a debugging check on synchronization (adds overhead): filestore debug omap check

(1) Extended attributes (important)
Extended attributes (XATTRs) are attributes beyond the inherent attributes of the underlying file system (such as XFS or ext4). The following parameters control how they are stored, which matters for performance. Some file systems limit attribute length; ext4, for example, does not allow attributes longer than 4 KB. Where there is no length limit, Ceph's extended attributes are stored in the underlying file system as well; when the limit is exceeded, they are stored in a key/value database instead (aka omap).
Use the database (omap) to store extended attributes (required on ext4): filestore xattr use omap
Maximum length of an extended attribute stored inline in the file system (must not exceed the file system's limit): filestore max inline xattr size
Maximum number of extended attributes each object may store in the file system: filestore max inline xattrs

(2) Synchronization intervals
A longer synchronization interval lets the backing file system coalesce more small writes and metadata updates.
Maximum synchronization interval, in seconds: filestore max sync interval
Minimum synchronization interval, in seconds: filestore min sync interval

(3) Flusher
The filestore flusher forces data from large writes to be flushed out before the sync, in the hope of making the sync cheaper (in practice disabling it turned out to perform better, so it is off by default).
Enable the filestore flusher: filestore flusher
Maximum number of file descriptors the flusher may use: filestore flusher max fds
Enable the sync flusher: filestore sync flush
Make file system syncs also flush journal data: filestore fsync flushes journal data

(4) Queue
Maximum number of operations the filestore queue may hold: filestore queue max ops
Maximum number of bytes the filestore queue may hold: filestore queue max bytes
Maximum number of operations the queue may commit at a time: filestore queue committing max ops
Maximum number of bytes the queue may commit at a time: filestore queue committing max bytes

(5) Threads and timeouts
Number of parallel file system operation threads: filestore op threads
File operation thread timeout, in seconds: filestore op thread timeout
Number of seconds after which a commit operation is cancelled: filestore op thread suicide timeout

(6) B-TREE FILESYSTEM (btrfs)
Enable btrfs snapshots: filestore btrfs snap
Enable btrfs clone range: filestore btrfs clone range

(7) Journal mode
Enable parallel journaling: filestore journal parallel
Enable write-ahead journaling: filestore journal writeahead

(8) MISC
Minimum number of files in a subdirectory before it is merged into its parent directory: filestore merge threshold
Maximum number of files in a subdirectory before it is split into child directories: filestore split multiple
Limit automatic upgrade of the filestore to the specified version: filestore update to
Discard any new transactions (debug only; the data is thrown away): filestore blackhole
Destination file for storage transaction dumps: filestore dump file
Inject a failure at the nth opportunity: filestore kill at
Fail or crash when an EIO error occurs: filestore fail eio
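To tie these together, a sketch of an [osd] section touching a few of the filestore options above might look like the following. The values shown are illustrative assumptions (they happen to match common defaults), and most deployments should keep the defaults unless testing shows otherwise.

    [osd]
    # Store XATTRs in omap (required when the OSD data sits on ext4)
    filestore xattr use omap = true
    # Sync the backing file system at most every 5 s and at least every 0.01 s
    filestore max sync interval = 5
    filestore min sync interval = 0.01
    # Number of parallel file system operation threads
    filestore op threads = 2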
