Ceph configuration parameters (ii)

Source: Internet
Author: User

Ceph configuration parameters (i)

6. KeyValueStore Config Reference
http://ceph.com/docs/master/rados/configuration/keyvaluestore-config-ref/
KeyValueStore is an alternative OSD backend to FileStore. Currently it uses LevelDB as its backing store. KeyValueStore does not need a journal device; each operation is flushed directly to the backend.
    • Backend used by KeyValueStore (LevelDB): keyvaluestore backend
(1) Queue
    • Maximum number of operations that can be held in the queue: keyvaluestore queue max ops
    • Maximum number of bytes that can be held in the queue: keyvaluestore queue max bytes
(2) Thread
    • Number of parallel threads: keyvaluestore op threads
    • Seconds before an operation thread is considered stalled: keyvaluestore op thread timeout
    • Seconds before a stalled commit operation makes the OSD give up (commit suicide): keyvaluestore op thread suicide timeout
(3) Misc
    • Stripe size: keyvaluestore default strip size
Note: each object is split into multiple key-value pairs stored in the backend
    • Header cache size: keyvaluestore header cache size
         Note: a header stores per-object metadata, playing much the same role as an inode in a file system
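Taken together, the KeyValueStore parameters above might appear in ceph.conf roughly as in the sketch below. The values are illustrative placeholders rather than tuned recommendations, and the exact name used to select the backend (keyvaluestore-dev here) has varied across Ceph releases, so verify it against your version:

```ini
[osd]
; select the KeyValueStore backend (no journal device needed)
osd objectstore = keyvaluestore-dev
keyvaluestore backend = leveldb
; queue limits
keyvaluestore queue max ops = 50
keyvaluestore queue max bytes = 104857600    ; 100 MB
; threading
keyvaluestore op threads = 2
keyvaluestore op thread timeout = 60         ; seconds
keyvaluestore op thread suicide timeout = 180
; misc
keyvaluestore default strip size = 4096      ; bytes per key-value strip
keyvaluestore header cache size = 4096       ; number of headers cached
```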
7. OSD Config Reference
http://ceph.com/docs/master/rados/configuration/osd-config-ref/
(1) General settings
    • UUID of the OSD: osd uuid
Note: the UUID applies to a single OSD daemon, while the fsid applies to the entire cluster; they are two different things
    • OSD data storage path: osd data
Note: the directory where the actual underlying device is mounted, e.g. /var/lib/ceph/osd/$cluster-$id
    • Maximum write size, in megabytes: osd max write size
    • Maximum amount of client data allowed to be held in memory (default 500MB): osd client message size cap
    • Directory for RADOS class plugins (default $libdir/rados-classes): osd class dir
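As a sketch, the general settings above could be written in ceph.conf as follows. The paths and sizes shown are, to my knowledge, the documented defaults; treat them as assumptions to check against your release. (osd uuid is normally generated per daemon at creation time, so it is rarely set by hand.)

```ini
[osd]
; directory where the OSD's backing device is mounted
osd data = /var/lib/ceph/osd/$cluster-$id
; largest single write accepted, in MB
osd max write size = 90
; cap on client data held in memory, in bytes (~500 MB)
osd client message size cap = 524288000
; where RADOS class plugins are loaded from
osd class dir = $libdir/rados-classes
```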
(2) File system settings
    • Options used when building the file system: osd mkfs options {fs-type}
Note: for XFS the default is -f -i 2048; other file system types have no default, e.g. osd mkfs options xfs = -f -d agcount=24
    • Mount options: osd mount options {fs-type}
Note: for XFS the default is rw,noatime,inode64; for other types it is rw,noatime, e.g. osd mount options xfs = rw,noatime,inode64,nobarrier,logbufs=8. Here noatime means the last access time is not recorded when a file is read, which saves time; inode64 means inode numbers may be 64-bit, so their supply is effectively unlimited. Many file systems force the underlying device to flush its cache when data is committed, to avoid losing data; these flushes are called write barriers. In practice, however, the underlying storage of a database server is often either behind a RAID card whose battery protects the cache against power loss, or a flash card with its own protection mechanism, so data will not be lost; in such cases the file system can safely be mounted with nobarrier.
(3) Journal settings
    • Journal path: osd journal
Note: it is recommended to store the journal on a separate hard disk or SSD; default /var/lib/ceph/osd/$cluster-$id/journal
    • Journal size (default 5120MB, i.e. 5GB): osd journal size
Note: 0 means the entire block device is used for the journal. It is recommended to start with 1GB, and to use at least 2 * (expected throughput * filestore max sync interval), where the expected throughput is the minimum of the disk speed and the network rate; the filestore max sync interval parameter is explained in the FileStore settings above.
(4) Scrubbing
Scrubbing is a periodic inspection that ensures data integrity and that no objects are lost. Light scrubbing (daily) checks object sizes and attributes; deep scrubbing (weekly) reads the data and verifies checksums to ensure end-to-end integrity.
  • Maximum number of simultaneous scrub operations per Ceph OSD daemon (default 1): osd max scrubs
  • Scrub thread timeout (default 60 seconds): osd scrub thread timeout
  • Timeout for terminating a scrub finalize thread (default 600 seconds): osd scrub finalize thread timeout
  • Maximum load above which scrubbing is not performed: osd scrub load threshold
  • Minimum scrub interval when load is low (default once per day): osd scrub min interval
  • Maximum scrub interval regardless of load (default once per week): osd scrub max interval
  • Deep scrub interval (default once per week): osd deep scrub interval
  • Read size during deep scrub (default 512KB): osd deep scrub stride
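A scrubbing section in ceph.conf might look like the sketch below. Intervals are in seconds, and the numbers shown are the commonly documented defaults as far as I know, not tuning advice:

```ini
[osd]
osd max scrubs = 1
osd scrub thread timeout = 60
osd scrub finalize thread timeout = 600
osd scrub load threshold = 0.5       ; skip scrubbing above this load
osd scrub min interval = 86400       ; once a day when load allows
osd scrub max interval = 604800      ; force a scrub at least weekly
osd deep scrub interval = 604800     ; deep scrub weekly
osd deep scrub stride = 524288       ; read 512 KB at a time
```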
(5) Operations
  • Number of parallel threads per Ceph OSD daemon (0 disables multithreading; default 2): osd op threads
  • Priority of client operations: osd client op priority
  • Priority of recovery operations: osd recovery op priority
  • Op thread timeout (default 30 seconds): osd op thread timeout
  • Time after which an operation becomes complaint-worthy (default 30s): osd op complaint time
  • Number of back-end disk threads, used for operations such as scrubbing and snap trimming (default 1): osd disk threads
  • I/O scheduling class of the disk thread (not set by default): osd disk thread ioprio class
Note: with idle, the disk thread has a lower priority than any other OSD thread, which helps a busy OSD postpone its scrubbing; with rt, the disk thread has a higher priority than the other OSD threads, for when scrubbing is urgently needed. This parameter only takes effect when the kernel uses the CFQ scheduler.
  • I/O scheduling priority of the disk thread (default -1): osd disk thread ioprio priority
Note: used together with the parameter above. Values range from 0 (highest) to 7 (lowest); -1 means not set. It lets the scrub priority of each OSD be tuned individually, which is useful under congestion or I/O contention. Like the previous parameter, it only takes effect when the kernel uses the CFQ scheduler.
  • Number of completed operations that can be tracked: osd op history size
  • Age of the oldest completed operation that can be tracked: osd op history duration
  • How many operations are shown in the log at once (default 5): osd op log threshold
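For the operations settings, a possible ceph.conf sketch follows. The priority values 63 and 10 are the defaults I believe upstream uses, and the two ioprio lines only matter when the kernel runs the CFQ scheduler:

```ini
[osd]
osd op threads = 2
osd client op priority = 63          ; client I/O outranks recovery
osd recovery op priority = 10
osd op thread timeout = 30
osd op complaint time = 30
osd disk threads = 1
osd disk thread ioprio class = idle  ; scrub/trim yield to client I/O (CFQ only)
osd disk thread ioprio priority = 7
osd op log threshold = 5
```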
(6) Backfilling
When a new OSD is added or an OSD is removed, some PGs migrate so that the cluster reaches a new balance. This migration can degrade performance; to avoid that, it can be marked as a backfill operation and given a lower priority than normal reads and writes. (The author's guess: backfill work is first marked, then actually carried out when the OSD is idle.)
  • Maximum number of concurrent backfill operations per OSD: osd max backfills
  • Minimum number of objects scanned per backfill scan: osd backfill scan min
  • Maximum number of objects scanned per backfill scan: osd backfill scan max
  • If an OSD daemon's utilization is above this ratio, it does not accept backfill requests (default 0.85): osd backfill full ratio
  • Retry interval for backfill requests (default 10 seconds): osd backfill retry interval
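A corresponding backfill section might be sketched as below; the values 10, 64, and 512 are, to my knowledge, the usual defaults rather than recommendations:

```ini
[osd]
osd max backfills = 10            ; concurrent backfills per OSD
osd backfill scan min = 64        ; objects per scan, lower bound
osd backfill scan max = 512       ; objects per scan, upper bound
osd backfill full ratio = 0.85    ; refuse backfill when this full
osd backfill retry interval = 10  ; seconds between retries
```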
(7) OSD Map
The OSD map records information about every OSD, including node changes, joins, and departures. As the cluster runs, the map keeps growing; the settings below help the cluster keep running well as the map gets larger.
    • Remove duplicated content in the map (default true): osd map dedup
    • Map cache size (default 500MB): osd map cache size
    • Size of the in-memory map cache while the OSD process runs (default 50MB): osd map cache bl size
    • Size of the in-memory cache for map increments while the OSD process runs: osd map cache bl inc size
    • Maximum number of map entries per MOSDMap message: osd map message max
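Sketched in ceph.conf, this section might read as follows. Where the text above gives a default I reuse it; the remaining values (100 for both the increment cache and the per-message entry limit) are assumptions to check against your release's documentation:

```ini
[osd]
osd map dedup = true
osd map cache size = 500          ; MB
osd map cache bl size = 50        ; MB, in-memory map cache
osd map cache bl inc size = 100   ; cache for map increments
osd map message max = 100         ; entries per MOSDMap message
```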
(8) Recovery
    • Delay, in seconds, before starting to recover objects (default 0): osd recovery delay start
    • Number of recovery requests each OSD accepts at one time: osd recovery max active
Note: increasing this speeds up recovery but also increases cluster load
    • Maximum size of a recovered data chunk: osd recovery max chunk
    • Number of threads in the recovery process (default 1): osd recovery threads
    • Recovery thread timeout (default 30s): osd recovery thread timeout
    • Preserve clone overlaps during recovery (should always be set to true): osd recover clone overlap
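A recovery section might be sketched like this. The values 15 (max active) and 8 MB (max chunk) are the defaults I believe apply; raising max active speeds recovery at the cost of extra cluster load, as the note above warns:

```ini
[osd]
osd recovery delay start = 0
osd recovery max active = 15
osd recovery max chunk = 8388608   ; 8 MB
osd recovery threads = 1
osd recovery thread timeout = 30
osd recover clone overlap = true   ; keep this true
```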
(9) Miscellaneous
  • Timeout for the snap trim thread (default 1 hour): osd snap trim thread timeout
  • Timeout for the background log thread (default 1 hour): osd backlog thread timeout
  • Default notification timeout (default 30s): osd default notify timeout
  • Check log files for corruption (expensive; default false): osd check for log corruption
  • Timeout for the command thread (default 10 minutes): osd command thread timeout
  • Maximum number of lost objects to return: osd command max records
  • Use TMap as OMAP (default false): osd auto upgrade tmap
  • Use TMap for debugging only (default false): osd tmapput sets uses tmap
  • Do not trim the log, at the cost of more disk space (default false): osd preserve trimmed log
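Finally, the miscellaneous settings can be sketched with their stated defaults (the timeouts are converted to seconds; options whose defaults are not given above are omitted rather than guessed):

```ini
[osd]
osd snap trim thread timeout = 3600    ; 1 hour
osd backlog thread timeout = 3600      ; 1 hour
osd default notify timeout = 30
osd check for log corruption = false   ; expensive when true
osd command thread timeout = 600       ; 10 minutes
osd preserve trimmed log = false
```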

