Ceph configuration parameters (2)

6. KeyValueStore Config Reference
http://ceph.com/docs/master/rados/configuration/keyvaluestore-config-ref/

KeyValueStore is an alternative OSD backend to FileStore. It currently uses LevelDB as its backing store. KeyValueStore needs no journal device; each operation is flushed directly to the backend.

(1) Queue
keyvaluestore queue max ops: maximum number of operations the KeyValueStore queue can hold
keyvaluestore queue max bytes: maximum number of bytes the KeyValueStore queue can hold

(2) Threads
keyvaluestore op threads: number of parallel operation threads
keyvaluestore op thread timeout: operation thread timeout, in seconds
keyvaluestore op thread suicide timeout: seconds after submission before an operation is canceled

(3) Misc
keyvaluestore default strip size: strip size. Note: each object is split into multiple key-value pairs stored in the backend.
keyvaluestore header cache size: header cache size. Note: a header plays a role similar to an inode in a file system.

7. OSD Config Reference
http://ceph.com/docs/master/rados/configuration/osd-config-ref/

(1) General settings
osd uuid: the OSD's UUID. Note: a UUID applies to a single OSD daemon, while the fsid applies to the entire cluster; the two are different.
osd data: the OSD data path, where the actual underlying device is mounted, e.g. /var/lib/ceph/osd/$cluster-$id
osd max write size: maximum size of a single write in MB (90 by default)
osd client message size cap: maximum client data that may be held in memory (500 MB by default)
osd class dir: path of the RADOS class plug-ins ($libdir/rados-classes)

(2) Filesystem settings
osd mkfs options {fs-type}: options used when creating the OSD file system. Note: for XFS the default is -f -i size=2048; other types have no default. E.g. osd mkfs options xfs = -f -d agcount=24
osd mount options {fs-type}: mount options. Note: the default for XFS is rw,noatime,inode64; for other types rw,noatime. E.g. osd mount options xfs = rw,noatime,inode64,nobarrier,logbufs=8. noatime means the last access time is not recorded when a file is read, which saves time; inode64 means inode numbers are 64-bit (practically unlimited). Many file systems force the underlying device to flush its cache when committing data, to avoid data loss; this is called write barriers. In practice, if the underlying storage uses a RAID card whose battery protects against power loss, or a flash card with its own protection mechanism, data will not be lost, so the file system can safely be mounted with nobarrier.

(3) Journal settings
osd journal: journal path. Note: it is recommended to store the journal on a separate hard disk or SSD; default /var/lib/ceph/osd/$cluster-$id/journal
osd journal size: journal size (5120 MB by default, i.e. 5 GB). Note: if set to 0, the entire block device is used for the journal. It is recommended to use at least 1 GB, and at least 2 * (expected throughput * filestore max sync interval), where throughput is the smaller of the disk speed and the network rate. The filestore max sync interval parameter is described in the preceding FileStore settings.
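To make the journal and file-system settings above concrete, here is a minimal ceph.conf sketch. The journal device path is hypothetical, and the journal size assumes an effective throughput of 100 MB/s with filestore max sync interval at its default of 5 seconds, per the 2 * (throughput * interval) rule above; these are example values, not recommendations:

    [osd]
    # OSD data path (the default shown above)
    osd data = /var/lib/ceph/osd/$cluster-$id
    # hypothetical dedicated SSD partition for the journal
    osd journal = /dev/disk/by-partlabel/journal-$id
    # at least 2 * (100 MB/s * 5 s) = 1000 MB; rounded up to 1 GB
    osd journal size = 1024
    # XFS creation and mount options discussed above
    osd mkfs options xfs = -f -i size=2048
    osd mount options xfs = rw,noatime,inode64,nobarrier,logbufs=8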
(4) Scrubbing
Scrubbing is a periodic check that ensures data integrity and that no objects are lost. Light scrubbing checks object sizes and attributes and runs daily; deep scrubbing reads the data and verifies checksums, and runs weekly.

osd max scrubs: maximum number of scrub operations a Ceph OSD daemon can perform simultaneously (default 1)
osd scrub thread timeout: scrub thread timeout (default 60 seconds)
osd scrub finalize thread timeout: timeout for terminating a scrub finalize thread (default 600 seconds)
osd scrub load threshold: maximum load; above this load no scrub is executed
osd scrub min interval: minimum scrub interval when the load is low (daily by default)
osd scrub max interval: maximum scrub interval regardless of load (weekly by default)
osd deep scrub interval: deep scrub interval (weekly by default)
osd deep scrub stride: read size during a deep scrub (512 KB by default)

(5) Operations
osd op threads: number of parallel threads for Ceph OSD daemon operations (0 disables multithreading; default 2)
osd client op priority: priority of client operations (default 63)
osd recovery op priority: priority of recovery operations (default 10)
osd op thread timeout: operation thread timeout (default 30 seconds)
osd op complaint time: seconds after which an operation becomes complaint-worthy, i.e. is reported as slow (default 30)
osd disk threads: number of disk threads, used for background operations such as scrubbing and snap trimming (default 1)
osd disk thread ioprio class: I/O priority class of the disk thread (not set by default). Note: idle makes the disk thread priority lower than that of any OSD thread, which helps avoid scrubbing on a busy OSD; rt makes the disk thread priority higher than that of any OSD thread, for when scrubbing is urgently needed. This parameter only takes effect when the kernel uses the CFQ scheduler.
osd disk thread ioprio priority: I/O priority of the disk thread (default -1). Note: used together with the parameter above; ranges from 0 (highest) to 7 (lowest); -1 means no priority is set. Useful under congestion or I/O contention. This parameter only takes effect when the kernel uses the CFQ scheduler.
osd op history size: number of completed operations to track (default 20)
osd op history duration: age in seconds of the oldest completed operation to track (default 600)
osd op log threshold: number of operation log entries displayed at a time (default 5)

(6) Backfilling
When an OSD is added or removed, some PGs migrate to rebalance the cluster, which degrades performance. To limit the impact, the migration is marked as a backfill operation and given a lower priority than normal reads and writes. (My guess is that backfill is flagged first, and the actual work is performed when the OSD is idle.)

osd max backfills: maximum number of concurrent backfills to or from a single OSD (default 10)
osd backfill scan min: minimum number of objects per backfill scan (default 64)
osd backfill scan max: maximum number of objects per backfill scan (default 512)
osd backfill full ratio: an OSD daemon refuses backfill requests when its usage is above this value (default 0.85)
osd backfill retry interval: interval before retrying a backfill request (default 10 seconds)
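As an illustration of the scrubbing, operation-priority, and backfilling settings above, the following ceph.conf sketch throttles background work on a latency-sensitive cluster. The values are examples, not recommendations, and the ioprio settings only take effect when the kernel uses the CFQ scheduler:

    [osd]
    # skip scrubbing when the load is above this threshold
    osd scrub load threshold = 0.5
    # run the disk thread (scrub, snap trimming) below any OSD thread (CFQ only)
    osd disk thread ioprio class = idle
    osd disk thread ioprio priority = 7
    # favor client I/O over recovery I/O
    osd client op priority = 63
    osd recovery op priority = 1
    # allow only one concurrent backfill per OSD
    osd max backfills = 1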
(7) OSD map
The OSD map records all OSD information, including node changes, additions, and removals. As the cluster runs, the map grows larger and larger; the following settings keep the cluster running well as the map grows.

osd map dedup: remove duplicate entries in the OSD map (default true)
osd map cache size: size of the OSD map cache (default 500 MB)
osd map cache bl size: size of the in-memory OSD map cache in the running OSD daemon (default 50 MB)
osd map cache bl inc size: size of the in-memory cache of OSD map increments in the running OSD daemon
osd map message max: maximum number of map entries allowed per MOSDMap message

(8) Recovery
osd recovery delay start: seconds to delay before starting to recover objects (default 0)
osd recovery max active: number of active recovery requests each OSD accepts at a time (default 15). Note: increasing it speeds up recovery but also increases the cluster load.
osd recovery max chunk: maximum chunk size pushed during recovery
osd recovery threads: number of threads for the recovery process (default 1)
osd recovery thread timeout: recovery thread timeout (default 30 seconds)
osd recover clone overlap: preserve clone overlaps during recovery (should always be set to true)

(9) Miscellaneous
osd snap trim thread timeout: snap trim thread timeout (default 1 hour)
osd backlog thread timeout: backlog thread timeout (default 1 hour)
osd default notify timeout: default notify timeout (default 30 seconds)
osd check for log corruption: check log files for corruption (expensive; default false)
osd command thread timeout: command thread timeout (default 10 minutes)
osd command max records: maximum number of lost objects returned (default 256)
osd auto upgrade tmap: use tmap as omap (default false)
osd tmapput sets uses tmap: use tmap for debugging only (default false)
osd preserve trimmed log: keep trimmed log files, using more disk space (default false)
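Finally, a sketch of the recovery settings from section (8); these example values slow recovery down to reduce its impact on client I/O, which may or may not be desirable for a given cluster:

    [osd]
    # wait this many seconds before starting to recover objects
    osd recovery delay start = 15
    # fewer active recovery requests per OSD: slower recovery, lower load
    osd recovery max active = 5
    osd recovery threads = 1
    # keep clone overlaps during recovery, as recommended above
    osd recover clone overlap = true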
