Ceph Installation Deployment

About Ceph

Whether you want to provide Ceph object storage and/or Ceph block devices for a cloud platform, deploy a Ceph file system, or use Ceph for some other purpose, every Ceph storage cluster deployment begins with setting up the Ceph nodes, the network, and the Ceph storage cluster itself. A Ceph storage cluster requires at least one Ceph Monitor and two OSD daemons. If you run Ceph file system clients, you must also have a Metadata Server (MDS).
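Once a cluster is up, the daemons described above and the overall cluster health can be checked with the standard ceph CLI, for example:

    # Cluster health plus a summary of Mon, Mgr, OSD and MDS daemons
    ceph -s
    # Detailed explanation when the cluster is not HEALTH_OK
    ceph health detail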

    • Ceph OSDs: the Ceph OSD daemon (ceph-osd) stores data and handles data replication, recovery, backfilling, and rebalancing; it also provides some monitoring information to the Ceph Monitors by checking the heartbeats of other OSD daemons. When the Ceph storage cluster is configured with 2 replicas, at least 2 OSD daemons are required for the cluster to reach the active+clean state (Ceph defaults to 3 replicas, but the replica count is adjustable, as in the sketch after this list).
    • Monitors: the Ceph Monitor maintains maps of the cluster state, including the Monitor map, OSD map, Placement Group (PG) map, and CRUSH map. Ceph keeps a history (called an epoch) of every state change of the Monitors, OSDs, and PGs.
    • MDSs: the Ceph Metadata Server (MDS) stores metadata for the Ceph file system (Ceph block devices and Ceph object storage do not use MDS). The metadata server lets users of the POSIX file system run basic commands such as ls and find without putting load on the Ceph storage cluster.
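The default replica count mentioned above is controlled by a couple of ceph.conf options and can also be changed per pool at runtime; a minimal sketch (the pool name mypool is a placeholder):

    # /etc/ceph/ceph.conf (excerpt)
    [global]
    osd pool default size = 3      # replicas written for every object
    osd pool default min size = 2  # minimum replicas needed to keep serving I/O

    # change the replica count of an existing pool at runtime
    ceph osd pool set mypool size 2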
Ceph Components: OSD
    • OSD daemon, minimum of two
    • Stores data; handles data replication, recovery, backfilling, and rebalancing
    • Provides some monitoring information to the Monitors through heartbeats
    • A Ceph cluster requires at least two OSD daemons (they can be checked as shown below)
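A couple of commands for checking that the OSD daemons are running and where they sit in the cluster topology:

    # how many OSDs exist and how many are up/in
    ceph osd stat
    # the CRUSH tree: hosts, OSDs, their weights and up/down status
    ceph osd tree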
Ceph Components: Mon
    • Maintains the state maps of the cluster
    • Including the Monitor map, OSD map, and Placement Group (PG) map
    • Also maintains the history of state changes for Monitors, OSDs, and PGs (these maps can be dumped as shown below)
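The maps and their epochs kept by the Monitors can be inspected directly, for example:

    # current monitor map and its epoch
    ceph mon dump
    # current OSD map, including its epoch and the pool definitions
    ceph osd dump
    # monitor quorum status
    ceph quorum_status --format json-pretty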
Ceph Components: Mgr (new feature)
    • Responsible for Ceph cluster management tasks, such as the PG map
    • Exposes cluster performance metrics externally (such as the I/O figures shown by ceph -s)
    • Provides a monitoring system with a web interface (dashboard); see the sketch after this list
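Assuming a Luminous or newer release, the dashboard is an mgr module and can be enabled as follows (further dashboard configuration is version dependent):

    # enable the web dashboard module on the active mgr
    ceph mgr module enable dashboard
    # list the URLs of services exposed by mgr modules, including the dashboard
    ceph mgr services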
Ceph Logical Structure

Data is written as objects into PGs through Ceph's object store, PGs are mapped to OSD daemons, and each OSD daemon corresponds to a disk.

An object can only belong to one PG.
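This object-to-PG-to-OSD mapping can be observed directly; for example, assuming a pool named mypool and an object named obj1 (both placeholders):

    # shows the PG the object hashes to and the up/acting OSD sets serving that PG
    ceph osd map mypool obj1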

A RAID array can back one OSD

A whole physical disk can back one OSD

A single partition can back one OSD
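In Luminous and newer releases, a spare disk or partition is typically turned into an OSD with ceph-volume; a minimal sketch, assuming /dev/sdb is the empty device:

    # create a BlueStore OSD on the empty disk
    ceph-volume lvm create --data /dev/sdb
    # list the OSDs prepared with ceph-volume on this host
    ceph-volume lvm list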

Monitors: use an odd number. OSDs: anywhere from a few dozen up to many thousands; the more OSDs, the better the overall performance.

PG Concept
    • Number of replicas
    • CRUSH rule (how a PG maps to its OSD acting set)
    • Users and permissions
    • Epoch: a monotonically increasing version number
    • Acting set: the list of OSDs serving a PG; the first is the primary OSD, the rest are replica OSDs
    • Up set: the OSD list produced by the current CRUSH mapping; it normally equals the acting set, but the two can differ temporarily (see the query example after this list)
    • pg_temp: a temporary PG mapping (acting set) used while data migrates to the newly mapped OSDs
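The up set and acting set of every PG can be listed, and a single PG can be queried in detail; a sketch (the PG id 1.0 is a placeholder):

    # one line per PG: state, up set, acting set and the primary OSD
    ceph pg dump pgs_brief
    # the full state of a single PG, including its epochs and up/acting sets
    ceph pg 1.0 query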
OSD Status: by default an OSD reports to the Mon every 2 seconds (the OSDs in a group are also monitored; if an OSD has not reported its status to the Mon for, say, 300 seconds, it is kicked out of the PG group). The four states below can be toggled and inspected as shown after this list.
    • up: running and able to serve I/O
    • down: failed, unable to serve I/O
    • in: holds data (participates in data placement)
    • out: holds no data (excluded from data placement)
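Marking an OSD in or out by hand, and inspecting per-OSD status:

    # take an OSD out of data placement (data rebalances away from it)
    ceph osd out 0
    # bring it back in
    ceph osd in 0
    # per-OSD up/down and in/out flags
    ceph osd dump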
Ceph Scenarios: iSCSI mounts are supported via TGT (see the sketch after this list)
    • Intra-company file sharing
    • Large numbers of files, high traffic, high concurrency
    • Requires a highly available, high-performance file system
    • Traditional single-server and NAS sharing struggles to meet requirements such as storage capacity and high availability
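A sketch of exporting an RBD image over iSCSI with TGT, assuming the tgt build includes the RBD backing store, and that rbd/image1 and the IQN below are placeholders:

    # /etc/tgt/conf.d/ceph-rbd.conf (excerpt)
    <target iqn.2019-01.com.example:rbd-image1>
        driver iscsi
        bs-type rbd                  # use the librbd backing store
        backing-store rbd/image1     # pool/image of an existing RBD image
    </target>

    # apply the target configuration
    tgt-admin --update ALL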
Ceph Production Environment recommendations
    • Use a full 10GbE network for the storage cluster
    • Separate the cluster network (internal, not externally reachable) from the public network, on different NICs (see the ceph.conf sketch after this list)
    • Deploy Mon, MDS, and OSD on separate machines
    • PCIe SSDs are recommended for the journal; typical enterprise-class drives reach up to 400,000 IOPS
    • SATA disks are acceptable for the OSD data drives
    • Plan the cluster size based on capacity
    • Xeon E5-2620 v3 or better CPUs, 64 GB or more memory
    • Finally, spread cluster hosts across cabinets to avoid single-cabinet failures (power, network)
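A ceph.conf fragment for the network separation recommended above (the subnets are placeholders):

    # /etc/ceph/ceph.conf (excerpt)
    [global]
    public network  = 192.168.10.0/24   # client and Mon traffic
    cluster network = 192.168.20.0/24   # OSD replication and recovery traffic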
Ceph Installation Environment

Because only a few machines are available, this setup uses 3 machines that each act as both Mon and OSD. This is not recommended for production; a production environment should run at least 3 independent Mons.

Step 1: System settings
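A typical preparation sketch for the three nodes, assuming CentOS-style hosts (the names node1-node3 and the addresses are placeholders):

    # on every node: resolvable short names for all cluster members
    cat >> /etc/hosts <<'EOF'
    192.168.10.11 node1
    192.168.10.12 node2
    192.168.10.13 node3
    EOF

    # time synchronization (the Mons are sensitive to clock skew)
    yum install -y ntp && systemctl enable --now ntpd

    # relax firewall/SELinux for a lab cluster (tighten both in production)
    systemctl disable --now firewalld
    setenforce 0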

