ceph

Read about Ceph: the latest news, videos, and discussion topics about Ceph from alibabacloud.com.

OpenStack/Gnocchi introduction: time-series aggregations are computed and stored in advance, the idea of computing first and reading later

one driver for storing time series (the storage driver) and another for index data (the index driver). The storage driver is responsible for storing the measured values of each measure: it receives timestamps and values and pre-computes aggregations according to the defined archive policies. The indexer is responsible for storing the indexes of all resources, together with their types and properties. Gnocchi not only understands the resource types of OpenStack projects but also provides a generic type, so you can create ba…
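To illustrate the "compute first, read later" idea, here is a minimal sketch (not Gnocchi's actual code; the granularity and the set of aggregates are invented stand-ins for an archive policy):

```python
from collections import defaultdict
from statistics import mean

GRANULARITY = 300  # seconds per bucket; stands in for an archive-policy granularity

def aggregate(points):
    """Pre-compute per-bucket aggregates from raw (timestamp, value) pairs,
    so later reads fetch stored results instead of scanning raw samples."""
    buckets = defaultdict(list)
    for ts, value in points:
        buckets[ts - ts % GRANULARITY].append(value)
    return {
        start: {"mean": mean(vals), "min": min(vals), "max": max(vals)}
        for start, vals in sorted(buckets.items())
    }

raw = [(1000, 1.0), (1100, 3.0), (1400, 2.0)]
print(aggregate(raw))  # {900: {...}, 1200: {...}}
```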

CEO of RedHat on VMware, OpenStack, and CentOS

CEO of RedHat on VMware, OpenStack, and CentOS. RedHat has completed acquisitions to build an open-source stack that leads the hybrid data center, and its main competitor is VMware. How the duel between RedHat and VMware plays out remains to be seen, but Jim Whitehurst, CEO of RedHat, believes that open source will ultimately define the future of enterprise IT architecture. I met Whitehurst and talked with him about cloud, open source, and RedHat's overall plans. Next, let's take a look at…

Summary of consistent hashing and the CRUSH algorithm

actual machine nodes. Data: the object name (its full path name) is used as the key, and the hashing algorithm is again MD5. Balancing: virtual nodes (comparable to PGs in Ceph) are introduced so that the node count becomes number of replicas × actual number of nodes, and replicas must stay on different physical nodes. A virtual node is a replica of an actual node (machine) in the hash space: one real node (machine) corresponds to a number of "virtual nodes", and the correspondi…
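The scheme described above fits in a few lines of Python (the node names, vnode count, and MD5-based hash are illustrative; this is not the article's code):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Consistent hashing with virtual nodes: each real node is hashed
    onto the ring many times, which evens out the key distribution."""

    def __init__(self, nodes, vnodes=100):
        self.vnodes = vnodes
        self.ring = []  # sorted list of (hash, real node)
        for node in nodes:
            self.add(node)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node):
        for i in range(self.vnodes):
            self.ring.append((self._hash('%s#%d' % (node, i)), node))
        self.ring.sort()

    def locate(self, key):
        # Walk clockwise from hash(key) to the first virtual node.
        idx = bisect.bisect(self.ring, (self._hash(key),)) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(['node1', 'node2', 'node3'])
print(ring.locate('/data/objects/foo'))  # full path name used as the key
```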

Notes for open source projects

"Ceph Distributed Storage"0 Ceph Introduction to theoretical briefshttp://990487026.blog.51cto.com/10133282/1705614Ceph Fast Deployment Combathttp://990487026.blog.51cto.com/10133282/1703982The ceph cluster expansion and management combathttp://990487026.blog.51cto.com/10133282/1704880Ceph Object Store Combathttp://990487026.blog.51cto.com/10133282/1706537Cep

Batch SSH logins with Python's pxssh (a "handyman" script)

1. A simple summary: this is a script from my production environment, written to match how things actually run. To put it plainly, people are lazy and do not want to do chores by hand. Ha ha! If anything is unclear, feel free to @ me. Thanks!
2. The code:

```python
#!/usr/bin/env python
from pexpect import pxssh
import os

try:
    s = pxssh.pxssh()
    for i in range(64, 65):  # set the start and end numbers here; they form the host part of the IP
        ipaddr = '192.168.1.%s' % i  # build a complete IP address
        os.environ['IP'] = str(ipaddr)  # export the value so shell commands can read the Python variable
        # ... (excerpt truncated here)
except pxssh.ExceptionPxssh as e:  # handler assumed; the original excerpt is cut off before it
    print(e)
```

HDFS architecture and design (PDF)

Read more: building a highly available and auto-scaling KV storage system; Google Spanner, a globally distributed database; how Baidu uses Hadoop; an introduction to OpenStack Swift; Red Hat's acquisition of Inktank (the Ceph provider) for US $175 million. Original article from Cloud Architecture and O&M: HDFS architecture and design (PDF). Thank you for sharing it.

Thinking of metadata management of distributed storage System

composed of distributed modules faces the trade-offs of the distributed CAP principle; every part should be scalable, and metadata in particular has higher consistency requirements;
B. Metadata nodes need to jointly maintain the state of the data nodes and make consistent decisions when that state changes, which poses a great challenge to the design and implementation of the system;
C. In addition, the storage devices required for the large volume of metadata are also a non-negligible cost overhead;
Th…

Large scale leaderboard system practice and challenge

The daemon engine uses Docker; the network uses host mode, with no performance loss, simple and controllable; data volumes use host mapping, so data is not lost across container restarts or downtime, and the distributed file system Ceph will be tested next. For the image storage driver we selected AUFS, which ships with the company's TLINUX2 operating system. AUFS is not well suited to files that are written frequently inside the container: the first write requires a copy-up, and multi-layer branch…
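As a sketch, the container setup described above might look like this with the Docker SDK for Python (the image name and volume paths are hypothetical, not taken from the share):

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Host network mode avoids bridge/NAT overhead; the host-mapped volume
# keeps data on the host, so it survives container restarts and crashes.
container = client.containers.run(
    "leaderboard-service:latest",  # hypothetical image
    detach=True,
    network_mode="host",
    volumes={"/data/leaderboard": {"bind": "/data", "mode": "rw"}},
    restart_policy={"Name": "always"},
)
print(container.short_id)
```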

Server Building and Management (9-1)

1. Servers:
192.168.9.106 admin
192.168.9.107 node1
192.168.9.108 node2
2. Create the deph account and grant it sudo permission (on all 3 machines).
Create the account:

```
# mkdir -p /app/userhome
# useradd -d /app/userhome/deph deph
# passwd deph
```

Grant sudo permission:

```
# echo "deph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/deph
# sudo chmod 0440 /etc/sudoers.d/deph
```

Modify the sudo configuration file. There is a small pitfall here: when executing scripts under a different account, sudo often fails with "sudo: sorry, you must have a tty to run…"
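The usual cause of that "must have a tty" error on CentOS 6 is the `Defaults requiretty` line in /etc/sudoers. The excerpt is cut off before the article's own fix, but a common remedy (an assumption here, standard sudo practice rather than a quote from the article) is to relax the requirement for this one account by adding `Defaults:deph !requiretty` via visudo.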

Sgdisk Common operations

Just as fdisk creates MBR partitions, sgdisk is the tool for creating GPT partitions. If you are not yet familiar with GPT partitioning, refer to the differences between booting MBR and GPT disks with GRUB.
View all GPT partitions:

```
# sgdisk -p /dev/sdb
Disk /dev/sdb: 16780288 sectors, 8.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 4D5B29E8-6E0B-45DA-8E52-A21910E74479
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 16780254
Partitions will be aligned on 20…
```

Analysis of the evolution and development of network file systems

NFS: although NFS is the most popular network file system on UNIX and Linux systems, it is certainly not the only choice. On Windows® systems, Server Message Block [SMB] (also known as CIFS) is the most widely used option (just as Linux supports SMB, Windows also supports NFS). One of the newest distributed file systems, also supported on Linux, is Ceph. Ceph is designed as a fault-tolerant distributed fi…

Red Hat RHOP 8 released: a one-stop solution

Software-defined storage is included: the Red Hat OpenStack Platform ships with a massively scalable, software-defined storage solution named Red Hat Ceph Storage. Red Hat Ceph Storage provides 64 TB of highly flexible object and block storage, which is sufficient for all kinds of big data projects. The release of this platform is a good opportunity to win more telecom customers. The deployment of telecom ente…

Web Data storage

commonly used NFS, the classic embodiment of a file share that we can mount and unmount directly. But with such a file system, the layout of the share is fixed on the serving side and you cannot change it; what you get amounts to a directory. Object storage is mostly distributed; it emerged to solve the problems that block storage is hard to share and file storage is not fast enough. If an object store provides FUSE, it too can be conveniently mounted and used, which is also an advantage of GlusterFS; otherwise it i…

Go net/http: getting JSON-format data from the request body

Go net/http: getting JSON-format data from the request body.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
)

type AutoTaskRequest struct {
	RequestID string     `json:"requestid"`
	Clone     CloneModel `json:"clone"`
	Push      PushModel  `json:"push"`
}

type CloneModel struct {
	// TODO
	// Method string `json:"ceph"`
	RequestID   string `json:"requestid"`
	CallbackURL string `json:"callbackurl"`
}

type PushModel struct {
	RequestID string `json:"requestid"` // excerpt truncated after this field
}
```
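For illustration, a client request whose body matches those structs could be produced like this (a Python sketch; the endpoint URL and field values are hypothetical):

```python
import json
import urllib.request

# Field names follow the json tags in the Go structs above.
payload = {
    "requestid": "req-001",
    "clone": {"requestid": "req-001", "callbackurl": "http://example.com/cb"},
    "push": {"requestid": "req-001"},
}

req = urllib.request.Request(
    "http://localhost:8080/autotask",  # hypothetical endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).status)  # expect 200 if a server is listening
```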

Limiting the network card speed with ethtool on CentOS 6.5

Operating environment: CentOS 6.5 x86_64, ethtool. Steps: first view the device information for network port em1:

```
[root@ceph-osd-2 ~]# ethtool em1
Settings for em1:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: No
        Auto-negotiation: Yes
        Advertised…
```
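The excerpt ends before the actual rate-limiting step; with standard ethtool usage (an assumption, not quoted from the article), forcing a lower speed would look like `ethtool -s em1 speed 100 duplex full autoneg off`.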

mkcephfs usage: authentication

noatime,nodiratime boost performance at no cost. When using ext4, you should disable the ext4 journal, because Ceph does its own journaling; this will also boost performance. Creating a new file system on an ext4 partition that already contains data will invoke rm -rf to delete the data. If there is a lot of it, it might seem as if mkcephfs is hanging when it actually isn't. If for any reason you are re-creating the file system on a pre-existing cluste…

New features of the OpenStack kilo release

/{ "versions": [ { "id""v2.1", "links": [ { "href""http://localhost:8774/v2/", "rel""self" } ], "status""CURRENT", "version""5.2" "min_version""2.1" }, ]}Header information for the client:X-OpenStack-Nova-API-Version2.114A known issue: evacuateThis problem is mainly due to the evacuate cleanup mechanism, host name changes will cause the Nova

DockOne WeChat Share (120): the practice of building a private container cloud on Kubernetes

there is some performance loss, but it more than meets our actual needs. For storage we use Ceph's RBD approach; after more than a year, the RBD scheme has proven very stable. We have also tried CephFS, but it has not been put into formal use because of limited team bandwidth and the possible risks. Highly available infrastructure: for the container cloud to provide a highly available infrastructure, the high availability of applications/services is assured along multiple dimensions: at the application level,…

Ganglia Installation and configuration

Ganglia monitoring was installed for Ceph, as follows.
1. Environment: Ceph is installed on 3 physical machines running CentOS 6.5: mon0, osd1, osd2. You therefore need to install gmond on all three machines, and gmetad on osd2.
2. Installation process. First install the EPEL source:
rpm -ivh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
Then install the dependency packages:
yum install apr-devel expat-devel
Install gmond on mon0…

SSH password-less logon configuration under Ubuntu/CentOS

I recently configured Ceph distributed storage, which requires password-less SSH logons. After checking some references, I am recording the steps here (demonstrated on my CentOS system). First, enter this shell command at the system terminal:

[root@PS-12 ceph]# ssh-keygen -t rsa

(Suggestion: accept all the defaults and do not enter a passphrase; just keep pressing Enter.) The default va…
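The excerpt is truncated here; the usual next step (standard OpenSSH practice, assumed rather than quoted from the article) is to append the generated public key to `~/.ssh/authorized_keys` on each target node, for example with `ssh-copy-id`.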
