OceanBase: Compiling, Installing, and Configuring Manual

Source: Internet
Author: User
Overview

OceanBase is a high-performance distributed table system that provides performance and scalability similar to Bigtable, but its tables hold strongly typed data such as integer, string, datetime, and so on. It is written in C++ and runs in 64-bit Linux environments. In a production environment you need multiple machines to build an OceanBase cluster for high availability and performance, but you can also run OceanBase on a single machine.

This section explains how to quickly build a minimal working OceanBase environment. Before starting, make sure you have: a 64-bit Linux server with at least 1 GB of memory; enough disk space, at least 100 MB; and root privileges, or the ability to elevate to root through sudo.
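The prerequisites above can be checked with a short script. This is a minimal sketch; the thresholds and commands assume a typical Linux machine and are not specific to OceanBase:

```shell
#!/bin/sh
# Minimal prerequisite check, assuming /proc/meminfo and df are available.
arch=$(uname -m)                                    # expect x86_64 for 64-bit Linux
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo) # total RAM in KB
disk_kb=$(df -Pk "$HOME" | awk 'NR==2 {print $4}')  # free space in KB

[ "$arch" = "x86_64" ]    && echo "arch: ok"   || echo "arch: need 64-bit, got $arch"
[ "$mem_kb" -ge 1048576 ] && echo "memory: ok" || echo "memory: need at least 1G"
[ "$disk_kb" -ge 102400 ] && echo "disk: ok"   || echo "disk: need at least 100MB free"
```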

If you meet these requirements, congratulations: you can continue with the OceanBase installation. It is roughly divided into the following steps: compiling and installing OceanBase and its dependent libraries from source; starting OceanBase; using OceanBase.

Finally, we give a rough explanation of OceanBase's configuration to help you troubleshoot startup problems.

Installing OceanBase and its dependent libraries from source

Before you install OceanBase, check the version of gcc:

gcc --version

It is best to compile with gcc 4.1.2; other versions may fail to compile. You are welcome to test different compiler versions, and to contribute patches so that OceanBase can be built with more compilers.

OceanBase needs some dynamic libraries at run time, so it is recommended to set LD_LIBRARY_PATH in .bashrc:

echo "export TBLIB_ROOT=$HOME/ob-install-dir" >> ~/.bashrc
echo "export LD_LIBRARY_PATH=/usr/lib/:/usr/local/lib:$TBLIB_ROOT/lib:$HOME/ob-install-dir/lib:$LD_LIBRARY_PATH" >> ~/.bashrc
source ~/.bashrc
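To confirm the variables were actually picked up (open a new shell or source ~/.bashrc first), a quick check like the following can help; this is an illustrative sketch using the variable names from the lines above:

```shell
#!/bin/sh
# Check that TBLIB_ROOT is set and that its lib directory is on LD_LIBRARY_PATH.
echo "TBLIB_ROOT=${TBLIB_ROOT:-<unset>}"
case ":${LD_LIBRARY_PATH:-}:" in
    *":${TBLIB_ROOT:-/nonexistent}/lib:"*) echo "TBLIB_ROOT/lib is on LD_LIBRARY_PATH" ;;
    *)                                     echo "TBLIB_ROOT/lib is NOT on LD_LIBRARY_PATH" ;;
esac
```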
Installing dependent libraries

libtool

The build scripts use aclocal, autoconf, automake, and similar tools, which most machines already have. If they are not installed, install them first:

sudo yum install libtool.x86_64
liblzo2

liblzo2 is a compression library that OceanBase needs to compress static data. Quick installation with yum:

sudo yum install lzo.x86_64

If yum cannot find this package, you can install it manually:

wget -c http://www.oberhumer.com/opensource/lzo/download/lzo-2.03.tar.gz
tar zxf lzo-*
(cd lzo-2.03; ./configure --enable-shared --prefix=/usr/ && make && sudo make install)

Once the installation is complete, you can compile a small C program to check that the compiler and linker can find the library:

echo "int main() { return 0; }" > /tmp/a.c && gcc /tmp/a.c -llzo2 -o /tmp/a.out
/tmp/a.out

If there is no error, the installation succeeded. If you see the following message, the $LD_LIBRARY_PATH variable is not configured correctly; find the directory containing liblzo2.so.2 and add it to $LD_LIBRARY_PATH.

./a.out: error while loading shared libraries: liblzo2.so.2: cannot open shared object file: No such file or directory
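One way to fix this is to look the library up in the dynamic linker cache and prepend its directory to LD_LIBRARY_PATH. A sketch, assuming ldconfig is available (the exact path varies by distribution):

```shell
#!/bin/sh
# Find liblzo2.so.2 via the linker cache and add its directory to LD_LIBRARY_PATH.
libfile=$(ldconfig -p 2>/dev/null | awk '/liblzo2\.so\.2/ {print $NF; exit}')
if [ -n "$libfile" ]; then
    libdir=$(dirname "$libfile")
    export LD_LIBRARY_PATH="$libdir:${LD_LIBRARY_PATH:-}"
    echo "added $libdir to LD_LIBRARY_PATH"
else
    echo "liblzo2.so.2 not found; install lzo first"
fi
```

The export only affects the current shell; to make it permanent, add the directory to the LD_LIBRARY_PATH line in ~/.bashrc.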
Snappy

Snappy is a compression library from Google; OceanBase uses it to compress static data [optional]. Note: snappy depends on the lzo library, so install lzo before installing snappy. Quick installation with yum:

sudo yum install snappy.x86_64

If yum cannot find this package, you can install it manually:

cd ~
wget -c http://snappy.googlecode.com/files/snappy-1.0.3.tar.gz
tar zxf snappy-*
(cd snappy-1.0.3; ./configure --prefix=/usr/ && make && sudo make install)

Once the installation is complete, you can compile a small C program to check that the compiler and linker can find the library:

echo "int main() { return 0; }" > /tmp/a.c && gcc /tmp/a.c -o /tmp/a.out -lsnappy
/tmp/a.out

If there is no error, the installation succeeded. If you see the following message, the $LD_LIBRARY_PATH variable is not configured correctly; find the directory containing libsnappy.so.1 and add it to $LD_LIBRARY_PATH.

./a.out: error while loading shared libraries: libsnappy.so.1: cannot open shared object file: No such file or directory
libnuma

OceanBase uses NUMA support and requires libnuma. The following yum command installs numactl, which provides the NUMA-related header files and libraries:

sudo yum install numactl-devel.x86_64

If yum cannot find this package, you can install it manually:

cd ~
wget -c http://freshmeat.net/urls/5994b4dd6cf45abcf4c4ed8c16a75f24  # if this address is invalid, download manually from http://freshmeat.net/projects/numactl/
tar zxf numactl-*
(cd numactl-2.0.7; make && sudo make install)
libaio

OceanBase uses AIO and requires libaio. The following yum command installs libaio, which provides the AIO-related header files and libraries:

sudo yum install libaio-devel.x86_64

If yum cannot find this package, you can install it manually:

cd ~
wget -c http://libaio.sourcearchive.com/downloads/0.3.107-7/libaio_0.3.107.orig.tar.gz  # if this address is invalid, download manually from http://libaio.sourcearchive.com/
tar zxf libaio*
(cd libaio-0.3.107; make && sudo make install)
tbnet and tbsys

tbsys is an encapsulation of operating-system services, and tbnet provides a network framework; OceanBase depends on both libraries. tbnet and tbsys are open-sourced as tb-common-utils; visit http://code.taobao.org/trac/tb-common-utils/wiki/ZhWikiStart for more information. Note that before building and using tbnet and tbsys, you must set the TBLIB_ROOT environment variable, which indicates the path where the tbnet and tbsys libraries will be installed.

Download the source code and compile the installation with the following command:

cd ~
export TBLIB_ROOT=$HOME/ob-install-dir
svn checkout http://code.taobao.org/svn/tb-common-utils/trunk/ tb-common-utils
(cd tb-common-utils; sh build.sh)

After the installation succeeds, the directory indicated by TBLIB_ROOT contains two subdirectories, include and lib. You can verify that the compiler can find the libraries:

echo "int main() { return 0; }" > /tmp/a.c && gcc /tmp/a.c -o /tmp/a.out -L$TBLIB_ROOT/lib -ltbnet -ltbsys
/tmp/a.out

If an error occurs, check that TBLIB_ROOT is set correctly.

gtest

Optional. If you run ./configure --without-test-case, OB's tests are not compiled and this section can be skipped. If you want to compile the tests, this section is for reference only: newer versions of gtest can no longer be installed with make install (see the gtest wiki). The recommended approach is, after ./configure && make in the gtest source directory, to copy the files directly:

cp -r gtest-build-dir/include/gtest ob-src-dir/include
cp -r gtest-build-dir/lib/.libs ob-src-dir/lib
Compile and install OceanBase

Check out the OB branch source code:

cd ~
# SVN address after open-sourcing:
# svn co http://code.taobao.org/svn/oceanbase/trunk/ ob-src-dir
svn co http://svn.app.taobao.net/repos/oceanbase/branches/rm/oceanbase/ ob-src-dir

Compile and install:

(cd ob-src-dir; ./build.sh init && ./configure --prefix=$HOME/ob-install-dir --with-release --without-test-case && make -j2 && make install)

Note that --without-test-case skips compiling the test cases, because OceanBase uses googletest as its test framework; if you want to run the unit tests, put the googletest header files and libraries into the include and lib directories of the top-level OceanBase source directory. Once installation completes, an initialization step is required to generate the necessary files based on the current machine configuration. Running the single-machine-bootstrap script completes all initialization work.

cd $HOME/ob-install-dir
./single-machine-bootstrap init  # create the necessary directories and generate the configuration files

After all initialization is completed, the system directory layout looks like this:

        /
        |-- /usr/lib/
        |   |-- liblzo*.so.*
        |   |-- libsnappy*.so.*
        |   `-- ...
        `-- $HOME
            |-- tb-common-utils
            |-- ob-src-dir
            `-- ob-install-dir
                |-- single-machine-bootstrap
                |-- include
                |-- lib          # runtime libraries that OceanBase depends on
                |-- bin          # executables; chunkserver, mergeserver, rootserver, and updateserver are the four main server programs, the rest are tools
                |-- etc          # configuration files, see the description at the end of this document
                |   |-- rootserver.conf
                |   |-- mergeserver.conf
                |   |-- chunkserver.conf
                |   |-- updateserver.conf
                |   `-- schema.ini   # example schema (that is, table definitions)
                `-- data         # folder holding the OceanBase database and logs
                    |-- ups_commitlog
                    |-- rs_commitlog
                    |-- ups_data
                    |-- cs
                    `-- rs
Start OceanBase

There are four roles in an OceanBase cluster: RootServer, UpdateServer, ChunkServer, and MergeServer. RootServer is the central control node of the cluster; UpdateServer is the cluster's update service node; ChunkServer stores the cluster's static data; MergeServer serves queries.

Applications use OceanBase through the client library; the client only needs to know the RootServer address.

In the simplest case, OceanBase consists of one RootServer, one UpdateServer, one ChunkServer, and one MergeServer, all running on the same physical machine. You can start the four servers with the following commands:

cd $HOME/ob-install-dir
bin/rootserver -f etc/rootserver.conf
bin/updateserver -f etc/updateserver.conf
bin/mergeserver -f etc/mergeserver.conf
bin/chunkserver -f etc/chunkserver.conf

Of course, OceanBase does not enforce a particular startup order for the four servers.
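A quick way to confirm that all four servers came up is to check for their processes. This is an illustrative sketch; it assumes the process command lines contain the binary paths used above:

```shell
#!/bin/sh
# Report the status of the four OceanBase server processes.
for srv in rootserver updateserver mergeserver chunkserver; do
    if pgrep -f "bin/$srv" > /dev/null 2>&1; then
        echo "$srv: running"
    else
        echo "$srv: NOT running, check data/log/$srv.log"
    fi
done
```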

The servers run as daemons; you can view their logs with the following commands:

cd $HOME/ob-install-dir
tail -f data/log/rootserver.log
tail -f data/log/updateserver.log
tail -f data/log/mergeserver.log
tail -f data/log/chunkserver.log

After OceanBase starts successfully, a table named test is created by the default configuration. This is a typical KV table, which can be read and written using the API provided by OceanBase. If you run into problems, refer to the more detailed configuration explanation below.

Using OceanBase

TODO

Configuration file description

OceanBase provides table storage services, so first you need a schema file describing the tables; in addition, RootServer, UpdateServer, ChunkServer, and MergeServer each have their own configuration file. The four servers' configuration files are specified by command-line arguments when starting each server, and the path to the schema file is specified in the RootServer configuration file.

Modifying the database schema configuration file

A schema configuration file describes one application. An application can have multiple tables; the file specifies the application name and, for each table, the table name, column names, column types, and so on. For a detailed description, see the source package documentation: doc/OceanBase's Schema.docx

        [app_name]
        name = database application name
        max_table_id = maximum table ID used by the application

        [name of table 1]
        table_id = ID of table 1
        ... other fields, see the OB documentation ...
        compress_func_name=lzo_1.0
        column_info=1,2,info_user_nick,varchar,128
        ...

        [name of table 2]
        table_id = ID of table 2
        compress_func_name=lzo_1.0
        ...
Modify rootserver.conf

RootServer's configuration mainly specifies its own address, the UpdateServer address, the path to the schema file, and so on:

        [root_server]
        pid_file = RootServer PID file path
        log_file = RootServer log file path
        data_dir = RootServer persistent data directory

        log_level = RootServer log level
        dev_name = name of the network interface RootServer listens on, e.g. eth0
        vip = RootServer VIP address, used for hot standby; a development environment needs only a single RootServer, so specify the RootServer address here
        port = RootServer listening port

        __create_table_in_init=1     # create the table when the system initializes
        __safe_copy_count_in_init=1  # prevents WARN messages when there is only 1 ChunkServer
        __safe_copy_count_in_merge=1

        [update_server]
        vip = UpdateServer VIP address, used for hot standby; because RootServer actively connects to UpdateServer, a development environment needs only a single UpdateServer, so specify the UpdateServer address here
        port = UpdateServer listening port for RootServer connections
        ups_inner_port = UpdateServer low-priority listening port, used for the daily merge

        [schema]
        file_name = database schema configuration file path

Modify updateserver.conf

UpdateServer's configuration mainly specifies its own address, the RootServer address, data storage directories, and other information.
About the data directory structure of UpdateServer

store_root, raid_regex, and dir_regex together specify the UpdateServer data directories; UpdateServer saves data as sstables. Under the directory given by store_root, create several directories matching the raid_regex configuration item (by default named raid1, raid2, raid3, ...); the same sstable file is replicated across the raid directories. In each raid directory, create several soft links matching the dir_regex configuration item (by default named store1, store2, store3, ...) pointing to directories under different device mount points (such as /data/1-1/ups_store, /data/1-2/ups_store, ...). For testing, ordinary directories can be used instead of mount points.

When set up correctly, the tree command shows something like this:

        data
        |-- raid1
        |   |-- store1 -> /data/1-1/ups_store/
        |   |-- store2 -> /data/1-2/ups_store/
        |   |-- store3 -> /data/1-3/ups_store/
        |   |-- store4 -> /data/1-4/ups_store/
        |   `-- store5 -> /data/1-5/ups_store/
        `-- raid2
            |-- store1 -> /data/2-1/ups_store/
            |-- store2 -> /data/2-2/ups_store/
            |-- store3 -> /data/2-3/ups_store/
            |-- store4 -> /data/2-4/ups_store/
            `-- store5 -> /data/2-5/ups_store/

You can create the directories and links with the following commands:

mkdir -p /data/{raid{1..2},{1..2}-{1..5}/ups_store}
for i in {1..2}; do
    for j in {1..5}; do
        ln -s /data/$i-$j/ups_store /data/raid$i/store$j  # note: it is best to use absolute paths when creating soft links
    done
done

Modify mergeserver.conf

MergeServer's configuration mainly specifies the RootServer address.

        [merge_server]
        port = MergeServer listening port
        dev_name = name of the network interface MergeServer listens on, e.g. eth0
        log_file = MergeServer log file path
        pid_file = MergeServer PID file path
        log_level = MergeServer log level

        [root_server]
        vip = RootServer VIP address; in a development environment just specify the RootServer address
        port = RootServer listening port
Modify chunkserver.conf

ChunkServer's configuration mainly specifies the RootServer address, the data storage directory, and so on.

        [public]
        pid_file = ChunkServer PID file path
        log_file = ChunkServer log file path
        log_level = ChunkServer log level

        [chunkserver]
        dev_name = name of the network interface ChunkServer listens on, e.g. eth0
        port = ChunkServer listening port
        datadir_path = ChunkServer data persistence directory path
        application_name = database application name

        [root_server]
        vip = RootServer VIP address; in a development environment just specify the RootServer address
        port = RootServer listening port

ChunkServer data is placed under /datadir_path/$i/application_name/sstable; these directories do not exist yet, so create them:

mkdir -p /datadir_path/{1..10}/application_name/sstable
