Alluxio Memory Storage System Deployment

Source: Internet
Author: User

I. File download and extraction

1) Download page: http://www.alluxio.org/download

2) Download and extract with the following commands:

$ wget http://alluxio.org/downloads/files/1.2.0/alluxio-1.2.0-bin.tar.gz
$ tar xvfz alluxio-1.2.0-bin.tar.gz
$ cd alluxio-1.2.0

II. Configuration file changes

Only basic configuration changes are made for now:

1) Copy alluxio-env.sh.template under /data/spark/software/alluxio-1.2.0/conf to alluxio-env.sh and change it as follows:

#!/usr/bin/env bash
#
# The Alluxio Open Foundation licenses this work under the Apache License, version 2.0
# (the "License"). You may not use this work except in compliance with the License, which is
# available at www.apache.org/licenses/LICENSE-2.0
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
# either express or implied, as more fully set forth in the License.
#
# See the NOTICE file distributed with this work for information regarding copyright ownership.
#
# Copy it as alluxio-env.sh and edit that to configure Alluxio for your
# site. This file is sourced to launch Alluxio servers or use Alluxio shell
# commands.
#
# This file provides one way to configure Alluxio options by setting the
# following listed environment variables. Note that setting this file will not
# affect jobs (e.g., Spark job or MapReduce job) that use the Alluxio client
# as a library. Alternatively, you can edit the alluxio-site.properties file,
# where you can set all the configuration options supported by Alluxio
# (http://alluxio.org/documentation/), which is respected by both external jobs
# and Alluxio servers (or shell).

# The directory where the Alluxio deployment is installed. (Default: the parent directory of libexec/).
export ALLUXIO_HOME=/data/spark/software/alluxio-1.2.0

# The directory where log files are stored. (Default: ${ALLUXIO_HOME}/logs).
# ALLUXIO_LOGS_DIR

# Hostname of the master.
export ALLUXIO_MASTER_HOSTNAME=spark29

# ALLUXIO_MASTER_ADDRESS is deprecated; support will be removed in v2.0.
#export ALLUXIO_MASTER_ADDRESS=spark29

# The directory where a worker stores in-memory data. (Default: /mnt/ramdisk).
# E.g. on Linux, /mnt/ramdisk for ramdisk, /dev/shm for tmpFS; on MacOS, /Volumes/ramdisk for ramdisk.
export ALLUXIO_RAM_FOLDER=/data/spark/software/alluxio-1.2.0/ramdisk

# Address of the under filesystem. (Default: ${ALLUXIO_HOME}/underFSStorage)
# E.g. "/my/local/path" to use local fs, "hdfs://localhost:9000/alluxio" to use a local hdfs.
export ALLUXIO_UNDERFS_ADDRESS=hdfs://spark29:9000

# How much memory to use per worker. (Default: 1GB)
# E.g. "1000MB", "2GB"
export ALLUXIO_WORKER_MEMORY_SIZE=12GB

# Config properties set for Alluxio master, worker and shell. (Default: "")
# E.g. "-Dalluxio.master.port=39999"
# ALLUXIO_JAVA_OPTS

# Config properties set for the Alluxio master daemon. (Default: "")
# E.g. "-Dalluxio.master.port=39999"
# ALLUXIO_MASTER_JAVA_OPTS

# Config properties set for the Alluxio worker daemon. (Default: "")
# E.g. "-Dalluxio.worker.port=49999" to set the worker port, "-Xms2048M -Xmx2048M" to limit the worker heap size.
# ALLUXIO_WORKER_JAVA_OPTS

# Config properties set for the Alluxio shell. (Default: "")
# E.g. "-Dalluxio.user.file.writetype.default=CACHE_THROUGH"
# ALLUXIO_USER_JAVA_OPTS
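The ALLUXIO_RAM_FOLDER set above must exist before the workers store data in it (alluxio-start.sh with the Mount/SudoMount option can also create and mount it). A minimal sketch, using a stand-in path under /tmp so it can run anywhere:

```shell
# Sketch: make sure the worker RAM folder from alluxio-env.sh exists.
# /tmp/alluxio-ramdisk-demo is a stand-in path; this guide actually uses
# /data/spark/software/alluxio-1.2.0/ramdisk.
ALLUXIO_RAM_FOLDER=${ALLUXIO_RAM_FOLDER:-/tmp/alluxio-ramdisk-demo}
mkdir -p "$ALLUXIO_RAM_FOLDER"
echo "ram folder ready: $ALLUXIO_RAM_FOLDER"
```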

2) Add the worker node addresses to conf/workers, one hostname per line:

spark24
spark30
spark31
spark32
spark33

III. Host configuration changes

1) In the home directory, add the following to .bash_profile:

export TACHYON_HOME=/data/spark/software/alluxio-1.2.0
PATH=$PATH:$HOME/bin:$HADOOP/bin:$JAVA_HOME/bin:$TACHYON_HOME/bin

2) Apply the configuration: source .bash_profile

IV. Spark: add the dependent jar

1. In the conf directory under the Spark installation directory on every Spark host,

change spark-env.sh by appending:

export SPARK_CLASSPATH="/data/spark/software/spark-1.5.2-bin-hadoop2.6/lib/alluxio-core-client-spark-1.2.0-jar-with-dependencies.jar:$SPARK_CLASSPATH"
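The classpath change above can be sketched as a small script; the jar path is the one used in this guide, so adjust it to your Spark lib directory:

```shell
# Sketch: prepend the Alluxio client jar to SPARK_CLASSPATH.
# The jar path matches this guide's layout; adjust for your install.
ALLUXIO_CLIENT_JAR=/data/spark/software/spark-1.5.2-bin-hadoop2.6/lib/alluxio-core-client-spark-1.2.0-jar-with-dependencies.jar
export SPARK_CLASSPATH="${ALLUXIO_CLIENT_JAR}:${SPARK_CLASSPATH}"
# With the client jar on the classpath, Spark can read Alluxio URIs, e.g. in
# spark-shell: sc.textFile("alluxio://spark29:19998/LICENSE")
# (19998 is the default Alluxio master RPC port).
echo "${SPARK_CLASSPATH%%:*}"
```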

V. Distribute to each worker node

1. Alluxio software: scp -r ./alluxio-1.2.0 spark30:/data/spark/software/

VI. Format and start up

1. Go to the bin directory under the Alluxio installation directory and execute the command ./alluxio format to format the memory storage.

2. Start the cluster: ./alluxio-start.sh all
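After starting, a quick way to check the cluster is the master's web UI (19999 is the default Alluxio web UI port) and the built-in sanity tests. A sketch, assuming the master hostname from this guide:

```shell
# Sketch: where to look after ./alluxio-start.sh all succeeds.
ALLUXIO_MASTER_HOSTNAME=${ALLUXIO_MASTER_HOSTNAME:-spark29}
WEB_UI="http://${ALLUXIO_MASTER_HOSTNAME}:19999"   # default Alluxio web UI port
echo "Master web UI: $WEB_UI"
# Built-in read/write sanity tests against the running cluster:
#   ./bin/alluxio runTests
```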

VII. Problems you may encounter

1. Starting a worker fails with the error: Pseudo-terminal will not be allocated because stdin is not a terminal.

Fix: change line 44 of alluxio/bin/alluxio-workers.sh.

The original content is:

nohup ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no -t ${worker} ${launcher} \

Change it to the following (-tt forces pseudo-terminal allocation even when stdin is not a terminal):

nohup ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no -tt ${worker} ${launcher} \

2. If startup reports a sudo-related command error, the user doing the start is not in the sudoers file. Add the user to that file: search for the root entry and add a similar line below it.

The entries look as follows:

root  ALL=(ALL) ALL
spark ALL=(ALL) ALL

Also comment out the requiretty line in this file: #Defaults requiretty

3. If errors are still reported, you can start the master first, then start the worker on each node individually.
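Starting the daemons individually means running alluxio-start.sh with a role argument instead of all; the Mount option (re)mounts the worker ramdisk. A sketch using a hypothetical helper that only prints the command for each role:

```shell
# Hypothetical helper: print the alluxio-start.sh invocation for a role.
# Run the printed command on the corresponding host.
start_cmd() {
  case "$1" in
    master) echo "./bin/alluxio-start.sh master" ;;
    worker) echo "./bin/alluxio-start.sh worker Mount" ;;  # Mount prepares the ramdisk
    *) return 1 ;;
  esac
}
start_cmd master   # run on the master (spark29 in this guide)
start_cmd worker   # run on each worker node
```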

VIII. Official installation instructions

The official installation guide is at http://www.alluxio.org/docs/master/cn/Running-Alluxio-on-a-Cluster.html; a Chinese version is available.
