Talking About Spark

Talking About Spark (I) -- Overall Structure

Spark is a small and elegant project, developed by the UC Berkeley team led by Matei Zaharia. It is written in Scala, and the core of the project consists of only 63 Scala files, fully embodying the beauty of minimalism.

For the full series, see: Talking About Spark http://www.linuxidc.com/Linux/2013-08/88592.htm

What Spark Relies On

(1) The MapReduce model
As a distributed computing framework, Spark adopts the MapReduce model, and the traces of Google's MapReduce and of Hadoop are all over it. Clearly it is not a radical innovation but an incremental one: with the basic idea unchanged, it borrows from, imitates, and relies on its predecessors, adding improvements that greatly increase the efficiency of MapReduce.

Using the MapReduce model to solve the problem of parallel computation over big data has one major advantage: Spark belongs to the same family as Hadoop. Since both use the MapReduce parallel programming model, rather than MPI, OpenMP, or some other model, a complex algorithm that can be expressed in Java and run on Hadoop can also be expressed in Scala and run on Spark, with a several-fold increase in speed. In contrast, porting an algorithm between MPI and Hadoop is much harder.

(2) Functional programming
Spark is written in Scala, and the supported development language is also Scala. One reason is that Scala supports functional programming, which keeps Spark's own code concise and also makes programs developed on Spark particularly concise. A complete MapReduce job in Hadoop requires creating a Mapper class and a Reducer class, whereas Spark only needs the corresponding map and reduce functions to be supplied; the amount of code is greatly reduced.
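For illustration, here is a hedged sketch of a complete word count in Spark. It assumes a SparkContext named sc and uses made-up input and output paths; the whole job is a handful of function calls rather than separate Mapper and Reducer classes:

// Word count sketch; sc and the HDFS paths are assumptions for illustration
val counts = sc.textFile("hdfs://namenode:9000/input.txt")
               .flatMap(line => line.split(" "))   // "map" side: split each line into words
               .map(word => (word, 1))             // emit (word, 1) pairs
               .reduceByKey(_ + _)                 // "reduce" side: sum the counts per word
counts.saveAsTextFile("hdfs://namenode:9000/wordcount-output")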

(3) Mesos
Spark hands the problem of running in a distributed environment over to Mesos and does not concern itself with it, which is one reason its code can stay so lean. Admittedly, this is also taking a big shortcut, heh.

(4) HDFS and S3

Spark supports two distributed storage systems: HDFS and S3, arguably the two most mainstream choices today. Reads and writes against these file systems are carried out in a distributed fashion by the Spark tasks running on the Mesos slaves. If you want to run a cluster test but have neither an HDFS environment nor EC2, you can set up an NFS share that every Mesos slave can access and use it to simulate one.

Spark Terminology

(1) RDD (Resilient Distributed Dataset)

The resilient distributed dataset is the most central module and class in Spark, and also the essence of its design. You can think of it as a large collection that loads all of its data into memory so it can be conveniently reused. First, it is distributed: it can be spread across multiple machines for computation. Second, it is resilient: during computation, when a machine runs out of memory, the RDD exchanges data with the disk; this reduces performance to some extent but keeps the computation going. A detailed discussion of RDDs will follow in a separate article.
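As a hedged sketch of the "load once, reuse many times" idea (the log path and the ERROR/WARN strings are made up for illustration, and a SparkContext named sc is assumed):

// Load a file into an RDD once, cache it in memory, then reuse it in several computations
val lines  = sc.textFile("hdfs://namenode:9000/events.log").cache()
val errors = lines.filter(line => line.contains("ERROR")).count()
val warns  = lines.filter(line => line.contains("WARN")).count()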


(2) Local mode and Mesos mode
Spark supports two modes: local mode and Mesos cluster mode. When developing an algorithm on Spark, you can debug it in local mode and, once it works, submit it directly to the Mesos cluster; apart from reconsidering where files are saved, the algorithm itself needs no changes in theory.

Spark's local mode supports multithreading and has some ability to process work concurrently on a single machine, though it is not very strong. Local mode can save results locally or to a distributed file system, while Mesos mode must store results in a distributed or shared file system.

(3) Transformations and actions

For an RDD there are two kinds of operations: transformations and actions. Their essential difference: a transformation's return value is another RDD. Transformations follow a chained-call design pattern: one RDD is computed and converted into another RDD, which can then be converted again, and the whole process runs in a distributed way. An action's return value is not an RDD; it is an ordinary Scala collection, a single value, or null, and it ends the computation by returning a result to the driver program or writing the RDD out to the file system.
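A minimal sketch of the difference, assuming a SparkContext named sc:

val numbers = sc.parallelize(1 to 1000)
val doubled = numbers.map(_ * 2)       // transformation: returns another RDD, nothing runs yet
val total   = doubled.reduce(_ + _)    // action: triggers the computation and returns an Int to the driver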

Further details about these two kinds of operations are given in the Spark Development Guide below; they are the core of developing with Spark. (A slightly modified diagram from Spark's official slides illustrated the difference between the two operations in the original article.)

Spark on Mesos

In order to run on the Mesos framework, and following Mesos's specifications and design, Spark implements two classes: one is SparkScheduler, whose class name in Spark is MesosScheduler; the other is SparkExecutor, whose class name in Spark is Executor. With these two classes, Spark can run in a distributed fashion through Mesos.

Spark translates the RDD and the MapReduce functions into a standard job and a series of tasks, and submits them to the SparkScheduler. The SparkScheduler submits the tasks to the Mesos master, the master assigns them to different slaves, and the Spark executor on each slave executes the tasks assigned to it one by one and returns the results, composing a new RDD, or writing them directly to the distributed file system.

Spark-based projects

Two Spark-based projects have also come out of the AMP Lab:

The first is Bagel, Pregel on Spark, which lets you do graph computation with Spark; it is a very useful little project. Bagel ships with an example implementing Google's PageRank algorithm; the experimental data is at http://download.freebase.com/wex/, and interested readers can download it and try it out.
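To give a feel for what such a computation looks like, here is an illustrative sketch of the core PageRank update written with plain Spark operations (Bagel wraps this kind of iteration in a Pregel-style vertex API; the links RDD, the iteration count, and the damping constants here are assumptions, not Bagel's actual code):

// links: RDD[(String, Seq[String])] mapping each page to its outgoing links (assumed to exist)
var ranks = links.map { case (page, _) => (page, 1.0) }
for (i <- 1 to 10) {
  val contribs = links.join(ranks).flatMap { case (page, (outLinks, rank)) =>
    outLinks.map(dest => (dest, rank / outLinks.size))  // spread this page's rank over its out-links
  }
  ranks = contribs.reduceByKey(_ + _)
                  .map { case (page, sum) => (page, 0.15 + 0.85 * sum) }
}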

The second is Shark, Hive on Spark, which ports Hive's syntax to Spark, translating SQL into Spark jobs, and directly reads Hive's existing metastore and the corresponding data. The project won the Best Demo award at SIGMOD 2012. It is currently in alpha, and a formal release is coming soon; hopefully it will turn out to be a seriously impressive product, fully compatible with existing Hive and 10x faster.


Talking About Spark (II) -- Installation Guide

Installation would not normally deserve its own write-up, but installing Spark is genuinely a pain, and that has a lot to do with the languages and frameworks Spark uses.

Spark is written in Scala, so you must install Java and Scala; the underlying scheduling framework is Mesos, and Mesos is written in C++, so there are certain requirements on the glibc and GCC environment. After installing Mesos and Spark you also have to connect the two, and the versions must be chosen to match. If any one of these steps goes wrong, Spark will not run properly, so installing Spark really is quite a bit of trouble. The complete process, including the installation of Mesos 0.9, is recorded here in the hope that the next person will not fall into the same pits.

This guide is based on Spark 0.5 and Mesos 0.9; the server is Red Hat Enterprise Linux 6.1, 32-bit. Commands on other servers may differ slightly.

For the full series, see: Talking About Spark http://www.linuxidc.com/Linux/2013-08/88592.htm

Contents: install Spark; install Mesos; start Mesos; start Spark on Mesos; cluster deployment.

1. Install Spark

1.1 Install Java

The recommended version is JDK 1.6.0 u18. The download and installation steps are not covered here; just make sure to set JAVA_HOME at the end, which is required for the following steps, and for the Mesos installation in particular:

export JAVA_HOME=/usr/java/jdk
export PATH=$JAVA_HOME/bin:$PATH
1.2 Install Scala

wget http://www.scala-lang.org/downloads/distrib/files/scala-2.9.2.tgz
tar xvf scala-2.9.2.tgz
mkdir /usr/share/scala
cp -r scala-2.9.2/* /usr/share/scala
export SCALA_HOME=/usr/share/scala
export PATH=$PATH:$SCALA_HOME/bin/
1.3 Install Spark

wget -O mesos-spark-v0.5.0-0.tar.gz https://github.com/mesos/spark/tarball/v0.5.0
tar -xzvf mesos-spark-v0.5.0-0.tar.gz
mv mesos-spark-0472cf8 spark
cd spark
sbt/sbt compile

At this point, the basic installation of Spark is complete and you can try running in local mode

./run spark.examples.SparkPi local

If you see the correct value of Pi, the first step of the Spark installation is complete and local mode runs fine.

2. Install Mesos

Installing Mesos 0.9 requires the following:

glibc 2.9 (2.9 or above is required)
gcc-c++ 4.1
Python 2.6
python-devel
cppunit-devel
libtool

Redhat 6 basically meets all of these conditions already. On Redhat 5, glibc may be only 2.5, and it must be upgraded before Mesos can be compiled and installed; otherwise don't bother struggling with it, just call it a night:

wget http://people.apache.org/~benh/mesos-0.9.0-incubating-RC3/mesos-0.9.0-incubating.tar.gz
tar zxvf mesos-0.9.0-incubating.tar.gz
cd mesos-0.9.0
mkdir build
cd build
../configure --with-python-headers=/usr/include/python2.6 --with-java-home=$JAVA_HOME --with-java-headers=$JAVA_HOME/include --with-webui --with-included-zookeeper --prefix=/usr/local/mesos
make
make install

Pray. If everything goes well, Mesos will be installed under /usr/local/mesos. The final key step is to set MESOS_HOME:

export MESOS_HOME=/usr/local/mesos

3. Start Mesos

Starting manually:

3.1 Start the Master

cd /usr/local/mesos
(sbin/mesos-master --log_dir=/usr/local/mesos/logs &) &

The following output indicates that the master has started successfully:

Starting Mesos Master
Master started on ***:5050
Master ID: ***
Elected as master!
Loading webui script at '/usr/local/new_mesos/share/mesos/webui/master/webui.py'
Bottle server starting up (using WSGIRefServer())...
Listening on http://0.0.0.0:8080/
Use Ctrl-C to quit.

3.2 Start the Slave

(sbin/mesos-slave -m 127.0.0.1:5050 --log_dir=/home/andy/mesos/logs --work_dir=/home/andy/mesos/works &) &

Using the --resources="mem:20240;cpus:10" parameter, you can specify the resources to allocate according to the actual machine.

Starting Mesos Slave
Slave started on ***:42584
Slave resources: cpus=16; mem=23123
New master detected at master@***:5050
Registered with master; given slave ID ***
Loading webui script at '/usr/local/new_mesos/share/mesos/webui/slave/webui.py'
Bottle server starting up (using WSGIRefServer())...
Listening on http://0.0.0.0:8081/
Use Ctrl-C to quit.

4. Start Spark on Mesos

Well, we finally come to the most critical step: running Spark on Mesos, which means wiring Spark and Mesos together. Spark is Java wearing a Scala coat, Mesos is C++, and the channel between them is, inevitably, JNI.

The key is Spark's configuration file. Spark ships with a sample file, conf/spark-env.sh.template, together with detailed explanations. Based on our installation paths above, and with reference to that file, the configuration is as follows:

# Keep this consistent with the system's MESOS_HOME
export MESOS_HOME=/usr/local/mesos/

# New-style configuration item: points directly at libmesos.so. The .so must match the
# mesos-0.9.0.jar in the Spark directory; this is the key to Spark-Mesos communication
MESOS_NATIVE_LIBRARY=/usr/local/mesos/lib/libmesos.so

# Old-style configuration item for the other .so files; it does not seem to be needed
export SPARK_LIBRARY_PATH=/usr/local/mesos/lib

# Custom program jars can be placed in this directory
export SPARK_CLASSPATH=

...

# Keep this consistent with the system's SCALA_HOME
export SCALA_HOME=/usr/share/scala

# Must be less than or equal to the slave's mem (Slave resources: cpus=16; mem=23123)
# In local mode, large jobs also need this raised; the default of 512m is very small
export SPARK_MEM=10g

All right, after everything's ready, try running the following command:

cd spark
./run spark.examples.SparkPi 127.0.0.1:5050

(Note: unlike earlier Mesos versions, there is no need to write master@127.0.0.1:5050; if you do, Mesos will complain.)

If you successfully see the PI value again, congratulations, Spark's installation has succeeded another step.


Spark -- Development Guide (Translation)

This article is translated, with slight additions, from the official Spark Programming Guide: https://github.com/mesos/spark/wiki/Spark-Programming-Guide. Thanks to Shi Yun (tx) for the corrections. I hope it is of some help to friends who want to try Spark. The version covered is 0.5.0.

For the full series, see: Talking About Spark http://www.linuxidc.com/Linux/2013-08/88592.htm

Spark Development Guide

At a high level, every Spark application is a driver program that runs the user-defined main function and performs various parallel operations and computations on a cluster.

The most important abstraction Spark provides is the resilient distributed dataset (RDD), a special collection that can be distributed across the nodes of a cluster and operated on in a functional-programming style for various kinds of parallel computation. An RDD can be created from a file on HDFS, or from an existing collection in the driver program. Users can cache a dataset in memory so that it can be reused efficiently across parallel operations. Finally, RDDs automatically recover from node failures and are recomputed.

The second abstraction in Spark is the shared variables used in parallel computation. By default, when Spark runs a function in parallel as a set of tasks on different nodes, it ships a copy of every variable used by the function to each task, so these variables are not shared. Sometimes, however, we need variables that can be shared across tasks, or between tasks and the driver. Spark supports two types of shared variables:

Broadcast variables: cached in memory on every node, used to distribute read-only variables

Accumulators: variables that can only be added to, such as counters and sums

This guide demonstrates these features through a number of examples. Readers should be familiar with Scala, especially the syntax of closures. Note that Spark can also be run interactively through the spark-shell interpreter; you may want to try it.

Accessing Spark

To write a Spark application, you need to add Spark and its dependencies to your classpath. The easiest way is to run sbt/sbt assembly to compile Spark and its dependencies into a single jar, core/target/scala_2.9.1/spark-core-assembly-0.0.0.jar, and then add it to your classpath. Alternatively, you can publish Spark to your local Maven cache with sbt/sbt publish; it will appear as spark-core under the organization org.spark-project.
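As a hedged illustration (the exact version string is an assumption and should match the Spark you built), the resulting sbt dependency would look roughly like this:

// In your project's build definition, after publishing Spark to the local Maven cache with sbt/sbt publish
libraryDependencies += "org.spark-project" %% "spark-core" % "0.5.0"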

In addition, you will need to import some Spark classes and implicit conversions by adding the following lines at the top of your program:

import spark.SparkContext
import SparkContext._

Initializing Spark

The first thing a Spark program must do is create a SparkContext object, which tells Spark how to access a cluster. This is usually done with the following constructor:

new SparkContext(master, jobName, [sparkHome], [jars])

The master parameter is a string specifying a Mesos cluster to connect to, or the special string "local" to run in local mode. As described below, jobName is the name of your job; it will be shown in the Mesos web UI monitoring interface when running on a cluster. The last two parameters are used when deploying your code to a Mesos cluster, as discussed later.
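For example, a minimal local-mode context might be created like this (the job name is arbitrary; sparkHome and jars are only needed when deploying to a cluster):

import spark.SparkContext
import SparkContext._

val sc = new SparkContext("local", "MyTestJob")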

In Spark's interpreter, a special SparkContext variable named sc has already been created for you, and creating your own SparkContext there will not take effect. You can choose which master the context connects to by setting the MASTER environment variable, for example:

MASTER=local ./spark-shell

Master Names

The master name can be in one of the following three formats:

local: Run Spark locally with one worker thread (i.e., no parallelism).

local[K]: Run Spark locally with K worker threads (ideally set to the number of CPU cores on the machine).

HOST:PORT: Connect Spark to the given Mesos master and run on the cluster. HOST is the hostname of the Mesos master, and PORT is the port the master is configured to use, 5050 by default.

Note: in early Mesos versions (the old-mesos branch of Spark), you must use master@HOST:PORT instead.

Cluster Deployment

If you want your job to run on a cluster, you need to specify the two optional parameters:

sparkHome: the Spark installation path on the cluster machines (it must be the same on all of them)

jars: a list of jar files on the local machine containing your job's code and its dependencies; Spark will deploy them to every cluster node. You need to package your job into a set of jars using your own build system. For example, if you use sbt, the sbt-assembly plugin is a good way to turn your code and its dependencies into a single jar file.
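A hedged sketch of a cluster-mode context; the host, paths, and jar name below are assumptions to be replaced with your own:

val sc = new SparkContext(
  "mesos-master.example.com:5050",       // Mesos master as HOST:PORT
  "MyJob",                               // job name shown in the Mesos web UI
  "/usr/local/spark",                    // sparkHome: Spark install path on every cluster node
  Seq("target/scala_2.9.1/my-job.jar"))  // jars: your job code and its dependencies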

If some libraries are common and need to be shared across different jobs, you may have to copy them to the Mesos nodes manually and point to them by setting the SPARK_CLASSPATH environment variable in conf/spark-env.sh. More information is available in the configuration documentation.

Distributed Datasets

The core concept in Spark is the resilient distributed dataset (RDD), a fault-tolerant collection of elements that can be operated on in parallel. There are currently two types of RDD: parallelized collections, which take an existing Scala collection and run various parallel computations on it; and Hadoop datasets, which run various functions over each record of a file stored in HDFS or any other storage system supported by Hadoop. Both types of RDD are operated on in the same way.

Parallelized Collections

A parallelized collection is created by calling SparkContext's parallelize method on an existing Scala collection (any Seq object). The elements of the collection are copied to form a distributed dataset that can be operated on in parallel. The following example shows how to create a parallelized collection from an array using the Spark interpreter:

scala> val data = Array(1, 2, 3, 4, 5)
data: Array[Int] = Array(1, 2, 3, 4, 5)

scala> val distData = sc.parallelize(data)
distData: spark.RDD[Int] = spark.ParallelCollection@10d13e3e

Once created, the distributed dataset (distData) can be operated on in parallel. For example, we can call distData.reduce(_ + _) to add up the elements of the array. Further operations on distributed datasets are described later.

An important parameter of a parallelized collection is the number of slices, which specifies how many pieces the dataset is cut into. In cluster mode, Spark starts one task per slice. Typically you want 2-4 slices per CPU in the cluster (that is, 2-4 tasks per CPU). Normally, Spark tries to set the number of slices automatically based on the state of the cluster, but you can also set it manually through a second parameter to parallelize (for example: sc.parallelize(data, 10)).

Hadoop Datasets

Spark can create distributed datasets from any file stored in HDFS or any other storage system supported by Hadoop (including the local file system, Amazon S3, Hypertable, HBase, and so on). Spark supports text files, SequenceFiles, and any other Hadoop input format.

Text-file RDDs can be created with SparkContext's textFile method, which takes the URI of the file (either a local path on the machine, or a hdfs://, s3n://, kfs://, or other URI). Here is an example invocation:

scala> val distFile = sc.textFile("data.txt")
distFile: spark.RDD[String] = spark.HadoopRDD@1d4cee08

Once created, distFile can be used in dataset operations. For example, we can add up the lengths of all the lines using the map and reduce operations as follows:

distFile.map(_.size).reduce(_ + _)

The textFile method also accepts an optional second argument that controls the number of slices of the file. By default, Spark creates one slice per block of the file (blocks are 64MB by default in HDFS), but you can request more slices by passing a larger value. Note that you cannot ask for fewer slices than there are blocks (just as in Hadoop the number of map tasks cannot be smaller than the number of blocks).
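For example (the path and the slice count are arbitrary):

// Ask Spark for roughly 100 slices instead of the default of one per 64MB block
val distFile = sc.textFile("hdfs://namenode:9000/data.txt", 100)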

For SequenceFiles, use SparkContext's sequenceFile[K, V] method, where K and V are the types of the keys and values in the file. They must be subclasses of Hadoop's Writable, such as IntWritable and Text. In addition, Spark allows you to specify a few native types for common Writables; for example, sequenceFile[Int, String] will automatically read IntWritables and Texts.
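A hedged sketch (the path is hypothetical):

import org.apache.hadoop.io.{IntWritable, Text}

// Read a SequenceFile of (IntWritable, Text) records
val pairs = sc.sequenceFile[IntWritable, Text]("hdfs://namenode:9000/data.seq")

// Or use the native-type shorthand mentioned above
val pairs2 = sc.sequenceFile[Int, String]("hdfs://namenode:9000/data.seq")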

Finally, for other Hadoop input formats, you can use the SparkContext.hadoopRDD method, which takes an arbitrary JobConf together with the input format class, the key class, and the value class. Set these up the same way you would for a Hadoop job with your input source.

Operations on Distributed Datasets

Distributed datasets support two kinds of operations:

Transformations: create a new dataset from an existing one

Actions: run a computation on the dataset and return a value to the driver program

For example, map is a transformation that passes each element of the dataset through a function and returns a new distributed dataset with the results. On the other hand, reduce is an action that aggregates all the elements of the dataset with a function and returns the final result to the driver program, while the parallel reduceByKey returns a distributed dataset instead.

All transformations in Spark are lazy: they do not compute their results right away. Instead, Spark merely remembers the transformations applied to the underlying dataset. The transformations are only actually computed when an action requires a result to be returned to the driver program. This design lets Spark run more efficiently; for example, we can create a dataset with map and then consume it with reduce, returning only the result of the reduce to the driver rather than the entire, much larger, dataset.
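A minimal sketch of that map-then-reduce pattern (data.txt is assumed to exist and sc is the usual SparkContext):

val lineLengths = sc.textFile("data.txt").map(_.size)  // transformation: nothing is computed yet
val totalLength = lineLengths.reduce(_ + _)            // action: runs the whole pipeline; only the
                                                       // final Int is returned to the driver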

One important facility that Spark provides is caching. When you cache a distributed dataset, each node stores the slices of the dataset that it computes in memory and reuses them in other operations on that dataset. This makes subsequent computations much faster (often more than 10x). Caching is a key tool for building iterative algorithms on Spark and for interactive use from the interpreter.

The following tables list the transformations and actions currently supported.

Transformations

map(func): Returns a new distributed dataset formed by passing each element of the source through the function func.

filter(func): Returns a new dataset formed by selecting those elements of the source on which func returns true.

flatMap(func): Similar to map, but each input element can be mapped to 0 or more output elements (so func should return a Seq rather than a single element).

sample(withReplacement, frac, seed): Samples a fraction frac of the data, with or without replacement, using the given random seed.

union(otherDataset): Returns a new dataset containing the union of the elements of the source dataset and the argument.

groupByKey([numTasks]): When called on a dataset of (K, V) pairs, returns a dataset of (K, Seq[V]) pairs. Note: by default, 8 parallel tasks are used for the grouping; you can pass an optional numTasks argument to set a different number of tasks depending on the amount of data. (Combined with filter, this can achieve the effect of Hadoop's reduce inside groupByKey.)

reduceByKey(func, [numTasks]): When called on a dataset of (K, V) pairs, returns a dataset of (K, V) pairs in which the values for each key are aggregated using the given reduce function. As with groupByKey, the number of tasks is configurable through an optional second argument.

join(otherDataset, [numTasks]): When called on datasets of type (K, V) and (K, W), returns a dataset of (K, (V, W)) pairs with all pairs of elements for each key.

groupWith(otherDataset, [numTasks]): When called on datasets of type (K, V) and (K, W), returns a dataset of (K, Seq[V], Seq[W]) tuples. In other frameworks this operation is called cogroup.

cartesian(otherDataset): Cartesian product: when called on datasets of types T and U, returns a dataset of (T, U) pairs, i.e., all pairs of elements.

sortByKey([ascendingOrder]): When called on a dataset of (K, V) pairs, returns a dataset of (K, V) pairs sorted by K. Ascending or descending order is determined by the boolean ascendingOrder argument. (Similar to the sort in the intermediate phase of Hadoop's map-reduce, which sorts by key.)

Actions

reduce(func): Aggregates all the elements of the dataset using the function func, which takes two arguments and returns one value. The function must be associative so that it can be computed correctly in parallel.

collect(): Returns all the elements of the dataset as an array to the driver program. This is usually useful after a filter or another operation that returns a sufficiently small subset of the data; calling collect directly on a whole large RDD may cause the driver program to run out of memory.

count(): Returns the number of elements in the dataset.

take(n): Returns an array with the first n elements of the dataset. Note that this is currently not executed in parallel on multiple nodes; instead, the machine running the driver program computes all of the elements by itself. (Memory pressure on the driver machine will increase, so use it with care.)

first(): Returns the first element of the dataset (similar to take(1)).

saveAsTextFile(path): Writes the elements of the dataset as text files to the given path in the local file system, HDFS, or any other file system supported by Hadoop. Spark calls toString on each element to convert it into a line of text in the file.

saveAsSequenceFile(path): Writes the elements of the dataset as a Hadoop SequenceFile to the given directory in the local file system, HDFS, or any other file system supported by Hadoop. The elements of the RDD must be key-value pairs that implement Hadoop's Writable interface or are implicitly convertible to Writable (Spark includes conversions for basic types such as Int, Double, String, and so on).

foreach(func): Runs the function func on each element of the dataset. This is typically done to update an accumulator variable or to interact with an external storage system.

Calling an RDD's cache() method keeps its contents stored in memory after the first time it is computed. Different parts of the dataset are stored on the different cluster nodes that computed them, which makes subsequent uses of the dataset much faster. Caching is fault-tolerant: if the data of any partition of the RDD is lost, it is recomputed using the transformations that originally created it (only the missing partitions are recomputed, not the whole dataset).

Shared Variables

Normally, when a function is passed to a Spark operation (such as map or reduce), it runs as multiple tasks on the cluster nodes, and every variable used inside the function is copied separately for each task, so the tasks do not affect one another. These variables are copied to each machine, and updates made to them on the remote machines are not propagated back to the driver program. However, Spark provides two limited kinds of shared variables for two common usage patterns: broadcast variables and accumulators.

Broadcast variables

Broadcast variables allow the programmer to keep a read-only variable cached on every machine, rather than shipping a copy of it with every task. They can be used, for example, to give every node a copy of a large input dataset in an efficient way. Spark also tries to use efficient broadcast algorithms to reduce communication cost.

A broadcast variable is created from a variable v by calling the SparkContext.broadcast(v) method. The broadcast variable is a wrapper around v, and its value can be obtained by calling the value method. The following interpreter session shows how to use it:

scala> val broadcastVar = sc.broadcast(Array(1, 2, 3))
broadcastVar: spark.Broadcast[Array[Int]] = spark.Broadcast(b5c40191-a864-4c7d-b9bf-d87e1a4e787c)

scala> broadcastVar.value
res0: Array[Int] = Array(1, 2, 3)

After the broadcast variable is created, it should be used instead of the value v in any functions run on the cluster, so that v does not need to be shipped to those nodes again. In addition, the object v should not be modified after it has been broadcast; it is read-only, which ensures that every node sees the same value of the variable.

Accumulator

An accumulator is a variable that can only be "added" to through an associative operation and can therefore be supported efficiently in parallel. Accumulators can be used to implement counters (as in MapReduce) and sums. Spark natively supports accumulators of type Int and Double, and programmers can add support for new types.

An accumulator is created by calling the SparkContext.accumulator(v) method. Tasks running on the cluster can then add to it using the += operator, but they cannot read its value. When the driver program needs to read the value, it uses the value method.

The following interpreter session shows how to use an accumulator to add up the elements of an array:

scala> val accum = sc.accumulator(0)
accum: spark.Accumulator[Int] = 0

scala> sc.parallelize(Array(1, 2, 3, 4)).foreach(x => accum += x)
...
10/09/29 18:41:08 INFO SparkContext: Tasks finished in 0.317106 s

scala> accum.value
res2: Int = 10

More Information

On the Spark website, you can see the Spark sample program

In addition, Spark includes examples under examples/src/main/scala; some of them have both a Spark version and a local, non-parallel version, so you can see what changes are needed to make a program run on a cluster. You can run them by passing the class name to Spark's run script, for example: ./run spark.examples.SparkPi. Each example program prints a usage message when run without arguments.

From Mingfeng at Taobao

Source: http://www.linuxidc.com/Linux/2013-08/88592.htm

Reference: http://spark.apache.org/
