Flume Log Collection


I. Introduction of Flume

Flume is a distributed, reliable, and highly available system for aggregating massive amounts of log data. It allows various data senders in the system to be customized for data collection, and it provides simple processing of that data before writing it to any of a variety of (customizable) data receivers.

Design goal:

(1) Reliability

When a node fails, logs can be transmitted to other nodes without loss. Flume provides three levels of reliability guarantee, from strongest to weakest: end-to-end (the receiving agent first writes the event to disk and deletes it only after the data has been transferred successfully; if sending fails, the event is resent), store on failure (the strategy also adopted by Scribe: when the data receiver crashes, the data is written locally and sending continues after the receiver recovers), and best effort (data is sent to the receiver without any acknowledgement).

(2) Scalability

Flume employs a three-tier architecture of agent, collector, and storage, each of which can be scaled horizontally. All agents and collectors are managed by a master, which makes the system easy to monitor and maintain, and multiple masters are allowed (using ZooKeeper for management and load balancing), which avoids a single point of failure.

(3) Manageability

All agents and collectors are managed centrally by the master, which makes the system easy to maintain. With multiple masters, Flume uses ZooKeeper and gossip to keep dynamic configuration data consistent. Users can view the state of individual data sources or data flows on the master, and can configure and dynamically reload individual data sources. Flume provides both a web interface and shell script commands for managing data flows.

(4) Functional Scalability

Users can add their own agents, collectors, or storage as needed. In addition, Flume ships with many components, including various agents (file, syslog, etc.), collectors, and storage backends (file, HDFS, etc.).

II. Flume Structure

Flume's logical architecture:

As mentioned earlier, Flume uses a layered architecture: agent, collector, and storage. The agent and collector layers are each composed of two parts, a source and a sink: the source is where data comes from, and the sink is where data goes.

Flume uses two kinds of processes: master and node. A node is dynamically configured from the master shell or web interface to act as either an agent or a collector.

(1) Agent

The role of the agent is to send data from the data source to the collector.

Flume comes with a number of directly usable data sources (sources), such as:

    • text("filename"): sends the file filename as a data source, line by line
    • tail("filename"): detects new data appended to filename and sends it line by line
    • syslogTcp(5140): listens on TCP port 5140 and forwards the data that arrives
    • tailDir("dirname"[, fileregex=".*"[, startFromEnd=false[, recurseDepth=0]]]): tails the files under a directory, using a regular expression to select which files (the directory part excluded) to listen to; recurseDepth is the depth to which subdirectories are listened to recursively

See also this friend's write-up: http://www.cnblogs.com/zhangmiao-chp/archive/2011/05/18/2050465.html

A number of sinks are also available, such as:

    • console[("format")]: displays the data directly on the console
    • text("txtfile"): writes the data to the file txtfile
    • dfs("dfsfile"): writes the data to the file dfsfile on HDFS
    • syslogTcp("host", port): passes the data to the host node over TCP
    • agentSink[("machine"[, port])]: equivalent to agentE2ESink; if the machine parameter is omitted, the defaults flume.collector.event.host and flume.collector.event.port are used as the default collector
    • agentDFOSink[("machine"[, port])]: local hot-standby agent; when the agent finds that the collector node has failed, it keeps checking the collector's liveness so that it can resend events, and the data produced in the meantime is cached on the local disk
    • agentBESink[("machine"[, port])]: best-effort agent; if the collector fails, nothing is done about it and the data being sent is simply discarded
    • agentE2EChain: specifies multiple collectors to increase availability. If sending an event to the primary collector fails, it switches to the next collector, and when all the collectors have failed it keeps retrying persistently

See also this friend's write-up: http://www.cnblogs.com/zhangmiao-chp/archive/2011/05/18/2050472.html

(2) collector

The role of the collector is to aggregate the data from multiple agents and load it into storage.

Its sources and sinks are similar to an agent's.

Data sources (source), such as:

    • collectorSource[(port)]: collector source; listens on port for data to aggregate
    • autoCollectorSource: aggregates data automatically, with the master coordinating the physical nodes
    • logicalSource: logical source, which is assigned a port by the master and listens for data from rpcSink

Sinks, such as:

    • collectorSink("fsdir", "fsfileprefix", rollmillis): collector sink; data is sent to HDFS through the collector, where fsdir is the HDFS directory and fsfileprefix is the file name prefix
    • customDfs("hdfspath"[, "format"]): DFS sink with a custom format
(3) Storage

Storage is the storage layer; it can be an ordinary file system, or HDFS, Hive, HBase, or other distributed storage.

(4) Master

The master manages and coordinates the configuration information of the agents and collectors; it is the controller of the Flume cluster.

In Flume, the most important abstraction is the data flow, which describes a path along which data is generated, transmitted, processed, and eventually written to its target.

    1. For an agent, the data flow configuration specifies where to get data from and which collector to send it to.
    2. For a collector, it specifies how to receive the data sent by agents and which target machine to send the data to.
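
For example, the two kinds of data flow could be written with the Flume shell commands introduced later in this article (a hedged sketch; the node names, file path, and port are illustrative):

> exec config agentA 'tail("/tmp/log/message")' 'agentE2ESink("collectorA", 35853)'
> exec config collectorA 'collectorSource(35853)' 'collectorSink("hdfs://namenode/flume/", "syslog")'

The first line tells the agent where to read data and which collector to send it to; the second tells the collector which port to receive on and where to write the aggregated data.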

Note: The Flume framework depends on Hadoop and ZooKeeper only for their jar packages; it does not require the Hadoop and ZooKeeper services to be running when Flume starts.

III. Flume Distributed Environment Deployment

1. Experimental scenario
    • Operating system version: RedHat 5.6
    • Hadoop version: 0.20.2
    • JDK version: jdk1.6.0_26
    • Install flume version: Flume-distribution-0.9.4-bin

To deploy flume on the cluster, follow these steps:

    1. Install flume on each machine on the cluster
    2. Select one or more nodes as Master
    3. Modifying a static configuration file
    4. Start a master on at least one machine, and start a flume node on every node
    5. Dynamic configuration

You need to deploy flume on each machine in the cluster.

Note: The network environment of the Flume cluster must be stable and reliable; otherwise inexplicable errors will occur (for example, the agent being unable to send data to the collector).

1. Flume Environment Installation

$ wget http://cloud.github.com/downloads/cloudera/flume/flume-distribution-0.9.4-bin.tar.gz
$ tar -xzvf flume-distribution-0.9.4-bin.tar.gz
$ cp -rf flume-distribution-0.9.4-bin /usr/local/flume
$ vi /etc/profile              # add the environment configuration
    export FLUME_HOME=/usr/local/flume
    export PATH=.:$PATH:$FLUME_HOME/bin
$ source /etc/profile
$ flume                        # verify the installation

2. Select one or more nodes as Master

For the master, you can define a single master for the cluster, or you can select multiple nodes as masters to increase availability.

    • Single-master mode: easy to manage, but weak in fault tolerance and scalability
    • Multi-master mode: usually runs 3 or 5 masters, which provides good fault tolerance

Principle for choosing the number of Flume masters:

The distributed masters can continue to work normally without the cluster crashing as long as the number of masters working normally is more than half of the total number of masters. For example, a 3-master ensemble tolerates one failure, and a 5-master ensemble tolerates two.

The Flume master has two main functions:

    • Tracking the configuration of each node and notifying nodes of configuration changes;
    • Tracking the end of flows in reliable mode (E2E) so that the source of the flow knows when to stop transmitting events.
3. Modify the static configuration file

Site-specific settings for flume nodes and the flume master are configured through conf/flume-site.xml on each cluster node; if this file does not exist, the defaults in conf/flume-conf.xml are used. In the following example, the master name is set on a flume node so that the node can find the flume master called "master" on its own.

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>flume.master.servers</name>
        <value>master</value>
    </property>
</configuration>

In the case of multiple masters, the following configuration is required:

<property>
    <name>flume.master.servers</name>
    <value>hadoopmaster.com,hadoopedge.com,datanode4.com</value>
    <description>A comma-separated list of hostnames, one for each machine in the Flume Master.</description>
</property>
<property>
    <name>flume.master.store</name>
    <value>zookeeper</value>
    <description>How the Flume Master stores node configurations. Must be either 'zookeeper' or 'memory'.</description>
</property>
<property>
    <name>flume.master.serverid</name>
    <value>2</value>
    <description>The unique identifier for a machine in a Flume Master ensemble. Must be different on every master instance.</description>
</property>

Note: The flume.master.serverid property is configured mainly for the masters; the flume.master.serverid values of the master nodes in the cluster must all be different, and the values start from 0.

When a node is used in the agent role, you can set the default collector host by adding the following configuration to flume-conf.xml:

<property>
    <name>flume.collector.event.host</name>
    <value>collector</value>
    <description>This is the host name of the default "remote" collector.</description>
</property>
<property>
    <name>flume.collector.port</name>
    <value>35853</value>
    <description>This default TCP port is the port the collector listens on in order to receive the events it is collecting.</description>
</property>

See also: http://www.cnblogs.com/zhangmiao-chp/archive/2011/05/18/2050443.html for configuration.

4. Start the cluster

Start the nodes on the cluster:

    1. On the command line, run "flume master" to start the master node.
    2. On the command line, run "flume node -n nodeName" to start the other nodes. It is best to name each node according to the logical layout of the cluster, so that the configuration done on the master stays clear.

The naming rules are up to you; anything that is easy to remember and convenient for dynamic configuration is fine (dynamic configuration is introduced below).

5. Dynamic configuration based on the flume shell

For the commands available in the Flume shell, see: http://www.cnblogs.com/zhangmiao-chp/archive/2011/05/18/2050461.html

Suppose we currently deploy the flume cluster structure as follows:

We want to collect the system logs of the machines where agents A through F are located into HDFS. How do we configure this in the Flume shell to achieve that goal?

1. Set logical nodes (logical node)

$ flume shell
> connect localhost
> help
> exec map 192.168.0.1 agentA
> exec map 192.168.0.2 agentB
> exec map 192.168.0.3 agentC
> exec map 192.168.0.4 agentD
> exec map 192.168.0.5 agentE
> exec map 192.168.0.6 agentF
> getnodestatus
        192.168.0.1 --- IDLE
        192.168.0.2 --- IDLE
        192.168.0.3 --- IDLE
        192.168.0.4 --- IDLE
        192.168.0.5 --- IDLE
        192.168.0.6 --- IDLE
        agentA --- IDLE
        agentB --- IDLE
        agentC --- IDLE
        agentD --- IDLE
        agentE --- IDLE
        agentF --- IDLE
> exec map 192.168.0.11 collector

You can also open the master web interface here to check this.

2. Start the Collector listening port

> exec config collector 'collectorSource(35853)' 'collectorSink("","")'    # the collector node listens on port 35853 for incoming data; this is very important

Log in to the collector server and check the port:

If this configuration has not been done on the master, the open port cannot be detected on the collector.
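
A quick way to verify is to check the listening sockets on the collector host (a hedged example; 35853 is the port configured above):

$ netstat -nlt | grep 35853    # a line in LISTEN state should appear once the collector source is configured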

3. Set source and sink for each node

> exec config collector 'collectorSource(35853)' 'collectorSink("hdfs://namenode/flume/", "syslog")'
> exec config agentA 'tail("/tmp/log/message")' 'agentBESink("192.168.0.11")'    # experiments show that a logical node can have at most one source and one sink
...
> exec config agentF 'tail("/tmp/log/message")' 'agentBESink("192.168.0.11")'

At this point the configuration can be seen at a glance from the master web interface, and we have already achieved our original goal.

All of the dynamic configuration done above through the Flume shell can also be done through the Flume master web interface, so it is not explained further.

IV. Advanced Dynamic Configuration

The advanced configuration simply adds the following capabilities to the simple configuration above so that the system runs more reliably:

    • Multiple masters (high availability of the master node)
    • Collector chain (high availability of the collectors)

The multi-master case has been described above, including how masters are used and how many to run. Now for a quick look at the collector chain; it is also very simple: in the dynamic configuration, use the agent*Chain sinks to specify multiple collectors so that log transport remains available. Take a look at the logical diagram of Flume in a typical production environment:

Here agentA and agentB point to collectorA. If collectorA crashes, the agents react according to the configured reliability level; since we probably chose efficient transmission rather than E2E (and even with E2E, log accumulation on the agent's local disk is still a problem), we will typically configure multiple collectors to form a collector chain:

> exec config agentC 'tail("/tmp/log/message")' 'agentE2EChain("collectorB:35853", "collectorA:35853")'
> exec config agentD 'tail("/tmp/log/message")' 'agentE2EChain("collectorB:35853", "collectorC:35853")'

In this setup, if collectorB itself has a problem, each agent falls back to the other collector in its chain:

V. Questions and summaries

The node types above are master, agent, collector, and storage. For each type of node, let us look at its high availability and whether it could become a performance bottleneck.

First, a failure of the storage layer behaves the same as a failure of the collector layer: as long as the data has not been placed in its final location, the node is treated as failed. We choose the appropriate transmission mode according to how reliably the data needs to be collected, and through our configuration we control which collector receives the data. The collector's performance determines the data throughput of the entire Flume cluster, so the collector is best deployed on its own machine; for these layers, high availability therefore generally does not need separate consideration.

Next, agent-layer failure: Flume's data security level is mainly configured on the agent, and the agent provides three levels for sending data to the collector: E2E, DFO, and BE, which will not be repeated here. Here is a summary from an experienced user:

The agent node monitors all the files under the log folder (each agent can listen to at most 1024 files). For every file, the agent keeps something like a cursor that records how far the file has been read, so each time a new record is produced, the cursor reads the incremental records and sends them to the collector according to the security level (E2E or DFO) configured for the agent.
In the E2E case, the agent node first writes the events to a folder on the agent node and then sends them to the collector; if the data is eventually stored successfully in the storage layer, the agent deletes the files it wrote before, and if no success confirmation is received, it keeps them. If the agent node itself has a problem, all of that record information disappears; if you simply restart it, the agent assumes that none of the files under the log folder have been listened to yet (there are no file records), so the files are read again and the logs are duplicated. The concrete recovery method is: move the log files that have already been sent out of the monitored log folder on the agent node, handle them separately, and then restart the agent after the failure. Note: when an agent node fails, move out the data files from before the failure point, empty the folder configured as flume.agent.logdir, and then restart the agent.
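
A hedged sketch of that recovery procedure as shell commands (the paths and node name are assumptions; substitute your own monitored log directory and the directory configured as flume.agent.logdir):

# 1. move the log files written before the failure point out of the monitored directory
$ mv /tmp/log/*.log /path/to/backup/            # already-sent files, to be handled separately
# 2. empty the folder configured as flume.agent.logdir (here /data/tmp/flume-<user>/agent)
$ rm -rf /data/tmp/flume-<user>/agent/*
# 3. restart the agent node
$ flume node -n agentA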

Finally, master failure: if the master goes down, the whole cluster cannot work. To restart the cluster, move out all the files under the log folders that the agents listen to, and then restart the master. In the multi-master case, the cluster keeps working as long as the number of working masters on the cluster is more than half of the total number of masters, so it is enough to bring the failed masters back up.

Summary of issues:
1. When Flume collects data on the agent side, by default it creates a temporary directory under /tmp/flume-{user} to hold the log files the agent has intercepted. If those files grow large enough to fill the disk, the agent reports the error "Error closing logicalNode a2-18 sink: No space left on device". So when configuring the agent side, pay attention to the property

    <property>
        <name>flume.agent.logdir</name>
        <value>/data/tmp/flume-${user.name}/agent</value>
    </property>

and make sure that, as long as Flume runs 7x24, the disk holding the path in flume.agent.logdir cannot fill up.
2. Flume looks for hadoop-core-*.jar files at boot time, so the standard Hadoop core jar package (named hadoop-*-core.jar) needs to be renamed to hadoop-core-*.jar.
3. All Flume instances in a Flume cluster must be the same version; otherwise inexplicable errors will occur.
4. Logs collected by the Flume cluster and sent to HDFS are placed in folders created according to the time of the event, which in the source code is Clock.unixTime(). If you want the files to be generated according to the time the log itself was produced, you need to rewrite the constructor of the com.cloudera.flume.core.EventImpl class, public EventImpl(byte[] s, long timestamp, Priority pri, long nanoTime, String host, Map<String, byte[]> fields), to parse the contents of the array s, extract the time, and assign it to timestamp.
Note: the Flume framework itself constructs events whose body array s is empty, used to send something like a simple validation event, so be careful to handle the case where s is empty when dealing with the timestamp.
5. If the collector and the agent are not on the same network segment, intermittent disconnections will occur and the agent will be unable to transmit data to the collector, so the agent and the collector should preferably be deployed on the same network segment.
6. If, when starting the master, you get an error like "Try to start hostname, but hostname is not in the master list", you need to check whether the host address and hostname are configured correctly.
7. On the source side, the tail class of sources has a fairly large defect: resuming from a breakpoint is not supported. Since a restarted node does not record where it last stopped reading a file, there is no way to know where to start reading the next time. This matters especially when log files keep growing while the Flume source node is down: when the Flume source is started again, the log content added in the meantime cannot be read by the source. However, Flume has an execStream extension: you can write your own tool that monitors the growth of the log and sends the added content to a Flume node, which then forwards it to the sink node.

The previous article introduced the Scribe solution; the most intuitive impression the two leave on me is:

    • Scribe: complex to install, simple to configure
    • Flume: simple to install, complex dynamic configuration

There is a comparison chart in Dong's blog:

Flume Usage Summary

This article describes my first steps using Flume to transfer data into MongoDB, covering environment deployment and points to note.

1 Environment Construction

Requires the JDK, Flume-NG, the MongoDB Java driver, and flume-ng-mongodb-sink:
(1) JDK: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
(2) Flume-NG: http://www.apache.org/dyn/closer.cgi/flume/1.5.2/apache-flume-1.5.2-bin.tar.gz
(3) MongoDB Java driver jar package: https://oss.sonatype.org/content/repositories/releases/org/mongodb/mongo-java-driver/2.13.0/mongo-java-driver-2.13.0.jar
(4) flume-ng-mongodb-sink source: https://github.com/leonlee/flume-ng-mongodb-sink
flume-ng-mongodb-sink has to be compiled into a jar yourself: download the code from GitHub, unzip it, and run mvn package to build it. Maven must be installed to compile the jar package, and the machine needs network access.
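
A hedged sketch of those build steps (assuming git and Maven are already installed; the downloaded zip can be used instead of a clone):

$ git clone https://github.com/leonlee/flume-ng-mongodb-sink.git
$ cd flume-ng-mongodb-sink
$ mvn package                  # the built jar ends up under target/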

2 Introduction to Simple principles

This can be explained with a story about a pond. There is a pond with an inlet at one end and an outlet at the other. The inlet can be connected to various kinds of pipes, and so can the outlet, and there can be several inlets and several outlets. The water is called an event, the inlet is called a source, the outlet is called a sink, the pond is called a channel, and source + channel + sink together are called an agent. If necessary, multiple agents can also be connected together.
For more details, refer to the official documentation: http://flume.apache.org/FlumeDeveloperGuide.html
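
A minimal sketch that maps the terms above onto Flume NG configuration keys (the names are placeholders; the full configuration used in this article follows in section 3):

my_agent.sources  = my_source_1        # the inlet
my_agent.channels = my_channel_1       # the pond
my_agent.sinks    = my_mongo_1         # the outlet
my_agent.sources.my_source_1.channels = my_channel_1
my_agent.sinks.my_mongo_1.channel     = my_channel_1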

3 Flume Configuration

(1) Env configuration

Put the mongo-java-driver and flume-ng-mongodb-sink jar packages into the flume/lib directory and add that path to the FLUME_CLASSPATH variable in the flume-env.sh file.
JAVA_OPTS variable: add -Dflume.monitoring.type=http -Dflume.monitoring.port=xxxx so that the monitoring information can be viewed at [hostname:xxxx]/metrics; -Xms specifies the initial JVM memory and -Xmx the maximum JVM memory.
FLUME_HOME variable: sets the Flume root directory.
JAVA_HOME variable: sets the Java root directory.
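
A hedged flume-env.sh sketch illustrating the variables above (the jar file names, memory sizes, and monitoring port are examples, not verified values):

export JAVA_HOME=/usr/java/latest
export FLUME_CLASSPATH="$FLUME_HOME/lib/mongo-java-driver-2.13.0.jar:$FLUME_HOME/lib/flume-ng-mongodb-sink.jar"
export JAVA_OPTS="-Xms512m -Xmx1024m -Dflume.monitoring.type=http -Dflume.monitoring.port=34545"

With the monitoring options set, the metrics become visible at http://<hostname>:34545/metrics.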

(2) Log configuration

When debugging, set the log level to DEBUG and write it to a file: flume.root.logger=DEBUG,LOGFILE
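
One way to apply this during debugging is to pass the property on the command line when starting the agent (a hedged example; LOGFILE must be an appender defined in conf/log4j.properties):

$ ./bin/flume-ng agent --conf ./conf/ --conf-file ./conf/flume.conf -n my_agent -Dflume.root.logger=DEBUG,LOGFILE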

(3) Transmission configuration
Use an exec source, a file channel, and flume-ng-mongodb-sink.
Source Configuration Example:

my_agent.sources.my_source_1.channels = my_channel_1
my_agent.sources.my_source_1.type = exec
my_agent.sources.my_source_1.command = python xxx.py
my_agent.sources.my_source_1.shell = /bin/bash -c
my_agent.sources.my_source_1.restartThrottle = 10000
my_agent.sources.my_source_1.restart = true
my_agent.sources.my_source_1.logStdErr = true
my_agent.sources.my_source_1.batchSize = 1000
my_agent.sources.my_source_1.interceptors = i1 i2 i3
my_agent.sources.my_source_1.interceptors.i1.type = static
my_agent.sources.my_source_1.interceptors.i1.key = db
my_agent.sources.my_source_1.interceptors.i1.value = cswuyg_test
my_agent.sources.my_source_1.interceptors.i2.type = static
my_agent.sources.my_source_1.interceptors.i2.key = collection
my_agent.sources.my_source_1.interceptors.i2.value = cswuyg_test
my_agent.sources.my_source_1.interceptors.i3.type = static
my_agent.sources.my_source_1.interceptors.i3.key = op
my_agent.sources.my_source_1.interceptors.i3.value = upsert

Field Description:
Using an exec source, the command to execute is python xxx.py. In the xxx.py code I process the logs and print JSON-formatted data that follows the flume-ng-mongodb-sink convention (update-type operations must carry an _id field). The printed output becomes the body of the event, and I then use interceptors to add custom event headers.
The static interceptor is used to add information to the event header. Here I add db=cswuyg_test, collection=cswuyg_test, and op=upsert; these three keys are agreed with flume-ng-mongodb-sink and specify the DB and collection names in MongoDB and that the operation type is an update.

Channel Configuration examples:

my_agent.channels.my_channel_1.type = file
my_agent.channels.my_channel_1.checkpointDir = /home/work/flume/file-channel/my_channel_1/checkpoint
my_agent.channels.my_channel_1.useDualCheckpoints = true
my_agent.channels.my_channel_1.backupCheckpointDir = /home/work/flume/file-channel/my_channel_1/checkpoint2
my_agent.channels.my_channel_1.dataDirs = /home/work/flume/file-channel/my_channel_1/data
my_agent.channels.my_channel_1.transactionCapacity = 10000
my_agent.channels.my_channel_1.checkpointInterval = 30000
my_agent.channels.my_channel_1.maxFileSize = 4292870142
my_agent.channels.my_channel_1.minimumRequiredSpace = 524288000
my_agent.channels.my_channel_1.capacity = 100000

Field Description:

The parameter to pay attention to is capacity, which specifies how many events the pool can hold; you need to set an appropriate value based on your log volume, and if you use the file channel and the disk is large enough, it can be set as large as possible.
dataDirs specifies where the pool is stored; if possible, choose a disk that is not under heavy I/O load, and separate multiple disk directories with commas.

Sink Configuration Example:

my_agent.sinks.my_mongo_1.type = org.riderzen.flume.sink.MongoSink
my_agent.sinks.my_mongo_1.host = xxxhost
my_agent.sinks.my_mongo_1.port = yyyport
my_agent.sinks.my_mongo_1.model = dynamic
my_agent.sinks.my_mongo_1.batch = 10
my_agent.sinks.my_mongo_1.channel = my_channel_1
my_agent.sinks.my_mongo_1.timestampField = _s

Field Description:

model is set to dynamic, which means that the MongoDB DB and collection names are taken from the event header. The timestampField field is used to convert the value of the specified key in the JSON string into MongoDB's Date type. flume-ng-mongodb-sink does not support nested key names (such as _s.y), but that can be implemented by modifying the sink code yourself.

Agent Configuration Example:

my_agent.channels = my_channel_1
my_agent.sources = my_source_1
my_agent.sinks = my_mongo_1

(4) Start

You can write a control.sh script to control the startup, shutdown, and restart of Flume.
Launch Demo:
./bin/flume-ng agent --conf ./conf/ --conf-file ./conf/flume.conf -n agent1 > ./start.log 2>&1 &
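
A minimal control.sh sketch built around that launch command (the agent name, paths, and the way the process is located are assumptions; adjust them for your deployment):

#!/bin/bash
# control.sh start|stop|restart -- minimal sketch, not production-hardened
AGENT=agent1
CONF_FILE=./conf/flume.conf

case "$1" in
  start)
    nohup ./bin/flume-ng agent --conf ./conf/ --conf-file "$CONF_FILE" -n "$AGENT" > ./start.log 2>&1 &
    ;;
  stop)
    # find the flume-ng java process started with our conf file and stop it
    ps -ef | grep "$CONF_FILE" | grep -v grep | awk '{print $2}' | xargs -r kill
    ;;
  restart)
    "$0" stop
    sleep 3
    "$0" start
    ;;
  *)
    echo "usage: $0 {start|stop|restart}"
    ;;
esac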


From then on, log data flows from the log file, is read by xxx.py, enters the file channel, and is then picked up by flume-ng-mongodb-sink and written into the destination MongoDB cluster.
Once the basic functionality works, the next step is to tune xxx.py and enhance flume-ng-mongodb-sink.

4 Other

1. Monitoring: the officially recommended monitoring tool is Ganglia (http://sourceforge.net/projects/ganglia/), which has a graphical interface.

2. Version changes: starting from 1.x, Flume no longer uses ZooKeeper, and for data reliability it now provides only E2E (end-to-end) support; DFO (store on failure) and BE (best effort) were removed in the refactoring. E2E means that when an event is deleted from the channel, it is guaranteed to have been passed to the next agent or to the final destination; however, nothing is said about guaranteeing that data is not lost before it enters the channel. With something like the exec source, getting the data into the channel safely has to be guaranteed by the user.

3. Closing plugins: when using an exec source, restarting Flume does not close the old plugin process; it needs to be shut down manually.

4. The exec source does not guarantee that data is not lost, because this approach only pours water into the pond, regardless of the state of the pond; see the Warning section of https://flume.apache.org/FlumeUserGuide.html#exec-source. If you need stronger guarantees, the spooling directory source, which monitors a directory, is worth considering, but note that you cannot modify a file's name, you cannot overwrite a file with the same name, and you must not put half-written files into the directory. After a file is transferred it is renamed to xx.COMPLETED, so a scheduled cleanup script is needed to remove these files. Restarting can cause duplicate events, because files that were only half transferred have not yet been marked as completed. A hedged configuration sketch for the spooling directory source is given after this list.

5. Transmission bottleneck: when using Flume + MongoDB to transfer a large volume of data reliably (the log volume here is not really big data; a few hundred GB per day does not count), the bottleneck appears on the MongoDB side, especially for update-type writes.

6. Modifications needed to the current flume-ng-mongodb-sink plugin: (1) make update support $setOnInsert; (2) fix the bug where an exception is thrown when the $set or $inc of an update is empty; (3) fix the bug in bulk insert where a duplicate-key exception on one log causes the subsequent logs inserted in the same batch to be discarded.

7. Flume is very similar to Fluentd, but Flume, coming from the Hadoop ecosystem, is more popular, so I chose Flume.

8. Batch deployment: first package the JDK and Flume into a tar, then use the Python paramiko library to send the tar package to each machine, unpack it, and run it.
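
Regarding point 4 above, here is a hedged sketch of a spooling directory source configuration (the directory and names are placeholders; see the user guide section linked above for the full option list):

my_agent.sources.my_spool_1.type = spooldir
my_agent.sources.my_spool_1.spoolDir = /home/work/flume/spool
my_agent.sources.my_spool_1.channels = my_channel_1
# files are renamed with this suffix after they have been fully transferred
my_agent.sources.my_spool_1.fileSuffix = .COMPLETED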

This article originally appeared at: http://www.cnblogs.com/cswuyg/p/4498804.html
Reference:

1. http://flume.apache.org/FlumeDeveloperGuide.html

2. "Apache Flume: Distributed Log Collection for Hadoop"
