Today, some friends asked how to run unit tests on Spark. The SBT-based test method is as follows.
When testing Spark test cases, you can use the sbt test commands:
1. Run all test cases:
sbt/sbt test
2. Run a single test case:
sbt/sbt
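For illustration, here is a minimal ScalaTest suite of the kind `sbt test` discovers; the suite name and its body are hypothetical, not taken from Spark's own test tree. With the older sbt syntax of this era, a single suite would be run as `sbt/sbt "test-only *WordCountSuite"`.

```scala
import org.scalatest.FunSuite

// Hypothetical suite: `sbt test` runs every discovered suite,
// while `test-only *WordCountSuite` runs only this one.
class WordCountSuite extends FunSuite {
  test("counts words in a line") {
    val counts = "a b a".split(" ").groupBy(identity).mapValues(_.length)
    assert(counts("a") == 2 && counts("b") == 1)
  }
}
```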
case in other languages, but it is far more powerful, involving case classes, unapply (extractor) functions, and so on; there are many introductions online. Second, there are powerful for expressions, partial functions, implicit conversions, and the like. The following mainly introduces Scala concurrent (parallel) programming.
II. Introduction to SBT
When programming in the Scala language, it is best to use the SBT build framework, which automati
without any changes to them. However, to implement a more advanced backup and recovery strategy, you must change these settings. The RMAN SHOW and CONFIGURE commands view and alter the RMAN configuration settings. Oracle Database Backup and Recovery Reference provides the syntax for CONFIGURE.
3.4.1.1 Displaying current RMAN configuration settings: SHOW
RMAN> SHOW RETENTION POLICY;
RMAN> SHOW DEFAULT DEVICE TYPE;
RMAN> SHOW ALL;
3.4.1.2 Restoring default RMAN configura
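For context, SHOW's counterpart CONFIGURE changes a setting, and CONFIGURE ... CLEAR restores that setting's built-in default. This is standard RMAN syntax; the redundancy value below is only an example:

```
RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
RMAN> CONFIGURE RETENTION POLICY CLEAR;
```

CLEAR returns the retention policy to its default of REDUNDANCY 1.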
Recently I wanted to test the performance of Kafka, and it took a lot of effort to get Kafka installed on Windows. The entire installation process is provided below; it is absolutely usable and complete, and complete Kafka Java client code for communicating with Kafka is provided as well. A complaint here: most of the online articles about installing Kafka on Windows are either incomplete, ship broken Kafka client code, or are not based on the 0.8 version. But it must be noted that this article simply intr
Because we are currently working on a quadrotor project, I searched for some information on the Internet and reposted the following article about entry-level quadrotor knowledge.
1. Structure
The rotors are symmetrically distributed around the body in four directions: front, back, left, and right. The four rotors lie in the same horizontal plane, and all four share the same structure and radius. The four motors are installed in the b
This article assumes that Scala, SBT, and Spark have been correctly installed. Briefly, the steps to submit the program to the cluster for running are: 1. Build the standard SBT project structure, where the build.sbt file is used to configure the basic information of the project (project name, organization name, project version, Scala version us
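As an illustration of that configuration file, a minimal build.sbt for a Spark project might look like this; the project name, organization, and versions below are placeholders, not taken from the article:

```scala
// build.sbt -- minimal project metadata plus the Spark dependency
name := "spark-demo"              // hypothetical project name
organization := "com.example"     // hypothetical organization
version := "0.1.0"
scalaVersion := "2.11.8"          // matches the Scala version shown later in this page

// "provided" because the cluster supplies Spark at runtime
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.1.1" % "provided"
```

Typically `sbt package` then produces the jar that spark-submit ships to the cluster.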
Qingming Holiday toss for two days, summed up two ways to use the IDE for the Spark program, record:
The first method is simpler; both methods are compiled with SBT.
Note: There is no need to install Scala locally; otherwise, there may be version compatibility issues when compiling the program.
First, based on the NON-SBT way
Create a Scala IDEA project:
We use the NON-
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.1
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_91)
Type in expressions to have them evaluated.
Type :help for more information.

scala>
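Once the scala> prompt appears, a quick sanity check using the `sc` SparkContext that spark-shell pre-creates might look like this (the expression itself is illustrative):

```scala
// spark-shell predefines `sc`; this distributes 1..10 and sums it
scala> sc.parallelize(1 to 10).reduce(_ + _)
res0: Int = 55
```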
While in interactive mode, you can also go to the Web UI page to view relevant information, as follows:
8. Installing SBT with yum
[centosm@centosm test]$ curl https://bintray.com/
Respect copyright. Source: http://blog.csdn.net/macyang/article/details/7100523
- What is Spark?
Spark is a MapReduce-like cluster computing framework designed to support low-latency iterative jobs and interactive use from an interpreter. It is written in Scala, a high-level language for the JVM, and exposes a clean language-integrated syntax that makes it easy to write parallel jobs. Spark runs on top of the Mesos cluster manager.
- Get Spark:
git clone git://github.com/mesos/spark.git
- Spark compilation and ru
Restart IDEA:
After the restart, you will see the following interface:
Step 4: Compile Scala code in IDEA:
First, select "Create New Project" on the interface we reached in the previous step:
Select the "Scala" option in the list on the left:
To facilitate future development, select the "SBT" option on the right:
Click "Next" to go to the next step and set the name and directory of the Scala project:
Click "Finish" to
/profile Environment Variables
Step 1: Open the /etc/profile file with the following command:
$ sudo vi /etc/profile
Step 2: Set the following parameters:
export HADOOP_HOME=/app/hadoop/hadoop-2.2.0
export HIVE_HOME=/app/complied/hive-0.13.1-src
export HIVE_DEV_HOME=/app/complied/hive-0.13.1-src
Step 3: Make the configuration take effect and verify:
$ source /etc/profile
$ echo $HIVE_DEV_HOME
1.3.3 Run sbt for compilation
To run hive/console, you do not need to start Spark.
Reference site: https://github.com/yahoo/kafka-manager
First, the features:
Manage multiple Kafka clusters
Conveniently check Kafka cluster status (topics, brokers, replica distribution, partition distribution)
Run preferred replica election
Generate partition assignments based on the current partition state
Topic configuration and topic creation (different configurations for 0.8.1.1 and 0.8.2)
Delete topics (supported only on 0.8.2 and above, and delete.topic.enable=true must be set in the broker config)
Configuring the Play Framework environment
Download the jar package [Play with Activator]. This step was a bit dizzying: as a Java programmer, running the jar package from cmd downloads a lot of configuration files, and some resources could not be fetched without a VPN, so the machine was left running overnight. Next, go to the folder where the files are located and configure the environment variable [; directory: \activator]. When creating a new project, it prompts you to choose a template; originally one should choose Java
by each task; therefore, some variables are not shared. However, I need variables that can be shared within a task, or between tasks and the driver program. Spark supports two types of shared variables:
Broadcast variables: cached on every node, used to store a read-only value
Accumulators: variables that can only be added to, such as sums
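A small sketch showing both kinds of shared variable together; the local-mode setup and the data here are illustrative, but the broadcast and accumulator calls are standard Spark 2.x API:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SharedVarsDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("shared-vars").setMaster("local[2]"))

    // Broadcast variable: a read-only lookup table cached on every node
    val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))

    // Accumulator: add-only counter, aggregated back to the driver
    val missing = sc.longAccumulator("missing-keys")

    val total = sc.parallelize(Seq("a", "b", "c"))
      .map { k =>
        lookup.value.get(k) match {
          case Some(v) => v
          case None    => missing.add(1); 0  // count keys absent from the table
        }
      }
      .reduce(_ + _)

    // total = 3 (1 + 2 + 0), missing = 1 ("c" is not in the lookup table)
    println(s"total=$total, missing=${missing.value}")
    sc.stop()
  }
}
```

The accumulator's value is only reliable on the driver after an action (here, `reduce`) has run.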
The following examples show some of these features. It is better to be familiar with Scala first, especially its package methods. Note that Spark can run
The content source of this page is from the Internet, and does not represent Alibaba Cloud's opinion;
the products and services mentioned on this page have no relationship with Alibaba Cloud. If the
content of this page is confusing, please write us an email and we will handle the problem
within 5 days after receiving your email.
If you find any instances of plagiarism from the community, please send an email to:
info-contact@alibabacloud.com
and provide relevant evidence. A staff member will contact you within 5 working days.