Run the examples one by one and check the results. As an illustration of the HADOOP_HOME environment variable, take
org.apache.spark.examples.sql.hive.JavaSparkHiveExample:
Modify its Run Configuration to add the environment variable HADOOP_HOME=${HADOOP_HOME}, i.e. point it at your local Hadoop installation.
Run the Java class. After the Hive example finishes, delete the metastore_db directory.
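For orientation: the heart of JavaSparkHiveExample is a SparkSession with Hive support enabled. With no external Hive configured, Spark spins up an embedded Derby metastore in the working directory, and that is exactly the metastore_db directory you delete between runs (HADOOP_HOME is usually needed on Windows so Spark can locate winutils.exe). A minimal sketch, where the table name and warehouse path are illustrative, not the lab's:

import org.apache.spark.sql.SparkSession;

public class HiveSketch {
    public static void main(String[] args) {
        // enableHiveSupport() needs the spark-hive dependency on the classpath.
        // With no external metastore configured, Spark creates an embedded
        // Derby metastore (the metastore_db directory) in the working directory.
        SparkSession spark = SparkSession
                .builder()
                .appName("Java Spark Hive sketch")
                .master("local[*]")
                .config("spark.sql.warehouse.dir", "spark-warehouse") // illustrative path
                .enableHiveSupport()
                .getOrCreate();

        // Any Hive DDL/DML goes through the metastore; the table name is illustrative.
        spark.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)");
        spark.sql("SELECT COUNT(*) FROM src").show();

        spark.stop();
    }
}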
Here is a simple way to run them one by one:
Eclipse -> File -> Import -> Run/Debug -> Launch Configurations
Browse to the easy_dev_labs\runconfig directory and import all of them.
Now open Eclipse -> Run -> Run Configurations.
Start from JavaCustomReceiver and work down the list one by one. Any example not called out below can simply be run directly.
Before running JavaCustomReceiver, start SocketServer first.
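I don't have the lab's SocketServer source in front of me, but its role is presumably just to listen on a port and emit text lines for the streaming examples to consume (the same job as nc -lk 9999). A minimal sketch of such a helper, assuming port 9999:

import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketServerSketch {
    public static void main(String[] args) throws Exception {
        // Listen where the streaming examples expect a text source
        // (port 9999 is an assumption; use whatever the lab configures).
        try (ServerSocket server = new ServerSocket(9999)) {
            while (true) {
                Socket client = server.accept();
                new Thread(() -> {
                    // Write one line per second until the client disconnects.
                    try (PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                        while (true) {
                            out.println("hello spark streaming example");
                            Thread.sleep(1000);
                        }
                    } catch (Exception ignored) {
                        // client went away; just let this handler thread end
                    }
                }).start();
            }
        }
    }
}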
JavaDirectKafkaWordCount: start KafkaSvr first. Once KafkaSvr is running, remove the Kafka Maven dependency and then run JavaDirectKafkaWordCount. When you are done with this example, add the Kafka dependency back to the POM.
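For reference, the core of JavaDirectKafkaWordCount is a direct (receiver-less) Kafka stream. A minimal sketch against the kafka-0-10 integration; the broker address, topic, and group id below are assumptions, not the lab's values:

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;

public class DirectKafkaSketch {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("DirectKafkaSketch");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(2));

        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "example-group");           // assumed group id

        // Direct stream: Spark reads the partitions itself, no receiver involved.
        JavaInputDStream<ConsumerRecord<String, String>> stream =
                KafkaUtils.createDirectStream(
                        jssc,
                        LocationStrategies.PreferConsistent(),
                        ConsumerStrategies.<String, String>Subscribe(
                                Arrays.asList("test"), kafkaParams)); // assumed topic

        // Word count over the message values of each batch.
        stream.map(ConsumerRecord::value)
              .flatMap(line -> Arrays.asList(line.split(" ")).iterator())
              .countByValue()
              .print();

        jssc.start();
        jssc.awaitTermination();
    }
}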
JavaNetworkWordCount depends on SocketServer.
JavaFlumeEventCount: run it first, then start FlumeSvr.
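The order matters because the Flume example uses the push model: the Spark job opens an Avro listener first, and the Flume agent (FlumeSvr here) then pushes events to it. A sketch of the core call, with an assumed host and port:

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.flume.FlumeUtils;
import org.apache.spark.streaming.flume.SparkFlumeEvent;

public class FlumeEventCountSketch {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("FlumeEventCountSketch");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(2));

        // Push model: this opens an Avro listener that the Flume agent's
        // sink must point at; the host and port here are assumptions.
        JavaReceiverInputDStream<SparkFlumeEvent> events =
                FlumeUtils.createStream(jssc, "localhost", 44444);

        events.count().print(); // how many Flume events arrived per batch

        jssc.start();
        jssc.awaitTermination();
    }
}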
From this point on SocketServer, FlumeSvr, and KafkaSvr are all left running; there is no need to shut them down.
JavaKafkaWordCount depends on KafkaSvr. You can use the original POM file for it.
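For contrast with the direct approach above, JavaKafkaWordCount uses the older receiver-based stream, which consumes through ZooKeeper rather than the brokers directly. A sketch against the kafka-0-8 integration; the ZooKeeper address, group id, and topic are assumptions:

import java.util.HashMap;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class ReceiverKafkaSketch {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("ReceiverKafkaSketch");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(2));

        // Receiver-based stream: consumes via ZooKeeper, with one receiver
        // thread per topic entry in this map.
        Map<String, Integer> topics = new HashMap<>();
        topics.put("test", 1); // assumed topic name and thread count

        JavaPairReceiverInputDStream<String, String> messages =
                KafkaUtils.createStream(jssc, "localhost:2181", "example-group", topics);

        messages.map(tuple -> tuple._2()) // keep only the message value
                .countByValue()
                .print();

        jssc.start();
        jssc.awaitTermination();
    }
}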
JavaRecoverableNetworkWordCount depends on SocketServer.
JavaSqlNetworkWordCount depends on SocketServer.
You can then read the official descriptions of these examples to see what each code snippet does:
https://spark.apache.org/examples.html
Running spark-examples under Eclipse with Spark 2 (part 02)