"War of the Hadoop SQL engines. And the winner is ...? "This is a very good question. Just. No matter what the answer is. We all spend a little time figuring out spark SQL, the family member inside Spark.
Originally Apache Spark SQL official code Snippets on the Web (Spark official online sample has a common problem: do not provide complete code) has been written to be relatively clear, but assume that the user completely copy its code down, you may encounter the issue of compilation does not pass. In addition, there is another common problem with the Spark official Web sample: No test data is provided. So. For everyone to experience the spark SQL API without detours and high speed. This article will show a small program that rewrites the sample from the official website, as well as the execution results.
[A Program]
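The original program listing did not survive extraction, so below is a minimal sketch of what a rewritten official sample looks like, targeting the Spark 1.x API that this article's terms (SchemaRDD, case class outside main) imply. The file name `Product.data`, the field layout, and the class/object names are assumptions for illustration, not the author's exact code.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// The case class must be declared at the top level (outside main);
// defining it inside main breaks the implicit schema derivation
// and causes a compilation error.
case class Product(id: Int, name: String, price: Double)

object SparkSQLDemo {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("SparkSQLDemo")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    // Enables the implicit RDD -> SchemaRDD conversion (Spark 1.x).
    import sqlContext.createSchemaRDD

    // Load the test data; path and comma-separated layout are assumptions.
    val products = sc.textFile("Product.data")
      .map(_.split(","))
      .map(p => Product(p(0).trim.toInt, p(1).trim, p(2).trim.toDouble))

    // Register the RDD as a temporary table and query it with SQL.
    products.registerTempTable("product")
    val rows = sqlContext.sql(
      "SELECT id, name, price FROM product WHERE price > 100")
    rows.map(r => s"id: ${r(0)}, name: ${r(1)}, price: ${r(2)}")
      .collect()
      .foreach(println)

    sc.stop()
  }
}
```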
[B test data]
Product.data:
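The contents of `Product.data` were lost in extraction. A plausible shape, matching the comma-separated `id,name,price` layout assumed above (the records themselves are entirely made up):

```text
1,notebook,350.0
2,pencil,3.5
3,monitor,1200.0
```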
[C Run]
Use spark-submit to submit the program to YARN.
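A typical submit command looks like the following; the jar name, main class, and resource settings are assumptions, not the author's original command line.

```shell
# Submit the application to a YARN cluster.
# --class must match the object containing main; the jar is the one
# produced by your build (sbt package / mvn package).
spark-submit \
  --class SparkSQLDemo \
  --master yarn-cluster \
  --num-executors 2 \
  --executor-memory 1g \
  spark-sql-demo.jar
```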
[D Run result]
- Console:
- YARN web console:
- YARN application log:
[E Summary]
- Note that the case class Product must be defined outside the main method, otherwise it causes a compilation error.
- Defining a "table object" (SchemaRDD) directly with the Spark SQL API is relatively simple.
- A natural next step is to try the integration between Spark SQL and HiveQL.
Copyright notice: this is an original blog article and may not be reproduced without the author's consent.
A 3-minute quick hands-on with Apache Spark SQL