Why use Spark SQL?


Put simply, Spark SQL is the next-generation successor to Shark.

Shark was built on top of Hive. The advantage of this architecture was that traditional Hive users could integrate Shark seamlessly into their existing systems and run their query workloads without changes.

But the dependency on Hive caused problems. On the one hand, because the query optimizer was Hive's, it was inconvenient to add new optimization strategies as versions evolved: every improvement required studying and doing secondary development on a separate system, so the learning cost was very high.

On the other hand, MapReduce parallelizes at the process level, while Spark parallelizes at the thread level. For example, Hive uses static variables that are safe when each task runs in its own process space, but when multiple threads run in parallel inside the same process space, threads writing to a static variable of the same name at the same time produce a consistency problem.
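The consistency problem can be illustrated with a minimal Python sketch, where a module-level variable stands in for a Java static field (the variable name and values are made up for illustration, not Hive's actual code). Per-process parallelism gives each task its own copy of `current_db`; per-thread parallelism makes them share one copy, so one task's write clobbers another's. The interleaving is forced with events to make it deterministic:

```python
import threading

# Module-level "static" variable: one copy per process, shared by every
# thread in that process (illustrative stand-in for a static field).
current_db = "default"

wrote_a = threading.Event()
b_done = threading.Event()
read_a = []

def task_a():
    global current_db
    current_db = "warehouse_a"   # task A configures "its" database
    wrote_a.set()
    b_done.wait()                # forced interleaving: B runs in between
    read_a.append(current_db)    # A reads back what it thinks it set

def task_b():
    global current_db
    wrote_a.wait()
    current_db = "warehouse_b"   # task B overwrites the shared variable
    b_done.set()

ta = threading.Thread(target=task_a)
tb = threading.Thread(target=task_b)
ta.start(); tb.start()
ta.join(); tb.join()

print(read_a[0])  # "warehouse_b": task A observes task B's value
```

With one task per process this interleaving is harmless, because each process has its own `current_db`; in one shared process space it silently corrupts task A's state.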

As a result, Shark had to rely on a separately maintained branch of the Hive source code. To solve this problem, AMPLab and Databricks developed Spark SQL, built around the Catalyst optimizer.


Spark's full-stack solution offers users a variety of data analysis frameworks; machine learning, graph computing, and stream computing are all developing rapidly and attracting large numbers of learners. So why should people still care about SQL in today's big data environment? The author believes there are several main reasons:
1) Ease of use and user inertia. Over the years, a great many programmers have built applications around the "database + application" architecture, because the ease of use of SQL raises development efficiency. Programmers have grown accustomed to writing business logic that calls into SQL, and the force of habit is powerful: if existing big data problems can be solved in the familiar way, why not? Providing support for SQL and JDBC lets traditional users write programs as before, significantly reducing migration costs.
2) The power of ecosystems. Many software systems perform well yet fail to succeed, in large part because of ecosystem problems. Traditional SQL has formed a mature ecosystem around JDBC and ODBC, so many application components and tools, such as visualization and data analysis tools, can be migrated directly, and an enterprise's existing IT tooling can transition seamlessly.
3) Data decoupling. Spark SQL is expanding its support for a variety of persistence layers, so users can keep storing data in their original persistence layer while trying out, and eventually migrating to, the big data analysis environment that Spark SQL provides.
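The "database + application" pattern from point 1 can be sketched with Python's built-in sqlite3 driver standing in for any JDBC/ODBC-style connector (table and column names here are invented for the example). Spark SQL aims to preserve exactly this working style, via its SQL entry point and JDBC connectivity:

```python
import sqlite3

# The familiar pattern: the host program handles control flow, while the
# business logic lives in embedded SQL strings.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, region TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 120.0, "east"), (2, 80.0, "west"), (3, 200.0, "east")],
)

# An aggregation expressed declaratively in SQL, called from application code.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 320.0), ('west', 80.0)]
```

A programmer who writes code in this shape can keep the same habits when the backend becomes Spark SQL, which is precisely the inertia argument the paragraph makes.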
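The decoupling idea in point 3 can be sketched in plain Python: the analysis (a grouped sum) is written once against a generic row abstraction and does not care which persistence format the rows came from. The file formats, field names, and loader functions below are illustrative stand-ins, loosely mirroring how a relational layer like Spark SQL presents different storage backends through one interface:

```python
import csv
import io
import json

def load_json(text):
    # One "persistence layer": rows serialized as a JSON array.
    return json.loads(text)

def load_csv(text):
    # Another "persistence layer": the same rows as CSV text.
    return [
        {"region": r["region"], "amount": float(r["amount"])}
        for r in csv.DictReader(io.StringIO(text))
    ]

def total_by_region(rows):
    # The analysis is written once, independent of the storage format.
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

json_src = '[{"region": "east", "amount": 120.0}, {"region": "west", "amount": 80.0}]'
csv_src = "region,amount\neast,120.0\nwest,80.0\n"

# Identical results from either storage format.
print(total_by_region(load_json(json_src)) == total_by_region(load_csv(csv_src)))  # True
```

Because the analysis layer only sees rows, users can leave data where it already lives and still adopt the new analysis environment, which is the migration path point 3 describes.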
  
