Top ten Hadoop-based big data enterprises

Source: Internet
Author: User
Top two super-popular Hadoop start-ups

It is no longer a secret that global data is growing geometrically, and this wave of data growth has spawned a large number of Hadoop start-ups around the world. As an Apache open source project, Hadoop has almost become a synonym for big data. Gartner estimates that the current market value of the Hadoop ecosystem is about $77 million, and the research firm expects it to grow rapidly to $813 million by 2016.

With the rapid development of the Hadoop market, a large number of start-up companies have emerged to carve up this nearly billion-dollar pie.

1, Platfora

What they do: Platfora provides big data solutions for enterprises, transforming the data in Hadoop into more intelligent business guidance.

Headquarters: San Mateo, California.

Person in charge: Ben Werther, who previously served as vice president of products at DataStax.

Establishment Date: 2011

Financial strength: $65 million in total. The latest round, led by Tenaya Capital, raised $38 million, with Citi Ventures, Cisco, Allegis Capital, Andreessen Horowitz, Battery Ventures, Sutter Hill Ventures and other well-known VCs participating.

Reason for listing: Platfora was founded to simplify Hadoop. Although enterprises adopt Hadoop as their big data solution, they often cannot extract value from the data quickly. Platfora's solution adds a layer on top of Hadoop that lets enterprise data analysts make better use of organizational data and the results derived from it.

Key customers include: Comcast, Disney, Edmunds.com and Washington Post.

Competition Pattern: Platfora's competitors include Datameer, IBM, SAP, SAS, Alpine Data, and others.

The difference: Platfora claims to offer the first scalable, Hadoop-based in-memory big data analysis platform. Platfora focuses on simplifying the complex steps between Hadoop and big data analysis, getting clearer data to the people who need it, faster.

2, Alpine Data Labs

What they do: provide a data analysis platform based on Hadoop.

Company headquarters: San Francisco, California

Executive Director: Joe Otto, former senior vice president of sales and services at Greenplum.

Establishment Date: 2010

Financial strength: $23.5 million in total funding, including a $16 million Series B round, from Sierra Ventures, Mission Ventures, UMC Capital and Robert Bosch Venture Capital.

Reason for listing: Most executives and managers have neither the time nor the coding skills to gather data, and no time to learn complex new infrastructure like Hadoop; they just want to see the big picture. The trouble is that sophisticated advanced analytics and machine learning often require scripting and coding expertise, which can restrict access to the data to data scientists. Alpine Data mitigates this problem by making predictive analytics accessible through SaaS.

Alpine Data Labs provides a visual drag-and-drop approach that allows data analysts (or any designated user) to work with large datasets across the organization, develop and refine models, and collaborate at scale without writing code. Data is analyzed in its existing environment, not migrated or sampled, through a web application that can also be hosted locally.

Alpine Data Labs leverages the parallel processing capabilities of Hadoop and MPP databases and implements its data mining algorithms in MapReduce and SQL. Users interact directly with their data where it already sits, and can then design and analyze workflows without worrying about data movement. All of this is done in a web browser; Alpine then converts these visual workflows into a sequence of in-database or MapReduce tasks.
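To make the "no data movement" idea concrete, here is a minimal sketch (not Alpine's actual product code) of how a single visual workflow step might be compiled into SQL and pushed down to run where the data already sits; sqlite3 merely stands in for a Hadoop or MPP SQL engine, and all names are illustrative assumptions.

```python
# A minimal sketch of the "no data movement" idea: a workflow step is
# compiled into SQL that runs inside the data platform, so only the small
# aggregated result travels back to the browser.
import sqlite3  # stands in for a Hadoop/MPP SQL engine in this toy example


def compile_step_to_sql(table: str, group_col: str, value_col: str) -> str:
    """Turn one visual workflow step ("summarize value_col by group_col")
    into a SQL statement that the data platform executes in place."""
    return (
        f"SELECT {group_col}, COUNT(*) AS n, AVG({value_col}) AS avg_value "
        f"FROM {table} GROUP BY {group_col}"
    )


def run_in_place(conn, sql: str):
    """Execute the pushed-down query; the full table never leaves the engine."""
    return conn.execute(sql).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE transactions (region TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO transactions VALUES (?, ?)",
        [("west", 120.0), ("west", 80.0), ("east", 200.0)],
    )
    sql = compile_step_to_sql("transactions", "region", "amount")
    print(run_in_place(conn, sql))  # only the aggregated rows come back
```

The design point being illustrated is that only the query travels to the cluster and only the summarized result travels back, which is what lets analysis happen at scale without sampling or extracts.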

Key customers include: Sony, Havas Media, Scala, Visa, Xactly, NBC, Stop, BlackBerry, and Morgan Stanley.

Competitive Landscape: Alpine Data Labs competes with the old guard (SAS, IBM SPSS, as well as SAP) and with startups such as Nuevora, Platfora, Skytree, Revolution Analytics, and RapidMiner.

The key difference: Alpine Data Labs argues that most competing offerings are either desktop-based or single-point solutions without any collaboration capabilities. By contrast, Alpine Data Labs provides a "SharePoint-like" experience: alongside collaboration and search, it also offers modeling and machine learning under the same roof. Alpine is also firmly in the no-data-movement camp: if a company's data is in a Hadoop or MPP database, Alpine pushes its analysis instructions out to the cluster rather than moving the data.

Enterprises providing Hadoop platforms and services

3, Altiscale

Main business: Provide Hadoop as a service (HaaS).

Headquarters: Palo Alto, California

Executive President: Raymie Stata, formerly chief technology officer of Yahoo.

Establishment Date: March 2012

Financial strength: Altiscale is backed by $12 million in first-round investment from General Catalyst and Sequoia Capital, along with individual backers.

List reason: The purpose of Altiscale's service is to abstract away the complexity of Hadoop: it builds a complete Hadoop environment for engineers and then maintains and manages it, allowing users to focus on their data and applications. When the customer's needs change, Altiscale adjusts the environment accordingly.

Main customer: MarketShare.

Main competitors: Microsoft Azure, Qubole and Xplenty.

Key business difference: Altiscale describes itself as "the only one that truly offers a complete Hadoop production environment."

4, Trifacta

Main business: Provide a full-service platform that transforms and organizes enterprises' raw data into structured data ready for analysis and processing.

Company headquarters: San Francisco, California

Executive President: Joe Hellerstein, who, in addition to being the CEO of Trifacta, is a professor of computer science at Berkeley.

Establishment Date: 2012

Sources of funding: Trifacta has raised $16.3 million in registered funding across two rounds from Accel Partners, XSeed Capital, Data Collective, Greylock Partners, and individual investors.

List reason: Trifacta's technology platform sits in the middle of the data chain. It solves the bottleneck between big data and analysis tools for enterprises, saving business analysts a great deal of time and effort.

To address this problem, Trifacta uses its "predictive interaction" technology to turn data preparation into a visual experience that lets users quickly and easily identify features of interest or concern. As analysts highlight visual features, Trifacta's predictive algorithms observe both user behavior and the properties of the data to predict user intent and make recommendations, without requiring explicit specifications from the user. The tedious task of data transformation thus becomes a lightweight experience that is more flexible and more efficient than traditional methods.
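As a rough illustration of what "predictive interaction" could look like under the hood, the following toy Python sketch ranks candidate extraction patterns by how well they explain a value the analyst highlighted and how many other rows they cover; the patterns and scoring heuristic are assumptions for illustration, not Trifacta's actual algorithm.

```python
# A toy sketch of "predictive interaction": the user highlights one example
# (e.g. the digits inside "order-1234"), and the system proposes candidate
# transforms, ranked by how many other rows they also match.
import re
from typing import List, Tuple

CANDIDATE_PATTERNS = [
    r"\d+",          # runs of digits
    r"[A-Za-z]+",    # runs of letters
    r"[^-]+$",       # everything after the last hyphen
]


def suggest_transforms(rows: List[str], example: str) -> List[Tuple[str, float]]:
    """Rank candidate extraction patterns by (a) reproducing the user's
    highlighted example on the first row and (b) matching as many other
    rows as possible."""
    suggestions = []
    for pattern in CANDIDATE_PATTERNS:
        match = re.search(pattern, rows[0])
        if not match or match.group(0) != example:
            continue  # pattern does not explain the user's highlight
        coverage = sum(1 for r in rows if re.search(pattern, r)) / len(rows)
        suggestions.append((pattern, coverage))
    return sorted(suggestions, key=lambda s: -s[1])


if __name__ == "__main__":
    data = ["order-1234", "order-98", "refund-777"]
    # The analyst highlighted "1234" in the first row:
    print(suggest_transforms(data, "1234"))
```

The idea the sketch captures is that the user supplies one visual example and the system generalizes it into a reusable transformation, which is why no scripting is required of the analyst.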

Main customers: Lockheed Martin, Johnson Group.

Competition Pattern: Trifacta will compete with Paxata, Informatica and Cirrohow.

The key difference: Trifacta believes that the data transformation problem requires a completely new interaction model, one that couples machine intelligence with human business insight. The Trifacta platform combines intelligent inference with the visual interaction of its "predictive interaction" technology to bridge the gap between people and data.

Second-level big data processing speed

5, Splice Machine

What they do: Provide customers with a SQL-compatible database built on Hadoop.

Company headquarters: San Francisco, California

Person in charge: Monte Zweben, who previously worked at NASA's Ames Research Center, where he served as deputy head of the artificial intelligence branch. He later founded and served as chief executive officer of Blue Martini Software.

Establishment Date: 2012

Money: They are backed by $19 million in funding from InterWest Partners and Mohr Davidow Ventures.

Reasons for listing: Enterprise application and web developers have been moving away from traditional relational databases, and with data volumes growing rapidly and data types constantly changing, more flexible solutions are needed to address the architectural problems.

Now, with emerging database solutions, the features that made the RDBMS so popular for so long, such as ACID compliance, transactional integrity, and standard SQL, are available on a cost-efficient and scalable Hadoop platform. Splice Machine believes this lets developers get the best of both worlds on a common database platform.

Splice Machine gives businesses the benefits of NoSQL databases, such as auto-sharding, scalability, fault tolerance and high availability. It also optimizes complex database queries for the enterprise, without requiring all data applications and BI tools to be rewritten.

By leveraging distributed computing, Splice Machine can scale from terabytes to petabytes simply by adding more commodity servers. Splice Machine is built on an RDBMS foundation that can deliver this scalability without sacrificing SQL functionality or ACID compliance.
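The practical meaning of "standard SQL plus ACID on a scale-out platform" is that ordinary transactional application code keeps working unchanged. The sketch below shows a plain DB-API transaction that either commits both updates or rolls them back together; sqlite3 stands in for a SQL-on-Hadoop engine here, and the connection details are hypothetical rather than Splice Machine's actual driver.

```python
# A minimal sketch of ACID-compliant, standard SQL from an application's
# point of view: both updates commit together or neither is applied.
import sqlite3  # stand-in for a Splice-Machine-style SQL-on-Hadoop engine


def transfer(conn, src: str, dst: str, amount: float) -> None:
    """Move funds between accounts atomically (transactional integrity)."""
    try:
        cur = conn.cursor()
        cur.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                    (amount, src))
        cur.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                    (amount, dst))
        conn.commit()
    except sqlite3.Error:
        conn.rollback()  # a failed transfer leaves both balances untouched
        raise


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                     [("alice", 100.0), ("bob", 25.0)])
    transfer(conn, "alice", "bob", 40.0)
    print(conn.execute("SELECT * FROM accounts ORDER BY name").fetchall())
```

Because the interface is plain SQL with transactions, existing applications and BI tools can, in principle, be pointed at the scale-out backend without being rewritten, which is the claim made above.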

Competitive Landscape: Competitors include Cloudera, MemSQL, NuoDB, DataStax and VoltDB.

The key difference: Splice Machine claims the ability to use Hadoop inside the database to deliver real-time analytics.

6, DataTorrent

What they do: Provide a real-time stream processing platform based on Hadoop.

Headquarters: Santa Clara, California

Person in charge: Fortis, formerly one of the founding members of Yahoo's engineering team and later its executive vice president of engineering.

Establishment Date: 2012

Source of funding: The company closed an $8 million Series A round in June 2013, led by August Capital with participation from AME Cloud Ventures. It had previously raised $750,000 in seed money from Morado Venture Partners and Farzad Nazem.

Reason for listing: DataTorrent believes it can solve the problem of data latency, especially in environments with demanding real-time analysis requirements.

For some insights, waiting for data to be stored on disk, analyzed, and then answered is simply too late. For example, if a hacker hijacks a credit card account and manages to make a few purchases, even if the card is cancelled within minutes the cardholder has already suffered significant losses. DataTorrent argues that enterprises need to get hold of data in real time and analyze and act on it quickly.

Unlike traditional batch processing, which can take hours, DataTorrent claims to be able to process hundreds of millions of data items per second. This enables the enterprise to process, monitor, and make decisions based on its data in real time.
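To illustrate why per-event streaming matters for a case like the credit card example above, here is a generic Python sketch (not DataTorrent's actual API) that scores each swipe the moment it arrives against a short sliding window instead of waiting for a batch job over data on disk; the window size and alert threshold are made-up parameters.

```python
# A generic sketch of stream processing for fraud detection: each event is
# evaluated on arrival against a sliding window of recent activity.
from collections import defaultdict, deque

WINDOW_SECONDS = 60      # look at the last minute of activity per card
ALERT_THRESHOLD = 3      # illustrative: 3+ swipes in a minute looks suspicious


class StreamingFraudDetector:
    def __init__(self):
        self.recent = defaultdict(deque)   # card_id -> timestamps in window

    def on_event(self, card_id: str, timestamp: float) -> bool:
        """Process one swipe the moment it arrives; return True to alert."""
        window = self.recent[card_id]
        window.append(timestamp)
        # Drop swipes that have fallen out of the sliding window.
        while window and timestamp - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) >= ALERT_THRESHOLD


if __name__ == "__main__":
    detector = StreamingFraudDetector()
    swipes = [("card42", 0.0), ("card42", 10.0), ("card42", 20.0), ("card7", 25.0)]
    for card, ts in swipes:
        if detector.on_event(card, ts):
            print(f"alert: {card} at t={ts}s")   # fires on the third swipe
```

The contrast with batch processing is that the decision is made while the event is still in flight, rather than hours later when the damage is already done.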

Competitive Landscape: DataTorrent's main competition comes from IBM (InfoSphere Streams) and the open source Storm project.

The key difference: DataTorrent's most important selling point is the speed of its data analysis.

Large Hadoop hosting platforms