Big data, which emerged in 2011 and soared in 2012, may change many aspects of data management dramatically. Big data systems have brought changes to the management and manipulation of computer data, continuous extract-transform-load (ETL) functions, operational business intelligence, dynamic big data, and cloud-based data warehouses.
However, as big data enters 2013, no system technology is more active than NoSQL databases and the Hadoop framework, and both appear to have further room to grow. According to a 2012 report from marketanalysis.com, the Hadoop-MapReduce market alone is expected to achieve a compound annual growth rate of 58%, reaching $2.2 billion by 2018.
NoSQL and Hadoop emerged primarily to address unstructured data, such as text or web logs. Like Apache Hadoop, these technologies often start as open source and grow into new commercial products.
Judith Hurwitz is president and CEO of Hurwitz & Associates, a Massachusetts-based firm. She believes big data architectures and massively parallel processing are greatly altering the data landscape. "Before this, even when data really mattered to a company, people were not able to collect huge amounts of data and analyze it in real time," she said.
"Now, things that once seemed impractical are becoming practical," she said. "This has taken data out of its comfort zone."
SQL takes a hit, but a comeback is imminent
On TechTarget's website, predictions of the plight of mainstream relational databases began appearing at the start of 2012, and part of that prophecy has come true. After years of fending off products billed as its replacement, the SQL relational database now faces (or soon will face) its most intense competition yet for handling large-scale data filtering across the enterprise.
The driving force behind this trend is companies' desire to acquire more unstructured data at a faster rate, so that they can rely more heavily on data-driven decision-making. Conventional approaches are changing to accommodate the best of the new technologies.
These 2012 moves by data-management vendors show the current state of big data and Hadoop's impact on relational data:
IBM continued to buy up small data and analytics companies, albeit fewer than in 2011. The Blue Giant's efforts ranged from incremental improvements (for example, the NoSQL graph store for DB2 10 and InfoSphere Warehouse 10) to the very large PureData System appliance, aimed at making big data ready for business.
Oracle launched a big data appliance at the start of the year. That release was later followed by Oracle NoSQL Database 2.0, which added automatic rebalancing, new application programming interfaces for handling large objects, tighter integration with Oracle Database, and support for direct SQL queries against Oracle NoSQL Database records.
Microsoft previewed Hadoop support for Windows Azure and Windows Server; Teradata released its Aster big data analytics products; and Informatica released a big data edition of its PowerCenter suite, which is said to eliminate the need for hand-coding Hadoop jobs by bringing those programming tasks into the Informatica development environment.
SQL may have taken a blow or two in 2012, but what matters is that it responded positively to the market's challenges. Several companies specializing in the non-mainstream NoSQL and Hadoop space burnished their SQL credentials last year. A typical example is Cloudera, a company built around Hadoop, which expects Impala (a Hadoop software product that supports standard SQL for interactive queries) to increase the level of collaboration between SQL and Hadoop.
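The appeal of a product like Impala is that analysts can query Hadoop-resident data with the same standard SQL they already use against relational databases. A minimal sketch of that idea follows, using Python's built-in sqlite3 module as a toy stand-in for an interactive SQL engine (not an actual Impala cluster); the `web_logs` table and its columns are invented for illustration:

```python
import sqlite3

# Toy in-memory database standing in for a distributed SQL-on-Hadoop engine.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A hypothetical web-log table, the kind of unstructured-turned-tabular
# data the article describes.
cur.execute("CREATE TABLE web_logs (url TEXT, status INTEGER)")
cur.executemany(
    "INSERT INTO web_logs VALUES (?, ?)",
    [("/home", 200), ("/home", 200), ("/about", 200), ("/about", 500)],
)

# The same standard SQL an analyst would type interactively:
cur.execute("""
    SELECT url, COUNT(*) AS hits
    FROM web_logs
    WHERE status = 200
    GROUP BY url
    ORDER BY hits DESC
""")
result = cur.fetchall()
print(result)  # → [('/home', 2), ('/about', 1)]
```

The point is not the engine but the interface: the query above is plain ANSI-style SQL, which is exactly what makes SQL layers over Hadoop attractive to existing database users.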
Big data in transition
Such moves may represent a certain momentum: there are more opportunities to see SQL and NoSQL working together. To some extent, SQL was drowned out in the noisy early years of the big data discussion.
Ronnie Beggs is vice president of SQLstream, a San Francisco-based maker of streaming databases. "In the past few years, because of the big data movement, SQL was no longer on everyone's lips," he said, adding: "Big data and NoSQL have hit the mainstream."
In 2013, he said, we should see significant change, and he cited recent efforts to make NoSQL databases better suited to SQL-style development.
"It is constantly changing," Beggs said. "We'll see the return of SQL in the next year, and it will be the interface to all the big data platforms."
This development, toward the coexistence of the Hadoop framework, NoSQL, and SQL methods, marks a new step in the maturing of big data. In 2013, big data could be transformed from a hot topic into everyday practice.
"I think people are trying to see past the hype of big data to really understand its business value," says Colin White, president and founder of BI Research, based in Ashland, Oregon. "In 2013, I think we'll see excellent cases of people getting business value from big data. The problem is not big data itself, but how you apply it."
Although companies show wide interest in the new technologies, not all of them deploy big data systems to the same extent. On this point, an integration services manager commented at a recent banking event held by TechTarget.
He argued that the banking sector has only dabbled in the basics of big data, not the whole of it. Banks and other sectors see only the large volume of their data, without addressing its unstructured nature, at least for now.
"There are two parts to the meaning of big data," he said. "The first part is that the data is very large; the second is that it is unstructured. Banks clearly fall into the first part. But we are not going to collect tweets, at least not yet. We are still waiting to see how the financial data services market responds."