The big data era arrives: giants tackle the data-intensive computing dilemma

Source: Internet
Author: User
Keywords: data-intensive computing, massive data

After the Internet and cloud computing, "big data" has quickly become the hot technology concept that the market and users scramble to discuss. So what is big data? According to the research firm IDC, a technology that claims to be a big data technology must meet the three "V"s described by IBM: variety, volume, and velocity. Variety means the data should include both structured and unstructured data; volume means the amount of data aggregated for analysis must be very large; and velocity means the data must be processed quickly.

In 2011, the concept of "big data" gained wide popularity. IDC's Digital Universe study, published in June 2011, reported that the global volume of data had reached 1.8 ZB in 2011, a fivefold increase over the past five years, and projected that it would reach nearly 8 ZB by 2015. Entering 2012, the growth of big data shows no sign of slowing, and manufacturers, governments, retailers, financial firms and many other organizations around the world have been plunged into the plight of a "data explosion."

This is especially true in the Internet and telecommunications industries. With the continuous, rapid innovation of the mobile Internet, massive data keeps pouring in and new data forms keep emerging. Today's data is no longer purely structured; it is mixed with office documents, text, pictures, web pages, reports, audio, video and many other kinds of unstructured data, which brings new challenges to traditional data processing.

With the rapid growth of data volume and the rising demand for data-processing capability, the processing of massive data is attracting more and more attention. In fields such as finance and telecommunications, large amounts of user data must be analyzed in order to make the corresponding decisions. Massive data processing systems for storing and processing Internet data have therefore begun to develop into data-intensive computing systems.

Features of data-intensive computing systems

Data-intensive computing systems must not only store data on a large scale, but also perform complex computation and analysis on that data. As the demand for data-intensive large-scale computing grows, so does the attention paid to such systems. Unlike existing distributed computing or high-performance computing, the characteristics of data-intensive large-scale computing can be summed up in two aspects:

Massive datasets: usually at the PB level. This means that, for a single computing task, the time taken to fetch the required data would be intolerable (a rough calculation after this list illustrates the point). This is completely different from previous computing systems and poses new challenges to the design and implementation of data-intensive large-scale computing systems.

Complex computing process: simply splitting the data into chunks does not meet the needs of data-intensive computing. Even the analysis of Internet data is beginning to take on the complexity of scientific computing, and this computational complexity poses new challenges for locality optimization and data management.
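To make the first point concrete, here is a rough back-of-the-envelope calculation. The dataset size and link speed below are assumed figures chosen for illustration, not numbers from the article:

```python
# Illustrative only: how long it takes to move a PB-scale dataset over a single link.
# The dataset size and link speed are assumed figures, not taken from the article.

PETABYTE = 10**15                        # bytes (decimal)
link_gbps = 10                           # assumed link speed: 10 gigabits per second
link_bytes_per_s = link_gbps * 10**9 / 8

seconds = PETABYTE / link_bytes_per_s
print(f"Moving 1 PB over a {link_gbps} Gb/s link takes roughly "
      f"{seconds / 3600:.0f} hours ({seconds / 86400:.1f} days)")
# Roughly 222 hours, i.e. more than 9 days -- intolerable for a single task.
```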

Since research on data-intensive large-scale computing systems is still in its infancy, their architecture is still being explored, and most research on system structure focuses on how to place the computation as close to the data as possible. However, once the amount of data exceeds 1 PB, traditional storage subsystems struggle to meet the needs of massive data processing, and the bottleneck of data-transfer I/O bandwidth becomes increasingly prominent.
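A minimal sketch of why placing the computation near the data pays off. The node count, per-node data volume, and code size below are hypothetical values, assumed only for illustration:

```python
# Minimal sketch (assumed sizes): network traffic when shipping data to the
# computation versus shipping the computation (the analysis code) to the data.

nodes = 100                          # assumed number of storage nodes
data_per_node = 10 * 10**12          # assumed: 10 TB of local data per node
job_code_size = 50 * 10**3           # assumed: ~50 KB of analysis code

ship_data_to_compute = nodes * data_per_node    # all the raw data crosses the network
ship_compute_to_data = nodes * job_code_size    # only the code crosses the network

print(f"Ship data to compute: {ship_data_to_compute / 10**12:,.0f} TB over the network")
print(f"Ship compute to data: {ship_compute_to_data / 10**6:.0f} MB over the network")
```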

Therefore, the biggest challenge in the architecture of a data-intensive computing system is how to guarantee the I/O bandwidth between the storage system and the computing system while storing such a large amount of data. A massive data processing system exists to process huge volumes of data, so the key is how to organize the storage resources to obtain both a high I/O throughput rate and massive data capacity.
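One common way to obtain both properties is to spread the data across many storage nodes and read them in parallel, so that aggregate throughput scales with the number of nodes. A minimal sketch with assumed per-node figures:

```python
# Minimal sketch (assumed figures): aggregate I/O throughput when a dataset is
# striped across many storage nodes and scanned in parallel.

per_node_mb_s = 200            # assumed sequential read throughput per node (MB/s)
nodes = 500                    # assumed number of storage nodes
dataset_mb = 10**9             # 1 PB expressed in MB (decimal units), as in the text

aggregate_mb_s = per_node_mb_s * nodes
hours = dataset_mb / aggregate_mb_s / 3600

print(f"Aggregate throughput: {aggregate_mb_s / 1000:.0f} GB/s")
print(f"Full parallel scan of 1 PB: about {hours:.1f} hours")
# A single node at 200 MB/s would need roughly 58 days for the same scan.
```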

Mainframes break the I/O bottleneck and harden security

In 2011, IBM put forward the concept of "intelligent computing", which encompasses large-scale data consolidation, optimized systems, and emerging service delivery models such as cloud computing. With the launch of the new zEnterprise 114 mainframe, the zEnterprise family of enterprise-class mainframes realizes a "system of systems" approach that fully implements "intelligent computing".

It is well known that, beyond its RAS (reliability, availability, serviceability) qualities, the mainframe is widely recognized for its ability to handle high-volume I/O workloads. IBM mainframe designs include a number of auxiliary processors that manage the I/O channels, freeing the CPU to work only on data in high-speed memory; each I/O channel can handle many concurrent I/O operations and control thousands of devices. Using mainframes to handle the workloads of large data centers is already a very common scenario.

Compared with x86 servers, mainframes routinely handle thousands of data streams simultaneously while keeping each stream running at high speed. On the software side, IBM offers a high-performance operating system, IBM z/TPF, designed to provide high availability for organizations with demanding, high-volume, real-time transaction processing needs.

In addition, with highly distributed computing, extensive online collaboration, and increasingly heterogeneous IT environments, reliance on data keeps growing, making information security more critical and complex than ever before. As the IT infrastructure becomes more open and diverse, security threats are intensifying and becoming harder to manage.

In terms of security, IBM mainframes have a unique advantage: the System z mainframe has a highly secure design that helps reduce the risk of data compromise in today's distributed, collaborative, multi-platform environments. Security is built into every level of the mainframe architecture, including the processors, operating system, communications, storage, and applications.

Beyond this exceptionally strong security foundation, the mainframe is part of IBM's "Secure by Design" program, under which security is built into the IT infrastructure from the outset. The goal of the program is to help organizations integrate security into their internal service structures and embed it in their business processes and day-to-day operations.

IBM has also carried the "Secure by Design" approach into its software: the IBM Tivoli and IBM Information Management security products for the mainframe support the idea behind Secure by Design and provide solutions for user management, resource protection, and audit and compliance reporting. This enables the mainframe to support and manage today's distributed, multi-platform computing environments and to minimize risk in such mixed environments.
