Ron Kasabian Refutes the Big Data Bubble Theory: Sifting the Sand for Gold
Source: Internet
Author: User
Keywords: big data, solutions, bubble theory
Svetlana Sicular, a Gartner analyst, wrote on her blog that big data has passed the "Peak of Inflated Expectations" on Gartner's Hype Cycle and is sliding into the "Trough of Disillusionment." (If you are not familiar with the Gartner Hype Cycle, see the explanation on Sicular's blog.)
In my experience with big data, this disillusionment is unwarranted. Big data analytics can create enormous value. Like most work worth doing, big data demands time and effort before we can tap that value. For the past three years, as part of Intel's CIO organization, I have spent much of my time developing business intelligence and analytics solutions. These solutions have helped customers save significant cost and time while dramatically accelerating time to market.
Based on that experience, here are four suggestions to help you see past the disillusionment and extract greater value from big data implementations:
1. Think bigger. Envision a larger, more comprehensive picture of your business activity, and consider how to enrich that picture with as many data sources as possible; this lets you see the whole landscape. Then imagine what infrastructure is needed to support data at that scale, and ask yourself whether the same infrastructure could support 10 times that data, or more.
This is the main goal of the big data project at Oregon Health & Science University (OHSU). The project aims to accelerate the analysis of human genome maps, helping create personalized cancer treatments and supporting many other kinds of scientific innovation. At roughly 1TB of data per patient and a patient population in the millions, OHSU needs to handle an enormous volume of data. To that end, the university is building infrastructure with its technology partners to process the large quantities of data involved in sequencing human genomes and tracking how they change over time. With technological innovation in large-scale data processing, this groundbreaking research could reduce the cost of sequencing a genome to $1,000 per person, which means demand will rise rapidly and the data will keep growing.
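To get a feel for that scale, here is a rough back-of-envelope calculation in Python. The ~1TB-per-patient figure comes from the article; the patient counts are hypothetical round numbers, not OHSU's actual projections.

    # Rough capacity estimate for genome-scale storage (illustrative only).
    # ~1 TB/patient is from the article; patient counts are hypothetical.
    TB_PER_PATIENT = 1.0

    for patients in (100_000, 1_000_000, 10_000_000):
        total_pb = patients * TB_PER_PATIENT / 1024  # terabytes -> petabytes
        print(f"{patients:>10,} patients -> {total_pb:>8,.0f} PB raw (before replication)")

At a million patients, the raw sequence data alone approaches an exabyte, before any replication or derived datasets, which is why the question "can this infrastructure handle 10x the data?" matters.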
2. Find business-relevant data. Learn from business leaders what challenges they face, what matters most to them, and what they need to know to expand their business impact. Then search your data to see whether you can help them solve those business problems. This is the core of a big data program currently underway inside Intel, which is intended to tell the sales team when to contact which resellers about which products. In 2012, the project generated roughly $20 million in new revenue and opportunities, and that figure is expected to be higher in 2013.
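As an illustration of turning a sales question into a data problem, here is a minimal sketch of reseller-opportunity scoring. The heuristic, field names, and weights are hypothetical; the article does not describe Intel's actual model.

    # Hypothetical reseller-opportunity scoring (not Intel's actual model).
    # Idea: frequent buyers who have recently gone quiet are good candidates
    # for a sales call about the products they usually order.
    from dataclasses import dataclass

    @dataclass
    class Reseller:
        name: str
        days_since_last_order: int  # recency
        orders_last_year: int       # frequency

    def opportunity_score(r: Reseller) -> float:
        return r.orders_last_year * (r.days_since_last_order / 30.0)

    resellers = [
        Reseller("Alpha Dist.", days_since_last_order=95, orders_last_year=12),
        Reseller("Beta Corp.", days_since_last_order=10, orders_last_year=15),
        Reseller("Gamma LLC", days_since_last_order=200, orders_last_year=2),
    ]
    for r in sorted(resellers, key=opportunity_score, reverse=True):
        print(f"{r.name:12s} score={opportunity_score(r):6.1f}")

In practice such a score would come from a predictive model trained on historical orders, but even a simple ranking like this shows how "when to contact which resellers" becomes a concrete data question.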
3. Stay flexible. We are in a period of rapid innovation and cannot proceed as methodically as an enterprise resource planning (ERP) implementation. From a technical standpoint, be prepared to move to different solutions if necessary. For example, Pecan Street Inc., a nonprofit consortium of universities, technology companies, and utility providers, built a database architecture to collect "smart grid" energy data in Texas; that architecture is now in its third iteration. As smart meters produce ever more detailed data, Pecan Street is looking for new ways to help consumers reduce their energy consumption while helping utilities better manage their grids. To keep up with demand, however, Pecan Street has had to repeatedly replace its infrastructure. The lesson: even if you think you know today which tools you need to build a big data solution, things may look different a year from now. Always be ready to adjust.
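One concrete way to stay flexible is to keep application code decoupled from any particular storage backend, so the backend can be replaced without rewriting everything. The sketch below assumes a simple smart-meter use case; the names are illustrative and do not reflect Pecan Street's actual architecture.

    # Minimal sketch: isolate storage behind an interface so the backend
    # can be swapped as requirements change (hypothetical names throughout).
    from abc import ABC, abstractmethod

    class MeterStore(ABC):
        @abstractmethod
        def write_reading(self, meter_id: str, ts: float, kwh: float) -> None: ...

        @abstractmethod
        def readings_for(self, meter_id: str) -> list[tuple[float, float]]: ...

    class InMemoryStore(MeterStore):
        """Stand-in backend; replace with a time-series or columnar store later."""
        def __init__(self) -> None:
            self._data: dict[str, list[tuple[float, float]]] = {}

        def write_reading(self, meter_id: str, ts: float, kwh: float) -> None:
            self._data.setdefault(meter_id, []).append((ts, kwh))

        def readings_for(self, meter_id: str) -> list[tuple[float, float]]:
            return self._data.get(meter_id, [])

    # Application code depends only on MeterStore, so a third (or fourth)
    # infrastructure iteration does not ripple through the codebase.
    store: MeterStore = InMemoryStore()
    store.write_reading("meter-42", 1700000000.0, 1.25)
    print(store.readings_for("meter-42"))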
4. Connect the dots. At Intel, we recognized the enormous benefit of linking design data with manufacturing data. "Test, redesign, test, redesign" is the heart of our development cycle, and speeding that cycle up creates great value. Our analytics team started with the manufacturing data produced by the department responsible for fabrication and fed it back into the design process. Along the way we realized the standard test flow could be simplified without compromising quality. Using predictive analytics, we streamlined chip design validation and debugging by 25% and shortened processor test time. By making processor testing more efficient, we saved $3 million in 2012 on the testing of one line of Intel Core processors. If the solution continues through 2014, it is expected to save $30 million. (A minimal sketch of this kind of predictive test-trimming follows the conclusion below.)

Yet we are only at the start of learning how to extract significant benefits from big data. In stark contrast to the disillusionment narrative, when we looked comprehensively at large business problems we found many exciting big data possibilities, along with promising ways to increase revenue and profit while improving the efficiency and security of our IT infrastructure. Big data projects can be difficult at first, but they are well worth the time and effort.
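To make point 4 concrete, here is a minimal sketch of predictive test-trimming on synthetic data. It is not Intel's actual method; the features, model, and threshold are hypothetical. The idea is simply that when early measurements predict a later test's outcome with high confidence, that test can be skipped.

    # Hypothetical predictive test-trimming (synthetic data, not Intel's method).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Four early electrical measurements per chip, plus whether the chip
    # later failed the expensive final test (synthetic relationship).
    X = rng.normal(size=(1000, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=1000) > 1.5).astype(int)

    model = LogisticRegression().fit(X, y)

    # Chips with a very low predicted failure probability skip the final test.
    p_fail = model.predict_proba(X)[:, 1]
    skip = p_fail < 0.01
    print(f"Final test skipped for {skip.mean():.0%} of chips")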