Treating Experimentation as a Strategic Process

Source: Internet
Author: User
Keywords: Strategic process

I've covered this point in several articles before, but I recently spent four hours discussing it with a client in Europe. It was a pleasant conversation, and it seemed worth writing up. It all started with a simple question I asked the client: when was the last time a mistake turned out to be beneficial to you? The client's response was a very common one: a look of disbelief, followed by asking whether I was jet-lagged.

The answer that the client (and you) should give is this: "In fact, there is a project this week in which we plan to keep looking for more errors." Now, I have to admit this was a deliberately provocative question, and the answer I usually get amounts to "I should do everything possible to avoid mistakes." And that is what led to our discussion...

For clarity, this article does not explore operational systems, operational BI, or the various reports we increasingly rely on. What I want to discuss is experimentation as a corporate asset. I firmly believe that management and decision makers must accept experimentation as a core competency, as important as the enterprise IT architecture and data modeling we currently focus on. I suggest that organizations adopt experimentation as a strategic process, advancing the business by allowing (and even encouraging) mistakes. This tolerance of experimentation stems from the idea that mistakes are often the best way to learn. If you never try anything and never make mistakes when validating analytical assumptions, I can only say that you are not being aggressive enough with your data: not trying new models, and not revisiting data that earlier modeling overlooked, underused, or never touched at all.
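To make the idea of a cheap, mistake-tolerant validation concrete, here is a minimal sketch in Python. All of the numbers and names are invented for illustration; the point is only that a tiny probe of an assumption is inexpensive, and that even a "failed" result is a genuine finding:

```python
from statistics import mean

# Hypothetical conversion rates from a small A/B probe of an
# analytical assumption ("the new variant converts better").
# The figures below are invented for illustration only.
control = [0.12, 0.11, 0.13, 0.12, 0.10]
variant = [0.12, 0.13, 0.11, 0.12, 0.12]

lift = round(mean(variant) - mean(control), 3)

# A near-zero lift is not a failure of the experiment -- it is a
# result: it tells us this variant is not worth a large rollout.
print(lift)  # -> 0.004
```

The cost of this probe is minutes, not months; if the assumption turns out to be wrong, that mistake was the cheapest possible way to learn it.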

Of course, many smart people have already realized this. But in a traditional enterprise (by "traditional" I mean primarily companies outside the Internet domain), we work hard to ensure that every project is highly reliable and predictable, because that is seen as the only way to guarantee that a project is worth more than it costs.

A system that consumes hundreds of terabytes of capacity for trial and error is uncommon. It is equally rare to use an environment to explore mixed information types, combining the structured and unstructured flows within the enterprise.

In most of the work we do, not experimenting is considered reasonable practice. That, in general, is why we are so good at planning. In fact, we have all become experts at planning for success and guarding against diminishing returns, especially with traditional databases. But if everyone is doing this, you cannot build a competitive advantage this way. Now we see a change among the best-performing customers: they embrace experimentation and pursue results that until recently seemed unrealistic or impractical. They explicitly view experimentation and innovation (and the errors that come with them) as necessary steps toward a more granular understanding of their business, toward using data in ways that were once considered too complex at scale, and toward using information types that are difficult to manage with conventional systems.

We can take advantage of the technologies in today's toolkits to start small, integrate data that has never been consolidated before, and express jobs programmatically in ways that are simply impossible with conventional queries. In other words, experimentation has become practical. Big data technology reduces the cost of error, making experimentation not only possible (even on large data sets) but a thoroughly pragmatic approach. For large-scale projects, whose results must last for years and/or require significant human and capital resources, it is perfectly reasonable to spend some time experimenting first.
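As a toy illustration of "starting small" with mixed information types, the sketch below joins a structured table with unstructured text and asks one exploratory question of it. Everything here is hypothetical (the fields, the tickets, the segments); the point is that such a probe can run in memory long before anyone commits to a full-scale integration project:

```python
from collections import Counter

# Hypothetical sample data: a structured table of orders and
# unstructured support-ticket text, keyed by customer_id.
orders = [
    {"customer_id": 1, "segment": "retail", "total": 120.0},
    {"customer_id": 2, "segment": "enterprise", "total": 950.0},
    {"customer_id": 3, "segment": "retail", "total": 35.5},
]
tickets = {
    1: "shipping was late and the box was damaged",
    2: "invoice format does not match our ERP import",
    3: "late delivery again considering another vendor",
}

def words_by_segment(orders, tickets):
    """Join the structured and unstructured sides and count
    ticket words per customer segment -- a cheap first probe."""
    counts = {}
    for order in orders:
        text = tickets.get(order["customer_id"], "")
        seg = counts.setdefault(order["segment"], Counter())
        seg.update(text.lower().split())
    return counts

counts = words_by_segment(orders, tickets)
# In this tiny sample, "late" appears only in retail tickets:
print(counts["retail"]["late"])  # -> 2
```

If the probe surfaces something interesting (here, a pattern of late deliveries in one segment), that is the signal to invest further; if it surfaces nothing, the experiment cost almost nothing.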

This article has something in common with the "Big Data Project primaries" series published here. It also raises questions about the contrast between decentralized experimentation and establishing a Center of Excellence (COE), and about what I call a shared analytics cloud, which treats data, methods, and technologies as shared enterprise assets rather than departmental projects. There is also the question of who carries out the experiments (note that I did not say only data scientists; later articles will cover this in more detail). We leave these topics for future articles.

As always, please comment or contact me by email or letter. I look forward to hearing how your organization conducts trial and error (through whatever approach you prefer), along with the project and cultural issues you encounter.
