For the past few decades, most IT departments have followed a similar path: computing began on highly centralized systems (large mainframes, for example) and then, like the Big Bang, computing resources exploded outward into a degree of decentralization never seen before. In an active market, this dispersion made sense. Computing platforms were upgraded rapidly, and a single mainframe carried high maintenance costs, so it was often better to invest in many lower-cost devices while the mainframe depreciated and its efficiency declined (measured, for example, in MIPS per dollar per year).
Over time, though, it has been natural for computing to gradually concentrate again; indeed, many of the technologies invented in recent years have been driving this trend forward.
Given how much decentralization has occurred, the logic would be that storage is next in line to become more and more dispersed. In practice, storage sits in a balance between centralization and decentralization. Storage plays a special role: data needs to be concentrated enough to be stored and managed easily, yet dispersed enough for customers to use it efficiently. That is where things stand today. But the situation is changing, and the change that will reshape how we manage IT from a security perspective is "big data."
What is "Big data"
The emerging "big data"-the logical derivative of the increased use of virtual technology, cloud computing and data centers. These technologies are characterized by high cost and efficiency. And they can be in the standardization of computing resources, integration and centralization of leverage to achieve http://www.aliyun.com/zixun/aggregation/13746.html "> economies of scale, but also to help the realization of cost-effectiveness." But when the enterprise adopted technology such as centralized storage, it found that it produced a lot of data, in some cases, even the EB level. What is the level of EB in the end? Since the history records, the amount of information produced by mankind is about 5EB.
People of insight, such as some of the most observant engineers and scientists at social networking companies, noticed that when a large amount of data is concentrated in one place, opportunities arise to use that data to generate higher returns. This seems to be an unexpected benefit of big data: as data volumes snowball, so do the chances to extract value from them. For the enterprise this is transformative; it gives us a deeper understanding of our customers, how they use our services, and how our business operates overall.
Of course, for those of us focused on security, this shift undoubtedly changes the whole landscape. From a security perspective, the impact cuts both ways. For example, storing all of the data in one place makes it easier to protect, but it also helps attackers, because the target becomes far more alluring. Exploring every pro and con of big data from a security standpoint would take a long time, but as the transition progresses, the rules of data security will change.
Why? Because data volume is growing non-linearly, and most companies do not have tools or processes designed to cope with non-linear growth. As data volumes keep climbing, we will see traditional tools, security tools in particular, fade from the scene (the process has already begun) because they are no longer as useful as they once were.
So companies that want to plan ahead of this change need to think clearly about how to avoid merely reacting to it, how to avoid "being led by the nose." If you are planning to switch to natural gas, you are not going to stockpile a pile of coal, are you? It pays to watch where the industry is heading.
The tools and processes you use matter
Some people may immediately object: "What does it matter?" or "Why should the size of the data affect my security tools?" Take a moment to think about which tools your systems rely on to stay secure, and then consider how many of those tools assume they only need to search or transform a limited amount of data.
Consider further how difficult it already is to scan for malware across a large network-attached storage array or SAN. How long would it take if the data grew 1,000 times? 100,000 times? Could you still scan everything once a day?
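As a rough illustration (a back-of-envelope sketch with assumed throughput and baseline numbers, not figures from the article), the arithmetic below shows why a daily full scan stops being feasible once data volume multiplies:

```python
# Back-of-envelope estimate: how long a full malware scan takes as data grows.
# The 500 MB/s throughput and 10 TB baseline are illustrative assumptions only.

SCAN_THROUGHPUT_MB_S = 500        # assumed effective scan rate
BASELINE_TB = 10                  # assumed starting data set size

def scan_hours(data_tb: float, throughput_mb_s: float = SCAN_THROUGHPUT_MB_S) -> float:
    """Return hours needed to scan `data_tb` terabytes at the given throughput."""
    megabytes = data_tb * 1_000_000            # 1 TB ~= 1,000,000 MB (decimal units)
    return megabytes / throughput_mb_s / 3600

for factor in (1, 1_000, 100_000):
    size_tb = BASELINE_TB * factor
    hours = scan_hours(size_tb)
    print(f"{size_tb:>12,} TB -> {hours:>14,.1f} hours (~{hours / 24:,.1f} days)")
```

At the assumed rate, the baseline scan finishes in a few hours, but the 1,000x case already takes months; no amount of "scan faster" tuning closes a gap that wide.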
What about data loss prevention (DLP) or compliance scanning of that data? For example, when a PCI auditor needs to search the cardholder data environment (CDE) for stored credit card numbers, what happens when the CDE reaches the EB level? The search itself is hard enough, let alone manually confirming gigabytes of false positives after the scan. Neither operation remains realistic if we keep doing things the old way.
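To make the false-positive problem concrete, here is a minimal sketch (not the auditor's actual tooling, and far cruder than a real DLP product) of the kind of pattern match such a scan performs; any digit run that happens to pass the Luhn check is still only a candidate that a human must confirm, which is what stops scaling once the matches pile up to gigabytes:

```python
import re

# Matches 13-16 digit sequences, optionally separated by spaces or dashes.
# Real DLP products use far more refined patterns; this is illustrative only.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: filters out many, but not all, non-card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_candidates(text: str):
    """Yield strings that look like card numbers and pass the Luhn check."""
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_ok(digits):
            yield digits          # still only a *candidate*: needs manual review

sample = "Order 4111 1111 1111 1111 shipped; invoice id 1234567890123."
print(list(find_candidates(sample)))   # -> ['4111111111111111']
```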
In many technical scenarios, the size of the data affects whether security controls, or the operations that support them, can function properly. Consider, for example, the log analysis, file monitoring, and encryption/decryption operations needed to keep stored and file-based data trustworthy and under control. All of these are data-processing functions, and for them to keep working they must be scaled up. New tools (new database designs, for example) are already being built so that scanning in the big-data world stays as easy as it used to be, and the security tools we use must innovate in the same way to meet the new challenge.
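As one concrete example of such a data-processing control, the sketch below (with a hypothetical directory path and no product in mind) shows a simple file integrity check: it has to read every byte it protects, so its runtime grows in direct proportion to the data it covers.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so memory use stays flat regardless of file size."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(root: Path) -> dict:
    """Record a hash for every file under `root` (the expensive, data-bound step)."""
    return {str(p): sha256_of(p) for p in root.rglob("*") if p.is_file()}

def compare(baseline: dict, root: Path) -> list:
    """Return paths whose current hash no longer matches the baseline."""
    current = build_baseline(root)
    return [p for p, digest in baseline.items() if current.get(p) != digest]

if __name__ == "__main__":
    monitored = Path("/data/monitored")          # hypothetical directory to watch
    baseline = build_baseline(monitored)
    Path("baseline.json").write_text(json.dumps(baseline))
    print("Changed or missing files:", compare(baseline, monitored))
```

Every full pass reads every monitored byte, which is exactly why controls like this strain once volumes grow by orders of magnitude.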
Of course, this change will not happen overnight, but for security professionals it is time to start thinking about it, ideally before buying new tools. Data has begun to grow geometrically, so a new tool built around linear data scanning is unlikely to be the best answer, and at the very least that poses some awkward questions for vendors. Instead, the shift may accelerate the adoption of operations such as file encryption, whose use has been gradually climbing. Encrypting exabytes of data in one pass may not be easy, but what if the data were encrypted in time, before the massive growth occurred? That would be a very different proposition.
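A minimal sketch of that encrypt-before-the-growth idea (using the third-party `cryptography` package and hypothetical paths; an illustration of the approach, not the article's prescribed implementation) encrypts each file the moment it enters the store, so the cost is spread over time rather than paid in one enormous retroactive pass:

```python
from pathlib import Path
from cryptography.fernet import Fernet   # pip install cryptography

# In practice the key would live in a KMS/HSM; generating it inline is for illustration.
KEY = Fernet.generate_key()
cipher = Fernet(KEY)

def ingest(plaintext_path: Path, store_dir: Path) -> Path:
    """Encrypt a single incoming file and write it into the data store.

    Encrypting per file at ingest time spreads the work over the data's lifetime,
    instead of requiring one huge pass over exabytes of already-accumulated data.
    """
    store_dir.mkdir(parents=True, exist_ok=True)
    token = cipher.encrypt(plaintext_path.read_bytes())
    out_path = store_dir / (plaintext_path.name + ".enc")
    out_path.write_bytes(token)
    return out_path

def read_back(encrypted_path: Path) -> bytes:
    """Decrypt a stored file on demand."""
    return cipher.decrypt(encrypted_path.read_bytes())
```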
Luckily, we still have time. There is still time to adjust our operations and controls before the problem becomes intractable. But consider how quickly virtualization took hold; this problem may arrive sooner than we expect, so it makes sense to take the time to think it through now.
About the author: Ed Moyle is a senior security strategist at Savvis and a founder of Security Curve, which provides customers with strategy, consulting, and solutions. He has extensive experience in embedded development and testing, information security auditing, and security solution development.
Original content from TechTarget China. Original link: http://www.searchstorage.com.cn/showcontent_52138.htm
(Responsible editor: Lu Guang)