In a recent interview, Joshua McKenty, chief executive of Piston Cloud, argued that when it comes to private clouds, the best way to combine cloud storage and server performance looks very different from the traditional approach.
"The right model is to have two servers with JBOD, because it restricts you, not any one of them, but is determined by the number of spindles," McKenty said. ”
It works, according to McKenty, because storage attached directly to each server does not funnel through a single gatekeeper the way a filer, or even a SAN switch, does. Instead of 20 servers sharing a total of 10G of bandwidth, he says, each server has its own 10G port, which means you have 200G of aggregate bandwidth to work with. And when you keep multiple copies of a file, even if any single disk is slow, the overall throughput can still far exceed what a more traditional setup can deliver.
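As a back-of-the-envelope sketch of that bandwidth argument (the node count, port speeds, and per-disk throughputs below are illustrative assumptions, not figures from McKenty):

```python
# Rough comparison of a shared filer link vs. per-server JBOD.
# All numbers are illustrative assumptions, not benchmarks.

servers = 20
port_gbps = 10            # each node has its own 10GbE port
filer_link_gbps = 10      # a central filer reachable over one 10GbE link

# Centralized model: every node's storage traffic funnels through one link.
centralized_aggregate_gbps = filer_link_gbps

# Distributed model: each node talks to its own disks over its own port,
# so usable aggregate bandwidth scales with the number of nodes.
distributed_aggregate_gbps = servers * port_gbps

print(f"centralized aggregate: {centralized_aggregate_gbps} Gbit/s")   # 10
print(f"distributed aggregate: {distributed_aggregate_gbps} Gbit/s")   # 200

# Replication angle: with several copies of a file spread across nodes,
# a read can be served by whichever replica's disk is least busy,
# so one slow spindle does not cap overall throughput.
replica_disk_mb_s = [60, 160, 150]   # one struggling disk, two healthy ones (assumed)
print(f"best available replica: {max(replica_disk_mb_s)} MB/s")
```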
The idea is "heresy", so far for the IT department, McKenty says, because they want to use NAS or SAN storage to handle their workloads rather than set storage and compute at each node. He describes how NASA's operations team is trying to replace one of his integrated systems.
"They brought a file manager worth 20,000 dollars and put it at the bottom of the rack." Its fibre Channel, has a switch switch. They cabled Fibre Channel and FCoE everything. And they spent 4 days in the downtime to adjust this thing, they have 20% performance and will eventually get our configuration, "he asserted.
But is this all-JBOD approach to private cloud storage only suited to specialized, NASA-like workloads?
Absolutely not, McKenty said in a follow-up conversation. "We have seen customers use the same architecture for VDI workloads, web hosting, risk analysis and other compute-heavy Monte Carlo simulations in finance."
McKenty's point is that storage density and processor speed are growing much faster than network capacity, and that widening gap is a big part of why the distributed model works so well.
"The speed of light exceeds your infrastructure. The delay in moving data at high speed will never keep up with the data, but we want to do it, "he said." "But if you look at a SAN or NAS architecture, generally all of the storage devices are a single wire at the distal end." ”
Stuck 10 or 15 years in the past?
Needless to say, not everyone agrees with McKenty's take on the problem.
"He said it was kind of like talking about the next 10 or 15 years," said Scott Scott, marketing manager at Bo Ke. "According to him, direct attached storage is the best way to ensure performance, just no customers are dissatisfied with their storage." ”
He also criticized the one-size-fits-all thinking that treats a single architecture as right for nearly every use. A lot of people fall into the trap, he said, of declaring "this is cloud" or "this is big data" and assuming that settles how everything should be built. The reality is that it depends on the workload. "You have to tune the infrastructure to get the most out of your application," he added.
HP Storage product marketing director Shawn Kinne also took issue with McKenty, arguing that the converged compute-and-storage approach makes it harder for users to adapt and scale flexibly as their needs change. At the same time, he pointed out, the idea of keeping multiple copies of a single file to improve access speed runs counter to industry trends.
"If disk hosting is not a problem, it is possible for each customer to complete," Ginnie said. "Corporate It is often very cautious. McKenty's advice, in this regard, is more like a science project. ”
For his part, the Piston Cloud chief executive laughs off such objections and points to cost: distributed storage is obviously far more economical than a dedicated storage system. "It's much cheaper to build the infrastructure out of the hard drives already in each server," he said.