If you have doubts about the privacy of running your business applications in a public cloud, you can consider private cloud computing instead. But Accenture cloud lead Joe Tobolski warns that many technology vendors portray cloud computing as something you can simply buy: add a bit of cloud computing powder to your data center and you have an internal cloud. That is not the case.
Tobolski says many leading IT organizations have been working toward an in-house, internally deployed private cloud, starting with data center consolidation; rationalization of operating systems, software, and hardware; and virtualization of servers, storage, and networking.
Tobolski says the guiding principles are flexibility and pay-per-use pricing, which in practice mean the standardization, automation, and commoditization of IT.
Tobolski says this goes beyond provisioning infrastructure and configuring resources: it shapes the entire experience of building applications and delivering them to users.
Despite all these assertions, internal cloud computing is still at an early stage. Only about 5% of the world's large companies are capable of running an internal cloud, according to James Staten, a leading analyst at the market research firm Forrester, and of those, perhaps only half actually have one.
But if you're interested in studying private cloud computing, here's what you need to know.
First step: Standardize, automate, share resources
Forrester's three principles for building an internal cloud are consistent with Accenture's concept of next-generation IT.
To build an internally deployed cloud, you have to have standardized, documented procedures for operating, deploying, and maintaining that cloud environment, Staten said. Most companies are not yet standardized enough, although companies that follow the IT Infrastructure Library (ITIL) are closer to the goal than others. Standardized operating procedures, which bring efficiency and consistency, are the prerequisite for the next layer: automation. You have to become a confident, first-rate user of automation technology, and this is usually a big hurdle for most businesses.
Automating deployment can be the best starting point because it enables self-service capabilities. In a private cloud, though, self-service does not mean the Amazon-style model in which any developer can deploy virtual machines at will. That would create chaos in the business, Staten said, and it is totally unrealistic.
On the contrary, in a private cloud, self-service means the business has established an automated workflow in which each resource request is subject to an approval process.
Once a request is approved, the cloud platform automatically deploys the specified environment, Staten said. More often, private cloud self-service means allowing developers to request, say, three virtual machines of a given size, a certain amount of storage, and a certain amount of bandwidth. For end users looking for resources in the company's cloud, self-service may be "I need a SharePoint volume or a file share."
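The approval-gated workflow Staten describes can be sketched in a few lines of code. This is only an illustration of the pattern, not any vendor's product; all class names, fields, and the auto-approval policy below are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ResourceRequest:
    """A self-service request of the kind Staten describes:
    N virtual machines of a size, plus storage."""
    requester: str
    vms: int
    vm_size: str
    storage_gb: int
    approved: bool = False

class SelfServicePortal:
    """Minimal approval-gated self-service workflow: requests queue up
    for approval, and only approved requests are auto-provisioned."""
    def __init__(self):
        self.pending = []
        self.provisioned = []

    def submit(self, request):
        self.pending.append(request)

    def approve_and_provision(self, policy):
        """Run each pending request through the approval policy;
        auto-deploy the approved ones, keep the rest pending review."""
        still_pending = []
        for req in self.pending:
            if policy(req):
                req.approved = True
                self.provisioned.append(req)  # stand-in for real deployment
            else:
                still_pending.append(req)
        self.pending = still_pending

# Example policy: auto-approve small requests, hold large ones for review.
portal = SelfServicePortal()
portal.submit(ResourceRequest("dev-team-a", vms=3, vm_size="medium", storage_gb=500))
portal.submit(ResourceRequest("dev-team-b", vms=40, vm_size="xlarge", storage_gb=20000))
portal.approve_and_provision(lambda r: r.vms <= 10)
print([r.requester for r in portal.provisioned])  # ['dev-team-a']
print([r.requester for r in portal.pending])      # ['dev-team-b']
```

The point of the sketch is the shape of the workflow: the developer never touches the hypervisor directly; the portal sits between request and deployment, and policy decides what flows through automatically.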
Third, creating an internal cloud means sharing resources, Staten said, and this is usually where companies fall off the list. It is not a technical issue but an organizational one. The marketing department does not want to share a server with the human resources department, and the finance department does not want to share with anyone. "When you think about it that way, it's hard to run a cloud. When resources are not shared, cloud computing is very inefficient," says Staten.
Facing this challenge, IT manager Marcos Athanasoulis of Harvard Medical School in Boston devised an innovative approach that makes participants comfortable sharing resources in his Linux-based cloud infrastructure. He describes it as a "contribute the hardware" model.
Harvard Medical School is what Athanasoulis calls a place with 1,000 CIOs, which gives the IT department a unique challenge: it has no authority to tell a lab what technology to use, and it operates under a number of restrictions. If a lab wants to deploy its own infrastructure, it is free to do so. So when Harvard Medical School approached the concept of cloud computing four years ago, it settled on a model in which capacity is provided in a shared way, paid for and subsidized by the school. That way, people with modest demands can get the resources they need for their research. If the school could not provide a suitable alternative, building their own high-performance computing or cloud environments would remain attractive to the labs.
Under this approach, Athanasoulis says, if a lab buys 100 nodes in the cloud, it is guaranteed the use of that capacity; when the capacity sits idle, other workloads can use it. "We said to them: you own this hardware. But if you let us integrate it into the cloud, we will manage it for you and keep it updated and patched. And if you don't like the way the cloud works, you can take your hardware away." It turned out to be a good selling point, he added. In four years, no one has left the cloud.
To support this contributed-hardware method, Harvard Medical School uses Platform LSF workload automation software from Platform Computing, Athanasoulis said. The tool lets the school set up job queues and preempt (terminate) jobs running on a contributed hardware node, so the hardware's owners are guaranteed access to it. Preempted jobs are requeued and resumed later.
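The scheduling policy Athanasoulis describes (owners guaranteed their own nodes, guests allowed on idle capacity but preempted when the owner shows up) can be modeled in a short sketch. This is a toy Python model of the concept, not how Platform LSF is implemented; the class, node names, and job names are all invented for illustration.

```python
from collections import deque

class SharedCluster:
    """Toy model of owner-priority scheduling on contributed nodes:
    guest jobs may use idle nodes, but an owner's job preempts them
    and the preempted job goes back on the queue to resume later."""
    def __init__(self, node_owners):
        self.node_owners = node_owners  # node name -> owning lab
        self.running = {}               # node name -> (user, job)
        self.queue = deque()

    def submit(self, user, job):
        self.queue.append((user, job))
        self._schedule()

    def _schedule(self):
        pending = deque()
        while self.queue:
            user, job = self.queue.popleft()
            node = self._find_node(user)
            if node is None:
                pending.append((user, job))       # wait for capacity
            else:
                if node in self.running:          # owner reclaiming: preempt
                    pending.append(self.running[node])
                self.running[node] = (user, job)
        self.queue = pending

    def _find_node(self, user):
        # Anyone may take an idle node; owners may also reclaim
        # their own node from a guest job.
        for node in self.node_owners:
            if node not in self.running:
                return node
        for node, owner in self.node_owners.items():
            if owner == user and self.running[node][0] != user:
                return node
        return None

cluster = SharedCluster({"node1": "lab_a"})
cluster.submit("guest", "genome-scan")    # node idle, guest job runs
cluster.submit("lab_a", "protein-fold")   # owner reclaims node, guest requeued
print(cluster.running["node1"])  # ('lab_a', 'protein-fold')
print(list(cluster.queue))       # [('guest', 'genome-scan')]
```

The essential property is the one Athanasoulis sells to the labs: contributing hardware never costs the owner access to it, while idle cycles still serve the rest of the community.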
Do not implement before understanding your services
If cloud computing is inefficient when resources are not shared, it is meaningless when the services it will deliver have not been considered first. At IBM, for example, every potential cloud project starts with an assessment, said Fausto Bernardini, the IT strategy and architecture manager for IBM's cloud computing portfolio: the company evaluates the benefits, costs, and risks of migrating different types of workloads to different cloud computing models.
Whether a workload is best suited to a private, public, or hybrid cloud model depends on a number of attributes, including key ones such as compliance and security, as well as the latency requirements and interdependencies of application components.
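An assessment along the attributes just named can be sketched as a simple decision rubric. This is not IBM's actual methodology; the field names, thresholds, and example workloads below are all invented to illustrate the idea.

```python
# Hypothetical rubric for matching a workload to a cloud model, based on
# the attributes named above: compliance, security, latency, and coupling.
def recommend_model(workload):
    if workload["strict_compliance"] or workload["sensitive_data"]:
        return "private"   # regulated or sensitive data stays in-house
    if workload["latency_budget_ms"] < 10 or workload["tight_dependencies"] > 5:
        # Latency-critical or tightly coupled components stay close to home,
        # while less coupled tiers could still burst out: a hybrid fit.
        return "hybrid"
    return "public"

batch_analytics = {
    "strict_compliance": False,
    "sensitive_data": False,
    "latency_budget_ms": 500,
    "tight_dependencies": 1,
}
payroll = {
    "strict_compliance": True,
    "sensitive_data": True,
    "latency_budget_ms": 200,
    "tight_dependencies": 3,
}
print(recommend_model(batch_analytics))  # public
print(recommend_model(payroll))          # private
```

A real assessment weighs far more attributes (and the cost side Bernardini mentions), but the structure is the same: score each workload on the attributes, then map the score to a deployment model.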
Gartner senior analyst Tom Bittman says many companies approach building a private cloud from a product perspective, without considering services and service requirements. If you really want to build a private cloud, you need to know what your services are, and what the service-level agreement, roadmap, and cost are for each. That understanding tells you whether those services should evolve toward a cloud computing model.
A generic service with a relatively stable interface is a candidate for private or public cloud computing, even if your business relies heavily on it, Bittman said. E-mail is an example.
"E-mail usage is high, but it's not intertwined with the way my company works internally," Bittman said. That kind of service is moving toward a well-defined interface and independence. "I don't want it tightly coupled to the company. I want it as separate as possible, easy to consume, and available from a self-service interface. If I have customized that service over time, I'll want to undo the customization and make it as standard as possible."
In contrast, a service that defines the business and has been a focus of technology innovation is not a candidate for cloud computing. The goal for such services is tight coupling and integration; they will never be delivered as a cloud service. They may use cloud computing at a lower layer, for raw computation, but their interface to the company will not follow the cloud model.
Once you understand which services are suited to cloud computing and how soon they could be ready for a public cloud, you're ready to build the business justification and start looking at a private cloud from a technical perspective, Bittman says.