For years I've worried that the open source movement might fall prey to the fate Kim Stanley Robinson captured so eloquently in Green Mars: "History is a wave that moves through time slightly faster than we do." Innovators get left behind while the world they changed runs off with their ideas in unexpected directions.
In "The Open Source Paradigm Shift" and "What Is Web 2.0" I argued that the internet, a proprietary platform built largely on open source software, might by its very success create a new kind of lock-in in the era of cloud computing. Free and open source licenses are grounded in software distribution: what do they mean when software no longer needs to be distributed at all, but simply runs on a networked platform? How do we preserve the freedom to innovate when online companies build their competitive advantage not from code but from the enormous databases their users create? The more users such a company has, the stronger that advantage becomes, until latecomers face an insurmountable barrier to entry.
This year's open source convention cheered me up. Open source activity around Web 2.0 and cloud computing has been proliferating over the past few years, and I saw clear signs that the community is rethinking the concepts of open source for the internet era. Sessions like "Beyond REST? Building Data Services with XMPP PubSub," "Cloud Computing with BigData," "Hypertable: An Open Source, High Performance, Scalable Database," "Supporting the Open Web," and "Processing Big Data with Hadoop and EC2" were packed. (Because of fire-code limits at the Portland Convention Center, many sessions couldn't admit everyone who wanted in. Brian Aker's talk on Drizzle was so popular that he ended up giving it three times!)
But cloud computing itself isn't the point. The point is to figure out, all over again, how to keep software open in this new environment. It's important to recognize that the success of open source rests on several key elements:
1. Licenses that permit and encourage redistribution, modification, and even forking;
2. An architecture that lets programs be reused as components wherever possible, and extended rather than replaced to provide new functionality;
3. A low barrier to entry that makes it easy for new users to try the software;
4. A low barrier to entry for developers to build new applications and share them with others.
This list is far from comprehensive, but it gives us something to think with. As I suggested above, I don't believe we've yet found a license that guarantees the right to fork Web 2.0 and cloud applications, especially when the lock-in comes from data rather than from closed code. Still, there are signs (Yahoo! BOSS, for example) that companies are beginning to understand that in the era of cloud computing, open source code is only half of the story; open data is the other half.
But open data is fundamentally challenged by the economics of cloud computing. Jesse Vincent, who has produced some of the best hacker t-shirts in history (as well as RT), put it baldly: "Web 2.0 is digital sharecropping." (I checked: Nick Carr coined that idea back in 2006.) If this is true of the successful Web 2.0 companies, it's even more true of cloud infrastructure. I remember Debra Chrapaty, vice president of Microsoft's Windows Live, saying that in the future, being a developer on someone's platform will mean being rooted in their infrastructure. The New York Times has called the bandwidth providers OPEC 2.0. How much more will that apply to the cloud platforms?
That's why I'm increasingly sympathetic to peer-to-peer approaches to distributing internet applications. At the open source convention, Jesse Vincent's talk "Prophet: Your Path Out of the Cloud" described a federated sync system, and Evan Prodromou's "Open Source Microblogging" introduced Identi.ca, a federated, open source lifestream application.
We can talk all we want about open data and open services, but it's even more important to be honest about how much of what's possible is dictated by the architecture of the systems everyone uses. Consider, for example, why the PC gave rise to a binary freeware industry while Unix created an open source ecosystem. This isn't just a matter of ideology: the diversity of Unix hardware architectures meant that source code was required so users could compile applications on their own machines. Or why did the web produce so many independent information providers, while centralized services like AOL and MSN stumbled?
Note well: every platform-as-a-service (from Amazon's S3 and EC2 and Google's App Engine to Salesforce's force.com, not to mention Facebook's social networking platform) has more in common with AOL than with the internet services I've known for the past fifteen years. Are we going to spend the next ten years drifting back toward the centralized model? An interoperable internet should be a platform, not one vendor's walled garden. (Neil McAllister has described just how one-sided most platform-as-a-service contracts are.)
So here's my first piece of advice: if you care about open source in the age of cloud computing, build your projects on services that are designed to be federated rather than centrally controlled. Architecture trumps licensing every time.
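The federated pattern behind this advice can be sketched in a few lines: the client resolves each user's home node from the handle itself, so no single vendor's server is hard-wired in. Everything below (the handle format, the URL scheme) is purely illustrative, not a real protocol.

```python
# A minimal sketch of federation: any node speaking the same
# (hypothetical) protocol can host a user, and clients discover the
# node from the handle rather than assuming one central service.

def resolve_feed_url(handle: str) -> str:
    """Map a federated handle like 'alice@identi.ca' to the feed URL
    on that user's own home server (illustrative URL scheme)."""
    user, _, host = handle.partition("@")
    if not user or not host:
        raise ValueError(f"not a federated handle: {handle!r}")
    # The client never needs to know which vendor operates the node.
    return f"https://{host}/users/{user}/feed"

# A client can follow people across independently operated nodes:
following = ["alice@identi.ca", "bob@example.org"]
feed_urls = [resolve_feed_url(h) for h in following]
```

The design choice is the point: because the service location comes from the user's identifier, no one operator can become a choke point.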
Even more important than peer-to-peer architecture, though, are open standards and protocols. When services are required to interoperate, competition is preserved. Whatever Microsoft and Netscape hoped to gain by controlling the web during the browser wars, they failed in part because Apache held firm to open standards. That's why the Open Web Foundation, announced at the convention last week, matters so much. We need to ensure that the web runs not only on open source software but on open standards that keep the dominant vendors honest.
I expect the "internet operating system" that will emerge over the next few years to require developers to stop treating applications as endpoints and start treating them as components. Why, for example, should every application have to build its own social network? Shouldn't social networking be a system service?
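To make the "system service" idea concrete, here is one way a shared social-graph service might look to application developers: a small common interface that any provider could implement and any application could consume. The interface and names are my own invention for illustration; no such standard existed.

```python
# Hypothetical sketch: if the social graph were a system service,
# apps would code against one interface instead of each rebuilding
# its own network.
from typing import Protocol


class SocialGraph(Protocol):
    def friends_of(self, user_id: str) -> list[str]: ...
    def are_connected(self, a: str, b: str) -> bool: ...


class InMemoryGraph:
    """A toy provider; an application could swap in any implementation
    of the same interface, local or remote."""

    def __init__(self) -> None:
        self._edges: dict[str, set[str]] = {}

    def connect(self, a: str, b: str) -> None:
        self._edges.setdefault(a, set()).add(b)
        self._edges.setdefault(b, set()).add(a)

    def friends_of(self, user_id: str) -> list[str]:
        return sorted(self._edges.get(user_id, set()))

    def are_connected(self, a: str, b: str) -> bool:
        return b in self._edges.get(a, set())
```

An application that only depends on `SocialGraph` can run against any provider, which is exactly the reusability a system service is supposed to deliver.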
This isn't a moral appeal; it's strategic advice. The first vendor in any category to provide a properly open, reusable system service will grow quickly. There's a lot of focus on low-level platform subsystems like storage and computation, but I've long believed that many of the key subsystems of this emerging operating system will be data subsystems: identity, location, payment, product catalogs, music, and so on. Eventually these subsystems will need to be open and interoperable enough that developers can build data-intensive applications directly, without having to assemble all the data themselves. John Musser calls this the programmable web.
Note that I said "properly open." Google Maps certainly isn't open source, but it was open enough (compared with every web mapping service before it) to become a key component of a new generation of applications that no longer need to own their own map data. A roundup on programmableweb.com shows Google Maps behind almost 90% of all map mashups. Google Maps is proprietary, but it is reusable. A key test of whether an API is open is whether it can support services that are not built on the API provider's own site and can be distributed across the web. Facebook's API fosters applications that live on Facebook; Google Maps is a true programmable-web subsystem.
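A mashup of the kind counted on programmableweb.com boils down to this: the application keeps its own data and composes it with a third-party map service through a documented URL scheme. The endpoint below (`maps.example.com`) is hypothetical, standing in for any embeddable map API; the point is that the map is just a reusable component.

```python
# Toy illustration of "proprietary but reusable": annotate our own
# data set onto someone else's map service via URL parameters.
from urllib.parse import urlencode


def map_embed_url(lat: float, lon: float, zoom: int = 12) -> str:
    """Build an embed URL for a hypothetical open map service."""
    query = urlencode({"center": f"{lat},{lon}", "zoom": zoom})
    return f"https://maps.example.com/embed?{query}"


# The mashup's own data, which never leaves the application's side:
cafes = [("Stumptown", 45.522, -122.675), ("Blue Bottle", 37.776, -122.423)]
embeds = [(name, map_embed_url(lat, lon)) for name, lat, lon in cafes]
```

Because the contract is just URLs and parameters, the resulting pages can live anywhere on the web, which is precisely the openness test described above.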
So even when the cloud platform itself is proprietary, the software running on it need not be. RightScale's Thorsten von Eicken, in his talk on scaling into the cloud, pointed out that almost all the software deployed on cloud platforms is open source, for a simple reason: proprietary software licenses don't contemplate cloud-style deployment. While open source licenses don't prevent cloud providers from locking users in, they do at least let developers deploy their software in the cloud.
It's also worth noting that even proprietary cloud platforms deliver one of open source's key benefits: a lower barrier to entry. Derek Gottfrid's talk "Processing Big Data with Hadoop and EC2" demonstrated this beautifully. Derek described how, armed with little more than a credit card, permission, and his own hacker skills, he put the New York Times' historical archive online for free access. Open source exists to encourage innovation and reuse; Web 2.0 and cloud computing can serve the same goals.
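For readers who haven't met Hadoop, the MapReduce programming model behind projects like Derek's can be sketched in plain Python, so the shape of the computation is visible without a cluster. A real Hadoop job would express the same map and reduce steps in Hadoop's own API and fan them out across EC2 nodes; this is only an illustration of the model.

```python
# Word count, the canonical MapReduce example, in miniature.
from collections import defaultdict


def map_phase(doc: str):
    """Map step: emit a (word, 1) pair for every word in a document."""
    for word in doc.lower().split():
        yield (word, 1)


def reduce_phase(pairs):
    """Reduce step: sum the counts for each word across all documents."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)


docs = ["the times archive", "the archive"]
pairs = [p for d in docs for p in map_phase(d)]
word_counts = reduce_phase(pairs)  # {'the': 2, 'times': 1, 'archive': 2}
```

The appeal of the model is that the map and reduce steps are independent per record, so the framework can distribute them across as many rented machines as a credit card will cover.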
Another benefit of open source, try-before-you-buy viral marketing, may also be available to cloud vendors. At a venture capital event I asked how companies can avoid the high cost of sales (especially in enterprise software). Open source solves the problem by building a pipeline of free users to whom the company can then sell follow-on services. The cloud computing answer isn't quite as good, but there is one: small applications run free, and heavy usage is charged. This business model loses some of the virality and shifts costs from the end user to the application provider, but it shares open source's advantage of providing a strong funnel toward paid service upgrades. Only time will tell whether open source or cloud computing is the better distribution model, but it's clear that both are far ahead of traditional proprietary software.
To sum up: we're a long way from having all the answers, but we're moving in the right direction. Whatever potential for lock-in we see in Web 2.0 and cloud computing, I believe the benefits of openness and interoperability will ultimately prevail, and we'll see a system of cooperating programs that don't all belong to one company: an internet operating system that, like Linux on the PC architecture, is assembled from countless pieces of software. Skeptics of the internet operating system idea say it lacks the control layer of a real operating system. I remind them that much of the software in a Linux system today existed before Linus wrote his kernel. Just as Los Angeles was once called "seventy-two suburbs in search of a city," today's web is seventy-two subsystems in search of an operating system kernel. When we finally find that kernel, it had better be open source.