Big Data predicts US election
Clearly, "Big data" does not really care who will be elected to the next president of the United States. But all the data show that political scientists and others are concerned that Obama is more likely to win re-election. This success prediction shows the powerful energy of large data.
The statistical models behind these forecasts have been a hot topic (and even the subject of heated argument) over the past few weeks, led by New York Times FiveThirtyEight blogger and statistician Nate Silver. Silver, who has been at the center of the controversy while on a whirlwind tour promoting his new book, gave Obama a more than 80% chance of winning Tuesday's election (his model later raised that figure to 90.9%). Zeynep Tufekci, a researcher at Princeton University's center for information technology policy, responded swiftly last week: Silver cannot possibly guarantee that Mr. Obama will win the November 6 election, only assign it a high probability, and nothing in his model's output takes partisan politics into account.
Believe it or not, Silver spends all his time building statistical models to predict the outcomes of political elections. He is not the only one doing this, though he is the most famous: academics, prediction markets, amateurs, and others across the United States do the same, each using different data and different methods to assess the likelihood of specific results. With a few exceptions, most of them also predicted that Mr. Obama would win. A back-of-the-envelope sketch of how such forecasts work follows.
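None of these forecasters publish their full internals, but the basic idea behind poll-aggregation models like Silver's can be illustrated in a few lines. The sketch below is a toy Monte Carlo simulation, not Silver's actual model: the state groupings, poll margins, and uncertainty values are all invented for illustration.

```python
import random

# Toy poll-aggregation forecast (NOT Nate Silver's actual model).
# Each entry: electoral votes, polled Democratic margin (points),
# and poll uncertainty (standard deviation). All numbers invented.
STATES = {
    "Safe-D bloc": (217, 12.0, 2.0),
    "Safe-R bloc": (191, -10.0, 2.0),
    "Ohio":        (18, 2.9, 3.0),
    "Florida":     (29, -0.5, 3.0),
    "Virginia":    (13, 1.5, 3.0),
    "Colorado":    (9, 1.0, 3.0),
    "Other swing": (61, 1.0, 3.5),
}

def simulate_once() -> int:
    """Draw one simulated election; return the Democrat's electoral votes."""
    ev = 0
    for votes, margin, sigma in STATES.values():
        # Perturb each state's polled margin by normally distributed error.
        if random.gauss(margin, sigma) > 0:
            ev += votes
    return ev

def win_probability(trials: int = 100_000) -> float:
    """Share of simulated elections reaching the 270-vote threshold."""
    wins = sum(simulate_once() >= 270 for _ in range(trials))
    return wins / trials

if __name__ == "__main__":
    print(f"Simulated P(win) = {win_probability():.3f}")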
He grew Yahoo!'s Hadoop from 20 nodes to 42,000 nodes
[Photo: Eric Baldeschwieler with the Hadoop elephant]
Eric Baldeschwieler, 47, has a deep background in computing. After earning a bachelor's degree in applied mathematics (computer science) at Carnegie Mellon University, he took a master's degree in computer science from the University of California, Berkeley. He became the technology leader for the web search engine at Inktomi, an early second-generation search engine whose clients included Amazon.com, eBay, HotBot, MSN, Overture, Walmart.com, LookSmart, and Excite; through these top portals and destination sites, Inktomi delivered the latest, most relevant search results to more than half of the world's Internet users. When Inktomi was acquired by Yahoo! in 2003, Eric moved to Yahoo! as well, and after two years of effort became chief architect of Yahoo!'s web search in 2005. More remarkably, in 2006 Eric threw himself into Yahoo!'s Apache Hadoop project, growing its 20-node prototype into a 42,000-node service. So when Yahoo! decided to back Apache Hadoop fully and spun out a new company, Hortonworks, in July 2011, Eric was the natural choice as its first CTO. As a veteran technologist, Eric feels the CTO role presents many challenges, but he is optimistic about Hadoop's future: "If you make a little contribution, Hadoop will do wonders." Eric will come to HBTC 2012 and deliver a keynote sharing his experience with Hadoop technology.
TripAdvisor: Use AWS to save 50% on server hosting costs
Let's review TripAdvisor's architecture. TripAdvisor last published its architecture in June 2011. Our business has grown rapidly over the past year; let me summarize what we have achieved:
56 million visitors and 350 million page views per month; a Hadoop cluster holding 120 TB of data and growing fast.
This summer, we recruited 60 student interns from universities, including Luke Massa and Victor Luu, who worked like our full-time engineers and quickly integrated with the team. One question has always haunted me: why use cloud computing? By deploying our services on AWS, Luke Massa and Victor Luu summarize below what they did at TripAdvisor this past summer.
AWS helps businesses cut costs significantly
Running TripAdvisor on AWS
In the summer of 2012, TripAdvisor ran an experimental assessment of migrating all of our products to AWS. We started by bringing up www.tripadvisor.com and all of the international domains in the AWS EC2 environment, and our engineers began with the simplest questions: Is abandoning the hardware we already own and moving to AWS really a good deal? Can the site run intact on AWS? (CSDN note: between blackouts, hurricanes, and other unforeseeable events, AWS has suffered two major outages this year. TripAdvisor has perhaps also considered running OpenStack, an open-source platform that lets businesses set up their own private clouds and is compatible with most of AWS's APIs, on its own servers.)
A few months ago, we started experimenting with cloud computing at close range, and the results were mixed. We learned a great deal from the process, not only about the value AWS offers but also about how to transform our existing managed server cluster. Thanks to AWS's flexibility, we could switch DNS and traffic over to AWS at will, which made it both very practical and a very good learning tool.
Goals
Build the site on EC2 and evaluate it under real production conditions. Build a cost model to confirm that, after the architecture upgrade, moving to AWS would cut spending and improve scalability. Identify the upgrades our existing architecture would need to make the transition.
EC2 expenditure
Expenditure has three main components: instances, EBS, and network. Network traffic for the production environment is assumed to be 200 GB/hour, at a cost of $14.30/hour. As the sketch below shows, instance costs can be expected to dominate the total.
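As a sanity check on the network line item, here is a minimal sketch of the arithmetic. The only inputs are the two figures quoted above; the per-GB rate is simply derived from them rather than taken from an AWS price sheet.

```python
# Back-of-the-envelope EC2 network cost, using only the figures quoted above.
GB_PER_HOUR = 200        # assumed production egress
COST_PER_HOUR = 14.30    # USD/hour, as quoted

implied_rate = COST_PER_HOUR / GB_PER_HOUR   # USD per GB
monthly = COST_PER_HOUR * 24 * 30            # USD per 30-day month
yearly = COST_PER_HOUR * 24 * 365            # USD per year

print(f"Implied rate : ${implied_rate:.4f}/GB")   # ~$0.0715/GB
print(f"Monthly cost : ${monthly:,.2f}")          # ~$10,296
print(f"Yearly cost  : ${yearly:,.2f}")           # ~$125,268
```

At roughly $125,000/year, network traffic is indeed a small slice of a budget on the order of $1.3 million, which is why the instances dominate.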
Actual comparison
Deploying each data center costs approximately $2.2 million, plus $300,000 a year for upgrades and expansion. Capital expenditure (CAPEX) comes to roughly $1 million/year, assuming the initial data center cost is amortized over 3 years. Operating costs (space, power, and bandwidth) are about $300,000/year, for a total of roughly $1.3 million/year per data center. Each data center houses over 200 machines, at about $7,000 per typical server. The arithmetic is sketched below.
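The annual figure comes from amortizing the build-out and adding operating costs. A short sketch of that arithmetic, using only the numbers quoted in the paragraph above:

```python
# Annualized cost of one self-hosted data center, from the figures above.
BUILD_COST = 2_200_000        # USD: initial build-out per data center
UPGRADES_PER_YEAR = 300_000   # USD/year: upgrades and expansion
AMORTIZATION_YEARS = 3        # initial cost spread over 3 years
OPEX_PER_YEAR = 300_000       # USD/year: space, power, bandwidth

capex_per_year = BUILD_COST / AMORTIZATION_YEARS + UPGRADES_PER_YEAR
total_per_year = capex_per_year + OPEX_PER_YEAR

print(f"CAPEX/year : ${capex_per_year:,.0f}")   # ~$1,033,333 (the ~$1M above)
print(f"Total/year : ${total_per_year:,.0f}")   # ~$1,333,333 (the ~$1.3M above)

# Server hardware alone accounts for 200 x $7,000 = $1.4M of the build-out.
print(f"Server hardware: ${200 * 7_000:,}")
```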
If we spend $1.3 million on EC2 under a 1-year reserved contract, we get the following architecture:
550 front-end and back-end instances, 64 cache instances, and 10 database instances, at a cost of $1,486,756.96.
This means 60% more capacity (we currently run 340 front-end and back-end instances, 32 cache instances, and 5 database instances).
A 3-year contract brings a striking discount: the same architecture costs only about $880,000/year. And if we are willing to spend $3.9 million over three years, we get the following architecture:
880 front-end and back-end instances, 64 cache instances, and 20 database instances.
An interesting observation: even this architecture uses only 1,760 cores (2 CPU cores per instance), whereas our current fleet (CSDN note: the traditional server-hosting approach) totals about 3,500 cores. Clearly, there is slack in the current architecture, and it is running inefficiently. The options are summarized in the sketch below.
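Putting the options side by side makes the trade-off concrete. The sketch below simply restates the numbers quoted in this section; it is a summary of the comparison, not a pricing tool.

```python
# Side-by-side restatement of the options discussed above (figures from the text).
OPTIONS = [
    # name,                      FE/BE, cache, DB, annual USD cost
    ("Own data center",            340,  32,   5, 1_300_000),
    ("EC2, 1-yr reserved",         550,  64,  10, 1_486_757),
    ("EC2, 3-yr reserved",         550,  64,  10,   880_000),
    ("EC2, 3-yr reserved, big",    880,  64,  20, 1_300_000),
]

BASE_FE = OPTIONS[0][1]  # current front-end/back-end count as the baseline
for name, fe, cache, db, cost in OPTIONS:
    growth = (fe / BASE_FE - 1) * 100  # ~+62% for 550, ~+159% for 880
    print(f"{name:26s} {fe:4d} FE/BE {cache:3d} cache {db:3d} DB "
          f"${cost:>9,}/yr ({growth:+4.0f}% front-end)")

# Core-count sanity check: 880 EC2 instances x 2 cores = 1,760 cores,
# versus ~3,500 cores in the current self-hosted fleet.
print(f"Big EC2 option: {880 * 2:,} cores vs current {3_500:,}")
```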
Cost Savings Summary
With reserved instances, we calculate that a 1-year contract would cut our annual cost in half. We also no longer need to hold spare capacity for traffic peaks or system backups, which lowers total cost further. Each instance can be sized to actual requirements; today we use only part of each server's capacity. Operations also become more efficient, because we know the instances will always be there to run.
Your phone will become a supercomputer in the future.
Klint Finley of Wired says that within five years, Intel may turn your phone into a supercomputer.
That is the goal of Intel's experimental Single-chip Cloud Computer project, or SCC. The company is currently exploring potential mobile applications for the chip, as well as development tools that would let ordinary developers exploit the technology without being supercomputing experts.
In other words, ARM is trying to put phone chips into our supercomputers, and Intel is doing the opposite. The boundary between mobile hardware and data center hardware is becoming increasingly blurred. That may sound strange, but look at the big picture and you can see what it means.
Appro launches liquid-cooled supercomputer
U.S. high-performance computing vendor Appro has launched the new Xtreme-Cool supercomputer, featuring an energy-efficient design that uses no chillers, cooling instead with warm water through heat exchangers. The company will showcase the system next week at the SC12 event in Salt Lake City.
The Xtreme-Cool supercomputer is built from blades of the kind normally installed in clusters. Liquid cooling loops attached to the nodes connect to a coolant distribution unit (CDU) through piping with drip-free quick connectors. A leak detection and prevention system is integrated as an additional safeguard, and the system also provides integrated remote power control as well as temperature monitoring and reporting.
"Appro's new Xtreme-cool supercomputer is aimed squarely at the global High-performance Computing market, which reached a record $10.3 billion trillion in 2011, with IDC predicting more than 14 billion dollars by 2016," said Earl Joseph, vice president of IDC HPC Project , "Appro's new products are designed to meet customer needs, such as warm liquid cooling heat exchanger technology in less or no air-conditioned data centers, which directly cools computing processors and memory that combine power and temperature monitoring software. This could increase the cost-performance and TCO of high-density, large-scale cluster environments.
Using a warm-water cooling system means fewer water chillers are needed, or none at all.
RightScale joins OpenStack, supports Rackspace Open Cloud
RightScale, a company that provides unified access to multiple cloud platforms, today announced formal support for the OpenStack project and said it will let clients deploy to Rackspace's OpenStack cloud.
The move marks further momentum for the OpenStack project.
Michael Crandell, chief executive of RightScale, said: "There is growing interest in OpenStack." He noted that Rackspace's open cloud stays close to the OpenStack trunk code, minimizing proprietary extensions.
RightScale is already a platform for managing a wide variety of public and private clouds, including AWS, Windows Azure, Google Compute Engine, DataPipe, HP, Logicworks, SoftLayer, and Tata. On the private-cloud side, RightScale can manage workloads on the OpenStack, CloudStack, and Eucalyptus platforms, all of which are open source.
VMware releases Micro Cloud Foundry
Everything in the cloud seems to be getting either bigger or smaller. VMware has now gone the small route, releasing Micro Cloud Foundry, a miniature version of the company's Cloud Foundry.
Micro Cloud Foundry can be deployed on a single virtual machine. In its blog post, VMware says this makes it ideal for developers testing applications that are still in development.
Cloud providers seem to be constantly tweaking their products to expand their portfolios. The easiest ways to do this are to add capacity to an existing product, or to carve the product into smaller, separate pieces. VMware took the latter approach.
By contrast, Amazon Web Services recently announced two new high-I/O virtual machine instance types for its popular Elastic Compute Cloud (EC2). At the time, independent analyst Paul Burns noted that adding to existing products not only gives a company more offerings, as with Amazon, but also lets customers pick an instance type that better matches their computing needs.
VMware says that Micro Cloud Foundry has the same characteristics and functionality as regular Cloud Foundry; the only limitation is that it runs on a single VM. Alongside today's miniature release, VMware also announced new features arriving with this Micro Cloud Foundry version, including support for standalone applications and broader support for programming languages such as Ruby, Java, and Node.js.
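As a taste of what "the same characteristics and functionality" means in practice, here is a minimal sketch of probing a Micro Cloud Foundry VM from a developer machine. The domain api.mycloud.cloudfoundry.me is a hypothetical placeholder for whatever name you configure during setup, and /info is the legacy Cloud Foundry discovery endpoint that the vmc client read; verify both against your own installation.

```python
import json
import urllib.request

# Hypothetical Micro Cloud Foundry endpoint; substitute the DNS name
# chosen when the VM was configured.
TARGET = "http://api.mycloud.cloudfoundry.me"

def cloud_info(target: str) -> dict:
    """Fetch the legacy Cloud Foundry /info document (what `vmc info` read)."""
    with urllib.request.urlopen(f"{target}/info", timeout=5) as resp:
        return json.load(resp)

if __name__ == "__main__":
    info = cloud_info(TARGET)
    # Typical fields include the cloud's name, build, and version;
    # exact contents depend on the Cloud Foundry release.
    print(json.dumps(info, indent=2))
```

If the call succeeds, the single-VM instance is answering the same API a full Cloud Foundry deployment would, which is precisely what makes it useful as a local test target.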