Ren Zhengfei speaks for the first time about the successor system: Believe in Huawei's inertia
Huawei CEO Ren Zhengfei recently wrote an internal article heralding the rotating CEO system. In it, he reviewed his own personal heroism and the belief that unity is what powers the journey, and traced the evolution of the company's organizational structure from its beginnings to the current rotating CEO system. On the subject of successors he said, "Believe in Huawei's inertia, and believe in the wisdom of the successors."
Last year there were rumors that, in order to smooth his son's succession, Ren had forced out company chairwoman Sun Yafang with a 1 billion yuan "breakup fee"; the company subsequently issued a statement denying this.
In the article, Ren specifically mentioned the company turmoil of 2002: "Had it not been for the backbone of the company, who in the vast darkness lit up their hearts to illuminate the way forward, the company would not be here today. During that period Chairwoman Sun united the staff and strengthened their confidence; that work cannot go unrecognized."
Ren's article is a herald for the rotating CEO system. On the question of succession he did not use the word "successor" in the singular; the "successors", in his view, must find their direction through induction and keep the company in a reasonable organizational structure and in an excellent, enterprising state, so as to guard against all manner of future accidents.
"Wired": A model of mapr--hadoop commercialization
Recently, Cade Metz, a writer for the well-known American magazine Wired, published a commentary on MapR. He believes MapR has the elements it needs to develop Hadoop and succeed.
M.C. Srivas, a co-founder of MapR, helped build the infrastructure that makes Google's search engine so remarkable.
Srivas spent two years on Google's search infrastructure team before leaving in the summer of 2009 to start a company, MapR. MapR draws on the design ideas behind Google's infrastructure (Google's GFS and MapReduce) to provide large-scale data-processing operations. Like any other company in this space, Srivas intends to commercialize and sell products based on open-source Hadoop.
According to Srivas and Schroeder, their Hadoop distribution leads the other open-source Hadoop distributions on many features. Others may not agree, but to them it is an indisputable fact that MapR's product overcomes the inherent flaws of the open-source versions of Hadoop.
Srivas says that during two years of development MapR essentially rebuilt the file system. It also improved Hadoop's JobTracker, which distributes tasks across machines and manages their execution. In open-source Hadoop, the NameNode is the central server responsible for managing the file-system namespace and client access to files, so the open-source version still suffers from a single point of failure and from a limit on the number of files the NameNode can handle.
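The file-count limit comes from the NameNode having to keep the entire namespace in memory. A rough back-of-the-envelope illustration, assuming the commonly cited rule of thumb of roughly 150 bytes of NameNode heap per namespace object (file, directory, or block); the exact figure varies by Hadoop version:

```python
# Back-of-the-envelope estimate of the NameNode heap needed for a given
# number of files. Assumes the oft-quoted ~150 bytes per namespace object
# (file, directory, or block); real numbers vary by Hadoop version.
BYTES_PER_OBJECT = 150

def namenode_heap_gb(num_files, blocks_per_file=1.5):
    """Estimate NameNode heap in GB for num_files files."""
    objects = num_files * (1 + blocks_per_file)   # file entries + block entries
    return objects * BYTES_PER_OBJECT / 1024**3

# 100 million small files already demand tens of GB of heap,
# all of which must fit in a single NameNode's memory.
print(f"{namenode_heap_gb(100_000_000):.1f} GB")   # roughly 35 GB
```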
The New York Times: Big data will breed the next heavyweight start-ups
The result of the data explosion is that data is becoming as ubiquitous as air, pouring out of mobile phones, computers, digital cameras, smart meters and GPS devices. Faced with such massive volumes of data, companies and government agencies struggle to digest it and are at a loss over what to do with it.
But the problem means opportunity.
Internet platform openness report: Baidu and Tencent attract developers
Recently, the Network Economics Laboratory of the School of Management at the Graduate University of the Chinese Academy of Sciences released its 2011 survey report on the openness of Chinese Internet platforms. The report evaluates several mainstream open platforms in China along four dimensions: users, platform, support and revenue. Baidu scored highest overall with 4.06 points, followed by Sina Weibo with 3.78, Tencent with 3.73, the 360 platform with 3.57 and Renren with 3.5.
Wired: Decrypting Amazon's virtual supercomputer
This fall, Amazon built a virtual supercomputer on top of its EC2 (Elastic Compute Cloud) service. In the global supercomputer rankings, this "non-existent behemoth" placed 42nd by computing power.
The virtual supercomputer still needs hardware behind it and, like any physical supercomputer, is made up of a large collection of machines. What makes it significant is that it is not used by Amazon alone: it is a supercomputer that anyone can use.
Amazon is a representative enterprise of the cloud-computing era. Alongside its huge electronic-retailing business, Amazon has built a global network of data centers from which anyone can obtain computing resources, including virtual servers, virtual storage and a wide variety of online services. That global network is large enough to rival any of the fastest supercomputers on the planet.
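To give a sense of what "anyone can use it" looks like in practice, here is a minimal sketch that rents a small cluster of EC2 Cluster Compute instances with the boto library; the AMI ID, key-pair name and placement-group name are placeholders, and the instance type and region are assumptions rather than details from the article:

```python
# Minimal sketch: renting a slice of Amazon's "virtual supercomputer" via EC2.
# Requires AWS credentials in the environment; the AMI ID, key name and
# placement group below are placeholders, not values from the article.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

reservation = conn.run_instances(
    "ami-00000000",              # placeholder HPC-ready machine image
    min_count=4, max_count=4,    # a small 4-node cluster
    instance_type="cc2.8xlarge", # EC2 Cluster Compute instance type
    key_name="my-keypair",       # placeholder SSH key pair
    placement_group="hpc-group", # assumes this placement group already exists;
                                 # it keeps nodes close for low-latency traffic
)

for instance in reservation.instances:
    print(instance.id, instance.state)
```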
2012: Data-center and cloud network architectures will prevail
There were many significant developments in data-center and cloud switching architectures in 2011, and this trend is expected to continue in 2012. At the same time, next-generation IT technology will start to be deployed in the real world.
So what will happen in 2012?
1. The architectural progress made in 2011 will be deployed in enterprise and service-provider networks in 2012.
2. Network architectures will be as diverse as the organizations that apply them.
3. We will also see another disruptive move driven by 10G servers, which will bring about a seamless combination of 10G and 40G architectures.
4. IT automation will go further in 2012: a single mouse click will configure bare-metal servers, switches, virtual LANs, open scripting, virtual-machine mobility and more (a rough sketch of what such scripting might look like follows this list).
5. Investment in data-center and cloud-architecture start-ups will continue in 2012 and may grow.
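As a rough illustration of what "one-click" automation might look like behind the scenes, the sketch below drives a hypothetical provisioning REST API; the endpoint, payload fields and token are invented for illustration and do not refer to any specific vendor's product:

```python
# Illustrative only: provision a bare-metal server and a VLAN through a
# hypothetical REST provisioning API. The endpoint, fields and token are
# placeholders, not a real vendor interface.
import json
import urllib.request

API = "https://provisioning.example.com/v1"   # hypothetical endpoint
TOKEN = "REPLACE_ME"                          # placeholder auth token

def post(path, payload):
    req = urllib.request.Request(
        API + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + TOKEN},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# One "click": allocate a bare-metal server, then attach it to a new VLAN.
server = post("/servers", {"profile": "bare-metal-large", "image": "ubuntu-11.10"})
post("/vlans", {"name": "app-tier", "id": 120})
post("/vlans/120/members", {"server_id": server["id"]})
```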
Point of view: Stream computing drives real-time business change
Over the past year we have seen many vendors focus mainly on integrating Hadoop or NoSQL data-processing engines and on improving basic data storage. Hadoop's greatest success lies in its use of MapReduce, a programming model for processing very large data sets and generating the corresponding results. The core idea of MapReduce is borrowed from the characteristics of functional programming languages and vector programming languages.
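To make that functional heritage concrete, here is a minimal single-machine sketch of the MapReduce idea, counting words with plain map, shuffle and reduce steps; it illustrates the programming model only and is not how Hadoop itself is implemented:

```python
# Minimal single-process illustration of the MapReduce model: a word count
# expressed as a map phase, a shuffle (group-by-key), and a reduce phase.
from collections import defaultdict

def map_phase(document):
    """Emit (word, 1) pairs for every word in the document."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Group intermediate values by key, as the framework does between phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    """Combine all counts for a word into a single total."""
    return key, sum(values)

documents = ["big data big opportunity", "big data needs hadoop"]
intermediate = [pair for doc in documents for pair in map_phase(doc)]
results = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
print(results)   # {'big': 3, 'data': 2, 'opportunity': 1, 'needs': 1, 'hadoop': 1}
```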
2012 outlook: Hadoop gathers strength, hybrid clouds give rise to connectors
Will the world really end in 2012? Ticket to the ark or not, there are plenty of new trends in the IT field worth watching next year. The boom in cloud computing and big data has proved unstoppable and will keep developing, so what kind of destruction and rebirth will take place beneath the surging waves?
1. Big data grows fast and Hadoop rises with it
In 2011 cloud computing made big data hot; in 2012 big data will push Hadoop even higher.
2. Hybrid clouds prevail, giving rise to cloud connectors
The debate over which kind of cloud to use is still going on, and it may become more intense. At the same time, as IT departments begin to combine public and private clouds, the utility of the hybrid cloud is starting to show inside the enterprise. Many vendors are now pushing the hybrid cloud rather than simply pushing the public-cloud model.
PCWorld: China's push to build data centers brings opportunities to America
China is launching an unprecedented wave of data-center construction that provides business opportunities for US companies and is expected to give China one of the world's most sophisticated computing infrastructures, the US IT website PCWorld wrote today.
Virginia Tech builds the HokieSpeed supercomputer
Virginia Tech today announced the launch of a supercomputer called HokieSpeed, which reaches a single-precision peak of 455 teraflops and a double-precision peak of 240 teraflops. Although it is not the world's most powerful computer (it ranks 96th on the TOP500), it is one of the most energy-efficient, ranking 11th on the November 2011 Green500 list, a very respectable showing.