Reflecting on the past and changing the future

For people in the computer field, Chuck Thacker is a name that everybody knows. He is one of the inventors of Ethernet and built one of the world's first laser printers. In 2009 the Turing Award was given to Chuck Thacker in recognition of his pioneering design of the Xerox PARC Alto, the first modern personal computer, and of his outstanding contributions to local area networks (including Ethernet), multiprocessor workstations, snooping cache coherence protocols, and the tablet PC. Bill Gates once openly expressed his respect for him: "In terms of contributions to computer science, I don't know of anyone whose work compares with Chuck Thacker's." In 2010, at the 12th 21st Century Computing Conference, the master once again shared his thoughts on the computer's past and future. Let us listen to the voice of a wise man!

I am very glad to have the opportunity to come to Shanghai and talk with Chinese college students. I was quite surprised when I received the Turing Award, because over the past forty years this award has mostly gone to theorists rather than to people like me who actually build computers. I read the lectures of the previous winners to see what they were thinking, and found that they fall roughly into two categories: one is a retrospective of the winner's own field, and the other lays out grand challenges for the future. I do not belong to either category, so I want to approach the subject from a different direction, one that a recently discussed paper also takes. I hope to find a way of talking about our problems that helps us better understand both the problems and their solutions.

When we design systems for the future, I want to consider the impact that earlier decisions have on the present. A decision made back then may have been a good one, but it can matter far less today than it did decades ago; some of the choices we made in the past are no longer valuable or meaningful. If we could go back and choose differently, that would not mean the original choice was wrong. What I want to say is that we need to analyze past decisions carefully, because today we treat them as principles. I will give you six examples to show that this way of thinking is worthwhile.

Before that, let's look at how computers have developed over the past few decades. This list compares two machines. The first is the Alto, a computer I worked on in 1972. Its CPU ran at about 6 MHz; the 2010 machine runs at 2.8 GHz, multiplied by 4 cores (and most of the time three of those cores sit idle). That is an increase of roughly 1,900 times, which is very fast. Memory capacity has gone from kilobytes to 6 GB, and memory is becoming a bottleneck. The pixel count of the display, by contrast, has only increased about 150 times, which is not a big problem, because displays have already passed the level that the human eye can resolve; future displays may not simply add pixels but instead offer breakthroughs in 3D or other directions. The network has gone from 3 Mbit/s to 1 Gbit/s, an increase of about 300 times. Disk capacity is the most interesting: it has increased by about 280,000 times, which poses a challenge for us.
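As a quick back-of-the-envelope check of the growth factors quoted above (a sketch only; the 1972 figures are as given in the talk):

```python
# Rough growth factors between the 1972 Alto and a 2010 machine,
# using the figures quoted in the talk.
alto_cpu_hz = 6e6            # ~6 MHz
modern_cpu_hz = 2.8e9 * 4    # 2.8 GHz x 4 cores

alto_net_bps = 3e6           # 3 Mbit/s Ethernet
modern_net_bps = 1e9         # 1 Gbit/s

print(f"CPU:     {modern_cpu_hz / alto_cpu_hz:,.0f}x")    # ~1,867x, i.e. roughly 1,900x
print(f"Network: {modern_net_bps / alto_net_bps:,.0f}x")  # ~333x, i.e. roughly 300x
```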

So what are the driving forces behind this rapid growth? At the top of that list is the factor of 280,000, which comes from magnetic storage: we can now store all the information a person acquires in a lifetime on a few disks. The second force, one we tend not to think about, is optical fiber: we keep improving the purity of glass, so bandwidth is no longer a scarce resource, and latency can still be improved through better design. The third force is semiconductors and Moore's Law. We often have a mistaken idea of Moore's Law. Moore did not say that computers would get cheaper; in fact they are getting cheaper, but chips are also getting hotter and hotter. What Moore actually said is that the number of transistors on a chip of fixed size would keep growing, doubling roughly every 24 months. His original prediction covered only the next few years, yet it has held for nearly half a century.

Next, let's look at some specific examples: things that were invented long ago whose influence today is questionable, because the technological reality that justified them no longer holds.

The first example is virtual memory. Everyone knows the concept of virtual memory very well. It was developed in the early 1960s on the Atlas, a machine with a small, fast magnetic core memory and a large, slow magnetic drum. The idea was simple: make the core memory look as big as the drum. How? Divide core into 32 pages of 512 words each, map them onto drum addresses, and use a very basic mechanism, which they called a "learning program," to decide which of the 32 pages to evict. This technique is used by every computer today except supercomputers, because supercomputers do not want to add cost to every memory access; Seymour Cray argued that real computers need real memory, not virtual memory. Embedded systems, such as those in your mobile phone or MP3 player, also do not use virtual memory. Everything else does.

The second example is memory consistency. The idea of coherent memory is very important: if my program writes a value and another program reads it, the read should return the value I wrote, not an earlier one. As long as programs run on a computer without caches this comes for free; the problem first arose between programs and the I/O system, for example when you write data to memory and ask the disk controller to fetch it from there. The same mechanism was then carried over to multiprocessor systems, and as those systems grew we found the protocols becoming more complex than the originals, because we could no longer put all the traffic on a shared bus: with point-to-point connections instead of a bus, the protocol gets complicated. Later, Intel used formal mathematical methods to refine the protocol and eliminate its excess complexity.
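Going back to the first example, here is a toy sketch of Atlas-style paging: 32 core frames of 512 words backed by a larger drum. All names are invented for illustration, and the FIFO eviction below is a stand-in for the real Atlas "learning program," which was more elaborate.

```python
# Toy Atlas-style virtual memory: 32 core page frames of 512 words each,
# backed by a larger drum. Names and policy are illustrative, not historical.
from collections import OrderedDict

PAGE_WORDS = 512
CORE_FRAMES = 32

class ToyVM:
    def __init__(self, drum_pages=192):
        self.drum = [[0] * PAGE_WORDS for _ in range(drum_pages)]
        self.page_table = OrderedDict()   # virtual page -> core frame, in load order
        self.core = [None] * CORE_FRAMES

    def _frame_for(self, vpage):
        if vpage not in self.page_table:              # page fault
            if len(self.page_table) == CORE_FRAMES:
                victim, frame = self.page_table.popitem(last=False)  # FIFO evict
                self.drum[victim] = self.core[frame]                 # write back
            else:
                frame = len(self.page_table)
            self.core[frame] = list(self.drum[vpage])                # load from drum
            self.page_table[vpage] = frame
        return self.page_table[vpage]

    def read(self, addr):
        vpage, offset = divmod(addr, PAGE_WORDS)
        return self.core[self._frame_for(vpage)][offset]

    def write(self, addr, value):
        vpage, offset = divmod(addr, PAGE_WORDS)
        self.core[self._frame_for(vpage)][offset] = value

vm = ToyVM()
vm.write(40_000, 7)     # touches an address well beyond the 16K words of core
print(vm.read(40_000))  # -> 7, after a transparent fault-and-load
```

The point of the scheme is exactly what the talk describes: the program sees one large address space, and the mapping plus a replacement policy hide the core/drum split.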

Why do we use coherent memory? I have thought about this a lot, and there are really two possibilities. First, if we want to make programming easier, a unified view of memory helps. Second, coherence was something we could build at small scale, without having to foresee the complexity it would bring at larger scale. In fact, message passing over non-coherent memory can solve the same problem. Would we make the same choice today? We can see that building a coherent-memory system is genuinely hard, especially with many processors, so it is worth asking whether we would still choose it now.

The third example is threads and locks, that is, concurrent programming, which is hard to do: even the best people either fail at it or succeed only at a price. Locks were designed for single-processor computers and then carried over when a second processor was suddenly added. Lock-based programming is still very difficult today. A single lock can protect access to all the data, but it is very inefficient; a precise, fine-grained division of the locks can perform well, but it is much harder to get right. Worse, there is nondeterminism you cannot predict. Even when someone tells you how to do it correctly, programmers still run into great difficulty, and kernel code of this kind cannot be written by just anyone. We need a great many programmers, and our field has met many practical challenges, so we cannot rely on every programmer being a super-programmer. You have to make the program correct before you can talk about making it concurrent. One of the earlier speakers today talked about a trade-off between performance and correctness. I don't think that trade-off is real: a program that gives the wrong answer is useless no matter how fast it is. So the next time you are offered a choice between a correct answer and a fast one, choosing the fast one is simply wrong. There are alternative approaches that may make this job better, such as transactional memory, which borrows from database systems the ACID properties: atomicity, consistency, isolation, and durability. Either all the operations in a transaction take effect at its end, or none of them do.
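A minimal sketch of the coarse-grained locking style described above, and of the race it prevents (names and numbers are invented for illustration):

```python
# The classic hazard of unsynchronized updates, and the single-lock fix.
import threading

counter = 0
lock = threading.Lock()

def deposit_unsafe(n):
    global counter
    for _ in range(n):
        counter += 1            # read-modify-write: two threads can interleave

def deposit_safe(n):
    global counter
    for _ in range(n):
        with lock:              # one lock protects all shared data:
            counter += 1        # always correct, but serializes every access

threads = [threading.Thread(target=deposit_safe, args=(100_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                  # always 400000; the unsafe version may lose updates
```

The single lock is the easy, slow end of the spectrum the talk describes; splitting it into many finer locks recovers performance but invites exactly the nondeterministic bugs mentioned above.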

The fourth example is the complex CPU, and a little history helps here. The 1950s and 1960s were what I call a period of experiment: many different kinds of CPUs were created, some built for particular languages like Lisp, Fortran, and Algol, some built for fighter planes, and many new computer architectures appeared. In the 1970s and 1980s the architectures consolidated around two camps, RISC and CISC, which fought each other more or less like religious movements: the RISC side was promoted by IBM, Stanford, and Berkeley, and each camp stole ideas from the other until, whether a design calls itself RISC or CISC, today there is essentially no difference between them. So by the 1990s we had a clear understanding of these basic architectures, and attention turned to extracting more and more instruction-level parallelism, ILP, from single-threaded programs; some very smart people kept finding cleverer ways to do it. We also piled a variety of caches and predictors into these architectures to gain speed, trying to predict where a needed value will appear. All of this improved performance gradually while increasing complexity enormously, and that complexity brought high design costs and energy consumption. Many people spent a lot of time on this research without good results: there simply is not much ILP to extract from sequential programs, so a great deal of that work was in vain.
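One concrete piece of the speculation machinery mentioned above is the textbook two-bit saturating branch predictor (a generic sketch, not tied to any particular machine in the talk):

```python
# Two-bit saturating counter: states 0-1 predict "not taken", 2-3 predict "taken".
# A single opposite outcome nudges the state but does not flip the prediction.
class TwoBitPredictor:
    def __init__(self):
        self.state = 2                          # start weakly "taken"

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

p = TwoBitPredictor()
hits = 0
for taken in [True, True, False, True, True]:   # a mostly-taken branch
    hits += (p.predict() == taken)
    p.update(taken)
print(f"{hits}/5 correct")                      # -> 4/5: the one blip is absorbed
```

Mechanisms like this are cheap individually, but stacking dozens of them is exactly how the complexity and energy cost described above accumulate.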

There are several possible routes for the future of the CPU. We talked about Intel's Single-chip Cloud Computer, which puts many simple cores, without cache coherence, on one die, connected by a very efficient on-chip network. For much of what we do now, computers are already fast enough. The laptop I travel with, the one I make my slides on, can run for 13 hours on battery, because presenting slides does not require 100 watts of power.

The fifth example is interrupts. Many people think interrupts are essential to computers, that the interrupt system is somehow special. Interrupts were a good idea in the past: when they first appeared, computers were slow and very expensive, and without an interrupt system, if your current computation stalled or your time slice was up, you could not switch easily. But that is a thing of the past. Computers used to be expensive; now a processor on a chip may cost a dollar, so there is no problem leaving one sitting idle to wait for a device. Today we should worry about software, not hardware: hardware without software is like a stone, it does nothing. As architects, our job is to simplify the work of programmers. We can build systems without interrupts; the BBN Pluribus did so in 1972, implementing multi-task processing on a router by polling.
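A minimal sketch of the polling style that replaces interrupts (the devices and handlers here are invented for illustration):

```python
# Interrupt-free I/O: a tight polling loop over device status flags.
import time

class Device:
    def __init__(self, name):
        self.name = name
        self.ready = False        # would be set by hardware when data arrives

    def handle(self):
        self.ready = False
        print(f"servicing {self.name}")

devices = [Device("net"), Device("disk"), Device("keyboard")]

def poll_loop(rounds):
    for _ in range(rounds):
        for dev in devices:       # visit every device each round;
            if dev.ready:         # nothing ever preempts this loop
                dev.handle()
        time.sleep(0.001)         # a bounded loop gives bounded I/O latency

devices[1].ready = True           # simulate the disk completing a transfer
poll_loop(3)                      # -> "servicing disk"
```

With a cheap core dedicated to this loop, the latency is predictable and there is no interrupt machinery to design, verify, or program against.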

The sixth example is one many people may disagree with: the packet-switched network. Over the past few years the architecture community and the networking community, especially in research institutions, have been cooperating more and more. If a single chip contains many cores, they can be wired together like a network, so architects who used to ignore the telephone company now have to think like one. We actually have the opportunity to run big experiments across entire data centers, but not many people do, and not many are familiar with them. A data center is not the Internet; it is not even a miniature Internet. The differences are substantial. The topology of a data center is known, because we lay it out by hand; the name of every machine is definite and the whole network is known, so we do not need the Internet's naming machinery. The network is also comparatively small and its latency short, since one end of a data center is at most a few hundred meters from the other, and it has tens of thousands of nodes, not hundreds of millions.

At the same time, in a data center you can introduce new protocols for its various connections, something you can still actually do. What is unsatisfying about the Internet protocols is that they use packet loss as the signal of congestion. In fact, loss and congestion are two different things and should be handled in different ways; in a network as small as a data center it is feasible to manage congestion directly and to handle retransmission separately. Giving every device in the world an address was not a problem at the time, because there were few devices and not many addresses had been allocated. But as CPUs get faster and networks get faster and more reliable, new problems appear. As speeds increase, errors become rarer and rarer, while switching and routing demand large amounts of buffer memory. Routers carry more and more buffering that is generally not put to good use, and once there is congestion there is delay. A manufacturer will quote you an enormous forwarding rate for his router, and he is right; but under congestion there is no way to guarantee quality of service. Why? Because routing at each switch is so complex that processors have to be added: the router you buy today contains a great deal of memory and a large number of small processors. In the future, as links get faster and faster, this becomes a huge problem, because there is less and less time to spend on each packet.

These last points really summarize everything above. Today's systems are totally different from those of the past 50 years. Given the reality we face, there is both the opportunity and the necessity to reconsider some earlier decisions and to carry them out in a different way. We need fresh innovation and a re-analysis of past practice, because computers still cannot do everything we need; many challenges remain. In a talk like this it is hard to do justice to the grand challenges, so I will simply list a few for the future. The first is a computer that could drive my car, perhaps even better than I do, so that I do not, say, drift to the right and crash it. I also hope my computer can understand me better. My computer today is like a slave that knows nothing about me; I would like it to use my information to learn my preferences and behave like a colleague rather than a slave. And I hope my computer can help educate my grandson: computers already play a useful role in education, but they are far from doing the whole job.

Much of what computers do today is animate old tools. What is a word processor? It is a typewriter. The people who first invented the spreadsheet understood that a table can have great power. The new challenges we face today are precisely to use computers for things that are not so easy to automate. And if today's problems are solved, new problems will appear, and those new problems will require new programs and new computer architectures; there will be many challenges ahead. When I talk to students I usually tell them how much I envy them: you are studying computer science at a moment of reflection, and you now have the chance to help change the future, just as computing changed in the 1950s. So I envy you all, and I wish you every success. Thank you!
