A Discipline of Programming (Chinese-English bilingual edition)
(By Edsger W. Dijkstra, winner of the Turing Award. Everyone who studies or works in the computer field should know and respect the pioneers. This book is his most important work and a recognized classic in the field of programming!)
By Edsger W. Dijkstra
Translated by Qiu Zongyan
ISBN 978-7-121-20250-6
Published in July 2013
Price: 79.00 RMB
Editor's Recommendation
This book was written in the mid-to-late 1970s, but its profound influence on programming techniques, programming-language development, and research in the theory of programs continues to this day.
About the Book
This is the classic book on programming by Edsger W. Dijkstra, winner of the Turing Award. Drawing on his keen insight and long practical experience in programming, the author gives a distinctive treatment of the description and development of basic sequential programs and of many key issues in program development. The book discusses the essential characteristics of sequential programs, how to describe them, and how to reason about their behaviour (correctness), and through a series of development examples that progress from the simple to the complex it explains how correct and reliable programs can be developed on the basis of rigorous logical reasoning.
Although the book was written in the mid-to-late 1970s, its profound influence on programming techniques, programming-language development, and research in the theory of programs continues to this day. It deserves the attention of everyone who cares about the nature of computer science and technology, and it is well suited to computer professionals, teachers, and students who intend to work in program and software development over the long term.
Table of Contents
Foreword IX
Preface XI
Chapter 1  Executional Abstraction  1
Chapter 2  The Role of Programming Languages  13
Chapter 3  States and Their Characterization  19
Chapter 4  The Characterization of Semantics  29
Chapter 5  The Semantic Characterization of a Programming Language  47
Chapter 6  Two Theorems  73
Chapter 7  On the Design of Properly Terminating Constructs  81
Chapter 8  Euclid's Algorithm Revisited  89
Chapter 9  The Formal Treatment of Some Small Examples  101
Chapter 10  On Nondeterminacy Being Bounded  143
Chapter 11  An Essay on the Notion: "The Scope of Variables"  157
Chapter 12  Array Variables  187
Chapter 13  The Linear Search Theorem  209
Chapter 14  The Problem of the Next Permutation  213
Chapter 15  The Problem of the Dutch National Flag  221
Chapter 16  Updating a Sequential File  233
Chapter 17  Merging Problems Revisited  245
Chapter 18  An Exercise Attributed to R. W. Hamming  257
Chapter 19  The Pattern Matching Problem  269
Chapter 20  Writing a Number as the Sum of Two Squares  279
Chapter 21  The Problem of the Smallest Prime Factor of a Large Number  285
Chapter 22  The Problem of the Most Isolated Villages  297
Chapter 23  The Problem of the Shortest Subspanning Tree  307
Chapter 24  Rem's Algorithm for the Recording of Equivalence Classes  321
Chapter 25  The Problem of the Convex Hull in Three Dimensions  335
Chapter 26  Finding the Maximal Strong Components in a Directed Graph  383
Chapter 27  On Manuals and Implementations  401
Chapter 28  In Retrospect  417
Highlights
Chapter 2  The Role of Programming Languages
In the "execution abstraction" chapter, I deformally describe the design of several different "machines" that calculate the largest factor of two (not big) positive integers. One machine moves stones on the cart, the other machine moves on the coordinate axis, and the last one is based on two registers, each contains an integer (up to a certain limit. Physically speaking, these three "machines" are quite different. In mathematics, they are very similar. The main reason for making this argument is that all three can calculate the most common factor, this is the commonality of the three. Since these three machines are just different implementations of the same set of "game rules", in fact, these rules are the core of an invention, which is the famous "Euclidean algorithm ".
In the previous chapter, Euclid's algorithm was described in a rather informal manner. Someone might suggest that, because the number of possible computations it covers is so large, we really ought to have a proof of its correctness. But an algorithm given only informally is hard to treat as a formal object. In order to deal with the algorithm formally, we need to describe it in a suitable formal notation.
Such a formal description technique has many potential advantages. Inherent in any descriptive technique is the fact that whatever is actually expressed in it is a specific member of the set of objects it can describe (usually an infinite set). Our descriptive technique should certainly provide an elegant and concise description of Euclid's algorithm; once that has been done, the algorithm is represented as one member of a huge class containing all sorts of algorithms, and when describing other algorithms in this class we may expect to find more interesting applications of our descriptive technique. Some might say that Euclid's algorithm is so simple that an informal description suffices; the power of a formal notation should show itself in achievements that would be impossible without it.
A second advantage of a formal description technique is that it enables us to study algorithms as mathematical objects. The formal description of an algorithm then becomes the starting point for intellectual gains, allowing us to prove theorems about classes of algorithms, for example about all algorithms whose descriptions share some structural property.
Finally, such a descriptive technique enables us to define an algorithm without ambiguity: given an algorithm described in it and a set of actual arguments (the input), there can be no doubt or uncertainty about what the corresponding answer (the output) should be. One can then assert that the corresponding computation could be carried out by an automatic machine: give it the algorithm (the formal description) and the actual arguments, and it will produce the answer without any further human intervention. Automatic machines able to deal with such pairs of algorithms and arguments have indeed been built; they are what we call "automatic computers". An algorithm that can be executed automatically by a computer is called a program, and since the late 1950s the formal description techniques used for writing programs have been called "programming languages". (The introduction of the term "language" in connection with program description techniques has attracted much attention. On the one hand, existing language theory provided a natural framework and a set of useful terms, such as "grammar", "syntax", and "semantics", for the relevant discussions. On the other hand, we must note that the analogy with existing "natural languages" has also been quite misleading, because natural languages, which are not formalized, owe both their weaknesses and their strengths to their vagueness and imprecision.)
Historically speaking, this last aspect, the fact that programming languages can serve as a vehicle for instructing existing automatic computers, was for a long time regarded as their most important property. The efficiency with which existing automatic computers could execute programs written in a given language became the major criterion for judging the quality of the language. One regrettable consequence is that it is not hard to find peculiar features of existing machines faithfully reflected in existing programming languages, and the price paid for this is that programs expressed in such a language are unnecessarily hard to grasp intellectually (and programming is already difficult enough even without that!). In the approach proposed below we shall try to redress this balance. As we understand it, the fact that the programs we write are to be actually executed by a computer is merely an accidental circumstance; it should not occupy the centre of our attention. (In a recent textbook written to train PL/I programmers, the author strongly recommends avoiding procedure calls as much as possible, "because they greatly reduce program efficiency". Since the procedure is PL/I's most important structuring tool, I find this advice so appalling that I cannot regard the book as truly "educational". If you are convinced that procedures are a useful concept and yet, in your working environment, the overhead of their implementation is intolerable, then it is the poor implementation that should be blamed; its performance should not be accepted as the standard! Here, too, the balance needs to be redressed.)
I view a programming language primarily as a vehicle for describing (possibly very complex) abstract mechanisms. As we saw in the chapter "Executional Abstraction", the most salient virtue of algorithms is the conciseness of the arguments they allow us to make, and our confidence in the corresponding mechanisms rests on exactly this fact. If that conciseness is lost, an algorithm loses a large part of its "right to exist". We shall therefore keep this conciseness as a constant goal, and all our choices concerning the programming language will be made with this goal in mind.
Foreword
In poetry, music, art, science, and other fields with a longer history of intellectual cultivation, historians pay tribute to the most outstanding practitioners, whose achievements have widened the experience and understanding of their admirers and have greatly inspired and enhanced the talents of their followers. Their innovations are built on superb skill acquired through practice, combined with a keen insight into the underlying principles of their field. In many cases their influence is further enhanced by their broad cultural background and by the power and thoroughness of their expression.
In this book the author sets out in detail, in his characteristic style, his radically new insights into the fundamental nature of computer programming. From these insights he has developed a systematic method of programming together with the notational tools it requires, and he presents and tests them on a wide range of elegant and efficient examples. This book is destined to stand as one of the outstanding achievements in the intellectual discipline of computer programming.
C. A. R. Hoare
About the Author and Translator
Author:
Edsger W. Dijkstra was born in Rotterdam, the Netherlands, and was the first person in the Netherlands to make programming his profession. He actively promoted structured programming in his early years and devoted his life to developing computing into a science. He made pioneering contributions in many areas of computer science and technology, and he won the 1972 Turing Award for his fundamental contributions to programming.
Translator:
Qiu Zongyan is a professor at the School of Mathematical Sciences, Peking University. His main research interests are formal methods in software and the theoretical foundations of programming, and he also pays close attention to programming practice. He has translated several related books, including "Programming from Specifications", "The B Method", "Elements of Programming", "Structure and Interpretation of Computer Programs", and "The Design and Evolution of C++".
Media Comments
In this book the author sets out in detail, in his characteristic style, his radically new insights into the fundamental nature of computer programming. From these insights he has developed a systematic method of programming together with the notational tools it requires, and he presents and tests them on a wide range of elegant and efficient examples. This book is destined to stand as one of the outstanding achievements in the intellectual discipline of computer programming.
C. A. R. Hoare
Preface
For a long time I have wanted to write a book roughly along the lines of this one. On the one hand, I knew that programs could have a compelling form and a deep logical beauty; on the other hand, I had to accept that the vast majority of programs are presented in a way fit only for mechanical execution, with no aesthetic appeal at all and unfit for human appreciation. A second reason for dissatisfaction was that algorithms are usually published in the form of finished products, while the considerations that played the most important role in the design process, and that justify the final shape of the finished program, are hardly mentioned. My original idea was to present a series of beautiful algorithms in such a way that the reader could appreciate their beauty, by describing the real or imagined design processes that would each lead to the required program. To some extent I have carried out that original plan: the core of this monograph is indeed a series of chapters, each of which tackles and solves a new problem. But the book I finally wrote differs considerably from what I had envisaged, because the wish to present the material in a natural and convincing manner imposed heavy obligations on me in this pursuit. I shall always be glad that I completed this task.
When I started writing a book of this kind, I immediately faced the question: "Which programming language am I going to use?" And this is not merely a question of presentation! One of the most important (and most difficult) aspects of any tool is its influence on the working habits of those who train themselves to use it, and that influence, whether we like it or not, is an influence on our thinking habits. After analysing those influences as well as I could, I came to the conclusion that no existing language, nor any subset of one, suited my purpose. On the other hand, I knew very well that I was not ready to design a new programming language, and I had vowed not to do so for the next five years; I also had the distinct feeling that that period had not yet elapsed! (Besides everything else, this book had to be written too.) I tried to resolve this contradiction by designing a small language tailored to my specific purpose, making only those commitments that seemed unavoidable and whose justification could be fully argued.
This hesitation and self-imposed restraint, if misunderstood, may disappoint many potential readers of this book. Those for whom the difficulty of programming lies in the sophisticated use of elaborate and fancy tools called "high-level programming languages", or (worse!) "programming systems", are bound to be dissatisfied with it. If they feel cheated because I have ignored all those attractive bells and whistles, I can only answer: "Are you quite sure that all those bells and whistles, all those wonderful facilities of your so-called 'powerful' programming languages, belong to the solution set rather than to the problem set?" I can only hope that, even though I use a small language, they will read this book; having done so, they may agree that even without the attractive bells and whistles there is still plenty to discuss, and that it is therefore questionable whether most of those features should be introduced at the outset at all. I also owe an apology to those who are clearly interested in programming-language design: I cannot be any clearer on that subject than I have already been. On the other hand, I hope that in due time this monograph will inspire them and help them avoid some of the mistakes they might otherwise have made.
During the process of writing, to my continuing surprise and delight, the text that gradually emerged turned out to be very different from what I had in mind at the start. Initially I intended to demonstrate the development of programs in a (readily understandable) way that was somewhat more formal than in my (introductory) courses, where the semantics were introduced intuitively and the correctness arguments, though usually rigorous, were arranged by hand and dressed up in persuasive prose. I was pleasantly surprised while building the more formal foundation this required. The first surprise was the so-called "predicate transformer", the tool I chose: it gives a way of defining the relation between initial and final state directly, without any reference to the intermediate states that may arise during actual program execution. I was very pleased by this, because it cleanly separates the two main concerns of the programmer: the mathematical concern for correctness (that is, whether the program defines the right relation between initial and final state, a question we can study with the predicate transformer as a formal tool, without considering the actual computational process) and the engineering concern for efficiency (which, it also becomes clear, is only a matter of implementation). One of the most helpful discoveries was that the same program text always admits two complementary interpretations: it can be read as the code for a predicate transformer, which seems better suited to our own needs, or it can be read as executable code, an interpretation I prefer to leave to the machines. The second surprise was that the most natural and systematic "codes for predicate transformers" I could imagine, when regarded as "executable code", required a non-deterministic implementation. For a while I shuddered at the thought of introducing non-determinacy into uniprogramming (I knew only too well the complications it brings to multiprogramming!), until I realized that the interpretation of the program text as the code for a predicate transformer has its own right of existence. (Looking back, we can see that many of the questions raised in the past about multiprogramming were no more than consequences of an over-emphasis on determinism.) I finally came to regard non-determinacy as the normal situation, with determinacy reduced to a, not very interesting, special case.
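As a small illustration of the first reading (my example, using the standard weakest-precondition rule for assignment rather than any particular notation from the book): the effect of the assignment x := x + 1 on a desired postcondition R is obtained purely by substitution, for instance

\[
wp(\texttt{x := x + 1},\; R) \;=\; R[x := x+1],
\qquad\text{e.g.}\qquad
wp(\texttt{x := x + 1},\; x > 5) \;=\; (x+1 > 5) \;=\; (x > 4).
\]

No intermediate state and no notion of execution appears anywhere in this calculation; only the relation between initial and final states is involved.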
Once the foundation had been laid, I could devote myself to what I had wanted to do all along, namely solving a series of problems, and doing so turned out to be a pleasant surprise. Compared with my earlier ways of working, the formal apparatus gave me a much firmer grip on what I was doing. I was happy to find that an explicit concern with termination yields many illuminating insights, so much so that I came to regard a view that considers only partial correctness as regrettable. The greatest pleasure, however, was that for most of my earlier problems I this time found more beautiful solutions! This was very encouraging, and I take it as an indication that the methods developed here have indeed improved my programming ability.
How should this monograph be studied? The best advice I can give is this: whenever you reach a problem statement, stop reading and try to solve the problem yourself. Trying to solve it on your own is the only way to appreciate how difficult the problem is; it gives you the opportunity to compare your solution with mine, and it gives you the chance to discover for yourself that your solution is better than the one I give. Be warned in advance, and do not be discouraged, if you find that the material is far from easy to read: those who have studied this monograph have generally found it quite difficult (but also very rewarding!). Yet every time we analysed the difficulties encountered, the conclusion was that the "blame" lay with the problems actually being discussed, rather than with the text itself (that is, with its presentation). The moral is that a non-trivial algorithm is simply non-trivial, and that its description in a programming language is extremely compact compared with the reasoning needed to justify its design: do not be misled by the final length of the program text! The advice given by one of my assistants, which I pass on faithfully because it has proved so valuable, is to study this book in small groups. (Here I must add a remark about the level of difficulty of the text. For many years of my scientific career I have worked at understanding the programmer's task, with the aim of making it an intellectually manageable job. After years of such clarifying work it was quite a surprise to receive the repeated feedback that "I had made programming difficult". But the difficulty has always been there; only by making it visible can we hope to design programs deserving a high level of confidence, instead of just hacking up some code, that is, program texts produced on the assumption that they cannot really be trusted, ready to be overthrown by the first counterexample. Needless to say, none of the programs in this book has been tested on a machine.)
I should also explain to the reader why I have kept the mini-language so small that it contains neither procedures nor recursion. Since each extension of the language would have added several chapters to the book, and would thus have made it correspondingly more expensive, for most of the possible extensions (multiprogramming, for instance) this argument needs no further elaboration. But procedures have always been at the heart of programming, and recursion is, so to speak, the hallmark of computing science, so some explanation is owed.
First of all, this monograph is not an introductory text, and I therefore expect its readers to be familiar with these concepts already. Secondly, this book is not a reference manual for some particular programming language: the absence of these constructs, and of examples using them, should not be interpreted as meaning that I cannot or do not wish to use them, nor as a suggestion that those who can use them well should not do so. The point is simply that I do not need them for the message I want to convey: a careful separation of concerns, and why that separation is the most important foundation for the design of high-quality programs. The mini-language used here, restrained tool though it is, leaves us quite enough freedom of action for a variety of remarkable and deeply satisfying designs.
Although the above explanation is quite adequate, it is not the whole story. In any case I felt that the repetitive construct had to be present in the language as a construct in its own right, because in my view its proper explanation is something that has been overdue for a long time. When programming languages came into being, the "dynamic" nature of the assignment statement seemed hard to reconcile with the "static" nature of traditional mathematics, and for lack of an adequate theory mathematicians felt very uneasy about it. And since the repetitive construct is the main reason for having assignment to variables, mathematicians disliked the repetitive construct as well. Many breathed a sigh of relief when programming languages without assignment and without repetition, such as pure LISP, were developed: these allowed them to return to familiar ground and to see a glimmer of hope that programming could be given a solid and widely respected mathematical foundation. (Even today there is among theoretically inclined computing scientists a widespread feeling that recursive programs are "more natural" than repetitive ones.)
It was quite another matter to provide a sound and workable mathematical foundation for "repetitive constructs" and "assignment to variables"; for that we had to wait another decade. The outcome of that research, as shown in this monograph, is that the semantics of a repetitive construct can be defined in terms of a recurrence relation between predicates, whereas the semantic definition of general recursion requires a recurrence relation between predicate transformers. This makes it quite clear why I regard general recursion as an order of magnitude more complicated than mere repetition, and why I am dismayed to see the semantics of the repetitive construct

    while B do S

defined as that of the call

    whiledo(B, S)

of the recursive procedure whiledo, defined as follows (described in ALGOL 60 syntax):

    procedure whiledo(condition, statement);
        begin if condition then
            begin statement; whiledo(condition, statement) end
        end
That may be all right, but it hurts, for I do not like to crack an egg with a sledgehammer, no matter how effective the sledgehammer is. For the generation of theoretical computing scientists who became involved with this subject during the 1960s, the above definition is often not merely "the natural one" but even "the true one". In view of the fact that we cannot even define what a Turing machine is supposed to do without appealing to the notion of repetition, some redressing of the balance seemed indicated.
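To give the flavour of such a "recurrence relation between predicates", here is a sketch (my specialization to the single-guard loop while B do S, not the book's more general guarded-command form), where H_k(R) stands for "the loop terminates after at most k iterations in a final state satisfying R":

\[
\begin{aligned}
H_0(R) \;&=\; \lnot B \,\wedge\, R,\\
H_k(R) \;&=\; \bigl(B \,\wedge\, wp(S,\, H_{k-1}(R))\bigr) \,\vee\, H_0(R) \qquad (k > 0),\\
wp(\textbf{while } B \textbf{ do } S,\; R) \;&=\; (\exists\, k \ge 0 : H_k(R)).
\end{aligned}
\]

Each H_k is just a predicate computed from its predecessor; a comparable treatment of general recursion needs a recurrence over predicate transformers themselves, which is the extra order of magnitude of complexity referred to above.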
For the absence of a bibliography I offer neither explanation nor apology.
Acknowledgements: The following people have had a direct influence on this book, either because they were asked to comment on its proposed contents or because they were asked to comment on the book (or parts of it): C. Bron, R. M. Burstall, W. H. J. Feijen, C. A. R. Hoare, D. E. Knuth, M. Rem, J. C. Reynolds, D. T. Ross, C. S. Scholten, G. Seegmüller, N. Wirth, and M. Woodger. It is a pleasure to record my gratitude for their cooperation. I would also like to express my special thanks to Burroughs for providing me with the opportunity and the facilities, and to my wife for her constant support and encouragement.
Edsger W. Dijkstra
Nuenen, The Netherlands