Bertrand Meyer talks with Bill Venners about the increasing importance of software quality, the effect of the market on software quality, and the challenges of complexity.
Bertrand Meyer is a software pioneer whose work has spanned both academia and industry. He currently holds the Chair of Software Engineering at ETH Zurich, the Swiss Federal Institute of Technology. He has written numerous papers and books, including the classic Object-Oriented Software Construction (Prentice Hall, 1994, 2000). In 1985 he founded Interactive Software Engineering, Inc., now called Eiffel Software, Inc., which provides software tools, training, and consulting services based on the Eiffel language.
On September 28, 2003, Bill Venners conducted a telephone interview with Bertrand Meyer. In this interview, which will be published in multiple installments on Artima.com, Meyer discusses many software topics, including software quality, software complexity, design by contract, and test-driven development. In this first installment, Meyer covers the increasing importance of software quality, the effect of the market on software quality, and the challenges of complexity.
Importance of software quality
In a 2001 interview on InformIT, you said: "The situation right now is quite good. In the software industry, quality has become a topic for many people, and it will increasingly become the most important topic." Why?
Computer applications have spread into every aspect of social life, so low-quality software is no longer acceptable. As the software industry grows, our dependence on software grows with it. Today we must confront problems that have existed for a long time but were never taken seriously.
Alan Perlis's foreword to Structure and Interpretation of Computer Programs by Abelson and Sussman, the MIT textbook, is enlightening. He wrote:
I think that it's extraordinarily important that we in computer science keep fun in computing. When it started out, it was an awful lot of fun. Of course, the paying customers got shafted every now and then, and after a while we began to take their complaints seriously. We began to feel as if we really were responsible for the successful, error-free, perfect use of these machines. I don't think we are. I think we're responsible for stretching them, setting them off in new directions, and keeping fun in the house.
A typical attitude used to be: "Trust us, we can do whatever we want to do; if there's a problem, we'll fix it." That is no longer possible. People depend on software so heavily that they simply cannot accept it. In the early days of the dot-com era (1996 to 2000) this attitude could barely pass, but it no longer can. The free ride that people so readily accepted in the past is over.
In May 2003, the Harvard Business Review published an article by Nicholas Carr, "IT Doesn't Matter," which argued that IT has not delivered on its promises. This is a clear signal that society as a whole now demands that we take our commitments far more seriously than in the past. Although for the moment the free ride seems to continue, that era will soon end. People care more and more about what we do and whether the money spent on it pays off. At the core of all these concerns is quality.
Market impact on software quality
"The incentive to improve quality exists, but only up to a point: the market will tolerate defects that do not destroy a product's usefulness. Most managers believe that quality efforts beyond that point yield a sharply diminishing return on investment." So how does the market affect software quality?
The influence of market forces on software quality is both positive and negative. It is somewhat like the Laffer curve, the theory popular during the era of Reagan's economic policy. I am not an economist, and I have heard that the theory is no longer considered trustworthy, so I am not implying anything about the Laffer curve's actual standing in economics. But the Laffer curve says that if you tax people at a rate of 0%, the state gets no revenue; and if you tax them at 100% (there is no higher level), people keep none of what they earn, so they refuse to work, and the state again gets no revenue. This is a simplified account, and although the Laffer curve has an obvious plausibility, I am not sure it is correct or precise as economics. As an analogy, however, it describes well the influence of market forces on software quality.
If the quality of your software product is very poor, you will earn nothing, because no one will buy it. Conversely, if you pour unlimited time, people, and money into building absolutely perfect software, the long development cycle and high cost will drive you out of the market: you will miss the market window, or exhaust all your resources first. So people in the software industry must find a balance point that is hard to grasp: the product must be good enough not to be rejected outright at stages such as evaluation, yet must not pursue perfection or over-refinement, or time and money will run out before it achieves its purpose.
Complexity challenges
In your masterpiece Object-Oriented Software Construction, you wrote: "The single biggest enemy of reliability, and perhaps of software quality in general, is complexity." Can you talk about that?
I think some parts of our software reach complexity at the limits of human comprehension, and sometimes those parts defeat us. The only way to build a large, satisfactory system is not to keep piling on complexity but to keep complexity under control. Windows XP, for example, contains tens of millions of lines of code, which no single person can understand or even imagine. If you want to keep control over such a system, or have any hope of reliability, the only way is to eliminate unnecessary complexity and do your utmost to keep control over the complexity that remains.
Complexity control is a basic principle of Eiffel programming. Eiffel helps most with complex, difficult developments. You can of course use Eiffel to build simple or moderately difficult systems, perhaps even better than with other tools, but Eiffel truly shines when a problem grows so complex that it threatens to escape your control. For example, one of its basic principles is a strictly defined class interface and information hiding. In many other languages you can find an easy way to get at hidden information; in Eiffel there is none. At first these strict rules may irritate programmers, because they cannot do what they want, or must write more code to reach their goal. But when you need to extend your original design, these rules become a powerful guard against disaster.
For example, in many object-oriented languages you can, with some restrictions, assign directly to an object's field: x.a = 1, where x is an object and a is a field. Anyone trained in modern methodology and object technology knows why this is wrong. And almost everyone will say: "Yes, it's wrong, but in most cases I don't care. Of course I know what I'm doing. These are my objects and my classes; I control all their access interfaces, so don't bother me. Don't force me to write extra code just to encapsulate a field assignment." On the surface they are right. In the short term and at small scale there is no problem. Who cares?
But the typical fate of such a small problem as direct assignment is this: you end up with tens of thousands, hundreds of thousands, even millions of lines of code and thousands of classes; many people work on the project; it goes through many changes and revisions; and ports to different platforms pull things in completely different directions. A problem like direct assignment to object fields can then wreck the entire architecture. In the end, a small problem becomes a big headache.
A small problem like this is easy to fix at the source. You simply forbid direct access to object fields, as Eiffel does, and require instead that the assignment be encapsulated in a simple procedure. Such a procedure may, of course, carry a contract. This is an easy way to eliminate the problem early; if you don't, it grows and grows until it kills you.
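The idea Meyer describes can be sketched in a few lines. The following Python example is illustrative only (Eiffel enforces this at the language level; the Account class, its field, and its contracts are invented for the example): the field is hidden, and every write goes through a procedure carrying a simple contract checked with assertions.

```python
class Account:
    """Eiffel-style encapsulation sketched in Python: no client code
    writes the field directly; all changes go through a procedure
    whose contract is expressed as assertions."""

    def __init__(self):
        self._balance = 0  # hidden field; no x.a = 1 from outside

    @property
    def balance(self):
        # Read access is exposed through a query, not the raw field.
        return self._balance

    def deposit(self, amount):
        # Precondition: the caller must supply a positive amount.
        assert amount > 0, "deposit amount must be positive"
        old = self._balance
        self._balance += amount
        # Postcondition: the implementation must have added exactly amount.
        assert self._balance == old + amount


acct = Account()
acct.deposit(100)
print(acct.balance)  # 100
```

If the architecture later changes, say the balance must be logged, validated, or computed differently, only the procedure changes; the thousands of call sites that would otherwise contain raw assignments are unaffected.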
Another example is overloading: defining the same name within one class to mean different operations. I know this issue is controversial. People have been brainwashed into believing that overloading is a very good thing, but it is in essence dangerous, not good. Like direct assignment to object fields, overloading is now supported in many languages, and in the libraries people write it runs rampant, with a single name standing for many different operations. On the surface it is convenient in the short term, but in the long term it exacts an ever-growing cost in complexity, because you must work out the exact meaning of each use in each context. In fact, the dynamic binding mechanism of object technology (including Eiffel, of course) provides all the flexibility people want from overloading, and more.
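The contrast Meyer draws can be illustrated with a small sketch (Python here for brevity; the Shape classes are invented for the example). Instead of overloading one name with unrelated meanings inside a single class, dynamic binding gives one name a single abstract meaning, and each class supplies the version appropriate to its own type.

```python
import math


class Shape:
    """One name, one meaning: 'area' always means the area of this shape."""

    def area(self):
        raise NotImplementedError


class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return math.pi * self.radius ** 2


class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2


# The call s.area() is resolved by the object's actual type at run time
# (dynamic binding), not by juggling differently overloaded signatures.
shapes = [Circle(1.0), Square(2.0)]
print([round(s.area(), 2) for s in shapes])  # [3.14, 4.0]
```

The reader of `s.area()` never has to disambiguate among several unrelated operations sharing a name; the name carries one contract, and the type of the object selects the implementation.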
There are many such examples showing that with care in programming language design you can come much closer to the goal of controlling complexity. This may also be why people distrust what we promise for Eiffel. Eiffel is very simple to use, and the examples we publish are also very simple, not because the problems themselves are simple, but because our solutions are simple. Eiffel is in fact a tool that strips away man-made complexity and uncovers the inherent simplicity that is so often hidden beneath it. But we find that people sometimes do not believe this; they do not believe in simple solutions. They suspect we are hiding something, that the language and the methods cannot really solve the actual problems of software development, because, they believe, it ought to be harder. There is a hateful cliché: "If it seems too good to be true, it can't be true." That may be one of the stupidest statements humans have ever made. Many people argue this way, but applied to Eiffel it is simply wrong. If you use the right tools on a problem, you can remove the unnecessary complexity and find the hidden, essential simplicity.
Every day, anyone building a large system faces one central issue: how to remove the unnecessary, man-made, self-imposed parts of the complexity, and how to keep the remaining, unavoidable complexity under control. Inheritance, contracts, genericity, and object-oriented development in general, and Eiffel in particular, can all play an important role here.
As I understand it, you are talking about two things: removing unnecessary complexity, and handling the complexity that remains. I can see how tools such as object technology and languages help us handle unavoidable complexity. But how does a tool help us remove self-imposed complexity? And what do you mean by "finding the simplicity behind the complexity"?
Look at some of today's operating systems. The complexity of Windows, for example, comes under severe criticism, but I don't think its competitors do much better. There is no need to attack any particular vendor; it is simply obvious that some systems are too chaotic. If we reviewed some of the problems people have pointed out, we could indeed design better architectures. On the other hand, much of an operating system's complexity is unavoidable. Windows XP, Red Hat Linux, and Solaris must all handle Unicode and provide user interfaces in a great many languages. Windows in particular must be compatible with countless devices from a huge number of vendors. This is not the self-imposed complexity that academia criticizes; in the real world we must face the demands the outside world imposes on us. So complexity falls into two categories: the unavoidable kind, which we must find ways to handle through better organization, information hiding, modularity, and similar means; and the man-made kind, which we should solve by eliminating it.