It is hard to predict what human life will be like in a hundred years. Only a few things can be said with certainty: cars will fly at low altitude, zoning regulations will be relaxed so that buildings can rise hundreds of stories, it will be dark most of the time on the street, and women will all be trained in self-defense. This article takes up just one detail of that picture: what languages will people use to write software a hundred years from now?
Why is this question worth thinking about? Not because we will eventually use those languages, but because, if we are lucky, we can start using them now.
I think programming languages, like biological species, form an evolutionary tree, and many branches end up as evolutionary dead ends. This has already happened.
Cobol, for example, was once enormously popular, yet no current language descends from its ideas. It is an evolutionary dead end: a Neanderthal language. I predict a similar fate for Java. People write to me saying, "How can you say Java won't be a success? It already is a success." That depends on your measure of success. If the measure is the number of books published about it, or the number of college students who believe that learning Java will get them a job, then Java is indeed a success. When I say Java will fail, I mean that, like Cobol, it will prove to be an evolutionary dead end.
This is just my guess, and it may well be wrong. The point here is not to put Java down, but to propose that programming languages have an evolutionary tree, and to get readers asking where on that tree a given language sits. The reason to ask is not so that later generations can marvel at how prescient we were, but so that we can find the main trunk of evolution now, because the languages closest to the trunk are the most useful ones to program in today.
At any given time, choosing a point on the main trunk is probably the best move. It would be unfortunate to choose wrongly and end up a Neanderthal: your Cro-Magnon rivals would keep coming over to beat you up and steal your food.
This is why I want to identify the programming language of a hundred years from now: I don't want to bet on the wrong horse.
The evolution of programming languages differs from the evolution of species in that branches of different lineages can converge. The Fortran branch, for example, seems to be merging with the descendants of Algol. In theory different biological species could converge too, but it is so unlikely that it has probably never happened.
One reason languages can converge is that their space of possibilities is smaller. Another is that their mutations are not random: language designers deliberately borrow design ideas from other languages.
Recognizing the evolutionary paths of programming languages is especially useful for language designers, because they can steer along them. Seeing the main trunk of evolution then helps not only in identifying good existing languages, but also as a guide in designing new ones.
Any programming language can be divided into two parts: a set of fundamental operators that play the role of axioms, and everything else, which in principle can be expressed in terms of those fundamental operators.
In my opinion, the fundamental operators are the most important factor in a language's long-term survival; the rest is negotiable. It's like buying a house: consider the location first. Other problems can be fixed later, but the location cannot be changed.
Choosing the axioms carefully is not enough; their number must also be kept under control. Mathematicians have always felt that the fewer the axioms the better, and I think they are onto something.
Scrutinizing the core of a language and asking which parts could be discarded is, at the very least, a useful exercise. In my long career I have found that redundant code breeds more redundant code, and not only in software: being a lazy person, I have found the proposition equally true of the clutter under the bed and in the corners of the room. One piece of garbage generates more garbage.
My guess is that the languages with the smallest, cleanest cores will sit on the main trunk of evolution. The smaller and cleaner a language's core, the more durable the language.
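To make the "axioms versus everything else" split concrete, here is a small sketch in Python (a Church-encoding exercise, not a practical design): with only function definition and application as primitives, pairs and lists can be derived rather than built in.

```python
# With just functions as "axioms", a pair is a closure that remembers
# two values and hands back whichever one you ask for.

def cons(a, b):
    return lambda pick: a if pick else b

def car(p):
    return p(True)

def cdr(p):
    return p(False)

NIL = None  # empty-list marker, an assumption of this sketch

def to_list(p):
    """Convert a cons chain into a native Python list, for inspection."""
    out = []
    while p is not NIL:
        out.append(car(p))
        p = cdr(p)
    return out

xs = cons(1, cons(2, cons(3, NIL)))
print(to_list(xs))  # [1, 2, 3]
```

Nobody would ship lists implemented this way, which is exactly the essay's point: what belongs in the core is a question of semantics, and efficiency is the compiler's problem.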
Of course, it is a big assumption that people will even be programming in a hundred years. Perhaps by then humans won't program at all: they will simply tell computers what they want, and the computers will do it.
So far, however, computer intelligence has not made that much progress. I'd guess that in a hundred years people will still direct computers with programs we would recognize. There may be problems we solve with programming today that won't require it by then, but I think there will still be plenty of programming of the kind we do now.
You may think that only the self-important would presume to predict technology a hundred years out. But remember that software development already has fifty years of history behind it, and over those fifty years programming languages have evolved very slowly. Looking a hundred years ahead is therefore not an empty exercise.
Languages evolve slowly because they are not really technologies. A language is a notation; a program is a formal description, written down according to strict rules, of how a computer should solve your problem. So the rate at which languages evolve is more like the rate at which mathematical notation evolves than like that of genuine technologies such as transportation or communication. Mathematical notation changes by slow drift, not by rapid technological leaps.
Whatever computers look like in a hundred years, we can safely predict they will be much faster. If Moore's Law keeps holding, they will be 74 quintillion (73,786,976,294,838,206,464) times faster than today. That is hard to imagine, and indeed the more realistic prediction is not that speeds will grow that much but that Moore's Law will finally give out: anything that doubles every eighteen months is likely to hit some limit eventually. Still, computers will undoubtedly be far faster than today's. Even if they end up a mere million times faster, that would substantially change the ground rules of programming. Among other things, languages now considered slow, meaning languages that do not compile to very efficient code, will have much more room to flourish.
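As a sanity check on the arithmetic (assuming one doubling every 18 months, which gives 66 complete doublings in 100 years):

```python
# Moore's-law arithmetic: 100 years = 1200 months; at one doubling per
# 18 months that is 66 complete doublings (1200 // 18 == 66).
doublings = (100 * 12) // 18
speedup = 2 ** doublings
print(doublings)  # 66
print(speedup)    # 73786976294838206464, i.e. roughly 7.4 x 10^19
```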
Even then there will be applications that demand speed. Some of the problems we want computers to solve are created by computers themselves: the rate at which a computer must process video, for example, depends on the rate at which another computer can generate it. And some problems have an inherently unlimited appetite for processing power: rendering, encryption and decryption, simulations.
With some applications caring little about efficiency and others consuming every cycle the hardware can provide, faster computers mean that languages will have to cover an ever wider range of efficiency requirements. We have already seen this happen: by the standards of a few decades ago, some popular applications written in new languages are shockingly wasteful of hardware resources.
This is not peculiar to programming languages; it is a general historical trend. As technologies improve, each generation can afford to do things the previous generation would have considered wasteful. People thirty years ago would be astonished at how casually we make long-distance phone calls today. People a hundred years ago would be even more astonished that a package could travel from Boston to New York via Memphis and still arrive within a day.
I can already tell you what will happen to all the extra performance future hardware delivers: most of it will be squandered.
I learned to program when computing power was scarce. I remember the TRS-80 microcomputer, with its 4K of memory; to fit a Basic program into it, I had to strip every space out of the source code. The thought of monstrously inefficient software, endlessly repeating some fatuous operation and burning up all the hardware's cycles, felt intolerable to me. But that reaction was wrong. I was like someone raised in poverty who cannot bear to spend money even when it matters, such as going to the doctor.
Some waste really is disgusting. Some people, for example, hate SUVs, and even running them on renewable clean energy would not change that opinion, because SUVs grew out of a contemptible idea: how to make a minivan look more masculine. But not all waste is bad. Now that the telecommunications infrastructure exists, metering long-distance calls starts to seem niggling. If you have the resources, it is more elegant to treat long-distance and local calls as the same thing, and everything gets simpler.
Waste divides into the good kind and the bad kind. The good kind, the one that interests me, is spending more to get a simpler design. So the question becomes: how can we best "waste" the performance of vastly more powerful new hardware to our advantage?
The desire for speed is deeply rooted in human nature. Looking at a computer, one cannot help wanting programs to run as fast as possible; it takes a real effort to suppress the urge. In designing programming languages, we should consciously ask ourselves when we can give up some performance in exchange for even a small gain in convenience.
Many data structures exist only because computers are slow. For example, many languages today have both strings and lists, even though semantically a string is just a subset of lists: a list whose elements are characters. So why is a string a separate data type? You could do without it. Strings exist purely for efficiency, and cluttering the semantics of a language for the sake of speed is a bad trade. Strings in programming languages look like a case of premature optimization.
If we think of a language's core as a set of axioms, then adding extra axioms purely for efficiency, axioms that add no expressive power, is surely a bad thing. Yes, efficiency matters, but changing the language's design is the wrong way to get it.
The right way is to separate the language's semantics from its implementation. Semantically there is no need for both lists and strings; lists alone suffice. The compiler's job is to optimize well enough that, when necessary, it represents a string internally as a contiguous run of bytes.
For most programs speed is not the critical factor, so there is usually no need to bother with that kind of hardware-level micromanagement. This becomes ever more obviously true as computers get faster.
Saying less about implementation when designing a language also makes programs more flexible. Changes in a language's specification are both inevitable and reasonable; if the compiler does the work, software written to an earlier specification keeps running, and that is what gives you flexibility.
The word "essay" comes from the French verb essayer, which means "to try." An essay, in the original sense, is something you write in an attempt to figure something out. The same is true of software: I think some of the best programs were essays, in the sense that their authors didn't know, when they started writing, exactly what the result would be.
Lisp hackers have long understood the value of flexible data structures. When we write the first version of a program, we tend to represent everything as lists. These first versions can be so shockingly inefficient that it takes a conscious effort not to optimize them, just as it takes a conscious effort not to think about where the steak on your plate came from, or at least it does for me.
What programmers a hundred years from now will want most is a language that lets them throw together a first version of a program with the least possible effort, however inefficient it may be (at least by our present standards). What they will say they want is a language that is easy to program in.
Inefficient software is not ugly software. What is ugly is a language that makes programmers do needless work. Wasting programmer time, not machine time, is the real inefficiency. This will become ever more obvious as computers get faster.
I think getting rid of the string type is already acceptable. Arc has done it, and it seems fine: some operations that would be awkward to describe as regular expressions become easy to express as recursive functions.
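To illustrate the idea (sketched here in Python, treating a string as a plain list of characters rather than a separate type), here is a task a recursive function handles that no regular expression can: checking that parentheses are balanced, a classic non-regular language.

```python
# A "string" as a list of characters; a recursive function tracks nesting
# depth, something regular expressions cannot do in general.

def balanced(chars, depth=0):
    if depth < 0:                 # a ')' closed more than was opened
        return False
    if not chars:
        return depth == 0         # every '(' must have been closed
    step = {"(": 1, ")": -1}.get(chars[0], 0)
    return balanced(chars[1:], depth + step)

print(balanced(list("(a(b)c)")))  # True
print(balanced(list("(a(b)c")))   # False
```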
How far can this flattening of data structures go? I can imagine possibilities that shock even me. Will arrays disappear, for instance? After all, an array is just a subset of the hash table: one whose keys are vectors of integers. Will hash tables themselves be replaced by lists? And there are prospects more shocking still. Logically, you don't need a separate representation for integers, because they too can be viewed as lists: the integer n could be represented as a list of n elements. You could do math this way; it's just unbearably inefficient.
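A sketch of integers-as-lists (a unary representation, written in Python): the arithmetic works, but every operation's cost grows with the magnitude of the numbers, which is exactly the passage's point.

```python
# The integer n as a list of n placeholder elements. Addition is list
# concatenation; multiplication makes |a| copies of b.

def to_unary(n):
    return [()] * n

def from_unary(xs):
    return len(xs)

def add(a, b):
    return a + b                          # concatenation = addition

def mul(a, b):
    return [x for _ in a for x in b]      # |a| * |b| elements

three, four = to_unary(3), to_unary(4)
print(from_unary(add(three, four)))  # 7
print(from_unary(mul(three, four)))  # 12
```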
Will programming languages go so far as to discard integers as a basic data type? I ask this not so much as a serious question but as a way of stretching our thinking about the future: a hypothetical case of an irresistible force meeting an immovable object, here an unimaginably inefficient language meeting unimaginably powerful hardware. I see nothing wrong, in principle, with dropping the integer type. The future is long. If we want to reduce the number of axioms in a language's core, it pays to look far ahead and consider what happens as t tends to infinity. A hundred years is a good yardstick: if an idea still seems unbearable in a hundred years, maybe it will still be unbearable in a thousand.
To be clear, I don't mean that all integer computations would actually be carried out with lists, only that the language's core, prior to any compiler optimization, would define them that way. In practice, any program doing serious math would probably represent numbers in binary, but that would be a compiler optimization, not part of the core semantics of the language.
Another way to burn hardware cycles is to interpose many software layers between the application and the hardware. This too is a trend we already see: many recent languages compile to bytecode. Bill Woods once told me that, as a rule of thumb, each layer of interpretation costs a factor of ten in speed. But the extra layers buy flexibility.
The very first version of Arc was an extreme case of this: it had many layers and ran very slowly, but the layers bought corresponding benefits. Arc was a classic metacircular interpreter, written on top of Common Lisp, with a strong family resemblance to the eval function John McCarthy defined in his classic paper. The Arc interpreter was only a few hundred lines of code, so it was easy to understand and to change. The Common Lisp we used, CLisp, itself runs on top of a bytecode interpreter, so we had two levels of interpretation. The topmost layer was astonishingly inefficient, yet the language was usable. Barely usable, I admit, but usable.
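This is not Arc itself, just the shape of the idea: a tiny expression interpreter written in a host language, the way early Arc was a short interpreter written on top of Common Lisp. The whole "language" fits on one screen, which is what makes a layered design so easy to understand and change, at the cost of one more level of interpretation.

```python
# Programs are nested tuples; strings are variable references, other
# non-tuples are literals. A few special forms, everything else is
# function application.

def evaluate(expr, env):
    if isinstance(expr, str):              # variable reference
        return env[expr]
    if not isinstance(expr, tuple):        # literal
        return expr
    op = expr[0]
    if op == "if":                         # ("if", cond, then, alt)
        _, cond, then, alt = expr
        return evaluate(then if evaluate(cond, env) else alt, env)
    if op == "lambda":                     # ("lambda", ("x", ...), body)
        _, params, body = expr
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    fn = evaluate(op, env)                 # application
    return fn(*(evaluate(a, env) for a in expr[1:]))

env = {
    "+": lambda a, b: a + b,
    "*": lambda a, b: a * b,
    "<": lambda a, b: a < b,
}
square = evaluate(("lambda", ("x",), ("*", "x", "x")), env)
print(square(7))                                    # 49
print(evaluate(("if", ("<", 1, 2), 10, 20), env))   # 10
```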
Applications themselves can be built in layers too. Bottom-up programming means building a program in several layers, each of which serves as a language for the layer above it. This approach tends to yield smaller, more flexible programs, and it is also the best route to that holy grail of software, reusability. A language is by definition reusable: the more of your application you can push down into this kind of layered form, with a programming language's help, the more of it becomes reusable.
The idea of reusability got attached somehow to the rise of object-oriented programming in the 1980s, and no amount of evidence seems able to pry the two apart. Some software written in an object-oriented style is indeed reusable, but what makes it reusable is not its object-orientedness; it is its bottom-up style. Consider function libraries: they are reusable because they are language, not because they are written in an object-oriented or any other particular style.
Incidentally, I don't predict the demise of object-oriented programming. In my view, outside a few specific domains it offers little to good programmers, but it is irresistible to large organizations. Object-oriented programming offers a sustainable way to write spaghetti code: it lets you accrete a program as a series of patches, and large companies always tend to develop software this way. I expect this to be as true in a hundred years as it is today.
Since we're talking about the future, we had better talk about parallel computation, because that is where parallelism seems to live. Whatever happens, parallel computation seems to be part of the future.
But will that future ever arrive? People have been saying parallel computation is imminent for at least twenty years, and so far it has not affected programming practice much. Or has it? Chip designers already have to think about it, and so do people writing system software for multi-CPU computers.
The real question is how far up the ladder of abstraction parallelism will creep. In a hundred years, will it affect even application programmers? Or will it be something only compiler writers think about, invisible in the source code of applications?
One possibility is that most opportunities for parallelism will simply be wasted. This fits my general prediction that most of the extra hardware performance we get will be squandered, but parallelism is a special case. With enormous improvements in raw speed, my guess is that if you explicitly ask for parallel computation you will certainly be able to get it, but that you will rarely bother. Which implies that, apart from a few special applications, the parallelism of a hundred years from now will not be massive parallelism. For ordinary programmers, I expect it will look more like forking off processes that run in the background.
And this is something done late in programming, as an optimization, like developing a special-purpose data structure to replace a generic one. The first version of a program will usually ignore whatever benefits parallelism offers, just as it ignores the benefits of special-purpose data structures.
Except in certain application areas, parallelism won't pervade the programs written a hundred years from now. For application software to use it heavily would be premature optimization.
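What "forking off work into the background" might look like to an ordinary programmer can be sketched with Python's standard library (this is an illustration of the style, not a prediction of a specific API; threads are used here for portability, and ProcessPoolExecutor offers the same interface for real OS processes):

```python
# Low-ceremony background parallelism: hand work to a pool, keep going,
# collect the answers when you actually need them.
from concurrent.futures import ThreadPoolExecutor

def slow_square(n):
    return n * n          # stand-in for genuinely slow work

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(slow_square, n) for n in range(5)]
    # ...the main program is free to do other work here...
    results = [f.result() for f in futures]

print(results)  # [0, 1, 4, 9, 16]
```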
How many languages will there be in a hundred years? There seem to be a huge number of new languages lately. One reason is that faster hardware lets programmers make different tradeoffs between speed and convenience depending on the application. If that is a real trend, the hardware of a hundred years from now should only amplify it.
And yet there may be only a few languages in common use a hundred years from now. Partly this is optimism: I believe that if you do a really outstanding job, you can make a language that is ideal for writing the slow first version of a program, and that with some compiler hints can later be made fast. Since I am an optimist, I will make a prediction. Between languages tuned for maximal machine efficiency and languages that run only as fast as can be tolerated, there is a huge gap; I predict that a hundred years from now, programming languages will exist at every point along that spectrum.
Because that gap is widening, profilers will become increasingly important. Performance analysis gets little attention now; many people still believe that the way to get fast applications is to write compilers that generate faster code. As the gap between code efficiency and machine efficiency grows, it will become increasingly clear that the way to make application software run fast is to have a good profiler guiding the program's development.
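What "guided by a profiler" looks like in today's terms can be shown with Python's built-in cProfile (a minimal sketch; the function names are invented for the example):

```python
# Measure instead of guessing: let the profiler say which function
# actually deserves optimization effort.
import cProfile
import io
import pstats

def hot(n):
    return sum(i * i for i in range(n))   # the real time sink

def cold(n):
    return n + 1                          # negligible

prof = cProfile.Profile()
prof.enable()
for _ in range(200):
    hot(10_000)
    cold(10_000)
prof.disable()

stream = io.StringIO()
pstats.Stats(prof, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)  # `hot` dominates the cumulative-time ranking
```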
When I said there may be only a few languages in common use, I was not counting "little languages" in that number. Embedded little languages are a fine idea, and I expect them to flourish. But I suspect they will be designed as rather thin layers, so that users can see at a glance the general-purpose language underneath, which shortens the learning curve and lowers the cost.
Who will design these languages of the future? One of the most exciting trends of the past ten years has been the rise of open-source languages such as Perl, Python, and Ruby. Language design is being taken over by hackers. Whether that is good or bad remains to be seen, but the momentum is encouraging. Perl, for example, contains some astonishing innovations. It also contains some bad ideas, which is normal for a language so full of initiative and bold exploration. At its current rate of change, God knows what Perl will have become in a hundred years.
There is a saying that those who can't do, teach. In the field of language design it isn't true: some of the best hackers I know are teachers. But teachers do have many other demands on their time, and research imposes constraints on hackers. In any academic field there are questions one may work on and questions one may not, and unfortunately the distinction usually rests on how sophisticated the work sounds when written up in a paper, rather than on how important it is to the software industry. The extreme case is probably literature: nothing literature researchers produce has the slightest effect on the people who create literature. The sciences fare a little better, but the overlap between the questions researchers are allowed to work on and the questions that would help in designing good languages is depressingly small. (Olin Shivers has grumbled eloquently about this.) For example, types seem to be an inexhaustible source of research papers, even though static typing appears to rule out true macros, and in my view a language without macros is not worth using.
New languages now arise more often as open-source projects than as research projects; that is one trend in language development. Another is that the new languages are being designed by the people who will use them, application writers, rather than by compiler writers. Both seem like good trends, and I expect them to continue.
Physics a hundred years from now is essentially impossible to predict. But programming languages are different: it seems possible, at least in theory, to design today a new language that would still attract users a hundred years from now.
One way to design a language is simply to write down the programs you would like to be able to write, regardless of whether any compiler or hardware exists to support them. This amounts to assuming infinite resources at your disposal, and such an assumption seems equally sensible today and a hundred years from now.
What programs should you write? Whatever requires the least effort from you. But be careful: your sense of least effort is shaped by the languages you currently use. That influence is everywhere, and it takes great effort to overcome. You might think that for lazy creatures like us, expressing a program with minimal effort would come naturally. In fact, our thoughts may be bounded by an existing language, so that what we produce is merely a simpler form within that language, and the language's grip on our thinking can be astonishing. A new language must be discovered deliberately; you cannot rely on the habits of mind you naturally sink into.
A useful trick is to treat a program's length as an approximation of the effort it takes to write. Length here means not the number of characters but the total size of the program's syntactic elements, essentially the size of its parse tree. It may not be quite true that the shortest program is the one that takes the least effort to write, but it is close enough that you are better off aiming at concise than at sprawling. The right approach to language design, then, is to look at a program and ask: is there any way to write this shorter?
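One way to make the "parse-tree length" measure concrete is with Python's ast module (a sketch; the two snippets compared are invented, equivalent programs):

```python
# Measure program length in syntactic elements, not characters: count
# the nodes of the parse tree and compare two equivalent programs.
import ast

def tree_size(source):
    return sum(1 for _ in ast.walk(ast.parse(source)))

loopy = """
result = []
for x in items:
    if x > 0:
        result.append(x * x)
"""
concise = "result = [x * x for x in items if x > 0]"

print(tree_size(loopy) > tree_size(concise))  # True: fewer nodes, less effort
```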
In practice, how well you can write programs in the hundred-year language today depends on how close your estimate of its core is. You could write a sort function now. But it is hard to predict what libraries the language will need in a hundred years; many will surely target fields that do not yet exist. If SETI@home succeeds, for example, we will need libraries for communicating with aliens. Unless, of course, alien civilization is advanced enough to already exchange information in XML, in which case no new library will be required.
At the other extreme, I think you could design the core of the hundred-year language today. Indeed, some would argue that most of that core was already designed in 1958.
If the hundred-year language were available today, would we want to program in it? One way to answer is to look to the past: if today's languages had been available in 1960, would people then have wanted to use them?
In some respects, no. Today's languages assume hardware that did not exist in 1960. In a language like Python, for example, correct indentation matters when writing code, and in 1960 computers had no displays, only printing terminals, so that would not have gone smoothly. But setting such factors aside (suppose, say, that we only wrote programs on paper), would the programmers of the 1960s have liked programming in today's languages?
I think they would. Some people of limited imagination, steeped in the idioms of the early languages, might have found it impossible. (How do you copy data without pointer arithmetic? How do you implement a flowchart without goto?) But I think the smartest programmers of the day would have had little trouble with most of today's languages, had they been able to get them.
If we had the hundred-year language now, it would at the very least make excellent pseudocode. Would we use it to write software? Since the hundred-year language must generate fast code for some applications, it could presumably generate code efficient enough to run acceptably on our hardware. Compared with users a hundred years hence, we might have to give it more optimization hints, but on balance it should still be a net win.
Now we have two propositions: (1) the hundred-year language could, in theory, be designed today; and (2) if such a language existed, it might well be good to program in today and produce better results. Put these two ideas together and an interesting possibility emerges: why not try to write the hundred-year language now?
When you design a language, it is good to keep that goal in mind. Learning to drive, you are taught to keep the car straight not by lining up the hood with the stripes painted on the road, but by aiming at a point far in the distance. That is right even when the point is only a few meters away. I think we should do the same in designing programming languages.
From Hackers and Painters.