Following Part 1 of my reading notes on *The Programmer's Cry*, here is the next installment. This one is denser: the pros and cons of static and dynamic typing, the contest between strong and weak type systems, design patterns, math for programmers, the importance of compilers, and the conservative-versus-liberal debate. If it is too much to digest at once, save it to read later.
Advantages and disadvantages of static types and dynamic types
Advantages of static types
The main benefits of static types are listed below:
(1) Static types can catch certain type errors before the program ever runs, thanks to the constraints they impose. (The same applies when inserting or updating database records, parsing XML documents, and so on.)
(2) Static types give more opportunities (or at least easier ones) to optimize performance. For example, a fully specified data model makes it easier to build smart database indexes, and the compiler can make better decisions when it has precise type information about variables and expressions.
(3) In languages with rich type systems, such as C++ and Java, you can determine the static types of variables, expressions, operators, and functions just by reading the code.
This advantage is less obvious in type-inferring languages such as ML and Haskell, whose communities clearly consider type tags everywhere a disadvantage. Even so, you can still work out the types from context when reading, which is simply impossible in most dynamic languages.
(4) Static type annotations simplify automated processing of code: document generation, syntax highlighting and indentation, dependency analysis, style checking, and other "let code explain code" tasks. In other words, static type tags make it easier to build compiler-like tools: lexical tools see more explicit grammar elements, and semantic analysis involves less guessing.
(5) By looking only at an API or a database schema (rather than at the code or the table contents), you can roughly grasp its structure and usage.
Do you have anything else to add?
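Point (1) can be made concrete with a small Python sketch (Python here stands in for a dynamic language; the names are made up for illustration). The annotations are advisory, and an external checker such as mypy would flag the bad call before the program ever runs; plain Python discovers the mistake only when the line executes:

```python
def total_price(quantity: int, unit_price: float) -> float:
    # The annotated signature lets a static checker verify every call site.
    return quantity * unit_price

# A static type system would reject this call at compile time. A dynamic
# language only finds the mistake when this line finally executes:
try:
    total_price("three", 9.99)      # wrong type for quantity
    caught = False
except TypeError:
    caught = True

print(caught)   # True: the error surfaced at runtime, not before
```

Running the checker over the file moves that failure from runtime back to "before the program runs," which is exactly the advantage claimed above.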
The disadvantages of static types are as follows:
(1) They artificially limit your expressiveness.
For example, Java's type system has no operator overloading, multiple inheritance, mix-ins, reference parameters, or first-class functions. Designs that would be perfectly natural with these techniques now have to be contorted to fit the Java type system. Ada, C++, and even static type systems like OCaml's all have this problem. Almost half of all design patterns (not just the GoF ones) exist to distort a natural, intuitive design so it can be plugged into some static type system: square pegs into round holes.
(2) They slow down the development process.
You create a lot of static models up front (top-down design), then change them as requirements change. The type annotations also bloat the source code, making it harder to understand and more expensive to maintain. (This problem is especially serious in Java, because it does not support type aliases.) And, as mentioned above, you spend extra time twisting your design to fit the static type system.
(3) The learning curve is rather steep.
Dynamic languages are easier to learn. Static type systems are picky: you have to spend a lot of time learning how they model things, on top of the syntax rules for the type annotations. Moreover, static type errors (i.e., compiler errors) are hard for beginners to understand, because the program has not even run yet. You cannot debug with printf; you can only tweak the code and pray the compiler is satisfied. That is why C++ is harder to learn than C and Smalltalk, OCaml is harder than Lisp, and the Nice language is harder than Java. Even Perl carries a pile of static complexity (all sorts of odd rules about how and when things may be used) that makes it harder than Ruby and Python. I have never seen a statically typed language that was easy to learn.
(4) They create a false sense of security.
Static type systems really do reduce runtime errors and improve data integrity, so they easily mislead people into thinking that once the program compiles, it is basically fine. It may just be my imagination, but people writing in languages with strong static type systems seem to rely far less on unit tests.
(5) They can lead to a decline in documentation quality.
Many people feel the auto-generated Javadoc is enough even when the code has no comments at all. SourceForge is full of such projects, and even Sun's JDK often has this problem. (For example, Sun often leaves Javadoc comments off static final constants.)
(6) They make it hard to build highly dynamic, reflective systems.
Most statically typed languages (presumably for performance) discard almost all compiler-generated metadata at runtime. Such systems are hard to modify while running (some are not even introspectable): to add a function to a module, say, or a method to a class, there is no option but to recompile, shut the program down, and restart it. This affects not just the development process; the whole design philosophy is shaped by it. You may end up building complex machinery to support dynamic features, and that machinery inevitably gets tangled up with your business code.
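Point (6) is easiest to see from the other side. In a dynamic language, attaching a new method to an already-loaded class is a one-liner, with no recompile or restart; `Greeter` and `shout` below are made-up names for illustration:

```python
class Greeter:
    def __init__(self, name):
        self.name = name

g = Greeter("ada")          # an instance created before the change

# Attach a brand-new method to the class at runtime. Most statically
# typed languages would require a recompile and restart for this.
def shout(self):
    return self.name.upper() + "!"

Greeter.shout = shout

print(g.shout())            # even pre-existing instances see the new method
```

This is exactly the kind of runtime change that, in a static language, forces the compile / shut down / restart cycle described above.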
Advantages and disadvantages of dynamic types:
Just invert the list above and you basically have the pros and cons of dynamic languages. Dynamic languages are more expressive, more flexible in design, easier to learn and use, faster to develop in, and generally more flexible at runtime. On the other hand, dynamic languages cannot report type errors promptly (at least a compiler cannot), performance tuning is harder, automated static analysis is difficult, and the types of variables and expressions are not apparent from reading the code.
Static languages end up adding dynamic features under pressure from their users, while dynamic languages often try to bolt on optional static type systems (or static analysis tools), both to improve performance and to catch errors earlier. Unfortunately, unless a language is designed from the start with optional static types in mind, the retrofit rarely turns out well.
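Python's optional type hints are one example of such a retrofit: the annotations were designed in later (PEP 484), the runtime ignores them entirely, and only external tools act on them. A minimal sketch, with `parse_port` a hypothetical function:

```python
def parse_port(value: str) -> int:
    # Annotations are optional metadata: Python ignores them at runtime,
    # but an external checker (e.g. mypy) can use them to find errors early.
    return int(value)

# Runs fine despite violating the annotation -- the types are advisory:
result = parse_port(8080)    # an int, not a str; no runtime complaint
print(result)                # 8080

# The annotations survive as plain data that a tool can inspect:
print(parse_port.__annotations__["return"] is int)   # True
```

The gap between what the annotation promises and what the runtime enforces is precisely why a bolted-on static layer is weaker than one designed in from the start.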
The contest between strong and weak type systems
The strongly typed camp basically works like this: design according to current requirements first (a draft document is fine), then define the interfaces and the data model. Assume the system will have to withstand huge traffic, so performance must be considered everywhere. Avoid abstractions such as garbage collection and regular expressions. (Note: even Java programmers often try to avoid triggering garbage collection, and they start discussing object pooling as soon as they start writing programs.)
They consider dynamic typing only as a last resort. For example, a CORBA-based team might, in extreme cases, add an XML string parameter to every interface call so they can smuggle data past the rigid type system they chose.
The second camp basically works like this: build a prototype first. As long as you can write code faster than you can write a spec at the same level of detail, you get user feedback sooner. Define reasonable interfaces and a data model for the current requirements, but don't waste too much time on them. Whatever runs wins; do whatever is most convenient. Assume requirements will change constantly, so the first priority everywhere is getting the system up and running as soon as possible. Use abstractions wherever you can (for example, fetch the data fresh each time without worrying about caching; reach for a regex before writing string comparisons), even where it is clearly a sledgehammer for a nut, because what you get back is greater flexibility. Less code usually also means fewer bugs.
They do performance tuning, and freeze interfaces and data definitions, only when forced to. For example, a Perl team might rewrite a few critical core modules in C and create XS bindings. Over time these abstractions harden into established standards, wrapped in data definitions and carefully crafted OO interfaces that can no longer be changed. (Even Perl programmers are often tempted to sacrifice their silver bullets and write OO interfaces for common abstractions.)
What do you think the eventual outcome of adopting each of these strategies will be?
Design Patterns
- But everyone is awake now, aren't they? A design pattern is not a feature. A Factory is not a feature, nor is a Delegate, a Proxy, or a Bridge. They merely provide nice boxes for loosely packing features in. But don't forget that boxes, bags, and dividers take up space themselves, and design patterns are no exception (at least most of the patterns described in the Gang of Four book). What is more tragic is that Interpreter, the only Gang of Four pattern that can actually shrink code, is ignored by the very programmers who would happily have the other patterns tattooed on their bodies.
Dependency injection is another design pattern new to Java; Ruby, Python, Perl, and JavaScript programmers have probably never heard of it, and if they had, they would conclude they don't need it at all. Dependency injection is an impressive, declarative architecture that makes Java more dynamic in certain respects, more like higher-level languages. And you guessed it: dependency injection makes Java code bigger. Getting bigger is unavoidable in Java; growth is a part of its life. Java is like Tetris, except the gaps between the blocks never get filled, so the pile can only grow higher.
Me: Java programmers now believe dependency injection is important because it is so significant in frameworks such as Spring, where dependency injection lets you configure classes and their relationships in a file, which of course makes Java more flexible and powerful.
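Stripped of framework machinery, the idea can be sketched in a few lines: the object does not construct its own dependency, the wiring code hands one in, and Spring-style DI is essentially this wiring driven from a configuration file. All class names below are hypothetical:

```python
class SmtpMailer:
    def send(self, to, body):
        return f"smtp:{to}:{body}"

class FakeMailer:
    # Swapped in for tests without touching Notifier at all.
    def send(self, to, body):
        return f"fake:{to}:{body}"

class Notifier:
    def __init__(self, mailer):
        # Constructor injection: the dependency is handed in, not built here.
        self.mailer = mailer

    def notify(self, user):
        return self.mailer.send(user, "hello")

# The "configuration" lives in one place; the classes stay decoupled:
print(Notifier(SmtpMailer()).notify("bob"))   # smtp:bob:hello
print(Notifier(FakeMailer()).notify("bob"))   # fake:bob:hello
```

In a dynamic language this is so natural it never earned a name, which is the point the quoted passage is making about Ruby and Python programmers.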
What are the math branches that programmers need to know?
- In real life, the math computer scientists use hardly overlaps with the list above. First, most of the math taught in primary and secondary school is continuous, that is, math over the real numbers. For computer scientists, 95% of the interesting math is discrete, that is, math over the integers.
Me: The mathematical problems programmers have to solve are generally discrete math; the most useful branches are combinatorics and probability and statistics.
- Besides probability and discrete math, other branches of mathematics are also helpful to programmers. Unfortunately, school won't teach them to you unless you minor in math. They include the following:
(1) Statistics. My discrete math book touches on it a little, but statistics is a complete discipline in its own right, and one so important it needs no further introduction.
(2) Algebra and linear algebra (e.g., matrices). Linear algebra should be taught right after algebra. It isn't very hard, and it is useful in many areas, such as machine learning.
(3) Mathematical logic.
(4) Information theory and Kolmogorov complexity. Information theory is (roughly speaking) about data compression, and Kolmogorov complexity is (also roughly speaking) about the complexity of algorithms (e.g., the minimum space something needs, how long it takes, how elegant the program or data structure can be). They are fun, interesting, practical subjects.
There are other branches, of course, and some of the disciplines overlap. But the point is this: the math that is useful to you is very different from the math school thinks is useful.
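To make the information-theory item concrete, here is a short sketch (my addition, not from the book) that computes the Shannon entropy of a string's character distribution. Entropy is the lower bound, in bits per symbol, on how compactly the data can be encoded, which is why "information theory is mainly about data compression":

```python
import math
from collections import Counter

def entropy_bits(text):
    # Shannon entropy, in bits per symbol, of the character distribution.
    counts = Counter(text)
    n = len(text)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

# One repeated symbol carries no information; a uniform choice among
# four symbols carries exactly 2 bits per symbol.
print(entropy_bits("aaaa"))   # 0.0
print(entropy_bits("abcd"))   # 2.0
```

A compressor cannot beat this bound on average, which gives a quick sanity check on any "miracle" compression claim.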
- The essence of calculus is rates of continuous change, areas under curves, and volumes of solids. Very useful stuff, but programmers usually don't need to memorize the piles of tedious procedures. Grasping the general concepts and techniques is enough; it is not too late to look up the details when you actually need them.
Compilers, do you understand them?
- I have a hiring trick. When searching for a good generalist software engineer, a résumé is usually full of keywords that leave you cold, but "compilers" is the one word that catches my interest.
Me: The author strongly urges programmers to learn compiler principles. Do you still remember yours?
- A compiler receives a stream of symbols, analyzes its structure according to predefined rules, and converts it into another stream of symbols. Sounds very general, doesn't it? It is. Can a picture be treated as a symbol stream? Of course: it can be a stream of pixels, row by row, where each pixel is a number and each number is a symbol. A compiler can certainly transform pictures. Can English be treated as a symbol stream? Of course. The rules may be complicated, but natural language processing can be seen as a fancy kind of compilation.
- The first big stage of compilation is parsing, which turns the input into a tree. It comprises preprocessing, lexical analysis (also called tokenization), and then the steps of syntax analysis and intermediate code generation. Lexical analysis is usually done with regular expressions. Syntax analysis is done according to a grammar: you can use recursive descent (the most common), a parser generator (more common for small languages), or fancier algorithms that run correspondingly slower. Either way, the usual result is a parse tree.
The second big stage is type checking. This camp is a group of fanatical academics (and their organizations and graduate students) confident they can write very clever programs that figure out what your program is trying to do and point it out when you get it wrong. Oddly, they don't consider themselves AI researchers; after all, the AI community has (wisely) given up on certainty.
The third big stage is code generation, and the people who do it are usually the marginalized ones. As long as you understand recursion well enough (and know your ancestors were not Adam and Eve), code generation is fairly straightforward. It should be said here that optimization is the art of generating slightly wrong code so that the vast majority of users won't notice the problem. Wait, sorry, that's Amazon. Optimization is the art of generating "correct" code out of the junk code your expensive rookie programmers wrote.
Conservatives and liberals, which faction do you belong to?
- After all, the adjective "conservative" is synonymous with caution and risk aversion. Financial conservatism is often (and obviously) associated with age and wealth. Companies grow more conservative over time because they have survived lawsuits, technical failures, public crises, financial storms, and other disasters. Even the fable of the ant and the grasshopper tells us winter will come, so store up food.
Essentially, conservatism is risk management.
Likewise, liberal views are often associated with youth, idealism, and naivety. Among companies, startups tend to be the typical liberals, partly because they exist (to some extent) to change the world (and liberalism means change), and partly because they must go all out to hit the goals their investors set, so trading away a little software safety makes sense.
Me: Conservatives try to fix every bug and avoid every error; they don't learn new syntax; they rely on the compiler's safety checks; data must follow predefined formats; public interfaces must be strictly modeled; production systems never allow risky backdoors; anything with security doubts doesn't go live; fast is better than slow, so they pay close attention to performance. Liberals are the opposite.
Indescribably liberal: assembly language
Extremely liberal: Perl, Ruby, PHP, shell scripts
Very liberal: JavaScript, VB, Lua
Liberal: Python, Common Lisp, Smalltalk/Squeak
Moderately liberal: C, Objective-C, Scheme
Moderately conservative: C++, Java, C#, D, Go
Conservative: Clojure, Erlang, Pascal
Very conservative: Scala, Ada, OCaml, Eiffel
Extremely conservative: Haskell, SML
(1) Facebook is extremely liberal. They mainly use C++ and PHP, and their data lives in memcached: just key-value pairs, no database schema. They export the data to a back-end Hive data warehouse and use Hadoop for offline analysis. Every two weeks or so they still hold an all-night hackathon; most of their programmers are single young men (at least when I last visited), and the stock valuation is still high (though it didn't look so good the last time I checked the price). As a company, Facebook is cohesive and strong, intensely focused on programmers' ability to push new features to the site, with little bureaucracy. For a company of that size, with that many users, this is invaluable. Conservatives no doubt despise them, but Facebook proves that whatever worldview its programmers hold, a united team can solve a great many problems.
(2) Amazon is liberal.
(3) Google is conservative. It was somewhat liberal at first, then grew more and more conservative. Only in the early days was the software liberal; back then the search engine was written in Python. As the company grew, they quickly turned to software conservatism, and it was entirely engineer-led. They have written many warnings about the dangers of using too many languages, and for the few permitted languages there are strict style guides that restrict the use of risky or "hard to read" language features.
(4) Microsoft is hard to pin down, but conservative.
Copyright notice: this article is the blogger's original work; do not reproduce without the blogger's permission.
*The Programmer's Cry* Reading Notes (final part)