Briefly
I think encapsulation is the most fundamental concept in object-oriented thinking. In essence, it means bundling a set of related functions together with the data they operate on: the functions serve as the operational interface, and the internal variables serve as the raw material. The programmer outside only works through that interface and never sees the details of execution. The textbook examples are the car and the light bulb: you don't need to understand the engine of each car to step on the accelerator and go, and you don't need to know what kind of bulb you have for the switch to light it up. We all intuitively feel this is a great thing, right?
But sometimes something feels wrong with object-oriented languages. I have a vague sense that encapsulation may not be as good as we intuitively think, which is to say that object orientation, popular as it has been for so many years, is not as good as our intuition suggests.
1. Is it really reasonable to put data structures and functions together?
A function does things: it has inputs, execution logic, and outputs. A data structure expresses information, serving either as input or as output.

The two are essentially different things, yet object-oriented thinking puts them together, confining each function to a particular scope. Operations do get neatly categorized this way, but the categorization is based on "sphere of action". That works in the real world; in the world of programs, it is sometimes inappropriate.

There are several reasons why it is inappropriate:
In parallel computing, because the execution part and the data part are bound together, the scheme constrains the achievable degree of parallelism. To get better parallelism, engineers in industry found another idea, functional programming, which treats functions as data so that the order of execution stays correct. But don't you think that simply separating the data representation from the execution part and forming a pipeline is a very convenient way to raise the level of parallelism?
Let me give an example. When data and functions are not separated, a single task A is processed like this:

A.F1() -> A.F2() -> A.F3(), finally yielding the processed A

Now put this in a concurrent environment and assume many such tasks arrive at the same time:

- A.F1() -> A.F2() -> A.F3(), finally yielding the processed A
- B.F1() -> B.F2() -> B.F3(), finally yielding the processed B
- C.F1() -> C.F2() -> C.F3(), finally yielding the processed C
- D.F1() -> D.F2() -> D.F3(), finally yielding the processed D
- E.F1() -> E.F2() -> E.F3(), finally yielding the processed E
- F.F1() -> F.F2() -> F.F3(), finally yielding the processed F
- ...

Assume the degree of concurrency is 3. Completing a batch of tasks like the above then follows this schedule:
| Time | 1   | 2   | 3   | 4   | 5   | 6   | 7   | 8   | 9   | 10  | 11  | 12  |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| A    | A.1 | A.2 | A.3 |     |     |     |     |     |     |     |     |     |
| B    | B.1 | B.2 | B.3 |     |     |     |     |     |     |     |     |     |
| C    | C.1 | C.2 | C.3 |     |     |     |     |     |     |     |     |     |
| D    |     |     |     | D.1 | D.2 | D.3 |     |     |     |     |     |     |
| E    |     |     |     | E.1 | E.2 | E.3 |     |     |     |     |     |     |
| F    |     |     |     | F.1 | F.2 | F.3 |     |     |     |     |     |     |
| G    |     |     |     |     |     |     | G.1 | G.2 | G.3 |     |     |     |
| H    |     |     |     |     |     |     | H.1 | H.2 | H.3 |     |     |     |
| I    |     |     |     |     |     |     | I.1 | I.2 | I.3 |     |     |     |
| J    |     |     |     |     |     |     |     |     |     | J.1 | J.2 | J.3 |
| K    |     |     |     |     |     |     |     |     |     | K.1 | K.2 | K.3 |
| L    |     |     |     |     |     |     |     |     |     | L.1 | L.2 | L.3 |
When data and functions are separated, with the same concurrency of 3, you can form a pipeline. Notice how the throughput suddenly goes up?
| Time | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|------|---|---|---|---|---|---|---|---|---|----|----|----|
| F1() | A | B | C | D | E | F | G | H | I | J  | K  | L  |
| F2() | Z | A | B | C | D | E | F | G | H | I  | J  | K  |
| F3() | Y | Z | A | B | C | D | E | F | G | H  | I  | J  |
Wait, you might say: K only finishes in the 13th cycle, while the scheme above is done by the 12th, isn't it? That is not the right way to read it. Within these 12 cycles, Y and Z were delivered as well: pipeline throughput describes a continuous process, and what I show here is a slice cut out of a machine that is running continuously.

So don't just count A through L; count the number of tasks delivered. In 12 cycles the pipeline delivers 12 tasks, and in 11 cycles it delivers 11, whereas the first scheme delivers only 9 tasks in 11 cycles. This is exactly where the pipeline shines: it delivers tasks steadily in every time period, so the throughput is much higher. And the higher the concurrency, the bigger the advantage over the first scheme; you can verify this yourself by drawing out the tables.
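To make this concrete, here is a minimal sketch (my own illustration, not code from the original article) of how keeping the data plain and the stages as free functions lets GCD form such a pipeline; the stage functions F1/F2/F3 and the queue names are invented for the example:

```objc
#import <Foundation/Foundation.h>

// Free functions: plain data in, plain data out, bound to no object.
static NSString *F1(NSString *task) { return [task stringByAppendingString:@".1"]; }
static NSString *F2(NSString *task) { return [task stringByAppendingString:@".2"]; }
static NSString *F3(NSString *task) { return [task stringByAppendingString:@".3"]; }

int main(void) {
    @autoreleasepool {
        // One serial queue per stage: while stage 1 works on task B,
        // stage 2 can already be working on task A, and so on down the pipe.
        dispatch_queue_t stage1 = dispatch_queue_create("pipeline.stage1", DISPATCH_QUEUE_SERIAL);
        dispatch_queue_t stage2 = dispatch_queue_create("pipeline.stage2", DISPATCH_QUEUE_SERIAL);
        dispatch_queue_t stage3 = dispatch_queue_create("pipeline.stage3", DISPATCH_QUEUE_SERIAL);
        dispatch_group_t done = dispatch_group_create();

        for (NSString *task in @[ @"A", @"B", @"C", @"D", @"E", @"F" ]) {
            dispatch_group_enter(done);
            dispatch_async(stage1, ^{
                NSString *r1 = F1(task);
                dispatch_async(stage2, ^{
                    NSString *r2 = F2(r1);
                    dispatch_async(stage3, ^{
                        NSLog(@"delivered %@", F3(r2));
                        dispatch_group_leave(done);
                    });
                });
            });
        }
        dispatch_group_wait(done, DISPATCH_TIME_FOREVER);
    }
    return 0;
}
```

Each stage only ever sees the data flowing through it, so adding a fourth stage or speeding up a slow one does not touch the others.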
The data part is the data part, and the execution part is the execution part; putting things together that are not the same kind of thing is simply not appropriate.
A function is a black box: as long as the necessary and sufficient conditions for calling it are met (enough parameters), its output is determined. Object-oriented thinking binds functions and data together, and this kind of encapsulation enlarges the granularity of code reuse. If you take functions and data apart, the basic unit of reuse shrinks from the object to the function, which makes reuse more flexible and easier.

Anyone who has tried to reuse an object knows that you have to bring along everything the object depends on, even when you only want one of its methods, and many of those dependencies may have nothing to do with the method you actually need.

A function, on the other hand, is already a naturally complete encapsulation: if you want to use it, everything it depends on is exactly what you need, and nothing more. That is reasonable.
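A tiny sketch of the difference (the Exporter class, its initializer, and the CSVLine function are all invented for illustration):

```objc
#import <Foundation/Foundation.h>

// Object-style reuse: even if you only want -csvLineFromArray:, you must
// construct an Exporter, and its initializer drags in a database handle and
// a network client that the method itself never touches.
// (Implementation omitted in this sketch.)
@interface Exporter : NSObject
- (instancetype)initWithDatabase:(id)database networkClient:(id)networkClient;
- (NSString *)csvLineFromArray:(NSArray<NSString *> *)fields;
@end

// Function-style reuse: the parameter list is the complete dependency list.
// Whoever wants this behaviour takes exactly this function and nothing else.
static NSString *CSVLine(NSArray<NSString *> *fields) {
    return [fields componentsJoinedByString:@","];
}

int main(void) {
    @autoreleasepool {
        NSLog(@"%@", CSVLine(@[ @"id", @"name", @"price" ]));
    }
    return 0;
}
```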
2. Does everything need to be an object?
Object-oriented languages have always been proud of the slogan "everything is an object", but the real question is: does everything need to be an object?
In iOS development there is a class called NSNumber, which boxes every numeric type: double, float, unsigned int, int, and so on. It blurs the concrete numeric type, which makes it very convenient to use. But problems come with it. First, you cannot do arithmetic on the objects directly: you have to unbox them into primitive values, do the computation, box the result back into an NSNumber, and return that. Second, when the original type is no longer known, unboxing inevitably wastes memory (for example, data that started life as uint8_t comes back as an unsigned int), which is unnecessary.
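A minimal sketch of that unbox-compute-rebox dance (the variable names are mine):

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // The original values might have been as small as uint8_t, but once
        // boxed you can no longer tell, and you cannot add the objects directly.
        NSNumber *a = @(3);
        NSNumber *b = @(4);

        // Unbox to primitives, do the arithmetic, then box the result again
        // just to hand it back.
        NSInteger sum = a.integerValue + b.integerValue;
        NSNumber *boxedSum = @(sum);

        NSLog(@"%@", boxedSum);
    }
    return 0;
}
```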
Then there is the file descriptor, which is itself nothing more than a resource identification number. If the resource were abstracted into an object, that object would inevitably grow huge: a resource has many possible operations, and every one of them would have to be hung on the object as a method. Yet when the resource is actually passed around, all we care about is its identifier; we really don't care about the rest.
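The POSIX file API is a good illustration of the alternative: the resource stays a plain int, and each operation is a free function that takes that identifier. A minimal sketch (the path is just an example and is assumed to exist):

```objc
#import <Foundation/Foundation.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    // The resource is identified by a plain int; nothing more is needed to pass it around.
    int fd = open("/tmp/example.txt", O_RDONLY);
    if (fd < 0) {
        return 1;
    }

    // Every operation is a free function that takes the identifier as a parameter,
    // instead of a method hanging off a huge "File" object.
    char buffer[128];
    ssize_t bytesRead = read(fd, buffer, sizeof(buffer));
    NSLog(@"read %zd bytes", bytesRead);

    close(fd);
    return 0;
}
```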
We already have functions as black boxes; handing the data to the black box is enough.
3. Type explosion
Because data and functions are bound together, two objects with a logical derivation relationship can often be treated as one, namely whichever sits at the top of the derivation chain. At first glance this looks wonderful: the father has sons. But in real projects derivation is very hard to keep under control, and it floods the project with variations of the same class: ViewController, AViewController, BViewController, ThisViewController, ThatViewController...

Notice that once execution and data are taken apart, there is no need for so many ViewControllers, because derivation does nothing more than add properties and methods to an object. In essence it comes down to this:
```
// The former: composition, B contains a Number
struct B {
    Number number;
};

// The latter: inheritance, B derives from Number
struct B : Number {
};
```
What the former and the latter have in common is that, in memory, the layout of their numeric parts is identical. The difference is that the former expresses composition, while the latter expresses inheritance. And we all know the common wisdom that composition is more appropriate than inheritance, as mentioned in the first article of this series.

The two express nothing different in memory, but during actual development the latter makes it far easier to steer the project in a bad direction.
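A sketch of what taking execution and data apart can look like here (ListViewController, its properties, and MakeOrderList are invented for illustration; the fragment is meant to sit inside a UIKit project): a single view controller class configured with injected data and behaviour, instead of a new subclass per screen:

```objc
#import <UIKit/UIKit.h>

// One generic list controller: the data and the selection behaviour are injected,
// so there is no need for a ThisViewController / ThatViewController subclass per screen.
@interface ListViewController : UIViewController
@property (nonatomic, copy) NSArray<NSString *> *items;
@property (nonatomic, copy) void (^didSelectItem)(NSString *item);
@end

@implementation ListViewController
@end

// Each "screen" becomes a different configuration of the same class.
ListViewController *MakeOrderList(void) {
    ListViewController *vc = [[ListViewController alloc] init];
    vc.items = @[ @"Order 1", @"Order 2" ];
    vc.didSelectItem = ^(NSString *item) {
        NSLog(@"open %@", item);
    };
    return vc;
}
```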
Summary
Why is object orientation so popular? I have thought about it, and the points the industry talks about most are these:
- It works very well for code reuse.
- It makes complex code easy to deal with.
- It fits programmers' intuition better when writing programs.
The first point is true in theory, but in practice everyone understands that, in an object-oriented context, writing genuinely reusable code is harder than in a procedural one. On the second point, don't you think it is often object orientation itself that makes a project complicated? With a clear hierarchy and disciplined calling conventions, object-oriented and procedural code handle complex business equally well; and when things get truly complex, the tangled relationships between objects only make your life harder, compared with the straightforwardness of a procedural flow. The third point is really camouflage: whatever the design, the final implementation still comes down to procedures, and object orientation is intuitive only when laying out call relationships. In architecture design, clarifying requirements is step one, clarifying call relationships is step two, and clarifying the implementation procedure is step three. Object orientation lets you create the illusion at step two that the design is finished, and only when you get down to the implementation do you discover what went wrong at step two.
So to sum up, my point is: object orientation is a very good idea for architecture design, but if it is mapped naively onto the implementation of a program, the drawbacks it introduces outweigh the gains.
Postscript
It has been nearly a month since my last blog update. It's not that I was lazy; I was simply too busy, and only now have I finally had time to finish the "jumping out of object-oriented thinking" series. I wrote three articles picking at the three pillars of object orientation, which may come across as negative, but I don't want everyone to go off and write their next project in a purely procedural style. Rather, I hope that, with the shortcomings this series points out in mind, you will be stricter about the code you write and know clearly what works and what doesn't. I have suffered from this in past work: there was often no time to explain in detail why something so intuitive was actually unworkable, and explaining it properly takes a long speech. The most painful part is that even after the long speech, the other side still doesn't understand, and so the garbage code gets written anyway and does its damage.

Now the long speech is down on paper. If you don't understand it when I say it, you can always come back to these articles afterwards and work through them slowly.
Original: http://casatwy.com/tiao-chu-mian-xiang-dui-xiang-si-xiang-san-feng-zhuang.html
Jumping Out of Object-Oriented Thinking (III): Encapsulation (reposted from Casa Taloyum)