The "pointers are bad" argument has been repeated many times. Unfortunately, it is fundamentally mistaken. A pointer is itself an abstract form of knowledge, and there is no doubt about its necessity: without something like a pointer, how could we operate on memory precisely?
Any equivalent alternative inevitably carries all the shortcomings people attribute to pointers; any restricted alternative, such as the reference, inevitably leaves some tasks impossible to complete.
If you ask most people how to resolve this, the answer resembles conventional multi-language programming: use each tool where it fits. Those programming at the low level use pointers; should the rest, who do not need to care, really be forced to deal with such concerns?
The true flaw of pointers is that the abstraction itself carries none of the information necessary for its correct use, nor the mechanisms and policies that depend on such information. Reference counting is an attempt to attach that information and automate the handling, but it has efficiency problems. GC collects the information and processes it automatically at runtime, and still has efficiency problems (such as pauses).
Some will say that this is simply a price, a trade-off. Unfortunately, that is just another layman's view. The problem has a precise statement: every piece of memory that is allocated must be released, exactly once.
Technically, the release point need not immediately follow the allocation point; several use points may lie between them.
The first case is propagation within a module, which has two difficulties: allocations that are determined only at run time, and the uncertainty introduced by branches. Second, a pointer can be passed outside the module, which makes things harder still. Third, once the lifecycles of different threads are considered, it is clearly impossible to decide when to release simply by following how the pointer is transferred.
(Please add)
But are these problems really unsolvable? That is worth considering. What disgusts me is that many influential people, without ever really trying or obtaining definite results, instill all sorts of ideas into newcomers, leaving programmers with a variety of preconceptions.
Back to pointers: most memory allocations can be made safe without reference counting or GC. First, the memory occupied by value-type parameters is returned to the stack when the function returns, and many situations are similar on a larger scale. Second, there are many methods still to explore for determining the release condition, both at compile time and at run time.
(Please add)
So why should every allocation bear the cost of GC or reference counting for the sake of a few hard cases (if those cases are confirmed to exist, or are too expensive to handle otherwise)? Or why not let the programmer handle just those few cases manually?
Not to mention the modeling of the problem itself. Many people have never considered the rationality of the stack and the heap, the problems they introduce, and whether or how to solve them. They were stunned by the seemingly self-evident opinions of some loudmouth, then turned into loudmouths themselves, repeating the same talking points (if I had to sit through such a talk in person, I would probably be dizzied too...).
Well, it seems I am getting worked up again. That is not good.
That is all. In a word, the current environment for work and study is unprecedentedly noisy: it feels like standing in a huge vegetable market, where the hawkers' cries are so loud that serious thought is impossible. Perhaps the only thing I can do is buy earplugs.