From here: http://www.cnbeta.com/articles/121128.htm
After debugging an iPad program for more than a week — most of it spent hunting memory leaks — I think I finally understand why iOS does not support multitasking: the operating system itself leaks all over the place and fails to release memory promptly. My guess is that if an app kept working in the background, accessing the network and running multiple threads, it would be very hard to keep its memory intact, and very hard to keep it running for long.
Watching the various strange behaviors of NSAutoreleasePool and the system libraries, I couldn't help sighing: these days you really can't count on anything. As for the product release, I guess it went like this: Apple's programmers couldn't fix these bugs, but the product couldn't be delayed forever, so the Boss decided:
1. Focus on optimizing the code related to music playback.
2. Focus on the code related to phone functions (why isn't this first? Just my guess: if it were first, the Boss would have sneered at it).
3. User programs cannot run in the background.
4. Educate the users: you don't need multitasking anyway.
Now that iOS 4 is out, Apple's programmers have presumably fixed a lot of bugs, so some multitasking functionality is available. However, the background mode iOS 4 recommends is not resident execution — it is a notification model.
After a while, once the bugs are mostly fixed, the Boss will put on his jeans, jump onto the podium, and announce to the universe: "Revolutionary innovation! The iPhone now supports full multitasking! You hold in your hand a phone as powerful as a MacBook Pro!" It must go like that. "Overthrow the Qing, restore the Ming" was only ever a slogan — the real goals were silver and women, right? And all this circling around is really aimed at Android and Windows Phone 7, right? Haha!
--------------------
What did this brother do at Kingsoft? :)
But to be honest, it's hard to say this was the real reason, although these technical flaws probably helped the decision-makers settle the matter with fewer qualms. Android multitasking actually works like this: 1. Background work is best put in a separate Service. 2. Non-system services may be killed. 3. The experience is poor. Point 3 is what I find most terrible: on an older Android device, even installing software easily makes everything stutter, and I doubt hardware evolution alone will fix it — there is always a task killer more killer than the last killer.
Restricting multitasking is probably the more appropriate call. In fact, most user terminal devices don't need it most of the time. You can see it on Windows: unless you have a decent machine, at least dual-core, plenty of software will stutter. No developer considers the smoothness of anything beyond the code he wrote himself; they all assume that's the operating system's job. Unfortunately the operating systems have all fallen behind here, still clinging to design traditions inherited from Unix; not one of them fundamentally intends to gather enough information to make the right choices when allocating resources.
In addition, from my personal perspective, users should be able to easily set their own priorities (because only the user can be trusted — even operating system schedulers can make stupid decisions on their own), so that the interface and other key programs are never affected. How to arrange this takes real thought, and there is no prior art to crib from. Apple is not an innovative company: since its founding, basically every point has been copied from somewhere (even more so than M$); what it is genuinely strong at is packaging (which is also a real user need), so you can't expect rapid progress from it in technical innovation.
To be clear, none of the above is meant as sarcasm toward Apple. Even a small innovation is hard. Many people know I am working on a Web framework + platform. It has been more than three years since I started with a clear goal, and even subtracting the intermittent idle stretches in the middle, it's a good year of solid work. As I said, I have "only just seen the starting point": a few simple interfaces, a core that is merely the initial expression of a few of my ideas, and not much innovation in it.
In the past few years I have scrapped design after design and practice after practice; the code thrown away in back-and-forth experiments must run to six digits of lines. How many lines actually survive? Apple doesn't waste much less than I do — it just wastes faster: decisions are always made by a few people, while everyone else can only wait around collecting wages. And the larger the company, the less it can afford the cost of a single do-over.
Another key point: good programmers are getting scarcer. There are many developers, and some of them are great, but they are only developers. I don't mean that developers don't understand overall structure, or don't know tricks like caching variables and reducing loops, or can't carefully check for and prevent memory leaks. The key is that most developers lack real abstraction ability and cannot quickly identify the common, generalizable solution to a class of problems (I don't do this well myself, never mind the coordination problems of large teams). That way, every problem turns into raw workload, and for a very large software project the release schedule becomes unacceptable.
On the subject of software that stutters and starts slowly: my Pentium D 820 is not exactly underpowered, yet Firefox on Windows — is there really something that justifies taking half a day to start? The JS IntelliSense in VS2010 is slower than WingIDE for Python — what is it eating? If the developers at these software companies are all like this, then honestly, the only hope left is the performance-per-watt of the hardware.
Many Java/.NET friends will immediately think of garbage collection. Here let me say: garbage collection that has not been configured specifically for your own program — that is, the general-purpose garbage collection we use today — is itself garbage. The point is that in your project it may not smell so bad: in many server-side applications you can't perceive it at all, but on a device with limited resources and frequent activity it is absolutely obvious. So this is really a question of which solution fits which scenario; on mobile devices, for now, I side with Apple.
Update: the above reads like street-corner ranting with no constructive suggestion. That should change — otherwise I'd be no different from the SBs.
How do you do .NET/Java development without garbage collection's memory-leak risk, while also avoiding reference counting's performance loss and still releasing memory promptly? Very simple. A: write a memory allocator dedicated to this one project. This is easy, because you know exactly how much memory you use, what should be reused, and what can be released when. B: write a more advanced general-purpose allocator that can still be customized for the issues you care most about. The number of considerations multiplies several times over, so be cautious.
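Option A can be sketched concretely. The class below is a minimal fixed-size pool allocator of my own invention (the post names no specific design): one up-front slab, a free list of blocks, and reuse instead of repeated OS allocation — exactly the kind of thing that is easy when you know your program's allocation pattern.

```cpp
#include <cstddef>
#include <vector>

// Minimal sketch of a project-specific pool allocator (option A).
// One block size, one slab allocated up front; released blocks go on
// a free list and are handed out again on the next allocate().
class Pool {
public:
    Pool(std::size_t blockSize, std::size_t count)
        : blockSize_(blockSize), storage_(blockSize * count) {
        // Pre-link every block of the slab into the free list.
        for (std::size_t i = 0; i < count; ++i)
            freeList_.push_back(storage_.data() + i * blockSize_);
    }

    void* allocate() {
        if (freeList_.empty()) return nullptr;  // pool exhausted
        void* p = freeList_.back();
        freeList_.pop_back();
        return p;
    }

    // The caller tells us exactly when a block is done — no collector.
    void release(void* p) { freeList_.push_back(static_cast<char*>(p)); }

    std::size_t available() const { return freeList_.size(); }

private:
    std::size_t blockSize_;
    std::vector<char> storage_;    // one big up-front slab
    std::vector<char*> freeList_;  // blocks ready for reuse
};
```

Allocation and release here are a pointer push/pop — no syscalls, no heap walk — which is the payoff of knowing "how much memory, what is reused, what can be released" in advance.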
As for wiring such an allocator into the runtime, the interfaces to use are: for .NET, C++/CLI; for Java, JNI. And of course reflection is there if you need it.
But the task priority issue... rewriting the operating system is out of the question. On a *nix system, one could implement a kernel driver that lets user space register its priorities, then build one's own allocation mechanism on top: either expose an interface to the user, or use some more "intelligent" heuristics. Well, I can't spell out the specifics — I should be careful about pronouncing judgments at this level when I haven't thought through the possible problems.
Come to think of it, doesn't the 2.6.32 kernel already have something for grouping processes? Look up the details yourself. The most important thing is simplicity: don't build a complicated algorithm just to shave 10% off CPU usage — the user won't even notice, and the electricity spent computing it is wasted anyway.
Of course, if no one in the organization will bear the cost of doing this (who does the other work while you do it, the cost of refactoring, and so on), then forget all of it.
Update 2: I'm optimistic that pointer propagation is traceable in 90% of cases, and a compile-time checking tool is enough to catch the missing-delete mistakes most beginners make. So citing the risk of memory leaks as the justification for garbage collection simply doesn't hold.
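The post doesn't name a checking tool, but one existing illustration of the same idea is making ownership part of the type: with `std::unique_ptr`, the "forgot to delete" class of leaks is handled by the compiler inserting the destructor call on every exit path, with no runtime collector involved.

```cpp
#include <memory>

// Illustration (my own example, not from the post): ownership in the
// type system removes the missing-delete mistakes a checking tool
// would otherwise have to hunt for.
struct Buffer {
    explicit Buffer(int n) : size(n) { ++liveCount; }
    ~Buffer() { --liveCount; }
    int size;
    static int liveCount;  // how many Buffers are currently alive
};
int Buffer::liveCount = 0;

int useBuffer() {
    // The Buffer is deleted when `b` leaves scope — on every path,
    // deterministically, with no GC pause.
    std::unique_ptr<Buffer> b = std::make_unique<Buffer>(128);
    return b->size;  // destructor runs here automatically
}
```

This is exactly the "traceable propagation" case: when the last owner of the pointer is known statically, the release point can be determined at compile time.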
Specific methods for optimizing allocation/release frequency were given above. Even without them, you can implement a generic recycler with a built-in cache. Someone will say: isn't that just garbage collection? Of course not. There may still be a checker and an allocator, but we require the programmer to state when the memory is no longer needed (the delete could even be inserted automatically by the runtime at the last point of the pointer's propagation). With more complete information, it is naturally more accurate and efficient.
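A sketch of that "recycler with a cache", with names of my own choosing: unlike a tracing collector, the programmer explicitly hands objects back, and the recycler caches them for reuse instead of freeing and reallocating. The explicit `release` call is precisely the extra information a general-purpose GC never gets.

```cpp
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

// Sketch of a generic recycler with a built-in cache (hypothetical
// names). Objects are handed back explicitly by the programmer; the
// recycler reuses them up to a cap instead of reallocating.
template <typename T>
class Recycler {
public:
    explicit Recycler(std::size_t maxCached) : maxCached_(maxCached) {}

    std::unique_ptr<T> acquire() {
        if (!cache_.empty()) {
            std::unique_ptr<T> obj = std::move(cache_.back());
            cache_.pop_back();
            ++reused_;
            return obj;                // cache hit: no allocation
        }
        return std::make_unique<T>();  // cache miss: allocate fresh
    }

    // Explicit hand-back — the information a tracing GC has to infer.
    void release(std::unique_ptr<T> obj) {
        if (cache_.size() < maxCached_)
            cache_.push_back(std::move(obj));
        // else: let obj's destructor free the memory immediately
    }

    std::size_t reuseCount() const { return reused_; }

private:
    std::size_t maxCached_;
    std::size_t reused_ = 0;
    std::vector<std::unique_ptr<T>> cache_;
};
```

Because release is explicit, memory above the cap is returned immediately rather than at some future collection pass — the accuracy and efficiency the paragraph above is claiming.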
In fact, looking at problems across every software field over the past few years — whether poor design, bugs, or poor runtime efficiency — they are basically caused by necessary information being continually lost through the various stages of building and using software. This is a core issue that goes far beyond existing methodologies, and even beyond software development itself. Unless some influential organization tackles it, directly or indirectly, the (relative) silver bullet is unlikely to appear.
But at the very least, we can pay attention to this issue in our own projects. Even with the thinking (methodology) and practical tools (languages and other programming tools) available today, we can preserve that information — though the cost still seems a little high.