Make your software behave: Making software secure


One of the biggest problems in computer security today is that software is generally not robust enough. Software flaws have disastrous practical consequences, including red ink on the company's balance sheet.

Security is only one area in which software flaws can cause major financial harm, especially in the accelerated world of Internet e-commerce. Others include reliability and safety. In some cases, software flaws can cost people their careers, or even their lives. It is only a matter of time before insecure software causes the decline, or the demise, of a large company.

Everyone seems to know about these problems, or at least to sense them, but few people know how to overcome them. There are not many resources available to guide developers in writing secure code. Worse, much of today's software is developed at incredible speed and under tremendous market pressure. Software quality, of any kind, is usually the first casualty of that pressure. At best, security is an afterthought; often it is forgotten entirely.

This is not just a tragedy; it is a disaster.

Take a proactive approach to security

Retrofitting security is almost always the wrong solution. Instead, security should be designed into software from the very beginning. At Reliable Software Technologies, we have developed a security-oriented software assurance method that has been used successfully for many years in security consulting, code development, and code security analysis. The method has proved its practicality many times over in real-world experience, although it cannot guarantee to solve all of your problems with minimal effort.

Security is a complex field, and developing secure applications is correspondingly hard. The main purpose of our method is to help developers avoid the ad hoc "penetrate and patch" approach to security, in which a hole is repaired only once someone becomes aware of it, and security is otherwise not a consideration at all. (We discussed the many disadvantages of that approach in the previous "Make your software behave" column.)

Our method has five steps:

1. Design the system with security in mind from the start.
2. Analyze the system against known and anticipated risks.
3. Rank the risks by severity.
4. Test against the risks.
5. Cycle broken systems back through the design process and repeat.

 

There is a subtle but important difference between our method and penetrate-and-patch: our method encourages you to test for security vulnerabilities and fix them before crackers are able to find and exploit them. Even then, we are still doing a kind of patching. The real difference is that we try our best to close every hole before the software is released. Because it attempts to eliminate errors in advance, our approach requires developers to be well educated about potential security risks. It also relies heavily on good software engineering practice.

That is all well and good, but there is an important caveat: the five steps of our method share one thing in common, namely that none of them is easy. To use the method most effectively, the key is to keep yourself informed about the important developments in the computer security field.

 

Why design for security?

In contrast to ad hoc security development techniques (which usually do not work well), you should consider security at every stage of the development cycle rather than try to remedy it later. Bolting security onto an existing system is simply a bad idea. Security is not a feature that can be added to a system at any time. Security is like fault tolerance: it is a whole-system property that must be planned and designed in carefully and effectively from the start.

The better way to write secure software is to design security into the system from the very beginning. We have studied many systems in which security was not considered in the original design but was bolted on afterward. Many of these systems are security nightmares.

Windows 95 and Windows 98 are a case in point, notorious for their many security vulnerabilities. It is widely believed that an attacker can crash or break into any Windows 9x machine on a network. The authentication mechanism in Windows 9x, for example, is easily defeated. The biggest problem is that anyone can sit down at a Windows 9x console, power the machine off and on, and log in without a password, thereby gaining complete control of the computer. In short, the Windows 9x operating system was not designed for today's networked environment; it was designed back when PCs were unconnected. Microsoft has tried to retrofit its operating system to provide security for this new style of computing, but has not been very successful.

UNIX, which was developed by and for university researchers, was likewise designed without security in mind. It was meant to be a platform for sharing research results among colleagues. Because of this, it has undergone countless patches and security retrofits; as with Microsoft's efforts, these have not been very successful.

We have seen many real-world systems, originally designed for use on protected private networks, reworked in the same way for use on the Internet. In every case, the particular risks of the Internet rendered all of the system's security features ineffective.

Some people call this an environment problem: a system that is secure enough in one environment can be completely insecure when placed in another. As the world becomes more closely connected through the Internet, most machines now find themselves, at least some of the time, in hostile environments.

Designing for security from scratch is always better than adding security to an existing design. Reuse is an admirable goal, but the environment in which a system will run is so closely tied to its security that any change of environment can cause all sorts of trouble: so much trouble, in fact, that all of the previously well-tested and well-understood work has to be redone.

Security deserves a high priority from software developers, because for you and your customers it comes down to trust and, ultimately, to the stability of the business.

How to assess security risks

There is a fundamental tension between the functionality of an application and its security. In general, you can never determine that a system is completely secure. A common joke holds that the most secure computer in the world is one buried in a ten-foot hole filled with concrete. If you want to use such a machine, be our guest! For real systems, the security question comes down to which specific risks an enterprise is willing to accept in order to solve the problem at hand effectively. Security is really a problem of risk management.

The first step in risk management is risk assessment: identifying the potential risks, the likelihood of each, and the severity of the harm each could cause.

Effective risk assessment requires security expertise. The assessor must be able to identify the places where known attacks might apply to the system at hand, because very few genuinely novel attacks ever reveal themselves. Unfortunately, such expertise is hard to come by. Software security is a big subject, far more than a few words in one article can cover. (We hope this column can help address that problem over time.)

Risk identification is most effective when you have a detailed system specification to work from. An authoritative resource that tells you how the system behaves in a given environment is invaluable. The whole process becomes fuzzy when the specification lives in developers' heads instead of on paper. You may find yourself asking for the same information twice and getting conflicting answers.

Once the risks have been identified, the next step is to rank them by severity. The relative severity of a risk depends on the needs and goals of the system, so a detailed requirements document is a helpful reference in this step. Some risks may not be worth mitigating at all, either because the likelihood is small or because a successful attack would cause too little harm to worry about.
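
One simple, standard way to perform such a ranking, sketched below with invented numbers, is to compute each risk's expected loss (likelihood times impact) and sort on it. The risk register here is hypothetical, purely for illustration.

```python
# Hypothetical risk register: (risk, estimated likelihood per year, impact in dollars).
RISKS = [
    ("eavesdropping on card numbers", 0.30, 500_000),
    ("denial of service on web store", 0.10, 200_000),
    ("defacement of marketing pages", 0.40, 10_000),
]

# Rank by exposure = likelihood x impact, a standard risk-management measure.
ranked = sorted(RISKS, key=lambda r: r[1] * r[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"{name}: expected annual loss ${likelihood * impact:,.0f}")
```

The numbers are rough estimates in practice, but even rough estimates make the ordering, and hence the allocation of testing effort, explicit.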

Risk assessment is the key to deciding how to allocate testing and analysis resources. Because resource allocation is a business problem, having reliable data helps in making good business decisions about it.

 

Security requirements and specifications

We have already hinted that good software engineering practice is essential to any sound security method. Any system designed against well-understood, well-documented requirements will fare far better than one barely held together with chewing gum and baling wire.

Make sure your requirements are carefully constructed. For example, "the application should use cryptography wherever possible" is not a good requirement, because it dictates a solution before the problem has even been diagnosed. A requirements document should state not only what the system must do and must not do, but also why the system should behave as described. In this example, a better requirement might be: "Credit card numbers are sensitive information and should be protected against possible eavesdropping." The choice of how to protect that information, whether with cryptography or by some other means, should be deferred until the system is specified.

We recommend creating a template or standard security-guide document for your organization, from which the security-related requirements for any particular project can be derived. Such a document accommodates the individual needs of individual applications and lets different security concerns carry different priorities. It also provides a framework for consistent analysis across applications. For example, denial of service may not be a major concern for a client application, since only that client is affected; a denial-of-service attack against a commercial Web server, however, can deny service to thousands of people at once.

Such a guide usually includes both a general explanation of how security analysis is carried out and a list of risks that application developers should consider. Developers should not, of course, expect such a list to be complete. They can, however, expect other application developers in the same organization to have considered the same set of risks.

A system specification is generally created from a set of requirements. The importance of a solid system specification cannot be overemphasized. After all, without a specification, the system's behavior cannot be wrong, only surprising! And nobody wants surprises in a business-critical system, least of all security surprises.

A solid specification also paints a coherent picture of what the system does and why it does it. The specification should be as formal as possible without becoming obscure. Remember that the purpose of a specification is to make the system understood. In general, the clearer and more comprehensible a specification is, the better the resulting system.

 

The importance of external analysis

Nobody sets out to build a bad system. Developers take pride in their work and generally strive hard to create solid, working systems. That is precisely why the security risk-analysis team should include no members of the design and development team. The fundamental difference between security testing and standard testing is the importance of stepping outside the design and maintaining a completely independent view of the system. Trouble looms when a developer takes on the additional role of security tester. Designers and developers are usually too close to their systems, and too skeptical of the idea that those systems might be flawed.

It is better, then, to have an external team (sometimes called a tiger team) carry out the security analysis and testing. An added benefit of doing so is that it tests the completeness of the system's design documentation, because a good external team will lean heavily on that documentation in its analysis. Before that analysis begins, of course, the design team must make sure the requirements and specifications are well defined, so that the external team can understand the system fully.

An experienced external analysis team will consider many different attack scenarios in the course of its analysis. Frequently considered scenarios include decompilation risks, eavesdropping attacks, replay attacks, and denial-of-service attacks. Testing is most effective when it is guided rather than random. Thinking about how such scenarios might apply to your system produces highly relevant security tests.

Another good reason for choosing an external security analysis team is that even the best developers often lack the security expertise needed to perform this analysis effectively. The best results will, of course, generally be obtained by engaging a highly paid team of outside security experts. Thankfully, such a team is often unnecessary; it can be enough to assemble a group from within your own organization of people who have good security knowledge but were not involved in the design decisions. Unfortunately, security expertise currently seems to be a scarce commodity. Whether to seek help from outside the organization should be decided on the basis of your risk analysis: can your own resources reduce the risks sufficiently?

Security testing versus functional testing

Security testing can proceed once the system's potential risks have been ranked, although it presents some difficulties. Testing is an activity that relies entirely on empirical evidence: it requires a running system and careful observation of that system. Security testing rarely produces self-evident results such as an obvious break-in. More often, the system behaves in some strange or curious way that hints to the analyst that something interesting is afoot. Such suspicions can be explored further through manual inspection of the code. For example, if an analyst can crash a program by supplying very long inputs, there is probably a buffer overflow that could develop into a security breach. The next step is to examine the source code to find out where the program crashed and why.
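
The long-input heuristic can be sketched in a few lines of Python. The parser below (`parse_username`) is invented for this illustration; its fixed-size buffer is simulated with `ctypes.create_string_buffer`, so an over-long name raises an error in roughly the way an unchecked `strcpy` would corrupt memory in C.

```python
import ctypes

def parse_username(packet: bytes) -> str:
    """Hypothetical parser: copies the NUL-terminated name field
    into a fixed 64-byte buffer with no length check."""
    name = packet.split(b"\x00", 1)[0]
    buf = ctypes.create_string_buffer(64)   # stand-in for C's char buf[64]
    buf.value = name                        # raises ValueError when name is too long
    return buf.value.decode("ascii", "replace")

# Probe with progressively longer inputs and watch for crashes.
for n in (8, 63, 64, 4096):
    try:
        parse_username(b"A" * n + b"\x00trailing-data")
        print(f"len {n}: ok")
    except Exception as exc:
        print(f"len {n}: crashed ({type(exc).__name__})")
```

A crash at the larger lengths is exactly the kind of oddity that should send the analyst back to the source to find the unchecked copy.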

Functional testing involves dynamically probing a system to determine whether it behaves as expected under normal circumstances. Security testing, when done well, is different.

Security testing should probe the system the way an attacker would, looking for exploitable vulnerabilities in the software. It is most effective when it is guided by the system risks uncovered during risk analysis. This makes security testing a fundamentally creative form of testing, one that is only as strong as the risk analysis on which it is based. Security testing is inherently limited by the identified risks and by the security expertise of the testers.

Experience has shown that code coverage is an excellent measure of how many of a system's defects a particular set of tests can find. Using code coverage to gauge the effectiveness of functional testing has always been a good idea. For security testing, coverage plays an even more critical role. Simply put, if a region of the program has never been exercised during testing, whether functional or security testing, you should immediately suspect a security problem in that region. One obvious risk is that unexercised code harbors a Trojan horse, through which seemingly harmless code can launch an attack. A less obvious, but more common, risk is that unexercised code contains serious errors that can be escalated into a successful attack.

Dynamic testing versus static testing

Dynamic security testing helps ensure that such risks do not come back to bite you. Static analysis is useful too. Many security problems are by now thoroughly understood, yet they keep coming back. This was underlined by the fact that more than 50% of the CERT advisories issued in 1998 involved buffer overflow problems. There is no reason to suspect that all code has buffer overflow problems, but the buffer overflow remains the single biggest source-code security risk.

It is possible to scan security-critical source code statically to identify known problems, and to fix every problem found. Many crackers have code-scanning tools that can read your code from end to end to uncover potential problems, which they then inspect by hand to see whether a real security hole exists. The key to beating them at this game is to be equally knowledgeable about the potential problems. Such tools are a good start toward lowering the barrier to entry into security analysis, because they encode knowledge that once existed only in security experts' heads and make it available to all developers. It is therefore sensible for developers to use such tools as impartial third-party evaluators.
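
As a minimal sketch of this kind of static scanning (modeled loosely on tools such as ITS4 and flawfinder, with an invented rule table), the following Python script flags calls to a few classically dangerous C library functions and suggests bounded replacements:

```python
import re

# Hypothetical rule table: risky C calls and the usual safer alternative.
RULES = {
    "gets":    "fgets with an explicit buffer size",
    "strcpy":  "strncpy or snprintf with bounds checking",
    "sprintf": "snprintf",
    "strcat":  "strncat",
}
CALL = re.compile(r"\b(%s)\s*\(" % "|".join(RULES))

def scan(source: str):
    """Return (line_number, function, advice) for each risky call found."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for m in CALL.finditer(line):
            hits.append((lineno, m.group(1), RULES[m.group(1)]))
    return hits

sample = '''
char buf[64];
gets(buf);               /* classic overflow */
strcpy(buf, argv[1]);    /* unbounded copy */
'''
for lineno, fn, advice in scan(sample):
    print(f"line {lineno}: {fn}() -- consider {advice}")
```

Each hit still needs a human to decide whether it is a real hole, which is exactly how both attackers and defenders use such scanners.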

 

Conclusion

Software must now behave itself, and good software practice helps ensure that it does. Safety-critical and high-assurance systems have always invested heavily in analyzing and tracking software behavior; security-critical systems must follow suit. Only by treating security as a critically important, complex, whole-system property, rather than as a simple feature or an after-the-fact remedy, can we avoid applying band-aid, penetrate-and-patch fixes to security.

Computer security is becoming ever more important as the world grows highly interconnected and networks are used to carry out critical transactions. The decision to connect a local network to the Internet is a security-critical decision. The environment in which machines operate has changed fundamentally in recent years, and software security must anticipate the resulting risks better than it has before.

Software that fails in unexpected ways is at the root of most security problems. Although software assurance still has room to improve, it already offers great rewards to practitioners who want to tackle potential security problems head on.

 

References:

 

  • For more information, see the original article on the developerWorks global site.
  • In the first "Make your software behave" column on developerWorks, Gary and John introduced their ideas on security and explained why they focus on the software security issues that developers face.

 

 

Author Profile

Gary McGraw is vice president of corporate technology at Reliable Software Technologies, based in Dulles, Virginia. He provides consulting services, conducts research, and helps set the direction of the company's technical research and development. McGraw began at Reliable Software Technologies as a research scientist, focusing on software engineering and computer security. He holds a dual Ph.D. in cognitive science and computer science from Indiana University and a B.A. in philosophy from the University of Virginia. He has written more than 40 peer-reviewed articles for technical publications, has consulted for major e-commerce vendors (including Visa and the Federal Reserve), and has served as principal investigator on grants from the Air Force Research Laboratory, DARPA, the National Science Foundation, and NIST's Advanced Technology Program.

McGraw is a well-known authority on mobile code security. With Princeton professor Ed Felten, he wrote "Java Security: Hostile Applets, Holes, & Antidotes" (Wiley, 1996) and "Securing Java: Getting Down to Business with Mobile Code" (Wiley, 1999). With Dr. Jeffrey Voas, one of RST's founders, McGraw wrote "Software Fault Injection: Inoculating Programs Against Errors" (Wiley, 1998). McGraw writes regularly for popular trade publications, and his writing is often quoted in the national press.

John Viega is a senior research associate at Reliable Software Technologies. He has researched many security-related topics, including static and dynamic vulnerability detection techniques for source code and binaries, mobile agent security, the security of e-commerce systems, and malicious code detection. His research also extends to software assurance, program testability, and programming language design. John has been closely involved in RST's security consulting practice. He holds an M.S. in computer science from the University of Virginia. He is also an active member of the open source software movement and is the author of Mailman, the GNU mailing list manager.
