Learn About the New Features of Code Access Security (CAS) in the .NET Framework 2.0

This article describes the following:

CAS Overview

Sandbox technology and trust level

Developing hosts and frameworks

AppDomain and security

This article covers the following technologies:

.NET Framework 2.0, Visual Studio 2005

Contents of This Page
Why Use CAS?
Understanding Sandbox Permissions
Hosts and Frameworks
AppDomains and Security
Defining the Sandbox
How to Host
CAS and Frameworks
Security-Transparent Code
Using Transparency
Summary

The Microsoft® .NET Framework uses a variety of security technologies: role-based security in the base class library (BCL) and ASP.NET, the cryptography classes in the BCL, and new support for working with access control lists (ACLs), to name just a few. Code access security (CAS) is one member of the .NET security family, provided by the common language runtime (CLR). This article explores the role CAS plays in .NET security and some of the major new features and changes for CAS in the .NET Framework 2.0.

Most developers working with the .NET Framework only need to know that CAS exists; they never have to learn much more. Like floating-point numbers, CAS is a feature of the Framework that is essential for some applications but irrelevant to most.

Why Use CAS?

When people ask me about access control, they usually want to hear about role-based security, which controls access to resources based on user identity. CAS can be harder to grasp because it is based not on the identity of the user but on the identity of the running code: where the code came from (the local computer or the Internet, for example), who built it, and who signed it. Based on this "evidence" associated with the code and its identity, the system restricts what the code can do and which resources it can access.

Why limit what code can do based on its identity? Often it isn't necessary. For example, suppose you are running an advanced, expensive graphics-editing program. You allow it to access any resources (files, registry settings, and so on) that you, as the user running the program, can access. You know and trust the publisher of the software and are willing to give it high-level access to the platform it runs on. To avoid accidental damage to your computer, you typically still want the application to run under a user account with the fewest possible privileges, but in cases like this the trust level of the application itself is not a security consideration.

In some cases, however, you need to run code without knowing or fully trusting its author. The most common case is browsing the Web. Modern browsers routinely run code from the Web sites you visit. That code is usually JavaScript or DHTML, but it can also be other forms of executables that the browser recognizes. When the browser runs such code on your computer, it places it in a sandbox. A sandbox is a restricted execution environment that controls which computer resources the code is allowed to access. Sandboxes are essential; without them, effectively anonymous code from the Internet could read your private files and install viruses on your computer. Many browsers provide different levels of sandboxing based on where the code being run comes from. For example, Microsoft Internet Explorer defines the concept of zones: code from the Internet zone can do relatively little compared to code from the Trusted Sites zone. Security vulnerabilities arise when the software implementing the sandbox rules has flaws, or when a user is spoofed into approving the execution of code outside the sandbox.

Running code from a Web site in a browser is the classic scenario for sandboxing and code identity-based security. CAS in the CLR provides a general mechanism for sandboxing code based on its identity; this is the main benefit and main use of CAS. Today, Internet Explorer uses CAS to sandbox managed code (controls or stand-alone applications) that runs from a Web site in the browser. CAS can also sandbox applications run from a local intranet network share. In addition, Visual Studio® Tools for Office (VSTO) uses it to host managed add-ins for Microsoft Office documents. On the server, ASP.NET uses it for Web applications, and SQL Server™ 2005 uses it for managed stored procedures. CAS shows up in many scenarios, usually working quietly in the background.

Understanding Sandbox Permissions

Developers often ask what they need to know about CAS. For application developers, the answer depends on whether the application you write (for example, a managed control in Internet Explorer) runs in a sandbox. If it does, you need to know the following:

What you can do in the sandbox

Are these permissions sufficient for the application to run successfully

If not, how to raise the application's trust level to obtain more permissions

For example, a managed control in a browser runs with a default sandbox permission set: the Internet permission set. In terms of resource access, this sandbox allows an application to create "safe" user interface elements (transparent windows, for example, are considered unsafe because they can be used for spoofing or man-in-the-middle attacks), to make Web connections back to its site of origin, to access the file system and printers with the user's consent, and to store a limited amount of data (similar to Internet Explorer cookies) in isolated storage. It does not allow the application to read arbitrary parts of the file system, the registry, or environment variables. The sandbox does not allow the application to connect to a SQL Server database, nor to call COM objects or other unmanaged code. Different hosts can define different sandboxes; for example, SQL Server and ASP.NET define their own permission sets for their low-trust applications. Figure 1 outlines these sandbox permissions.

Once you know what the sandbox options are, you can determine whether a sandboxed application meets your needs. For example, if you plan to connect to a SQL database from a managed control in Internet Explorer, your control cannot run in the default sandbox. You must either change the design (say, by using a Web service deployed on the control's server to access the data) or raise the trust level of the control on client computers. For managed controls in the browser, raising the trust level means deploying CAS policy so that the control's site of origin and the control itself are trusted. Other hosts have different mechanisms for trusting applications; to change the trust level of an ASP.NET application, you change a setting in its configuration file.
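To make the ASP.NET case concrete, the trust level is set with the trust element in configuration. A minimal sketch of such a web.config fragment (Medium, shown here, is one of ASP.NET's built-in trust levels):

```xml
<!-- Run this ASP.NET application at the built-in Medium trust level -->
<configuration>
  <system.web>
    <trust level="Medium" />
  </system.web>
</configuration>
```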

In some cases, your application must run in a sandbox, because users are unlikely to install and run it any other way. Suppose you are writing a simple survey control for a Web site: the control presents a few questions to users and records their feedback. Most people will not download and run a fully trusted application for something that simple. Sometimes, though, even when you are writing an application that will be trusted, it is still useful to keep it able to run in the sandbox. Figure 2 shows the trust spectrum from sandboxed to non-sandboxed. If your application must run with full trust, make sure it still behaves well in a partial-trust environment [for more information, see the sidebar "Responding Properly to Partial Trust"].

In general, a sandboxed application is much easier to deploy. For example, with ClickOnce, a sandboxed application simply runs without any action by the user, while a high-trust application must either be signed with a trusted certificate or display a prompt to the user [for more about ClickOnce, see "ClickOnce: Deploy and Update Smart Client Projects Using a Central Server" and "Escape DLL Hell: Simplify App Deployment with ClickOnce and Registration-Free COM"].

Sandboxing an application can also improve reliability. For example, running as many managed stored procedures as possible in SQL Server at a low trust level means less code has full access to the server, and therefore less code that can damage the server (intentionally or unintentionally).

Visual Studio 2005 significantly improves support for building sandboxed client applications that use ClickOnce deployment. The Security pane of the project property pages is where most of this happens (see Figure 3). There you can enable security settings and set the target zone for your application; for a sandboxed application, you would typically select the Internet zone. You can then use the Calculate Permissions button to get a rough estimate of the permissions your application requires and whether it is suitable for running in a sandbox (the same calculation is available through the PermCalc command-line tool in the .NET Framework 2.0 SDK).

Figure 3 Visual Studio Security Pane

Keep in mind that the calculated result is only a rough (usually conservative) estimate based on purely static analysis. You must test by actually running the application with the permissions it will be granted. To do that, simply debug the application after setting the target zone: Visual Studio will run it with the permissions you specified in the Security pane.

Hosts and Frameworks

If you are writing a host or a framework, there is much more to know about sandboxing than the basics above. Host developers need to know how to host low-trust code. Framework developers need to know how to write a framework or library that safely exposes functionality to low-trust code.

But before discussing these issues, I'd like to clear up a common confusion about running code with least privilege. Running with least privilege means running code with the minimum permissions required to do the job. This limits the potential damage when the code contains an exploitable vulnerability. On Windows®, it is always best to run code under a least-privileged user identity (for example, as a regular user rather than an administrator). For CAS, running with least privilege means the application or library runs with the minimum CAS permission set required for its task. While running with least privilege is usually good practice, it has its limits. For example, almost all host or framework code must run with powerful permissions beyond the default sandbox. Such code frequently needs to call other managed code and the Win32® API, and to control resources on the machine such as files, registry keys, processes, and other system objects. Since this code already requires a high level of permission, and since many powerful permissions can be parlayed into full trust, it is often not worth the effort, from a security standpoint, to carve one or two permissions out of its grant.

Your time is better spent auditing and testing the code to make sure it is safe in the face of malicious callers. (In a CAS environment, running with full trust means the code can do whatever the user running it can do on the system.) If the process running full-trust code does not have administrator rights, even fully trusted code does not get unlimited access to the machine's resources. To make this easier, the .NET Framework 2.0 adds a new technology called security-transparent code. I'll discuss it along with the other framework-development issues later; for now, let's get back to hosting.

In general, hosting means using the CLR to execute code inside another application environment. For example, SQL Server 2005 hosts the runtime to execute stored procedures written in managed code. I'll start with a review of AppDomains, focusing on the role hosting plays in security; hosting involves much more than that, of course. For more information, see Customizing the Microsoft .NET Framework Common Language Runtime (Microsoft Press®, 2005) by Steven Pratschner.

AppDomains and Security

AppDomains provide isolation within a process for managed code. In other words, each AppDomain has its own set of state, and verifiable code in one AppDomain cannot interfere with code or data in another AppDomain unless the host environment creates interfaces that allow them to interact. How is this possible? Verifiable, type-safe code (which the C# and Visual Basic® .NET compilers generate by default) cannot access memory arbitrarily. The runtime checks each instruction against a set of verification rules to ensure that it accesses memory in a type-safe way. When running verifiable code, the runtime can therefore guarantee AppDomain isolation, and it prevents unverifiable code from running in a sandbox.
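You can check whether a particular assembly is verifiably type-safe with the PEVerify tool that ships with the .NET Framework SDK (the assembly name below is a placeholder):

```
peverify MyControl.dll
```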

This isolation lets a host safely run code at different trust levels in the same process: low-trust code can run in a different AppDomain from trusted host code or from other low-trust code. How many AppDomains are needed to host low-trust code depends on the host's isolation semantics. For managed controls, for example, Internet Explorer creates one AppDomain per site. Multiple controls from the same site can interact within the same AppDomain but cannot interfere with (or maliciously exploit) controls from another site.

Figure 4 AppDomain Firewalls

CAS allows you to specify the trust level of code when it is loaded (by mapping evidence to permissions through policy). It also allows you to specify a trust level when you create an AppDomain. In the .NET Framework 1.1, you do this by supplying evidence for the AppDomain, which policy maps to a permission set. Whenever control flow crosses into the AppDomain, the AppDomain's permissions are pushed onto the stack, just like an assembly's permissions. When a stack walk is performed, the AppDomain's grant is considered when evaluating a demand. This mechanism, known as the AppDomain firewall (see Figure 4), prevents luring attacks across AppDomains and provides another means of isolating code between AppDomains.

CAS also allows assemblies to be loaded into one AppDomain at different trust levels (with different permissions), but this is not the recommended way to sandbox code safely. Because all code in an AppDomain shares state, it is difficult to prevent elevation of privilege between the assemblies within that AppDomain. For example, suppose you load one assembly granted only execution permission and another granted the LocalIntranet permission set. Unless the code is written with great care about this situation, the execution-only assembly may be able to elevate itself to the LocalIntranet grant. You also lose the benefit of the AppDomain firewall. So, as mentioned earlier, it is better to create multiple AppDomains to host code that requires isolation, even when that code runs at the same trust level (Internet Explorer, for instance, uses the same trust level for controls from different sites).

Therefore, rather than loading assemblies with different trust levels into a single AppDomain, I recommend using a separate AppDomain for each logical chunk of low-trust code (for example, each site in Internet Explorer) and simplifying the security model. Each AppDomain then has just two trust levels:

The first is fully trusted platform code, which includes the .NET Framework and the trusted host code that interacts with the low-trust code.

The second is the low-trust code itself, which runs at a single trust level rather than a mix of trust levels.

Figure 5 AppDomain Trust

This model is easier to understand and more secure, which is one reason ClickOnce uses it. For any ClickOnce application, there are always two levels of trust for code running in the AppDomain: platform code running at full trust and application code running at the trust level specified in the application manifest. The CLR security system is optimized for this mode, so code that follows the model works well. Figure 5 illustrates the two trust levels.

Defining the Sandbox

Another hosting task is to define the sandbox, the permission set you grant to low-trust code. As mentioned earlier, the CLR, like other hosts such as ASP.NET, defines certain permission sets for sandboxes. The two relevant sets the CLR defines are the Internet and LocalIntranet permission sets. The Internet permission set is intended for safely running anonymous, potentially malicious code. Microsoft has audited it for that purpose, and it is the only permission set that has been thoroughly tested for that situation, so it should be the default choice for hosting low-trust code. The LocalIntranet permission set, while preventing malicious code from taking over the computer, can reveal certain information that a user would not want anonymous code to obtain, such as the user name of the currently logged-on user. It is therefore more appropriate for code that is trusted to some degree, such as code on your organization's intranet.

Hosts such as ASP.NET also define their own permission sets. For example, the ASP.NET Medium trust level uses both CAS and Windows identity-based security to restrict code. A member of our team coined an apt word for such a set: a sand dune. A "sand dune" is a restricted permission set that does not, by itself, prevent malicious code from gaining unauthorized access to computing resources. It is useful, however, in combination with other security enforcement mechanisms, and for keeping trusted code honest. This technique is typically used for reliability, when multiple applications share computing resources and you want to keep one badly behaved application from taking down the server.

How to Host

How is all this accomplished? In the .NET Framework 1.1, you create an AppDomain, passing in evidence from which the AppDomain's permissions are computed. You then build an AppDomain policy level that grants full trust to platform and host code and the low trust level to all other code, and set that policy level on the AppDomain. Finally, you call into trusted hosting code in the new AppDomain to bootstrap the low-trust code.

In the .NET Framework 2.0, this process is simplified:

1. Create the AppDomain using the simple new sandboxing API. In this step you set the AppDomain's sandbox permission set and the list of trusted host assemblies, without creating an AppDomain policy level.

2. Call into trusted hosting code in the new AppDomain to bootstrap the low-trust code.
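A minimal sketch of these two steps, assuming the .NET Framework 2.0 simple-sandboxing overload of AppDomain.CreateDomain (the path, the assembly name, and the execution-only grant set below are placeholder choices; a real host might grant the Internet permission set and pass StrongName objects for its trusted assemblies):

```csharp
using System;
using System.Security;
using System.Security.Permissions;

class Host
{
    static void Main()
    {
        // Step 1: build the sandbox grant set (execution-only here).
        PermissionSet grantSet = new PermissionSet(PermissionState.None);
        grantSet.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));

        AppDomainSetup setup = new AppDomainSetup();
        setup.ApplicationBase = @"C:\Sandboxed"; // where the untrusted code lives

        // The simple sandboxing API: assemblies loaded from ApplicationBase
        // receive grantSet; any StrongNames passed after it get full trust.
        AppDomain sandbox = AppDomain.CreateDomain(
            "Sandbox", null, setup, grantSet);

        // Step 2: bootstrap the low-trust code inside the new domain.
        sandbox.ExecuteAssemblyByName("UntrustedApp"); // hypothetical assembly
    }
}
```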

For full details on these topics, see Shawn Farkas's article in this issue. Shawn describes how to perform these steps and discusses the techniques for doing them securely. Although the simple sandboxing model is sufficient in many cases, you can take further control over AppDomain creation through the AppDomainManager class. Shawn's article also describes how to use AppDomainManager to implement custom policy behavior when needed.

CAS and Frameworks

Now I'll turn to the CAS-related situation another kind of developer encounters: framework building. Frameworks are libraries of reusable classes intended for use by other applications. A framework can be a single assembly with a few types, or a large set of assemblies with many types, like the .NET Framework itself. Frameworks are typically deployed in the global assembly cache (GAC) so that multiple applications can use them. They are platform components that run at full trust and can use all the functionality the system provides. This topic is also relevant to host developers, because they typically have to build at least a limited framework that the hosted code interacts with, if not a full-blown one.

Framework developers who want to expose functionality to code that is not fully trusted must understand CAS. Take building a sound library as an example. You might define a custom permission, say SoundPermission, and demand that permission when the method that plays a sound is invoked. If the demand succeeds, you would assert the permission to call unmanaged code and then call the Win32 API to play the sound.
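A sketch of that demand-then-assert pattern. SoundPermission is the hypothetical custom permission named in the text; the stub below only illustrates its shape (a real permission would model state and implement the XML round-trip and set operations properly):

```csharp
using System;
using System.Runtime.InteropServices;
using System.Security;
using System.Security.Permissions;

// Minimal stub of the hypothetical custom permission.
public sealed class SoundPermission : CodeAccessPermission
{
    public override IPermission Copy() { return new SoundPermission(); }
    public override IPermission Intersect(IPermission target)
    { return target == null ? null : Copy(); }
    public override bool IsSubsetOf(IPermission target)
    { return target != null; }
    public override SecurityElement ToXml()
    { return new SecurityElement("IPermission"); }
    public override void FromXml(SecurityElement e) { }
}

public static class SoundLibrary
{
    [DllImport("winmm.dll", CharSet = CharSet.Auto)]
    static extern bool PlaySound(string sound, IntPtr hmod, uint flags);

    public static void Play(string file)
    {
        // Demand: every caller on the stack must have SoundPermission.
        new SoundPermission().Demand();

        // Assert: stop the UnmanagedCode demand raised by the P/Invoke
        // here, so it does not flow up to the low-trust caller.
        new SecurityPermission(SecurityPermissionFlag.UnmanagedCode).Assert();
        PlaySound(file, IntPtr.Zero, 0);
    }
}
```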

A more common scenario when building a framework is interacting with the existing system permissions. For example, suppose you want to implement a math library that can be called from low-trust code. If your library only performs calculations and never touches system resources, it can essentially ignore CAS. But suppose that, during initialization, the math library reads a few environment variables to help decide how to optimize its calculations. You would need to audit that code to make sure it is safe to call from a low-trust environment. To the low-trust caller, the call must be completely opaque: the caller must not be able to control which environment variables are checked, and must not be able to obtain their values. For the call to succeed in a low-trust environment, you must assert EnvironmentPermission in order to read the environment variables.

An assert is an elevation of privilege. When framework code executes an assert, it can perform actions that its callers do not normally have permission to perform (the framework itself must have both the permission being asserted and the permission to assert). Code that asserts must be carefully audited to ensure the assert is safe. That usually means checking that incoming parameters are validated and canonicalized (where applicable), that no inappropriate data leaks back to low-trust code, and that low-trust code cannot improperly manipulate the state of high-trust code to lure it into performing an unsafe operation later (a luring attack). Asserts are usually easy to spot: you can find them by searching the source code, and there are FxCop rules that flag them.

Satisfying a link demand is a harder-to-spot, and therefore potentially more dangerous, elevation of privilege. When a method has a link demand, only its immediate caller is checked, at the time the caller is just-in-time (JIT) compiled. A link demand is thus effectively a one-level demand. It is dangerous because trusted code can satisfy a link demand and then turn around and expose the functionality to low-trust code without realizing that it satisfied the link demand at all. One policy trusted code can adopt to stay safe is to avoid executing any asserts and simply let all demands flow through to its callers. For a link demand, however, the trusted code performs an implicit assert. To be fully safe, trusted code must also convert any link demand it satisfies into a full demand. Although some FxCop rules help find code that satisfies link demands, you cannot visually audit a piece of code and see that it satisfies one, which makes auditing more error-prone.
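A sketch of the conversion just described. DangerousApi and TrustedWrapper are hypothetical names; the wrapper re-issues the link demand it satisfies as a full, stack-walking demand:

```csharp
using System.Security.Permissions;

public static class DangerousApi
{
    // Link demand: only the immediate caller is checked, at JIT time.
    [SecurityPermission(SecurityAction.LinkDemand,
        Flags = SecurityPermissionFlag.UnmanagedCode)]
    public static void Poke() { /* ... */ }
}

public static class TrustedWrapper
{
    // This trusted method satisfies the link demand itself, so it
    // converts the check into a full demand that walks the whole stack.
    public static void Poke()
    {
        new SecurityPermission(SecurityPermissionFlag.UnmanagedCode).Demand();
        DangerousApi.Poke();
    }
}
```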

Security-Transparent Code

Transparency is a new feature in the .NET Framework 2.0 that helps framework developers write libraries that expose functionality to low-trust code more safely. You can mark an entire assembly, some classes in an assembly, or some methods in a class as security-transparent. Security-transparent code cannot elevate privilege. Specifically, there are three rules:

Security-transparent code cannot execute an assert

Any link demand that would be satisfied by security-transparent code is converted into a full demand

Any unverifiable code that must execute within security-transparent code raises a full demand for the SkipVerification security permission

These rules are enforced by the CLR during execution. In general, security-transparent code passes all security demands of the code it calls through to its own callers. Demands simply flow through the code; the code cannot elevate privilege. So if a low-trust application calls security-transparent code that triggers a demand for a high permission, the demand flows up to the low-trust code and fails. The security-transparent code cannot stop the demand even if it wants to. The same security-transparent code called from fully trusted code results in a successful demand.

There are three attributes related to transparency, summarized in Figure 6. When using transparency, you partition your code into security-transparent methods and security-critical methods (the opposite of security-transparent). Most code, which handles data manipulation and logic, can usually be marked security-transparent, while the small portion that actually elevates privilege is marked security-critical. So far, teams at Microsoft using transparency have been able to mark more than 80 percent of their code security-transparent, letting them focus auditing and testing on the 20 percent that is security-critical. For backward compatibility, any code without transparency attributes, in the .NET Framework 2.0 and earlier versions, is considered security-critical; you must opt in to transparency. There are also FxCop rules for transparency that help developers get the transparency rules right as they build their code, rather than debugging run-time failures later.

The reason for the SecurityTreatAsSafe attribute may not be obvious at first. Within an assembly, security-transparent code and security-critical code can be thought of as effectively split into two assemblies: security-transparent code cannot see the private or internal members of the security-critical code. In addition, security-critical code is typically audited at its public interface; you would not want private or internal state to be accessible from outside the assembly, and you want that state to stay isolated. The SecurityTreatAsSafe attribute was introduced to provide the same state isolation between security-transparent and security-critical code, while allowing it to be overridden where necessary. Security-transparent code cannot access private or internal members of security-critical code unless those members are marked with SecurityTreatAsSafe. Before adding SecurityTreatAsSafe, the author of the critical code should review the member as if it were public.
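A sketch of that boundary inside one assembly (the class and member names are hypothetical):

```csharp
using System.Security;

internal static class CriticalState
{
    // Critical, but reviewed as if it were public, so it is marked
    // TreatAsSafe and callable from the transparent half of the assembly.
    [SecurityCritical, SecurityTreatAsSafe]
    internal static int SafeCounter() { return 42; }

    // Critical only: security-transparent code in this assembly
    // cannot call this member, even though it is internal.
    [SecurityCritical]
    internal static void DangerousReset() { /* ... */ }
}
```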

Using Transparency

I'll use the math library as an example to illustrate the syntax of transparency. First, to opt in to transparency, you add the SecurityCritical attribute at the assembly level:

using System.Security;

// Opt in to allowing partially trusted callers
[assembly: AllowPartiallyTrustedCallers]
// Opt in to transparency; some code in this assembly is critical
[assembly: SecurityCritical]

Now, by default, all types in the assembly are security-transparent. Suppose, however, that the math library class's constructor needs to assert in order to read environment variables, while every other method in the library can be security-transparent. The class definition is shown in Figure 7.
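Figure 7 is not reproduced here; the following is a minimal sketch of what such a class might look like (the environment variable name and the optimization flag are hypothetical):

```csharp
using System;
using System.Security;
using System.Security.Permissions;

public class MathLibrary
{
    static readonly bool useFastPath;

    // The only security-critical member: it asserts in order to read
    // an environment variable during initialization.
    [SecurityCritical]
    static MathLibrary()
    {
        new EnvironmentPermission(
            EnvironmentPermissionAccess.Read, "MATHLIB_FAST").Assert();
        try
        {
            useFastPath =
                Environment.GetEnvironmentVariable("MATHLIB_FAST") == "1";
        }
        finally
        {
            CodeAccessPermission.RevertAssert(); // keep the elevation scoped
        }
    }

    // Everything else remains security-transparent by default.
    public static long Square(long x) { return x * x; }
}
```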

Transparency lets you build a framework in which most code runs in the environment of its caller, while the code that can elevate permissions is explicitly marked. That way you can focus security attention on the most sensitive code (which becomes increasingly important as a code base grows) and reduce the cost of building and maintaining code that is exposed to low-trust callers.

Summary

This article has discussed the default sandbox and the many possibilities it offers. Developers who want to host low-trust code or extend the platform must know more about CAS. With the new features of the .NET Framework 2.0, it is easier to host low-trust code and to extend the platform more securely. As managed code is applied more and more widely, CAS continues to evolve to make sandboxing low-trust code easier and to let developers build more secure platforms [for additional CAS changes in the .NET Framework 2.0, see the sidebar "Other New Features of CAS"]. CAS in the .NET Framework 2.0 is designed to let more developers take full advantage of these scenarios and achieve more secure extensibility.

Mike Downen is the security program manager for the CLR team. He works on code access security, the cryptography classes, and the ClickOnce security model. You can read Mike's blog and contact him at blogs.msdn.com/clrsecurity.
