This principle really ought to be called "Create assemblies that are reasonably sized and contain a small number of public types." That title was too long, so I named it after what I believe is the most common mistake: developers throw everything but the kitchen sink into a single assembly. That practice hurts component reuse, and it also makes it harder to update small and medium-sized parts of a system. Many smaller assemblies, shipped as binary components, make both of these tasks simpler.
That said, this title also highlights assembly cohesion. The cohesion of an assembly is the degree to which a single component is responsible for a single concept. A cohesive component can be described in one simple sentence, and you can see this in many of the FCL assemblies in .NET. Two simple examples: the System.Collections assembly provides data structures for ordered collections of related objects, and the System.Windows.Forms assembly provides the model of Windows control classes. Web Forms and Windows Forms live in different assemblies because they are unrelated. You should be able to describe your own assemblies in one simple sentence in the same way. Don't cheat: "the MyApplication assembly provides everything you need" is indeed one sentence, but it is a lazy one, and you will probably write My2ndApplication later and want to reuse some of that content. That "some of the content" belongs in a separate assembly; an application does not have to use every feature an assembly provides.
That does not mean you should create assemblies containing just one public class each. There is a middle ground: if you go too far and create too many assemblies, you lose some of the benefits of encapsulation. First, you lose the chance to use internal types to hide implementation details inside an assembly (see Principle 33). An internal type can be accessed only by classes in the same assembly; access from outside the assembly is restricted. Second, the JIT compiler can work much more efficiently within a single assembly than when calls shuttle between several assemblies. In other words, it pays to put related types in the same assembly. The goal is to create the most appropriate assembly for each component, and that goal is easiest to reach when a component has a single responsibility.
In a sense, an assembly is the binary analogue of a class. We use classes to encapsulate algorithms and data storage; only the public interface forms the "official" contract, that is, only the public members can be accessed by users. Similarly, an assembly provides a binary package for a set of related classes, and only the public (and protected) classes are visible outside it. Utility classes can be internal to the assembly. True, internal classes have broader access than private nested classes, but they give you a mechanism for sharing a common implementation within an assembly without exposing it to all users. So encapsulate related classes together and factor your application into multiple assemblies along those lines.
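A minimal sketch of that idea follows (the names OrderValidator and StringNormalizer are hypothetical, not from the text): the public class is the assembly's contract, while the internal helper is shared by every type in the assembly but invisible to callers outside it.

```csharp
namespace MyCompany.Orders
{
    // Visible to consumers of the assembly: part of the "official" contract.
    public class OrderValidator
    {
        public bool IsValidCustomerName(string name) =>
            StringNormalizer.Normalize(name).Length > 0;
    }

    // Internal: usable by every type in this assembly, invisible outside it.
    internal static class StringNormalizer
    {
        internal static string Normalize(string input) =>
            (input ?? string.Empty).Trim();
    }
}
```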
In fact, using multiple assemblies makes many different deployment options much easier. Consider a three-tier application in which part of the program runs as a smart client and another part runs on the server. On the client you supply a set of validation rules so that users get immediate feedback as they enter and modify data. On the server you repeat those rules and add further validation, so the server-side checks are stricter. The server-side business rules are the complete set; each client carries only a subset.
Certainly, you could build different assemblies for the client and server business rules by reusing source files, but that complicates your release process: whenever you update the rules, you have two builds and two installations to get right. Instead, factor the validation shared with the client out of the strict server-side validation and package it in its own assembly that ships to the client. That way you reuse binary objects packaged as assemblies, which is far better than reusing source code and recompiling it into multiple assemblies.
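A hedged sketch of that deployment, with hypothetical names: a core validation assembly ships to both tiers, and a server-only assembly layers the stricter rules on top of the shared subset, so updating the shared rules means redeploying one binary.

```csharp
// Validation.Core.dll - deployed to both the smart client and the server.
namespace Validation.Core
{
    public static class CustomerRules
    {
        public static bool HasRequiredFields(string name, string email) =>
            !string.IsNullOrWhiteSpace(name) && !string.IsNullOrWhiteSpace(email);
    }
}

// Validation.Server.dll - deployed only to the server; references Validation.Core.dll.
namespace Validation.Server
{
    public static class ServerCustomerRules
    {
        public static bool IsAcceptable(string name, string email) =>
            Validation.Core.CustomerRules.HasRequiredFields(name, email)
            && email.Contains("@");   // stand-in for the stricter, server-only checks
    }
}
```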
An assembly should be a library of related functionality, packaged as one structural unit. That is easy to say but hard to achieve in practice. For a distributed application, you may not know in advance which classes will end up on the server and which on the client. Even if you do, the split between server-side and client-side functionality is likely to shift over time; you will probably have to redeploy both sides in the future. By keeping assemblies small, you make it easier to redeploy new combinations to the servers and the clients. An assembly is a binary building block of an application, which makes it easy to plug a new component into a working application. If you are going to get the factoring wrong, erring on the side of too many small assemblies costs far less than erring on the side of one monolithic assembly.
I often think of assemblies and binary components as being like LEGO bricks. You can easily pull out one LEGO brick and snap in another. In the same way, you should be able to pull out an assembly and replace it with another that exposes the same interfaces, and the rest of the application should keep running as before. The analogy goes a little further: if all of your parameters and return values are interfaces, then any assembly can easily be replaced by another that implements the same interfaces (see Principle 19).
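A small sketch of that "LEGO" swap, with hypothetical names and assembly boundaries shown as comments: the application depends only on the interface, so the implementation assembly can be replaced by any other assembly that implements the same interface.

```csharp
// Contracts.dll: the interface both sides agree on.
public interface IDataCache
{
    void Store(string key, byte[] payload);
    byte[] Load(string key);
}

// FileCache.dll: one interchangeable implementation.
public class FileDataCache : IDataCache
{
    public void Store(string key, byte[] payload) => System.IO.File.WriteAllBytes(key, payload);
    public byte[] Load(string key) => System.IO.File.ReadAllBytes(key);
}

// The application sees only IDataCache, so FileCache.dll can later be swapped
// for another assembly that ships a different IDataCache implementation.
public class CacheConsumer
{
    private readonly IDataCache _cache;
    public CacheConsumer(IDataCache cache) => _cache = cache;
    public void Save(string key, byte[] data) => _cache.Store(key, data);
}
```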
Smaller assemblies also let you amortize the cost of application startup. The larger an assembly, the more CPU time it takes to load and the longer it takes to compile the required IL into machine instructions. Only the routines needed at startup are JIT-compiled then, but the assembly is loaded as a whole, and the CLR creates a stub for every method in the assembly.
Let's take a short break to make sure we don't carry this to an extreme. The point of this principle is to make sure you do not build a single monolithic program, but instead build a system from binary, reusable components. Do not take the principle to the opposite extreme, either. There is overhead in a large application built from too many small assemblies: if your program uses many assemblies, shuttling between them generates extra cost, and the CLR loader has a little more work to do when loading each assembly and converting its IL to machine instructions, namely fixing up function entry addresses.
Similarly, security checks add cost when control passes between assemblies. All code within one assembly runs at the same level of trust (not the same access level, but the same trust level). Whenever code calls across an assembly boundary, the CLR performs some security checks. The less time your program spends crossing assembly boundaries, the more efficient it will be.
None of these performance notes should discourage you from splitting a large assembly into smaller ones. The performance cost is secondary: the design of C# and .NET is centered on components, and the greater flexibility this buys is usually worth more.
So how much code, or how many classes, should go into one assembly? More importantly, how do you decide which code belongs together? The answer depends heavily on the application, so there is no fixed rule. My recommendation: start by looking at all of your public classes, and merge classes that share a common base class into one assembly. Then add the handful of utility classes whose main job is to support those related classes. Package the related public interfaces into a separate assembly. Finally, look at the classes that are used horizontally across your application; they are candidates for a broadly used utility assembly that belongs in your application's toolkit.
The result is that each of your components holds one small set of related classes: just a few necessary public classes plus the utility classes that support them. That way you create assemblies that are small enough to benefit from easy updating and reuse, while minimizing the overhead of having many assemblies. A well-designed, cohesive component can be summed up in one sentence. For example, "Common.Storage.dll manages the offline data cache and all of the user settings" describes a component with low cohesion. Instead, make it two components: "Common.Data.dll manages the offline data cache" and "Common.Settings.dll manages user settings." Once you have split them, you might add a third component, "Common.EncryptedStorage.dll manages the file system IO for local encrypted storage," and you can then update these three components independently.
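As a hedged illustration of that split (the type names below are hypothetical; only the assembly responsibilities come from the text), each assembly exposes one small public surface that matches its one-sentence description:

```csharp
// Common.Data.dll: "manages the offline data cache" - and nothing else.
namespace Common.Data
{
    public class OfflineDataCache
    {
        private readonly System.Collections.Generic.Dictionary<string, object> _items =
            new System.Collections.Generic.Dictionary<string, object>();

        public void Add(string key, object value) => _items[key] = value;
        public object Get(string key) => _items.TryGetValue(key, out var value) ? value : null;
    }
}

// Common.Settings.dll: "manages user settings" - and nothing else.
namespace Common.Settings
{
    public class UserSettings
    {
        public string Theme { get; set; }
        public void Save() { /* persist settings for the current user */ }
    }
}
```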
"Small" is relative. Mscorlib.dll is roughly 2MB; System.Web.RegularExpressions.dll is only about 56KB. Yet both meet the core design goal of a reusable assembly: each contains a set of related classes and interfaces. The difference in absolute size comes from their function: mscorlib.dll contains the low-level classes used by every application, whereas System.Web.RegularExpressions.dll is very specialized, containing only the regular-expression classes used by web controls. You end up with two kinds of components: small, focused assemblies for specific features, and larger, widely used assemblies containing common functionality. In either case, make them as small as they can reasonably be, but no smaller.