Patterns of Enterprise Application Architecture: Reading Notes and Summary


1. Chapters 1 through 4 mainly describe the basic ideas of layered architecture, object-relational mapping, and the book's basic architecture diagrams.

2. Chapter 5 lays the groundwork for the later chapters: concurrency, transactions, locks (for example, pessimistic and optimistic offline locks), and the ACID properties.

3. Chapter 6 describes session state.

4. Chapter 7 covers the basics of distributed architecture and warns against over-design and overuse of distribution, because cross-process access consumes a lot of resources. It states the First Law of Distributed Object Design: do not distribute your objects. Distributed objects should be used sparingly and deliberately so that the power of a cluster can still be exploited (in practice, objects do end up in different processes). Within a single platform, prefer the remote-call mechanism the platform itself provides, and only then fall back to web services.

5. Chapter 8 describes the basics of organizing domain logic, introducing three patterns: Transaction Script, Table Module, and Domain Model. Transaction Script writes SQL (or procedural code) directly against the business problem; it suits simple applications and has a low up-front cost. Table Module relies mainly on platform constructs such as DataSet and DataTable to operate on the data directly, which is convenient and quick for small applications and is especially comfortable in .NET. In Domain Model, everything is done in the form of objects: database values are mapped onto classes and the business logic is written against those classes in a heavily object-oriented way. This is too costly for small applications, but for systems that are complex and keep growing, its cost gradually falls below that of the other two. In practice a system rarely uses only one of these patterns; several usually coexist, and it took me a long time to understand the differences between them and to recognize them inside a real system.
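
To make the contrast in point 5 concrete, here is a minimal sketch (not from the book) of the same tiny rule written first as a Transaction Script and then on a Domain Model object; the table, column, and class names are made up for illustration.

```csharp
// Transaction Script: a procedure that works straight against the database.
using System.Data.SqlClient;

public static class OrderScripts
{
    public static void ApplyDiscount(string connectionString, int orderId, decimal percent)
    {
        const string sql = "UPDATE [Order] SET Total = Total * (1 - @p) WHERE Id = @id";
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@p", percent);
            command.Parameters.AddWithValue("@id", orderId);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}

// Domain Model: the same rule lives on an object mapped from the database row.
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }

    public void ApplyDiscount(decimal percent)
    {
        Total = Total * (1 - percent);   // business rule expressed on the object
    }
}
```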

6. Chapters 1 through 8 form the first part of the book. They lay the foundation for the second part so that the reader can move into the pattern catalog smoothly; ideas that are clear from point 5 above are not repeated in what follows.

Chapter 9 describes Transaction Script, Domain Model, and Table Module in more depth (see point 5 above), and adds a description of the Service Layer. The main idea is to encapsulate the internal business logic (organized with any of the three patterns) behind a coarse-grained (or sometimes finer-grained) access interface; it is a way of packaging the logic, not necessarily a literal interface type.
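
A minimal sketch of the Service Layer idea just described, assuming a hypothetical OrderService with made-up operation names; it is not the book's code, just the shape of a coarse-grained entry point over the domain logic.

```csharp
public class OrderService
{
    // One coarse-grained operation; callers never see how the steps below
    // are implemented (Transaction Script, Table Module, or Domain Model).
    public void PlaceOrder(int customerId, int productId, int quantity)
    {
        ValidateCustomer(customerId);                         // fine-grained step 1
        ReserveStock(productId, quantity);                    // fine-grained step 2
        CreateOrderRecord(customerId, productId, quantity);   // fine-grained step 3
    }

    private void ValidateCustomer(int customerId) { /* ... */ }
    private void ReserveStock(int productId, int quantity) { /* ... */ }
    private void CreateOrderRecord(int customerId, int productId, int quantity) { /* ... */ }
}
```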

Chapter 10 describes Table Data Gateway, Row Data Gateway, Active Record, and Data Mapper.

  1. Table Data Gateway: all the data in a table is read from the database and then filtered in memory; selects, updates, and inserts go through ADO.NET objects such as DataSet and DataTable (for example, calling the data adapter's Update method directly). The basic idea is a two-step read: first fetch everything in the table (select * from table, note: no where clause), then use the DataTable filter or Select(filter) with a condition (such as a known ID) to pick out the row you need (see the sketch after this list). It is one expression of the Table Module style.
  2. Strengths and weaknesses of Table Data Gateway: large amounts of data have to be read into memory, which costs memory and puts pressure on the CPU; but for small data sets, or when the same data is operated on repeatedly, working in memory is faster than reading from the database every time. Updates are less convenient, and the data may be stale: comparing against the DataSet does not reflect the latest rows in the database in real time.
  3. Row Data Gateway: when querying, the specific row is selected directly from the database in the style of "select * from table where id = @id". Its operations are carried out entirely against the database rather than against in-memory ADO.NET objects as with the Table Data Gateway; updates and deletes likewise carry a where condition and act on the database rows directly. It is one expression of the Transaction Script style.
  4. Strengths and weaknesses of Row Data Gateway: the database selects exactly the data that is needed, so reads are precise. But when a lot of data has to be read, the many database operations put heavy pressure on the database, and reading similar data over and over can hurt performance.
  5. Active Record: each row in the database is mapped to an object and the related logic is carried out inside that object, which depends on data mapping: database rows must be converted into class instances. The data needed for insert, update, delete, and query is passed around as objects; in .NET, instead of a DataSet, you can use generic collections to move this data between objects. It is one expression of the Domain Model style.
  6. Strengths and weaknesses of Active Record: it depends on data mapping and in most cases is less efficient than the other two; there is also more code, and more objects (classes) have to be created. On the other hand it is convenient to write against, and when requirements change it keeps the amount of code change and the maintenance effort low in large systems.
  7. Data Mapper: maps database rows to objects so that they are easy to use directly, instead of writing SQL scripts every time object data is needed. Many ORMs, such as EF and LINQ to SQL, are built on this idea. It saves developers who do not know the database language well: with a little extra effort the data is read from the database and mapped onto classes. Its performance has some drawbacks, but today's mappers are good enough that using them in small and medium systems is entirely acceptable.
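
As a concrete illustration of the two-step read described in item 1, here is a minimal sketch, assuming a hypothetical Person table with an Id column and classic ADO.NET (System.Data.SqlClient); it is not the book's code, just one way the gateway-plus-in-memory-filter idea can look in .NET.

```csharp
// A Table Data Gateway in the two-step style of item 1: pull the whole table,
// then filter in memory with DataTable.Select.
using System.Data;
using System.Data.SqlClient;

public class PersonGateway
{
    private readonly string _connectionString;
    public PersonGateway(string connectionString) { _connectionString = connectionString; }

    // Step 1: read everything (no WHERE clause) into a DataTable.
    public DataTable FindAll()
    {
        var table = new DataTable("Person");
        using (var adapter = new SqlDataAdapter("SELECT * FROM Person", _connectionString))
        {
            adapter.Fill(table);
        }
        return table;
    }

    // Step 2: pick rows in memory using a filter expression (e.g. a known Id).
    public DataRow[] FindById(DataTable allRows, int id)
    {
        return allRows.Select("Id = " + id);
    }

    // Changes made to the DataTable can later be pushed back with a
    // SqlCommandBuilder and adapter.Update(table), as the notes mention.
}
```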

Chapter 11 describes Unit of Work, Identity Map, and Lazy Load.

  1. Unit of Work: the basic idea is to gather a group of related changes and commit them in one go. The objects involved are registered ahead of time in dedicated lists, and a single commit method then does the work: it inserts the newly registered objects, updates the dirty ones, and deletes the removed ones (see the sketch after this list). The pattern plays the same role as a transaction in the database, but brings that idea into the application code.
  2. Identity Map: each object that has been loaded is put into a temporary variable or cache keyed by an identifier such as its ID. When an object is wanted, it is first looked up in that map by ID; only if it is not found there is the database queried, and the result is then cached.
  3. Lazy Load: during a computation only the keys related to it (such as IDs) are carried around; when the computation needs the rest of the data, the key (ID) is used to fetch the full object from the database (or another store). The key, carrying only one or a few pieces of real data, acts as a shadow of the real object. Typical implementations: virtual proxy, value holder (similar in spirit to the Identity Map lookup), and ghost. Loading too much through lazy loading leads to the ripple loading problem.
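
A minimal Unit of Work sketch for item 1, assuming a hypothetical IDomainObject interface and placeholder mapper calls; it only shows the register-then-commit mechanics, not real persistence code.

```csharp
using System.Collections.Generic;

public interface IDomainObject { int Id { get; } }

public class UnitOfWork
{
    private readonly List<IDomainObject> _newObjects = new List<IDomainObject>();
    private readonly List<IDomainObject> _dirtyObjects = new List<IDomainObject>();
    private readonly List<IDomainObject> _removedObjects = new List<IDomainObject>();

    public void RegisterNew(IDomainObject obj)     => _newObjects.Add(obj);
    public void RegisterDirty(IDomainObject obj)   { if (!_dirtyObjects.Contains(obj)) _dirtyObjects.Add(obj); }
    public void RegisterRemoved(IDomainObject obj) => _removedObjects.Add(obj);

    // One commit for the whole business transaction, mirroring a database transaction.
    public void Commit()
    {
        foreach (var obj in _newObjects) Insert(obj);
        foreach (var obj in _dirtyObjects) Update(obj);
        foreach (var obj in _removedObjects) Delete(obj);
        _newObjects.Clear(); _dirtyObjects.Clear(); _removedObjects.Clear();
    }

    private void Insert(IDomainObject obj) { /* mapper.Insert(obj) */ }
    private void Update(IDomainObject obj) { /* mapper.Update(obj) */ }
    private void Delete(IDomainObject obj) { /* mapper.Delete(obj) */ }
}
```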

Chapter 12 describes Identity Field, Foreign Key Mapping, Association Table Mapping, Dependent Mapping, Embedded Value, Serialized LOB, Single Table Inheritance, Class Table Inheritance, Concrete Table Inheritance, and Inheritance Mappers.

  1. Identity Field: essentially what is usually called the primary key. It is used to tie the in-memory object to the database row: a row in the table is given a key, the same key is held in memory, and that key is then used to find the object that corresponds to the data. The key can be any unique value such as an auto-increment ID, a GUID, or an ID-card number. Keys may be table-unique or database-unique: a table-unique key is unique within one table, a database-unique key is unique across the whole database.
  2. Compound key: several values together form the primary key; only the combination of those values identifies a unique row.
  3. Foreign Key Mapping: table A stores, alongside its own primary key, a primary key value from table B; through the value stored in A you can find the associated data rows in B.
  4. Association Table Mapping: a table C is added between tables A and B; each row of C pairs a primary key of A with a primary key of B, so the data in B associated with a row of A can be found through C. This approach is typically used when A and B are existing (for example third-party) tables that are too expensive to modify. Drawback: the large number of association queries costs performance.
  5. Dependent Mapping: when persisting, one class (the dependent) has its database mapping handled by another class (its owner), and each dependent has exactly one owner. For example, table A has a column A.c, and table B has several rows whose value matches A.c; it is like a directory holding several files of the same structure.
  6. Embedded Value: when a class is instantiated from database fields, some of its values could be derived by its own calculations rather than read from the database. If those derived values are stored directly as columns in the database, efficiency improves, because they can be queried directly without recomputation.
  7. Serialized LOB: serialize an object into a string (JSON or binary) or XML and store it in the database; JSON, binary, and XML serialization are all options. The drawbacks: if the class changes, data serialized earlier may no longer deserialize, leading to data loss; the stored data is hard to read and only becomes understandable after the program deserializes it; and if the serialized object has been copied into many places, changing one value means pulling out and modifying every copy, which is a very painful process.
  8. Single Table Inheritance: suppose three classes form the hierarchy A : B : C, that is, C inherits from B and B inherits from A. There are different ways to lay out the tables; with single table inheritance all fields of A, B, and C go into one table and all rows are stored there (see the sketch after this list). Adjusting the inheritance relationship between the three classes does not require changing the table (apart from adding new fields), and no joins are needed; but a row for class A still carries the columns that belong only to B or C, which wastes space.
  9. Class Table Inheritance: one table per class, so tables A, B, and C are created for the three classes. Adjusting which class inherits from which does not by itself force a table change, and no space is wasted when a class is added; but queries need multi-table joins, which creates performance problems, and moving a field does force a schema change: if a field moves from class A to class B, the tables must be modified at the same time.
  10. Concrete Table Inheritance: a table is created for each concrete class, containing its inherited fields as well as its own: table A holds class A's fields; since B inherits A, table B holds A's fields plus B's own; since C inherits B, table C holds A's, B's, and C's fields. Strengths: each table is self-contained with no irrelevant columns, no joins are needed when reading, and a table is only read when its class is accessed, so the load is spread out. Weaknesses: primary keys are harder to handle, it is harder to abstract the database mapping up into the classes, moving a field in the class hierarchy causes large changes (if a field of class C is moved to class B, both table C and table B have to be adjusted, which is troublesome), and a query against the superclass has to check all the tables, causing multiple database accesses or a special combined query across tables A, B, and C to find the class's information.
  11. Inheritance Mappers: the basic data read/write rules are defined in an abstract class or interface, and the concrete mapper subclasses associate each class's attributes with the corresponding database fields.
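
A minimal sketch of Single Table Inheritance (item 8), using the A : B : C hierarchy from the notes and assuming one table holding every column plus a hypothetical discriminator column "TypeCode"; the member names are invented for illustration.

```csharp
using System.Data;

public class A { public int Id; }
public class B : A { public string BField; }
public class C : B { public string CField; }

public static class SingleTableMapper
{
    // One row carries every possible column; the discriminator decides which
    // class to instantiate, and columns not used by that class simply stay NULL.
    public static A Load(DataRow row)
    {
        string type = (string)row["TypeCode"];
        A result;
        switch (type)
        {
            case "C": result = new C { BField = row["BField"] as string, CField = row["CField"] as string }; break;
            case "B": result = new B { BField = row["BField"] as string }; break;
            default:  result = new A(); break;
        }
        result.Id = (int)row["Id"];
        return result;
    }
}
```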

Chapter 13 describes Metadata Mapping, Query Object, and Repository.

  1. Metadata Mapping: two approaches are proposed, code generation and reflection. A reflective program reads the mapping metadata at run time and executes the mapping dynamically, as with reflection in .NET; reflection is hard to debug and slower to execute, but very flexible, since the behaviour can be chosen dynamically, and the metadata can be kept in a convenient place such as an XML file. (.NET can reflect over a DLL, its code completion depends on metadata, and the PDB files generated in debug builds are also a form of metadata; with reflection it is hard to spot problems when some of the code is changed.) Code generation takes the metadata as input and outputs the source code that implements the mapping; I think of it as similar to dynamically building and splicing statements in a loosely typed language such as JavaScript and then executing the spliced statements.
  2. Query Object: the Interpreter pattern applied to SQL queries. The client constructs a query in terms of domain classes and helper methods, and the statements those methods generate are then interpreted and executed on the server. The methods defined in LINQ to SQL are an example: in essence they parse an object expression and construct the SQL statement that the server actually runs.
  3. Repository: coordinates the domain and data mapping layers, using a collection-like interface for accessing domain objects. It is an abstract class or interface that defines access to the database; the concrete query, delete, and add operations must be implemented in subclasses. It defines a high-level data access rule and leaves the implementation to subclasses, so business code only depends on the high-level interface and does not need to care which data source sits behind it (see the sketch below).
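
A minimal sketch of a Repository (item 3): a collection-like, high-level interface for domain objects, with the concrete data access pushed into implementation classes. The Customer type and the member names are hypothetical.

```csharp
using System.Collections.Generic;

public class Customer { public int Id; public string Name; }

public interface ICustomerRepository
{
    Customer FindById(int id);
    IEnumerable<Customer> FindByName(string name);
    void Add(Customer customer);
    void Remove(Customer customer);
}

// Business code depends only on the interface; a SQL-backed repository, an
// EF-backed repository, or an in-memory fake for tests can all implement it.
```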

Chapter 14 describes MVC (related variants include MVP, MVVM, and PM), Page Controller, Front Controller, Template View, Transform View, Two Step View, and Application Controller.

  1. MVC: the view (V) acts as an observer of the model (M), and the controller (C) is a strategy of the view (V); in essence it is a combined use of the Observer and Strategy patterns.
  2. Page Controller (MVP): M is the model, V the view, P the presenter; the principle is that V passes its changes to M through P, and after M is modified the result is passed back to V through P.
  3. Front Controller: page access usually requires security verification; the verification is common to all pages while each page's logic and display differ, so the shared verification runs before any page is reached and every request follows the same verification logic. In .NET the request pipeline plays this role, for example an HttpHandler/HttpModule or the Global.asax file (see the sketch after this list).
  4. Template View: an HTML template page marks the content that will change, and when the site runs those markers are replaced with the content the site needs.
  5. Transform View: the data a page needs is kept in a document, and that document's data is converted into HTML code. If all the data we need is in XML, a transformation language (the book recommends XSLT as the best-suited language for this kind of conversion) turns the XML into HTML.
  6. Two Step View: the first step processes the data and display logic of the dynamic page, and the second step produces the corresponding HTML and sends it to the client. The way I picture it: the first time the dynamic page runs its display logic it generates HTML, that HTML is captured and saved, and the dynamic page is thereby turned into a static one; later visits go straight to the static page without repeating the dynamic page's expensive computation. Step one runs the logic that generates the dynamic content; step two renders and saves the HTML the page logic produced.
  7. Application Controller: decides which domain logic to run and which view should display the response, much like the routing controller in MVC: it jumps to the view logic based on the incoming command and shows different views for different commands.
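
A minimal Front Controller sketch for item 3, using classic ASP.NET's IHttpHandler (System.Web); the command names and the authentication check are hypothetical. The point is that every request passes through one handler that runs the shared checks and then dispatches on a command.

```csharp
using System.Web;

public class FrontControllerHandler : IHttpHandler
{
    public bool IsReusable => true;

    public void ProcessRequest(HttpContext context)
    {
        // Shared, cross-cutting check performed once for every page.
        if (!context.Request.IsAuthenticated)
        {
            context.Response.StatusCode = 401;
            return;
        }

        // Dispatch on a command parameter instead of one code-behind per page.
        string command = context.Request.QueryString["cmd"];
        switch (command)
        {
            case "list": context.Response.Write("render list view"); break;
            case "edit": context.Response.Write("render edit view"); break;
            default:     context.Response.StatusCode = 404;          break;
        }
    }
}
```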

Chapter 15 describes Remote Facade and Data Transfer Object.

  1. Remote Facade: a coarse-grained method that returns a whole set of data and internally calls many fine-grained methods; the caller supplies some basic information and gets back a comprehensive set of results, even though each individual operation may only need part of it. A web service under .NET is a fairly good remote facade: it gives cross-process, distributed access to data, though with a performance cost. The chapter also mentions the Session Facade, which I have not fully understood. The emphasis is on providing a coarse-grained result set rather than returning results through fine-grained methods.
  2. The Service Layer comes up again under Remote Facade: the basic idea is to package a set of methods into a service for the remote facade (or other callers) to use; the service does not expose its specific logic, it simply returns the result of a given operation.
  3. Data Transfer Object: carries data between callers. The coarse-grained result sets of a remote facade inevitably push some unnecessary data across the network; the common approach is to serialize the object and transmit it, whether as binary, as text/JSON, or as XML (for example serialization based on the SOAP and WSDL protocols). The client must be able to parse the data the server serialized and turn it back into the original object, so the key questions are how to serialize the object for transmission and how the client maps the received data back onto its objects (see the sketch below). The serialization mechanisms provided by the platform should be preferred.
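
A minimal sketch of Remote Facade plus Data Transfer Object (items 1 and 3): one coarse-grained call assembles a DTO out of several fine-grained lookups and serializes it for the wire. The OrderSummaryDto shape and the lookup helpers are hypothetical; serialization here assumes System.Text.Json is available.

```csharp
using System.Text.Json;

public class OrderSummaryDto
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public class OrderFacade
{
    // Coarse-grained: one remote call instead of several fine-grained ones.
    public string GetOrderSummary(int orderId)
    {
        var dto = new OrderSummaryDto
        {
            OrderId = orderId,
            CustomerName = LookupCustomerName(orderId),  // fine-grained call 1
            Total = LookupTotal(orderId)                 // fine-grained call 2
        };
        return JsonSerializer.Serialize(dto);            // text/JSON transport
    }

    private string LookupCustomerName(int orderId) => "sample customer";
    private decimal LookupTotal(int orderId) => 0m;
}
```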

Chapter 16 describes Optimistic Offline Lock, Pessimistic Offline Lock, Coarse-Grained Lock, and Implicit Lock.

  1. Optimistic Offline Lock: several people can edit the same data at the same time, and conflicting changes are merged when they come back together, similar to the idea behind the source-control tool TFS; it is relatively weak on transactions. It works through version management at commit time: a commit only succeeds if the data still carries the version number that was read at the start of the business transaction (see the sketch after this list).
  2. Pessimistic Offline Lock: only one person can edit the data exclusively and nobody else can modify it, similar to the idea behind the source-control tool VSS. It is stronger for transaction handling and complements the optimistic offline lock. Variants include exclusive write locks, exclusive read locks, and read/write locks.
  3. Coarse-Grained Lock: when a group of related objects is operated on together, the whole group is placed under one lock. One form is a shared lock, where the group shares a single version, with optimistic and pessimistic variants; another form gives the group a single controlling (root) object, so locking that one object locks all of them.
  4. Implicit Lock: the platform or framework acquires the lock on the code's behalf. If locking were left to each call site, one forgotten lock among a group of calls would leave the data unprotected, so the lock is taken implicitly where the calls are made rather than relying on the developer to remember it.
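
A minimal sketch of an Optimistic Offline Lock (item 1), assuming a hypothetical Document table with a Version column and classic ADO.NET. The UPDATE only succeeds if the row still has the version that was read earlier; zero affected rows means someone else committed first and the caller must retry or merge.

```csharp
using System.Data.SqlClient;

public static class DocumentStore
{
    public static bool TryUpdate(string connectionString, int id, int readVersion, string newBody)
    {
        const string sql =
            "UPDATE Document SET Body = @body, Version = Version + 1 " +
            "WHERE Id = @id AND Version = @version";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@body", newBody);
            command.Parameters.AddWithValue("@id", id);
            command.Parameters.AddWithValue("@version", readVersion);
            connection.Open();
            return command.ExecuteNonQuery() == 1;  // false => optimistic conflict
        }
    }
}
```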

Chapter 17 describes Client Session State, Server Session State, and Database Session State. (Note: the "session" here is not only the web Session object; it includes it, but more broadly means whatever data an interaction needs to keep between requests.)

  1. Client Session State: session information is stored on the client, for example in cookies, in URLs, or in hidden fields; complex objects can be serialized before being stored. The data is not secure on the client, it may get lost, and large amounts of it become difficult to maintain.
  2. Server Session State: large amounts of interaction data are stored in the server's memory, and the client keeps only an identifier (such as the .NET session ID); when the data is needed, the client presents the identifier and the data is read from server memory. It can consume a lot of server memory, makes clustering hard to handle, and puts the server under strain (the state is lost if the server goes down).
  3. Database Session State: session data is stored in the database and the client is given a session identifier (session ID); when the data is needed, the session ID is used to look it up in the database (see the sketch after this list). Clustering is convenient, but it puts demands on database performance. The session data can be kept in a temporary table, or the session information can sit in the regular business tables with a flag that distinguishes which rows belong to which session.
  4. In .NET, session state can be stored in the IIS worker process, in a state server's memory (either on the IIS machine or on another server), or in a database.
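
A minimal sketch of Database Session State (item 3), assuming a hypothetical SessionState table (SessionId, Payload), SQL Server, and JSON serialization via System.Text.Json; the client only carries the sessionId, and each request loads or saves its state by that key.

```csharp
using System.Data.SqlClient;
using System.Text.Json;

public static class DbSessionStore
{
    public static void Save<T>(string connectionString, string sessionId, T state)
    {
        // Upsert the serialized state under the session key.
        const string sql =
            "UPDATE SessionState SET Payload = @payload WHERE SessionId = @id; " +
            "IF @@ROWCOUNT = 0 INSERT INTO SessionState (SessionId, Payload) VALUES (@id, @payload);";
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@id", sessionId);
            command.Parameters.AddWithValue("@payload", JsonSerializer.Serialize(state));
            connection.Open();
            command.ExecuteNonQuery();
        }
    }

    public static T Load<T>(string connectionString, string sessionId)
    {
        const string sql = "SELECT Payload FROM SessionState WHERE SessionId = @id";
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@id", sessionId);
            connection.Open();
            var payload = command.ExecuteScalar() as string;
            return payload == null ? default(T) : JsonSerializer.Deserialize<T>(payload);
        }
    }
}
```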

Chapter 18 describes Gateway, Mapper, Layer Supertype, Separated Interface, Registry, Value Object, Money, Special Case, Plugin, Service Stub, and Record Set.

 
