.NET Framework-based N-Layer Distributed Application Development

Keywords: distributed processing, DCOM/CORBA, Web Service, .NET Framework, N-layer model, Client/Server, data transmission, remote communication

Topic: Building maintainable and scalable sites, developing efficient and highly scalable applications, creating N-layer distributed applications, and integrating applications across platforms and across the Internet are tasks faced by countless developers, and traditional development methods and technologies struggle with them.

Many of the new technologies released with the .NET Framework provide relatively simple solutions for these tasks. Among them, the SOAP-based Web Service has obvious advantages over traditional DCOM/CORBA for distributed applications; combined with Web-based ASP.NET page development and SQL Server data storage (or XML documents), developing N-layer applications under .NET is no longer difficult.

  I. Distributed Processing Overview

Distributed processing means logically dividing an application into units that run on two or more computers that are physically or logically separated. The concept is not new and has long been used in large-scale projects. However, the emergence of the Internet has given distributed processing new characteristics: the Internet's interconnection allows hundreds of computers to work together on a single task, making distributed processing possible on a much larger scale and going beyond the traditional Client/Server (C/S) model.

The idea of distributed processing has evolved over a long period. Developers and vendors have done a great deal of work in different IT eras, leaving distributed processing with a rich set of protocols.

  1. DCOM/CORBA

Before the .NET Framework, the main protocols for component-based distributed computing were CORBA (Common Object Request Broker Architecture), from the Object Management Group (OMG), and Microsoft's DCOM (Distributed Component Object Model).

DCOM is connection-oriented: the DCOM client holds a connection to the DCOM server. This connection model causes technical problems. For example, a client may hold a reference and issue a call only when the user clicks a button, so the server sits idle for long periods waiting for client requests, and if the client crashes without releasing the server, the consequences can be serious. Moreover, on the Internet a DCOM or CORBA server may be used by thousands of clients; because each client holds a connection to the server, even clients that seldom or never use the server tie up valuable server resources.

Although DCOM has ways to address these problems, they add a great deal of complexity; this is one of the problems Web services try to solve.

  2. Web Services

With the launch of the Microsoft .NET Framework, a new technology became available for distributed processing: Web services. A Web service can provide data to any application, not just a browser, and allows clients to invoke its operations over standard protocols (such as HTTP) that work over existing ports and transport layers, exchanging data in an open format.

  II. Web Service: Distributed Processing Technology under the .NET Framework

In the .NET Framework, Web services are logical units of an application that programs can access through standard Web protocols in a platform-independent manner.

The .NET Framework's designers built Web services on open standards so that they can be used on any platform; they have the potential to serve as a cross-platform, cross-vendor integration technology. By adopting Web services and the Web service architecture, you can use many technologies that already exist on the Internet. The key to the success of Web services is that they are based on open standards supported by major vendors such as Microsoft, IBM, and Sun.

1. How Web Services Solve the Difficulties Faced by DCOM/CORBA

DCOM and CORBA are excellent for creating enterprise applications whose software runs on the same platform within a tightly managed LAN. However, they cannot create scalable applications that cross platforms and span the Internet; they were simply not designed for those goals.

Most companies face the reality of having multiple platforms from multiple vendors. Communication between application systems running on different platforms is difficult, and cooperating with business partners on top of a traditional distributed architecture is harder still.

The problem with DCOM and CORBA is that users essentially have to commit to a single vendor. To use DCOM, both server and client must run Microsoft Windows; although DCOM implementations exist on other platforms, they have not been widely adopted. CORBA is a specification implemented by multiple vendors, but interoperability between implementations is achieved only for simple cases. Integration between DCOM and CORBA is even more problematic.

From a technical perspective, Web services try to solve the problems encountered with tightly coupled technologies such as CORBA and DCOM, including working through firewalls, protocol complexity, and integration of heterogeneous platforms.

2. Application of Web Services in Distributed Processing

Web Service is an excellent distributed processing technology.

In a typical distributed-processing arrangement under the .NET Framework, the client applications can be traditional Windows Forms applications, Web-based ASP.NET applications, mobile applications, or other Web service programs. These clients form the presentation layer of the N-layer model and handle data display; the middle layer handles the business logic; and the data layer handles data storage.

With the continuing standardization of the XML-based Simple Object Access Protocol (SOAP), Web services are becoming a practical way to interact with other servers and applications.

3. How the Client Consumes Web Services in the N-Layer Model

A client that consumes a Web service can be a Web application, another Web service, or a word-processing program such as Microsoft Word.

A Web service consumer calls a method on the Web service. The call is passed down through the layers on the client, transmitted across the network to the Web service as a SOAP message, and passed back up through the layers on the server. The Web service executes the method and returns a response (if any).
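
As a minimal sketch of this call path (the service name, namespace URI, and method below are assumptions for illustration, not taken from the article), an ASP.NET Web service exposes a method marked with [WebMethod], and a consumer calls it through a proxy generated from the service's WSDL:

```csharp
// Server side: code-behind for an .asmx Web service (names are illustrative).
using System.Web.Services;

[WebService(Namespace = "http://example.org/demo/")]
public class CustomerService : WebService
{
    // Exposed over SOAP: the consumer's call arrives as a SOAP message over HTTP.
    [WebMethod]
    public string GetGreeting(string name)
    {
        return "Hello, " + name;
    }
}

// Consumer side: a proxy class generated with wsdl.exe (or "Add Web Reference")
// turns the SOAP exchange into an ordinary method call, e.g.:
//   CustomerService proxy = new CustomerService();   // generated proxy type
//   string reply = proxy.GetGreeting("World");
```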

Two-way notification or publish/subscribe between a Web service and its clients is possible, but it must be implemented manually: the client can expose its own Web service and pass a reference to it in a call to the server, and the server can store that reference and later call back to the client.

  III. Design of a .NET Framework-based N-Layer Architecture

Object-oriented, component-based modular design requires that each part of an application can be modified conveniently. A good way to accomplish this is to work in layers, separating the main functions of an application into different layers or tiers. The .NET Framework provides rich support for creating maintainable and scalable layered applications, so that the N-layer model can fully replace the traditional Client/Server model while integrating tightly with the Internet.

1. Layered Model

Essentially, the layers represent the main functions of an application. Generally, we divide an application's functions into three areas, corresponding to the three-layer architecture model: the data layer, the business layer (middle layer), and the presentation layer.

Data layer: the components or services that contain the data store and interact with it. These components and services are functionally independent of the middle layer (although not necessarily physically independent; they can be on the same server).

Middle layer: one or more components or services that apply the business rules, implement the application logic, and carry out the data processing the application requires. As part of this, the middle layer is responsible for handling data coming from, or being sent to, the data store.

Presentation layer: obtains information from the middle layer and displays it to the user. This layer is also responsible for interacting with the user, collecting user input, and sending it back to the middle layer for processing.

It can be seen that the data layer obtains original data from the database, and the business layer converts the data into meaningful information that complies with business rules. The presentation layer converts the information into meaningful content for users.

This layered design is useful because each layer can be modified independently. We can modify the business layer while it continues to accept the same data from the data layer and pass data to the presentation layer, without any ambiguity for the other layers. We can also modify the presentation layer, so that changes to the site's appearance do not require changes to the underlying business logic.

2. Common N-Layer Model Design

Note that the layers in an N-tier application are not defined by the physical structure (the hardware) on which the application runs. A layer is a logical division of the application's work and defines the different stages of the tasks it performs. The N-layer design described here is quite different from the classic Client/Server architecture.

1) Designing a Simple Three-Layer Model

The simplest N-layer model is the three-layer model. Here we already have a server and a client separated by a network. The server contains the data store and the data-access components that make up the data layer, together with the business logic that forms the middle layer. The client, as the presentation layer, only needs to provide an interface to the application.

In this simplest case, we might have a relational database and a group of components or stored procedures that access the data. We would then have an ASP.NET page that calls the components or stored procedures to extract the information, processes and formats it so that it suits the particular client, and transmits it to the client over the network. All the client needs to do is display the information, collect user input, and send that input back to the middle layer.
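
A minimal sketch of this case, assuming a stored procedure named GetCustomers and a connection string supplied by the caller (both hypothetical): a middle-tier component extracts the data, and the ASP.NET page binds the result to a server control.

```csharp
// Middle tier: a data-access component that calls an (assumed) stored procedure.
using System.Data;
using System.Data.SqlClient;

public class CustomerData
{
    public static DataSet GetCustomers(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlDataAdapter adapter = new SqlDataAdapter("GetCustomers", conn))
        {
            adapter.SelectCommand.CommandType = CommandType.StoredProcedure;
            DataSet ds = new DataSet();
            adapter.Fill(ds, "Customers");   // pull the rows into a disconnected DataSet
            return ds;
        }
    }
}

// Presentation layer: an ASP.NET page's code-behind formats the data for the client:
//   CustomersGrid.DataSource = CustomerData.GetCustomers(connectionString);
//   CustomersGrid.DataBind();
```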

2) Designing a Three-Layer Model Closer to Reality

However, the previous example describes only a very small application that is rarely encountered in the real world. The data store is usually located on specially selected and specially configured hardware. It might run on a group of Windows-based servers running SQL Server, but it might equally run on a non-Windows platform with Oracle or another database server.

In this case, the separation between the data layer and the middle layer is even more pronounced: they are connected through the network. In addition, the business logic is confined to the server that handles all the middle-layer data processing.

3) Designing N Layers

Clearly, the situation above assumes two things: first, that the client is a low-end device (and so takes no part in the actual data processing the application requires), and second, that there is only one set of business rules.

However, these assumptions rarely match real applications. For example, we often want some business rules to live somewhere other than the middle layer. It can be more appropriate to apply some business logic early, during data extraction, and we can also implement business logic in the components that access the data store. That package of business logic can then live on the same server as the data, or even (in a clustered environment) on another intermediate routing server.

In addition, to take advantage of a "fat client" and reduce the network load and latency caused by repeated round trips, we can put some business logic on the client.

In this arrangement, part of the business logic is separated from the middle layer and located on the data server or the client.

It can be seen that this mode has no intermediate storage and requires almost no intermediate data processing, so it is more efficient.

4) Designing a More Realistic N-Layer Model

Generally, we use one or more dedicated servers to hold the data store we are using. In this case, the business logic is spread even more widely. Picture three machines separated by two networks: the business logic is now divided into three zones, with some running on the same server as the data store, some on the intermediate server, and some on the client.

From this we can see that it is not easy to define each layer precisely. The real meaning of "middle layer" is the business logic itself, and different elements of that business logic can quite legitimately live on different servers.

3. Objects and Techniques for Inter-Layer (Remote) Data Transmission under the .NET Framework

The .NET Framework implements many new technologies to support multi-layer distributed processing. It provides a rich set of class libraries, objects, and methods that make data transmission between physically or logically separated layers much simpler.

1) Objects that support remote data transmission:

• ADO.NET DataSet object
• ADO.NET DataTable object
• XmlDocument object
• XmlDataDocument object

2) Classes and methods that support remote data transmission:

• Serialization classes

Serialization is the process of converting an object into a format that can be copied elsewhere. The remotable objects listed above can serialize their entire contents so that they can be transmitted through a channel, which may run directly over TCP/IP or over HTTP. They can then be deserialized at the other end, so the client receives a complete copy of the original object.
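
A minimal sketch of this round trip using a DataSet (the table and column names are made up; a MemoryStream stands in for the network channel):

```csharp
using System;
using System.Data;
using System.IO;

public class SerializationDemo
{
    public static void Main()
    {
        // Build a small DataSet (table and column names are illustrative).
        DataSet original = new DataSet("Orders");
        DataTable table = original.Tables.Add("Order");
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Product", typeof(string));
        table.Rows.Add(new object[] { 1, "Widget" });

        // Serialize the data together with its schema; the stream could just as
        // well be a network channel (TCP/IP or HTTP) instead of a MemoryStream.
        MemoryStream stream = new MemoryStream();
        original.WriteXml(stream, XmlWriteMode.WriteSchema);

        // Deserialize at the other end: the receiver gets a complete,
        // disconnected copy of the original object.
        stream.Position = 0;
        DataSet copy = new DataSet();
        copy.ReadXml(stream, XmlReadMode.ReadSchema);

        Console.WriteLine(copy.Tables["Order"].Rows.Count);   // 1
    }
}
```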

• The System.Runtime.Remoting classes

The classes in the System.Runtime.Remoting namespace can be used to create a proxy for an object instead of transmitting its data remotely. In this case, the object stays on the server, and the client receives only a reference to a proxy object that stands in for the original server-based object (this is how a DataReader can be used remotely).

To the client, the proxy exposes the same methods and properties as the original object. When the client interacts with the proxy, however, the call is automatically serialized and transmitted through a channel (the network) to the object on the server; any response or result is then transmitted back to the client through the same channel.

Both remoting techniques let the client use objects originally created on the server. We can serialize a DataSet object or an XML document, and also other objects such as collections (a Hashtable or an Array, for example).
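
A minimal .NET Remoting sketch of the proxy approach (the class name, port, and URI are assumptions): the object stays on the server and the client receives only a transparent proxy.

```csharp
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

// The remotable object: it stays on the server; clients get only a proxy reference.
public class OrderService : MarshalByRefObject
{
    public int CountOrders() { return 42; }   // executes on the server
}

public class RemotingDemo
{
    // Server side: register a channel and publish the object under a well-known URI.
    public static void StartServer()
    {
        ChannelServices.RegisterChannel(new TcpChannel(8085), false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(OrderService), "OrderService", WellKnownObjectMode.Singleton);
    }

    // Client side: obtain a proxy; calls are serialized and sent over the channel.
    public static void CallFromClient()
    {
        OrderService proxy = (OrderService)Activator.GetObject(
            typeof(OrderService), "tcp://localhost:8085/OrderService");
        Console.WriteLine(proxy.CountOrders());
    }
}
```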

4. Data Processing and Object Selection in the N-Layer Model

The first thing to consider is what data will be extracted from the data store. The answer to this question influences our basic choice of objects more than anything else, and even determines the kind of performance we can achieve.

1) Data Used Only for Display

If we only want to display data to end users in a fixed format, there is usually no need to transmit the data remotely. We do not have to ship all the data across the wire to the client; we can send only the final display information, in whatever format the client device accepts.

In this case, a "Reader" object offers an ideal read-only, forward-only technique with the best performance. Used together with server controls that support data binding, it gives us an efficient way to display data.
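
A minimal sketch of this display-only path, assuming a SQL Server connection string, a Customers table, and a DataGrid server control (all hypothetical):

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Web.UI.WebControls;

public class DisplayHelper
{
    // Bind a read-only, forward-only reader directly to a data-bound server control.
    public static void BindCustomers(DataGrid grid, string connectionString)
    {
        SqlConnection conn = new SqlConnection(connectionString);
        SqlCommand cmd = new SqlCommand(
            "SELECT CustomerID, CompanyName FROM Customers", conn);
        conn.Open();

        // CloseConnection releases the connection as soon as the reader is closed.
        SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.CloseConnection);
        grid.DataSource = reader;
        grid.DataBind();
        reader.Close();
    }
}
```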

2) Data to Be Transmitted Remotely

If we do need to transmit data remotely, however, there is a problem: these fast, efficient "Reader" objects can only be remoted by reference. When a DataReader is passed to a client by reference, the DataReader itself remains on the server, even though the client application can use it. In that case we are not actually transmitting data remotely at all; we are merely using a remoted object. In many situations this is not what we want.

To transmit data remotely, we should place it in an object that can store (or maintain) the data itself, one that lets code extract the data once and then read it as many times as needed without going back to the data store. In ADO.NET this object is the DataSet (or the DataTable). When working with XML there are several objects to choose from: both the XmlDocument and the XmlDataDocument can be remoted, since both hold their content and can be passed between the layers of an application.
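
For example, a Web service method can return a DataSet; it is serialized into the SOAP response and rebuilt as a complete, disconnected copy on the client (the service name, query, and connection string below are hypothetical):

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Web.Services;

public class OrdersService : WebService
{
    // The connection string is assumed to come from configuration; placeholder here.
    private const string ConnectionString = "server=...;database=...;";

    [WebMethod]
    public DataSet GetOrders()
    {
        using (SqlConnection conn = new SqlConnection(ConnectionString))
        using (SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT OrderID, OrderDate FROM Orders", conn))
        {
            DataSet ds = new DataSet();
            adapter.Fill(ds, "Orders");   // Fill opens and closes the connection as needed
            return ds;                    // serialized into the SOAP response
        }
    }
}
```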

  IV. N-Layer Distributed Data Processing Architecture Models

To decide how to divide an application into layers, you need to determine how the data will be displayed, whether it needs to be updated, and how updates will be returned to the server in a timely manner.

1. Processing Everything on the Server

The most common scenario is to perform all data processing on one or more servers and simply display the results on the client. The data layer and middle layer are confined to the servers, and the client provides only the display interface. For a Web browser the usual format is HTML; for a cellular phone or similar device it may be WML.

A stored procedure or SQL statement extracts the required data, which is then processed in ASP.NET or exposed through a Web service. Alternatively, XML fragments can be extracted from the data store, processed, and delivered to the client.

If the data is stored as XML documents, or exposed in such a format that it acts as an external XML data layer, we have some further options.

XML data can be extracted and processed for transmission to the client in much the same way. Here, too, the extraction actually uses a "Reader" object, and different techniques can be used to process the data and deliver it to the client.
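
A minimal sketch of extracting just the required fragment from an XML data store (the file path, element names, and XPath expression are illustrative assumptions):

```csharp
using System.Xml;

public class XmlExtraction
{
    // Load an XML data store and extract only the fragment the client needs.
    public static string GetProductFragment(string path, string productId)
    {
        XmlDocument doc = new XmlDocument();
        doc.Load(path);   // e.g., "products.xml" -- the file name is hypothetical

        XmlNode node = doc.SelectSingleNode(
            "/products/product[@id='" + productId + "']");
        return node == null ? null : node.OuterXml;
    }
}
```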

2. Expanding the Middle Layer

Although data extraction and processing often take place in a single object, such as an ASP.NET page, to benefit fully from component-based design we usually need a more refined architecture. We should apply the business rules to the data before displaying it or transmitting it to the client, whether for security reasons, to achieve distributed processing, or simply to provide reusability and make the application easier to maintain.

For example, suppose we need to access a data store and extract a multi-page list of customers. By building this process into a component that is independent of the ASP.NET page (or whatever object delivers the data to the client), we provide a distinct layer for data extraction. If, in the future, we need to change the data store, the data structure, or the rules for accessing it, we only have to replace that component with a new version.

As long as the component's interface remains unchanged, every application that uses it sees the same output and keeps running as before, while the methods the component uses internally to extract and process data from the data store can be modified as needed. The sketch below illustrates this idea.
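
A sketch with hypothetical names: the pages depend only on a stable interface, so the data-access component behind it can be replaced without touching them.

```csharp
using System.Data;

// The contract the presentation layer depends on; it stays stable.
public interface ICustomerSource
{
    DataSet GetCustomers();
}

// Today's implementation reads from SQL Server...
public class SqlCustomerSource : ICustomerSource
{
    public DataSet GetCustomers()
    {
        // ...ADO.NET data access as in the earlier sketches...
        return new DataSet("Customers");
    }
}

// ...and a later version could read from XML or another store,
// with no change to the pages that consume ICustomerSource.
public class XmlCustomerSource : ICustomerSource
{
    public DataSet GetCustomers()
    {
        DataSet ds = new DataSet("Customers");
        ds.ReadXml("customers.xml");   // file name is illustrative
        return ds;
    }
}
```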

Obviously, more than one component can take part in this process. If the data extraction is quite complex, or the same data is used in several places, it makes sense to decompose the processing further into additional component layers. For example, one component might expose the data as a series of rows containing all the required columns, ordered by key; that component can then feed another component that stores the data in a different order, or that exposes only some of the columns as a data source for other components.

3. Moving Data to the Client

Generally, to work with data sent to the client, we use client-side scripts (JavaScript, VBScript, WMLScript), client components written in Java or a platform-specific language, or client executable programs written in languages such as Visual Basic 6.0, C++, or Delphi. Of course, everything we need is also part of the .NET Framework itself: users can run Framework-based code on the client by downloading and installing the redistributable Framework (about 5 MB).

Several techniques are therefore available for obtaining and processing data once it has been sent to the client.

4. Sending Updates Back to the Server

In many cases, if the requirement is simply to send data to the client as quickly and efficiently as possible, the approaches above do the job well. However, many applications also require the client to send data back so that the data store can be updated, and they need a sensible model for doing so.

There are at least three ways to send data back to the server: posting HTML forms and query strings (much as in classic ASP); using client-side components (such as the XMLHTTP component in IE 5 and later); and using Windows Forms applications or services running on the client.

In some cases the client only needs us to send it some data and then performs all the data processing itself. In other words, the client acts as a kind of service, using the application's data as its own source data; once the client has processed the data, it submits the changes back.

Once the client has finished updating the data, or has collected new user input, the client application packages the data in a suitable format (or organizes it using the appropriate technique) and submits it to the server for processing and storage.
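
A minimal sketch of this hand-off (the table, column, and the Web service proxy mentioned in the comments are assumptions): the client edits its local DataSet, extracts only the changed rows, and sends just those back to the server.

```csharp
using System;
using System.Data;

public class ClientUpdateDemo
{
    // "orders" is assumed to be the disconnected DataSet the client received
    // earlier, containing an "Orders" table with at least one row.
    public static void SubmitChanges(DataSet orders)
    {
        // The client edits its local copy...
        orders.Tables["Orders"].Rows[0]["OrderDate"] = DateTime.Now;

        // ...then packages only the modified rows for the trip back to the server.
        DataSet changes = orders.GetChanges(DataRowState.Modified);

        // A Web service proxy generated from the service's WSDL would carry the
        // changes back, e.g.:  proxy.UpdateOrders(changes);
        // On the server, a DataAdapter's Update method can apply them to the store.
    }
}
```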

A "fat client" is the natural way to implement this architecture: the data is processed on the client, then organized and returned to the server to update the original data store.

Even so, this does not cover every possibility. The way data is sent back need not match the way it was originally delivered; you should design an appropriate model based on the actual needs of the application.

  V. Conclusion

Building maintainable and scalable sites, developing efficient and highly scalable applications, integrating applications across platforms and across the Internet, and creating N-layer distributed applications are tasks faced by countless developers, and traditional development methods and technologies struggle with them.

Many of the new technologies released with the .NET Framework provide relatively simple solutions for these tasks. Among them, the SOAP-based Web Service has obvious advantages over traditional DCOM/CORBA for distributed applications; combined with Web-based ASP.NET page development and SQL Server data storage (or XML documents), developing N-layer applications under .NET is no longer difficult.

To accomplish these tasks well, you need to design a reasonable application architecture model based on the actual situation. This article is offered as a reference for that work.
