ACE Network Programming


ACE (Adaptive Communication Environment) is host infrastructure middleware.
SAP: Service Access Point
ACE provides a set of C++ classes modeled on the style of the C++ standard library.
Project sites: http://ace.ece.uci.edu/ or http://www.riverace.com
1. Design space for network communication:
1). Communication space: interaction rules and forms
2). Concurrency space: concurrency control and synchronization
3). Service space: duration and structure
4). Configuration space: network service identification and binding

2. Hierarchy of object-oriented middleware architectures
1). Host infrastructure middleware: encapsulates OS concurrency and interprocess communication mechanisms to enable object-oriented network programming, e.g. by encapsulating sockets and POSIX threads.
2). Distribution middleware: extends host infrastructure middleware to automate common network programming tasks (connection management, memory management, marshaling and demarshaling, endpoint and request demultiplexing, synchronization, multithreading). It manages end-system resources in support of an object-oriented distributed programming model. The core of distribution middleware is the ORB; examples include COM+, Java RMI, and CORBA.
3). Common middleware: extends distribution middleware with services independent of any particular application, mainly allocating, scheduling, and coordinating resources throughout a distributed system.
4). Domain-specific middleware: meets particular needs of particular application domains.

3. Advantages of host infrastructure middleware
QoS requirements: compared with distribution middleware, host infrastructure middleware imposes less overhead on throughput and latency, making it easier to address jitter and reliability.
Host infrastructure middleware allows programs to:
* Omit unnecessary features, such as marshaling and demarshaling when they are not needed
* Exercise fine-grained control over communication behavior, such as support for IP multicast and asynchronous I/O
* Customize network protocols to optimize network bandwidth usage, or to substitute shared memory for loopback network traffic

Chapter 1: Communication Design Space

* Connectionless vs. connection-oriented protocols: connectionless protocols provide message-oriented service and suit media such as voice and video that tolerate some data loss; connection-oriented protocols provide reliable service and suit applications that cannot tolerate data loss.
When using a connection-oriented protocol, designers must consider the following issues:
* Data framing strategy: connection-oriented protocols support a variety of data framing strategies. Some connection-oriented protocols, such as TP4 and XTP, offer message-oriented delivery. TCP, in contrast, is a byte-stream protocol that does not preserve application message boundaries: if an application transmits 4 distinct messages through 4 send() calls over TCP, the data may be delivered to the receiver in 1 or more (possibly more than 4) TCP segments. An application that needs message-oriented semantics over TCP must therefore do extra work at sender and receiver to frame the 4 messages it exchanges. If messages always have the same total length and network errors never occur, framing is fairly simple; otherwise it is a nontrivial problem. When TCP is chosen, a framing mechanism must be implemented on top of the TCP byte stream (see the sketch after this list).
* Connection multiplexing policy (not I/O multiplexing): there are 2 general strategies for transferring data over a connection-oriented protocol:
multiplexed: all client requests issued by a process's threads are passed to the server process through 1 TCP connection. Advantages: conserves OS communication resources. Disadvantages: harder to program and control, less efficient and less deterministic.
nonmultiplexed: each client communicates with its peer service program over a separate connection. Advantages: better control over communication priorities, and low synchronization overhead.
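As mentioned under the framing strategy above, here is a minimal sketch of one common approach: prefix each message with a fixed-size length header. It assumes ACE_SOCK_Stream's send_n()/recv_n() blocking transfers; the helper names frame_send/frame_recv are hypothetical, not part of ACE.

// Hypothetical length-prefixed framing helpers over a TCP byte stream.
#include "ace/SOCK_Stream.h"
#include "ace/Basic_Types.h"   // ACE_UINT32, ACE_HTONL/ACE_NTOHL

// Send one application message: a 4-byte network-order length, then the payload.
int frame_send (ACE_SOCK_Stream &peer, const char *buf, ACE_UINT32 len)
{
  ACE_UINT32 header = ACE_HTONL (len);               // host -> network order
  if (peer.send_n (&header, sizeof header) != (ssize_t) sizeof header)
    return -1;
  return peer.send_n (buf, len) == (ssize_t) len ? 0 : -1;
}

// Receive one framed message into a caller-supplied buffer.
int frame_recv (ACE_SOCK_Stream &peer, char *buf, ACE_UINT32 maxlen)
{
  ACE_UINT32 header = 0;
  if (peer.recv_n (&header, sizeof header) != (ssize_t) sizeof header)
    return -1;
  ACE_UINT32 len = ACE_NTOHL (header);               // network -> host order
  if (len > maxlen)
    return -1;                                       // message too large
  return peer.recv_n (buf, len) == (ssize_t) len ? (int) len : -1;
}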
* Synchronous and asynchronous message exchange: there are 2 mechanisms for managing request/response protocol exchanges. In a synchronous request/response protocol, requests and responses are exchanged in lock step: each request must synchronously receive its response before the next request can be sent. In an asynchronous request/response protocol, each message is independent, but asynchronous requests usually need a strategy for detecting lost or failed requests and retransmitting them. Asynchronous exchange suits cases where communication latency is large relative to the processing time a request requires.
* Message passing and shared memory for data exchange: message passing explicitly exchanges byte-stream and record-oriented data via an IPC mechanism; the messaging IPC mechanism transmits data as messages over an IPC channel from one process or thread to another. Large data is sent as a sequence of messages, and if more than one process must receive the data, each message is sent multiple times, once per recipient. RPC, CORBA, and message-oriented middleware (MOM), for example, are internally based on the message-passing model.
Shared memory allows multiple processes on the same or different hosts to access and exchange data as though it resided in each process's local address space. In a networked application where the same data must be read and processed by multiple processes, a shared-memory facility is a more efficient communication mechanism than message passing. Shared memory comes in 2 forms, local and distributed:
Local shared memory (LSM): allows processes on the same host to share one or more regions of memory
Distributed shared memory (DSM): extends the concept of virtual memory across a network, enabling transparent interprocess communication through data in global/shared memory.
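To make the shared-memory model concrete, here is a minimal local shared-memory sketch using POSIX shm_open()/mmap() rather than any ACE wrapper (an assumption for illustration; the region name "/demo_shm" is a placeholder). A peer process mapping the same name sees the data in place, without per-recipient message copies.

// Minimal POSIX local shared-memory sketch (illustrative, not ACE).
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>

int main ()
{
  const char *name = "/demo_shm";          // hypothetical region name
  const size_t size = 4096;

  int fd = shm_open (name, O_CREAT | O_RDWR, 0600);
  if (fd == -1) { perror ("shm_open"); return 1; }
  ftruncate (fd, size);                    // size the region

  void *addr = mmap (0, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  if (addr == MAP_FAILED) { perror ("mmap"); return 1; }

  // Any process that maps "/demo_shm" can now read this data in place.
  std::strcpy (static_cast<char *> (addr), "hello via shared memory");

  munmap (addr, size);
  close (fd);
  return 0;
}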

Chapter 2: Socket API Overview (IPC)

1. Operating system IPC mechanisms fall into 2 types: local IPC and remote IPC. Local IPC: shared memory, pipes, AF_UNIX domain sockets, doors, signals, etc.
Remote IPC: AF_INET domain sockets, x., named pipes
2. Socket interface: each socket can be bound to a local address and a remote address. The Socket API comprises roughly 20 system functions, which fall into the following 5 classes:
1). Local context management: socket (factory function that allocates a socket handle), bind (binds a socket to a local or remote address), getsockname (returns the local address bound to a socket), getpeername (returns the remote address bound to a socket), close (releases a socket handle so it can be reused)
2). Connection establishment and termination: connect (actively establishes a connection on a socket handle), listen (indicates willingness to passively listen for client connection requests), accept (factory function that creates a new connection in response to a client request), shutdown (selectively terminates the read-side and/or write-side data stream of a bidirectional connection)
3). Data transfer mechanisms: send/recv (transmit and receive data buffers via a particular I/O handle), sendto/recvfrom (exchange connectionless datagrams),
read/write (receive and transmit data buffers via a handle), readv/writev ("scatter read" and "gather write", which optimize mode switching and simplify memory management), sendmsg/recvmsg (general functions subsuming the behavior of the other data transfer functions)
4). Options management: setsockopt (modifies options at different protocol layers), getsockopt (queries options at different protocol layers)
5). Network addressing: gethostbyname/gethostbyaddr (map between host names and IPv4 addresses), getipnodebyname/getipnodebyaddr (map between host names and IPv4/IPv6 addresses), getservbyname (identifies services by human-readable name)
Limitations of the Socket API: error-prone, overly complex, non-portable
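For illustration, a minimal TCP client written directly against the C Socket API (POSIX assumed; the loopback address and port 7777 are placeholders). The many low-level steps and error checks it needs are exactly what the wrapper facades in the next chapter encapsulate.

// Minimal TCP client against the raw Socket API.
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>

int main ()
{
  int fd = socket (AF_INET, SOCK_STREAM, 0);          // local context management
  if (fd == -1) { perror ("socket"); return 1; }

  sockaddr_in addr;
  std::memset (&addr, 0, sizeof addr);
  addr.sin_family = AF_INET;
  addr.sin_port = htons (7777);                       // byte order is our job
  if (inet_pton (AF_INET, "127.0.0.1", &addr.sin_addr) != 1)
    { perror ("inet_pton"); close (fd); return 1; }

  if (connect (fd, reinterpret_cast<sockaddr *> (&addr), sizeof addr) == -1)
    { perror ("connect"); close (fd); return 1; }     // connection establishment

  const char msg[] = "hello";
  if (send (fd, msg, sizeof msg - 1, 0) == -1)        // data transfer
    perror ("send");

  close (fd);
  return 0;
}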

Chapter 3: ACE Socket Wrapper Facades

ACE defines a set of C++ classes designed according to the Wrapper Facade pattern, which encapsulates non-object-oriented C interfaces behind object-oriented interfaces.
The ACE wrapper facade classes covered in this chapter are:
ACE_Addr: the root of the ACE network address inheritance hierarchy
ACE_INET_Addr: encapsulates the AF_INET address family (it initializes Internet-domain addresses specifically; addresses in other domains are initialized via other classes derived from ACE_Addr)
ACE_IPC_SAP: the root of the ACE IPC wrapper facade inheritance hierarchy
ACE_SOCK: the root of the ACE Socket wrapper facade inheritance hierarchy
ACE_SOCK_Connector: a connection factory; connects to a peer acceptor, then initializes a new communication endpoint in an ACE_SOCK_Stream object
ACE_SOCK_IO: encapsulates the data transfer mechanisms supported by "data mode" sockets
ACE_SOCK_Stream: as above (derives from ACE_SOCK_IO)
ACE_SOCK_Acceptor: a connection-accepting factory; initializes a new communication endpoint in an ACE_SOCK_Stream object in response to a request from a peer connector

#ACE_Addr and ACE_INET_Addr
Both clients and servers can use the ACE_Addr::sap_any address:
* Clients can use sap_any to obtain an ephemeral, OS-assigned port number that can be reused once the connection is closed
* Server programs can let the OS select their port number via sap_any, as long as they advertise the assigned port to clients through some location mechanism
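A small sketch of ACE_INET_Addr initialization and of the server-side sap_any case (host names and ports are placeholders; error handling is abbreviated):

#include "ace/INET_Addr.h"
#include "ace/SOCK_Acceptor.h"

int addr_examples ()
{
  ACE_INET_Addr svc (7777, "www.example.com");    // explicit port + host
  ACE_INET_Addr parsed ("www.example.com:7777");  // "host:port" string form

  // A server that lets the OS assign its port via sap_any, then reads the
  // assigned port back so it can be advertised via a location mechanism.
  ACE_SOCK_Acceptor acceptor;
  if (acceptor.open (ACE_Addr::sap_any) == -1)
    return -1;
  ACE_INET_Addr bound;
  acceptor.get_local_addr (bound);
  return bound.get_port_number ();                // the OS-assigned port
}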

#ACE_IPC_SAP
ACE_IPC_SAP is the root of the ACE IPC wrapper facade inheritance hierarchy; it provides basic I/O handle manipulation capabilities to the other ACE wrapper facades.
It is not intended for direct use by applications; rather, its subclasses provide ACE wrapper facades for files, STREAM pipes, named pipes, the System V Transport Layer Interface (TLI), and so on.

#ACE_SOCK_Connector
This factory class actively establishes a new communication endpoint: it connects to a given server address and returns an ACE_SOCK_Stream object for the application to read and write.
Supports "blocking", "non-blocking", and "timed" connect modes, as sketched below.
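A sketch of a blocking connect bounded by a timeout with ACE_SOCK_Connector (server address and timeout are placeholders):

#include "ace/SOCK_Connector.h"
#include "ace/SOCK_Stream.h"
#include "ace/INET_Addr.h"
#include "ace/Time_Value.h"

int connect_example ()
{
  ACE_INET_Addr server (7777, "127.0.0.1");
  ACE_SOCK_Connector connector;
  ACE_SOCK_Stream peer;

  ACE_Time_Value timeout (5);                // give up after 5 seconds
  if (connector.connect (peer, server, &timeout) == -1)
    return -1;                               // refused or timed out

  const char msg[] = "hello";
  peer.send_n (msg, sizeof msg - 1);         // write on the new endpoint
  return peer.close ();
}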

#ACE_SOCK_Stream
This class handles receiving and sending data, providing several send and recv method variants (including "scatter read" and "gather write" methods). Supports "blocking", "non-blocking", and "timed" transfers.
The role of scatter reads and gather writes:
Scatter/gather I/O is useful when data divides naturally into parts. For example, a networked application might use a Message object in which each message divides into a fixed-length header and a fixed-length body. You can create one buffer that holds exactly the header and another that holds exactly the body; a scatter read through an array of these two buffers then reads one message and splits header and body neatly into the two buffers, as sketched below.
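A sketch of a scatter read with ACE_SOCK_Stream::recvv_n(): one call fills a fixed-size header buffer and a fixed-size body buffer. The sizes are placeholders, and a connected peer stream is assumed.

#include "ace/SOCK_Stream.h"

ssize_t read_message (ACE_SOCK_Stream &peer)
{
  char header[16];                // hypothetical fixed-length header
  char body[128];                 // hypothetical fixed-length body

  iovec iov[2];
  iov[0].iov_base = header;
  iov[0].iov_len  = sizeof header;
  iov[1].iov_base = body;
  iov[1].iov_len  = sizeof body;

  // One call reads header + body, scattering them into the two buffers.
  return peer.recvv_n (iov, 2);
}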

#ACE_SOCK_Acceptor
This class encapsulates the underlying socket, bind, and listen calls used to passively accept a peer connection; once the connection is established it returns an ACE_SOCK_Stream object for the server to read and write.
Supports "blocking", "non-blocking", and "timed" accept modes, as sketched below.
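A sketch of a passive-mode accept loop with ACE_SOCK_Acceptor (the port is a placeholder; each client is served iteratively):

#include "ace/SOCK_Acceptor.h"
#include "ace/SOCK_Stream.h"
#include "ace/INET_Addr.h"

int serve ()
{
  ACE_INET_Addr listen_addr (7777);
  ACE_SOCK_Acceptor acceptor;
  if (acceptor.open (listen_addr) == -1)   // socket() + bind() + listen()
    return -1;

  for (;;)
    {
      ACE_SOCK_Stream peer;
      if (acceptor.accept (peer) == -1)    // blocks until a client connects
        break;

      char buf[128];
      ssize_t n = peer.recv (buf, sizeof buf);
      if (n > 0)
        peer.send_n (buf, n);              // echo back what was received
      peer.close ();
    }
  return acceptor.close ();
}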


#ACE_Mem_Map
ACE_Mem_Map wraps the memory-mapped file mechanism: the OS virtual memory system maps file contents into the process address space, where they can be accessed directly through a pointer. A memory-mapped file can be shared by multiple processes on the same host.
ACE_Mem_Map map_file ("D:/lk.txt");
peer.send_n (map_file.addr (), map_file.size ()); // send the mapped file over an ACE_SOCK_Stream 'peer'

Chapter 4: Implementing the Networked Logging Server

#ACE_Message_Block (simple message block (chain))
Allows multiple message blocks to be linked into a singly linked list, thereby supporting composite messages. The example below reads stdin into a chain of blocks and then writes the whole chain to stdout.
#include "ace/ACE.h"              // ACE::read_n / ACE::write_n
#include "ace/Message_Block.h"
#include <cstdio>                 // BUFSIZ

int main ()
{
  // Read stdin into a chain of message blocks.
  ACE_Message_Block *head = new ACE_Message_Block (BUFSIZ);
  ACE_Message_Block *mblk = head;
  for (;;)
    {
      ssize_t nbytes = ACE::read_n (ACE_STDIN, mblk->wr_ptr (), mblk->size ());
      if (nbytes <= 0)
        break;
      mblk->wr_ptr (nbytes);                         // advance the write pointer
      mblk->cont (new ACE_Message_Block (BUFSIZ));   // link a new message block
      mblk = mblk->cont ();
    }

  // Write the whole chain to stdout.
  for (mblk = head; mblk != 0; mblk = mblk->cont ())
    ACE::write_n (ACE_STDOUT, mblk->rd_ptr (), mblk->length ());

  head->release ();   // release all message blocks in the chain
  return 0;
}

#ACE_Message_Block (composite message block (chain))

#ACE_InputCDR and ACE_OutputCDR (CDR: Common Data Representation)
Purpose: unify the byte-order rules of different hosts and environments, converting between typed data and data streams. Only marshaling and demarshaling of primitive data types and arrays of them is provided.
Primitive data types include: bool, char, int16, int32, int64, float, double, and character strings.
ACE_OutputCDR creates a CDR buffer from a data structure and saves the data into the buffer (marshaling).
ACE_InputCDR extracts data from a CDR buffer (demarshaling).
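A sketch of marshaling with ACE_OutputCDR and demarshaling with ACE_InputCDR (field values are placeholders; the string produced by the extraction operator is assumed to be heap-allocated and freed with delete[]):

#include "ace/CDR_Stream.h"

int cdr_roundtrip ()
{
  // Marshal: insertion operators write typed data into the CDR buffer.
  ACE_OutputCDR out;
  out << ACE_CDR::Long (42);
  out << ACE_CDR::Double (3.14);
  out << "a log record";                  // character string
  if (!out.good_bit ())
    return -1;

  // Demarshal: an ACE_InputCDR built from the output stream extracts the
  // values in the same order, converting byte order where necessary.
  ACE_InputCDR in (out);
  ACE_CDR::Long l;
  ACE_CDR::Double d;
  ACE_CDR::Char *s = 0;
  in >> l;
  in >> d;
  in >> s;                                // allocates the string
  delete [] s;
  return in.good_bit () ? 0 : -1;
}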

Chapter 5: Concurrency Design Space
The concurrency design space involves policies and mechanisms for using multiple processes, multiple threads, and their synchronization facilities.
Server classification: iterative, reactive, concurrent.
Iterative server: each client request is processed completely before any subsequent request; while one request is being processed, other requests are queued.
Suited to: short-duration services, and servers that run infrequently.
Disadvantage: while a client blocks waiting for the server to process its request, the client program cannot make progress. If the server delays too long, the timeout calculations used for retransmission at the application and middleware layers become complicated and can cause severe network congestion; and, depending on the protocol the client and server use to exchange requests, the server may also receive duplicate requests.
Concurrent server: processes multiple client requests using multiple threads or processes. A single-service server can run multiple copies of the same service concurrently; a multi-service server can run multiple copies of different services concurrently.
Suited to: I/O-intensive services, or long-duration services whose execution time varies.
Advantage: finer-grained synchronization techniques can serialize requests at application-defined levels. This requires synchronization mechanisms such as semaphores and mutexes to protect cooperation and data sharing among concurrently running processes and threads.
Reactive server (synchronous event demultiplexing): processes multiple requests "at the same time", although all processing is actually done in one thread: as each request arrives, it is demultiplexed to its corresponding handler, which processes that request exclusively.
Disadvantage: if one operation fails (for example, deadlocks or hangs), the whole server process hangs.

OS thread scheduling models:
N:1 user threading model
1:1 kernel threading model
N:M hybrid threading model
The OS's thread scheduling models correspond to contention scopes:
Process contention scope (user space): threads compete for CPU resources with other threads in the same process
System contention scope (kernel space): threads compete for CPU resources with threads in other processes
2 OS scheduling classes:
Time-shared scheduling class (priorities vary):
* priority-based
* fair-share
* preemptive
Real-time scheduling class (priorities fixed):
* round-robin
* first in, first out (FIFO)
* time-sliced
There are 2 models of concurrency architecture:
Task-based concurrency: CPUs are organized around units of service functionality in the application. In this model, tasks are active and the messages the tasks process are passive; concurrency is obtained by running service tasks on separate CPUs and passing data and control messages between tasks/CPUs. Task-based concurrency can be implemented via the producer/consumer model.
Message-based concurrency: CPUs are organized around the messages received from applications and network devices. In this model, messages are active and tasks are passive; concurrency is achieved by running a stack of service tasks that process multiple messages simultaneously on each CPU. The "thread per request", "thread per connection", and "thread pool" models can be used to implement message-based concurrency.

Chapter 6: Operating System Concurrency Mechanisms

Synchronous event demultiplexing: select()/poll() wait for specified events to occur on a set of event sources. When one (or more) of the event sources becomes active, the function returns to its caller, which can then process the events from these multiple sources. Synchronous event demultiplexing is the basis of the reactive server.

Mutex: serializes the execution of multiple threads. There are 2 types of mutexes, contrasted in the sketch after this list:
* Recursive mutex: the thread holding the mutex can acquire it multiple times without deadlocking, as long as that thread eventually releases the mutex the same number of times
* Non-recursive mutex: if the thread that currently owns the mutex tries to acquire it again without releasing it first, the attempt deadlocks or fails
Readers/writer lock: allows multiple concurrent readers but only one exclusive writer
Semaphore: a counting lock that allows up to N concurrent holders at once
Condition variable: used with a mutex to let a thread block until an application-defined condition becomes true
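A sketch contrasting the two mutex types with ACE's wrappers (ACE_Recursive_Thread_Mutex and ACE_Thread_Mutex):

#include "ace/Thread_Mutex.h"
#include "ace/Recursive_Thread_Mutex.h"

ACE_Recursive_Thread_Mutex rec_lock;
ACE_Thread_Mutex plain_lock;

void recursive_ok ()
{
  rec_lock.acquire ();
  rec_lock.acquire ();    // the owning thread may re-acquire it...
  rec_lock.release ();    // ...but must release it the same number of times
  rec_lock.release ();
}

void non_recursive_hazard ()
{
  plain_lock.acquire ();
  // plain_lock.acquire ();  // re-acquiring here would deadlock or fail
  plain_lock.release ();
}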

Chapter 7: ACE Synchronous Event Demultiplexing (Wrapper Facades)
The event sources in networked applications are primarily socket handles, and select() manages the event sources.
#ACE_Handle_Set encapsulates the set of handles that select() can manage
This class uses the Wrapper Facade pattern to encapsulate fd_set, providing methods for manipulating handle sets.
Handles managed by select() must be set to non-blocking mode, or the program may block indefinitely.

Steps of reactive demultiplexing:
* the select() function returns the set of handles that became active
* the server's event loop scans the set of active handles and runs event-processing code for each active handle
#ACE_Handle_Set_Iterator
ACE_Handle_Set_Iterator constructs an iterator over the set of active handles returned by select() so that set can be scanned efficiently; each iteration returns one active handle, until ACE_INVALID_HANDLE is reached, as sketched below.
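A sketch of a reactive event loop built on ACE_Handle_Set, ACE::select(), and ACE_Handle_Set_Iterator; an open, non-blocking ACE_SOCK_Acceptor is assumed, and the per-handle processing bodies are left as comments:

#include "ace/ACE.h"             // ACE::select
#include "ace/Handle_Set.h"
#include "ace/SOCK_Acceptor.h"

void event_loop (ACE_SOCK_Acceptor &acceptor)
{
  ACE_Handle_Set master;                    // all handles being watched
  master.set_bit (acceptor.get_handle ());

  for (;;)
    {
      ACE_Handle_Set active (master);       // select() modifies its argument
      if (ACE::select ((int) active.max_set () + 1, active) <= 0)
        break;

      // Visit only the handles select() marked active.
      ACE_Handle_Set_Iterator it (active);
      for (ACE_HANDLE h; (h = it ()) != ACE_INVALID_HANDLE; )
        {
          if (h == acceptor.get_handle ())
            ;  // accept the new connection, add its handle to 'master'
          else
            ;  // read and process this client's data
        }
    }
}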


Chapter 8: ACE Process Wrapper Facades
Multiple processes are suited to:
* situations where multithreading cannot be used
* situations where multithreading is unsuitable, e.g. because of non-reentrant functions
Advantages:
* multiple processes are more robust than multiple threads
#ACE_Process
Portably creates and synchronizes with a process, and saves and accesses process properties (such as the process ID)
#ACE_Process_Options
Specifies platform-independent and platform-specific options

#ACE_Process_Manager
Portably creates and synchronizes with groups of processes


Example: start a child process from the current process
 ACE_Process_Options opts;
 ACE_Process child;
 opts.command_line ("%s %d", "./main.exe", 10); // the program image to execute goes in ACE_Process_Options
 child.spawn (opts);
 child.wait ();
 return child.exit_code ();

#ACE_Process_Options
  Roles:
 * command_line ()      specify the program image to run and its arguments
 * setenv ()            specify an environment variable to add to the new process's environment
 * working_directory () specify a new working directory for the new process
 * set_handles ()       set stdin, stdout, and stderr for the new process
 * pass_handle ()       pass a handle to the new process
 * creation_flags ()    specify whether to run a new program image in the created process
 
 
#ACE_Process_Manager
  Keeps internal records to manage and monitor groups of processes created via ACE_Process;
  allows a process to create a group of processes
 * open ()      initialize an ACE_Process_Manager
 * close ()     free all resources (does not wait for processes to exit)
 * spawn ()     create a process and add it to a managed process group
 * spawn_n ()   create n processes belonging to the same process group
 * wait ()      wait for some or all processes in a process group to exit
 * instance ()  return a pointer to the ACE_Process_Manager singleton
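A sketch of spawning and reaping a group of worker processes with ACE_Process_Manager (the worker image name "./worker.exe" is a placeholder):

#include "ace/Process_Manager.h"
#include "ace/Process.h"

int run_workers ()
{
  ACE_Process_Manager *pm = ACE_Process_Manager::instance ();

  ACE_Process_Options opts;
  opts.command_line ("%s", "./worker.exe");  // hypothetical worker image

  pid_t pids[4];
  if (pm->spawn_n (4, opts, pids) == -1)     // 4 workers, one process group
    return -1;

  return pm->wait ();                        // block until all workers exit
}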
           

This article is from the "Tech record" blog; reprinting is declined.
