Ice Middleware documentation


1 Ice Middleware Introduction

2 Platform Core Features

2.1 Interface Description Language (Slice)

2.2 Ice Runtime

2.2.1 Communicator

2.2.2 Object Adapter

2.2.3 Location Transparency

2.3 Asynchronous Programming Model

2.3.1 Asynchronous Method Invocation

2.3.2 Asynchronous Method Dispatch

2.4 Publish/Subscribe Programming Model (IceStorm)

2.5 Dynamic Service Management (IceBox)

2.6 IceGrid

2.6.1 Distributed Deployment

2.6.2 Load Balancing

2.6.3 Registry Replication

2.7 IceSSL

2.8 Persistent Storage (Freeze)

3 Summary of Ice Platform Research

1 Ice Middleware Introduction

Ice (Internet Communications Engine) is an object-oriented middleware platform that supports object-oriented RPC programming. Its original goal was to provide technology similar to CORBA while eliminating CORBA's complexity. The platform provides the tools, APIs, and library support needed to build object-oriented client/server applications.

Applications developed on the Ice platform can be deployed across platforms and written in multiple languages: the server side supports C++, Java, C#, Python, and several other languages, and the client side additionally supports Ruby and PHP. Ice supports synchronous/asynchronous invocation and publish/subscribe programming patterns, distributed deployment, grid computing, built-in load balancing, and SSL encryption.

Ice can use either TCP/IP or UDP as the underlying transport, and it also allows SSL to be used as a transport so that all communication between client and server is encrypted.

The Ice platform also provides a set of programming libraries, including a threading model library, timers, and signal handlers, all of which expose thread-safe APIs.

At the time of writing, the current version of Ice is 3.4.1, released in June 2010. Ice 3.1.1 (released in October 2006) and earlier versions support Windows, Linux, AIX, HP-UX, and Solaris. However, from Ice 3.2.0 onward the vendor no longer guarantees full support for AIX, and from 3.3.1 onward it no longer guarantees full support for HP-UX, although the source distribution still ships build files for both systems (Make.rules.AIX and Make.rules.HP-UX). According to the official forum, this is because they have essentially no commercial customers on those two systems and therefore no longer test the code on those platforms; customers who need them are asked to get in touch.

2 Platform Core Features

2.1 Interface Description Language (Slice)

Slice (Specification Language for Ice) is the fundamental abstraction mechanism that separates an object's interface from its implementation. Slice establishes a contract between client and server, describing the types and object interfaces used by an application. This description is independent of the implementation language, so it does not matter whether the client is written in the same language as the server.

Slice definitions are compiled into a specific implementation language: the compiler translates the language-neutral definitions into language-specific type definitions and APIs, which developers then use to implement application functionality and interact with Ice. Currently, Ice provides Slice mappings to C++, Java, C#, Python, Ruby, and PHP.

A Slice example:

module Family                     // maps to a C++ namespace
{
    interface Child;
    sequence<Child*> Children;    // maps to a C++ vector

    interface Parent              // maps to a C++ class
    {
        Children getChildren();
    };

    interface Child
    {
        Parent* getMother();
        Parent* getFather();
    };
};

Defining interface/class methods in Slice is similar to other programming languages: a method has a return value and a number of parameters. Note, however, that Slice does not support reference-style parameter passing: a parameter is either an input parameter or an output parameter, never both. For an out parameter, whatever the client initially assigns to it is not visible on the server side, but the value the server assigns is returned to the client. For example:

interface Hello
{
    string sayHello(int num, out string strMsg);
};

Also note that out parameters must be placed after all input parameters; in and out parameters may not be interleaved.

2.2 Ice Runtime

2.2.1 Communicator

The main entry point to the Ice run time is represented by the local interface Ice::Communicator. An application server typically uses a single communicator instance to manage its run-time resources, the main ones being:

• Thread pool: responsible for accepting client connections and requests.

• Configuration properties: various aspects of the Ice run time can be configured through properties; each communicator has its own set of configuration properties.

• Plug-in manager: plug-ins are objects that add features to a communicator; for example, IceSSL (see Section 2.7) is implemented as a plug-in. Each communicator has a plug-in manager implementing the Ice::PluginManager interface, through which the communicator's set of plug-ins can be accessed.

• Object factories: in order to instantiate classes derived from a known base class, the communicator maintains a set of object factories that can instantiate such classes for the Ice run time. Every user-defined interface/class implicitly inherits from the same base class, Ice::Object.

• Default locator: a locator resolves the object identity requested by a client into a proxy object.

• Object adapters: an object adapter dispatches incoming requests, forwarding each request to the correct servant. Each object adapter can bind multiple Ice objects, and the object adapters and objects of different communicators are completely independent of one another.

2.2.2 Object Adapter

A communicator contains one or more object adapters. An object adapter sits on the boundary between the Ice run time and the server:

• It maps Ice objects to incoming requests and dispatches each request to the application code in the corresponding servant (that is, the object adapter implements an up-call interface that connects the Ice run time with the application code in the server).

• It assists with life-cycle operations, so that the creation and destruction of Ice objects and servants are free of races.

• It provides one or more transport endpoints through which clients access the Ice objects offered by the adapter. Each object adapter has one or more servants representing Ice objects, and one or more transport endpoints. If an object adapter has more than one transport endpoint, every servant registered with that adapter can respond to requests arriving at any of the endpoints; in other words, multiple endpoints represent different communication paths to the same set of objects (for example, via different transport mechanisms).

• Each object adapter belongs to exactly one communicator (but one communicator can have multiple object adapters).

2.2.3 Location Transparency

A useful feature of the Ice run time is location transparency: the client does not need to know where an Ice object is implemented. A call on an object is automatically directed to the correct target, whether the object is implemented in the local address space, in another address space on the same machine, or in an address space on a remote machine. Location transparency matters because it allows the location of an object's implementation to change without breaking client programs, and, by using IceGrid, information such as domain names and port numbers can be kept outside the application rather than baked into stringified proxies.

2.3 Asynchronous Programming Model

The Ice platform supports asynchronous method invocation on the client (AMI) and asynchronous method dispatch on the server (AMD).

2.3.1 Asynchronous Method Invocation

The term asynchronous method invocation (AMI) describes Ice's client-side asynchronous programming support. When a remote call is made with AMI, the calling thread does not block while the Ice run time waits for the reply; instead, the calling thread can continue with other activities, and when the reply eventually arrives the Ice run time notifies the application. Notification is delivered via a callback to a programming-language object supplied by the application.

To make an operation available for asynchronous invocation, simply annotate the corresponding interface or operation with the metadata directive ["ami"], as in the following example:

interface Hello
{
    ["ami"] string sayHello(int num);
};

When the Slice compiler generates the header file, it produces, alongside the synchronous method, an asynchronous version of the operation, so client code is free to choose between synchronous and asynchronous calls. For each asynchronous operation, the compiler generates a corresponding asynchronous callback class and an asynchronous proxy method. The callback class is named AMI_<class>_<operation>; for the Slice above, the generated class is AMI_Hello_sayHello, and it declares two methods:

void ice_response(<params>);

Indicates that the operation completed successfully. The parameters represent the operation's return value and out parameters: if the operation has a non-void return type, the first parameter of ice_response is the return value, followed by the operation's out parameters in the order of declaration.

void ice_exception(const Ice::Exception&);

Indicates that a local or user exception was raised.

The client simply derives from the AMI_Hello_sayHello class and implements these two methods. Sample code:

class AMI_Hello_sayHelloI : public AMI_Hello_sayHello
{
public:
    void ice_response(const std::string& strMsg)
    {
        printf("%s\n", strMsg.c_str());
    }

    void ice_exception(const Ice::Exception& ex)
    {
        printf("error\n");
    }
};

The generated asynchronous proxy method is named <operation>_async; the example above yields sayHello_async. The client then makes the asynchronous call as follows:

hello->sayHello_async(new AMI_Hello_sayHelloI, <params>);

and then simply processes the returned results in its ice_response method.

The synchronous call remains available as well:

hello->sayHello(<params>);

2.3.2 Asynchronous Method Dispatch

Asynchronous method dispatch (AMD) is the server-side equivalent of AMI. With AMD, the server can receive a request and suspend its processing, releasing the dispatch thread as soon as possible; when processing resumes and the results are available, the server sends the response explicitly using a callback object provided by the Ice run time.

In practical terms, an AMD operation typically queues the request data (the callback object and the operation parameters) for later processing by an application thread (or thread pool). This way the server minimizes its use of dispatch threads and can efficiently support a large number of concurrent clients.

To use asynchronous dispatch, annotate the corresponding interface or operation with the metadata directive ["amd"], as in the following example:

interface Hello
{
    ["amd"] string sayHello(int num);
};

Unlike asynchronous invocation, asynchronous dispatch applies to the server side: the corresponding asynchronous dispatch class and dispatch method are generated in the server code. When the server receives a client request, the dispatcher calls sayHello_async(const Demo::AMD_Hello_sayHelloPtr&, int, const Ice::Current&). The application can then place the callback object and parameters into a queue; a worker loops over the queued tasks, processes each one, and when processing is complete calls ice_response on the AMD_Hello_sayHello callback to return the result to the client.

2.4 Publish/Subscribe Programming Model (IceStorm)

IceStorm is an efficient publish/subscribe service for Ice applications. IceStorm has several important concepts:

• Message: IceStorm messages differ somewhat from the messages of typical message-queue middleware: they are strongly typed, represented by an invocation of a Slice operation. The operation name identifies the message type, and the operation parameters define the message content. To publish a message, a publisher simply invokes an operation on an IceStorm proxy in the normal way; similarly, subscribers receive messages as ordinary up-calls. IceStorm's message delivery therefore uses a "push" model.

• Topic: an application subscribes to a topic to indicate its interest in certain messages. An IceStorm server can support any number of topics, which are created dynamically and distinguished by unique names. Each topic can have multiple publishers and subscribers.

• Persistence: IceStorm maintains a database of its topics and the links between them. However, messages sent through IceStorm are not persisted: a message is discarded as soon as it has been delivered to the topic's current set of subscribers. If an error occurs while delivering a message to a subscriber, IceStorm does not queue messages for that subscriber.

• Subscriber errors: because IceStorm delivers messages with oneway semantics, it can only detect connection or timeout errors. If IceStorm encounters such an error while delivering a message to a subscriber, that subscriber is immediately unsubscribed from the topic. Users can mitigate this by setting QoS parameters such as the retry count (retryCount), but for hard errors such as ObjectNotExistException or NotRegisteredException the Ice run time does not retry and still removes the subscription directly.

IceStorm supports two main QoS parameters, reliability and retryCount. The reliability value can be "ordered" or empty; when it is "ordered", a publisher's messages are guaranteed to be delivered to subscribers in order.

Judging by the features IceStorm provides, it is suitable for applications that do not need persistent storage of forwarded messages. However, because the subscription is cancelled immediately after a subscriber error, rather than at the subscriber's own initiative, this behavior needs special attention when building real applications.

IceStorm is implemented as an IceBox service, so the IceBox service must be started when deploying IceStorm applications.

2.5 Dynamic Service Management (IceBox)

IceBox dynamically loads user services and manages them centrally; services hosted in IceBox can be administered remotely with the iceboxadmin management tool. User services are built as dynamic-library components that IceBox can load at run time.

A service component that uses IceBox must inherit from the IceBox::Service class, implement its start() and stop() methods, and provide a service entry-point function, typically named create(), in which an instance of the service implementation class is created and returned. For example:

extern "C"
{
    ICE_DECLSPEC_EXPORT IceBox::Service*
    create(Ice::CommunicatorPtr communicator)
    {
        return new HelloServiceI;
    }
}

For Ice 3.3.0 and later, iceboxadmin provides a command-line management tool and an application interface for starting, stopping, and shutting down IceBox servers. The management commands are as follows:

iceboxadmin [options] [command...]

Commands:

start SERVICE      Start a service.
stop SERVICE       Stop a service.
shutdown           Shut down the server.

2.6 IceGrid

IceGrid supports distributed network service applications. An IceGrid domain consists of a registry and any number of nodes. Together, the registry and nodes manage the information and server processes that make up one or more applications; each application assigns servers to particular nodes. The registry keeps a persistent record of this information, while the nodes are responsible for starting and monitoring their assigned server processes. In a typical configuration, one node runs on each computer that hosts Ice servers. The registry does not consume much processor time, so it usually runs on the same computer as one of the nodes; it can even run in the same process as a node. If fault tolerance is required, the registry also supports replication with a master/slave design.

The registry's primary responsibility is to act as Ice's location service, resolving indirect proxies. When a client first attempts to use an indirect proxy, the client-side Ice run time connects to the registry, which translates the symbolic information in the indirect proxy into the endpoints of a direct proxy; the client then establishes a connection using those endpoints. With adapter replication, adapters with the same name can be distributed across multiple nodes and an indirect proxy can map to direct proxies on several nodes; at run time the registry automatically selects one for the client according to the load-balancing policy.

When using indirect proxies, the client can obtain a proxy for a service object as follows:

MyProxy = TheObject@TheAdapter    // object identity @ adapter id

or, more simply:

MyProxy = TheObject               // object identity only

2.6.1 Distributed Deployment

To deploy IceGrid distributed services, the registry service (icegridregistry) must be started, with its address/port, protocol, and the directory in which registration information is stored configured (IceGrid stores its registration information as Berkeley DB database files):

IceGrid.Registry.Client.Endpoints=tcp -p 4061
IceGrid.Registry.Data=/opt/ripper/registry

Both server nodes and clients must be configured with the registry's address, port, and protocol:

Ice.Default.Locator=IceGrid/Locator:tcp -h 172.0.0.1 -p 4061

Then start the registry service (icegridregistry) and the node service (icegridnode) separately.

Ice provides the deployment tool icegridadmin; the icegridadmin tool also needs the Ice.Default.Locator property to be defined.

An application deployment file, written in XML, must also be prepared. The following is an application configuration file that supports adapter replication and uses a server template:

<icegrid>
    <application name="Ripper">
        <replica-group id="EncoderAdapters">    <!-- the adapter replication group -->
            <!-- clients use the identity below to obtain a proxy -->
            <object identity="EncoderFactory"
                    type="::Ripper::Mp3EncoderFactory"/>
        </replica-group>
        <server-template id="EncoderServerTemplate">    <!-- a server template -->
            <parameter name="index"/>
            <parameter name="exepath"
                       default="/opt/ripper/bin/server"/>
            <server id="EncoderServer${index}"
                    exe="${exepath}"
                    activation="on-demand">
                <adapter name="EncoderAdapter"
                         replica-group="EncoderAdapters"
                         endpoints="tcp"/>
            </server>
        </server-template>
        <node name="Node1">
            <server-instance template="EncoderServerTemplate"
                             index="1"/>
        </node>
        <node name="Node2">
            <server-instance template="EncoderServerTemplate"
                             index="2"/>
        </node>
    </application>
</icegrid>

The client can then obtain the object proxy as follows:

Ice::ObjectPrx obj = communicator->stringToProxy("EncoderFactory");

2.6.2 Load Balancing

The Ice platform has built-in load balancing, offering multiple load-balancing schemes for application services distributed across nodes; only the XML configuration file needs to be edited to set it up. Configuration items include the type (load-balancing policy), the sampling interval (how often load information is collected), and the number of replicas (how many adapter endpoints are returned to the client).

There are four load-balancing types:

• Random: the registry selects an adapter for the client at random, without checking adapter load.

• Adaptive: the registry selects the least-loaded of all the adapters for the client. The sampling-interval parameter is meaningful only with this policy; it specifies how often each node reports its local system load information to the registry.

• Round-robin: the registry selects the least recently used adapter from the corresponding adapter group for the client.

• Ordered: the registry selects an adapter according to adapter priority, from highest to lowest, for the client.

Example configuration:

<replica-group id="EncoderAdapters">
    <load-balancing type="adaptive"/>    <!-- the adaptive policy -->
    <object identity="EncoderFactory"
            type="::Ripper::Mp3EncoderFactory"/>
</replica-group>

2.6.3 Registry Replication

The previous two sections describe distributed deployment of user applications. A critical piece of that deployment is Ice's registry: every client queries the registry for the real endpoints of a service proxy before establishing a connection, which makes the registry a single point of failure. To keep the registry from becoming a bottleneck and to improve system reliability, Ice 3.3.0 introduced registry replication.

Ice registry clustering is implemented through master/slave registry replication: a cluster has one master registry and any number of slave registries, distinguished by the IceGrid.Registry.ReplicaName property. The master registry must be named Master; the other names can be chosen freely. Start the master registry first, then the others. Information updated through the master registry is synchronized to the slave registries; the slaves do not communicate with each other. If the master registry fails, one of the slaves must take over as master; however, according to the 3.3 documentation, promoting a slave to master requires restarting its process and changing its IceGrid.Registry.ReplicaName value to Master (or deleting the property, since it defaults to Master).

When using cluster mode, the client configuration should list the address/port of every master and slave registry in Ice.Default.Locator, for example:

Ice.Default.Locator=IceGrid/Locator:default -p 12000:default -p 12001

Application nodes likewise bind to all registry addresses and ports, so that application updates notify all registries at the same time.

2.7 IceSSL

The Ice platform can be configured to support SSL. The basic configuration procedure is as follows:

• First, modify the configuration file to enable the SSL plug-in. For a C++ server the setting is: Ice.Plugin.IceSSL=IceSSL:createIceSSL

You only need to place the IceSSL dynamic library somewhere on the path contained in LD_LIBRARY_PATH.

• Then modify the adapter's listening endpoints:

MyAdapter.Endpoints=tcp -p 8000:ssl -p 8001:udp -p 8000    // the adapter listens on three protocol/port pairs simultaneously

Ice also offers a variety of configuration properties to suit real deployments, as in the following example:

Ice.Plugin.IceSSL=IceSSL:createIceSSL
IceSSL.DefaultDir=/opt/certs    // default certificate directory
IceSSL.CertFile=pubkey.pem      // certificate file
IceSSL.KeyFile=privkey.pem      // private key file
IceSSL.CertAuthFile=ca.pem      // trusted root certificate file
IceSSL.Password=password        // password protecting the private key file

2.8 Persistent Storage (Freeze)

The persistence scheme provided by Ice supports both persistent storage of ordinary user data (key/value pairs) and persistent management of service object instances. Persistent storage of ordinary user data is relatively simple; management of service object instances is more complex and is not covered here.

Ice's persistent storage medium is Berkeley DB. For ordinary data, the C++ mapping uses a map: the user defines the data to be stored in Slice and uses slice2freeze to generate the corresponding map class; the data can then be manipulated with the familiar map container interface. For example:

First, generate the map type to be stored:

slice2freeze --dict StringIntMap,string,int StringIntMap

Using it in code:

Ice::CommunicatorPtr communicator = Ice::initialize(argc, argv);

// Create a Freeze database connection (to the "db" environment).
Freeze::ConnectionPtr connection = Freeze::createConnection(communicator, "db");

// Instantiate the map ("simple" is the database name).
StringIntMap map(connection, "simple");

// Clear the map.
map.clear();

Ice::Int i;
StringIntMap::iterator p;

// Populate the map with keys "a" .. "z".
for (i = 0; i < 26; i++)
{
    std::string key(1, 'a' + i);
    map.insert(make_pair(key, i));
}

// Iterate over the map and change the values.
for (p = map.begin(); p != map.end(); ++p)
    p.set(p->second + 1);

// Find and erase the last element.
p = map.find("z");
assert(p != map.end());
map.erase(p);

// Clean up.
connection->close();
communicator->destroy();

Freeze also allows structures and class objects to be stored as values, but only their public member variables are stored; other members are not.

More recent versions of Ice also allow values to be indexed; if the value is a structure or class object, its member variables can be indexed as well, and slice2freeze generates the corresponding index-lookup functions. For example, define the following data structures to be stored:

module Demo
{
    struct Struct1
    {
        long l;
    };

    class Class1
    {
        string s;
    };
};

Then execute the following command to generate the map type, with an index on the Class1 member variable s:

slice2freeze --dict Demo::IndexedStruct1Class1Map,Demo::Struct1,Demo::Class1 --dict-index Demo::IndexedStruct1Class1Map,s,case-sensitive BenchTypes Test.ice

A findByS(const std::string&) method is automatically generated in the compiled code and can be called directly in the program:

IndexedStruct1Class1Map& m = ...;
IndexedStruct1Class1Map::iterator p = m.findByS(os.str());

3 Summary of Ice Platform Research

The Ice platform provides much more functionality than is covered here; in addition to the parts described above, it also supports software distribution (IcePatch2) and firewall traversal (Glacier2). Since the current project does not yet need them, those two parts are not described.

The demos shipped with Ice and our own test programs run well on Linux (openSUSE), and a test of 3.1.1 on AIX ran the asynchronous programming examples successfully; however, no performance testing of applications on the Ice platform has been done yet.

Based on the documentation, the Ice platform supports synchronous/asynchronous invocation, publish/subscribe, distributed deployment, and built-in persistent storage, and its interface description language maps to a variety of object-oriented development languages, so it can meet the technical requirements for developing an ESB system. There are some risks, however: versions after Ice 3.1 no longer guarantee full support for the AIX and HP-UX operating systems, so every upgraded version after 3.1 must be compiled and tested on those platforms.
