If you find yourself wrestling with metaclasses, stuck in Twisted's asynchronous programming, or worn out by object-oriented programming and its deep layers of inheritance, take heart: PEAK combines some of these elements into a component programming framework. PEAK does have its share of minor problems; like Twisted, its documentation falls well short of the project's scope and is often difficult to read. Nonetheless, there is something very interesting about this project, led by Python luminary Phillip J. Eby, and I think it offers the opportunity to develop highly productive, highly layered applications.
The PEAK package consists of a number of sub-packages with different purposes. Among the important ones are peak.api, peak.binding, peak.config, peak.naming, and peak.storage. Most of the names are fairly self-explanatory: peak.binding handles flexible connections between components; peak.config stores "infrequently changed (lazily immutable)" data for declarative application programming; peak.naming lets you create globally unique identifiers for (network) resources; and peak.storage, as the name implies, manages databases and persistent content.
For this article, however, we will focus on peak.api, and in particular on the PyProtocols package, which can be obtained separately and which provides an infrastructure for the other PEAK sub-packages. A version of PyProtocols is included in PEAK as peak.api.protocols, but what interests me here is the standalone protocols package. I will come back to the other parts of PEAK in a later installment.
What is a protocol?
In the abstract, a protocol is just a set of behaviors that an object agrees to follow. Strongly typed programming languages, Python included, come with a collection of basic types, each with a guaranteed set of behaviors: integers know how to multiply themselves, lists know how to iterate over their contents, dictionaries know how to look up a value by key, files know how to read and write, and so forth. The set of behaviors you can expect from a built-in type makes up the protocol that type implements. A protocol that is spelled out formally is called an interface.
For the standard types, it is not too difficult to list all the behaviors they implement (although these differ slightly between Python versions and, of course, between programming languages). But at the edges, for objects of custom classes, it is harder to say what ultimately counts as "dictionary-like" or "file-like" behavior. In most cases, a custom object that implements only a subset of the methods of, say, the built-in dict type, even a fairly small subset, is "dictionary-like" enough for the requirements at hand. Still, it is tempting to be able to state explicitly what an object must be able to do for a given function, module, class, or framework. That is (part of) what the PyProtocols package provides.
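To make the "dictionary-like" idea concrete, here is a made-up class (PairStore is not from any library) that implements only a tiny slice of dict behavior, which is often all the consuming code actually needs:

class PairStore:
    "A 'dict-like' object supporting only a small subset of dict behavior"
    def __init__(self, pairs):
        self._pairs = list(pairs)
    def items(self):
        # The one dict-ish method a consumer might require
        return list(self._pairs)

ps = PairStore([('a', 1), ('b', 2)])
for key, val in ps.items():
    print key, val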
In a programming language with static type declarations, using data in a new context usually requires you to cast or convert it from one type to another. In other languages, conversions are performed implicitly as the context requires; these are called coercions. Python has both explicit conversions and coercions, though it leans toward the former ("explicit is better than implicit"). You can add a floating-point number to an integer and get back the more general float, but if you want to turn the string "3.14" into a number, you need the explicit constructor float("3.14").
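In plain Python, the two cases look like this:

# Coercion: the int is silently promoted to the more general float
x = 1 + 2.5             # x == 3.5
# Explicit conversion: a string never becomes a number by itself
y = float("3.14") + 1   # y == 4.14
print x, y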
PyProtocols has a capability it calls "adaptation," similar to the unorthodox computer science notion of "partial typing." Adaptation might also be thought of as coercion taken much further. If an interface defines a required set of capabilities (that is, object methods), then an object that does not already do "everything needed" may be adapted, via the protocols.adapt() function, to provide the required capabilities. Obviously, if you have an explicit conversion function that turns an object of type X into an object of type Y (where Y implements the interface IY), that function can adapt X to the IY protocol. But adaptation in PyProtocols can do much more than that. Even if you have never explicitly written a converter from type X to type Y, adapt() can often work out a path that gives X the capabilities IY requires (for example, by finding an intermediate conversion from X to interface IZ, from IZ to IW, and then from IW to IY).
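As a minimal sketch of that transitive behavior (IZ, IY, and the class X below are made-up names, and the two-argument adapter-factory convention follows the listing later in this article), adapt() composes two separately declared adapters on its own:

from protocols import Interface, declareAdapter, adapt

class IZ(Interface): pass
class IY(Interface): pass

class X(object):
    def __init__(self, n):
        self.n = n

# Declare X -> IZ and IZ -> IY; no X -> IY adapter is ever written
declareAdapter(lambda x, p: str(x.n), provides=[IZ], forTypes=[X])
declareAdapter(lambda z, p: "(IY %s)" % z, provides=[IY], forProtocols=[IZ])

print adapt(X(42), IY)   # adapt() finds the X -> IZ -> IY path itself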
Declaring interfaces and adapters
There are a number of different ways to create interfaces and adapters in PyProtocols. The PyProtocols documentation describes these techniques in much greater detail than this article can cover. We will go into some of the details shortly, but I think the most useful introduction is a simplified example of actual PyProtocols code.
As my example, I decided to create a Lisp-like serialization of Python objects. The result is not precisely Lisp syntax, and I am not concerned with the exact merits of the format. The idea is simply to create a facility that does roughly the same job as the repr() function or the pprint module, but whose results are not only noticeably different from those serializers but also easier to extend and customize. One decidedly un-Lisp-like choice is made for purposes of illustration: mappings are treated as the more basic data structure, and a sequence (a Python list or tuple) is treated as a mapping from consecutive integer keys. Here's the code:
lispy.py PyProtocols definitions
from protocols import *
from cStringIO import StringIO

# Like unicode & even support objects that don't explicitly support ILisp
ILisp = protocolForType(unicode, ['__repr__'], implicit=True)

# Class for interface, but no methods specifically required
class ISeq(Interface):
    pass

# Class for interface, extremely simple mapping interface
class IMap(Interface):
    def items():
        "A requirement for a map is to have an .items() method"

# Define function to create a Lisp-like representation of a mapping
def map2lisp(map_, prot):
    out = StringIO()
    for k, v in map_.items():
        out.write("(%s %s) " % (adapt(k, prot), adapt(v, prot)))
    return "(MAP %s)" % out.getvalue()

# Use this func to convert an IMap-supporting obj to ILisp-supporting obj
declareAdapter(map2lisp, provides=[ILisp], forProtocols=[IMap])

# Note that a dict implements an IMap interface with no conversion needed
declareAdapter(NO_ADAPTER_NEEDED, provides=[IMap], forTypes=[dict])

# Define and use func to adapt an InstanceType obj to the ILisp interface
from types import InstanceType
def inst2lisp(o, p):
    return "(CLASS '(%s) %s)" % (o.__class__.__name__, adapt(o.__dict__, p))
declareAdapter(inst2lisp, provides=[ILisp], forTypes=[InstanceType])

# Define a class to adapt an ISeq-supporting obj to an IMap-supporting obj
class SeqAsMap(object):
    advise(instancesProvide=[IMap],
           asAdapterForProtocols=[ISeq])
    def __init__(self, seq, prot):
        self.seq = seq
        self.prot = prot
    def items(self):
        # Implement the IMap-required .items() method
        return enumerate(self.seq)

# Note that list, tuple implement an ISeq interface w/o conversion needed
declareAdapter(NO_ADAPTER_NEEDED, provides=[ISeq], forTypes=[list, tuple])

# Define a lambda func to adapt str, unicode to ILisp interface
declareAdapter(lambda s, p: "'(%s)" % s, provides=[ILisp], forTypes=[str, unicode])

# Define a class to adapt several numeric types to ILisp interface
# Return a string (ILisp-supporting) directly from instance constructor
class NumberAsLisp(object):
    advise(instancesProvide=[ILisp],
           asAdapterForTypes=[long, float, complex, bool])
    def __new__(klass, val, proto):
        return "(%s %s)" % (val.__class__.__name__.upper(), val)
In the code above, I declare a number of adapters in several different ways. In some cases the code adapts one interface to another interface; in other cases a type itself is adapted directly to an interface. I would like you to notice a few things about the code: (1) no adapter is declared from list or tuple to the ILisp interface; (2) no adapter is explicitly declared for the int numeric type; and (3) no adapter is declared directly from dict to ILisp either. Here is how the code adapts (with adapt()) a variety of Python objects:
Serialization of test_lispy.py objects
from lispy import *
from sys import stdout, stderr

toLisp = lambda o: adapt(o, ILisp)

class Foo:
    def __init__(self):
        self.a, self.b, self.c = 'a', 'b', 'c'

tests = ["foo bar",
         {17: 2, 33: 4, 'biz': 'baz'},
         ['bar', ('f', 'o', 'o')],
         1.23,
         (1L, 2, 3, 4+4j),
         Foo(),
         True,
        ]

for test in tests:
    stdout.write(toLisp(test) + '\n')
At run time, we get:
test_lispy.py Serialization Results
$ python2.3 test_lispy.py
'(foo bar)
(MAP (17 2) ('(biz) '(baz)) (33 4) )
(MAP (0 '(bar)) (1 (MAP (0 '(f)) (1 '(o)) (2 '(o)) )) )
(FLOAT 1.23)
(MAP (0 (LONG 1)) (1 2) (2 3) (3 (COMPLEX (4+4j))) )
(CLASS '(Foo) (MAP ('(a) '(a)) ('(c) '(c)) ('(b) '(b)) ))
(BOOL True)
Some explanation of the output will help. The first line is simple: we declared an adapter directly from strings to ILisp, so the call adapt("foo bar", ILisp) simply returns the result of the lambda function. The next line is only slightly more complicated. There is no adapter directly from dict to ILisp, but dict adapts to IMap with no adapter at all (we declared as much), and we do have an adapter from IMap to ILisp. Likewise, for the list and tuple that follow, list and tuple adapt to ISeq without conversion, ISeq adapts to IMap, and IMap adapts to ILisp. PyProtocols works out which adaptation path to take, and all of this happens behind the scenes. An instance of an old-style class follows a path much like the string's, since we declared an adapter from InstanceType directly to ILisp (that adapter, in turn, adapts the instance's .__dict__ as an IMap-supporting object).
But wait a moment: how are the integers that appear in our dict and tuple objects handled? There are explicit adapters for the long, complex, float, and bool types, but none for int. The trick is that int objects already have a .__repr__() method, and by declaring implicit support as part of the ILisp interface, we let an object's existing .__repr__() method count as support for ILisp. That is why the built-in integers are rendered as plain, undecorated numerals rather than with an uppercase type tag such as LONG.
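As a hypothetical interactive check of those individual adaptation steps (assuming lispy.py from the listing above is importable), each one can also be exercised on its own:

from lispy import *

print adapt("foo bar", ILisp)                # the direct str -> ILisp lambda
print adapt({17: 2}, IMap)                   # NO_ADAPTER_NEEDED: the dict itself
print list(adapt(['a', 'b'], IMap).items())  # list -> ISeq -> IMap via SeqAsMap
print adapt(17, ILisp)                       # int passes through; its __repr__ suffices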
Adaptation protocol
Let's take a more explicit look at what the protocols.adapt() function does. In our example we used the declaration API to implicitly set up a collection of adaptation "factories." This API has several levels. The "primitives" of the declaration API are the functions declareAdapterForType(), declareAdapterForObject(), and declareAdapterForProtocol(). The earlier example does not call these directly; instead it uses higher-level APIs such as declareImplementation(), declareAdapter(), adviseObject(), and protocolForType(). In one case we saw the rather magical advise() declaration inside a class body. The advise() function supports a large number of keyword arguments that configure what the advised class is for and how it behaves. You can also advise() a module object.
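As a hedged sketch of two of those higher-level calls (IMarker, SomeClass, and AnotherClass are made-up names, and the keyword arguments shown are the ones I believe these functions accept), a declaration can cover a whole class or a single object:

from protocols import Interface, declareImplementation, adviseObject, adapt

class IMarker(Interface): pass

class SomeClass(object): pass
class AnotherClass(object): pass

# Every instance of SomeClass is declared to provide IMarker as-is
declareImplementation(SomeClass, instancesProvide=[IMarker])

# Only this particular object is declared to provide IMarker
special = AnotherClass()
adviseObject(special, provides=[IMarker])

print adapt(SomeClass(), IMarker)   # the instance comes back unchanged
print adapt(special, IMarker)       # likewise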
You do not need to use the declaration API at all, though: you can instead create objects that know how to adapt themselves to an interface, or interfaces that know how to adapt objects to themselves. Let's look at the call signature of adapt() and then walk through what it does. A call to adapt() looks like this:
Call signature of adapt()
adapt(component, protocol [, default [, factory]])
This says that you want the object component adapted to the interface protocol. If default is given, it can be returned when no wrapper object or modification of component can be found. If factory is given as a keyword argument, it is used as a last resort to manufacture the wrapper or modification. But let's back up a bit and look at the complete sequence of things adapt() tries, as simplified code:
A hypothetical implementation of adapt()
if isinstance(component, protocol):
    return component
elif hasattr(component, '__conform__'):
    return component.__conform__(protocol)
elif hasattr(protocol, '__adapt__'):
    return protocol.__adapt__(component)
elif default is not None:
    return default
elif factory is not None:
    return factory(component, protocol)
else:
    raise NotImplementedError
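A quick illustration of the default argument (IThing is a made-up protocol that the integer 42 has no way of supporting):

from protocols import Interface, adapt

class IThing(Interface):
    pass

print adapt(42, IThing, None)   # prints None instead of raising an error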
Calls to adapt() should preserve certain properties (this is advice to programmers rather than something the library can enforce). A call to adapt() should be idempotent: for an object x and a protocol P, we want adapt(x, P) == adapt(adapt(x, P), P). In spirit, this is much the same reason that an iterator's .__iter__() method returns the iterator itself (self). You basically do not want re-adaptation to the same protocol to keep changing the result.
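The iterator analogy can be seen directly in plain Python, where re-applying iter() to an iterator hands back the very same object:

it = iter([1, 2, 3])
assert iter(it) is it   # re-"adaptation" changes nothing
print list(it)          # [1, 2, 3]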
It is also worth noting that adaptation can be lossy. To make an object conform to an interface, it may be inconvenient or impossible to keep all the information needed to re-create the original object. That is, in general, for an object x and protocols P1 and P2: adapt(x, P1) != adapt(adapt(adapt(x, P1), P2), P1).
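lispy.py shows this lossiness directly: once a mapping has been rendered to its Lisp-ish string, there is no general way to recover the original dict (a quick check, again assuming lispy.py is importable):

from lispy import *

s = adapt({'a': 1}, ILisp)
print s          # something like (MAP ('(a) 1) )
print type(s)    # just a str -- the original mapping is gone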
Before we finish, let's look at another test script that takes advantage of the lower-level behavior of adapt():
Serialization of test_lispy2.py objects
from lispy import *

class Bar(object):
    pass
class Baz(Bar):
    def __repr__(self):
        return "represent a "+self.__class__.__name__+" object!"
class Bat(Baz):
    def __conform__(self, prot):
        return "Adapt "+self.__class__.__name__+" to "+repr(prot)+"!"

print adapt(Bar(), ILisp)
print adapt(Baz(), ILisp)
print adapt(Bat(), ILisp)
print adapt(adapt(Bat(), ILisp), ILisp)

$ python2.3 test_lispy2.py
<__main__.Bar object at 0x65250>
represent a Baz object!
Adapt Bat to WeakSubset(<type 'unicode'>, ('__repr__',))!
'(Adapt Bat to WeakSubset(<type 'unicode'>, ('__repr__',))!)
The results show that the design of lispy.py does not achieve the idempotency goal; improving the design would probably be a worthwhile exercise. A representation like ILisp, however, is bound to lose information from the original object (and that is fine).
Conclusion
It strikes me that PyProtocols has something in common with other "exotic" topics covered in this column. First, the declaration API is declarative (as opposed to imperative). Rather than spelling out the steps and switches needed to perform an action, declarative programming states what should hold for particular content and leaves it to the library or compiler to work out how to carry it out. The names "declare*()" and "advise*()" come from this point of view.
I also find, though, that PyProtocols programming is somewhat similar to programming with multiple dispatch, specifically with the gnosis.magic.multimethods module that I presented in an earlier installment. In contrast to PyProtocols' computation of adaptation paths, my own module performs a fairly simple inference to determine the relevant ancestor classes for a dispatch. Both libraries, however, encourage a similar modular style of programming: many small functions or classes performing "pluggable" tasks, without being trapped in a rigid class hierarchy. In my opinion, this style has much to recommend it.