Python is well known for its strict structure and naming rules, which can be confusing for developers.
For dynamic languages such as Python and Ruby, naming conventions are particularly important, because methods and attributes can be added or removed at any time during runtime. Ruby's syntax rules are relatively casual, but there are many implicit conventions that correspond to one another, and its naming rules even penetrate the specification of the language itself. Python's naming rules, likewise, are anything but arbitrary, and we should gradually develop good naming conventions in our code. This article collects code styles and naming rules from articles on several major sites. Code style:
Reference: http://www.python.org/dev/peps/pep-0008/#naming-conventions

Programming recommendations:
Code should be written in a way that does not disadvantage other implementations of Python (PyPy, Jython, IronPython, Cython, Psyco, and such).
For example, do not rely on CPython's efficient implementation of in-place string concatenation for statements of the form a += b or a = a + b. Those statements run more slowly in Jython. In performance-sensitive parts of the library, the ''.join() form should be used instead. This will ensure that concatenation occurs in linear time across the various implementations.
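A minimal sketch of the point above: repeated += concatenation can be quadratic on some implementations, while ''.join() is linear everywhere.

```python
# Building a string from many pieces.
parts = ["alpha", "beta", "gamma"]

slow = ""
for p in parts:
    slow += p          # may be slow on Jython and other implementations

fast = "".join(parts)  # preferred: linear time across implementations

assert slow == fast == "alphabetagamma"
```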
Comparisons to singletons like None should always be done with is or is not, never the equality operators.
Also, beware of writing if x when you really mean if x is not None -- e.g. when testing whether a variable or argument that defaults to None was set to some other value. The other value might have a type (such as a container) that could be false in a Boolean context!
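A short sketch of this pitfall (the function name is illustrative): an empty list is false in a Boolean context, but it is not None, and the two cases must be distinguished.

```python
# An empty container is false, but it is not None.
def append_item(item, target=None):
    if target is None:   # correct: only a missing argument triggers the default
        target = []
    target.append(item)
    return target

shared = []
result = append_item(1, shared)
assert result is shared          # the caller's (empty) list was used, not replaced
assert append_item(2) == [2]     # the default kicked in only for None
```

Had the test been written as if not target, the caller's empty list would have been silently discarded.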
When implementing ordering operations with rich comparisons, it is best to implement all six operations (__eq__, __ne__, __lt__, __le__, __gt__, __ge__) rather than relying on other code to only exercise a particular comparison.
To minimize the effort involved, the functools.total_ordering() decorator provides a tool to generate missing comparison methods.
PEP 207 indicates that reflexivity rules are assumed by Python. Thus, the interpreter may swap y > x with x < y, y >= x with x <= y, and may swap the arguments of x == y and x != y. The sort() and min() operations are guaranteed to use the < operator and the max() function uses the > operator. However, it is best to implement all six operations so that confusion doesn't arise in other contexts.
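A minimal sketch of the functools.total_ordering() approach mentioned above (the Version class is a made-up example): define __eq__ and one ordering method, and the decorator derives the rest.

```python
import functools

# Supply __eq__ and __lt__; total_ordering fills in __le__, __gt__, __ge__.
@functools.total_ordering
class Version:
    def __init__(self, major, minor):
        self.major, self.minor = major, minor
    def __eq__(self, other):
        return (self.major, self.minor) == (other.major, other.minor)
    def __lt__(self, other):
        return (self.major, self.minor) < (other.major, other.minor)

assert Version(1, 2) <= Version(1, 3)   # __le__ was generated
assert Version(2, 0) > Version(1, 9)    # __gt__ was generated
assert Version(1, 0) != Version(1, 1)   # __ne__ follows from __eq__
```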
Use class-based exceptions.
String exceptions in new code are forbidden, because this language feature is being removed in Python 2.6.
Modules or packages should define their own domain-specific base exception class, which should be subclassed from the built-in Exception class. Always include a class docstring. E.g.:

class MessageError(Exception):
    """Base class for errors in the email package."""

Class naming conventions apply here, although you should add the suffix "Error" to your exception classes if the exception is an error. Non-error exceptions need no special suffix.
When raising an exception, use raise ValueError('message') instead of the older form raise ValueError, 'message'.
The paren-using form is preferred because when the exception arguments are long or include string formatting, you don't need to use line continuation characters thanks to the containing parentheses. The older form will be removed in Python 3.
When catching exceptions, mention specific exceptions whenever possible instead of using a bare except: clause.
For example, use:
try:
    import platform_specific_module
except ImportError:
    platform_specific_module = None
A bare except: clause will catch SystemExit and KeyboardInterrupt exceptions, making it harder to interrupt a program with Control-C, and can disguise other problems. If you want to catch all exceptions that signal program errors, use except Exception: (a bare except is equivalent to except BaseException:).
A good rule of thumb is to limit use of bare 'except' clauses to two cases:
- If the exception handler will be printing out or logging the traceback; at least the user will be aware that an error has occurred.
- If the code needs to do some cleanup work, but then lets the exception propagate upwards with raise. try...finally can be a better way to handle this case.
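A small sketch of the cleanup case above: with try...finally, the cleanup runs regardless of the outcome, while the exception still propagates to the caller.

```python
# Cleanup via finally, while letting any exception propagate.
cleanup_log = []

def risky(fail):
    try:
        if fail:
            raise ValueError("boom")
        return "ok"
    finally:
        cleanup_log.append("cleaned")   # runs on success and on failure

assert risky(False) == "ok"
try:
    risky(True)
except ValueError:
    pass                                # the error still reached the caller
assert cleanup_log == ["cleaned", "cleaned"]
```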
Additionally, for all try/except clauses, limit the try clause to the absolute minimum amount of code necessary. Again, this avoids masking bugs.
Yes:
try:
    value = collection[key]
except KeyError:
    return key_not_found(key)
else:
    return handle_value(value)
No:
try:
    # Too broad!
    return handle_value(collection[key])
except KeyError:
    # Will also catch KeyError raised by handle_value()
    return key_not_found(key)
Context managers should be invoked through separate functions or methods whenever they do something other than acquire and release resources. For example:
Yes:
with conn.begin_transaction():
    do_stuff_in_transaction(conn)
No:
with conn:
    do_stuff_in_transaction(conn)
The latter example doesn't provide any information to indicate that the __enter__ and __exit__ methods are doing something other than closing the connection after a transaction. Being explicit is important in this case.
Use string methods instead of the string module.
String methods are always much faster and share the same API with Unicode strings. Override this rule if backward compatibility with Pythons older than 2.0 is required.
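A brief sketch of the rule above: string methods replace the old string-module functions (which were removed in Python 3, so the module call is shown only as a comment).

```python
# String methods replace the deprecated string-module functions.
s = "Hello, World"

# Old (pre-2.0) style, removed in Python 3:
#   import string
#   string.lower(s)

assert s.lower() == "hello, world"
assert s.split(", ") == ["Hello", "World"]
```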
Use ''.startswith() and ''.endswith() instead of string slicing to check for prefixes or suffixes.
startswith() and endswith() are cleaner and less error prone. For example:
Yes: if foo.startswith('bar'):
No: if foo[:3] == 'bar':
The exception is if your code must work with Python 1.5.2 (but let's hope not!).
Object type comparisons should always use isinstance() instead of comparing types directly.
Yes: if isinstance(obj, int):
No: if type(obj) is type(1):
When checking if an object is a string, keep in mind that it might be a unicode string too! In Python 2.3, str and unicode have a common base class, basestring, so you can do:
if isinstance(obj, basestring):
For sequences (strings, lists, tuples), use the fact that empty sequences are false.
Yes: if not seq:
     if seq:
No: if len(seq):
    if not len(seq):
Don't write string literals that rely on significant trailing whitespace. Such trailing whitespace is visually indistinguishable and some editors (or more recently, reindent.py) will trim it.
Don't compare boolean values to True or False using ==.
Yes: if greeting:
No: if greeting == True:
Worse: if greeting is True:
The Python standard library will not use function annotations, as that would result in a premature commitment to a particular annotation style. Instead, the annotations are left for users to discover and experiment with useful annotation styles.
Early core developer attempts to use function annotations revealed inconsistent, ad-hoc annotation styles. For example:
- [str] was ambiguous as to whether it represented a list of strings or a value that could be either str or None.
- The notation open(file:(str, bytes)) was used for a value that could be either bytes or str rather than a 2-tuple containing a str value followed by a bytes value.
- The annotation seek(whence: int) exhibited a mix of over-specification and under-specification: int is too restrictive (anything with __index__ would be allowed) and it is not restrictive enough (only the values 0, 1, and 2 are allowed). Likewise, the annotation write(b: bytes) was also too restrictive (anything supporting the buffer protocol would be allowed).
- Annotations such as read1(n: int = None) were self-contradictory since None is not an int. Annotations such as source_path(self, fullname: str) -> object were confusing about what the return type should be.
- In addition to the above, annotations were inconsistent in the use of concrete types versus abstract types: int versus Integral, and set/frozenset versus MutableSet/Set.
- Some annotations in the abstract base classes were incorrect specifications. For example, set-to-set operations require other to be another instance of Set rather than just an Iterable.
- A further issue was that annotations became part of the specification but weren't being tested.
- In most cases, the docstrings already defined the type specifications and did so with greater clarity than the function annotations. In the remaining cases, the docstrings were improved once the annotations were removed.
- The observed function annotations were too ad-hoc and inconsistent to work with a coherent system of automatic type checking or argument validation. Leaving these annotations in the code would have made it more difficult to make changes later so that automated utilities could be supported.
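To illustrate the self-contradiction mentioned in the list above, here is a hypothetical sketch of the read1(n: int = None) pattern; since Python does not enforce annotations at runtime, nothing catches the mismatch, which is exactly why untested annotations drift from reality.

```python
# Hypothetical example: the annotation claims n is an int, yet the
# default is None -- self-contradictory as written.
def read1(n: int = None):    # a consistent spelling would be Optional[int]
    return n

# Annotations are not enforced at runtime, so this "works" anyway:
assert read1() is None
assert read1(4) == 4
assert read1.__annotations__["n"] is int
```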