This article shares techniques for writing efficient and elegant Python code; readers who need them can refer to the tips below.
The article is partly distilled from the books "Effective Python" and "Python3 Cookbook", with revisions and additions drawn from the author's own understanding and best practices in applying them.
The full text is about 9,956 words and takes roughly 24 minutes to read.
Pythonic List Slicing
list[start:end:step]
When slicing from the beginning of the list, omit the starting index 0, e.g. list[:4]
When slicing through to the end of the list, omit the ending index, e.g. list[3:]
Slicing never raises an error, even when the start or end index is out of bounds
Slicing does not modify the original list; leaving both indexes blank (list[:]) produces a copy of it
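A short sketch of these rules (the list contents are illustrative):

```python
letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g']

# Omit the start index when slicing from the beginning
first_four = letters[:4]        # ['a', 'b', 'c', 'd']

# Omit the end index when slicing through to the end
from_fourth = letters[3:]       # ['d', 'e', 'f', 'g']

# Out-of-bounds indexes are fine inside a slice
oversized = letters[:20]        # just the whole list, no error

# Leaving both indexes blank produces a copy
copy = letters[:]
copy[0] = 'z'                   # the original list is untouched
```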
List Comprehensions
Use a list comprehension instead of map and filter
Avoid list comprehensions that contain more than two expressions (loops or conditions); they become hard to read
A list comprehension may consume a lot of memory when the data set is large; prefer a generator expression in that case
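A minimal comparison of the three forms (the numbers are illustrative):

```python
numbers = [1, 2, 3, 4, 5]

# map + filter: correct, but harder to read
squares_of_evens = list(map(lambda x: x ** 2,
                            filter(lambda x: x % 2 == 0, numbers)))

# The equivalent list comprehension is clearer
squares_comp = [x ** 2 for x in numbers if x % 2 == 0]

# A generator expression computes the same values lazily,
# so it does not materialise the whole result in memory
squares_gen = (x ** 2 for x in numbers if x % 2 == 0)
```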
Iteration
Use enumerate when you need the index during iteration
enumerate accepts a second argument: the value at which the iteration index starts
zip iterates over two iterators in parallel
zip yields one tuple at a time (lazily, in Python 3)
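For example (the names and scores are illustrative):

```python
names = ['Alice', 'Bob', 'Carol']
scores = [90, 85, 88]

# enumerate with a custom starting index (here: 1 instead of 0)
indexed = [(i, name) for i, name in enumerate(names, 1)]

# zip walks both iterators in parallel, yielding one tuple at a time
paired = list(zip(names, scores))
```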
On the else block after a for or while loop:
the code inside else runs when the loop finishes normally
else is skipped when the loop is exited with break
else executes immediately when the sequence to traverse is empty
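The three rules above can be sketched in one small function (the names are made up for illustration):

```python
def find(needle, haystack):
    """Show when a for loop's else clause runs."""
    for item in haystack:
        if item == needle:
            break            # breaking out of the loop skips the else block
    else:
        return 'not found'   # runs on normal completion, including an empty haystack
    return 'found'
```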
Reverse Iteration
For ordinary sequences (such as lists), we can iterate in reverse with the built-in reversed() function:
In addition, you can make a class reverse-iterable by implementing the __reversed__ method in it:
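A sketch of both cases; the Countdown class is a made-up example:

```python
class Countdown:
    """Iterates n..1 forward, and 1..n when passed to reversed()."""
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return iter(range(self.n, 0, -1))

    def __reversed__(self):
        return iter(range(1, self.n + 1))

forward = list(reversed([1, 2, 3]))      # built-in sequences work out of the box
backward = list(reversed(Countdown(3)))  # custom class via __reversed__
```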
try/except/else/finally
If no exception occurs inside try, the code inside else is run
else runs before finally
finally is always executed in the end; cleanup work belongs there
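A small sketch that records the order in which the clauses run (the function is illustrative):

```python
def divide(a, b):
    """Divide a by b, recording which clauses execute."""
    events = []
    try:
        result = a / b
    except ZeroDivisionError:
        events.append('except')
        result = None
    else:
        events.append('else')     # only runs when no exception occurred
    finally:
        events.append('finally')  # always runs; do cleanup here
    return result, events
```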
Function Decorators
Decorators are used to modify the behavior of an existing function without changing its code. Common scenarios are adding debug logging or monitoring to an existing function.
Here's an example:
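The original example was not preserved; here is a minimal sketch of a logging decorator (the function names are made up for illustration):

```python
def log_calls(func):
    """Record every call to the decorated function."""
    calls = []
    def wrapper(*args, **kwargs):
        calls.append((func.__name__, args))   # the "monitoring" step
        return func(*args, **kwargs)          # then run the original function
    wrapper.calls = calls
    return wrapper

@log_calls
def add(a, b):
    return a + b
```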
In addition, you can write a decorator that itself accepts arguments; it is simply the original decorator nested inside one more function:
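A sketch of that extra nesting level (the repeat decorator is an illustrative example):

```python
def repeat(times):
    """Decorator factory: the outer function carries the argument."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            result = None
            for _ in range(times):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return decorator

counter = {'n': 0}

@repeat(3)
def bump():
    counter['n'] += 1
    return counter['n']
```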
But decorators like the one above have a problem:
This means the original function has been replaced by the decorator's inner function new_fun. Calling the decorated function is really calling that new function, so when you inspect the original function's parameters, docstring, or even its name, you only see information about the wrapper. To solve this problem, we can use Python's built-in functools.wraps.
functools.wraps is a neat trick: it is itself a decorator, applied, with the original function as its argument, to the wrapper function returned inside your decorator. In other words, it is a decorator for decorators. Its job is to copy the original function's metadata (name, docstring, signature) onto the wrapper, so that the decorated function still reports the same information as the original.
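A minimal sketch showing the metadata being preserved (the traced decorator is illustrative):

```python
import functools

def traced(func):
    @functools.wraps(func)        # copy name, docstring, etc. onto the wrapper
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@traced
def greet(name):
    """Return a greeting."""
    return 'hello ' + name
```

Without @functools.wraps, greet.__name__ would report 'wrapper' and the docstring would be lost.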
Sometimes a decorator does more than one thing, and those extra steps deserve to be split out into separate functions. When they are only relevant to the decorator itself, you can build a decorator class instead. The principle is simple: implement the __call__ method in the class, so that instances of the class can be called like functions.
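A sketch of a class-based decorator (the CountCalls example is illustrative):

```python
class CountCalls:
    """Decorator implemented as a class; __call__ makes instances callable."""
    def __init__(self, func):
        self.func = func
        self.count = 0          # extra state lives on the instance

    def __call__(self, *args, **kwargs):
        self.count += 1
        return self.func(*args, **kwargs)

@CountCalls
def square(x):
    return x * x
```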
Using Generators
Consider rewriting functions that directly return a list as generators
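The original list-returning example was not preserved; a common one of this shape (assumed here for illustration) finds the index of every space in a string:

```python
def find_spaces(text):
    """Return the index of every space character, collected in a list."""
    result = []
    for index, letter in enumerate(text):
        if letter == ' ':
            result.append(index)
    return result
```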
There are several minor problems with this approach:
Each time a matching result is found, the append method is called; but append is not the point at all, only a means to an end. All we actually need is each index
Building up and returning the result list is extra noise that could be optimized away
All the data sits in result; when the data set is large, this consumes a lot of memory
Therefore, it is better to use a generator. A generator is a function that contains yield expressions. Calling it does not actually execute the body; instead it returns an iterator, and each call to the built-in next() on that iterator advances the generator to its next yield expression:
Once you have the generator, you can traverse it normally:
If you still need a list, pass the generator returned by the function to list():
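A generator version of the same idea (the space-finding example is assumed for illustration), plus normal traversal and conversion to a list:

```python
def iter_spaces(text):
    """Yield the index of each space; nothing runs until next() is called."""
    for index, letter in enumerate(text):
        if letter == ' ':
            yield index

gen = iter_spaces('a b c')
first = next(gen)                     # advances the generator to its first yield

indexes = []
for i in iter_spaces('a b c'):        # a generator can be traversed normally
    indexes.append(i)

as_list = list(iter_spaces('a b c')) # or materialised with list() when needed
```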
Iterable Objects
It is important to note that an ordinary iterator can only be traversed once; iterating it again afterwards yields nothing. The way around this is to define an iterable container class:
In this case, repeatedly iterating over an instance of the class any number of times is no problem:
Note, however, that only objects implementing the __iter__ method can be traversed with a for loop; under the hood, for obtains an iterator with iter() and then advances it with next():
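A sketch of such a container class (continuing the illustrative space-finding example); __iter__ returns a fresh generator on every call, so repeated iteration works:

```python
class Spaces:
    """Iterable container: each iteration gets a brand-new generator."""
    def __init__(self, text):
        self.text = text

    def __iter__(self):
        for index, letter in enumerate(self.text):
            if letter == ' ':
                yield index

spaces = Spaces('a b c')
first_pass = list(spaces)
second_pass = list(spaces)   # a bare generator would already be exhausted here
```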
Using Variable-Length Positional Arguments
Sometimes the number of arguments a function receives is not fixed; for example, a summation function that accepts at least two arguments:
For functions whose argument count is not fixed and where the order of the extra arguments does not matter, use the variable-length positional parameter *args:
Note, however, that the variable-length arguments are packed into the tuple args before being passed to the function. This means that if you pass a generator to such a function, the generator is exhausted and converted into a tuple first, which can consume a lot of memory:
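A sketch of both points (the add_all function is illustrative):

```python
def add_all(first, second, *rest):
    """Sum at least two numbers; extra arguments arrive as the tuple rest."""
    total = first + second
    for value in rest:
        total += value
    return total

# Unpacking a generator into *args consumes it up front,
# turning the whole stream into a tuple in memory:
total = add_all(*(x for x in range(1, 5)))
```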
Using Keyword Arguments
Keyword arguments improve code readability
Keyword arguments let you give a function parameter a default value
They make it easy to extend a function's parameters later
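For example (the parameter names are illustrative):

```python
def connect(host, port=5432, timeout=30):
    """Keyword arguments document intent and supply defaults."""
    return {'host': host, 'port': port, 'timeout': timeout}

# Readable at the call site; parameters not mentioned keep their defaults,
# and new defaulted parameters can be added later without breaking callers
conn = connect('db.local', timeout=10)
```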
Defining a function with keyword-only arguments
Defined the normal way, the use of keyword arguments cannot be enforced at call time
How to force keyword-only arguments in Python 3
How to force keyword-only arguments in Python 2
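A sketch of both approaches, using an illustrative division helper. The Python 2 style (popping expected keys out of **kwargs) also runs under Python 3:

```python
# Python 3: parameters after the bare * can only be passed by keyword
def safe_div(number, divisor, *, ignore_zero=False):
    try:
        return number / divisor
    except ZeroDivisionError:
        if ignore_zero:
            return 0
        raise

# Python 2 style: accept **kwargs, pop the expected keys, reject the rest
def safe_div2(number, divisor, **kwargs):
    ignore_zero = kwargs.pop('ignore_zero', False)
    if kwargs:
        raise TypeError('unexpected kwargs: %r' % kwargs)
    try:
        return number / divisor
    except ZeroDivisionError:
        if ignore_zero:
            return 0
        raise
```

Calling safe_div(1, 0, True) now raises a TypeError: the flag must be spelled out as ignore_zero=True.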
About default values for parameters
It is a cliché by now: a function's default values are evaluated only once, when the module is loaded and the function definition is read.
In other words, if a parameter's default is a mutable value such as [] or {}, and the function modifies it when called, then the "default" seen by subsequent calls is whatever value the previous call left behind:
Therefore, it is recommended to use None as the default and assign the real value inside the function after a check:
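A sketch of the pitfall and the recommended fix (the function names are illustrative):

```python
def append_bad(value, seq=[]):
    """The single list created at definition time is shared across calls."""
    seq.append(value)
    return seq

def append_good(value, seq=None):
    """Use None as the default and create a fresh list inside the function."""
    if seq is None:
        seq = []
    seq.append(value)
    return seq

shared = append_bad(1)
shared_again = append_bad(2)    # same list object as the previous call!
fresh = append_good(1)
fresh_again = append_good(2)    # a brand-new list each time
```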
Class__slots__
By default, Python uses a dictionary to store an object's instance attributes. This lets us dynamically add new attributes to an instance of the class at run time:
However, this dictionary wastes space; much of the time we do not create that many attributes. With __slots__ we can tell Python not to use a dictionary, and instead allocate space for a fixed set of attributes.
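A sketch (the Point class is illustrative):

```python
class Point:
    __slots__ = ('x', 'y')    # fixed attribute slots, no per-instance __dict__

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
try:
    p.z = 3                   # dynamic attributes are now rejected
    rejected = False
except AttributeError:
    rejected = True
```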
__call__
By defining a __call__ method in a class, you can make instances of that class callable like ordinary functions.
The benefit of this approach is that state can be kept in the instance's attributes, with no need for a closure or a global variable.
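For example (the Accumulator class is illustrative):

```python
class Accumulator:
    """Callable instance that keeps its state in an attribute, not a closure."""
    def __init__(self):
        self.total = 0

    def __call__(self, amount):
        self.total += amount
        return self.total

acc = Accumulator()   # the instance can now be called like a function
```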
@classmethod & @staticmethod
@classmethod and @staticmethod look very similar, but their usage scenarios differ.
An ordinary method in a class receives the instance as its first parameter, self, meaning it is called through an instance;
a @classmethod receives cls, the class itself, as its first parameter. Whether it is invoked through the class or through an instance, the first argument passed in by default is the class itself;
a @staticmethod receives no implicit first parameter and behaves much like an ordinary function.
To understand their usage scenarios through an example:
Suppose we need to create a class named Date that stores three pieces of data: year, month, and day.
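The original code block was not preserved; here is a minimal reconstruction consistent with the description (the getters are assumed to use @property):

```python
class Date:
    def __init__(self, year, month, day):
        self._year = year
        self._month = month
        self._day = day

    @property
    def year(self):
        return self._year

    @property
    def month(self):
        return self._month

    @property
    def day(self):
        return self._day

d = Date(2016, 11, 9)   # values can be read back after instantiation
```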
The preceding code creates the Date class, which sets the year/month/day attributes at initialization time and, via property, defines getters so that the stored values can be read after instantiation.
But what if we want to change how the attributes are passed in? After all, passing year/month/day separately at every initialization is annoying. Can we find a way to create a Date instance from a string such as 2016-11-09, without changing the existing interfaces and methods?
You might come up with a method like this:
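A sketch of that first attempt: a helper outside the class (the helper name is made up; a minimal Date stand-in is included so the sketch runs on its own):

```python
class Date:
    # minimal stand-in so this sketch is self-contained
    def __init__(self, year, month, day):
        self.year, self.month, self.day = year, month, day

def date_from_string(date_string):
    """Module-level helper: parse '2016-11-09' into a Date."""
    year, month, day = map(int, date_string.split('-'))
    return Date(year, month, day)

d = date_from_string('2016-11-09')
```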
But it is not good enough:
It requires writing an extra function outside the class, and the string must be parsed for the parameters on every call
The method is only relevant to the Date class, yet lives outside it
The problem of passing in too many parameters is not solved
Now you can take advantage of @classmethod: create a method inside the class that parses the string and returns an instance of the class:
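A sketch of the classmethod approach (a minimal Date is included so the block runs on its own; the from_string name follows the text):

```python
class Date:
    def __init__(self, year, month, day):
        self.year, self.month, self.day = year, month, day

    @classmethod
    def from_string(cls, date_string):
        """Parse '2016-11-09' and return an instance via cls."""
        year, month, day = map(int, date_string.split('-'))
        return cls(year, month, day)

d = Date.from_string('2016-11-09')   # the old initializer still works too
```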
This way, we can create a Date instance by calling the from_string method on the class, without invading or modifying the old initialization code:
Benefits:
Inside a @classmethod, the cls parameter gives the same convenience as referring to the class from outside
Further logic can be encapsulated in the method, improving reusability
It is more in line with object-oriented programming
As for @staticmethod: since it is similar to an ordinary function, you can move helper functions related to the class into the class as @staticmethod methods, and then call them directly through the class.
After placing the date-related helper functions inside the Date class as @staticmethod methods, you can call them through the class:
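A sketch of one such helper (the validator is a made-up example; a minimal Date is included so the block runs on its own):

```python
class Date:
    def __init__(self, year, month, day):
        self.year, self.month, self.day = year, month, day

    @staticmethod
    def is_valid_string(date_string):
        """Date-related helper: no self or cls needed."""
        parts = date_string.split('-')
        return len(parts) == 3 and all(p.isdigit() for p in parts)

ok = Date.is_valid_string('2016-11-09')   # called directly through the class
bad = Date.is_valid_string('hello')
```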
Creating a Context Manager
A context manager, informally, does preparation work before a code block executes and finishing work after the block completes. The with statement usually appears together with a context manager; the classic scenario is file handling:
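The classic scenario (a throwaway file is created first so the sketch is runnable):

```python
import os
import tempfile

# set up a throwaway file for the demonstration
path = os.path.join(tempfile.mkdtemp(), 'demo.txt')
with open(path, 'w') as f:
    f.write('hello\n')

# the with statement closes the file automatically,
# whether the block finishes normally or raises
with open(path) as file_read:
    lines = file_read.readlines()

closed = file_read.closed
```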
Through the with statement, the code opens the file and closes it automatically when the block ends or an exception occurs during reading; that is, the cleanup after reading and writing is handled for us. Without a context manager, the code looks like this:
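The manual equivalent, with the cleanup spelled out by hand (again using a throwaway file so the sketch runs):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.txt')
with open(path, 'w') as f:
    f.write('hello\n')

# without a context manager, we must guarantee the close ourselves
file_read = open(path)
try:
    lines = file_read.readlines()
finally:
    file_read.close()
```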
Isn't that more tedious? The advantage of the context manager is that, by invoking callbacks we set up in advance, the setup and teardown around the code block are handled automatically. By defining __enter__ and __exit__ methods on a custom class, we can build our own context manager.
It can then be used like this:
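The ReadFile class itself was not preserved; here is a reconstruction consistent with the steps described below (a throwaway file is created so the sketch runs on its own):

```python
import os
import tempfile

# throwaway file for the demonstration
path = os.path.join(tempfile.mkdtemp(), 'demo.txt')
with open(path, 'w') as f:
    f.write('line1\nline2\n')

class ReadFile:
    """Custom context manager reconstructed from the description."""
    def __init__(self, filename):
        self.filename = filename

    def __enter__(self):
        self.file = open(self.filename)
        return self.file            # bound to file_read by the with statement

    def __exit__(self, exc_type, exc_value, traceback):
        self.file.close()
        return True                 # swallow any exception from the block

lines = []
with ReadFile(path) as file_read:
    for line in file_read:
        lines.append(line.strip())
```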
At the time of the call:
the with statement first stashes the __exit__ method of the ReadFile class
then calls the __enter__ method of the ReadFile class
__enter__ opens the file and returns the result to the with statement
the result of the previous step is bound to the file_read variable
inside the with block, file_read is used to read each line
after reading completes, the with statement calls the previously stashed __exit__ method
__exit__ closes the file
Note that in the __exit__ method we close the file but finally return True, so any error raised in the block is not propagated by the with statement. Were it to return a falsy value instead, the with statement would re-raise the corresponding error.