Objective
Python is a very readable and versatile programming language. The name Python was inspired by the British comedy group Monty Python, and one of the language's core design goals has always been to make it fun to use. Python is easy to set up, reads in a relatively straightforward style, and gives instant feedback on errors, all of which make it a good choice for beginners.
Python is a multi-paradigm language: it supports a variety of programming styles, including scripting and object-oriented programming, which makes it suitable for general-purpose work. As it is adopted by more and more organizations, such as the United Space Alliance (NASA's main shuttle support contractor) and Industrial Light & Magic (the VFX studio behind the Lucasfilm productions), Python offers great potential for anyone looking to add another programming language.
When an important Python newsgroup, comp.lang.python, was formed in 1994, Python's user base was already growing, paving the way for it to become one of the most popular programming languages in open source development.
Python is extremely popular right now, and hands-on projects are one of the best ways to learn it. Below we introduce a set of Python practice projects you can work through yourself.
At the end of the article we have also compiled a full set of Python materials and tutorials that readers who are learning Python can download.
Python Project Exercise One: Instant Markup
This is the practice project that follows the basic Python tutorial. It is written to get you familiar with Python code and to practise both basic and less basic syntax until it feels natural.
The project starts out simple, but after refactoring it becomes a bit more complicated, and also more flexible.
Following the book, the refactored program is divided into four modules: the handler module, the filter module, the rules module (which really holds the processing rules), and the parser.
First, the handler module. It has two jobs: one is to output fixed HTML tags (each tag has a start and an end), and the other is to provide a friendly interface for emitting those start and end markers. Take a look at handlers.py:
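The original listing is not reproduced here; the following is a minimal sketch of the idea, with illustrative method names, assuming a callback-style handler where a generic start()/end() pair dispatches to start_&lt;type&gt;/end_&lt;type&gt; methods:

```python
# Minimal sketch of a handler, not the book's exact code.
# A generic start()/end() looks up a method named start_<name>/end_<name>
# and calls it, so callers only need to know the block type.
class HTMLRenderer:
    def start(self, name):
        method = getattr(self, 'start_' + name, None)
        if callable(method):
            method()

    def end(self, name):
        method = getattr(self, 'end_' + name, None)
        if callable(method):
            method()

    def feed(self, data):
        print(data)

    # Concrete tag pairs:
    def start_paragraph(self):
        print('<p>')

    def end_paragraph(self):
        print('</p>')

    def start_heading(self):
        print('<h2>')

    def end_heading(self):
        print('</h2>')

    def sub_emphasis(self, match):
        # Used with re.sub: wrap the captured text in <em> tags.
        return '<em>%s</em>' % match.group(1)
```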
This program is the cornerstone of the whole "project": it produces the tag output and the string substitutions. It is fairly easy to understand.
Then look at the second module, the filters. This module is even simpler; each filter is essentially just a named regular expression. The relevant code is as follows:
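(The original listing is not included here; as a rough, hypothetical illustration, the three patterns could look something like the ones below; the book's exact expressions may differ.)

```python
# Hypothetical versions of the three filters; the book's patterns may differ.
import re

emphasis_pattern = re.compile(r'\*(.+?)\*')                              # *emphasised text*
url_pattern = re.compile(r'(http://[\.a-zA-Z/0-9]+)')                    # simple http URLs
mail_pattern = re.compile(r'([\.a-zA-Z0-9]+@[\.a-zA-Z0-9]+[a-zA-Z]+)')   # e-mail addresses
```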
These are the three filters: the emphasis filter (for text wrapped in emphasis markers such as asterisks), the URL filter, and the e-mail filter. Anyone familiar with regular expressions will have no trouble understanding them.
Now look at the third module, the rules. Setting the base class aside, every other class has two methods: condition and action. The former decides whether the string that has been read in matches the rule; the latter performs the operation, namely asking the handler module to output the opening tag, the content, and the closing tag. When reading the code for this module, the relationships between the classes become much clearer if you sketch them as a class diagram. rules.py:
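A minimal sketch of what such rule classes might look like (the class names and conditions here are illustrative, not necessarily the book's; the handler is assumed to be the sketch above):

```python
# Illustrative rule classes: each rule knows how to recognise a block
# (condition) and how to ask the handler to mark it up (action).
class Rule:
    def action(self, block, handler):
        handler.start(self.type)
        handler.feed(block)
        handler.end(self.type)
        return True


class HeadingRule(Rule):
    """A heading: a single line of at most 70 characters not ending in a colon."""
    type = 'heading'

    def condition(self, block):
        return '\n' not in block and len(block) <= 70 and not block.endswith(':')


class ParagraphRule(Rule):
    """A paragraph: any block that no other rule has claimed."""
    type = 'paragraph'

    def condition(self, block):
        return True
```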
And the supporting utils.py:
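The utilities are typically just a couple of generators that split the input into blocks separated by blank lines; a sketch along those lines:

```python
# Sketch of the usual helpers: generators that group a text file into blocks.
def lines(file):
    """Yield all lines, plus a trailing blank line to flush the last block."""
    for line in file:
        yield line
    yield '\n'


def blocks(file):
    """Group consecutive non-blank lines into blocks."""
    block = []
    for line in lines(file):
        if line.strip():
            block.append(line)
        elif block:
            yield ''.join(block).strip()
            block = []
```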
Finally, look at the parser module. Its job is to coordinate reading the text with the other modules. The key point is that it keeps two lists, one of rules and one of filters. The advantage of this is that the flexibility of the whole program is greatly improved: rules and filters become hot-swappable. Of course, this is only possible because of the earlier design, where each kind of rule (and filter) is written as its own class instead of being distinguished with if/else branches. Look at the code:
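A sketch of the parser, again with illustrative names rather than the book's exact code (it assumes the utils.py sketch above is saved as utils.py):

```python
# Illustrative parser: it holds pluggable lists of rules and filters
# and walks the input block by block.
import re
from utils import blocks   # assumes the blocks() generator sketched above


class Parser:
    def __init__(self, handler):
        self.handler = handler
        self.rules = []
        self.filters = []

    def add_rule(self, rule):
        self.rules.append(rule)

    def add_filter(self, pattern, name):
        def filter(block, handler):
            # Delegate the substitution to the handler's sub_<name> method.
            return re.sub(pattern, getattr(handler, 'sub_' + name), block)
        self.filters.append(filter)

    def parse(self, file):
        for block in blocks(file):
            for filter in self.filters:
                block = filter(block, self.handler)
            for rule in self.rules:
                # condition() decides whether this rule applies to the block.
                if rule.condition(block):
                    rule.action(block, self.handler)
                    break
```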
The idea in this module is that the client (that is, the entry point of the program) walks over all the rules and filters that have been plugged in and uses them to process the text that is read in.
One detail worth pointing out, which echoes what was written earlier: while traversing the rules, condition is called to decide whether the current rule applies.
I think this program looks a lot like the command pattern; when I have time I will review that pattern to keep it fresh in my mind.
Finally, here is what I think this program could be used for:
1. Code highlighting: rewritten as a JavaScript version, it could power an online code editor.
2. As a learning exercise, and for my own use when writing blog posts.
If you have other ideas, feel free to leave your thoughts.
Adding a class diagram is simple, but it should be enough to illustrate the relationships between the classes. If you are reading the code to work out those relationships, I suggest you draw the diagram yourself to get familiar with the whole structure.
Python Project Exercise Two: Painting a Pretty Picture
This is the second project in the basic Python tutorial, about generating PDFs from Python.
The knowledge points involved:
1. The use of urllib
2. The use of the ReportLab library
This is a very simple example, but along the way I discovered that in Python you can write a for loop directly inside the list brackets [] (a list comprehension), which is quite convenient.
Here's the code:
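The book's listing is not reproduced here; below is a rough sketch of the same idea, fetching numeric data with urllib and drawing it into a PDF with ReportLab. The URL and the assumed column layout are placeholders, not the book's actual data source.

```python
# Rough sketch: fetch whitespace-separated numeric data and plot it to a PDF.
# The URL is a placeholder; the book uses a real data feed.
from urllib.request import urlopen

from reportlab.graphics.shapes import Drawing
from reportlab.graphics.charts.lineplots import LinePlot
from reportlab.graphics import renderPDF

URL = 'http://example.com/data.txt'   # placeholder data source

# A list comprehension with a condition: the "for loop inside []" idea.
rows = [line.split() for line in urlopen(URL).read().decode().splitlines()
        if line and not line.startswith('#')]

# Assume the first two columns are the x and y values.
data = [(float(row[0]), float(row[1])) for row in rows]

drawing = Drawing(400, 200)
plot = LinePlot()
plot.x, plot.y = 50, 30
plot.width, plot.height = 300, 150
plot.data = [data]
drawing.add(plot)

renderPDF.drawToFile(drawing, 'plot.pdf', 'A simple line plot')
```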
Python Project Exercise Three: The Almighty XML
The name of this project, "the almighty XML", would really be better described as "auto-building a site": given an XML file, it generates the corresponding directory structure of a website. Producing only HTML is admittedly a bit simple; being able to generate CSS as well would be more powerful, but that is for follow-up study, starting with how HTML sites are structured. Since the website is built from an XML description, everything starts from that XML file. First look at the file, Website.xml:
With this file in hand, here is how to build the website from it.
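The file itself is not reproduced here; based on the description further down (only page and directory nodes), a hypothetical Website.xml might look roughly like this:

```python
# A hypothetical Website.xml, based on the description below: the file
# only uses two kinds of nodes, directory and page.
SAMPLE_WEBSITE_XML = """\
<website>
  <page name="index" title="Home Page">
    <h1>Welcome to my site</h1>
  </page>
  <directory name="interests">
    <page name="python" title="Python">
      <p>Notes about Python.</p>
    </page>
  </directory>
</website>
"""
```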
First we need to parse the XML file. As in Java, there are two ways to parse XML in Python: SAX and DOM. The two differ in speed and in scope. SAX is about efficiency: it handles only a small part of the document at a time, so it is fast and uses memory effectively. DOM is the opposite: it first loads the whole document into memory and then processes it, which is slower and consumes more memory; its one advantage is that the entire document can be manipulated at once.
To process XML with SAX in Python, first import the parse function from xml.sax and ContentHandler from xml.sax.handler; the latter is used together with the parse function, like this: parse('xxx.xml', XxxHandler), where XxxHandler inherits from ContentHandler (simply inheriting is enough; nothing special has to be overridden up front). The parse function then works through the XML file, calling the startElement and endElement methods of XxxHandler at the start and end of each tag, and a method called characters to handle the text inside the tags.
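A minimal sketch of that pattern (the handler and file names are just placeholders):

```python
# Minimal SAX example: parse(filename, handler) with a handler that
# overrides startElement, endElement and characters.
from xml.sax import parse
from xml.sax.handler import ContentHandler


class EchoHandler(ContentHandler):
    def startElement(self, name, attrs):
        print('start of <%s>, attributes: %s' % (name, list(attrs.keys())))

    def endElement(self, name):
        print('end of <%s>' % name)

    def characters(self, content):
        if content.strip():
            print('text:', content.strip())


parse('Website.xml', EchoHandler())
```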
With that, we already know how to handle the XML file. Looking back at the root of it all, Website.xml, and analysing its structure, there are only two kinds of node: page and directory. Obviously page represents a page and directory represents a directory.
So the approach becomes clearer: read each node of the XML file and check whether it is a page or a directory. If it is a page, create an HTML file and write the node's content into it. If it is a directory, create a folder and then process the page nodes inside it (if any exist).
The code follows; the book's implementation is more complex, and also more flexible. Read it first, then we will analyse it.
The program above may look a little complex to analyse, but as a great man once said, all complex things are paper tigers. Let's break the program down.
First, note that the program has two classes, which can really be thought of as one because of the inheritance between them.
Next, look at what else it contains. Besides the startElement, endElement and characters methods we already analysed, there are startPage/endPage, startDirectory/endDirectory, defaultStart/defaultEnd, ensureDirectory, writeHeader/writeFooter, and dispatch. Apart from dispatch, these functions are easy to understand: each pair simply handles the corresponding HTML tags or XML nodes. The tricky one, dispatch, is used to find the right handler function dynamically and call it.
dispatch works by checking, based on the arguments passed in (the kind of event and the node name), whether a corresponding method such as startPage exists, and calling it if so; if it does not exist, it calls default plus the event name, such as defaultStart.
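A hedged sketch of that dispatch idea using getattr (the method names follow the description above; this is not necessarily the book's exact code):

```python
# Sketch of the dispatch mechanism: build the method name from the event
# prefix ('start'/'end') and the capitalised node name, falling back to
# defaultStart/defaultEnd if no specific method exists.
class Dispatcher:
    def dispatch(self, prefix, name, attrs=None):
        mname = prefix + name.capitalize()       # e.g. 'start' + 'page' -> 'startPage'
        dname = 'default' + prefix.capitalize()  # e.g. 'defaultStart'
        method = getattr(self, mname, None)
        if callable(method):
            args = ()
        else:
            method = getattr(self, dname, None)
            args = (name,)
        if prefix == 'start':
            args += (attrs,)
        if callable(method):
            method(*args)

    # SAX events are routed through dispatch:
    def startElement(self, name, attrs):
        self.dispatch('start', name, attrs)

    def endElement(self, name):
        self.dispatch('end', name)
```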
Once you go through it function by function, the whole flow becomes clear. First, a public_html directory is created to hold the entire website; then, as the XML nodes are read, startElement and endElement call dispatch, and dispatch in turn invokes the specific handler functions. With that, the project is done.
The main things to take away are how Python uses SAX to process XML, and some function-related features of Python, such as getattr and unpacking arguments with the asterisk when calling functions.
Python Project Exercise Four: News Aggregation
The fourth exercise in the book is news gathering. It relies on a kind of application I have rarely used: Usenet. The main function of the program is to collect information from a specified source (here, a Usenet newsgroup) and save it to a specified destination (two forms are used: plain text and an HTML file). Its purpose is somewhat similar to today's blog subscription tools or RSS feeds.
First, the code, and then we will analyse it piece by piece:
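The book's listing is not reproduced here; the following skeleton uses the class names described below to sketch the overall structure. The web-source URL handling and patterns are simplified placeholders, and the NNTPSource and HTMLDestination classes are omitted for brevity.

```python
# Skeleton of the news gatherer: sources produce news items, destinations
# receive them, and NewsAgent wires the two together.
import re
from urllib.request import urlopen


class NewsItem:
    def __init__(self, title, body):
        self.title = title
        self.body = body


class SimpleWebSource:
    """Scrapes title/body pairs from a web page with regular expressions."""
    def __init__(self, url, title_pattern, body_pattern):
        self.url = url
        self.title_pattern = re.compile(title_pattern)
        self.body_pattern = re.compile(body_pattern)

    def get_items(self):
        text = urlopen(self.url).read().decode('utf-8', 'ignore')
        titles = self.title_pattern.findall(text)
        bodies = self.body_pattern.findall(text)
        for title, body in zip(titles, bodies):
            yield NewsItem(title, body)


class PlainDestination:
    """Writes the gathered items to the terminal as plain text."""
    def receive_items(self, items):
        for item in items:
            print(item.title)
            print('-' * len(item.title))
            print(item.body, '\n')


class NewsAgent:
    """Holds the sources and destinations and moves items between them."""
    def __init__(self):
        self.sources = []
        self.destinations = []

    def add_source(self, source):
        self.sources.append(source)

    def add_destination(self, destination):
        self.destinations.append(destination)

    def distribute(self):
        items = [item for source in self.sources for item in source.get_items()]
        for destination in self.destinations:
            destination.receive_items(items)
```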
Analysing the program as a whole, the centrepiece is NewsAgent. Its role is to hold the news sources and the target destinations, and then call the source classes (NNTPSource and SimpleWebSource) and the classes that write the news out (PlainDestination and HTMLDestination) in turn. From this you can see that NNTPSource is used specifically to fetch information from a news server, while SimpleWebSource fetches data from a URL. The roles of PlainDestination and HTMLDestination are obvious: the former prints the gathered content to the terminal, the latter writes the data into an HTML file.
With that analysis done, the main program is easy to read: it simply adds the information sources and the output destinations to the NewsAgent.
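As a usage sketch under the same assumptions as the skeleton above (the URL and patterns are placeholders):

```python
# Hypothetical wiring, mirroring the description above: add sources and
# destinations to the agent, then let it distribute the news.
agent = NewsAgent()
agent.add_source(SimpleWebSource('http://example.com/news',
                                 r'<h2>(.+?)</h2>', r'<p>(.+?)</p>'))
agent.add_destination(PlainDestination())
agent.distribute()
```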
It really is a simple program, but it is nicely layered.
These four hands-on Python projects should help you get to grips with Python in no time!