How to use Python for stable and reliable file operations

Programs need to update files. Although most programmers know that unexpected things can happen during I/O, I often see surprisingly naïve code. In this article, I want to share some insights into how to improve I/O reliability in Python code.

Consider the following Python snippet. It does something with the data in a file and then saves the result back to the file:

with open(filename) as f:
    input = f.read()
output = do_something(input)
with open(filename, 'w') as f:
    f.write(output)

Looks simple, doesn't it? But it is not as simple as it appears at first glance. When debugging applications on production servers, I often see strange behavior.

Here are examples of failure modes I have seen:

    • A runaway server process spews large amounts of logs and the disk fills up. write() raises an exception after the file has been truncated, and the file ends up empty.


    • Several instances of the application execute in parallel. When they finish, the file contains gibberish because the output of the instances was interleaved.


    • After a write operation completes, the application triggers some follow-up action. A few seconds later the power fails. After we restart the server, we see the old file contents again: the data already passed on to other applications no longer matches what we see in the file.

Nothing in what follows is new. My purpose in this article is to present common methods and techniques for Python developers who lack experience in systems programming, and to provide code examples so that developers can easily apply these methods to their own code.

What does "reliability" mean?

Broadly speaking, reliability means that an operation performs its required function under all stated conditions. For file operations, that function is the matter of creating, replacing, or appending to the contents of a file. Here we can take inspiration from database theory: the ACID properties of the classic transactional model serve as a guide for improving reliability.

Before we begin, let's see how our example relates to the four ACID properties:

    • Atomicity requires that a transaction either succeeds completely or fails completely. In the example above, a full disk may cause only part of the content to be written to the file. In addition, if other programs read the file while it is being written, they may get a partially complete version, or even trigger write errors.


    • Consistency means that an operation must take the system from one valid state to another valid state. Consistency can be split into two parts: internal and external consistency. Internal consistency means the file's data structures are consistent. External consistency means the file's contents are consistent with the data related to it. Since we know nothing about the application here, it is hard to reason about consistency in general. But since consistency requires atomicity, we can at least say that internal consistency is not guaranteed.


    • Isolation is violated if concurrently executing identical transactions yield different results. Clearly, the code above has no protection against lost updates or other isolation failures.


    • Durability means that changes are persistent. Before we tell our users that an operation succeeded, we must be sure our data has reached reliable storage and not just a write cache. The code above assumes that disk I/O happens as soon as we call write(). The POSIX standard does not guarantee this assumption.

Use database systems whenever possible

If we could obtain all four ACID properties, we would have made long strides toward reliability. But this takes considerable coding effort. Why reinvent the wheel? Most database systems already provide ACID transactions.

Reliable data storage is a solved problem. If you need reliable storage, use a database. Chances are that, without decades of practice behind you, your own solution to this problem will not be as good as the work of those who have focused on it for years. If you do not want to install a big database server, you can use SQLite, which offers ACID transactions, is small and free, and is included in Python's standard library.
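To illustrate, here is a minimal sketch of the read-modify-write cycle from the introduction, done as an ACID transaction with the standard sqlite3 module. The database file name and the counter table are invented for this example:

import sqlite3

conn = sqlite3.connect('state.db')
conn.execute('CREATE TABLE IF NOT EXISTS counter (n INTEGER)')
with conn:  # commits on success, rolls back on any exception
    row = conn.execute('SELECT n FROM counter').fetchone()
    n = row[0] if row else 0
    conn.execute('DELETE FROM counter')
    conn.execute('INSERT INTO counter (n) VALUES (?)', (n + 1,))
conn.close()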

The article could end here, but there are well-founded reasons not to use a database. They are usually constraints on file format or file location, both of which are hard to control with a database system. The reasons include:

    • We must process files in a fixed format or at a fixed location produced by other applications,


    • We must write files for other applications to consume (with the same kinds of restrictions applied),


    • Our files must be easily read or modified by humans.

...and so on. You get the idea.

If we are implementing reliable file updates ourselves, here are some programming techniques for reference. Below I will show four common patterns for updating files. After that, I will discuss what steps to take to satisfy the ACID properties in each update pattern.

File update patterns

Files can be updated in many ways, but I see at least four common patterns. These four patterns form the basis for the rest of this article.

Truncate-Write

This is probably the most basic pattern. In the following example, assume the domain model code reads the data, performs some computation, and then re-opens the existing file in write mode:

with open(filename, 'r') as f:
    model.read(f)
model.process()
with open(filename, 'w') as f:
    model.write(f)

A variant of this pattern opens the file in read-write mode ('plus' mode in Python), seeks to the start, explicitly calls truncate(), and rewrites the file contents:

with open(filename, 'a+') as f:
    f.seek(0)
    model.input(f.read())
    model.compute()
    f.seek(0)
    f.truncate()
    f.write(model.output())

The advantage of this variant is that the file is opened only once and kept open throughout. This simplifies locking, for example.

Write-replace

Another widely used pattern is to write new content to a temporary file, and then replace the original file:

with tempfile.NamedTemporaryFile(
        'w', dir=os.path.dirname(filename), delete=False) as tf:
    tf.write(model.output())
    tempname = tf.name
os.rename(tempname, filename)

This method is more robust against errors than truncate-write. See below for the discussion of atomicity and consistency. Many applications use this method.

These two patterns are so common that the ext4 file system in the Linux kernel even detects them automatically and fixes some of their reliability shortcomings. But do not rely on this feature: you are not always on ext4, and administrators may turn it off.

Append

The third pattern is to append new data to the existing file:

with open(filename, 'a') as f:
    f.write(model.output())

This pattern is used for writing log files and other tasks that accumulate processed data. Technically, its outstanding feature is its simplicity. An interesting extension is to perform normal updates by appends only, and to reorganize the file into a more compact form periodically, as sketched below.
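Such a compaction step can itself use the write-replace pattern described above, so readers never observe a half-compacted file. The compact() helper and its live_records parameter are hypothetical, invented for this sketch:

import os
import tempfile

def compact(filename, live_records):
    # live_records is assumed to be an iterable of text lines that are
    # still needed; everything else is dropped during the rewrite.
    with tempfile.NamedTemporaryFile(
            'w', dir=os.path.dirname(filename), delete=False) as tf:
        tf.writelines(live_records)
        tempname = tf.name
    os.rename(tempname, filename)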

Spooldir

Here we make a directory the logical data store, creating a new, uniquely named file for each record:

with open(unique_filename(), 'w') as f:
    f.write(model.output())

This pattern shares the accumulating property of the append pattern. A big advantage is that we can put small amounts of metadata into the file name, which can be used, for example, to convey information about processing status. A particularly clever implementation of the spooldir pattern is the maildir format. Maildirs use a naming scheme with additional subdirectories to perform update operations reliably and without locks. The md and gocept.filestore libraries provide convenient wrappers for maildir operations.
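The example above relies on unique_filename(), which is left undefined here. A plausible sketch (my invention, not from the article) combines a timestamp with a random UUID:

import time
import uuid

def unique_filename():
    # The timestamp keeps names roughly ordered by creation time; the
    # random UUID makes collisions practically impossible, even across
    # processes and hosts.
    return '{:.6f}-{}'.format(time.time(), uuid.uuid4().hex)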

If your file name generation does not guarantee unique results, it may even be necessary to demand that the file is actually new. In that case, call the low-level os.open() with the appropriate flags:

fd = os.open(filename, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o666)
with os.fdopen(fd, 'w') as f:
    f.write(...)

After opening the file with O_EXCL, we use os.fdopen() to convert the raw file descriptor into an ordinary Python file object.

Apply ACID properties to file updates

Below, I will try to harden the file update patterns. Let's see, in turn, what we can do to satisfy the ACID properties. I will keep things as simple as possible, since we are not about to write a complete database system. Note that the material in this section is not exhaustive, but it can give you a good starting point for your own experiments.

Atomicity

The write-replace pattern gives you atomicity because the underlying os.rename() is atomic. This means that at any given point in time, a process sees either the old file or the new file. This pattern is naturally robust against write errors: if the write operation raises an exception, the rename is never performed, so there is no risk of overwriting a correct old file with a damaged new one.
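One practical detail the pattern leaves open (my addition, not from the article): if the write fails, the temporary file stays behind in the directory. A sketch that cleans it up on error, with a hypothetical write_replace() helper:

import os
import tempfile

def write_replace(filename, data):
    tf = tempfile.NamedTemporaryFile(
        'w', dir=os.path.dirname(filename), delete=False)
    try:
        with tf:
            tf.write(data)
        # Atomic switch: readers see either the old or the new file.
        os.rename(tf.name, filename)
    except Exception:
        # Remove the leftover temporary file before re-raising.
        os.unlink(tf.name)
        raise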

The append pattern is not atomic, because there is the risk of appending incomplete records. But there is a trick to make updates atomic: annotate each written record with a checksum. When reading the log later, ignore all records that do not carry a valid checksum. This way, only complete records will be processed. In the following example, an application takes periodic measurements and appends a one-line JSON record to a log each time. We compute the CRC32 checksum of the record's byte representation and append it to the same line:

import json
import random
import time
import zlib

with open(logfile, 'ab') as f:
    for i in range(3):
        measure = {'timestamp': time.time(), 'value': random.random()}
        record = json.dumps(measure).encode()
        checksum = '{:8x}'.format(zlib.crc32(record)).encode()
        f.write(record + b' ' + checksum + b'\n')

The example code simulates the measurements by creating a random value each time:

$ cat log{"timestamp": 1373396987.258189, "value": 0.9360123151217828} 9495b87a{"timestamp": 1373396987.25825, "value" : 0.40429005476999424} 149afc22{"timestamp": 1373396987.258291, "value": 0.232021160265939} d229d937

To process this log file, we read one record line at a time, split off the checksum, and compare it to the checksum of the record we read:

with open(logfile, 'rb') as f:
    for line in f:
        record, checksum = line.strip().rsplit(b' ', 1)
        if checksum.decode() == '{:8x}'.format(zlib.crc32(record)):
            print('read measure: {}'.format(json.loads(record.decode())))
        else:
            print('checksum error for record {}'.format(record))

Now we simulate the truncated write by truncating the last line:

$ cat log{"timestamp": 1373396987.258189, "value": 0.9360123151217828} 9495b87a{"timestamp": 1373396987.25825, "value" : 0.40429005476999424} 149afc22{"timestamp": 1373396987.258291, "value": 0.23202

When the log is read, the last incomplete line is rejected:

$ read_checksummed_log.py log
read measure: {'timestamp': 1373396987.258189, 'value': 0.9360123151217828}
read measure: {'timestamp': 1373396987.25825, 'value': 0.40429005476999424}
checksum error for record b'{"timestamp": 1373396987.258291, "value": '

This approach of checksumming appended records is used in a large number of applications, including many database systems.

Individual files in a spooldir can likewise carry a checksum. Another, possibly easier, approach is to borrow from the write-replace pattern: first write the file aside, then move it to its final location. Devise a naming scheme that protects files that are still being written from consumers. In the example below, all files ending in .tmp are ignored by readers and can therefore be used safely during write operations:

newfile = generate_id()
with open(newfile + '.tmp', 'w') as f:
    f.write(model.output())
os.rename(newfile + '.tmp', newfile)

Finally, truncate-write is non-atomic. I am sorry that I cannot offer a variant that satisfies atomicity. Right after the truncate operation, the file is empty and no new content has been written yet. If a concurrent program reads the file at this point, or if an exception occurs and the program aborts, we see neither the old version nor the new one.

Consistency

Much of what I said about atomicity also applies to consistency. In fact, atomic updates are a prerequisite for internal consistency. External consistency means that several files are updated in sync. This cannot easily be done, so lock files can be used to ensure that read and write access does not interfere. Consider files in a directory that need to be consistent with one another. A common pattern is to designate a lock file that controls access to the entire directory.

Example of a writer:

with open(os.path.join(dirname, '.lock'), 'a+') as lockfile:
    fcntl.flock(lockfile, fcntl.LOCK_EX)
    model.update(dirname)

Example of a reader:

with open(os.path.join(dirname, '.lock'), 'a+') as lockfile:
    fcntl.flock(lockfile, fcntl.LOCK_SH)
    model.readall(dirname)

This method only works if all readers play along. Also, since only one writer can be active at a time (an exclusive lock blocks all shared locks), the scalability of this method is limited.

Going one step further, we can apply the write-replace pattern to entire directories: create a new directory for each update generation, and switch a symbolic link once the update is complete. For example, a mirroring application maintains a directory containing compressed packages and an index file listing file names, sizes, and checksums. When the mirror is updated, it is not enough to make the writes of each package and of the index file atomic in isolation. Instead, the packages and the index file must be switched over together to avoid checksum mismatches. To solve this, we maintain a subdirectory for each generation and switch a symbolic link to activate a generation:

mirror
|-- 483
|   |-- a.tgz
|   |-- b.tgz
|   `-- index.json
|-- 484
|   |-- a.tgz
|   |-- b.tgz
|   |-- c.tgz
|   `-- index.json
`-- current -> 483

The new generation 484 is still in the process of being updated. When all packages are in place and the index file is written, we can switch the current symbolic link with a single atomic rename (see the sketch below). Other applications then always see either the completely old or the completely new generation. Readers need to os.chdir() into the current directory; it is important that they do not open files by full path names through the link. Otherwise, a race condition occurs when a reader first opens current/index.json and then opens current/a.tgz while the symbolic link changes in between.
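A sketch of the switch (the helper name is my invention): os.symlink() cannot overwrite an existing link, so the usual technique is to create the new link under a temporary name and rename it over the old one. On POSIX, os.rename() replaces its destination atomically and moves the link itself, not what it points to:

import os

def switch_current(dirname, generation):
    # A leftover 'current.tmp' from an earlier crash would have to be
    # removed first; this sketch omits that for brevity.
    tmplink = os.path.join(dirname, 'current.tmp')
    os.symlink(generation, tmplink)
    os.rename(tmplink, os.path.join(dirname, 'current'))

switch_current('mirror', '484')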

Isolation

Isolation means that concurrent updates to the same file are serializable: there is a serial schedule that yields the same result as the parallel schedule actually executed. "Real" database systems use advanced techniques like MVCC to maintain serializability while allowing a high degree of parallelism. Back in our world, we end up using locks to serialize file updates.

Locking truncate-write updates is easy: just acquire an exclusive lock before all file operations. The following example code reads an integer from a file, increments it, and writes the file back:

def update():
    with open(filename, 'r+') as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        n = int(f.read())
        n += 1
        f.seek(0)
        f.truncate()
        f.write('{}\n'.format(n))

Locking updates that use the write-replace pattern is a bit trickier. Using a lock the same way as with truncate-write can lead to update conflicts. A naïve implementation might look like this:

def update():
    with open(filename) as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        n = int(f.read())
        n += 1
        with tempfile.NamedTemporaryFile(
                'w', dir=os.path.dirname(filename), delete=False) as tf:
            tf.write('{}\n'.format(n))
            tempname = tf.name
        os.rename(tempname, filename)

What is wrong with this code? Imagine two processes competing to update a file. The first process goes ahead, but the second blocks in the fcntl.flock() call. When the first replaces the file and releases its lock, the file descriptor the second process has open now points to a "ghost" file containing the old content, a file no longer reachable by any path name. To avoid this conflict, we must check after acquiring the lock that the file we have open is still the file the path name refers to. That is why I wrote a new LockedOpen context manager as a replacement for the built-in open context, to make sure we actually opened the right file:

class LockedOpen(object):

    def __init__(self, filename, *args, **kwargs):
        self.filename = filename
        self.open_args = args
        self.open_kwargs = kwargs
        self.fileobj = None

    def __enter__(self):
        f = open(self.filename, *self.open_args, **self.open_kwargs)
        while True:
            fcntl.flock(f, fcntl.LOCK_EX)
            fnew = open(self.filename, *self.open_args, **self.open_kwargs)
            if os.path.sameopenfile(f.fileno(), fnew.fileno()):
                fnew.close()
                break
            else:
                f.close()
                f = fnew
        self.fileobj = f
        return f

    def __exit__(self, _exc_type, _exc_value, _traceback):
        self.fileobj.close()


def update():
    with LockedOpen(filename, 'r+') as f:
        n = int(f.read())
        n += 1
        with tempfile.NamedTemporaryFile(
                'w', dir=os.path.dirname(filename), delete=False) as tf:
            tf.write('{}\n'.format(n))
            tempname = tf.name
        os.rename(tempname, filename)

Locking append updates is as simple as locking truncate-write updates: acquire an exclusive lock, then append. A long-running process that keeps the file open for a long time can release the lock between updates and let others in; a minimal sketch follows.
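A minimal sketch of a locked append, assuming the same model and filename as in the earlier examples:

import fcntl

with open(filename, 'a') as f:
    # The exclusive lock serializes concurrent appenders; the write
    # happens only after the lock has been granted.
    fcntl.flock(f, fcntl.LOCK_EX)
    f.write(model.output())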

The spooldir pattern has the beautiful property of not requiring any locks. It relies instead on a flexible naming scheme and robust file creation. The maildir specification is a good example of a spooldir pattern, and it can easily be adapted to situations other than mail handling.

Durability

Durability is a bit special because it depends not only on the application, but also on the OS and hardware configuration. In theory, we can assume that os.fsync() or os.fdatasync() calls do not return until the data has reached persistent storage. In practice, we may run into several problems: we may face incomplete fsync implementations, or awkward disk controller configurations that provide no persistence guarantee at all. There is a discussion from MySQL developers about where things can go wrong. Some database systems, like PostgreSQL, even offer a choice of persistence mechanisms, so that administrators can pick the best one at runtime. The unlucky, however, can only use os.fsync() and hope it is implemented correctly.

With the truncate-write pattern, we need to send a sync signal after writing and before closing the file. Note that this usually involves another level of write caching: the glibc buffer holds the data inside the process even before it is passed to the kernel. To empty the glibc buffer as well, we must flush() it before syncing:

with open(filename, 'w') as f:
    model.write(f)
    f.flush()
    os.fdatasync(f)

Alternatively, you can invoke Python with the -u option to get unbuffered writes for all file I/O.

Most of the time I prefer os.fdatasync() over os.fsync() to avoid synchronizing metadata updates (ownership, size, mtime, and so on). Metadata updates can result in disk seek operations, which slow the whole thing down considerably.

Applying the same trick to write-replace style updates is only half a win. We have to make sure the newly written file's content has reached non-volatile storage before the old file is replaced, but what about the replace operation itself? We have no guarantee that the directory update is carried out right away. There is much discussion on the web about how to get a directory update to disk, but in our case, with the old and new files in the same directory, we can get away with a simple solution:

os.rename(tempname, filename)
dirfd = os.open(os.path.dirname(filename), os.O_DIRECTORY)
os.fsync(dirfd)
os.close(dirfd)

We call the low-level os.open() to open the directory (Python's built-in open() does not support opening directories), and then perform os.fsync() on the directory's file descriptor.

Treating append updates is, again, similar to what I said about truncate-write: sync the appended data before closing the file.
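A sketch, following the same flush-then-sync sequence as above:

with open(filename, 'a') as f:
    f.write(model.output())
    f.flush()          # empty the user-space buffer
    os.fdatasync(f)    # force the appended data to stable storage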

The spooldir pattern has the same directory synchronization problem as the write-replace pattern. Fortunately, the same solution applies: first sync the file, then sync the directory.

Summary

It is possible to update files reliably. I have shown how to satisfy the four ACID properties. The example code serves as a toolbox: pick the parts of this programming technique that best fit your needs. Sometimes you do not need to satisfy all the ACID properties; one or two may be enough. I hope this article helps you make an informed decision about what to implement and what to leave out.

English Source: http://blog.gocept.com/2013/07/15/reliable-file-updates-with-python/
