How to use Python for stable and reliable file operations

Tags: crc32, checksum
Programs need to update files. Although most programmers know that unexpected things can happen during I/O, I often see surprisingly naive code. In this article, I would like to share some insights on how to improve I/O reliability in Python code.

Consider the following Python snippet, which performs some operation on data in a file and saves the result back to the file:

with open(filename) as f:
    input = f.read()
output = do_something(input)
with open(filename, 'w') as f:
    f.write(output)

It looks simple, right? Yet it is not as easy as it seems at first glance. Debugging applications on production servers, I often encounter strange behavior.

Here are examples of failure modes I have seen:

  • An out-of-control server process spewed large volumes of logs and the disk filled up. write() raised an exception after the file had already been truncated, leaving the file empty.

  • Several instances of the application ran in parallel. When they had all finished, the file content was gibberish because the output of multiple instances had been interleaved.

  • After completing a write, the application triggered some follow-up actions. Seconds later, the power failed. After we restarted the server, we saw the old file content again: the data already handed to other applications no longer matched what we saw in the file.

Nothing here is new. The goal of this article is to present common methods and techniques to Python developers who lack experience in systems programming. I will provide code examples so that developers can easily apply these methods to their own code.

What does "reliability" mean?

Broadly speaking, reliability means that all required functions are performed under all specified conditions. For file operations, the function in question is creating, replacing, or appending file content. Here we can draw inspiration from database theory: the ACID properties of the classic transaction model serve as a guide for improving reliability.

Before we start, let's look at how our example relates to the four ACID properties:

  • Atomicity requires that a transaction either succeed completely or fail completely. In the example above, if the disk is full, some of the content may still get written to the file. In addition, if other programs read the file while it is being written, they may obtain a partially completed version or even trigger write errors.

  • Consistency means that an operation must take the system from one valid state to another valid state. Consistency can be divided into two parts: internal and external consistency. Internal consistency means that the file's data structures are consistent. External consistency means that the file's content agrees with data related to it. In this example, it is hard to tell whether the application is consistent, because we do not know the application. But since consistency requires atomicity, we can at least say that internal consistency is not guaranteed.

  • Isolation is violated if multiple identical transactions produce different results when executed concurrently. Clearly, the code above offers no protection against operation failures or other isolation violations.

  • Durability means that changes are permanent. Before we tell the user we succeeded, we must be sure the data has reached reliable storage and not just a write cache. The code above assumes that calling write() makes the disk I/O happen immediately, but the POSIX standard does not guarantee this.

Use a database system whenever possible

If we can obtain all four ACID properties, we have made great strides toward reliability. But that requires serious programming effort. Why reinvent the wheel? Most database systems already have ACID transactions.

Reliable data storage is a solved problem. If you need reliable storage, use a database. Chances are that, without decades of effort, your own solution to this problem will not be as good as that of the people who have focused on it for years. If you do not want to install a big database server, you can use SQLite, which has ACID transactions, is small and free, and is included in Python's standard library as the sqlite3 module.
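To illustrate the point, here is a minimal sketch of an ACID update using the standard library's sqlite3 module. The table and column names are made up for the example; the `with conn:` block commits on success and rolls back on any exception, so the counter is never left half-updated:

```python
import sqlite3

def increment_counter(db_path):
    """Atomically read-modify-write a counter inside one ACID transaction."""
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # commits on success, rolls back on exception
            conn.execute(
                "CREATE TABLE IF NOT EXISTS counter "
                "(id INTEGER PRIMARY KEY, n INTEGER)")
            conn.execute(
                "INSERT OR IGNORE INTO counter (id, n) VALUES (1, 0)")
            conn.execute("UPDATE counter SET n = n + 1 WHERE id = 1")
        return conn.execute(
            "SELECT n FROM counter WHERE id = 1").fetchone()[0]
    finally:
        conn.close()
```

Even if the process crashes mid-update, SQLite's journal guarantees that a reader sees either the old or the new counter value, never a torn one.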

The article could end here, but there are well-founded reasons not to use a database. They are usually constraints on file format or file location, neither of which is easy to control in a database system. Reasons include:

  • We must process files in a fixed format or location generated by other applications,

  • We must write files for other applications to consume (under the same kinds of constraints),

  • Our files must be easy for humans to read or modify.

... and so on. You get the idea.

If we implement reliable file updates ourselves, there are some programming techniques to draw on. Below I present four common file update patterns. After that, I will discuss what steps are needed to satisfy each ACID property within each pattern.

File update patterns

Files can be updated in many ways, but I see at least four common patterns. These four patterns will serve as the basis for the rest of this article.

Truncate-write

This may be the most basic pattern. In the following example, assume the domain model code reads the data, performs some computation, and then re-opens the existing file in write mode:

with open(filename, 'r') as f:
    model.read(f)
model.process()
with open(filename, 'w') as f:
    model.write(f)

A variant of this pattern opens the file in read-write update mode (the 'a+' mode used below), seeks to the beginning, explicitly calls truncate(), and rewrites the file content:

with open(filename, 'a+') as f:
    f.seek(0)
    model.input(f.read())
    model.compute()
    f.seek(0)
    f.truncate()
    f.write(model.output())

This variant opens the file only once and keeps it open throughout, which can simplify locking, for example.

Write-replace

Another widely used pattern is to write the new content to a temporary file and then replace the original file:

with tempfile.NamedTemporaryFile(
        'w', dir=os.path.dirname(filename), delete=False) as tf:
    tf.write(model.output())
    tempname = tf.name
os.rename(tempname, filename)

This method is more robust against errors than truncate-write; see the discussion of atomicity and consistency below. Many applications use this method.

These two patterns are so common that the ext4 filesystem in the Linux kernel even detects them automatically to fix some reliability shortcomings. But do not depend on this feature: you are not always on ext4, and the administrator may have turned it off.

Append

The third pattern appends new data to an existing file:

with open(filename, 'a') as f:
    f.write(model.output())

This pattern is used for writing log files and other tasks that accumulate data for processing. Technically, it is extremely simple. An interesting extension is to update the file only via append operations during regular operation, and then periodically reorganize the file to make it more compact.
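The compaction idea mentioned above can be sketched as follows (the `is_obsolete` predicate and record layout are hypothetical, not from the original). Note that the rewrite itself borrows the write-replace pattern described next, so readers never see a half-compacted file:

```python
import os
import tempfile

def compact(logfile, is_obsolete):
    """Rewrite an append-only log, dropping obsolete records.

    The new version is written to a temporary file in the same
    directory and atomically renamed over the original.
    """
    with open(logfile, 'r') as src, tempfile.NamedTemporaryFile(
            'w', dir=os.path.dirname(os.path.abspath(logfile)),
            delete=False) as dst:
        for line in src:
            if not is_obsolete(line):
                dst.write(line)
        tempname = dst.name
    os.rename(tempname, logfile)
```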

Spooldir

Here, we treat a directory as the logical data store and create a uniquely named file for each record:

with open(unique_filename(), 'w') as f:
    f.write(model.output())

This pattern shares the accumulation property of the append pattern. A huge advantage is that we can put small amounts of metadata into the file name. For example, this can be used to convey processing status. A particularly clever implementation of the spooldir pattern is the maildir format. Maildirs use a naming scheme with additional subdirectories to perform update operations in a reliable, lock-free way. The md and gocept.filestore libraries provide convenient wrappers for maildir operations.
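The `unique_filename()` helper used above is not defined in the original; a minimal sketch could embed a timestamp, the process id, a random suffix, and a status tag in the name (this particular naming scheme is my assumption, not part of the original text):

```python
import os
import time
import uuid

def unique_filename(directory='.', status='new'):
    """Build a spooldir-style file name carrying metadata.

    The name embeds a timestamp, the process id, a random suffix for
    uniqueness, and a status tag that consumers can filter on.
    """
    name = '{:.6f}.{}.{}.{}'.format(
        time.time(), os.getpid(), uuid.uuid4().hex[:8], status)
    return os.path.join(directory, name)
```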

If your file name generation does not guarantee unique results, it may even be necessary to require that the file be genuinely new. The following code calls the low-level os.open() with suitable flags:

fd = os.open(filename, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o666)
with os.fdopen(fd, 'w') as f:
    f.write(...)

After the file has been opened with O_EXCL, we use os.fdopen() to convert the raw file descriptor into an ordinary Python file object.

Apply ACID properties to file updates

Next, I will try to harden the file update patterns. Let's see, in turn, what we can do to satisfy each ACID property. I will keep things as simple as possible, since we are not trying to write a complete database system. Note that the material in this section is not exhaustive, but it can provide a good starting point for your own experiments.

Atomicity

  The write-replace pattern provides atomicity because the underlying os.rename() is atomic. This means that at any given point in time, a process sees either the old file or the new file. The pattern is naturally robust against write errors: if a write operation raises an exception, the rename is never executed, which eliminates the risk of overwriting a correct old file with a corrupted new one.

  The append pattern is not atomic, because there is a risk of appending an incomplete record. But there is a trick to make updates atomic: tag each written record with a checksum. When reading the log, ignore all records without a valid checksum. This way, only complete records are processed. In the following example, an application makes periodic measurements and appends a JSON record to the log each time. We compute the CRC32 checksum of the record's byte representation and append it to the same line:

with open(logfile, 'ab') as f:
    for i in range(3):
        measure = {'timestamp': time.time(), 'value': random.random()}
        record = json.dumps(measure).encode()
        checksum = '{:8x}'.format(zlib.crc32(record)).encode()
        f.write(record + b' ' + checksum + b'\n')

The example code simulates the measurements by generating a random value each time.

$ cat log
{"timestamp": 1373396987.258189, "value": 0.9360123151217828} 9495b87a
{"timestamp": 1373396987.25825, "value": 0.40429005476999424} 149afc22
{"timestamp": 1373396987.258291, "value": 0.232021160265939} d229d937

To process the log file, we read one record per line, split off the checksum, and compare it against the record we read:

with open(logfile, 'rb') as f:
    for line in f:
        record, checksum = line.strip().rsplit(b' ', 1)
        if checksum.decode() == '{:8x}'.format(zlib.crc32(record)):
            print('read measure: {}'.format(json.loads(record.decode())))
        else:
            print('checksum error for record {}'.format(record))

Now we can simulate a truncated write by cutting the last line short:

$ cat log
{"timestamp": 1373396987.258189, "value": 0.9360123151217828} 9495b87a
{"timestamp": 1373396987.25825, "value": 0.40429005476999424} 149afc22
{"timestamp": 1373396987.258291, "value": 0.23202

When the log is read, the incomplete last line is rejected:

$ read_checksummed_log.py log
read measure: {'timestamp': 1373396987.258189, 'value': 0.9360123151217828}
read measure: {'timestamp': 1373396987.25825, 'value': 0.40429005476999424}
checksum error for record b'{"timestamp": 1373396987.258291, "value":'

This approach of checksumming log records is used by a large number of applications, including many database systems.

  In the spooldir pattern, we can likewise add a checksum to each file. Another, probably simpler, approach is to borrow from the write-replace pattern: first write the file off to the side, then move it to its final location. Design a naming scheme that protects files still being written from consumers. In the following example, all files ending in .tmp are ignored by readers and can therefore be used safely during write operations:

newfile = generate_id()
with open(newfile + '.tmp', 'w') as f:
    f.write(model.output())
os.rename(newfile + '.tmp', newfile)

Finally, truncate-write is non-atomic. I am sorry to say I cannot offer a variant that satisfies atomicity. Right after the truncate operation, the file is empty, and no new content has been written yet. If a concurrent program reads the file at that moment, or if an exception occurs and the program is aborted, we see neither the old version nor any complete new version.

Consistency

Much of what I said about atomicity applies to consistency as well. In fact, atomic updates are a prerequisite for internal consistency. External consistency means that several files are updated in sync. This is not easy; lock files can be used to ensure that read and write access do not interfere with each other. Consider a directory whose files must be consistent with one another. A common pattern is to designate a lock file that controls access to the entire directory.

Example of a writer:

with open(os.path.join(dirname, '.lock'), 'a+') as lockfile:
    fcntl.flock(lockfile, fcntl.LOCK_EX)
    model.update(dirname)

Example of a reader:

with open(os.path.join(dirname, '.lock'), 'a+') as lockfile:
    fcntl.flock(lockfile, fcntl.LOCK_SH)
    model.readall(dirname)

This approach only works if all readers cooperate. Also, since only one writer can be active at a time (the exclusive lock blocks all shared locks), the scalability of this approach is limited.

Going further, we can apply the write-replace pattern to an entire directory: create a new directory for each update generation and switch a symbolic link once the update is complete. For example, a mirroring application maintains a directory containing compressed packages and an index file listing file names, sizes, and checksums. When the upstream mirror is updated, it is not enough to make the updates of the packages and of the index file individually atomic. Instead, we must present the packages and the index file together, to avoid mismatched checksums. To solve this, we maintain a subdirectory per generation and switch a symbolic link to activate a generation:

mirror
|-- 483
|   |-- a.tgz
|   |-- b.tgz
|   `-- index.json
|-- 484
|   |-- a.tgz
|   |-- b.tgz
|   |-- c.tgz
|   `-- index.json
`-- current -> 483

The new generation 484 is being prepared. When all packages are in place and the index file is updated, we can switch the current symlink atomically: since os.symlink() cannot replace an existing link, we create the new link under a temporary name and move it over current with an atomic os.rename(). Other applications always see either the completely old or the completely new generation. Readers need to os.chdir() into the current directory; it is important that they do not refer to files by their full path names. Otherwise there is a race condition when a reader first opens current/index.json and then opens current/a.tgz after the symlink has changed in between.
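The symlink switch can be sketched like this (the function name `activate_generation` and the temporary link name are my own; the directory layout follows the example above):

```python
import os

def activate_generation(mirror_dir, generation):
    """Atomically repoint the 'current' symlink at a new generation."""
    tmp_link = os.path.join(mirror_dir, 'current.tmp')
    target = os.path.join(mirror_dir, 'current')
    # Remove a stale temporary link left over from a crashed run.
    if os.path.islink(tmp_link):
        os.unlink(tmp_link)
    os.symlink(generation, tmp_link)   # relative link, e.g. '484'
    os.rename(tmp_link, target)        # atomic switch on POSIX
```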

Isolation

Isolation means that concurrent updates to the same file are serializable: there exists a serial schedule that produces the same results as the parallel schedule actually executed. "Real" database systems use advanced techniques like MVCC to maintain serializability while still allowing a high degree of parallelism. Back in our scenario, we end up using locks to serialize file updates.

Locking truncate-write updates is easy: just acquire an exclusive lock before all file operations. The following example code reads an integer from a file, increments it, and updates the file:

def update():
    with open(filename, 'r+') as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        n = int(f.read())
        n += 1
        f.seek(0)
        f.truncate()
        f.write('{}\n'.format(n))

Locking write-replace updates is a bit trickier. Using the lock the same way as in truncate-write can lead to update conflicts. A naive implementation might look like this:

def update():
    with open(filename) as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        n = int(f.read())
        n += 1
        with tempfile.NamedTemporaryFile(
                'w', dir=os.path.dirname(filename), delete=False) as tf:
            tf.write('{}\n'.format(n))
            tempname = tf.name
        os.rename(tempname, filename)

What is wrong with this code? Imagine two processes competing to update a file. The first process runs ahead, but the second is blocked in the fcntl.flock() call. When the first process replaces the file and releases the lock, the file descriptor the second process has open now points to a "ghost" file (not reachable by any path name) containing the old content. To avoid this conflict, we must check after acquiring the lock that the file we have open is still the one the path name refers to. So I wrote a new LockedOpen context manager to replace the built-in open context; it makes sure we actually open the right file:

class LockedOpen(object):

    def __init__(self, filename, *args, **kwargs):
        self.filename = filename
        self.open_args = args
        self.open_kwargs = kwargs
        self.fileobj = None

    def __enter__(self):
        f = open(self.filename, *self.open_args, **self.open_kwargs)
        while True:
            fcntl.flock(f, fcntl.LOCK_EX)
            fnew = open(self.filename, *self.open_args, **self.open_kwargs)
            if os.path.sameopenfile(f.fileno(), fnew.fileno()):
                fnew.close()
                break
            else:
                f.close()
                f = fnew
        self.fileobj = f
        return f

    def __exit__(self, _exc_type, _exc_value, _traceback):
        self.fileobj.close()


def update():
    with LockedOpen(filename, 'r+') as f:
        n = int(f.read())
        n += 1
        with tempfile.NamedTemporaryFile(
                'w', dir=os.path.dirname(filename), delete=False) as tf:
            tf.write('{}\n'.format(n))
            tempname = tf.name
        os.rename(tempname, filename)

Locking append updates is as simple as locking truncate-write updates: acquire an exclusive lock, then append. Long-running processes that keep the file open permanently can release the lock between updates to let other processes in.
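A locked append can be sketched as follows (the helper name `append_record` is my own; fcntl is POSIX-only). The exclusive lock is released automatically when the file object is closed:

```python
import fcntl

def append_record(logfile, data):
    """Append one record under an exclusive lock.

    flock() blocks until no other process holds the lock, so
    concurrent appenders are serialized and records never interleave.
    """
    with open(logfile, 'a') as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        f.write(data)
        # The lock is dropped when the file object is closed.
```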

  The spooldir pattern has the elegant property of not requiring any locks at all, provided that you use a fresh naming scheme and robust file name generation. The maildir specification is a good example of the spooldir pattern; it can easily be adapted to other situations, not just handling email.

Durability

Durability is a bit special because it depends not only on the application but also on OS and hardware configuration. In theory, we can assume that calls to os.fsync() or os.fdatasync() do not return until the data has reached persistent storage. In practice, we may run into several problems: incomplete fsync implementations, or badly configured disk controllers that offer no persistence guarantee at all. There is a discussion by MySQL developers covering in great detail where things can go wrong. Some database systems, like PostgreSQL, even offer a choice of persistence mechanisms so the administrator can select the best one at runtime. The unlucky ones, however, can only use os.fsync() and hope it is implemented correctly.

With the truncate-write pattern, we need to issue a sync after writing and before closing the file. Note that this usually involves another level of write caching: the glibc buffer holds writes back inside the process even before they are passed to the kernel. To get the glibc buffer empty as well, we have to flush() it before syncing:

with open(filename, 'w') as f:
    model.write(f)
    f.flush()
    os.fdatasync(f)

Alternatively, you can invoke Python with the -u flag to get unbuffered writes for all file I/O.

Most of the time, I prefer os.fdatasync() over os.fsync() to avoid syncing metadata updates (ownership, size, mtime, ...). Metadata updates can result in additional disk seeks, which slows everything down quite a bit.
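The flush-then-sync sequence can be wrapped in a small helper (the name `sync_file` is my own). It prefers os.fdatasync() where available and falls back to os.fsync() on platforms that lack it, such as macOS:

```python
import os

def sync_file(f, data_only=True):
    """Flush Python's buffer, then force the data to stable storage.

    With data_only=True, use os.fdatasync() where available, skipping
    metadata-only updates; otherwise fall back to os.fsync().
    """
    f.flush()
    if data_only and hasattr(os, 'fdatasync'):
        os.fdatasync(f.fileno())
    else:
        os.fsync(f.fileno())
```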

Applying the same technique to write-replace style updates is only half the story. We have to make sure the newly written file's content has reached non-volatile storage before replacing the old file, but what about the replace operation itself? We cannot guarantee that the directory update was carried out. There are long discussions on the net about how to sync a directory update. But in our case, where the old and new file are in the same directory, we can get away with a rather simple solution:

os.rename(tempname, filename)
dirfd = os.open(os.path.dirname(filename), os.O_DIRECTORY)
os.fsync(dirfd)
os.close(dirfd)

We call the low-level os.open() to open the directory (Python's built-in open() does not support opening directories), then issue os.fsync() on the directory's file descriptor.

Treating append updates is, again, similar to the truncate-write case described above.

  The spooldir pattern has the same directory-sync problem as write-replace. Fortunately, the same solution applies: sync the file first, then sync the directory.

Summary

With this, reliable file updates are within reach. I have demonstrated how to satisfy each of the four ACID properties. The code examples shown serve as a toolbox: pick the programming techniques that best meet your needs. Sometimes you do not need all the ACID properties, perhaps only one or two. I hope this article helps you make a well-informed decision about what to implement and what to leave out.

Original article: http://blog.gocept.com/2013/07/15/reliable-file-updates-with-python/

