How to write a database in a little over 100 lines of code

This article describes a simple database written by a veteran Chinese IT engineer. It is not as powerful as the databases we normally use, but it is worth studying: in certain specific scenarios it can be more flexible and convenient.

The database is called WawaDB and is implemented in Python. It goes to show how powerful Python is!

Introduction

The requirements for logging are generally as follows:

Append only, never modify, written in chronological order;

Many writes and few reads; queries usually retrieve the data for a given time range;

MongoDB's capped collections meet this requirement, but MongoDB uses a fairly large amount of memory, which feels a bit like shooting a mosquito with a cannon.

The idea behind WawaDB is that every time 1,000 log records are written, the current time and the current offset into the log file are recorded in an index file.

Then, when querying logs by time, the index is first loaded into memory, a binary search finds the offset for the requested time point, and the log file is opened and seeked to that position. This quickly locates the data you need and reads it without traversing the entire log file.
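To make the idea concrete, here is a rough sketch of the scheme; the names (append_log, seek_to, INDEX_INTERVAL) are made up for illustration and this is not the actual WawaDB code, which appears in full at the end of the article.

import bisect

INDEX_INTERVAL = 1000        # one index entry per 1000 appended records

index_times = []             # sparse in-memory index: timestamps ...
index_offsets = []           # ... and the matching byte offsets in the log file
record_count = 0

def append_log(fp_log, timestamp, data):
    """Append one record; remember (time, offset) every INDEX_INTERVAL records."""
    global record_count
    record_count += 1
    offset = fp_log.tell()
    fp_log.write('%s %s\n' % (timestamp, data))
    if record_count % INDEX_INTERVAL == 0:
        index_times.append(timestamp)
        index_offsets.append(offset)

def seek_to(fp_log, begin_time):
    """Binary-search the sparse index and seek to just before begin_time."""
    pos = max(bisect.bisect_left(index_times, begin_time) - 1, 0)
    fp_log.seek(index_offsets[pos] if index_offsets else 0)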

Performance

Core 2 P8400 @ 2.26 GHz, 2 GB RAM, 32-bit Windows 7

Write test:

Simulated writing 10,000 records per minute, for 5 hours' worth of data in total: 3 million records, 54 characters each, were inserted in 2 minutes 51 seconds.


Read test: read the logs that contain a given substring within a specified time range.

Data range     Records traversed    Results    Time (seconds)
5 hours        3 million            604        6.6
2 hours        1.2 million          225        2.7
1 hour         0.6 million          96         1.3
30 minutes     0.3 million          44         0.6

Index

Only the log record time is indexed. The introduction roughly describes how the index is implemented: binary search is certainly not as efficient as a B-tree, but it is generally not an order of magnitude slower, and the implementation is particularly simple.

Because it is a sparse index (not every log record has an index entry recording its offset), when reading data you have to start a bit earlier than strictly necessary to avoid missing records, and only once you reach the data that was actually requested do you start returning it to the user.

For example, to read logs ranging from 25 to 43, a binary search for 25 finds the point where 30 is located:

Index: 0         10        20        30        40        50
Log:   |.........|.........|.........|.........|.........|

>>> import bisect
>>> a = [0, 10, 20, 30, 40, 50]
>>> bisect.bisect_left(a, 25)
3
>>> a[3]
30
>>> bisect.bisect_left(a, 43)
5
>>> a[5]
50

So we need to step back one entry and read the logs starting from 20 (the scale just before 30). Records 21, 22, 23, ... read there are dropped because they are smaller than 25; once records 25, 27, ... are reached, they are returned to the user.

After reading past 40 (the scale just before 50), every record must be checked against 43: as soon as a record is greater than 43 (the range is treated as fully open at that end), reading stops.
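The scan between the two offsets found in the index can be condensed as follows. This is an illustrative restatement of the loop in _get_data_by_offsets from the listing at the end; begin_key and end_key are the stringified timestamps (25 and 43 in the example above), and each log line starts with its timestamp, so plain string comparison is enough here.

def scan(fp_log, begin_offset, end_offset, begin_key, end_key):
    # seek to the index entry just before begin_key (the "20" mark above)
    fp_log.seek(begin_offset)
    must_read = end_offset - begin_offset
    read_len = 0
    started = False
    for line in iter(fp_log.readline, ''):
        read_len += len(line)
        if not started and line < begin_key:
            continue                  # still before 25: drop the line
        started = True
        if read_len >= must_read and line > end_key:
            break                     # past the 40 mark and beyond 43: stop
        yield line.rstrip('\r\n')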

Overall, we only touch a small portion of a large file and still get all the data the user wants.

Buffering

When appending log records, the write buffer is set to 10 KB to reduce the number of disk writes; the system default is 4 KB.

Similarly, to improve read efficiency, the read buffer is also set to 10 KB; adjust both sizes to match the size of your log records.

Index reads and writes use line buffering, so every complete line is flushed to disk, which should prevent half-written index lines from being read. (In practice it turned out that, even with line buffering, half-written lines can still be read.)
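Roughly how those buffer sizes are passed to open() (Python 2 style, as in the listing at the end); the file paths are the ones the test database uses.

WRITE_BUFFER = 10 * 1024   # 10 KB buffer for appending log records
READ_BUFFER = 10 * 1024    # 10 KB buffer for scanning the log file

fp_append = open('./data/test.db', 'a', WRITE_BUFFER)    # buffered appends
fp_read = open('./data/test.db', 'r', READ_BUFFER)       # buffered reads
fp_index = open('./data/test.index', 'a+', 1)            # 1 = line buffered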

Query

What, you want SQL support? Don't bother; how could 100 lines of code possibly support SQL?

Queries now take a lambda expression: the system traverses the data rows within the specified time range, and every row that satisfies the user's lambda condition is returned.

Of course, this reads a lot of data the user does not need, and the lambda has to be evaluated for every line, but there is no way around it: simple is beautiful.
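For example, a query for the last hour of logs that keeps only lines containing the substring 'ERROR' (the substring and time range here are invented; get_data is the method in the listing at the end) might look like this:

from datetime import datetime, timedelta

db = WawaDB('test')
begin = datetime.now() - timedelta(hours=1)
end = datetime.now()
# only lines that satisfy the lambda are handed back to the caller
for line in db.get_data(begin, end, lambda line: 'ERROR' in line):
    print(line)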

Previously, I recorded both the field to be queried and the log time, together with the log file offset, in the index. To query, I looked up the offsets that satisfied the condition and then seeked into the log file and read each matching record individually. The only advantage was that the amount of data read was small, but there were two disadvantages:

The index file was huge and could not comfortably be loaded into memory.

Every read required a seek, which effectively defeats the read buffer and is particularly slow: four to five times slower than reading a continuous segment and filtering it with a lambda.

Write

As mentioned above, only append is used, no data is modified, and the timestamp is at the beginning of each log line.
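Concretely, a stored record looks roughly like this (see append_data in the listing at the end; the payload string here is invented):

import time
from datetime import datetime

record_time = time.mktime(datetime(2012, 1, 1, 12, 0).timetuple())
line = '%s %s' % (record_time, 'user logged in')
# e.g. "1325390400.0 user logged in" (the exact number depends on the local
# timezone), appended to the .db file as one line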

Multithreading


Data can be queried from multiple threads simultaneously; each query opens a new file descriptor on the log file, so parallel reads do not interfere with each other.

Writes are append-only, but I am not sure whether it is safe for multiple threads to append to the same file, so the recommendation is to push records onto a queue and have a dedicated thread do the writing, as in the sketch below.
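A sketch of that queue-plus-writer-thread arrangement, built around the WawaDB class from the listing at the end (this helper is not part of WawaDB itself):

import threading
try:
    import Queue as queue    # Python 2
except ImportError:
    import queue             # Python 3
from datetime import datetime

log_queue = queue.Queue()

def writer(db):
    # the only thread that ever touches the append file descriptor
    while True:
        data, record_time = log_queue.get()
        db.append_data(data, record_time)
        log_queue.task_done()

db = WawaDB('test')
t = threading.Thread(target=writer, args=(db,))
t.daemon = True
t.start()

# any other thread just enqueues its record
log_queue.put(('something happened', datetime.now()))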

Lock

No lock.

Sort

By default, query results are returned in chronological order. If you want another ordering, load them into memory and sort them with Python's sorted() function.
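For example, assuming the test data layout from the listing at the end (a random number right after the timestamp), query results could be re-sorted like this:

from datetime import datetime, timedelta

db = WawaDB('test')
begin_time = datetime.now() - timedelta(hours=1)
end_time = datetime.now()
results = db.get_data(begin_time, end_time, lambda x: '123' in x)
# sort in memory by the number that follows the timestamp in each line
by_value = sorted(results, key=lambda line: int(line.split()[1]))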


The database code, a little over 100 lines

# -*- coding: utf-8 -*-
import os
import time
import bisect
import itertools
from datetime import datetime
import logging

default_data_dir = './data/'
default_write_buffer_size = 1024 * 10
default_read_buffer_size = 1024 * 10
default_index_interval = 1000


def ensure_data_dir():
    if not os.path.exists(default_data_dir):
        os.makedirs(default_data_dir)


def init():
    ensure_data_dir()


class WawaIndex:
    def __init__(self, index_name):
        # line-buffered index file (buffering=1)
        self.fp_index = open(os.path.join(default_data_dir, index_name + '.index'), 'a+', 1)
        self.indexes, self.offsets, self.index_count = [], [], 0
        self._load_index()

    def _update_index(self, key, offset):
        self.indexes.append(key)
        self.offsets.append(offset)

    def _load_index(self):
        self.fp_index.seek(0)
        for line in self.fp_index:
            try:
                key, offset = line.split()
                self._update_index(key, offset)
            except ValueError:
                # if the index was not flushed, a half-written line may be read
                pass

    def append_index(self, key, offset):
        self.index_count += 1
        if self.index_count % default_index_interval == 0:
            self._update_index(key, offset)
            self.fp_index.write('%s %s %s' % (key, offset, os.linesep))

    def get_offsets(self, begin_key, end_key):
        left = bisect.bisect_left(self.indexes, str(begin_key))
        right = bisect.bisect_left(self.indexes, str(end_key))
        # step back one index entry so no record before begin_key is missed
        left, right = left - 1, right - 1
        if left < 0:
            left = 0
        if right < 0:
            right = 0
        if right > len(self.indexes) - 1:
            right = len(self.indexes) - 1
        logging.debug('get_index_range: %s %s %s %s %s %s',
                      self.indexes[0], self.indexes[-1], begin_key, end_key, left, right)
        return self.offsets[left], self.offsets[right]


class WawaDB:
    def __init__(self, db_name):
        self.db_name = db_name
        self.fp_data_for_append = open(os.path.join(default_data_dir, db_name + '.db'),
                                       'a', default_write_buffer_size)
        self.index = WawaIndex(db_name)

    def _get_data_by_offsets(self, begin_key, end_key, begin_offset, end_offset):
        fp_data = open(os.path.join(default_data_dir, self.db_name + '.db'),
                       'r', default_read_buffer_size)
        fp_data.seek(int(begin_offset))

        line = fp_data.readline()
        find_real_begin_offset = False
        will_read_len, read_len = int(end_offset) - int(begin_offset), 0
        while line:
            read_len += len(line)
            # skip records before begin_key (the sparse index points a bit early)
            if (not find_real_begin_offset) and (line < str(begin_key)):
                line = fp_data.readline()
                continue
            find_real_begin_offset = True
            # once past the last index entry, stop at the first record beyond end_key
            if (read_len >= will_read_len) and (line > str(end_key)):
                break
            yield line.rstrip('\r\n')
            line = fp_data.readline()

    def append_data(self, data, record_time=datetime.now()):
        def check_args():
            if not data:
                raise ValueError('data is null')
            if not isinstance(data, basestring):
                raise ValueError('data is not string')
            if data.find('\r') != -1 or data.find('\n') != -1:
                raise ValueError('data contains linesep')

        check_args()

        record_time = time.mktime(record_time.timetuple())
        data = '%s %s %s' % (record_time, data, os.linesep)
        offset = self.fp_data_for_append.tell()
        self.fp_data_for_append.write(data)
        self.index.append_index(record_time, offset)

    def get_data(self, begin_time, end_time, data_filter=None):
        def check_args():
            if not (isinstance(begin_time, datetime) and isinstance(end_time, datetime)):
                raise ValueError('begin_time or end_time is not datetime')

        check_args()

        begin_time, end_time = time.mktime(begin_time.timetuple()), time.mktime(end_time.timetuple())
        begin_offset, end_offset = self.index.get_offsets(begin_time, end_time)

        for data in self._get_data_by_offsets(begin_time, end_time, begin_offset, end_offset):
            if data_filter:
                if data_filter(data):
                    yield data
            else:
                yield data


def test():
    from datetime import datetime, timedelta
    import uuid, random
    logging.getLogger().setLevel(logging.NOTSET)

    def time_test(test_name):
        def inner(f):
            def inner2(*args, **kargs):
                start_time = datetime.now()
                result = f(*args, **kargs)
                print '%s take time: %s' % (test_name, (datetime.now() - start_time))
                return result
            return inner2
        return inner

    @time_test('gen_test_data')
    def gen_test_data(db):
        now = datetime.now()
        begin_time = now - timedelta(hours=5)
        while begin_time < now:
            print begin_time
            for i in range(10000):
                db.append_data(str(random.randint(1, 10000)) + ' ' + str(uuid.uuid1()), begin_time)
            begin_time += timedelta(minutes=1)

    @time_test('test_get_data')
    def test_get_data(db):
        begin_time = datetime.now() - timedelta(hours=3)
        end_time = begin_time + timedelta(minutes=120)
        results = list(db.get_data(begin_time, end_time, lambda x: x.find('123') != -1))
        print 'test_get_data get %s results' % len(results)

    @time_test('get_db')
    def get_db():
        return WawaDB('test')

    if not os.path.exists('./data/test.db'):
        db = get_db()
        gen_test_data(db)
        # db.index.fp_index.flush()

    db = get_db()
    test_get_data(db)


init()

if __name__ == '__main__':
    test()
