A memory-saving Python storage solution for sparse matrices

Source: Internet
Author: User
Recommendation systems in Python often need to process data such as (user_id, item_id, rating) triples, which mathematically form a sparse matrix. scipy provides the sparse module for exactly this, but scipy.sparse has several problems that make it unsuitable here: 1. no single format supports fast slicing of data[i, ...], data[..., j], and data[i, j] at the same time; 2. the data lives entirely in memory, so it cannot handle massive datasets.

Supporting fast data[i, ...] and data[..., j] slicing requires storing all the data for a given i (or j) contiguously; supporting massive data additionally requires keeping part of the data on disk and using memory as a cache. The solution here is fairly simple: use a dict-like container to store the data. For a given i (say 9527), all of its data is stored under dict['i9527']; likewise, for a given j (say 3306), all of its data is stored under dict['j3306']. To fetch data[9527, ...], you only need to read dict['i9527']. Conceptually dict['i9527'] is itself a dict mapping each j to its value, but to save memory we serialize that dict as a packed binary string. The code:
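The packing scheme described above can be sketched on its own before looking at the full class. This is a minimal illustration (the example row and values are made up); each (j, value) pair becomes an 8-byte record via struct.pack('if', ...), and the concatenated records decode back into a dict:

```python
import struct

# Pack (column, value) pairs for one row into a single binary string,
# one 8-byte record (int32 + float32) per pair.
pairs = [(3306, 4.5), (42, 2.0)]
row_blob = b''.join(struct.pack('if', j, v) for j, v in pairs)

# Decode the fixed-size records back into a {column: value} dict.
decoded = {j: v for j, v in struct.iter_unpack('if', row_blob)}
print(decoded)
```

Storing the row as one bytes object instead of a Python dict of boxed ints and floats is what saves the memory: 8 bytes per entry instead of several dozen.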

```python
'''Sparse Matrix'''
import struct
import numpy as np
import bsddb
from cStringIO import StringIO


class DictMatrix():
    def __init__(self, container={}, dft=0.0):
        self._data = container
        self._dft = dft
        self._nums = 0

    def __setitem__(self, index, value):
        try:
            i, j = index
        except (TypeError, ValueError):
            raise IndexError('invalid index')
        # To save memory, pack (j, value) into a binary string.
        ik = 'i%d' % i
        ib = struct.pack('if', j, value)
        jk = 'j%d' % j
        jb = struct.pack('if', i, value)
        try:
            self._data[ik] += ib
        except KeyError:
            self._data[ik] = ib
        try:
            self._data[jk] += jb
        except KeyError:
            self._data[jk] = jb
        self._nums += 1

    def __getitem__(self, index):
        try:
            i, j = index
        except (TypeError, ValueError):
            raise IndexError('invalid index')
        if isinstance(i, int):
            ik = 'i%d' % i
            if not self._data.has_key(ik):
                return self._dft
            ret = dict(np.fromstring(self._data[ik], dtype='i4,f4'))
            if isinstance(j, int):
                return ret.get(j, self._dft)
        else:
            jk = 'j%d' % j
            if not self._data.has_key(jk):
                return self._dft
            ret = dict(np.fromstring(self._data[jk], dtype='i4,f4'))
        return ret

    def __len__(self):
        return self._nums

    def __iter__(self):
        pass

    def from_file(self, fp, sep='\t'):
        '''Build the matrix from a file. Since dbm reads/writes are slower
        than memory, writes are cached and flushed in batches; and because
        repeated string concatenation performs poorly, StringIO is used to
        splice the records together.'''
        cnt = 0
        cache = {}
        for l in fp:
            if 10000000 == cnt:
                self._flush(cache)
                cnt = 0
                cache = {}
            i, j, v = [float(x) for x in l.split(sep)]
            ik = 'i%d' % i
            ib = struct.pack('if', j, v)
            jk = 'j%d' % j
            jb = struct.pack('if', i, v)
            try:
                cache[ik].write(ib)
            except KeyError:
                cache[ik] = StringIO()
                cache[ik].write(ib)
            try:
                cache[jk].write(jb)
            except KeyError:
                cache[jk] = StringIO()
                cache[jk].write(jb)
            cnt += 1
            self._nums += 1
        self._flush(cache)
        return self._nums

    def _flush(self, cache):
        for k, v in cache.items():
            v.seek(0)
            s = v.read()
            try:
                self._data[k] += s
            except KeyError:
                self._data[k] = s


if __name__ == '__main__':
    db = bsddb.btopen(None, cachesize=268435456)
    data = DictMatrix(db)
    data.from_file(open('/path/to/log.txt', 'r'), ',')
```
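The code above is Python 2: bsddb and cStringIO no longer exist in Python 3. As a rough sketch of the same data structure in Python 3 (the class name DictMatrix3 and the plain-dict default container are my assumptions, not part of the original), the packing and slicing logic translates like this:

```python
import struct

class DictMatrix3:
    """Hypothetical Python 3 port sketch: any bytes-valued mapping
    (a plain dict, or a dbm database) can serve as the container."""

    def __init__(self, container=None, dft=0.0):
        self._data = {} if container is None else container
        self._dft = dft
        self._nums = 0

    def __setitem__(self, index, value):
        i, j = index
        # Append the packed (j, value) record to row i,
        # and the packed (i, value) record to column j.
        self._data[b'i%d' % i] = self._data.get(b'i%d' % i, b'') + struct.pack('if', j, value)
        self._data[b'j%d' % j] = self._data.get(b'j%d' % j, b'') + struct.pack('if', i, value)
        self._nums += 1

    def __getitem__(self, index):
        i, j = index
        if isinstance(i, int):                 # row slice or single cell
            buf = self._data.get(b'i%d' % i)
            if buf is None:
                return self._dft if isinstance(j, int) else {}
            row = dict(struct.iter_unpack('if', buf))
            return row.get(j, self._dft) if isinstance(j, int) else row
        buf = self._data.get(b'j%d' % j, b'')  # column slice
        return dict(struct.iter_unpack('if', buf))

    def __len__(self):
        return self._nums


m = DictMatrix3()
m[9527, 3306] = 5.0
m[9527, 42] = 1.5
print(m[9527, ...])   # row 9527 as a {j: value} dict
print(m[..., 3306])   # column 3306 as an {i: value} dict
```

This drops the batched from_file path for brevity; the slicing behavior (m[i, ...], m[..., j], m[i, j]) matches the original design.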

Test: importing 1.2 million rating records (integer, integer, floating-point format) from a text file. With an in-memory dict as the container, the build finished in 12 minutes but consumed memory on the order of GB; with the bdb storage from the sample code, the build took 20 minutes and used around 300 MB, slightly more than the configured cachesize. Read test:

```python
import timeit
timeit.Timer('foo = __main__.data[9527, ...]', 'import __main__').timeit(number=1000)
```

1,000 reads took 1.4788 seconds, i.e. about 1.5 ms per read.

Another advantage of using a dict-like container for storage is that you can swap in an in-memory dict, any other flavor of DBM, or even the legendary Tokyo Cabinet...
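Since the container only needs a bytes-valued mapping interface, swapping the backend really is a one-line change. A minimal sketch using the standard-library dbm module (the file path and sample values here are illustrative, not from the original):

```python
import dbm
import os
import struct
import tempfile

# Open a disk-backed key/value store; 'c' creates it if missing.
path = os.path.join(tempfile.mkdtemp(), 'matrix')
db = dbm.open(path, 'c')

# dbm keys and values must be bytes, which the packed-record scheme
# already satisfies: store one row as a blob of (j, value) records.
db[b'i9527'] = struct.pack('if', 3306, 4.0)

# Decode the row back into a {column: value} dict.
row = dict(struct.iter_unpack('if', db[b'i9527']))
db.close()
print(row)
```

Any backend exposing the same mapping-of-bytes interface (bsddb, Tokyo Cabinet bindings, or a plain dict) slots in the same way.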

That's the complete code.
