We often download data from the Internet — Excel tables, XML files, txt files, and so on — and sometimes we want to import it into a database. How? Let's walk through it in detail.
Batch importing data in the Django admin backend
In a production environment there are rarely just a few or a few hundred records. If, for example, you need to import every employee number or account of the company into the backend, adding records one at a time in the admin is not practical — you want to import them in batches.
How to batch-import SVN records from an XML file
Step 1:
Create a model for the data.
from django.db import models
from django.utils.encoding import python_2_unicode_compatible

@python_2_unicode_compatible
class SVNLog(models.Model):
    revision = models.IntegerField(verbose_name=u"revision", blank=False, null=False)
    author = models.CharField(verbose_name=u"author", max_length=60, blank=True, null=True)
    date = models.DateTimeField(verbose_name=u"revision time", null=True)
    msg = models.TextField(verbose_name=u"commit message", blank=False, null=False, default=u"")
    paths = models.TextField(verbose_name=u"affected files", blank=False, null=False, default=u"")
    created_time = models.DateTimeField(verbose_name=u"creation time", auto_now_add=True)
    update_time = models.DateTimeField(verbose_name=u"update time", auto_now=True)

    class Meta:
        ordering = ['revision']

    def __str__(self):
        return u'r%s' % (self.revision or u"")
Now that we have a model for the log data, let's create a model that accepts our uploaded XML file.
@python_2_unicode_compatible
class ImportLogFile(models.Model):
    LogFile = models.FileField(upload_to='logfile')
    FileName = models.CharField(max_length=50, verbose_name=u'filename')

    class Meta:
        ordering = ['FileName']

    def __str__(self):
        return self.FileName
OK, the code above defines the data model and the file-upload model.
Synchronize the database:
python manage.py makemigrations
python manage.py migrate
Next, modify admin.py so that we can upload files from the admin backend:
from .utils import update_svn_log  # our parsing helper, defined below

class ImportLogAdmin(admin.ModelAdmin):
    list_display = ('LogFile', 'FileName',)
    list_filter = ['FileName',]

    def save_model(self, request, obj, form, change):
        re = super(ImportLogAdmin, self).save_model(request, obj, form, change)
        update_svn_log(self, request, obj, change)
        return re

admin.site.register(ImportLogFile, ImportLogAdmin)
Note that save_model is the key here: I override the save_model method of ModelAdmin.
This is because we want uploading the file, reading it, parsing it, and writing to the database to happen in one step. If you set a breakpoint and inspect the call, you will see that the obj argument carries the upload path of the file — that path is the key to the next step, parsing the file. Now create a utils.py inside the app folder to hold the file-handling and database helpers. For simplicity, I wrote the following functions.
First, here is the XML file we want to test with:
<?xml version="1.0" encoding="UTF-8"?>
<log>
<logentry revision="2">
<author>qwert</author>
<date>2016-09-27T07:16:37.396449Z</date>
<paths>
<path action="A">/aaa/README</path>
</paths>
<msg>20160927 151630</msg>
</logentry>
<logentry revision="1">
<author>VisualSVN Server</author>
<date>2016-09-20T05:03:12.861315Z</date>
<paths>
<path action="A">/branches</path>
<path action="A">/tags</path>
<path action="A">/trunk</path>
</paths>
<msg>hello word</msg>
</logentry>
</log>
The output format we are aiming for:
r2 | qwert | 2016-09-27 15:16:37 +0800 (Tue, 27 Sep 2016) | 1 line
Changed paths:
   A /aaa/README

20160927 151630
------------------------------------------------------------------------
r1 | VisualSVN Server | 2016-09-20 13:03:12 +0800 (Tue, 20 Sep 2016) | 1 line
Changed paths:
   A /branches
   A /tags
   A /trunk

Initial structure.

And the parsing and import helper in utils.py:

# utils.py
from .models import SVNLog
import xmltodict

def update_svn_log(self, request, obj, change):
    headers = ['R', 'A', 'D', 'M', 'P']
    filepath = obj.LogFile.path
    xmlfile = xmltodict.parse(open(filepath, 'r'))
    xml_logentry = xmlfile.get('log').get('logentry')
    info_list = []
    sql_insert_list = []
    sql_update_list = []
    for j in xml_logentry:
        data_dict = {}
        pathlist = []
        # get paths
        paths = j.get('paths').get('path')
        if isinstance(paths, list):
            for path in paths:
                action = path.get('@action')
                pathtext = path.get('#text')
                pathlist.append(action + ' ' + pathtext)
            _filelist = u'\n'.join(pathlist)
            _paths = u"Changed paths:\n{}".format(_filelist)
        else:
            _filelist = paths.get('@action') + ' ' + paths.get('#text')
            _paths = u"Changed paths:\n{}".format(_filelist)
        # get revision
        revision = j.get('@revision')
        # get author
        author = j.get('author')
        # get date
        date = j.get('date')
        # get msg
        msg = j.get('msg')
        data_dict[headers[0]] = int(revision)
        data_dict[headers[1]] = author
        data_dict[headers[2]] = date
        data_dict[headers[3]] = msg
        data_dict[headers[4]] = _paths
        info_list.append(data_dict)

    # largest revision already in the database (0 if the table is empty)
    _svnlog = SVNLog.objects.order_by('-revision').first()
    _last_version = _svnlog.revision if _svnlog else 0
    for value in info_list:
        revision = value['R']
        author = value['A']
        date = value['D']
        msg = value['M']
        paths = value['P']
        if revision > _last_version:
            sql_insert_list.append(SVNLog(revision=revision, author=author,
                                          date=date, msg=msg, paths=paths))
        else:
            sql_update_list.append(SVNLog(revision=revision, author=author,
                                          date=date, msg=msg, paths=paths))
    SVNLog.objects.bulk_create(sql_insert_list)
    # Caveat: bulk_create only inserts; rows queued here for "update" will be
    # inserted as duplicates unless handled separately (e.g. per-row update).
    SVNLog.objects.bulk_create(sql_update_list)
We use the third-party library xmltodict to parse the XML. It parses the content into an OrderedDict, that is, an ordered dictionary.
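A minimal sketch of how xmltodict exposes a log entry like the sample above (this assumes xmltodict is installed via `pip install xmltodict`; attributes are keyed with an `@` prefix and element text with `#text`):

```python
import xmltodict

# A stripped-down logentry in the same shape as `svn log --xml` output
doc = xmltodict.parse(
    '<log><logentry revision="2">'
    '<author>qwert</author><msg>20160927 151630</msg>'
    '</logentry></log>'
)

entry = doc['log']['logentry']
print(entry['@revision'])  # attributes get an '@' prefix -> '2'
print(entry['author'])     # child elements by tag name -> 'qwert'
```

Because the dictionary preserves document order, iterating over it visits the entries in the same order they appear in the file.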
The path handling is the tricky part: this XML contains two logentry elements, and the paths of the first contains only one path, while the paths of the second contains three. So when parsing we need to check what type of value we got back.
paths = j.get('paths').get('path')
if isinstance(paths, list):
    pass
We check whether the value is a list. If it is, we iterate over it; if not, we handle the single entry directly. Either way we format the result to match the output format above, and then move on to the other fields.
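An alternative to writing the formatting logic twice is to normalize the lone dict into a one-element list first. This is my own sketch, not the article's code; the hand-built dicts below stand in for what xmltodict returns for one path versus several:

```python
# Stand-ins for xmltodict output: a lone <path> parses to a dict,
# several <path> siblings parse to a list of dicts.
single = {'@action': 'A', '#text': '/aaa/README'}
multi = [{'@action': 'A', '#text': '/tags'},
         {'@action': 'A', '#text': '/trunk'}]

def changed_paths(paths):
    # Normalize: wrap a lone dict so both cases go through the same loop
    if not isinstance(paths, list):
        paths = [paths]
    lines = [p['@action'] + ' ' + p['#text'] for p in paths]
    return "Changed paths:\n" + "\n".join(lines)

print(changed_paths(single))  # Changed paths:\nA /aaa/README
print(changed_paths(multi))   # Changed paths:\nA /tags\nA /trunk
```

This keeps one code path for both shapes, at the cost of one extra isinstance check.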
# get revision
revision = j.get('@revision')
# get author
author = j.get('author')
# get date
date = j.get('date')
# get msg
msg = j.get('msg')
Finally, the extracted values are stored in a dictionary.
In the loop we compare each entry's revision number with the largest revision number already in the database: if it is greater, we queue an insert; otherwise, an update.
Finally, bulk_create writes the queued rows to the database in one go, avoiding the wasted round trip of hitting the database on every iteration of the loop.
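The saving can be illustrated outside Django. This sqlite3 analogy (my sketch, not the article's code) shows the same batching idea: executemany sends all rows in a single call, just as bulk_create issues a single batched INSERT instead of one query per row:

```python
import sqlite3

# In-memory database standing in for the Django-managed one
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE svnlog (revision INTEGER, author TEXT, msg TEXT)')

rows = [
    (1, 'VisualSVN Server', 'hello word'),
    (2, 'qwert', '20160927 151630'),
]
# One batched statement for all rows, analogous to SVNLog.objects.bulk_create(...)
conn.executemany('INSERT INTO svnlog VALUES (?, ?, ?)', rows)

count = conn.execute('SELECT COUNT(*) FROM svnlog').fetchone()[0]
print(count)  # 2
```

Per-row inserts inside the loop would pay one round trip each; the batched form pays it once, which is exactly why the queued lists are flushed only after the loop finishes.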