Batched file backup and synchronization with s3cmd and split
s3cmd implements batched file backup using split. We have long used s3cmd to synchronize backups to the server, but a small problem appeared recently: after packaging, the backup file kept growing and reached about 7 GB, at which point s3cmd could no longer upload it to Amazon's cloud. The solution is to cut the packaged file with split before uploading. The command is roughly as follows:

[plain]
tar -zcvf - db_backup.sql webserver/ | openssl des3 -salt -k password | split -b 1024m - mybackup.des3_

This packs and encrypts the data, then cuts the roughly 7 GB stream into 1 GB pieces, yielding seven 1 GB files plus one smaller remainder. The pieces are then uploaded one by one with s3cmd:

[plain]
for filename in `ls mybackup.des3_*`
do
    s3cmd -v put $filename s3://mybackup/$filename
done

When the backup is needed again, download the pieces from AWS and merge them back into a single file with cat:

[plain]
cat mybackup.des3_* > mybackup.des3
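The merged mybackup.des3 is still DES3-encrypted tar output, so a full restore also needs decryption and extraction. A minimal end-to-end restore sketch, using the bucket, file prefix, and passphrase from the example above (decryption must run under an OpenSSL build whose defaults match the one used for encryption, since the default key-derivation digest has changed between OpenSSL versions):

[plain]
#!/bin/bash
# Restore sketch: reverse of the tar | openssl | split pipeline above.
set -e

# Fetch every piece (batch download, as in the usage list below).
s3cmd get s3://mybackup/mybackup.des3_* ./

# Merge in name order, decrypt with the same passphrase, and unpack.
cat mybackup.des3_* | openssl des3 -d -k password | tar -zxvf -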
A few notes on installing and using s3cmd:

1. Installation

Method 1 (Debian/Ubuntu):

[plain]
wget -O- -q http://s3tools.org/repo/deb-all/stable/s3tools.key | sudo apt-key add -
sudo wget -O /etc/apt/sources.list.d/s3tools.list http://s3tools.org/repo/deb-all/stable/s3tools.list
sudo apt-get update && sudo apt-get install s3cmd

Method 2 (from the source tarball):

[plain]
wget http://nchc.dl.sourceforge.net/project/s3tools/s3cmd/1.0.0/s3cmd-1.0.0.tar.gz
tar -zxf s3cmd-1.0.0.tar.gz -C /usr/local/
mv /usr/local/s3cmd-1.0.0/ /usr/local/s3cmd/
ln -s /usr/local/s3cmd/s3cmd /usr/bin/s3cmd

2. Usage

1) Configure, mainly the Access Key ID and Secret Access Key: s3cmd --configure
2) List all buckets (a bucket is roughly a root folder): s3cmd ls
3) Create a bucket; the name must be globally unique: s3cmd mb s3://my-bucket-name
4) Delete an empty bucket: s3cmd rb s3://my-bucket-name
5) List a bucket's contents: s3cmd ls s3://my-bucket-name
6) Upload file.txt to a bucket: s3cmd put file.txt s3://my-bucket-name/file.txt
7) Upload with everyone-readable permissions: s3cmd put --acl-public file.txt s3://my-bucket-name/file.txt
8) Batch-upload files: s3cmd put ./* s3://my-bucket-name/
9) Download a file: s3cmd get s3://my-bucket-name/file.txt
10) Batch-download: s3cmd get s3://my-bucket-name/* ./
11) Delete a file: s3cmd del s3://my-bucket-name/file.txt
12) Show the space a bucket occupies: s3cmd du -H s3://my-bucket-name

3. Directory handling rules

The following two commands both upload the files in dir1 to my-bucket-name, but with different results.

1) Without a trailing slash, dir1 itself becomes part of the remote path, so the whole directory is uploaded, much like "cp -r dir1":

[plain]
~/demo$ s3cmd put -r dir1 s3://my-bucket-name/
dir1/file1-1.txt -> s3://my-bucket-name/dir1/file1-1.txt  [1 of 1]

2) With a trailing slash, only the files under dir1 are uploaded, much like "cp dir1/* .":

[plain]
~/demo$ s3cmd put -r dir1/ s3://my-bucket-name/
dir1/file1-1.txt -> s3://my-bucket-name/file1-1.txt  [1 of 1]

4. Synchronization

This is the part of s3cmd that is hardest to use, but also its most practical feature. For the official documentation, see the s3cmd sync HowTo. First of all, be clear that sync does an MD5 comparison and transfers a file only when the local and remote copies differ.

4.1 Basic synchronization

1) Synchronize all files in the current directory: s3cmd sync ./ s3://my-bucket-name/
2) With --dry-run, only list what would be synchronized, without actually transferring anything: s3cmd sync --dry-run ./ s3://my-bucket-name/
3) With --delete-removed, also delete remote files that no longer exist locally: s3cmd sync --delete-removed ./ s3://my-bucket-name/
4) With --skip-existing, skip the MD5 comparison and simply skip any file that already exists at the destination: s3cmd sync --skip-existing ./ s3://my-bucket-name/

4.2 Advanced synchronization

4.2.1 Exclude and include rules (--exclude, --include). In the example below, file1-1.txt is excluded by the '*.txt' pattern, while file2-2.txt, although also a .txt file, is pulled back in by the 'dir2/*' include rule:

[plain]
~/demo$ s3cmd sync --dry-run --exclude '*.txt' --include 'dir2/*' ./ s3://my-bucket-name/
exclude: dir1/file1-1.txt
upload: ./dir2/file2-2.txt -> s3://my-bucket-name/dir2/file2-2.txt

4.2.2 Load exclude or include rules from a file (--exclude-from, --include-from):

[plain]
s3cmd sync --exclude-from pictures.exclude ./ s3://my-bucket-name/

Contents of pictures.exclude:

[plain]
# Hey, comments are allowed here ;-)
*.jpg
*.gif

4.2.3 Exclude and include rules also have regular-expression variants: --rexclude, --rinclude, --rexclude-from, and --rinclude-from.
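Putting the sync options together, here is a minimal sketch of a recurring backup job; the source directory /data/www, the backup/ prefix, and the '*.tmp' exclude pattern are placeholders to adapt:

[plain]
#!/bin/bash
# Recurring sync sketch (hypothetical paths and patterns; adjust before use).
set -e

SRC=/data/www/                  # trailing slash: sync the contents, not the directory itself
DEST=s3://my-bucket-name/backup/

# Preview the transfer list first; drop --dry-run once it looks right.
s3cmd sync --dry-run --exclude '*.tmp' --delete-removed "$SRC" "$DEST"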