Batching s3cmd file synchronization and backup with split

s3cmd is used here to synchronize backups to the server. A small problem appeared recently: after packaging, the backup file on the server kept growing and reached about 7 GB, at which point s3cmd could no longer upload it to Amazon's cloud. The solution is to cut the packaged file with split first. The command is roughly as follows:

    tar zcvf - db_backup.sql webserver/ | openssl des3 -salt -k password | split -b 1024m - mybackup.des3_

This pipes the tar archive through DES3 encryption and splits the result into 1 GB pieces, so the roughly 7 GB file becomes seven 1 GB files plus one smaller remainder. Then upload them one by one with s3cmd:

    for filename in `ls mybackup.des3_*`; do
        s3cmd -v put $filename s3://mybackup/$filename
    done

When restoring, download the pieces from AWS and merge them back into one file with cat:

    cat mybackup.des3_* > mybackup.des3

Some notes on installing and using s3cmd:

1. Installation

Method 1 (Debian/Ubuntu):

    wget -O- -q http://s3tools.org/repo/deb-all/stable/s3tools.key | sudo apt-key add -
    sudo wget -O /etc/apt/sources.list.d/s3tools.list http://s3tools.org/repo/deb-all/stable/s3tools.list
    sudo apt-get update && sudo apt-get install s3cmd

Method 2:

    wget http://nchc.dl.sourceforge.net/project/s3tools/s3cmd/1.0.0/s3cmd-1.0.0.tar.gz
    tar -zxf s3cmd-1.0.0.tar.gz -C /usr/local/
    mv /usr/local/s3cmd-1.0.0 /usr/local/s3cmd/
    ln -s /usr/local/s3cmd/s3cmd /usr/bin/s3cmd

2. Usage

1. Configure s3cmd, mainly the Access Key ID and Secret Access Key:

    s3cmd --configure

2. List all buckets (a bucket is equivalent to a root folder):

    s3cmd ls

3. Create a bucket; the name must be globally unique and cannot be repeated:

    s3cmd mb s3://my-bucket-name

4. Delete an empty bucket:

    s3cmd rb s3://my-bucket-name

5. List bucket contents:

    s3cmd ls s3://my-bucket-name

6. Upload file.txt to a bucket:

    s3cmd put file.txt s3://my-bucket-name/file.txt

7. Upload and make the file readable by everyone:

    s3cmd put --acl-public file.txt s3://my-bucket-name/file.txt

8. Batch upload files:

    s3cmd put ./* s3://my-bucket-name/

9. Download a file:

    s3cmd get s3://my-bucket-name/file.txt

10. Batch download:

    s3cmd get s3://my-bucket-name/* ./

11. Delete a file:

    s3cmd del s3://my-bucket-name/file.txt

12. Show the space used by a bucket:

    s3cmd du -H s3://my-bucket-name

3. Directory handling rules

The following commands both upload the files in dir1 to my-bucket-name, but with different results.

1) Without a trailing slash, dir1 itself becomes part of the remote path, i.e. the whole dir1 directory is uploaded, similar to "cp -r dir1":

    ~/demo$ s3cmd put -r dir1 s3://my-bucket-name/
    dir1/file1-1.txt -> s3://my-bucket-name/dir1/file1-1.txt  [1 of 1]

2) With a trailing slash, only the files under dir1 are uploaded, similar to "cp -r dir1/* .":

    ~/demo$ s3cmd put -r dir1/ s3://my-bucket-name/
    dir1/file1-1.txt -> s3://my-bucket-name/file1-1.txt  [1 of 1]

4. Synchronization

This is the trickiest part of s3cmd to use, but also its most practical feature. For the official instructions, see the "s3cmd sync HowTo". The key point is that sync uses MD5 verification: a file is transferred only when the local and remote copies differ.

4.1 Basic synchronization

1. Synchronize all files in the current directory:

    s3cmd sync ./ s3://my-bucket-name/

2. With --dry-run, only the items that would be synchronized are listed; nothing is actually transferred:

    s3cmd sync --dry-run ./ s3://my-bucket-name/

3. With --delete-removed, files that no longer exist locally are deleted from the bucket:

    s3cmd sync --delete-removed ./ s3://my-bucket-name/

4. With --skip-existing, no MD5 verification is performed; files that already exist remotely are skipped outright:

    s3cmd sync --skip-existing ./ s3://my-bucket-name/
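The trailing-slash rule from section 3 can be checked locally, since s3cmd's "put -r" follows the same convention as cp -r and rsync. This is a small sketch using throwaway temp directories (note that plain cp needs "dir1/." rather than "dir1/" to get the contents-only behavior):

```shell
#!/bin/sh
# Local illustration of the trailing-slash rule: without a trailing slash the
# directory name itself is kept in the destination path; with one, only the
# directory's contents are copied. All paths are throwaway temp dirs.
set -e
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p dir1 dest_a dest_b
echo hello > dir1/file1-1.txt

cp -r dir1 dest_a/      # like: s3cmd put -r dir1  s3://bucket/ -> bucket/dir1/file1-1.txt
cp -r dir1/. dest_b/    # like: s3cmd put -r dir1/ s3://bucket/ -> bucket/file1-1.txt

test -f dest_a/dir1/file1-1.txt && echo "no slash: directory name kept"
test -f dest_b/file1-1.txt      && echo "slash: contents only"
```

The same distinction applies to s3cmd sync, so it is worth running a --dry-run first when unsure which form you want.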
4.2 Advanced synchronization

4.2.1 Exclude and include rules (--exclude, --include)

In the example below, file1-1.txt is excluded by the '*.txt' pattern, while file2-2.txt, although also a .txt file, is still included because it matches the 'dir2/*' include rule:

    ~/demo$ s3cmd sync --dry-run --exclude '*.txt' --include 'dir2/*' ./ s3://my-bucket-name/
    exclude: dir1/file1-1.txt
    upload: ./dir2/file2-2.txt -> s3://my-bucket-name/dir2/file2-2.txt

4.2.2 Loading exclude or include rules from a file (--exclude-from, --include-from)

    s3cmd sync --exclude-from pictures.exclude ./ s3://my-bucket-name/

Contents of pictures.exclude:

    # Hey, comments are allowed here ;-)
    *.jpg
    *.gif

4.2.3 The exclude and include rules also have regular-expression variants: --rexclude, --rinclude, --rexclude-from and --rinclude-from.
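The evaluation order in 4.2.1 (an include pattern overrides an exclude pattern that would otherwise match) can be mimicked with plain shell globbing. This is only a rough local sketch of the logic, not s3cmd itself; the file names are the ones from the example above:

```shell
#!/bin/sh
# Rough mimic of --exclude '*.txt' --include 'dir2/*': a file matching the
# exclude pattern is skipped unless the include pattern also matches it.
# The first matching case branch wins, so the include is listed first.
set -e
for f in dir1/file1-1.txt dir2/file2-2.txt; do
    case "$f" in
        dir2/*) echo "upload:  $f" ;;   # --include 'dir2/*' overrides the exclude
        *.txt)  echo "exclude: $f" ;;   # --exclude '*.txt'
        *)      echo "upload:  $f" ;;
    esac
done
# Prints:
#   exclude: dir1/file1-1.txt
#   upload:  dir2/file2-2.txt
```

Real s3cmd applies its patterns in a similar first-match spirit, which is why combining --exclude '*' with a narrow --include is a common way to sync only a subset of a tree.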