Amazon S3 (Amazon Simple Storage Service) is essentially a server for storing files online: you upload your own files, then manage them through its open APIs. The official website is http://aws.amazon.com/cn/s3/
S3 organizes data into buckets. My understanding is that a bucket is a kind of module: S3 is huge, and if I want to store both music files and installation packages, dumping them together would be very messy, so buckets divide things into modules. That also makes backend administration easier, and since developers can be granted access to only one bucket, security improves as well. In addition, every file on S3 has a key, which, viewed from the console, is the file name. If public access is enabled, any user can access the file based on this key.
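To illustrate the key-to-URL mapping, here is a small sketch; the helper function, bucket name, and key are all made up for illustration, and it assumes the object has been made public:

```python
def s3_public_url(bucket_name, key):
    """Build the public URL for an S3 object (bucket and object must be public)."""
    return "http://s3.amazonaws.com/%s/%s" % (bucket_name, key)

# e.g. a music file stored under the key "songs/demo.mp3"
print(s3_public_url("my-music-bucket", "songs/demo.mp3"))
# -> http://s3.amazonaws.com/my-music-bucket/songs/demo.mp3
```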
What I learned most recently was writing a Python function that uploads a file to S3.
Uploading mainly involves two steps: (1) checking whether the file already exists; (2) uploading the file.
In the code below that checks whether a key exists, bucket_name is the name of the bucket mentioned above. There are also two values, aws_access_key_id and aws_secret_access_key, which are S3's credentials; you get them after registering. In Python they can simply be written into the code. In Java you would instead write them into a file named credentials and put that file in the .aws folder under the Windows user directory. We found that inconvenient, so we didn't do it in Java.
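For reference, the credentials file mentioned above is a plain INI file (the key values below are placeholders) that lives in the .aws folder under the user directory:

```ini
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```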
```python
import boto
from boto.s3.key import Key

def is_s3_file_exist(key):
    """Return True if the given key exists on S3."""
    # connect to the bucket
    conn = boto.connect_s3(aws_access_key_id, aws_secret_access_key)
    bucket = conn.get_bucket(bucket_name)
    # create a key to keep track of our file in the storage
    k = Key(bucket)
    k.key = key
    if k.exists():
        logging.info("S3 exists this file")
        return True
    else:
        return False
```
The upload part of the code starts the same way as the previous function; the upload itself is a single line, k.set_contents_from_filename(filepath). Then there is k.make_public(), which, as mentioned earlier, lets everyone access the file, though the permissions also have to be configured in the console.
```python
def upload_apk_to_s3(key, filepath):
    """Upload an APK to S3."""
    try:
        # connect to the bucket
        conn = boto.connect_s3(aws_access_key_id, aws_secret_access_key)
        bucket = conn.get_bucket(bucket_name)
        # create a key to keep track of our file in the storage
        k = Key(bucket)
        k.key = key
        k.set_contents_from_filename(filepath)
        # we need to make it public so it can be accessed publicly,
        # using a URL like http://s3.amazonaws.com/bucket_name/key
        k.make_public()
        logging.info("upload file to S3 success")
        return True
    except Exception as e:
        logging.info("upload_apk_to_s3 error")
        logging.info(e)
        return False
```
The code calls logging.info(""), a common way to print logs on Linux. Because the code runs in the background on Linux, anything written with print would never be seen, so logging is very useful. Once configured, it can be used throughout the whole project:
```python
import logging

logging.basicConfig(filename='/var/log/xx/xx.log',
                    filemode='a',
                    format='%(asctime)s %(name)s %(levelname)s %(message)s',
                    datefmt='%Y-%m-%d %H:%M:%S',
                    level=logging.DEBUG)
```
After that you only need to call logging.info(""), then use the command
```shell
tail -f /var/log/xx/xx.log
```
to watch the latest log output. Note that the directories have to be created first: the log file will be created automatically, but the directory will not.
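A minimal sketch of creating the log directory up front; the helper name is made up, and the path mirrors the placeholder path in the logging config:

```python
import os

# placeholder log directory, matching the logging config's filename path
log_dir = '/var/log/xx'

def ensure_dir(path):
    """Create the directory (and any missing parents) if it does not exist."""
    if not os.path.exists(path):
        os.makedirs(path)

# call ensure_dir(log_dir) once at startup, before logging.basicConfig
```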