This section describes how to export data from one or more DynamoDB tables to Amazon S3 buckets. You must create the S3 buckets in advance, before you run the export.
Note: If you have not used AWS Data Pipeline before, you must create two IAM roles before following this procedure. For more information, see Creating IAM Roles for AWS Data Pipeline.
Export Data from DynamoDB to S3
- Sign in to the AWS Management Console and open the DynamoDB console at https://console.aws.amazon.com/dynamodb/.
- On the Amazon DynamoDB Tables page, click Export/Import.
- On the Export/Import page, select the table you want to export and click Export from DynamoDB.
- On the Create Export Table Data Pipeline(s) page, do the following:
  - In the S3 Output Folder text box, enter an Amazon S3 URI; the exported file will be stored in the corresponding folder in S3. For example:
    s3://mybucket/exports
    The URI must follow the format s3://bucketname/folder.
  - In the S3 Log Folder text box, enter an S3 URI; logs of the export process are stored in the corresponding folder. For example:
    s3://mybucket/logs/
    The S3 Log Folder URI uses the same format as the S3 Output Folder URI.
- In the Throughput Rate text box, select a percentage. This sets the upper limit on read throughput that the export process may consume. For example, if the read throughput of the table you are exporting is 20 and you set the percentage to 40%, the export will consume no more than 8 read capacity units. If you export multiple tables, this throughput rate is applied to each table individually.
- In the Execution Timeout text box, enter the timeout period for the export task. If the export task has not finished within this period, the task fails.
- In the Send Notifications To text box, enter an email address. After the pipeline is created, you will receive an email invitation to subscribe to Amazon SNS. If you accept the invitation, you will receive an email notification each time the export operation runs.
- For Schedule, select one of the following:
  - One-Time Export - the export task runs immediately after the pipeline is created.
  - Daily Export - the export task runs at the time you specify, and repeats at that time every day.
- For Data Pipeline Role, select DataPipelineDefaultRole.
- For Resource Role, select DataPipelineDefaultResourceRole.
- Confirm the above settings and click Create Export Pipeline.
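Before creating the pipeline, the S3 URI format and the throughput-rate math described above can be sanity-checked locally. The following is a minimal Python sketch; the helper names are hypothetical illustrations, not part of any AWS API, and the URI check is looser than Amazon's real bucket-naming rules:

```python
import re

def is_valid_export_uri(uri):
    """Rough check for the s3://bucketname/folder form described above.
    (Hypothetical helper; real S3 bucket-naming rules are stricter.)"""
    return re.fullmatch(r"s3://[a-z0-9.\-]+/.+", uri) is not None

def export_read_cap(table_read_throughput, rate_percent):
    """Upper limit on read capacity units the export may consume."""
    return table_read_throughput * rate_percent / 100

print(is_valid_export_uri("s3://mybucket/exports"))  # True
# A table provisioned at 20 read capacity units with a 40% throughput rate:
print(export_read_cap(20, 40))  # 8.0
```

With the values from the example above, a 40% rate on a table provisioned at 20 read capacity units caps the export at 8 units.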
Your pipeline will now be created; this process can take a few minutes to complete. To view the current status, go to Managing Export and Import Pipelines. If the schedule you selected is One-Time Export, the export task runs immediately after the pipeline is created successfully. If you chose Daily Export, the export task runs at the specified time and repeats at that time every day.
When the export task ends, you can go to the Amazon S3 console to view the export file. The file is placed in a folder named after your table, and the file name has this format:
YYYY-MM-DD_HH.MM
The internal structure of the export file is described in Verify Data Export File.
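The timestamped key layout described above can be illustrated with a short Python sketch. The helper function and the sample table name are assumptions for illustration only:

```python
from datetime import datetime

def export_key(table_name, when):
    """Build an S3 key in the <table-name>/YYYY-MM-DD_HH.MM layout
    described above (hypothetical helper for illustration)."""
    return f"{table_name}/{when.strftime('%Y-%m-%d_%H.%M')}"

print(export_key("Music", datetime(2015, 3, 24, 14, 30)))
# Music/2015-03-24_14.30
```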