This section assumes that you have exported data from DynamoDB, and that the exported files are stored in S3. The internal structure of these files is described in Verify Data Export File. We refer to the original table from which the data was exported as the source table, and the table into which the data will be imported as the destination table. You can import an export file from S3 into a DynamoDB table, but first make sure that the following conditions are true:
- The destination table already exists. (The import task does not create the table for you.)
- The destination table has the same name as the source table.
- The destination table has the same structure as the source table.
The destination table does not have to be empty. However, the import process replaces any items in the table that have the same primary key. For example, suppose you have a Customer table whose primary key is CustomerId, containing only three items (CustomerId 1, 2, and 3). If the file to be imported also contains items with CustomerId 1, 2, and 3, the items in the destination table are replaced with the data from the imported file. If the file also contains an item with CustomerId 4, that item is added to the destination table. The destination table can be in a different AWS region. For example, suppose you have a Customer table in the US West (Oregon) region and export its data to Amazon S3. You can then import that data into a table in the EU (Ireland) region that has the same primary key. This practice is known as cross-region export and import. Note that the AWS Management Console allows you to export data from more than one table at a time; however, you can import only one table at a time.
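The replacement behavior described above can be sketched with a small simulation (the table contents here are hypothetical, and the real import is performed by the pipeline, not by code like this):

```python
# Simulate how an import overwrites items that share a primary key.
# The destination table is modeled as a dict keyed by CustomerId.
destination = {
    1: {"CustomerId": 1, "Name": "Alice"},
    2: {"CustomerId": 2, "Name": "Bob"},
    3: {"CustomerId": 3, "Name": "Carol"},
}

# Items read from the export file in S3 (hypothetical contents).
imported = [
    {"CustomerId": 1, "Name": "Alice Updated"},
    {"CustomerId": 4, "Name": "Dave"},
]

for item in imported:
    # An item with an existing primary key replaces the stored item;
    # an item with a new primary key is added to the table.
    destination[item["CustomerId"]] = item

print(sorted(destination))     # [1, 2, 3, 4]
print(destination[1]["Name"])  # Alice Updated
```

After the import, CustomerId 1 holds the imported data and CustomerId 4 is new, while items 2 and 3 are untouched.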
Importing Data from S3 to DynamoDB
- Log in to the AWS Management Console, and then open the DynamoDB console at https://console.aws.amazon.com/dynamodb/.
- (Optional) If you want to perform a cross-region import, click Select a Region in the upper-right corner and choose the region containing the destination table. The console displays all the tables in that region. If the destination table does not exist, you need to create it first.
- On the Amazon DynamoDB Tables page, click Export/Import.
- On the Export/Import page, select the destination table and click Import into DynamoDB.
- On the Create Import Table Data Pipeline page, proceed as follows:
- In the S3 Input Folder text box, enter the Amazon S3 URI for the import file. For example: s3://mybucket/exports
  The URI should follow the format s3://bucketname/folder, where:
  - bucketname is the name of the S3 bucket.
  - folder is the name of the folder that contains the file to be imported.
  The import task will find the file at the specified S3 location. The internal structure of the file is described in Verify Data Export File.
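A quick way to check that a URI follows the s3://bucketname/folder pattern described above (a minimal sketch; the console performs its own validation, and this helper is not part of any AWS API):

```python
def parse_s3_uri(uri):
    """Split an s3://bucketname/folder URI into (bucket, folder)."""
    prefix = "s3://"
    if not uri.startswith(prefix):
        raise ValueError("URI must start with s3://")
    # Everything up to the first "/" is the bucket; the rest is the folder.
    bucket, _, folder = uri[len(prefix):].partition("/")
    if not bucket:
        raise ValueError("URI must include a bucket name")
    return bucket, folder

print(parse_s3_uri("s3://mybucket/exports"))  # ('mybucket', 'exports')
```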
- In the S3 Log Folder text box, enter an S3 URI; the log of the import process will be stored in the corresponding folder. For example: s3://mybucket/logs/
  The URI format for S3 Log Folder is the same as for S3 Input Folder.
- In the Throughput Rate text box, you can select a percentage. This ratio sets the upper limit on the write throughput that can be consumed during the import process. For example, suppose you want to import into a table with a write throughput of 20, and you set the percentage to 40%. The throughput consumed by the import will then not exceed 8.
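The throughput cap in the example above is simple arithmetic, which can be written out as follows (a sketch for clarity, not part of the console):

```python
def max_consumed_throughput(provisioned, rate_percent):
    """Upper limit of capacity units the task may consume,
    given the table's provisioned throughput and the chosen
    throughput-rate percentage."""
    return provisioned * rate_percent / 100

# A table provisioned at 20 capacity units, with the rate set to 40%:
print(max_consumed_throughput(20, 40))  # 8.0
```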
- In the Execution Timeout text box, enter the timeout for the import task. If the task does not complete within this length of time, it fails.
- In the Send Notifications To text box, enter an email address. After the pipeline is created, you will receive an email invitation to subscribe to Amazon SNS; if you accept this invitation, you will receive an email notification each time an import operation is performed.
- For Data Pipeline Role, select DataPipelineDefaultRole.
- For Resource Role, select DataPipelineDefaultResourceRole.
- Confirm the above settings and click Create Import Pipeline.
Your pipeline will now be created; this process may take several minutes to complete. To view the current status, go to Managing Export and Import Pipelines. The import task executes immediately after your pipeline is created.