Import Data from S3 to DynamoDB

Source: Internet
Author: User
Tags: DynamoDB, Amazon DynamoDB, AWS Management Console

This section applies if you have already exported data from DynamoDB and the exported files are stored in S3. The internal structure of those files is described in Verify Data Export File.

We refer to the original table from which the data was previously exported as the source table, and the table into which the data will be imported as the destination table. You can import an export file from S3 into a DynamoDB table, but first make sure that the following conditions are met:
    • The destination table already exists. (The import task does not create the table for you.)
    • The destination table has the same name as the source table.
    • The destination table has the same key schema as the source table.
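The name and key-schema conditions above can be sketched as a simple comparison of table descriptions. The dicts below imitate the shape of DynamoDB's DescribeTable output, but they are hand-written samples, not live API responses, and `same_key_schema` is our own helper name:

```python
def same_key_schema(source_desc, dest_desc):
    """Return True when two table descriptions share a name and key schema.

    The dicts mimic DynamoDB DescribeTable output; these are hand-written
    samples, not live API responses.
    """
    return (source_desc["TableName"] == dest_desc["TableName"]
            and source_desc["KeySchema"] == dest_desc["KeySchema"])

source = {"TableName": "Customer",
          "KeySchema": [{"AttributeName": "CustomerId", "KeyType": "HASH"}]}
dest = {"TableName": "Customer",
        "KeySchema": [{"AttributeName": "CustomerId", "KeyType": "HASH"}]}

print(same_key_schema(source, dest))  # True
```

If either the table name or the key schema differs, the import preconditions are not met and the check returns False.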

The destination table does not need to be empty. However, the import process replaces any items in the table that have the same primary key as items in the import file.

For example, suppose you have a Customer table whose primary key is CustomerId, and the table contains only three items (CustomerId 1, 2, and 3).

Suppose the file to be imported also includes items with CustomerId 1, 2, and 3. The matching items in the destination table will be replaced by the data in the imported file. If the file also includes an item with CustomerId 4, that item will be added to the destination table.
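The replacement behavior in this example amounts to a keyed overwrite. A minimal Python sketch (the item contents are invented for illustration):

```python
# Simulate how an import merges into a non-empty destination table:
# items are keyed by primary key (CustomerId); matching keys are
# replaced, new keys are added.
destination = {1: {"Name": "Alice"}, 2: {"Name": "Bob"}, 3: {"Name": "Carol"}}
imported = {1: {"Name": "Alice v2"}, 2: {"Name": "Bob v2"},
            3: {"Name": "Carol v2"}, 4: {"Name": "Dave"}}

destination.update(imported)  # same-key items replaced, CustomerId 4 added

print(sorted(destination))     # [1, 2, 3, 4]
print(destination[1]["Name"])  # Alice v2
```

After the merge, the destination holds the imported versions of items 1 through 3 plus the new item 4, mirroring the behavior described above.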

The destination table can be in a different AWS region. For example, if you have a Customer table in the US West (Oregon) region, you can export its data to Amazon S3 and then import it into a table in the EU (Ireland) region that has the same primary key. This practice is known as cross-region export and import.

Note that the AWS Management Console lets you export data from more than one table at a time. However, you can only import one table at a time.
Importing data from S3 to DynamoDB
  1. Log in to the AWS Management Console, and then open the DynamoDB console: https://console.aws.amazon.com/dynamodb/.
  2. (Optional) If you want to do a cross-region import, click Select a Region in the upper-right corner and select the region of the table you want to import into. The console displays all the tables in that region.

    If the destination table does not exist, you need to create it first.

  3. On the Amazon DynamoDB Tables page, click Export/Import.
  4. On the export/import page, select a table you want to import and click Import into DynamoDB.
  5. On the Create Import Table Data Pipeline page, follow these steps:
    1. In the S3 Input Folder text box, enter the Amazon S3 URI of the import file. For example: s3://mybucket/exports. The URI should follow the format s3://bucketname/folder, where:
      • bucketname is the name of the S3 bucket
      • folder is the name of the folder that contains the file to be imported
    2. The import task finds the appropriate file at the specified S3 location. The internal structure of the file is described in Verify Data Export File.
    3. In the S3 Log Folder text box, enter an S3 URI; the logs of the import process will be stored in that folder. For example: s3://mybucket/logs/
      The URI format for S3 Log Folder is the same as for S3 Input Folder.
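The s3://bucketname/folder format used by both text boxes can be illustrated with a small parser. This is a minimal sketch of the format described above (`parse_s3_uri` is our own helper, not an AWS API), and it does not cover every valid S3 URI:

```python
def parse_s3_uri(uri):
    """Split an s3://bucketname/folder URI into (bucket, folder).

    Minimal sketch of the format described above; not a general-purpose
    S3 URI validator.
    """
    if not uri.startswith("s3://"):
        raise ValueError("expected an s3:// URI")
    bucket, _, folder = uri[len("s3://"):].partition("/")
    return bucket, folder

print(parse_s3_uri("s3://mybucket/exports"))  # ('mybucket', 'exports')
print(parse_s3_uri("s3://mybucket/logs/"))    # ('mybucket', 'logs/')
```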

    4. You can select a percentage in the Throughput Rate text box.

      This ratio sets the upper limit on the write throughput consumed during the import. For example, if the provisioned write throughput of the table you are importing into is 20 and you set the percentage to 40%, the throughput consumed by the import will not exceed 8.
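The arithmetic in the throughput example can be checked directly. The helper name below is ours, not an AWS API:

```python
def max_consumed_throughput(provisioned, rate_percent):
    """Upper bound on table throughput the task may consume.

    Illustrates the worked example above: provisioned throughput of 20
    at a 40% throughput rate caps consumption at 8.
    """
    return provisioned * rate_percent / 100

print(max_consumed_throughput(20, 40))  # 8.0
```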

    5. In the Execution Timeout text box, enter the timeout for the import task. If the task has not finished running within this time, it fails.

    6. In the Send Notifications To text box, enter an email address. After the pipeline is created, you will receive an email invitation to subscribe to Amazon SNS. If you accept the invitation, you will receive an email notification each time the import runs.

    7. For Data Pipeline Role, select DataPipelineDefaultRole.
    8. For Resource Role, select DataPipelineDefaultResourceRole.
  6. Confirm the settings above and click Create Import Pipeline.
Your pipeline will now be created; this process may take a few minutes to complete.

To view the current status, see Managing Export and Import Pipelines.

The import task runs immediately after your pipeline is created.
