Data archiving process in SAP BW

This document explains the steps to create a Data Archiving Process (DAP) in SAP BW; more information on what archiving is can be found in my archiving blog.

1) ADK (Archive Development Kit)

How to create a data archiving process based on the ADK method

Step 1:
Go to transaction RSA1, right-click on the InfoProvider you want to archive, and select "Create Data Archiving Process".

Step 2:
In the 'General Settings' tab, check the box 'ADK-Based Archiving'. The system will assign the archiving object name for the InfoProvider.

Step 3:
In the 'Selection Profile' tab, select a time characteristic as the 'Characteristic for Time Slices'. Here we also select the InfoObjects on which we want the archiving to be based and move them from right to left. We repeat the same process in the next tab, 'Semantic Group', as well.

Step 4:
In the last tab, 'ADK', enter the newly created logical file name, maintain the 'Archiving File Size', and select the delete job as 'Not Scheduled'. This prevents the system from deleting the data from the data target automatically.

Note: Logical file names can be created using transaction FILE.
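For illustration only, a logical file name definition in transaction FILE might look roughly like the following; the names and the physical name pattern below are assumptions, and the standard delivered definitions (such as ARCHIVE_DATA_FILE with logical path ARCHIVE_GLOBAL_PATH) can also be reused:

    Logical path:        ARCHIVE_GLOBAL_PATH
    Physical path:       <P=DIR_GLOBAL>/<FILENAME>
    Logical file name:   ZBW_ARCHIVE_DATA_FILE
    Physical file name:  <PARAM_3>_<DATE>_<TIME>.ARCHIVE

At runtime the placeholders are filled by the system, e.g. <PARAM_3> with the archiving object name and <DATE>/<TIME> with the run timestamp, so each write run produces a uniquely named archive file.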

By doing these steps, we have successfully created and maintained an archiving object for the DSO/InfoCube.

Step 5: Now that we have set up the ADK-based DAP and defined the logical path, the next step is to archive the data.

To archive the data, go to transaction SARA, enter the archiving object name, and click on 'Write' (which is the first step in the ADK data archiving process: creation of archive files).

Step 6:
Give a variant name and click on 'Maintain'.

Step 7:
In the pop-up window, select 'For All Selection Screens'.

Step 8:
In the 'Primary Time Restriction' tab, maintain the details according to your requirement.

Step 9:
In the 'Further Restrictions' tab, specify the basis on which the data should be archived, i.e. any additional selections, and keep the processing option as 'Production Mode'.

Step 10:
Come back and save.

Step 11:
Enter the description, i.e. the text for the variant name.

Save and go back.

Step 12:
Click on 'Start Date' and schedule the job.

Step 13:
Click on 'Spool Params'.

Step 14:
Enter the 'Output Device' and press Enter.

Step 15:
Click on 'Execute'.

We can check the status of the job by clicking on the 'Job' button.

This completes the first step in ADK archiving: we have successfully "written" the data from our InfoProvider to the archive file.
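If you want to double-check the write run programmatically, the standard ADK read function modules (ARCHIVE_OPEN_FOR_READ, ARCHIVE_GET_NEXT_OBJECT, ARCHIVE_CLOSE_FILE) can be called from a small ABAP report. The sketch below is only a minimal illustration: the report name and the archiving object name ZBWCUBE1 are placeholders, and in a real system you would use the generated archiving object assigned in Step 2.

REPORT z_check_bw_archive.

* Placeholder: use the generated archiving object name from Step 2.
CONSTANTS: gc_object(10) TYPE c VALUE 'ZBWCUBE1'.

DATA: gv_handle TYPE sy-tabix,   " archive handle returned by the ADK
      gv_count  TYPE i.          " number of archived data objects read

* Open the archive files written for this archiving object.
CALL FUNCTION 'ARCHIVE_OPEN_FOR_READ'
  EXPORTING
    object         = gc_object
  IMPORTING
    archive_handle = gv_handle
  EXCEPTIONS
    OTHERS         = 1.
IF sy-subrc <> 0.
  MESSAGE 'Could not open the archive for reading' TYPE 'E'.
ENDIF.

* Loop over the archived data objects and count them.
DO.
  CALL FUNCTION 'ARCHIVE_GET_NEXT_OBJECT'
    EXPORTING
      archive_handle = gv_handle
    EXCEPTIONS
      end_of_file    = 1
      OTHERS         = 2.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.
  gv_count = gv_count + 1.
ENDDO.

* Release the handle again.
CALL FUNCTION 'ARCHIVE_CLOSE_FILE'
  EXPORTING
    archive_handle = gv_handle.

WRITE: / 'Archived data objects read:', gv_count.

The archived records themselves could also be read with ARCHIVE_GET_TABLE by passing the archived table structure, but for BW InfoProviders the usual way to check archived data remains SARA and the request administration in the 'Manage' screen.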

Step 16:
Now that we have copied the data into our archive file, we can now delete the data from our InfoProvider.

Go to transaction SARA and select the 'Delete' option.

Step 17:
Select "Archive Selection"

Step 18:
Select the archived file and press Enter.

Step 19:
Click on 'Start Date', select 'Immediate', and save.

Step 20:
Click on 'Spool Parameters', enter the output device name, and press Enter. Click on the 'Execute' button and the data gets deleted. You can check the job overview to confirm.

Hence we have completed the first type, i.e. the ADK-based data archiving process.

2) NLS (Near-Line Storage)

How to create a data archiving process based on the NLS method

Step 1: Right-click on the InfoProvider and select "Create Data Archiving Process".

Here, uncheck the 'ADK' type and specify the name of the near-line (NLS) connection.

Step 2:
In the 'Selection Profile' tab, select a time characteristic as the 'Characteristic for Time Slices'. Here we also select the InfoObjects on which we want the archiving to be based and move them from right to left. We repeat the same process in the next tab, 'Semantic Group', as well.

Step 3: In the last tab, 'Near-Line', specify the package size.

Step 4:
Activate the data archiving process.

The system displays an activation log. We have now set up the near-line connection for this particular InfoProvider.

Step 5:
Once the activation is successful, go back to the InfoProvider tree and click on 'Manage'.


Step 6:
A new tab, 'Archiving', appears; click on 'Archiving Requests'.


Step 7:
Fill in the details, i.e. the selections and conditions on which you would like the archiving to be done.


Step 8:
After specifying the selections, click on the selection button in the 'Primary Time Restriction' tab. Now select the 'Immediate' schedule.


Step 9:
You can use this option for simulation.

Once the simulation is successful, we can execute the archive run in dialog or background mode.

 

Select the target status as 70 (completion of the deletion phase) under the process flow control.

Step 10:
Click on 'Yes'.


Step 11:
Once the archiving has completed, the request appears in the 'Archiving' tab.


Hence we have successfully archived the data with respect to our selections.

We can cross-check the cube content after archiving the data, where you can see the difference in the content of the cube before and after archiving.
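As one way to do this cross-check, you can compare the fact table row counts before and after the archiving run. The snippet below is only a sketch: ZSALES01 is a hypothetical InfoCube name, and for a custom cube its F and E fact tables would typically be /BIC/FZSALES01 and /BIC/EZSALES01. Transaction LISTCUBE or the 'Manage' screen of the InfoProvider gives the same information without any coding.

REPORT z_count_fact_rows.

DATA: gv_tabname TYPE string,   " fact table name, resolved dynamically
      gv_f_rows  TYPE i,
      gv_e_rows  TYPE i.

* Placeholder fact tables of a hypothetical InfoCube ZSALES01.
gv_tabname = '/BIC/FZSALES01'.
SELECT COUNT(*) FROM (gv_tabname) INTO gv_f_rows.

gv_tabname = '/BIC/EZSALES01'.
SELECT COUNT(*) FROM (gv_tabname) INTO gv_e_rows.

WRITE: / 'Rows in F fact table:', gv_f_rows,
       / 'Rows in E fact table:', gv_e_rows.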

