Analysis and Use of the ArcGIS Compact Tile Cache File Format


ArcGIS 10 introduced a new tile cache storage format: compact. Compared with the earlier exploded storage, it offers advantages such as easier migration, faster cache creation, and reduced storage space, and it has become the default format when creating a tile cache. For ArcGIS products there is no difference between accessing compact storage and accessing exploded storage. However, if a third-party application wants to read the new tile format, the official answer is currently "no":

The internal architecture of the bundle is not publicly documented by ESRI. If you've coded your own logic to pull tiles out of a virtual directory, you should continue to use the "exploded" format, which stores each tile as a single file and was the only option at ArcGIS Server versions 9.3.1 and previous.

I googled and found nothing relevant, so I had to rely on myself and analyze the compact storage format on my own. I believe this is the only available information on the internal format of compact storage.

Principles of Compact Storage

The two most important file types in compact storage are bundle and bundlx files. Bundle files store the tile data, and bundlx files are the indexes to the tile data in the bundle files.

A bundle file can store a maximum of 128 × 128 = 16,384 tiles. However, tiles are not rendered one at a time when a cache is created: the arcsoc process renders a large image with a side length of 4096 pixels (without anti-aliasing) or 2048 pixels (with anti-aliasing). If we choose a tile side length of 256 pixels and turn on anti-aliasing, each render produces a large image covering 8 × 8 = 64 tiles, which is then cut up and saved into the bundle file.
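
To make the arithmetic concrete, here is a minimal sketch of the tile counts involved, assuming the 256-pixel tile size and anti-aliased rendering described above:

int tileSize = 256;                                           // chosen tile side length in pixels
int renderSize = 2048;                                        // rendered image side length with anti-aliasing
int tilesPerRenderSide = renderSize / tileSize;               // 8
int tilesPerRender = tilesPerRenderSide * tilesPerRenderSide; // 8 x 8 = 64 tiles per rendered image
int tilesPerBundle = 128 * 128;                               // at most 16384 tiles per bundle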

(Figure omitted: the blue border represents a bundle file, and the black grid shows the large images rendered during tile generation; the individual tiles inside each grid cell are not drawn.)

Storage Format Analysis

Before analyzing the compact storage format, I first asked myself: if I had to store the content in a bundle file and the index in a bundlx file at the same time, how would I do it? The conventional approach, similar to how a database index works, is to use a fixed number of bytes in the bundlx file to identify the state (storage offset and length) of each tile in the bundle file.

Observing the bundlx files generated by ArcGIS, every one of them is the same size: 81,952 bytes. As mentioned above, each bundle file can store up to 16,384 tiles. Although a bundle file may not actually contain that many tiles, I guessed that the bundlx file must reserve index positions for all 16,384 tiles. If each tile's index occupies 5 bytes, that gives 16,384 × 5 = 81,920 bytes, with 32 bytes left over, which I guessed store identifying information for the bundlx file.

By observing a very sparse bundlx file and looking for patterns, I determined that the first 16 bytes and the last 16 bytes of the bundlx file are unrelated to the index; the remaining 81,920 bytes repeat at a period of 5 bytes, forming the index into the bundle file.
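
A minimal sketch of the layout deduced so far: a 16-byte header, 16,384 five-byte records, and a 16-byte footer.

long headerSize = 16;
long recordSize = 5;
long recordCount = 16384;                                              // 128 x 128 tiles per bundle
long footerSize = 16;
long bundlxSize = headerSize + recordCount * recordSize + footerSize;  // = 81952, the observed file size

int i = 0;                                          // 0-based record index of a tile within the bundle
long recordPosition = headerSize + recordSize * i;  // byte offset of that tile's 5-byte record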

I first assumed that these five bytes stored both the offset and the length of the tile data in the bundle file, but five bytes did not seem to be enough information for both. So I went on to analyze the tile data inside the bundle file.

I guessed that the tile data is not compressed inside the bundle, so I searched the file for the PNG header 0x89504E47 (I chose the PNG24 format when creating the cache) and compared the tile data directly against the corresponding exploded-format images. It turned out that every two tile records are separated by four bytes, and through conjecture and experiment I found that these four bytes record the length of the tile data that follows, ordered from the low byte to the high byte.

Since the tile data length is recorded in the bundle file, the index in the bundlx file only needs to contain the tile data offset; the five bytes in the bundlx file likewise represent the offset from the low byte to the high byte.

I conjectured that both the tile data length and the data offset are unsigned integers, which subsequent practice confirmed.
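
A minimal sketch of both little-endian decodings, assuming byte arrays already read from the files (the helper names are mine):

// Decode the 4-byte little-endian tile length stored in the bundle file.
static int decodeTileLength(byte[] b) {
    return (b[0] & 0xff)
            | (b[1] & 0xff) << 8
            | (b[2] & 0xff) << 16
            | (b[3] & 0xff) << 24;
}

// Decode the 5-byte little-endian tile offset stored in the bundlx file.
static long decodeTileOffset(byte[] b) {
    long offset = 0;
    for (int i = 4; i >= 0; i--) {
        offset = (offset << 8) | (b[i] & 0xff);
    }
    return offset;
}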

Another question: which tile's data offset does each five-byte record in the bundlx file identify? My experiments show that the records are sorted by column:

1       129     ...     16257
2       130     ...     16258
3       131     ...     16259
...     ...     ...     ...
128     256     ...     16384

(Each column above corresponds to one column of tiles in the bundle: records 1-128 index the first column of tiles, records 129-256 the second, and so on up to 16384.)
From the above analysis: given the level, row number, and column number of a tile, we can use the bundlx file to find the offset of the tile data in the bundle file, read the 4-byte length at that offset in the bundle file, and then read the actual tile data according to that length. Working out the bundle file name from the level, row number, and column number is relatively simple, as sketched below.
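
As a sketch of that naming scheme (consistent with the verification code in the next section): the level becomes an "L" directory with two decimal digits, and the row and column groups (the multiples of 128 at which the bundle starts) become four hex digits after "R" and "C":

static String bundleBaseName(int level, int row, int col) {
    int rowGroup = (row / 128) * 128;   // first row covered by the bundle
    int colGroup = (col / 128) * 128;   // first column covered by the bundle
    // e.g. level 7, row 200, col 300 -> "L07/R0080C0100"
    return String.format("L%02d/R%04xC%04x", level, rowGroup, colGroup);
}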

Using Third-Party Code to Access Tiles

Based on this analysis of the compact tile storage format, I wrote a Java service for verification. The service accepts the same three parameters as the ArcGIS Server REST interface: level, row, and col. The following code extracts tile data from the compact tile storage:

// Build the "Lxx" level directory name (two decimal digits).
String l = "0" + level;
int lLength = l.length();
if (lLength > 2) {
    l = l.substring(lLength - 2);
}
l = "L" + l;

// Row group: the first row covered by the bundle, as four hex digits.
int rGroup = 128 * (row / 128);
String r = "000" + Integer.toHexString(rGroup);
int rLength = r.length();
if (rLength > 4) {
    r = r.substring(rLength - 4);
}
r = "R" + r;

// Column group: the first column covered by the bundle, as four hex digits.
int cGroup = 128 * (col / 128);
String c = "000" + Integer.toHexString(cGroup);
int cLength = c.length();
if (cLength > 4) {
    c = c.substring(cLength - 4);
}
c = "C" + c;

String bundleBase = String.format("%s/%s/%s%s", bundlesDir, l, r, c);
String bundlxFileName = bundleBase + ".bundlx";
String bundleFileName = bundleBase + ".bundle";

// Records are stored column by column, 128 per column.
int index = 128 * (col - cGroup) + (row - rGroup);

// Read the tile's 5-byte little-endian offset from the bundlx file,
// skipping the 16-byte header.
FileInputStream isBundlx = new FileInputStream(bundlxFileName);
isBundlx.skip(16 + 5 * index);
byte[] buffer = new byte[5];
isBundlx.read(buffer);
isBundlx.close();
long offset = (long) (buffer[0] & 0xff)
        + (long) (buffer[1] & 0xff) * 256
        + (long) (buffer[2] & 0xff) * 65536
        + (long) (buffer[3] & 0xff) * 16777216
        + (long) (buffer[4] & 0xff) * 4294967296L;

// Read the 4-byte little-endian tile length at that offset in the
// bundle file, then the tile data itself.
FileInputStream isBundle = new FileInputStream(bundleFileName);
isBundle.skip(offset);
byte[] lengthBytes = new byte[4];
isBundle.read(lengthBytes);
int length = (lengthBytes[0] & 0xff)
        + (lengthBytes[1] & 0xff) * 256
        + (lengthBytes[2] & 0xff) * 65536
        + (lengthBytes[3] & 0xff) * 16777216;
byte[] result = new byte[length];
isBundle.read(result);
isBundle.close();

Then I wrote a custom tile layer with the ArcGIS API for Flex and overrode the getTileURL method:

override protected function getTileURL(level:Number, row:Number, col:Number):URLRequest
{
    var url:String = "http://localhost:8777/restserver/tile/" + level + "/" + row + "/" + col;
    return new URLRequest(url);
}

The result (screenshot omitted) verifies the above analysis.

Reference: http://help.arcgis.com/en/arcgisserver/10.0/help/arcgis_server_dotnet_help/index.html#//00930000165z000000.htm
