Some tips about crawling large external data with BCS Connectors

To enable the SharePoint search component to retrieve data from external content sources such as external databases, business systems, and binary files, a custom Indexing Connector is usually required. An Indexing Connector is a component built on Business Connectivity Services (BCS) and the Search Connector Framework in SharePoint 2010. It replaces the Protocol Handler used in earlier versions and has become the main supported way of crawling external data (and extending that capability) in SharePoint 2010 and FAST Search Server 2010 for SharePoint. (SharePoint 2010 still supports custom Protocol Handlers.)

After creating a Connector via BCS, one possible problem is using it to crawl a large amount of data, for example millions or even tens of millions of items. If the Connector has to face such a challenge, it needs to be designed carefully.

First, the Connector must support incremental crawls. You certainly do not want an incremental crawl to take as long as a full crawl.

A Connector can support incremental crawls in two ways: based on the last modification time (timestamp-based) or based on a change log (changelog-based). With the timestamp-based approach, you designate a date/time field that the crawler treats as the item's last-modified time; during an incremental crawl, the crawler compares this value with the one recorded in the previous crawl to decide whether the item needs to be re-processed. With the changelog-based approach, a dedicated method returns the added, modified, and deleted items directly to the search engine, so the search engine knows exactly which items have changed since the previous crawl.
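
For illustration, here is a minimal C# sketch of what the two options can look like in a .NET Assembly Connector, assuming a SQL Server back end and hypothetical table and column names. For the timestamp approach, the entity simply exposes a last-modified field (which the BDC model typically points at from the SpecificFinder method instance via a LastModifiedTimeStampField property); for the changelog approach, a ChangedIdEnumerator-style method returns only the IDs touched since the last crawl.

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

// Minimal sketch only; class, table, and column names are assumptions.
public static class IncrementalCrawlSketch
{
    const string ConnStr = "Data Source=.;Initial Catalog=LobDb;Integrated Security=True";

    // Timestamp-based: the crawler compares LastModified with the value it
    // recorded during the previous crawl to decide whether to re-process.
    public class ExternalItem
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public DateTime LastModified { get; set; }
    }

    // Changelog-based: return only the IDs modified after the given time
    // (exposed as a ChangedIdEnumerator method instance in the BDC model).
    public static IEnumerable<int> GetChangedIds(DateTime lastCrawlTime)
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT Id FROM db_table WHERE Modified > @since", conn))
        {
            cmd.Parameters.AddWithValue("@since", lastCrawlTime);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    yield return reader.GetInt32(0);
            }
        }
    }
}
```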

If the external content source contains a large amount of data, even the first full crawl may cause the crawler to stop working properly, or put excessive pressure on the external content source in a short period of time.

First, consider whether a Finder method should return all required data from the external content source in a single call (similar to a select * from db_table operation). With small data volumes this is very convenient, but if the data volume is too large, it may be inappropriate.

A more prudent approach is to use only the IdEnumerator and SpecificFinder methods to obtain data. The IdEnumerator method (similar to select id from db_table) returns only the IDs of the data items; these IDs are then used to call the SpecificFinder method repeatedly (similar to select * from db_table where ID = @id) to fetch the items one by one. In this case, you need to tell the Connector that the IdEnumerator method is the RootFinder of the entity. In most cases you do not even need to define a Finder method, because the crawler should not retrieve too much data from the external content source at once.
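
A minimal sketch of such an IdEnumerator / SpecificFinder pair, again with hypothetical names and a SQL Server back end:

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

// Minimal sketch only; names are assumptions. In the BDC model the
// IdEnumerator method instance is marked as the entity's RootFinder,
// and the crawler calls the SpecificFinder once per returned ID.
public static class IdEnumeratorSketch
{
    const string ConnStr = "Data Source=.;Initial Catalog=LobDb;Integrated Security=True";

    public class ExternalItem
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public DateTime LastModified { get; set; }
    }

    // IdEnumerator: roughly "SELECT Id FROM db_table".
    public static IEnumerable<int> GetAllIds()
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand("SELECT Id FROM db_table", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    yield return reader.GetInt32(0);
            }
        }
    }

    // SpecificFinder: roughly "SELECT * FROM db_table WHERE Id = @id".
    public static ExternalItem GetItem(int id)
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT Id, Title, Modified FROM db_table WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read()) return null;
                return new ExternalItem
                {
                    Id = reader.GetInt32(0),
                    Title = reader.GetString(1),
                    LastModified = reader.GetDateTime(2)
                };
            }
        }
    }
}
```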

[Image: http://www.bkjia.com/uploads/allimg/131228/1R6432421-0.png]

If the data volume is large enough, even the IdEnumerator method may run into trouble; imagine returning the IDs of tens of millions of items from the external data source in a single call. In that case, we need to go a step further and let the IdEnumerator method return only a limited number of IDs (for example, 1000) per call.

To do this, define a filter of type LastId for the IdEnumerator method, together with a corresponding input parameter (direction In). The crawler will then call the IdEnumerator method repeatedly, each time passing in the last ID obtained from the previous call. In the implementation of the IdEnumerator method, you use this parameter to retrieve from the external content source only the items whose IDs come after it.
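
A minimal sketch of such a batched IdEnumerator, assuming the filter's input parameter arrives as a nullable ID (null on the first call) and hypothetical table and column names:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

// Minimal sketch only; names and the batch size are assumptions.
public static class BatchedIdEnumeratorSketch
{
    const string ConnStr = "Data Source=.;Initial Catalog=LobDb;Integrated Security=True";
    const int BatchSize = 1000;

    // lastId is bound to the LastId filter's input parameter; null on the
    // first call is treated as 0 here, assuming positive integer IDs.
    public static IEnumerable<int> GetIdsBatch(int? lastId)
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT TOP (@batch) Id FROM db_table WHERE Id > @lastId ORDER BY Id", conn))
        {
            cmd.Parameters.AddWithValue("@batch", BatchSize);
            cmd.Parameters.AddWithValue("@lastId", lastId ?? 0);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    yield return reader.GetInt32(0);   // an empty batch tells
                                                       // the crawler to stop
            }
        }
    }
}
```

The ORDER BY on the ID column matters here: without it, "the next 1000 items after the last ID" is not well defined and items could be skipped or returned twice.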

[Image: http://www.bkjia.com/uploads/allimg/131228/1R6436094-1.png]

[Image: http://www.bkjia.com/uploads/allimg/131228/1R64333R-2.png]

The crawler will keep calling the IdEnumerator method in this way until it returns zero results. (How many items each call returns is determined by the implementation code of the IdEnumerator method.)

[Image: http://www.bkjia.com/uploads/allimg/131228/1R643B09-3.png]

The search engine crawler may call the SpecificFinder method more than once for the same data item (the exact reason is unclear), so inside the Connector you can consider using some caching technique to reduce the number of requests sent to the external content source. If the content source is large, keeping all of the data in memory is not a good idea. One option is HttpRuntime.Cache: although it looks like something intended only for web applications, you simply need to reference the System.Web assembly to use it in your Connector. (Note: the MSDN documentation states that it may not work reliably outside an ASP.NET application; in practice it seems to work, but I make no guarantee.) This cache implementation has built-in features such as automatic eviction, priorities, and expiration times, which is much more convenient than writing one yourself. In addition, the Enterprise Library (EntLib) also contains a caching implementation that can be used in any type of program.
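
A minimal sketch of wrapping the SpecificFinder with HttpRuntime.Cache, with hypothetical names; note the caveat above that this cache is not officially supported outside ASP.NET:

```csharp
using System;
using System.Web;               // reference System.Web.dll to get HttpRuntime.Cache
using System.Web.Caching;

// Minimal sketch only; names and the expiration window are assumptions.
public static class CachedSpecificFinderSketch
{
    public class ExternalItem
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public DateTime LastModified { get; set; }
    }

    public static ExternalItem GetItem(int id)
    {
        string key = "ExternalItem_" + id;

        // Serve repeated crawler requests for the same item from the cache.
        var cached = HttpRuntime.Cache[key] as ExternalItem;
        if (cached != null)
            return cached;

        ExternalItem item = LoadItemFromSource(id);   // the real back-end call

        // Keep the item for a short sliding window; the cache evicts entries
        // on its own when the window expires or memory runs low.
        HttpRuntime.Cache.Insert(
            key, item, null,
            Cache.NoAbsoluteExpiration,
            TimeSpan.FromMinutes(10),
            CacheItemPriority.Normal, null);

        return item;
    }

    static ExternalItem LoadItemFromSource(int id)
    {
        // Placeholder for the actual query against the external content source.
        return new ExternalItem { Id = id, Title = "Item " + id, LastModified = DateTime.UtcNow };
    }
}
```

A short sliding expiration keeps the cache bounded while still absorbing the crawler's repeated requests for the same item within a single crawl.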

In addition to the above, Eric Wang also contributed an idea: in his view, retrieving a large amount of data in one call is not a problem in itself. If you really face massive data, you can create multiple content sources in search administration, each of which crawls a part of the external data. For example, with 2 million external items, you could create four content sources, each of which crawls 500,000 items selected by some rule.
