Recently, Google released a paper titled "Large-scale Incremental Processing Using Distributed Transactions and Notifications" on the core mechanism of its next-generation real-time search system. The paper introduces a Bigtable-based system named "Percolator", which functions much like the triggers of a traditional database but has its own unique design for scalability. A summary and related articles follow.
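To make the trigger analogy concrete: the paper calls these trigger-like pieces of user code "observers", which the system invokes whenever a watched column changes. Below is a minimal C++ sketch of that idea; the Observer/OnChange names and the callback shape are illustrative assumptions, not Percolator's actual interface.

#include <iostream>
#include <string>

// A sketch of a trigger-like "observer", assuming a hypothetical
// interface; names here are illustrative, not Percolator's real API.
// In Percolator, user code is registered against columns, and the
// system runs it when an observed column is written.
class Observer {
 public:
  virtual ~Observer() = default;
  // Called by the runtime after a write to an observed column.
  virtual void OnChange(const std::string& row,
                        const std::string& column) = 0;
};

// Example observer: re-process only the document whose row changed,
// rather than re-running a batch job over the whole repository.
class ReindexObserver : public Observer {
 public:
  void OnChange(const std::string& row,
                const std::string& column) override {
    std::cout << "reindexing " << row << " because " << column
              << " changed\n";
  }
};

int main() {
  ReindexObserver obs;
  // Simulate the runtime delivering a notification for one changed row.
  obs.OnChange("http://example.com/page", "contents");
  return 0;
}

Unlike a database trigger, which fires synchronously inside the writing transaction, an observer of this kind can run asynchronously and fan out across thousands of machines, which is where the scalability difference comes from.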
Summary
Updating an index of the web as documents are crawled requires continuously transforming a large repository of existing documents as new documents arrive. This task is one example of a class of data processing tasks that transform a large repository of data via small, independent mutations. These tasks lie in a gap between the capabilities of existing infrastructure. Databases do not meet the storage or throughput requirements of these tasks: Google's indexing system stores tens of petabytes of data and processes billions of updates per day on thousands of machines. MapReduce and other batch-processing systems cannot process small updates individually as they rely on creating large batches for efficiency.
We have built Percolator, a system for incrementally processing updates to a large data set, and deployed it to create the Google web search index. By replacing a batch-based indexing system with an indexing system based on incremental processing using Percolator, we process the same number of documents per day, while reducing the average age of documents in Google search results by 50%.
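The "small, independent mutations" in the abstract are, in the paper, cross-row transactions layered on Bigtable. The following sketch shows what one such mutation might look like; the Transaction, Get, Set, and Commit names are assumptions for illustration (backed here by an in-memory map so the sketch compiles and runs), not the paper's verified API.

#include <map>
#include <string>

// Hypothetical stand-ins for a Percolator-style transactional API over
// Bigtable; names and signatures are illustrative assumptions.
class Transaction {
 public:
  bool Get(const std::string& row, const std::string& column,
           std::string* value) {
    auto it = store_.find(row + ":" + column);
    if (it == store_.end()) return false;
    *value = it->second;
    return true;
  }
  void Set(const std::string& row, const std::string& column,
           const std::string& value) {
    buffered_[row + ":" + column] = value;  // held until Commit()
  }
  bool Commit() {
    // Real Percolator runs a two-phase commit across Bigtable rows;
    // here we just apply the buffered writes atomically in memory.
    for (auto& kv : buffered_) store_[kv.first] = kv.second;
    buffered_.clear();
    return true;
  }

 private:
  std::map<std::string, std::string> buffered_;
  static std::map<std::string, std::string> store_;
};
std::map<std::string, std::string> Transaction::store_;

// One small, independent mutation: index a single newly crawled
// document without reprocessing the rest of the repository.
bool IndexDocument(const std::string& url, const std::string& contents) {
  Transaction t;
  std::string old;
  // Skip the write if the crawled contents are unchanged.
  if (t.Get(url, "contents", &old) && old == contents) return true;
  t.Set(url, "contents", contents);
  return t.Commit();  // commits atomically, or fails on conflict
}

int main() {
  IndexDocument("http://example.com/page", "<html>hello</html>");
  return 0;
}

Because each document is committed independently, newly crawled pages become searchable without waiting for a full batch rebuild, which is what drives the 50% reduction in document age cited above.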
Related Articles
Google's Colossus Makes Search Real-time by Dumping MapReduce