For concurrency, Lucene.Net follows these rules (a short sketch of the rules follows the list):
1. Any number of concurrent read operations are allowed; that is, multiple users can search the same index at the same time.
2. Read operations can still run concurrently even while the index is being modified (optimized, documents added or deleted).
3. Concurrent modification operations are not allowed; only one index modification operation may run at a time.
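A minimal sketch of these rules, assuming the older Lucene.Net API used throughout this article (RAMDirectory, Field.Text, Hits); the field name "a" and the loop counts are only for illustration. One thread performs all modifications while several threads keep searching the same Directory:

using System;
using System.Threading;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Search;
using Lucene.Net.Store;

Directory dir = new RAMDirectory();

// Creating the writer with create = true writes an empty index so searchers can open it.
IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), true);

// Rule 3: a single writer thread performs all modifications.
Thread writerThread = new Thread(delegate()
{
    for (int i = 0; i < 100; i++)
    {
        Document doc = new Document();
        doc.Add(Field.Text("a", "hello world"));
        writer.AddDocument(doc);
    }
    writer.Close();
});
writerThread.Start();

// Rules 1 and 2: any number of searches may run concurrently, even while the writer is working.
for (int t = 0; t < 5; t++)
{
    new Thread(delegate()
    {
        IndexSearcher searcher = new IndexSearcher(dir);
        Hits hits = searcher.Search(new TermQuery(new Term("a", "hello")));
        Console.WriteLine("thread {0}: {1} hits",
            Thread.CurrentThread.ManagedThreadId, hits.Length());
        searcher.Close();
    }).Start();
}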
Lucene.Net already implements thread safety internally. Open IndexWriter.cs or IndexReader.cs and you will find that many operations use lock for thread synchronization. As long as you follow a few rules, you can safely run Lucene.Net in a multi-threaded environment.
Suggestions:
1. Directory and Analyzer are both thread-safe types; you only need to create a single (singleton) instance of each.
2. Have all threads use the same IndexModifier object for index modification.
3. It is recommended that IndexWriter/IndexReader/IndexModifier/IndexSearcher share the same Directory object; otherwise a FileNotFoundException may be thrown during concurrent reads and writes.
The IndexModifier class encapsulates the common operations of IndexWriter and IndexReader and performs thread synchronization internally. Using IndexModifier avoids the trouble of synchronizing the two objects yourself when both IndexWriter and IndexReader are needed. After all modifications are complete, remember to call Close() to release the underlying resources. Optimize() does not have to be called after every operation; you can run it periodically as the situation requires.
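A minimal usage sketch of IndexModifier under the same assumptions (older Lucene.Net API; the RAMDirectory and the field names "id" and "content" are only for illustration):

using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Store;

Directory dir = new RAMDirectory();
// create = true builds a new index; pass false to open an existing one.
IndexModifier modifier = new IndexModifier(dir, new StandardAnalyzer(), true);

// Add a document.
Document doc = new Document();
doc.Add(Field.Keyword("id", "1"));
doc.Add(Field.Text("content", "hello, world!"));
modifier.AddDocument(doc);

// Delete documents matching a term; IndexModifier switches between
// IndexWriter and IndexReader internally and synchronizes the switch.
modifier.DeleteDocuments(new Term("id", "1"));

// Optimize occasionally rather than after every change.
modifier.Optimize();

// Release resources when all modifications are done.
modifier.Close();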
--------
The following example encapsulates IndexModifier in a singleton type (MyIndexModifier) so that all threads share the same object; the index is only really closed when the last thread calls Close().
The code is incomplete and for reference only; you will need to adapt it before using it in a real project.

using System;
using System.Collections.Generic;
using System.Threading;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Index;
using Lucene.Net.Store;

public class MyIndexModifier
{
    private static Directory directory = new RAMDirectory();
    private static Analyzer analyzer = new StandardAnalyzer();
    private static IndexModifier modifier;
    private static List<Thread> threadList = new List<Thread>();

    private MyIndexModifier() { }

    public static IndexModifier GetInstance()
    {
        lock (threadList)
        {
            if (modifier == null)
            {
                // create = false assumes the index already exists in the directory.
                modifier = new IndexModifier(directory, analyzer, false);
                modifier.SetMaxFieldLength(1000);
            }
            // Track every thread that has requested the shared instance.
            if (!threadList.Contains(Thread.CurrentThread))
                threadList.Add(Thread.CurrentThread);
            return modifier;
        }
    }

    public static void Close()
    {
        lock (threadList)
        {
            if (threadList.Contains(Thread.CurrentThread))
                threadList.Remove(Thread.CurrentThread);
            // Only the last thread actually closes the index.
            if (threadList.Count == 0)
            {
                if (modifier != null)
                {
                    modifier.Close();
                    modifier = null;
                }
            }
        }
    }
}
Multi-threaded test code:

for (int i = 0; i < 100; i++)
{
    new Thread(delegate()
    {
        IndexModifier writer = MyIndexModifier.GetInstance();
        for (int x = 0; x < 10; x++)
        {
            Document doc = new Document();
            doc.Add(Field.Text("A", "Hello, world!"));
            writer.AddDocument(doc);
        }
        Console.WriteLine("{0}: {1}", Thread.CurrentThread.ManagedThreadId, writer.DocCount());
        MyIndexModifier.Close(); // do not call IndexModifier.Close() directly!
    }).Start();
}
With the following formulas, the paging values are easy to calculate.
Total number of documents in the index:
documentCount = searcher.Reader.NumDocs();
Total number of search results:
count = hits.Length();
Number of records per page:
pageSize;
Total number of result pages (rounding up when there is a remainder):
pageCount = (count / pageSize) + (count % pageSize > 0 ? 1 : 0);
Page number to display (when it exceeds the total number of pages, show the last page):
pageIndex = Math.Min(pageIndex, pageCount);
Sequence number of the first record on the page (must be greater than or equal to zero):
startPos = Math.Max((pageIndex - 1) * pageSize, 0);
Sequence number of the last record on the page (must not exceed the total number of records):
endPos = Math.Min(pageIndex * pageSize - 1, count - 1);
Example: return page 10 with 20 records per page.

Hits hits = searcher.Search(query);
int pageIndex = 10;
int pageSize = 20;
int count = hits.Length();
if (count > 0)
{
    int pageCount = count / pageSize + (count % pageSize > 0 ? 1 : 0);
    pageIndex = Math.Min(pageIndex, pageCount);
    int startPos = Math.Max((pageIndex - 1) * pageSize, 0);
    int endPos = Math.Min(pageIndex * pageSize - 1, count - 1);
    for (int i = startPos; i <= endPos; i++)
    {
        int docId = hits.Id(i);
        string name = hits.Doc(i).Get(fieldName);
        float score = hits.Score(i);
        Console.WriteLine("docId: {0}; name: {1}; score: {2}", docId, name, score);
    }
}
Author: yuyun