The website has a random retrieval feature implemented with SQL Server's NEWID() function.
I don't know how SQL Server implements this internally; in short, it is quite slow. Locally, even simple queries take 140 ms or more (with roughly 2k rows matching the search out of 12k+ rows total), and on the server it is at least 200 ms even though the record count there is small. I really don't know how this approach would cope with random selection over tens of thousands of rows, so any advice is welcome.
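For context, the slow pattern described above is presumably the classic ORDER BY NEWID() sampling query, which forces SQL Server to generate a GUID for every row and sort the whole table by it. A minimal sketch of that pattern in C# (the connection string, table, and column names are placeholders, not taken from the original post):

```csharp
using System;
using System.Data.SqlClient;

class NewIdSampleDemo
{
    static void Main()
    {
        // Placeholder connection string -- adjust for your environment.
        const string connStr = "Server=.;Database=Demo;Trusted_Connection=True;";

        using (var conn = new SqlConnection(connStr))
        // TOP 25 ... ORDER BY NEWID() assigns a random GUID to every row
        // and sorts on it, so the cost grows with the total row count.
        using (var cmd = new SqlCommand(
            "SELECT TOP 25 id, xx1, xx2 FROM [table] ORDER BY NEWID();", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetInt32(0));
            }
        }
    }
}
```

The per-row GUID generation plus the full sort is what makes this approach degrade as the table grows.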
Since the site already uses Lucene and the index is largely held in memory (not the entire index; I'm simplifying), picking randomly there should be faster than NEWID(): each hit is just one doc with two fields, an int ID and a float score.
Lucene.Net version: 2.9.2
In my implementation, I first use MatchAllDocsQuery to obtain all results, pick random IDs with LINQ, and then pass those IDs to a SQL statement. The code is as follows:
var query = new MyQuery();
query.WithAll(); // match all documents
var results = IndexConfig.RealTimeSearch.SearchIndex(query.Query);
// Shuffle the hits by ordering on fresh random GUIDs, then keep 25.
results.Documents = results.Documents.OrderBy(d => Guid.NewGuid()).Take(25);
SQL query:
SELECT id, xx1, xx2 FROM table WHERE id IN (id1, id2, id3); -- idN values come from the LINQ step above.
A local test takes about 50 ms in total, which is essentially just the SQL query time!
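If shuffling every hit with OrderBy(Guid.NewGuid()) ever becomes a bottleneck on a large index, an alternative is to draw random document numbers directly instead of ordering the full result set. This is only a sketch, assuming the Lucene.Net 2.9.x IndexReader API (the Java-style MaxDoc()/IsDeleted()/Document() methods of that version):

```csharp
using System;
using System.Collections.Generic;
using Lucene.Net.Index;

static class RandomDocSampler
{
    // Pick up to k random live doc IDs in roughly O(k) time,
    // instead of shuffling every hit in the result set.
    public static List<int> SampleDocIds(IndexReader reader, int k)
    {
        var rng = new Random();
        var picked = new HashSet<int>();
        int maxDoc = reader.MaxDoc();

        // Cap attempts so the loop terminates even when many
        // slots are deleted or fewer than k live docs exist.
        int attempts = 0;
        while (picked.Count < k && attempts < maxDoc * 10)
        {
            attempts++;
            int docId = rng.Next(maxDoc);   // uniform over [0, maxDoc)
            if (!reader.IsDeleted(docId))   // skip deleted slots
                picked.Add(docId);
        }
        return new List<int>(picked);
    }
}
```

From each sampled doc ID you would then read the stored int ID field and feed those values into the same IN (...) SQL statement as above. Note this samples over the whole index rather than over a query's hits, so it only applies to the MatchAllDocsQuery case.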