Without further ado: the test machine has an i7 quad-core CPU and 8 GB of memory.
Insert Test

Each inserted document has five fields, including two tokenized (word-segmented) text fields, one int, and one Date.

- Batch insert, 100,000 documents per batch, 10 batches (1 million documents total): 85 seconds.
- Batch insert, 100,000 documents per batch, 100 batches (10 million documents total): 865 seconds.

Insert performance is good; a sketch of this kind of batch insert follows.
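For reference, here is a minimal SolrJ sketch of this style of batch insert, assuming SolrJ 6-8's HttpSolrClient. The core URL and the field names (title, content, view_count, created_at) are made up for illustration, since the post does not show its actual schema.

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class BatchInsertTest {
    public static void main(String[] args) throws Exception {
        // Core name and URL are assumptions; point this at your own core.
        SolrClient client = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/articles").build();

        int batchSize = 100_000;  // 100,000 documents per batch, as in the test
        int batches = 10;         // 10 batches -> 1 million documents total

        long start = System.currentTimeMillis();
        for (int b = 0; b < batches; b++) {
            List<SolrInputDocument> batch = new ArrayList<>(batchSize);
            for (int i = 0; i < batchSize; i++) {
                long id = (long) b * batchSize + i;
                SolrInputDocument doc = new SolrInputDocument();
                // Hypothetical fields matching the description:
                // two tokenized text fields, one int, one date (plus an id).
                doc.addField("id", id);
                doc.addField("title", "China test title " + id); // tokenized
                doc.addField("content", "test content " + id);   // tokenized
                doc.addField("view_count", (int) (id % 1000));   // int
                doc.addField("created_at", new Date());          // date
                batch.add(doc);
            }
            client.add(batch); // one request per batch
        }
        client.commit(); // single commit at the end keeps bulk inserts fast
        System.out.println("Inserted in "
                + (System.currentTimeMillis() - start) + " ms");
        client.close();
    }
}
```

Committing once at the end, rather than after every batch, is the usual way to keep bulk inserts at this speed.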
Query Test
With 12,166,454 documents indexed (roughly 12 million), using about 0.031 GB of disk:

- Query on the tokenized field, title:China — 0.030 seconds.
- Query on the non-tokenized field, view_count:1 — about 1 second.
- Range query, view_count:[0 TO 1000] — 0.125 seconds.
- Running any of the above a second time takes under 0.001 seconds, presumably because of caching.

This is clearly superior to a database fuzzy query: with 20 million rows in MySQL, a LIKE "%China%" query against the database simply hung. (That said, full-text retrieval does not store data the way an SQL database does; each approach has its advantages and disadvantages.)
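Here is a sketch of the three query shapes via SolrJ, under the same assumptions about the core and field names as above. QTime is Solr's server-side processing time in milliseconds, which is what makes the cached second run visible.

```java
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class QueryTest {
    public static void main(String[] args) throws Exception {
        // Same hypothetical core as in the insert sketch.
        SolrClient client = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/articles").build();

        // The three query shapes from the test above.
        String[] queries = {
            "title:China",            // tokenized text field
            "view_count:1",           // exact match on a non-tokenized int field
            "view_count:[0 TO 1000]"  // range query
        };

        for (String q : queries) {
            QueryResponse resp = client.query(new SolrQuery(q));
            // A repeated run typically reports a far lower QTime
            // because Solr serves it from its caches.
            System.out.printf("%-25s hits=%d qtime=%dms%n",
                    q, resp.getResults().getNumFound(), resp.getQTime());
        }
        client.close();
    }
}
```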
Hundreds of Millions of Records
Inserting 100 million documents took 1 hour, 40 minutes, and 35 seconds.
During insertion, memory usage stayed around 70%, while CPU usage held roughly steady at about 11%.
(Written at work, to be continued soon)