Apache SOLR – background_merge_hit_exception

I got this error the other day while administering a Solr instance:

background merge hit exception: _39m3:C8458457 _39m4:C20 into _39m5 [optimize] [mergeDocStores] java.io.IOException: backg…

On the particular server I was working on, I received it while trying to run an optimize command.  I also saw the following effects:

  • Optimization commands would fail
  • Commit commands would succeed
  • Search commands would succeed, but the unoptimized / uncommitted documents would not be included in the results

Searching online led me to a resource suggesting the following command, which basically runs a check utility (Lucene's CheckIndex) against the Solr index:

java -ea:org.apache.lucene -cp lib/lucene-core-2.9.1.jar org.apache.lucene.index.CheckIndex {install_dir}/solr/data/index

In this case one of our segments was broken, so we got the following output at the very end:

WARNING: 1 broken segments (containing 8457625 documents) detected
WARNING: would write new segments file, and 8457625 documents would be lost, if -fix were specified
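That warning is easy to detect from a script if you want to automate the check, say from a cron job. Here is a minimal sketch; the helper name is my own, it assumes the warning text stays in the format shown above, and the jar path and log location in the usage example are placeholders for your own install:

```shell
# Hypothetical helper: succeeds (exit 0) only if the CheckIndex output
# piped to it contains the "broken segments" warning.
index_is_broken() {
  # Reads the full CheckIndex output on stdin.
  grep -q "broken segments"
}

# Example usage (paths are assumptions):
#   java -ea:org.apache.lucene -cp lib/lucene-core-2.9.1.jar \
#     org.apache.lucene.index.CheckIndex {install_dir}/solr/data/index \
#     | tee /tmp/checkindex.log
#   if index_is_broken < /tmp/checkindex.log; then
#     echo "index is broken, schedule a fix"
#   fi
```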

Here is the bad segment:

1 of 8: name=_39m3 docCount=8458457
compound=false
hasProx=true
numFiles=9
size (MB)=1,873.279
diagnostics = {optimize=true, mergeFactor=2, os.version=2.6.18-xenU-ec2-v1.0, os=Linux, mergeDocStores=true, lucene.version=2.9.1 832363 - 2009-11-03 04:37:25, source=merge, os.arch=amd64, java.version=1.6.0_04, java.vendor=Sun Microsystems Inc.}
has deletions delFileName=_39m3_11.del
test: open reader………OK 832 deleted docs
test: fields…………..OK 9 fields
test: field norms………OK 9 fields
test: terms, freq, prox…ERROR [term email:42929174 docFreq=80 != num docs seen 1 + num docs deleted 0]
java.lang.RuntimeException: term email:42929174 docFreq=80 != num docs seen 1 + num docs deleted 0
at org.apache.lucene.index.CheckIndex.testTermIndex(CheckIndex.java:675)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:530)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
test: stored fields…….OK 59203349 total field count. avg 7 fields per doc
test: term vectors……..OK 0 total vector count. avg 0 term-freq vector fields per doc
FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.lang.RuntimeException: Term Index test failed
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:543)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)

By comparison, here is what a healthy segment looks like:

3 of 8: name=_3aaq docCount=1
compound=false
hasProx=true
numFiles=9
size (MB)=0
diagnostics = {os.version=2.6.18-xenU-ec2-v1.0, os=Linux, lucene.version=2.9.1 832363 - 2009-11-03 04:37:25, source=flush, os.arch=amd64, java.version=1.6.0_04, java.vendor=Sun Microsystems Inc.}
has deletions delFileName=_3aaq_1.del
test: open reader………OK 1 deleted docs
test: fields…………..OK 5 fields
test: field norms………OK 5 fields
test: terms, freq, prox…OK 6 terms. 6 terms-docs pairs. 0 tokens
test: stored fields…….OK 0 total field count. avg ? fields per doc
test: term vectors……..OK 0 total vector count. avg ? term-freq vector fields per doc

So, to fix it, you run the same command as above but with the “-fix” parameter appended to the end (sudo it if you need the permissions). Be warned: this basically erases the broken segment, so make sure this is not mission-critical data.  In the above case this was a new slave, so removing the 8 million records was acceptable.

sudo java -ea:org.apache.lucene -cp lib/lucene-core-2.9.1.jar org.apache.lucene.index.CheckIndex {install_dir}/solr/data/index -fix
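Since -fix permanently discards the broken segment, I'd suggest copying the index directory aside before running the command above. A minimal sketch (the function and the .bak naming scheme are my own convention, not something Solr provides):

```shell
# Hypothetical pre-fix backup step: copy the index directory aside so the
# broken segment can still be inspected after -fix wipes it.
backup_index() {
  src="$1"
  dest="${src}.bak.$(date +%Y%m%d%H%M%S)"
  cp -rp "$src" "$dest" || return 1
  # Print the backup location so callers can record it.
  echo "$dest"
}
```

Run it against {install_dir}/solr/data/index (ideally while Solr is stopped, so the segment files are stable), then run the -fix command; if anything goes wrong, swap the backup back into place.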

Once I ran that I saw this printout:

NOTE: will write new segments file in 5 seconds; this will remove 8457625 docs from the index. THIS IS YOUR LAST CHANCE TO CTRL+C!
5…
4…
3…
2…
1…

Then afterwards, I saw this at the bottom of the output:

Writing…
OK
Wrote new segments file "segments_32la"

Immediately afterwards I was able to run the optimize command without any issues… but all 8 million records that were in that segment were lost.
