When updating many docs, how to avoid the TOO_MANY_REQUESTS problem

Time:12-06

There is a large index that was recently fully updated to add some new fields:

health status index               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   company             KTngnM6ASD-_KdU0FFAWRA   1   0   96008284      3662063     33.6gb         33.6gb

First, 20 threads were used concurrently to bulk-update the index, 200 records per request. This took 2 days, and 31712850 records failed to update:

id: 20078928430 opType: UPDATE status: TOO_MANY_REQUESTS

Then only the failed records were updated, this time with 10 threads, but the same problem occurred: 103800 records failed to update, and those had to be updated yet again.

So: how can this problem be avoided when updating many records, while also taking less time?

CodePudding user response:

With heavy indexing you might want to optimize for indexing speed, and benchmark against your own cluster to find the optimal bulk size and concurrency; the specific numbers will depend on your cluster configuration and your mapping (some features, such as n-grams, have extremely heavy indexing overhead). You might also want to monitor thread pool usage to detect issues early: bulk requests rejected because the write thread pool's queue is full are what surface as TOO_MANY_REQUESTS (429).
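Beyond tuning bulk size, the usual way to handle 429 rejections is to back off and retry only the rejected items rather than failing the whole run. The official Python client already supports this (`helpers.streaming_bulk` takes `max_retries`, `initial_backoff`, and `max_backoff` and retries 429s for you). As an illustration of the idea, here is a minimal, client-agnostic sketch; `send_bulk` is a hypothetical placeholder for whatever call issues your `_bulk` request and returns per-item statuses, as the original poster's code is not shown:

```python
import random
import time


def bulk_with_backoff(send_bulk, actions, max_retries=8,
                      initial_backoff=2.0, max_backoff=600.0):
    """Send `actions` via `send_bulk`, retrying items rejected with 429.

    `send_bulk(batch)` is assumed to return a list of (action, status)
    pairs, mirroring the per-item statuses in a bulk response. Items with
    status 429 (TOO_MANY_REQUESTS) are resent after an exponentially
    growing delay; other failures are collected and returned.
    """
    pending = list(actions)
    failed = []
    for attempt in range(max_retries + 1):
        retry = []
        for action, status in send_bulk(pending):
            if status == 429:
                retry.append(action)       # cluster overloaded: retry later
            elif status >= 400:
                failed.append((action, status))  # real error: report it
        if not retry:
            return failed
        pending = retry
        # Exponential backoff with jitter gives the cluster time to drain
        # its write queue before the rejected items are resent.
        delay = min(max_backoff, initial_backoff * 2 ** attempt)
        time.sleep(delay * random.uniform(0.5, 1.0))
    # Retries exhausted: report whatever is still being rejected.
    failed.extend((action, 429) for action in pending)
    return failed
```

Compared with the manual "collect failures, rerun the job" loop described above, this keeps the retry inside the indexing run, so transient queue-full rejections never require a second pass over the data.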
