How to deal with a slow COUNT(1) on millions of rows?

Time:10-12

How to deal with a COUNT(1) that is slow on a table with millions of rows?

CodePudding user response:

Millions of rows? A count at that size should be trivial...

CodePudding user response:

If the count is not on the primary key, modify the application to maintain a separate statistics field instead.
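A minimal sketch of the "statistics field" idea, assuming a hypothetical `orders` business table and a `row_counts` summary table (both names are illustrative, not from the thread). The application bumps the counter in the same transaction as each write so the two stay consistent:

```sql
-- Hypothetical summary table; one row per counted table.
CREATE TABLE row_counts (
  table_name VARCHAR(64) PRIMARY KEY,
  cnt        BIGINT NOT NULL
);

-- On every insert into the business table, bump the counter
-- inside the same transaction:
START TRANSACTION;
INSERT INTO orders (id, amount) VALUES (1, 9.99);
UPDATE row_counts SET cnt = cnt + 1 WHERE table_name = 'orders';
COMMIT;

-- Reading the count is then a single-row primary-key lookup,
-- independent of how many rows the business table holds:
SELECT cnt FROM row_counts WHERE table_name = 'orders';
```

The trade-off is write amplification: every insert/delete now also touches the counter row, which can become a hot spot under heavy concurrency.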

CodePudding user response:

Check whether the query is using the primary key index.

CodePudding user response:

No experience in this area? Right now counting about 7 million rows takes 7 seconds. Please don't just paste the Baidu result that count(field) > count(1) > count(*), or tell me to do it with a trigger. I want to learn from the experts how to think about this once MySQL reaches a certain order of magnitude.

CodePudding user response:

If there is no WHERE condition, you can query the information_schema.TABLES metadata table instead.
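For an unconditional count where an estimate is acceptable, the metadata table can be queried directly. Note that for InnoDB the `TABLE_ROWS` value is an estimate, not an exact count; schema and table names below are assumptions for illustration:

```sql
-- Approximate row count from table metadata (no table scan).
SELECT TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb'    -- assumed schema name
  AND TABLE_NAME   = 'orders'; -- assumed table name
```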

CodePudding user response:

I just finished answering a similar question, so I'll post that answer again.

Method 1: use the MyISAM engine. When InnoDB executes count(*) it has to read the rows out of the storage engine and accumulate the count, so it is slow. (Note that once the statement has a WHERE condition, MyISAM's advantage disappears as well.)

Method 2: use Redis as a counter.

Method 3: split the table into shards.
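For Method 3, a hedged sketch of how a split table is counted: each shard is counted independently (the per-shard queries can be issued in parallel from the application) and the partial counts are summed. The shard names are assumptions for illustration:

```sql
-- Count over a table split by year; each inner COUNT(*) scans one shard.
SELECT SUM(c) AS total
FROM (
  SELECT COUNT(*) AS c FROM orders_2019
  UNION ALL
  SELECT COUNT(*) AS c FROM orders_2020
  UNION ALL
  SELECT COUNT(*) AS c FROM orders_2021
) AS shard_counts;
```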

CodePudding user response:

Add an index!
See: COUNT performance analysis

CodePudding user response:

That speed is normal; it depends on your hardware.
Also, MySQL does not parallelize a single query, so a big count is processed on one thread and efficiency is low. You can try splitting the range into sections and counting them in parallel from your application.
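A sketch of the "parallel sections" idea: split the primary-key range into chunks, issue one count per chunk from separate application threads or connections, then add the partial results in the application. The boundaries below are illustrative:

```sql
-- Issued from application thread 1:
SELECT COUNT(*) FROM orders WHERE id BETWEEN 1       AND 1000000;
-- Issued concurrently from application thread 2:
SELECT COUNT(*) FROM orders WHERE id BETWEEN 1000001 AND 2000000;
-- ...and so on; the application sums the partial counts.
```

Each range query can use the primary key index independently, so the total wall-clock time drops roughly with the number of concurrent connections (until the server's I/O or CPU saturates).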

CodePudding user response:

Even on the cheapest Tencent Cloud MySQL instance, counting 1,000,000 records takes only a fraction of a second.

If the id is auto-increment and continuous, use MAX(id) or read the last record.
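If `id` is auto-increment with no gaps (no deleted rows), the row count equals the largest id, which is a single index lookup rather than a full scan. A sketch assuming such a table:

```sql
-- Valid only when ids start at 1, increase by 1, and rows are never deleted;
-- MAX on the primary key resolves from the index end, not a scan.
SELECT MAX(id) AS approx_count FROM orders;
```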

If exact accuracy is not required, you can query the metadata table.

CodePudding user response:

At this order of magnitude, MySQL has no good answer.

I suggest flattening the data into a wide table and synchronizing it to Elasticsearch; Elasticsearch is best suited for aggregation and analysis.

CodePudding user response:

That order of magnitude should not be a problem.
I once wrote a five-point summary on count(); briefly:
1. If the table has an auto-increment field (ideally the primary key), there is no WHERE clause, and rows are never deleted, you can use MAX(primary key).
2. If there is a WHERE clause over a range, MAX(primary key) - MIN(primary key) + 1 works.
3. According to the official documentation there is no performance difference between count(*) and count(1); the former is recommended.
4. count(field): if the field has no NULL values, it is equivalent to count(*); if there are NULLs, the NULL rows are not counted.
5. If exact accuracy is not required, an estimate via EXPLAIN + SQL works, with roughly 10% error; `information_schema.TABLES` stores an estimated row count for every table.
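Points 2 and 5 above, as hedged sketches against an assumed `orders` table:

```sql
-- Point 2: count a contiguous id range without scanning rows.
-- Valid only when ids in the range have no gaps.
SELECT MAX(id) - MIN(id) + 1 AS range_count
FROM orders
WHERE id BETWEEN 100000 AND 200000;

-- Point 5: EXPLAIN returns the optimizer's row estimate (the `rows`
-- column of its output) without executing the query; for a full-table
-- count this estimate is typically within about 10% of the true count.
EXPLAIN SELECT COUNT(*) FROM orders;
```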

CodePudding user response:

https://blog.csdn.net/dfy11011/article/details/106143617