MySQL High CPU Consumption (600%)

We have a high-traffic server running MySQL. The database has run fine for years with no significant spikes in traffic or changes to the database, but recently MySQL CPU consumption has spiked to a consistent 600%.

We don't see any slow queries being logged but we get the following recommendations.

We're not very knowledgeable about MySQL, and everything we've tried to fix the CPU consumption has not worked.

If anyone can provide some recommended settings that would be appreciated.

top - 08:08:59 up 313 days, 12:47,  2 users,  load average: 7.21, 6.48, 6.08
Tasks: 294 total,   1 running, 293 sleeping,   0 stopped,   0 zombie
Cpu(s): 29.9%us, 10.6%sy,  3.9%ni, 47.1%id,  3.1%wa,  0.0%hi,  5.5%si,  0.0%st
Mem:   8018016k total,  7869300k used,   148716k free,   587632k buffers
Swap:        0k total,        0k used,        0k free,  5894136k cached

[mysqld]
# skip-networking
#tmpdir=/dev/shm
general_log = 1
slow_query_log = 1
log-slow-queries = /var/log/mysql-slow.log
slow-query_log_file = /var/log/mysql-slow.log
long_query_time = 2
sort_buffer_size = 4M
max_connections = 151
max_allowed_packet = 100M
query_cache_size = 128M
query_cache_limit = 5M
join_buffer_size = 10M
tmp_table_size = 64M
max_heap_table_size = 64M
thread_cache_size = 4
table_cache = 20k
innodb_buffer_pool_size = 512M
key_buffer_size = 128M
local-infile=0
open_files_limit=2058
wait_timeout = 500
connect_timeout = 500
interactive_timeout = 500
performance_schema = on


 >>  MySQLTuner 1.4.0 - Major Hayden
 >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
 >>  Run with '--help' for additional options and output filtering
[OK] Logged in using credentials passed on the command line
[OK] Currently running supported MySQL version 5.5.50-log
[OK] Operating on 64-bit architecture

-------- Storage Engine Statistics -------------------------------------------
[--] Status:  +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM 
[--] Data in MyISAM tables: 453M (Tables: 169)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
[!!] InnoDB is enabled but isn't being used
[!!] Total fragmented tables: 5

-------- Security Recommendations  -------------------------------------------
[OK] All database users have passwords assigned

-------- Performance Metrics -------------------------------------------------
[--] Up for: 22m 39s (763K q [561.834 qps], 22K conn, TX: 506M, RX: 95M)
[--] Reads / Writes: 71% / 29%
[--] Total buffers: 848.0M global   14.6M per thread (151 max threads)
[OK] Maximum possible memory usage: 3.0G (39% of installed RAM)
[OK] Slow queries: 0% (1/763K)
[OK] Highest usage of available connections: 52% (80/151)
[OK] Key buffer size / total MyISAM indexes: 128.0M/273.3M
[OK] Key buffer hit rate: 100.0% (72M cached / 29K reads)
[OK] Query cache efficiency: 79.1% (472K cached / 596K selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 0% (8 temp sorts / 43K sorts)
[OK] Temporary tables created on disk: 0% (37 on disk / 10K total)
[!!] Thread cache hit rate: 47% (12K created / 22K connections)
[OK] Table cache hit rate: 97% (249 open / 256 opened)
[OK] Open file limit used: 0% (277/41K)
[!!] Table locks acquired immediately: 53%

-------- Recommendations -----------------------------------------------------
General recommendations:
    Add skip-innodb to MySQL configuration to disable InnoDB
    Run OPTIMIZE TABLE to defragment tables for better performance
    MySQL started within last 24 hours - recommendations may be inaccurate
    Optimize queries and/or use InnoDB to reduce lock wait
Variables to adjust:
    thread_cache_size (> 4)

Suboptimal caching method.

You are using the MySQL query cache with a fairly high-traffic database. It might be worth considering memcached instead of the MySQL query cache, especially if you have multiple slaves.

Less than 80% of the query cache is being utilized.

This might be caused by query_cache_limit being too low. Flushing the query cache might help as well.
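For illustration, the cache can be inspected and adjusted at runtime; the 8M limit below is only an example value, not a measured recommendation:

SHOW GLOBAL STATUS LIKE 'Qcache%';            -- hit/insert/prune counters and free memory
SHOW GLOBAL VARIABLES LIKE 'query_cache%';    -- current size and limit

SET GLOBAL query_cache_limit = 8 * 1024 * 1024;   -- allow larger result sets to be cached

FLUSH QUERY CACHE;   -- defragments the cache without dropping entries
RESET QUERY CACHE;   -- empties the cache entirely (short-lived fix)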

The query cache is considerably fragmented.

Severe fragmentation is likely to (further) increase Qcache_lowmem_prunes. This might be caused by many query cache low-memory prunes due to query_cache_size being too small. For an immediate but short-lived fix you can flush the query cache (this might lock the query cache for a long time). Carefully adjusting query_cache_min_res_unit to a lower value might help too; for example, you can set it to the average size of the queries in the cache using this formula: (query_cache_size - Qcache_free_memory) / Qcache_queries_in_cache
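A minimal sketch of that formula using live status counters; the numbers in the comments are made up purely to show the arithmetic:

SHOW GLOBAL STATUS LIKE 'Qcache_free_memory';
SHOW GLOBAL STATUS LIKE 'Qcache_queries_in_cache';

-- Example: query_cache_size = 128M, Qcache_free_memory = 120M, 4000 cached queries:
--   (134217728 - 125829120) / 4000 = ~2097 bytes per cached result
SET GLOBAL query_cache_min_res_unit = 2048;   -- only lower it below the 4K default if the average is this small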

There are lots of rows being sorted.

While there is nothing wrong with a high amount of row sorting as such, you might want to make sure that the queries which require a lot of sorting use indexed columns in the ORDER BY clause, as this will result in much faster sorting.
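As a hedged example (the orders table, its columns, and the index name are hypothetical), a composite index that covers both the filter and the sort lets MySQL skip the filesort:

-- Hypothetical schema: serve WHERE + ORDER BY from one index
ALTER TABLE orders ADD INDEX idx_customer_created (customer_id, created_at);

SELECT order_id, total
FROM orders
WHERE customer_id = 42
ORDER BY created_at DESC;

-- Verify with EXPLAIN: "Using filesort" should disappear from the Extra column
EXPLAIN SELECT order_id, total FROM orders WHERE customer_id = 42 ORDER BY created_at DESC;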

There are too many joins without indexes.

This means that joins are doing full table scans. Adding indexes on the columns used in the join conditions will greatly speed up these joins.
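A sketch with made-up table names; the point is that the column on the probed side of the join needs an index:

-- Without an index on order_items.order_id, EXPLAIN shows a full scan of
-- order_items for every matching row of orders.
EXPLAIN
SELECT o.order_id, i.sku
FROM orders AS o
JOIN order_items AS i ON i.order_id = o.order_id
WHERE o.created_at >= '2016-01-01';

ALTER TABLE order_items ADD INDEX idx_order_id (order_id);   -- index the join column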

The rate of reading the first index entry is high.

This usually indicates frequent full index scans. Full index scans are faster than table scans but require lots of CPU cycles in big tables. If those tables have or had high volumes of UPDATEs and DELETEs, running OPTIMIZE TABLE might reduce the number of full index scans and/or speed them up. Other than that, full index scans can only be reduced by rewriting queries.
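For example (the table and schema names are placeholders); note that OPTIMIZE TABLE locks a MyISAM table while it rebuilds it, so schedule it for a quiet window:

OPTIMIZE TABLE mydb.my_big_table;   -- rebuilds the table and sorts its index pages

-- Candidates: tables with a lot of reclaimable space left by DELETEs/UPDATEs
SELECT TABLE_NAME, DATA_FREE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND DATA_FREE > 0
ORDER BY DATA_FREE DESC;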

The rate of reading data from a fixed position is high.

This indicates that many queries need to sort results and/or do a full table scan, including join queries that do not use indexes. Add indexes where applicable.

The rate of reading the next table row is high.

This indicates that many queries are doing full table scans. Add indexes where applicable.

Many temporary tables are being written to disk instead of being kept in memory.

Increasing max_heap_table_size and tmp_table_size might help. However, some temporary tables are always written to disk regardless of these variables, for example when the temporary table contains a BLOB or TEXT column or a column wider than 512 bytes, as noted in the MySQL documentation. To eliminate those you will have to rewrite the queries that trigger them.
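To check whether the ratio is actually a problem and to try larger in-memory limits (128M is only an example; both variables must be raised together because the smaller one wins):

SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';
SHOW GLOBAL STATUS LIKE 'Created_tmp_tables';

SET GLOBAL tmp_table_size      = 128 * 1024 * 1024;
SET GLOBAL max_heap_table_size = 128 * 1024 * 1024;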

MyISAM key buffer (index cache) % used is low.

You may need to decrease the size of key_buffer_size, re-examine your tables to see if indexes have been removed, or examine queries and expectations about what indexes are being used.

The rate of opening tables is high.

Opening tables requires disk I/O which is costly. Increasing table_open_cache might avoid this.
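A rough check, with 2000 as an example value only (each cached table costs file descriptors and a little memory):

-- If Opened_tables keeps climbing on a warm server, the cache is too small
SHOW GLOBAL STATUS LIKE 'Opened_tables';
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';

SET GLOBAL table_open_cache = 2000;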

The rate of opening files is high.

Consider increasing open_files_limit, and check the error log when restarting after changing open_files_limit.
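open_files_limit cannot be changed at runtime; it is read at startup and is also capped by the operating system's per-process descriptor limit, so a my.cnf sketch (8192 is an example value) looks like:

[mysqld]
# must not exceed the OS per-process limit (ulimit -n) for the mysql user
open_files_limit = 8192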

Too many table locks were not granted immediately.

Optimize queries and/or use InnoDB to reduce lock wait.

Too many connections are aborted.

Connections are usually aborted when they cannot be authorized, for example because a client connects with wrong credentials or exceeds connect_timeout during the handshake. Checking the error log and the Aborted_connects / Aborted_clients counters can help track down the source.

The InnoDB log file size is not an appropriate size in relation to the InnoDB buffer pool.

Especially on a system with a lot of writes to InnoDB tables, you should set innodb_log_file_size to 25% of innodb_buffer_pool_size. However, the bigger this value, the longer the recovery time will be when the database crashes, so it should not be set much higher than 256 MiB. Note that you cannot simply change the value of this variable: you need to shut down the server, remove the InnoDB log files, set the new value in my.cnf, start the server, and then check the error logs to confirm everything went fine.
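A sketch of that procedure against the 512M buffer pool in the posted config; the 128M figure is an example derived from the 25% rule, not a measured recommendation:

[mysqld]
# ~25% of innodb_buffer_pool_size (512M in the posted config), well under the 256M cap
innodb_log_file_size = 128M

# MySQL 5.5 procedure (the value cannot simply be changed in place):
#   1. Stop mysqld cleanly so the logs are fully checkpointed.
#   2. Move aside the old log files (by default ib_logfile0 and ib_logfile1 in the datadir).
#   3. Set the new value above and start mysqld; it recreates the logs at the new size.
#   4. Check the error log to confirm the server came up cleanly.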

CodePudding user response:

Upgrade -- Support of 5.5 ended about 7 years ago.

Tip-top recommendation: Switch your tables from MyISAM to InnoDB. And be sure to adjust key_buffer_size down to 20M and innodb_buffer_pool_size higher, but not so high that it causes swapping.
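A sketch of the conversion; the mydb.mytable name is a placeholder, and the list query deliberately skips the system schemas, which must stay MyISAM on 5.5:

-- List the application tables still on MyISAM
SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.TABLES
WHERE ENGINE = 'MyISAM'
  AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema');

-- Convert each one (rebuilds the table, so do it during a quiet period)
ALTER TABLE mydb.mytable ENGINE = InnoDB;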

Top recommendation: Use the SlowLog that you have been capturing. Digest it and let's discuss the 'worst' couple of queries.

Well, maybe there is nothing much in the slowlog ("Slow queries: 0% (1/763K)"). So lower long_query_time to 0.4 seconds.
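A sketch of tightening the threshold at runtime (the log file path comes from the posted config; the global setting only affects connections opened after the change):

SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 0.4;   -- fractional seconds are allowed in 5.5

SHOW GLOBAL VARIABLES LIKE 'slow_query_log_file';   -- /var/log/mysql-slow.log per the posted config

The captured file can then be summarized with mysqldumpslow (shipped with MySQL) or pt-query-digest before picking the worst couple of queries to discuss.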

The query cache may be hurting more than it is helping. It is hard to say.

Not much of the 8GB is taken up by MySQL; do you have other apps running on the same machine? Changing memory size will not relieve CPU issues. Using a little more memory to raise table_open_cache, open_files_limit, etc., may help with the CPU.

If you are not using MyISAM, lower key_buffer_size to 20M.

Ignore the note about "fragmented tables".

Turn off the general_log; it rapidly fills the disk. (Not much CPU impact.)
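Turning it off does not need a restart; remember to also remove or comment out general_log = 1 in my.cnf so it stays off:

SET GLOBAL general_log = 0;
SHOW GLOBAL VARIABLES LIKE 'general_log%';   -- confirm it is off and where it was writing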

"Up for: 22m 39s" -- The analysis is mostly focused on "startup" activity; hence does not adequately address your question. Wait 24 hours.

Deeper analysis: http://mysql.rjweb.org/doc.php/mysql_analysis#tuning

CodePudding user response:

Before I get started, I'll mention that everything in the recommended list is worth looking at and I agree strongly with the other answer that 5.5 is crazy-old — to the point of not even getting security patches. Upgrading is a very good idea.


That out of the way, I want to focus on this:

This database has run fine for years with no significant spikes in traffic or changes to the database

With that in mind, I know two common reasons you can see a sudden drop in performance like this:

  1. The disks are on the verge of failing, such that some disk I/O operations are slow or repeated. If this is your issue, you'll likely find other evidence of it in the core server (not MySQL) logs... but not always!

  2. As data grows, not just over time but with the business (hopefully the business has more customers than it did when the server was installed), it takes more memory than it used to in order to keep the active records buffered in RAM instead of frequently reading them from the (much slower) disks. You can hit a tipping point where many queries that used to be served entirely from RAM suddenly spill out to disk. A quick way to compare your data size against the configured buffers is sketched below.
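A rough way to make that comparison; this sums everything, not just the hot working set, so treat it as an upper bound:

-- Total data + index size per engine, to set against key_buffer_size (MyISAM
-- indexes) and innodb_buffer_pool_size (InnoDB data + indexes)
SELECT ENGINE,
       ROUND(SUM(DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024) AS total_mb,
       ROUND(SUM(INDEX_LENGTH) / 1024 / 1024)               AS index_mb
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'performance_schema')
GROUP BY ENGINE;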

Yes, yes, I know both of these are disk issues, while the problem presents as CPU use. However, disk I/O problems often show up as CPU load issues, with processes waiting on disk to complete.

The solution to both of these usually involves hardware upgrades... but you can delay this if it's the second reason by working the list of recommendations in your question.
