Disclaimer: this content is taken from a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/13234290/

Lots of "Query End" states in MySQL, all connections used in a matter of minutes

mysql

Asked by Engineer81

This morning I noticed that our MySQL server load was going sky high. Max should be 8 but it hit over 100 at one point. When I checked the process list I found loads of update queries (simple ones, incrementing a "hitcounter") that were in the "query end" state. We couldn't kill them (well, we could, but they remained in the "killed" state indefinitely) and our site ground to a halt.
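
For reference, this is roughly how I was checking the situation from the MySQL client (a sketch rather than the exact statements I ran; the thread ID in the KILL statement is a placeholder):

    -- Count connections currently stuck in the "query end" state
    SELECT COUNT(*) AS stuck_connections
    FROM information_schema.PROCESSLIST
    WHERE STATE = 'query end';

    -- Look at the individual offenders: thread ID, how long they've run, the statement
    SELECT ID, USER, TIME, INFO
    FROM information_schema.PROCESSLIST
    WHERE STATE = 'query end'
    ORDER BY TIME DESC;

    -- A thread can be killed by ID, though as described above it may just
    -- sit in the "killed" state if it's waiting on the same bottleneck
    KILL 12345;  -- placeholder thread ID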

We had loads of problems restarting the service and had to forcibly kill some processes. When we did we were able to get MySQLd to come back up but the processes started to build up again immediately. As far as we're aware, no configuration had been changed at this point.

So, we changed innodb_flush_log_at_trx_commit from 2 to 1 (note that we need ACID compliance) in the hope that this would resolve the problem, and set the connections in PHP/PDO to be persistent. This seemed to work for an hour or so, and then the connections started to run out again.
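
For anyone following along, the variable can be checked and changed at runtime roughly like this (a sketch; SET GLOBAL needs the SUPER privilege, and the value also needs setting in my.cnf to survive a restart):

    -- Check the current setting
    SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

    -- 1 = flush and fsync the InnoDB log at every commit (full durability)
    -- 2 = write at commit, let the OS fsync roughly once per second
    SET GLOBAL innodb_flush_log_at_trx_commit = 1;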

Fortunately, I set up a slave server a couple of months ago and was able to promote it, and it's taking up the slack for now. But I need to understand why this has happened and how to stop it, since the slave server is significantly underpowered compared to the master, so I need to switch back soon.

Has anyone any ideas? Could it be that something needs clearing out? I don't know what, maybe the binary logs or something? Any ideas at all? It's extremely important that we can get this server back as the master ASAP but frankly I have no idea where to look and everything I have tried so far has only resulted in a temporary fix.

Help! :)

Answer by Engineer81

I'll answer my own question here. I checked the partition sizes with a simple df command and there I could see that /var was 100% full. I found an archive that someone had left that was 10GB in size. Deleted that, started MySQL, ran a PURGE BINARY LOGS BEFORE '2012-10-01 00:00:00' query to clear out a load of space, and reduced the /var/lib/mysql directory size from 346GB to 169GB. Changed back to the master and everything is running great again.
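
Roughly, the cleanup looked like this (a sketch of the statements rather than an exact transcript; make sure any replicas have already read past the old files before purging):

    -- See which binary logs exist and how big they are
    SHOW BINARY LOGS;

    -- Remove binary logs written before the given timestamp
    PURGE BINARY LOGS BEFORE '2012-10-01 00:00:00';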

From this I've learnt that our log files get VERY large, VERY quickly. So I'll be establishing a maintenance routine to not only keep the log files down, but also to alert me when we're nearing a full partition.
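
For the routine itself, one option is to let the server expire the binary logs automatically. A sketch, assuming a 7-day retention is long enough for your replicas and backups (this is the MySQL 5.x variable; MySQL 8.0 replaces it with binlog_expire_logs_seconds):

    -- Expire binary logs automatically after 7 days
    SET GLOBAL expire_logs_days = 7;
    -- and set the same value in my.cnf so it persists across restarts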

I hope that's some use to someone in the future who stumbles across this with the same problem. Check your drive space! :)

Answer by akoby

We've been having a very similar problem, where the mysql processlist showed that almost all of our connections were stuck in the "query end" state. Our problem was also related to replication and writing the binlog.

We changed the sync_binlog variable from 1 to 0, which means that instead of flushing binlog changes to disk on each commit, it allows the operating system to decide when to fsync() to the binlog. That entirely resolved the "query end" problem for us.
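
For completeness, checking and changing it looks roughly like this (a sketch; note the trade-off that with sync_binlog = 0 a crash can lose the most recent binlog events, which matters for point-in-time recovery and replicas):

    SHOW GLOBAL VARIABLES LIKE 'sync_binlog';

    -- 1 = fsync the binary log on every commit (safest, most disk I/O)
    -- 0 = let the operating system decide when to flush it
    SET GLOBAL sync_binlog = 0;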

According to this post from Mats Kindahl, writing to the binlog won't be as much of a problem in the 5.6 release of MySQL.

Answer by Mathieu Longtin

In my case, it was indicative of maxing out the I/O on disk. I had already reduced fsyncs to a minimum, so it wasn't that. Another symptom is that "log*.tokulog*" files start accumulating because the system can't keep up with all the writes.
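
If you suspect the same thing, some of MySQL's own counters can hint at a saturated disk without leaving the client. This is only a rough check (the counters below are InnoDB-specific, so on a TokuDB-heavy server they only tell part of the story):

    -- Writes and fsyncs still waiting on the disk; persistently non-zero
    -- values suggest the storage can't keep up
    SHOW GLOBAL STATUS LIKE 'Innodb_data_pending_fsyncs';
    SHOW GLOBAL STATUS LIKE 'Innodb_data_pending_writes';
    SHOW GLOBAL STATUS LIKE 'Innodb_os_log_pending_fsyncs';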
