Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same CC BY-SA license and attribute it to the original authors (not me). Original: http://stackoverflow.com/questions/14163429/

Date: 2020-08-18 10:39:13 · Source: igfitidea

Python and Django OperationalError (2006, 'MySQL server has gone away')

Tags: python, mysql, django, nginx, django-middleware

Asked by Franz Payer

I have recently started getting MySQL OperationalErrors from some of my old code and cannot seem to trace the problem back. Since it was working before, I thought a software update may have broken something. I am using Python 2.7 with Django runfcgi and nginx. Here is my original code:

views.py

import MySQLdb

DBNAME = "test"
DBIP = "localhost"
DBUSER = "django"
DBPASS = "password"

db = MySQLdb.connect(DBIP, DBUSER, DBPASS, DBNAME)
cursor = db.cursor()

def list(request):
    statement = "SELECT item from table where selected = 1"
    cursor.execute(statement)
    results = cursor.fetchall()

I have tried the following, but it still does not work:

views.py

class DB:
    conn = None
    DBNAME = "test"
    DBIP = "localhost"
    DBUSER = "django"
    DBPASS = "password"

    def connect(self):
        self.conn = MySQLdb.connect(self.DBIP, self.DBUSER,
                                    self.DBPASS, self.DBNAME)

    def cursor(self):
        try:
            return self.conn.cursor()
        except (AttributeError, MySQLdb.OperationalError):
            self.connect()
            return self.conn.cursor()

db = DB()
cursor = db.cursor()

def list(request):
    cursor = db.cursor()
    statement = "SELECT item from table where selected = 1"
    cursor.execute(statement)
    results = cursor.fetchall()

Currently, my only workaround is to call MySQLdb.connect() in each function that uses MySQL. I also noticed that when using Django's manage.py runserver, I would not have this problem, while nginx would throw these errors. I doubt that the connection is timing out, since list() is being called within seconds of starting the server. Were there any updates to the software I am using that would cause this to break, and is there any fix for this?

Edit: I realized that I recently wrote a piece of middleware to daemonize a function, and this was the cause of the problem. However, I cannot figure out why. Here is the code for the middleware:

import threading

def process_request_handler(sender, **kwargs):
    t = threading.Thread(target=dispatch.execute,
                         args=[kwargs['nodes'], kwargs['callback']],
                         kwargs={})
    t.setDaemon(True)
    t.start()

process_request.connect(process_request_handler)

Accepted answer by Mark Streatfield

As per the MySQL documentation, this error is raised when the client can't send a query to the server, most likely because the server itself has closed the connection. In the most common case, the server will close an idle connection after a default of 8 hours. This is configurable on the server side.

The MySQL documentation gives a number of other possible causes which might be worth looking into to see if they fit your situation.

An alternative to calling connect() in every function (which might end up needlessly creating new connections) would be to investigate using the ping() method on the connection object; this tests the connection, with the option of attempting an automatic reconnect. I struggled to find decent documentation for the ping() method online, but the answer to this question might help.

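For instance, a small helper along these lines (a sketch of mine, assuming the MySQLdb/mysqlclient driver, whose connections expose ping() with an optional reconnect flag) could revive the connection before handing out a cursor:

```python
def get_cursor(conn):
    """Return a cursor from conn, reviving the connection if it died."""
    # ping(True) asks the client library to reconnect transparently if
    # the server has closed the connection. Note the transaction caveat
    # mentioned just below.
    conn.ping(True)
    return conn.cursor()
```
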
Note: automatically reconnecting can be dangerous when handling transactions, as it appears the reconnect causes an implicit rollback (which appears to be the main reason why autoreconnect is not a feature of the MySQLdb implementation).

Answered by Tim Baxter

How old is this code? Django has had databases defined in settings since at least 0.96. The only other thing I can think of is multi-db support, which changed things a bit, but even that was 1.1 or 1.2.

Even if you need a special DB for certain views, I think you'd probably be better off defining it in settings.

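For reference, a minimal sketch of what that looks like in settings.py (the engine name is Django's standard MySQL backend; the credentials simply mirror the values from the question):

```python
# settings.py -- minimal sketch; credentials mirror the question's values.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'test',
        'USER': 'django',
        'PASSWORD': 'password',
        'HOST': 'localhost',
        'PORT': '3306',
    }
}
```
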
Answered by Dima Tisnek

Check whether you are allowed to create a MySQL connection object in one thread and then use it in another.

If it's forbidden, use threading.local for per-thread connections:

class Db(threading.local):
    """ thread-local db object """
    con = None

    def __init__(self, ...options...):
        super(Db, self).__init__()
        self.con = MySQLdb.connect(...options...)

db = Db(...)


def test():
    """safe to run from any thread"""
    cursor = db.con.cursor()
    cursor.execute(...)
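
To see the thread-local behaviour in isolation, here is a self-contained sketch of mine (FakeConnection and LocalDb are stand-ins, not MySQLdb objects). A threading.local subclass re-runs __init__ the first time each thread touches the object, so every thread gets its own connection:

```python
import threading

class FakeConnection(object):
    """Stand-in for MySQLdb.connect() so the sketch runs without a server."""
    def __init__(self):
        # Record which thread created this "connection".
        self.owner = threading.current_thread().name

class LocalDb(threading.local):
    """Thread-local holder: __init__ runs once per thread on first access."""
    con = None

    def __init__(self):
        super(LocalDb, self).__init__()
        self.con = FakeConnection()

db = LocalDb()
owners = []

def worker():
    # First access in this thread triggers LocalDb.__init__ here,
    # so this thread gets its own FakeConnection.
    owners.append(db.con.owner)

t = threading.Thread(target=worker, name="worker-1")
t.start()
t.join()
# The worker saw a connection created in its own thread, while the
# main thread keeps the one created at db = LocalDb().
```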

Answered by Milimetric

SQLAlchemy now has a great write-up on how you can use pinging to be pessimistic about your connection's freshness:

http://docs.sqlalchemy.org/en/latest/core/pooling.html#disconnect-handling-pessimistic

From there,

from sqlalchemy import exc
from sqlalchemy import event
from sqlalchemy.pool import Pool

@event.listens_for(Pool, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
    except:
        # optional - dispose the whole pool
        # instead of invalidating one at a time
        # connection_proxy._pool.dispose()

        # raise DisconnectionError - pool will try
        # connecting again up to three times before raising.
        raise exc.DisconnectionError()
    cursor.close()

And a test to make sure the above works:

from sqlalchemy import create_engine
e = create_engine("mysql://scott:tiger@localhost/test", echo_pool=True)
c1 = e.connect()
c2 = e.connect()
c3 = e.connect()
c1.close()
c2.close()
c3.close()

# pool size is now three.

print "Restart the server"
raw_input()

for i in xrange(10):
    c = e.connect()
    print c.execute("select 1").fetchall()
    c.close()

Answered by Shannon -jj Behrens

Sometimes if you see "OperationalError: (2006, 'MySQL server has gone away')", it is because you are issuing a query that is too large. This can happen, for instance, if you're storing your sessions in MySQL, and you're trying to put something really big in the session. To fix the problem, you need to increase the value of the max_allowed_packet setting in MySQL.

The default value is 1048576.

To see the current default value, run the following SQL:

select @@max_allowed_packet;

To temporarily set a new value, run the following SQL:

set global max_allowed_packet=10485760;

To fix the problem more permanently, create a /etc/my.cnf file with at least the following:

[mysqld]
max_allowed_packet = 16M

After editing /etc/my.cnf, you'll need to restart MySQL or restart your machine if you don't know how.

Answered by deathangel908

I've been struggling with this issue too. I don't like the idea of increasing the timeout on the MySQL server. Autoreconnect with CONN_MAX_AGE doesn't work either, as was mentioned. Unfortunately I ended up wrapping every method that queries the database like this:

from django.db import connection
from django.db.utils import OperationalError, InterfaceError

def do_db(callback, *args, **kwargs):
    try:
        return callback(*args, **kwargs)
    except (OperationalError, InterfaceError):
        # Connection has gone away; filter by message or error code if
        # you need to distinguish other errors. Closing the connection
        # forces Django to open a fresh one on the next query.
        connection.close()
        return callback(*args, **kwargs)

do_db(User.objects.get, id=123)  # instead of User.objects.get(id=123)

As you can see, I prefer catching the exception over pinging the database before every query, because catching an exception is the rare case. I would expect Django to reconnect automatically, but they seem to have refused that issue.

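The same retry-once idea can be packaged as a decorator so call sites stay unchanged. This is a sketch of mine, not part of the original answer; close_connection and the exception tuple are parameters precisely so you can pass django.db.connection.close and (OperationalError, InterfaceError):

```python
import functools

def retry_on_disconnect(close_connection, exceptions):
    """Retry a callable once after closing a stale connection.

    close_connection -- e.g. django.db.connection.close
    exceptions -- tuple of "gone away" exception classes to catch
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except exceptions:
                close_connection()  # force a fresh connection next query
                return func(*args, **kwargs)
        return wrapper
    return decorator
```
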
Answered by HelenM

I had this problem and did not have the option to change my configuration. I finally figured out that the problem was occurring about 49,500 records into my 50,000-record loop, because that was about the time I was trying again (after a long gap) to hit my second database.

So I changed my code so that every few thousand records I touched the second database again (with a count() of a very small table), and that fixed it. No doubt "ping" or some other means of touching the database would work as well.

Answered by ivanleoncz

In my opinion, the most common cause of this warning is that your application has reached MySQL's wait_timeout value.

I had the same problem with a Python Flask app I developed, and I solved it very easily.

P.S: I was using MySQL 5.7.14

$ grep timeout /etc/mysql/mysql.conf.d/mysqld.cnf 
# https://support.rackspace.com/how-to/how-to-change-the-mysql-timeout-on-a-server/
# wait = timeout for application session (tdm)
# interactive = timeout for keyboard session (terminal)
# 7 days = 604800s / 4 hours = 14400s 
wait_timeout = 604800
interactive_timeout = 14400

One important observation: if you search for the variables via MySQL batch mode, the values will appear as configured. But if you perform "SHOW VARIABLES LIKE 'wait%';" or "SHOW VARIABLES LIKE 'interactive%';", the value configured for 'interactive_timeout' will appear for both variables. I don't know why, but the fact is that the values configured for each variable in '/etc/mysql/mysql.conf.d/mysqld.cnf' will be respected by the MySQL process.

Regards!

Answered by Chris Johnson

This error is mysterious because MySQL doesn't report why it disconnects, it just goes away.

It seems there are many causes of this kind of disconnection. One I just found is that if the query string is too large, the server will disconnect. This probably relates to the max_allowed_packet setting.

Answered by Ryan Chou

Firstly, you should check the MySQL session and global values of wait_timeout and interactive_timeout. Secondly, your client should try to reconnect to the server before those timeouts expire.

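One way to do that client-side is to recycle the connection before the server's wait_timeout can expire it. This is a hypothetical sketch; the factory callable, the default timeout, and the 60-second safety margin are my assumptions:

```python
import time

class RecyclingConnection(object):
    """Reopen the underlying connection before wait_timeout expires."""

    def __init__(self, factory, wait_timeout=28800, margin=60):
        self._factory = factory  # callable returning a fresh connection
        self._recycle_after = wait_timeout - margin
        self._conn = factory()
        self._last_used = time.time()

    def cursor(self):
        # If the connection has been idle long enough that the server
        # may have dropped it, proactively reconnect instead of failing.
        if time.time() - self._last_used > self._recycle_after:
            try:
                self._conn.close()
            except Exception:
                pass  # the connection may already be gone
            self._conn = self._factory()
        self._last_used = time.time()
        return self._conn.cursor()
```

The safety margin exists because "just under wait_timeout" still races against the server; reconnecting a minute early costs little and avoids the error entirely.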