Python SQLAlchemy QueuePool limit overflow

Disclaimer: this page is an English rendering of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me): StackOverflow, original question: http://stackoverflow.com/questions/24956894/

Date: 2020-08-19 05:30:56  Source: igfitidea

Sql Alchemy QueuePool limit overflow

Tags: python, session, sqlalchemy, zope, connection-timeout

Asked by QLands

I have a Sql Alchemy application that is returning TimeOut:


TimeoutError: QueuePool limit of size 5 overflow 10 reached, connection timed out, timeout 30


I read in a different post that this happens when I don't close the session, but I don't know if this applies to my code:


I connect to the database in __init__.py:


from .dbmodels import (
    DBSession,
    Base,
    )

engine = create_engine("mysql://" + loadConfigVar("user") + ":" + loadConfigVar("password") + "@" + loadConfigVar("host") + "/" + loadConfigVar("schema"))

#Sets the engine to the session and the Base model class
DBSession.configure(bind=engine)
Base.metadata.bind = engine

Then in another Python file I gather some data in two functions, using the DBSession that I initialized in __init__.py:


from .dbmodels import DBSession
from .dbmodels import resourcestatsModel

def getFeaturedGroups(max = 1):

    try:
        #Get the number of download per resource
        transaction.commit()
        rescount = DBSession.connection().execute("select resource_id,count(resource_id) as total FROM resourcestats")

        #Move the data to an array
        resources = []
        data = {}
        for row in rescount:
            data["resource_id"] = row.resource_id
            data["total"] = row.total
            resources.append(data)

        #Get the list of groups
        group_list = toolkit.get_action('group_list')({}, {})
        for group in group_list:
            #Get the details of each group
            group_info = toolkit.get_action('group_show')({}, {'id': group})
            #Count the features of the group
            addFesturedCount(resources,group,group_info)

        #Order the FeaturedGroups by total
        FeaturedGroups.sort(key=lambda x: x["total"],reverse=True)

        print FeaturedGroups
        #Move the data of the group to the result array.
        result = []
        count = 0
        for group in FeaturedGroups:
            group_info = toolkit.get_action('group_show')({}, {'id': group["group_id"]})
            result.append(group_info)
            count = count +1
            if count == max:
                break

        return result
    except:
        return []


def getResourceStats(resourceID):
    transaction.commit()
    return DBSession.query(resourcestatsModel).filter_by(resource_id = resourceID).count()

The session variables are created like this:


#Basic SQLAlchemy types
from sqlalchemy import (
    Column,
    Text,
    DateTime,
    Integer,
    ForeignKey
    )
# Use SQLAlchemy declarative type
from sqlalchemy.ext.declarative import declarative_base

#
from sqlalchemy.orm import (
    scoped_session,
    sessionmaker,
    )

#Use Zope's SQLAlchemy transaction manager
from zope.sqlalchemy import ZopeTransactionExtension

#Main plugin session
DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))

Because the session is created in __init__.py and subsequent code just uses it, at which point do I need to close the session? Or what else do I need to do to manage the pool size?


Accepted answer by Minh-Hung Nguyen

You can manage the pool size by adding the parameters pool_size and max_overflow to create_engine:


engine = create_engine("mysql://" + loadConfigVar("user") + ":" + loadConfigVar("password") + "@" + loadConfigVar("host") + "/" + loadConfigVar("schema"), 
                        pool_size=20, max_overflow=0)

Reference is here
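As a quick way to confirm whether connections are actually leaking, you can inspect the pool directly (a sketch, not part of the original answer; it uses SQLite with an explicit QueuePool so it runs without a MySQL server — `engine.pool.status()` and `engine.pool.checkedout()` are standard SQLAlchemy pool APIs):

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

# SQLite stands in for MySQL here; the pool behaviour is the same.
engine = create_engine("sqlite://", poolclass=QueuePool,
                       pool_size=5, max_overflow=0)

conn = engine.connect()                 # checks one connection out
checked_out = engine.pool.checkedout()  # 1 while the connection is held
conn.close()                            # returns it to the pool
after_close = engine.pool.checkedout()  # back to 0

print(engine.pool.status())
```

If the checked-out count keeps growing during normal operation, something is holding connections without returning them to the pool.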


You don't need to close the session, but the connection should be closed after your transaction is done. Replace:


rescount = DBSession.connection().execute("select resource_id,count(resource_id) as total FROM resourcestats")

with:


connection = DBSession.connection()
try:
    rescount = connection.execute("select resource_id,count(resource_id) as total FROM resourcestats")
    #do something
finally:
    connection.close()

Reference is here
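The same check-out / use / close discipline can also be shown as a self-contained sketch (SQLite instead of the question's MySQL, and `text()` wrapping the raw SQL so it works on both SQLAlchemy 1.x and 2.x — the table contents here are illustrative, not from the original code):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

# Set up a stand-in for the resourcestats table.
with engine.begin() as connection:   # commits and closes automatically
    connection.execute(text("CREATE TABLE resourcestats (resource_id INTEGER)"))
    connection.execute(text("INSERT INTO resourcestats VALUES (1), (1), (2)"))

# The context manager guarantees the connection is returned to the pool
# even if the query raises, equivalent to an explicit try/finally.
with engine.connect() as connection:
    rows = connection.execute(text(
        "SELECT resource_id, COUNT(resource_id) AS total "
        "FROM resourcestats GROUP BY resource_id ORDER BY resource_id"
    )).fetchall()

print(rows)
```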


Also, note that MySQL closes connections that have been idle for a certain period (this period can be configured in MySQL; I don't remember the default value), so you should also pass a pool_recycle value when creating your engine.
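For example (a sketch built on the question's own create_engine call; the recycle threshold of one hour is an illustrative choice, picked to stay well below MySQL's wait_timeout, which defaults to 28800 seconds):

```python
engine = create_engine(
    "mysql://" + loadConfigVar("user") + ":" + loadConfigVar("password") +
    "@" + loadConfigVar("host") + "/" + loadConfigVar("schema"),
    pool_size=20,
    max_overflow=0,
    pool_recycle=3600,   # replace connections older than 1 hour,
                         # before MySQL's wait_timeout can drop them
)
```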


Answered by Игор Ра?ачи?

Add the following method to your code. It will automatically close all unused/hanging connections and prevent a bottleneck in your code, especially if you use syntax like Model.query.filter_by(attribute=var).first() together with relationships / lazy loading.


@app.teardown_appcontext
def shutdown_session(exception=None):
    db.session.remove()

Documentation on this is available here: http://flask.pocoo.org/docs/1.0/appcontext/


Answered by Alex

You can also call the engine.dispose() method at the end of the function. This fully closes all currently checked-in database connections.

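A minimal sketch of what dispose() does (again with SQLite standing in for MySQL): idle connections sitting in the pool are closed and discarded, and a fresh, empty pool is created for subsequent connects.

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

engine = create_engine("sqlite://", poolclass=QueuePool, pool_size=5)

conn = engine.connect()
conn.close()                      # connection goes back into the pool
pooled = engine.pool.checkedin()  # 1 idle connection waiting

engine.dispose()                  # closes all checked-in connections
after = engine.pool.checkedin()   # fresh pool, nothing pooled yet
```

Note that dispose() only affects connections currently checked in; anything still checked out is left to be closed by whoever holds it.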