Disclaimer: this page is a translation of a popular Stack Overflow question and its answers, provided under the CC BY-SA 4.0 license. If you reuse it, you must follow the same CC BY-SA terms, link the original, and attribute it to the original authors (not me): Stack Overflow.
Original question: http://stackoverflow.com/questions/32536799/
Python Django Asynchronous Request handling
Asked by Gunjan chhetri
I am working on an application that does heavy data processing to generate a completely new data set, which is then saved to the database. Processing and saving the data take a long time. I want to improve the user experience by redirecting the user to the result page first and doing the saving in the background (perhaps asynchronously). My problem is that the result page needs the newly processed data. Is there a way to run the processing and saving in the background, and to get the processed data back on the result page as soon as processing completes (before the data is saved to the database)?
Accepted answer by Sudip Kafle
Asynchronous tasks can be accomplished in Python using Celery. You can simply push the task to the Celery queue and it will be performed asynchronously. You can then poll from the result page to check whether it has completed.
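The push-then-poll flow the answer describes can be sketched as follows. With Celery you would decorate the function with `@app.task` and call `.delay()`, which returns an `AsyncResult` to poll; here `concurrent.futures` stands in for the broker so the sketch is self-contained, and the function and variable names are illustrative, not from the question.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the Celery worker pool: submit() returns a handle immediately,
# much like task.delay() returns an AsyncResult.
executor = ThreadPoolExecutor(max_workers=2)

def process_data(records):
    # Placeholder for the heavy data-processing step from the question.
    return [r * 2 for r in records]

# "Push the task to the queue" -- the web request is not blocked here.
future = executor.submit(process_data, [1, 2, 3])

# The result page would poll future.done() (AsyncResult.ready() in Celery)
# and fetch the result once it is available.
processed = future.result()
```

The key property is that `submit()` returns immediately, so the view can redirect to the result page while the work continues.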
Another alternative is something like Tornado.
Answered by Gocht
It's the same process as a synchronous request. You will use a view that returns a JsonResponse. The 'tricky' part is on the client side, where you have to make the async call to the view.
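A minimal sketch of the status payload such a JsonResponse-returning view might serialize for the client's polling call. The in-memory `TASKS` registry, the task id, and the field names are all hypothetical; in a real application the state would come from the Celery result backend or a database row.

```python
import json

# Hypothetical in-memory registry; in a real app this would be the Celery
# result backend or a database table keyed by task id.
TASKS = {}

def task_status_payload(task_id):
    """Build the dict a Django JsonResponse would serialize for the poller."""
    task = TASKS.get(task_id)
    if task is None:
        return {"state": "UNKNOWN", "result": None}
    return {"state": task["state"], "result": task.get("result")}

# Simulate a finished task and render the payload the client would receive.
TASKS["42"] = {"state": "SUCCESS", "result": [10, 20]}
payload = json.dumps(task_status_payload("42"))
```

The client-side JavaScript would simply fetch this endpoint on an interval and render the result once `state` reports success.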
Answered by D3V
The long-running task can be offloaded with Celery. You can still get all the updates and results. Your web application code should take care of polling for updates and results. http://blog.miguelgrinberg.com/post/using-celery-with-flask explains how one can achieve this.
Some useful steps:
- Configure Celery with a result backend.
- Execute the long-running task asynchronously.
- Let the task update its state periodically, or when it completes some stage of the job.
- Poll from the web application to get the status/result.
- Display the results on the UI.
It takes some bootstrapping to wire it all together, but once done it can be reused and it is fairly performant.
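The steps above can be sketched end-to-end without a broker: a background worker updates a shared status record at each stage, and the web layer polls it. A plain thread stands in for the Celery worker here so the sketch is self-contained; all names are illustrative.

```python
import threading

# Shared status record -- in Celery this role is played by the result backend.
status = {"stage": "PENDING", "result": None}
lock = threading.Lock()

def long_running_task():
    # Step 3: the task updates its state as it reaches each stage.
    with lock:
        status["stage"] = "PROCESSING"
    data = sum(range(1000))  # placeholder for the heavy processing
    with lock:
        status["stage"] = "SUCCESS"  # processed data is ready before any save
        status["result"] = data

def poll_status():
    # Step 4: the web application polls for the status/result to show on the UI.
    with lock:
        return dict(status)

worker = threading.Thread(target=long_running_task)
worker.start()
worker.join()
```

With Celery the polling side would call `AsyncResult(task_id).state` instead of reading a shared dict, but the control flow is the same.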
Answered by Jonathan Nikkel
Another strategy is to write a threading class that starts up custom management commands you author, to behave as worker threads. This is perhaps a little lighter weight than working with something like Celery, and of course has both advantages and disadvantages. I have also used this technique to sequence/automate migration generation/application during application startup (because it lives in a pipeline). My gunicorn startup script then starts these threads in pre_exec() or when_ready(), etc., as appropriate, and then stops them in on_exit().
# Description: Asynchronous Worker Threading via Django Management Commands
# Lets you run an arbitrary Django management command, either a pre-baked one like migrate,
# or a custom one that you've created, as a worker thread, that can spin forever, or not.
# You can use this to take care of maintenance tasks at start-time, like db migration,
# db flushing, etc, or to run long-running asynchronous tasks.
# I sometimes find this to be a more useful pattern than using something like django-celery,
# as I can debug/use the commands I write from the shell as well, for administrative purposes.
import logging
import os
import threading
import time
from django.core.management import call_command
class DjangoWorkerThread(threading.Thread):
    """
    Initializes a separate thread for running an arbitrary Django management command. This is
    one (simple) way to make asynchronous worker threads. There exist richer, more complex
    ways of doing this in Django as well (django-celery).
    The advantage of this pattern is that you can run the worker from the command line as well,
    via manage.py, for the sake of rapid development, easy testing, debugging, management, etc.
    :param commandname: name of a properly created Django management command, which exists
        inside the app/management/commands folder in one of the apps in your project.
    :param arguments: string containing command line arguments formatted like you would
        when calling the management command via manage.py in a shell
    :param restartwait: integer seconds to wait before restarting the worker if it dies;
        for a once-through command, acts as a thread-loop delay timer
    """
    def __init__(self, commandname, arguments="", restartwait=10, logger=""):
        super(DjangoWorkerThread, self).__init__()
        self.commandname = commandname
        self.arguments = arguments
        self.restartwait = restartwait
        self.name = commandname
        self.event = threading.Event()
        if logger:
            self.l = logger
        else:
            self.l = logging.getLogger('root')

    def run(self):
        """
        Run the management command in a loop until the stop event is set.
        """
        try:
            exceptioncount = 0
            exceptionlimit = 10
            while not self.event.is_set():
                try:
                    if self.arguments:
                        self.l.info('Starting ' + self.name + ' worker thread with arguments ' + self.arguments)
                        call_command(self.commandname, self.arguments)
                    else:
                        self.l.info('Starting ' + self.name + ' worker thread with no arguments')
                        call_command(self.commandname)
                    self.event.wait(self.restartwait)
                except Exception as e:
                    self.l.error(self.commandname + ' Unknown error: {}'.format(str(e)))
                    exceptioncount += 1
                    if exceptioncount > exceptionlimit:
                        self.l.error(self.commandname + " : " + self.arguments + " : Exceeded exception retry limit, aborting.")
                        self.event.set()
        finally:
            self.l.info('Stopping command: ' + self.commandname + " " + self.arguments)

    def stop(self):
        """Nice Stop
        Stop nicely by setting an event.
        """
        self.l.info("Sending stop event to self...")
        self.event.set()
        # Then make sure it's dead... and schwack it harder if not.
        # Kill it with fire! Be mean to your software. It will make you write better code.
        self.l.info("Sent stop event, checking to see if thread died.")
        if self.is_alive():  # isAlive() was removed in Python 3.9
            self.l.info("Still not dead, telling self to murder self...")
            time.sleep(0.1)
            os._exit(1)
def start_worker(command_name, command_arguments="", restart_wait=10, logger=""):
    """
    Starts a background worker thread running a Django management command.
    :param str command_name: the name of the Django management command to run,
        typically a custom command implemented in yourapp/management/commands,
        but can also be used to automate standard Django management tasks
    :param str command_arguments: a string containing the command line arguments
        to supply to the management command, formatted as if one were invoking
        the command from a shell
    """
    if logger:
        l = logger
    else:
        l = logging.getLogger('root')
    # Start the thread
    l.info("Starting worker: " + command_name + " : " + command_arguments + " : " + str(restart_wait))
    worker = DjangoWorkerThread(command_name, command_arguments, restart_wait, l)
    worker.start()
    l.info("Worker started: " + command_name + " : " + command_arguments + " : " + str(restart_wait))
    # Return the thread instance
    return worker

# <----------------------------------------------------------------------------->

def stop_worker(worker, logger=""):
    """
    Gracefully shuts down the worker thread.
    :param threading.Thread worker: the worker thread object
    """
    if logger:
        l = logger
    else:
        l = logging.getLogger('root')
    # Shut down the thread
    l.info("Stopping worker: " + worker.commandname + " : " + worker.arguments + " : " + str(worker.restartwait))
    worker.stop()
    worker.join(worker.restartwait)
    l.info("Worker stopped: " + worker.commandname + " : " + worker.arguments + " : " + str(worker.restartwait))
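Wiring these helpers into a Gunicorn startup script might look like the sketch below. `when_ready` and `on_exit` are real Gunicorn server hooks, but the `myapp.workers` module path and the `process_data` command name are hypothetical placeholders for wherever the code above lives and whatever command you have authored.

```python
# gunicorn.conf.py -- sketch only; assumes start_worker/stop_worker are
# importable from a hypothetical myapp.workers module.
workers = 2

def when_ready(server):
    # Start the management-command worker once the master process is ready.
    from myapp.workers import start_worker
    server.background_worker = start_worker("process_data", restart_wait=10)

def on_exit(server):
    # Stop the worker thread cleanly when Gunicorn shuts down.
    from myapp.workers import stop_worker
    stop_worker(server.background_worker)
```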