Laravel queues getting "killed"

Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/45539032/

Date: 2020-09-14 16:27:48  Source: igfitidea

Tags: laravel, laravel-5, queue

Asked by PeterInvincible

Sometimes when I'm sending over a large dataset to a Job, my queue worker exits abruptly.

// $taskmetas is an array of arrays, each inner array having 90 properties.
$this->dispatch(new ProcessExcelData($excel_data, $taskmetas, $iteration, $storage_path));

The ProcessExcelData job class creates an Excel file using the box/spout package.

  • in the 1st example, $taskmetas has 880 rows - works fine
  • in the 2nd example, $taskmetas has 10,000 rows - exits abruptly

1st example - queue output with a small dataset:

forge@user:~/myapp.com$ php artisan queue:work --tries=1
[2017-08-07 02:44:48] Processing: App\Jobs\ProcessExcelData
[2017-08-07 02:44:48] Processed:  App\Jobs\ProcessExcelData

2nd example - queue output with a large dataset:

forge@user:~/myapp.com$ php artisan queue:work --tries=1
[2017-08-07 03:18:47] Processing: App\Jobs\ProcessExcelData
Killed

I don't get any error messages, the logs are empty, and the job doesn't appear in the failed_jobs table as it does with other errors. The time limit is set to 1 hour, and the memory limit to 2 GB.

Why are my queues abruptly quitting?

Answered by Smruti Ranjan

You can try giving a timeout, e.g. php artisan queue:work --timeout=120

By default the timeout is 60 seconds, so the option above forcefully overrides it.

Answered by Mhmd

I know this is not exactly what you are looking for, but I had the same problem and I suspect it is caused by the OS (I will update this if I find the exact reason). Try

queue:listen

instead of

queue:work

The main difference between the two is that queue:listen runs your job class code fresh for each job (so you don't need to restart your workers or Supervisor after deploying), while queue:work keeps the application cached in memory and runs much faster than queue:listen; in my case the OS could not handle that speed while preparing the queue connection (Redis, in my case).

The queue:listen command runs queue:work internally (you can check this from your running processes in htop or similar).

The reason I suggest trying queue:listen is exactly that slower speed: the OS can work easily at that pace and has no problem handling your queue connection (in my case there were no more silent kills).
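The trade-off described above can be seen directly from the command line; both commands below are standard artisan commands (a sketch, assuming a Laravel app directory):

```shell
php artisan queue:listen   # boots the framework for every job: slower, picks up code changes
php artisan queue:work     # keeps the app bootstrapped in memory: faster, restart after deploys
```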

To find out whether you have the same problem, change your queue driver to "sync" in .env and see whether the job is killed again; if it is not, you know the problem lies in preparing the queue connection for use.
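For the sync-driver test, the relevant key in a Laravel 5.x .env is QUEUE_DRIVER (later versions renamed it to QUEUE_CONNECTION); a minimal sketch:

```shell
# .env - execute jobs immediately in the dispatching process, bypassing Redis
QUEUE_DRIVER=sync
```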

  • to find out whether you have a memory problem, run your queue with the listen method or the sync driver; PHP will then return an error for it, and you can increase the memory limit and test again

  • you can use this line to grant more memory for testing in your code

    ini_set('memory_limit', '1G'); // 1 gigabyte
    

Answered by SpinyMan

Sometimes you work with resource-intensive processes like image conversion or BIG Excel file creation/parsing, and the timeout option is not enough for this. You can set public $timeout = 0; in your job, but it is still killed because of memory(!). By default the memory limit is 128 MB. To fix it, just add the --memory=256 (or higher) option to avoid this problem.

BTW:

The time limit is set to 1 hour, and the memory limit to 2GBs

In your case this applies only to php-fpm, not to the queue worker process.
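--timeout, --memory, and --tries are all real options of queue:work; a sketch combining them to match the 1-hour / 2 GB limits the question describes (values are illustrative):

```shell
# 1-hour timeout and a 2 GB memory cap applied to the worker process itself
php artisan queue:work --tries=1 --timeout=3600 --memory=2048
```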

Answered by Ryan

This worked for me:

这对我有用:

I had a Supervisord job:

  • Job ID: Job_1
  • Queue: Default
  • Processes: 1
  • Timeout: 60
  • Sleep Time: 3
  • Tries: 3

https://laravel.com/docs/5.6/queues#retrying-failed-jobs says:

To delete all of your failed jobs, you may use the queue:flush command:

php artisan queue:flush

So I did that (after running php artisan queue:failed to see that there were failed jobs).

Then I deleted my Supervisord job and created a new one like it but with 360 second timeout.

It was also important to restart the Supervisord job (within the control panel of my Cloudways app) and to restart the entire Supervisord process (within the control panel of my Cloudways server).
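On servers where Supervisor is configured by hand rather than through a panel, the job above roughly corresponds to a program section like the following; the paths, user, and program name are assumptions, and the timeout lives on the queue:work command itself:

```ini
[program:laravel-worker]
; paths and user are assumptions - adjust for your server
command=php /home/forge/myapp.com/artisan queue:work --tries=3 --sleep=3 --timeout=360
process_name=%(program_name)s_%(process_num)02d
numprocs=1
user=forge
autostart=true
autorestart=true
stderr_logfile=/home/forge/myapp.com/storage/logs/worker.log
```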

After trying to run my job again, I noticed it in the failed_jobs table and saw that the exception was related to cache file permissions, so I clicked the Reset Permission button for my app in my Cloudways dashboard.

Answered by Owais Alam

There are two possibilities: you are either running out of memory or exceeding the execution time.

Try $ dmesg | grep php. This will show you more details.
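A plain "Killed" with empty logs is the classic signature of the Linux OOM killer, which records its victims in the kernel ring buffer; a slightly broader search pattern can help (a sketch; dmesg may require root on some distributions):

```shell
# look for OOM-killer activity around the time the worker died
sudo dmesg -T | grep -iE 'out of memory|oom|killed process'
```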

Increase max_execution_time and/or memory_limit in your php.ini file.
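Note that the PHP CLI (which runs queue workers) usually reads its own php.ini, separate from the php-fpm one; php --ini shows which file is loaded. Illustrative values only:

```ini
; php.ini used by the CLI worker - illustrative values
memory_limit = 2G
max_execution_time = 3600
```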