
Why Are My Laravel Queue Jobs Failing After 60 Seconds?


The Situation

I'm using Laravel Queues to process large numbers of media files; an individual job is expected to take minutes (let's say up to an hour).

I am using Supervisor to run my queue, and I am running 20 processes at a time. My supervisor config file looks like this:

[program:duplitron-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/duplitron/artisan queue:listen database --timeout=0 --memory=500 --tries=1
autostart=true
autorestart=true
user=duplitron
numprocs=20
redirect_stderr=true
stdout_logfile=/var/www/duplitron/storage/logs/duplitron-worker.log

There are a few oddities that I don't know how to explain or correct:

  1. My jobs fairly consistently fail after running for 60 to 65 seconds.
  2. After being marked as failed, the job continues to run, and it eventually resolves successfully.
  3. When I run the failed task in isolation to find the cause of the issue, it succeeds just fine.

I strongly believe this is a timeout issue; however, I was under the impression that --timeout=0 would result in an unlimited timeout.

The Question

How can I prevent this temporary "failure" job state? Are there other places where a queue timeout might be invoked that I'm not aware of?


Answer

It turns out that, in addition to the worker's timeout, there is an expire setting defined in config/queue.php:

    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        'expire' => 60,
    ],

Changing that to a higher value did the trick.


UPDATE: In newer versions of Laravel, this parameter is called retry_after:

    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        'retry_after' => 60,
    ],
source: stackoverflow.com