flush_queue
To run the jobs in the queue, use the flush_queue command:
python manage.py flush_queue
flush_queue will run once through the jobs that are scheduled to run at that time, but will exit early if any job throws an exception. Normally you would use it from an external script that simply keeps re-running the command:
while :; do ( python manage.py flush_queue && sleep 10 ) ; done
Jobs are executed in priority order first (higher numbers execute earlier), then by scheduled time (unscheduled jobs go last, and only jobs whose scheduled time has arrived will run), and finally by ID (which should be the order in which they were added). A failed job will be re-scheduled for later execution.
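Conceptually, that ordering can be expressed as a single ORM query. The sketch below is purely illustrative and is not the library's implementation; the Job model and its priority, scheduled_at and id fields are hypothetical names used only for this example.

# Illustrative sketch only: the real model and field names may differ.
from django.db.models import F, Q
from django.utils import timezone


def runnable_jobs(Job):
    """Return jobs that are due to run, in the order described above."""
    now = timezone.now()
    return (
        Job.objects
        # Only jobs with no schedule or whose scheduled time has arrived.
        .filter(Q(scheduled_at__isnull=True) | Q(scheduled_at__lte=now))
        # Higher priority first, then earliest scheduled time with
        # unscheduled jobs last, then insertion (ID) order.
        .order_by("-priority", F("scheduled_at").asc(nulls_last=True), "id")
    )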
To limit problems caused by potential memory leaks (for example, from running with DEBUG=True), the number of jobs processed in a single run is limited, by default to 300. The limit can be changed with the -j/--jobs option:
python manage.py flush_queue -j 300
python manage.py flush_queue --jobs 300
It is also possible to run more than one queue processor, with each one taking a different block of jobs to execute. You need to tell each run which block of jobs it should pick and how many runners are being used in total:
python manage.py flush_queue -w 1 -o 2
python manage.py flush_queue -w 2 -o 2
python manage.py flush_queue --which 1 --outof 2
python manage.py flush_queue --which 2 --outof 2
Jobs are allocated to workers based on their job IDs.
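One simple way to implement such a split is a modulo over the job ID, as in the sketch below. The actual partitioning scheme used by flush_queue is not documented here, so the formula is an assumption made for illustration.

# Assumption: a plain modulo split over job IDs; the library's actual
# partitioning scheme may differ.
def belongs_to_worker(job_id: int, which: int, outof: int) -> bool:
    """True if the job with this ID should be handled by worker `which`
    (1-based) out of `outof` workers."""
    return job_id % outof == which - 1


# Example: with --outof 2, even IDs go to worker 1 and odd IDs to worker 2.
assert belongs_to_worker(10, which=1, outof=2)
assert belongs_to_worker(11, which=2, outof=2)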
queue_health
Queue health can be checked via the queue_health command:
python manage.py queue_health
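If you want the same check from inside the application (for example, from a monitoring endpoint), the command can also be invoked programmatically with Django's call_command. The format of its output is not described here, so the sketch below only captures it as text.

# Sketch: run queue_health from Python and capture whatever it prints.
# The output format is not assumed here; inspect it for your setup.
from io import StringIO

from django.core.management import call_command


def queue_health_report() -> str:
    out = StringIO()
    call_command("queue_health", stdout=out)
    return out.getvalue()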