Description
I am seeing an interesting issue: tasks are getting lost when the worker is started with the following settings:

```
taskiq worker app.worker:broker app.tasks --max-async-tasks=5 --max-tasks-per-child=5 --wait-tasks-timeout=600 --reload
```
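For context, here is a minimal sketch of what the app.worker and app.tasks modules behind that command look like. This is simplified; the broker class (shown with taskiq_redis's ListQueueBroker) and the task body are only illustrative stand-ins for my real setup, where tasks run for a while and stream events to Redis:

```python
# --- app/worker.py (simplified sketch; the real broker is Redis-backed,
# ListQueueBroker is used here only as a stand-in) ---
from taskiq_redis import ListQueueBroker

broker = ListQueueBroker(url="redis://redis:6379/0")


# --- app/tasks.py (illustrative stand-in for the real tasks) ---
import asyncio

from app.worker import broker


@broker.task
async def long_running_task(run_id: str) -> None:
    # Pretend to do long-running work, like the real tasks that
    # stream events to Redis for a given run.
    await asyncio.sleep(30)
```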
I ran this test deliberately to see how worker restarts behave, and I am a bit worried about the results: some tasks are never picked up again and just remain sitting in the queue.
Note: I have 3 replicas running via the docker-compose replicas setting, and each one uses --reload (1 worker per replica).
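To trigger the issue, I enqueue a batch of tasks roughly like the sketch below (the script name, task, and the count of 50 are illustrative; the point is just to queue more work than the workers can finish before they hit --max-tasks-per-child and restart):

```python
# send_tasks.py -- illustrative reproduction sketch: enqueue enough tasks
# that each worker hits --max-tasks-per-child and restarts mid-backlog.
import asyncio
import uuid

from app.tasks import long_running_task
from app.worker import broker


async def main() -> None:
    await broker.startup()
    try:
        for _ in range(50):
            # kiq() sends the task to the broker's queue.
            await long_running_task.kiq(str(uuid.uuid4()))
    finally:
        await broker.shutdown()


if __name__ == "__main__":
    asyncio.run(main())
```

While the workers churn through restarts, logs like the following show up on the replicas: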
```
lev-cortex-worker-3 | INFO [11/30/25 15:21:36] Streamed 120 events to Redis for run-stream: 434012e5-0b54-43c8-b568-22452d27980a
lev-cortex-worker-3 | INFO [11/30/25 15:21:36] No more tasks to wait for. Shutting down.
lev-cortex-worker-3 | INFO [11/30/25 15:21:36] The runner is stopped.
lev-cortex-worker-3 | WARNING [11/30/25 15:21:36] Shutting down the broker.
lev-cortex-worker-3 | [2025-11-30 15:21:37,292][taskiq.process-manager][INFO ][MainProcess] worker-0 is dead. Scheduling reload.
lev-cortex-worker-3 | [2025-11-30 15:21:38,354][taskiq.process-manager][INFO ][MainProcess] Process worker-0 restarted with pid 33
```
At some point I end up with a bunch of tasks left in the queue. If I don't use --max-tasks-per-child, I think it all works, except that, of course, I don't get the worker restarts I am trying to test.
Also, the shutdown logs look strange to me: those don't seem to be the logs for a max-tasks-per-child shutdown, right?