I've pushed my initial implementation for this to #729 for comparison. Feel free to take anything you deem useful. :)
Hi @rosa, I've published a benchmark harness that answers three questions: https://github.com/crmne/solid_queue_bench (Solid Queue: async vs thread).

DB pool ceiling (stress suite): the cleanest result. The headline suite caps total concurrency to keep comparisons fair; the stress suite removes that cap.

Async::Job vs Solid Queue: Async::Job + Redis is faster across all shared tests (+7% to +213%), but that's a different backend entirely -- a throughput ceiling reference, not a same-backend comparison.

Bottom line: The main
I'm running this code in production at Chat with Work right now. Switched from
Summary
Hi @rosa, I finally had some time to work on this after our earlier conversation about async worker execution mode.
This PR is a first implementation of async worker execution mode for Solid Queue. I'm also running benchmarks in parallel and can follow up with numbers once I have stable results.
Workers can now be configured with `execution_mode: :async` (or `:fiber`), which runs claimed jobs as fibers on a single async reactor thread and bounds concurrency with `capacity`/`fibers` instead of a thread pool.

This is separate from the supervisor's `async` mode. Supervisor mode still controls whether managed processes run in forks or threads; this change adds an async execution backend for the workers themselves.

What Changed
- New `SolidQueue::ExecutionPools::AsyncPool` alongside the existing `SolidQueue::ExecutionPools::ThreadPool`
- `execution_mode: :async`, with `:fiber` as a configuration alias
- `capacity`/`fibers` as the clearer async-worker concurrency options

Configuration / Validation
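As a rough sketch, a worker configured this way might look like the following in `config/queue.yml`. The `execution_mode` and `capacity` keys are the options this PR proposes; the queue name and sizes are made up for illustration:

```yaml
# config/queue.yml -- illustrative sketch only; execution_mode and capacity
# follow this PR's proposed options, queue names and numbers are hypothetical.
production:
  workers:
    - queues: "*"
      execution_mode: :async   # or :fiber, the configuration alias
      capacity: 50             # bounds in-flight fibers (instead of threads:)
```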
This PR also tightens async worker validation:
- Errors out when the `async` gem is not available
- Rejects `threads:` and requires `capacity` or `fibers` instead

Database pool guidance is now Rails-version-aware.
The README now documents the version-specific DB-pool guidance and calls out sticky Active Record APIs that can still pin connections.
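To make the validation rules above concrete, here is a minimal sketch of what such a check could look like. The method name, error messages, and option handling are assumptions for illustration, not the PR's actual code:

```ruby
# Hypothetical sketch of async worker option validation -- the method name
# and error messages are assumptions, not Solid Queue's actual code.
def validate_async_worker_options!(options)
  mode = options[:execution_mode]
  return options unless [:async, :fiber].include?(mode)

  begin
    require "async" # fiber execution depends on the async gem
  rescue LoadError
    raise ArgumentError, "execution_mode: :async requires the async gem"
  end

  if options.key?(:threads)
    raise ArgumentError,
          "use capacity or fibers instead of threads: for async workers"
  end

  options
end
```

Rejecting `threads:` outright keeps the concurrency model unambiguous: an async worker never spawns a thread pool, so a `threads:` setting would otherwise be silently ignored.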
Why
The goal is to support cooperative, mostly I/O-bound job execution with lower thread overhead and a clearer concurrency model, while keeping the feature explicit and safe to configure.
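To illustrate the cooperative model with plain Ruby (a standalone illustration using bare fibers, not the PR's implementation, which builds on the async gem's reactor): each fiber runs until it would block, yields control, and is resumed later, so many in-flight jobs can share one thread:

```ruby
# Cooperative scheduling with bare Ruby fibers -- an illustration of the
# model only. Fiber.yield stands in for a real I/O wait.
jobs    = [1, 2, 3]
results = []

fibers = jobs.map do |n|
  Fiber.new do
    Fiber.yield        # pretend to wait on I/O, handing control back
    results << n * 10  # resume point: finish the "job"
  end
end

fibers.each(&:resume)  # start all jobs; each pauses at its "I/O wait"
fibers.each(&:resume)  # I/O "completed": each job runs to completion
results # => [10, 20, 30]
```

In the async pool, the reactor does this yielding automatically around real I/O, and `capacity` bounds how many such fibers are in flight at once.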
Tests
Added and updated coverage for: