helpers
MultiQueueLazyWorker ¶
MultiQueueLazyWorker(running_slots: int = 1)
A scheduling worker that controls access to an (implicit) shared resource.
The shared resource is not managed by the worker itself, but assumed
to be accessed by the coroutine given to `run`. The maximum number of
concurrent accesses is controlled by the `running_slots` parameter.
Items from the queues are scheduled fairly, favouring the queue with the least recent access to the resource. If the resource is equally contended by all queues, this results in round-robin scheduling.
Each queue has a fixed size of one, where new requests (by calls to `run`)
displace old ones, similar to the mechanism used by the `LazyWorker`.
Note: The implementation is not thread-safe, so it should not be used to synchronize access from different threads (e.g., when using a multi-threaded async runtime).
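The fairness rule above can be sketched as follows. This is a hypothetical illustration, not the worker's actual bookkeeping; the queue names and access times are made up:

```python
# Hypothetical sketch: the worker favours the queue whose last access
# to the shared resource is least recent.
last_access = {"a": 3.0, "b": 1.0, "c": 2.0}  # made-up access times per queue

# Pick the queue with the oldest (smallest) last-access time.
next_queue = min(last_access, key=last_access.get)
print(next_queue)  # "b" accessed the resource least recently
```

When all queues are equally contended, each access pushes the serviced queue to the back of this ordering, which yields round-robin behaviour.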
Initialize the worker with a fixed number of running slots.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `running_slots` | `int` | Maximum number of coroutines running concurrently. | `1` |
run async ¶
run(queue_id: str, coro: Coroutine[Any, Any, _T]) -> _T | None
Try to run the given coroutine. A later call with the same
`queue_id` will displace this one; in that case `None` is returned.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `queue_id` | `str` | Uniquely identifies the queue that `coro` will be pushed to. | *required* |
| `coro` | `Coroutine[Any, Any, _T]` | Coroutine which accesses a shared resource. | *required* |
Returns:
| Type | Description |
|---|---|
| `_T \| None` | `None` if a later call displaced this one; the return value of the coroutine otherwise. |
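The displacement contract of `run` can be illustrated with a minimal, self-contained stand-in. This is a hedged sketch, not the actual `MultiQueueLazyWorker` implementation (the `SketchWorker` class and its internals are invented for illustration); it only models the rule that each queue holds one pending request and a newer call displaces the older one, which then returns `None`:

```python
import asyncio


class SketchWorker:
    # Hypothetical stand-in, NOT the library's implementation: each
    # queue keeps only the most recent request; displaced calls return None.
    def __init__(self):
        self._latest = {}  # queue_id -> (token, coroutine)

    async def run(self, queue_id, coro):
        token = object()
        self._latest[queue_id] = (token, coro)
        await asyncio.sleep(0)  # yield, so a later call can displace us
        latest_token, latest_coro = self._latest[queue_id]
        if latest_token is not token:
            coro.close()  # we were displaced; discard our coroutine
            return None
        return await latest_coro


async def main():
    worker = SketchWorker()

    async def job(x):
        await asyncio.sleep(0)
        return x

    # Two calls on the same queue: the first is displaced by the second.
    first = asyncio.create_task(worker.run("q", job(1)))
    second = asyncio.create_task(worker.run("q", job(2)))
    return await asyncio.gather(first, second)


print(asyncio.run(main()))  # [None, 2]
```

The displaced coroutine is closed without being awaited, matching the documented behaviour that only its caller's `run` returns `None` while the most recent request goes on to access the resource.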