order: 2
---

- import { WranglerConfig } from "~/components";
+ import { WranglerConfig, TypeScriptExample, Tabs, TabItem } from "~/components";

## Batching

@@ -62,7 +62,9 @@ You can acknowledge individual messages within a batch by explicitly acknowledgi

To explicitly acknowledge a message as delivered, call the `ack()` method on the message.

- ```ts title="index.js"
+ <Tabs syncKey="workersExamples">
+ <TypeScriptExample filename="index.ts" omitTabs={true}>
+ ```ts
export default {
	async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) {
		for (const msg of batch.messages) {
@@ -73,10 +75,26 @@ export default {
	},
};
```
+ </TypeScriptExample>
+ <TabItem label="Python" icon="seti:python">
+ ```python
+ from workers import WorkerEntrypoint
+
+ class Default(WorkerEntrypoint):
+     async def queue(self, batch):
+         for msg in batch.messages:
+             # TODO: do something with the message
+             # Explicitly acknowledge the message as delivered
+             msg.ack()
+ ```
+ </TabItem>
+ </Tabs>

You can also call `retry()` to explicitly force a message to be redelivered in a subsequent batch. This is referred to as a "negative acknowledgement". It is particularly useful when you want to process the rest of the messages in a batch without throwing an error that would force the entire batch to be redelivered.

- ```ts title="index.ts"
+ <Tabs syncKey="workersExamples">
+ <TypeScriptExample filename="index.ts" omitTabs={true}>
+ ```ts
export default {
	async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) {
		for (const msg of batch.messages) {
@@ -86,6 +104,19 @@ export default {
	},
};
```
+ </TypeScriptExample>
+ <TabItem label="Python" icon="seti:python">
+ ```python
+ from workers import WorkerEntrypoint
+
+ class Default(WorkerEntrypoint):
+     async def queue(self, batch):
+         for msg in batch.messages:
+             # TODO: do something with the message that fails
+             msg.retry()
+ ```
+ </TabItem>
+ </Tabs>

You can also acknowledge or negatively acknowledge messages at the batch level with `ackAll()` and `retryAll()`. Calling `ackAll()` on the batch of messages (`MessageBatch`) delivered to your consumer Worker has the same behaviour as a consumer Worker that successfully returns (does not throw an error).

@@ -133,6 +164,8 @@ Configuring delivery and retry delays via the `wrangler` CLI or when [developing

To delay a message or batch of messages when sending to a queue, you can provide a `delaySeconds` parameter when sending a message.

+ <Tabs syncKey="workersExamples">
+ <TypeScriptExample filename="index.ts" omitTabs={true}>
```ts
// Delay a singular message by 600 seconds (10 minutes)
await env.YOUR_QUEUE.send(message, { delaySeconds: 600 });
@@ -144,6 +177,21 @@ await env.YOUR_QUEUE.sendBatch(messages, { delaySeconds: 300 });
// If there is a global delay configured on the queue, ignore it.
await env.YOUR_QUEUE.sendBatch(messages, { delaySeconds: 0 });
```
+ </TypeScriptExample>
+ <TabItem label="Python" icon="seti:python">
+ ```python
+ # Delay a singular message by 600 seconds (10 minutes)
+ await env.YOUR_QUEUE.send(message, delaySeconds=600)
+
+ # Delay a batch of messages by 300 seconds (5 minutes)
+ await env.YOUR_QUEUE.sendBatch(messages, delaySeconds=300)
+
+ # Do not delay this message.
+ # If there is a global delay configured on the queue, ignore it.
+ await env.YOUR_QUEUE.sendBatch(messages, delaySeconds=0)
+ ```
+ </TabItem>
+ </Tabs>

You can also configure a default, global delay on a per-queue basis by passing `--delivery-delay-secs` when creating a queue via the `wrangler` CLI:

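As a sketch, the command likely takes this shape; the queue name is a placeholder, and the exact flag syntax should be confirmed with `wrangler queues create --help`:

```sh
# Create a queue whose messages are all delayed by 300 seconds
# (5 minutes) by default ("my-delayed-queue" is a placeholder)
npx wrangler queues create my-delayed-queue --delivery-delay-secs=300
```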
@@ -158,7 +206,9 @@ When [consuming messages from a queue](/queues/reference/how-queues-works/#consu

To delay an individual message within a batch:

- ```ts title="index.ts"
+ <Tabs syncKey="workersExamples">
+ <TypeScriptExample filename="index.ts" omitTabs={true}>
+ ```ts
export default {
	async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) {
		for (const msg of batch.messages) {
@@ -169,10 +219,26 @@ export default {
	},
};
```
+ </TypeScriptExample>
+ <TabItem label="Python" icon="seti:python">
+ ```python
+ from workers import WorkerEntrypoint
+
+ class Default(WorkerEntrypoint):
+     async def queue(self, batch):
+         for msg in batch.messages:
+             # Mark for retry and delay a singular message
+             # by 3600 seconds (1 hour)
+             msg.retry(delaySeconds=3600)
+ ```
+ </TabItem>
+ </Tabs>

To delay a batch of messages:

- ```ts title="index.ts"
+ <Tabs syncKey="workersExamples">
+ <TypeScriptExample filename="index.ts" omitTabs={true}>
+ ```ts
export default {
	async queue(batch: MessageBatch, env: Env, ctx: ExecutionContext) {
		// Mark for retry and delay a batch of messages
@@ -181,6 +247,19 @@ export default {
	},
};
```
+ </TypeScriptExample>
+ <TabItem label="Python" icon="seti:python">
+ ```python
+ from workers import WorkerEntrypoint
+
+ class Default(WorkerEntrypoint):
+     async def queue(self, batch):
+         # Mark for retry and delay a batch of messages
+         # by 600 seconds (10 minutes)
+         batch.retryAll(delaySeconds=600)
+ ```
+ </TabItem>
+ </Tabs>

You can also choose to set a default retry delay for any messages that are retried, whether due to implicit failure or an explicit call to `retry()`. This is set at the consumer level, and is supported in both push-based (Worker) and pull-based (HTTP) consumers.

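For a push-based consumer, the Wrangler configuration might look roughly like this. Treat it as a sketch: the `retry_delay` key name and its placement under `[[queues.consumers]]` are assumptions here, so verify them against the Queues configuration reference:

```toml
# Sketch only: key names are assumptions, confirm against the
# Queues configuration reference before use.
[[queues.consumers]]
queue = "my-queue"
max_retries = 10
# Delay every retried message by 60 seconds by default
retry_delay = 60
```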
@@ -233,6 +312,8 @@ Each message delivered to a consumer includes an `attempts` property that tracks

For example, to generate an [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) delay for a message, you can create a helper function that calculates it for you:

+ <Tabs syncKey="workersExamples">
+ <TypeScriptExample filename="index.ts" omitTabs={true}>
```ts
const calculateExponentialBackoff = (
	attempts: number,
@@ -241,10 +322,20 @@ const calculateExponentialBackoff = (
	return baseDelaySeconds ** attempts;
};
```
+ </TypeScriptExample>
+ <TabItem label="Python" icon="seti:python">
+ ```python
+ def calculate_exponential_backoff(attempts, base_delay_seconds):
+     return base_delay_seconds ** attempts
+ ```
+ </TabItem>
+ </Tabs>

In your consumer, you then pass the value of `msg.attempts` and your desired delay factor as the argument to `delaySeconds` when calling `retry()` on an individual message:

- ```ts title="index.ts"
+ <Tabs syncKey="workersExamples">
+ <TypeScriptExample filename="index.ts" omitTabs={true}>
+ ```ts
const BASE_DELAY_SECONDS = 30;

export default {
@@ -262,9 +353,30 @@ export default {
	},
};
```
+ </TypeScriptExample>
+ <TabItem label="Python" icon="seti:python">
+ ```python
+ from workers import WorkerEntrypoint
+
+ BASE_DELAY_SECONDS = 30
+
+ class Default(WorkerEntrypoint):
+     async def queue(self, batch):
+         for msg in batch.messages:
+             # Mark for retry with a delay that grows
+             # exponentially with the number of attempts
+             msg.retry(
+                 delaySeconds=calculate_exponential_backoff(
+                     msg.attempts,
+                     BASE_DELAY_SECONDS,
+                 )
+             )
+ ```
+ </TabItem>
+ </Tabs>

## Related

- Review the [JavaScript API](/queues/configuration/javascript-apis/) documentation for Queues.
- Learn more about [How Queues Works](/queues/reference/how-queues-works/).
- - Understand the [metrics available](/queues/observability/metrics/) for your queues, including backlog and delayed message counts.
+ - Understand the [metrics available](/queues/observability/metrics/) for your queues, including backlog and delayed message counts.