Musings on Consuming Queues

Thank you for all the intriguing new dev launches this week. I welcomed the public beta of Queues in particular, as queues are, in my experience, a core building block for decoupled, resilient applications.

In that regard I would like to raise some questions, which I will illustrate with the following scenario.

Let’s say we want to build a distributed multi-player online game. It has the following core parameters.

  • Players join one of a number of teams (team red, team blue, team green, …)
  • Players go up against each other in group matches, e.g. 3v3. Each match is managed centrally by one Durable Object per match, which connects the players via WebSocket.
  • Once a match concludes, the outcome of that match will influence all other running matches of the two involved teams.

Queues seem like an ideal fit for that last coordination step. But this leads to the following questions, which I couldn't find answers to in the provided documentation.

  1. If multiple Workers consume one queue, how is that handled? Will each of them get all messages delivered at least once? And how does billing work in that case?
  2. Are there ways to create topics, similar to Kafka, e.g. one per team? Or, alternatively, will it be possible to create queues programmatically from within Workers?
  3. If I set max_batch_size = 1, will I get each message ASAP? What about max_batch_timeout = 0? What is the estimated delivery time for queues (long term): sub-second, seconds, sub-minute, …?
  4. How will scaling work? That is, if a queue saturates one Worker's ability to process data, will another copy of it be spawned? Any word on what throughput you aim to achieve?

Thank you for your time :slight_smile:

There can only be one consumer per queue, not multiple.

Today: you could call the API and create a new queue, but:

  1. We only allow 10 queues per account (Cloudflare Queues - Limits · Cloudflare Queues); this is a limit we will raise in the coming weeks.
  2. You have to bind queues to a Worker ahead of time (at deploy time).
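For reference, binding a queue to a Worker at deploy time looks roughly like this in wrangler.toml; the queue and binding names below are purely illustrative, not from this thread:

```toml
# wrangler.toml — illustrative sketch
name = "match-coordinator"

# Producer binding: lets the Worker enqueue messages
# via env.MATCH_EVENTS.send(...)
[[queues.producers]]
queue = "match-events"
binding = "MATCH_EVENTS"

# Consumer config: this Worker's queue() handler receives
# batches from the same queue
[[queues.consumers]]
queue = "match-events"
max_batch_size = 1
max_batch_timeout = 0
```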

See Batching and Retries · Cloudflare Queues: if you set a batch size of 1 and a timeout of 0, we will deliver messages as fast as possible. That's typically in the low hundreds of milliseconds right now.
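To make that concrete, here is a minimal consumer sketch. The `queue()` handler shape and the `batch.messages` iteration follow the Workers Queues API; the per-message logic (collecting payloads into `env.processed`) is purely illustrative. With max_batch_size = 1, each batch will usually contain a single message.

```javascript
// Minimal Queues consumer sketch (illustrative logic).
const worker = {
  async queue(batch, env) {
    for (const msg of batch.messages) {
      // msg.body is the payload the producer passed to send();
      // msg.id and msg.timestamp are assigned by Queues.
      env.processed.push(msg.body);
    }
  },
};

export default worker;
```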

Right now there is only one consumer instance. That will change over the course of the beta as we allow concurrency (which means multiple instances of the same consumer, not multiple unique consumers). Throughput: thousands of messages per second is the goal. You can achieve more with batching, as batching (necessarily) reduces the number of internal operations needed to deliver messages.
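On the producer side, batching looks like the sketch below: several match outcomes are enqueued in one call instead of one `send()` per message. `sendBatch()` is the Queues producer API; the `env.MATCH_EVENTS` binding name and the helper function are assumptions for illustration.

```javascript
// Producer-side batching sketch (binding name is illustrative).
async function publishOutcomes(env, outcomes) {
  // Each entry becomes one queued message; body can be any
  // structured-clonable value.
  await env.MATCH_EVENTS.sendBatch(outcomes.map((o) => ({ body: o })));
}
```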


Thank you msilverlock for the detailed answer. :nerd_face:
While that’s not everything I was hoping for, these facts greatly help in planning the architecture nonetheless.

I very much look forward to whatever improvements you may bring in the long term, as queues are a key building block for decoupling and scaling, which is inherent to the serverless paradigm.
