Feature Request: Replace the subrequest limit with billing for additional subrequests

I’m building a feed reader with Cloudflare Workers. I use a cron trigger to update feeds, which worked at first but led to the “Too many subrequests” error as soon as I had more than 50 feeds. I could split the updates across different invocations, but that seems ugly and wasteful.

EDIT: Splitting the updates across different invocations turned out to be harder than expected, because “you can’t call a worker of the same zone from within a worker running on that specific zone”. I guess I could use two workers sending requests back and forth, but that’s really too ugly to contemplate. So it looks like I’ll have to use another service to do the updates. That really sucks.
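For reference, the naive loop looks something like this sketch (the `FEEDS` KV binding and key names are made up for illustration); each `fetch()` counts as a subrequest, so it dies past 50 feeds:

```ts
// Minimal sketch of the naive update loop. FEEDS is a hypothetical KV
// namespace binding holding the feed list and the fetched feed bodies.
export default {
  async scheduled(_controller: ScheduledController, env: { FEEDS: KVNamespace }) {
    const urls: string[] = JSON.parse((await env.FEEDS.get("feed-list")) ?? "[]");
    for (const url of urls) {
      // Each fetch() here is a subrequest; once the loop passes the cap,
      // the invocation fails with "Too many subrequests".
      const res = await fetch(url);
      await env.FEEDS.put(`feed:${url}`, await res.text());
    }
  },
};
```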


You can call the same worker, just not on the same zone. You could call the workers.dev subdomain 50% of the time, alternating between them. It isn’t that hard to check the current URL and call the other; it’s one line of code.
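Something like this, assuming the worker answers both on a zone hostname and on its workers.dev subdomain (both hostnames below are placeholders):

```ts
export default {
  async fetch(request: Request): Promise<Response> {
    // If this invocation is running on the zone, call the workers.dev URL,
    // and vice versa, so the subrequest never targets the current zone.
    const onZone = new URL(request.url).hostname === "feeds.example.com";
    await fetch(onZone
      ? "https://feedreader.example.workers.dev/update"
      : "https://feeds.example.com/update");
    return new Response("handed off");
  },
};
```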


Though I hear there’s been work on that and it’s something to be on the lookout for :eyes:


Interesting, but it still seems easier to run a scheduler on another server and create a worker with endpoints to list and update the feeds.

There is a native cron scheduler in Workers; I am not sure if it might be what you are looking for.
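In module syntax the handler looks like this:

```ts
export default {
  // Invoked by cron triggers rather than by HTTP requests.
  async scheduled(controller: ScheduledController): Promise<void> {
    console.log(`cron "${controller.cron}" fired at ${controller.scheduledTime}`);
  },
};
```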

Yes, that’s what I was using, but the 50-subrequest limit also applies to that.

Yeah, that is true. I believe there is a limit of 3 cron jobs per worker, so 3 runs per “slot”, which works out to up to 180 runs a minute.
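The schedules go in wrangler.toml; the three expressions below are just an example spread:

```toml
# Up to three cron expressions per worker; all of them invoke the same
# scheduled() handler, which can branch on controller.cron if needed.
[triggers]
crons = ["*/1 * * * *", "*/2 * * * *", "*/5 * * * *"]
```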

I know there is some work being done to change that limit, but there’s no ETA nor confirmation of anything.

But the real problem isn’t the limit; it’s that doing 180 runs per minute seems wasteful now, when I have only 110 feeds to update. It also seems silly to manually adjust how often to trigger the worker based on how many feeds I have, and all of this requires me to keep track of which feeds I’ve already updated, which is annoying too.

Idea: can’t you run a cron job, at whatever frequency you need, which calls the worker (or another worker) in batches (up to 50 minus the calls you need to make to save the updates, unless it’s KV)?
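A sketch of that, assuming the feed list and a position cursor live in KV (the `FEEDS` binding and key names are made up); to be safe, the batch size leaves headroom for the KV writes, since those may count toward the cap as well:

```ts
const BATCH = 20; // leave headroom: fetches and KV writes both eat into the cap

export default {
  async scheduled(_: ScheduledController, env: { FEEDS: KVNamespace }) {
    const urls: string[] = JSON.parse((await env.FEEDS.get("feed-list")) ?? "[]");
    const start = Number((await env.FEEDS.get("cursor")) ?? "0");

    // Update only the current batch of feeds in this invocation.
    for (const url of urls.slice(start, start + BATCH)) {
      const res = await fetch(url);
      await env.FEEDS.put(`feed:${url}`, await res.text());
    }

    // Advance the cursor, wrapping around once every feed has been visited.
    const next = start + BATCH >= urls.length ? 0 : start + BATCH;
    await env.FEEDS.put("cursor", String(next));
  },
};
```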

I tried that, but it’s a mess… I got it to work on Miniflare, but it didn’t work with the actual Workers. (I actually thought at first that workers not being able to send fetch requests to the same worker in the same zone was the reason it didn’t work, but now I’m not sure how that applies to cron triggers; maybe the problem was something else.)

Even if it worked, it would only work for 2,500 feeds (50 batches of 50), and that’s certainly a ceiling I hope to get beyond: I already have 110 feeds using it by myself, so with only 25 users using it as much as I do I’d exceed it. After that I would have to call workers that in turn call other workers…

Random question: why not use something like a message queue (from another provider, for now at least) and call the worker that way? Or run a third-party function (Google’s, Amazon’s, whatever) that calls the worker? Or even rent a very cheap server that calls them as needed? You can find some for around $2/month.

Yes, my plan for now is to rent a server and have a Cloudflare Worker with two endpoints: one that lists all the feeds and one that updates them in KV. The server sends a request to the first endpoint, fetches all the feeds itself, and then sends the results to the second endpoint.
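Sketched out, it would be something like this (same hypothetical `FEEDS` binding as above; the server batches its POSTs so the KV writes per request stay well under the limits):

```ts
export default {
  async fetch(request: Request, env: { FEEDS: KVNamespace }): Promise<Response> {
    const { pathname } = new URL(request.url);

    // Endpoint 1: the external server asks for the list of feed URLs.
    if (request.method === "GET" && pathname === "/feeds") {
      return new Response((await env.FEEDS.get("feed-list")) ?? "[]", {
        headers: { "content-type": "application/json" },
      });
    }

    // Endpoint 2: the server fetched the feeds itself and posts the results
    // as { [url]: body }; the worker only has to write them into KV.
    if (request.method === "POST" && pathname === "/update") {
      const results = (await request.json()) as Record<string, string>;
      for (const [url, body] of Object.entries(results)) {
        await env.FEEDS.put(`feed:${url}`, body);
      }
      return new Response("ok");
    }

    return new Response("not found", { status: 404 });
  },
};
```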


I’m actually using this now, because cron triggers are free, and it’s very straightforward with limits and cursors.
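Roughly like this: KV’s `list()` takes a `limit` and returns a `cursor` for the next page (the `FEEDS` binding and key prefix here are placeholders):

```ts
export default {
  async scheduled(_: ScheduledController, env: { FEEDS: KVNamespace }) {
    // Resume from the cursor the previous invocation stored, if any.
    const stored = await env.FEEDS.get("list-cursor");
    const page = await env.FEEDS.list({
      prefix: "feed:",
      limit: 20,
      cursor: stored || undefined,
    });

    for (const key of page.keys) {
      // ...refresh the feed stored under key.name...
    }

    // Store the next cursor, or clear it once the listing is complete.
    await env.FEEDS.put("list-cursor", page.list_complete ? "" : page.cursor);
  },
};
```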

Heroku’s free plan did the trick for me. Pro tip: you can turn dynos on/off on demand by calling the Heroku API from your workers, making the free plan stretch further. :slight_smile:
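The on/off toggle is a single PATCH to Heroku’s formation endpoint; a sketch, with the app name, process type, and token as placeholders:

```ts
// Scale a Heroku process up (quantity 1) or down (quantity 0) from a worker.
// "my-app" and "worker" are placeholders for your app and process type;
// the token would come from a secret binding, not be hardcoded.
async function scaleDyno(quantity: 0 | 1, token: string): Promise<Response> {
  return fetch("https://api.heroku.com/apps/my-app/formation/worker", {
    method: "PATCH",
    headers: {
      "Authorization": `Bearer ${token}`,
      "Accept": "application/vnd.heroku+json; version=3",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ quantity }),
  });
}
```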

As for the 50 max: yeah, you can run concurrent scheduled workers, chain your workers (convoluted), or just wait for them to lift the limit. The third option is the best one, imo.

Any new whispers from the little birds, @Walshy? This limitation is the last one that makes using an Unbound cron painful. We use them for syncing a lot of data (like, lots), and the ability to make a much higher number of requests (like: hundreds) would be highly appreciated.
