I run a website that generates long URLs. Several years ago, when we started using Cloudflare, I understood the URL limit to be 32k, and I think there was documentation about this limit as well, but I can't find it anymore.
We have seen issues with URL length: Cloudflare load balancing appears to limit the URL length to around 16k (orange cloud).
Proxied servers still seem to support 32k characters.
Exceeding the URL length results in a generic nginx error:
400 Bad Request
Request Header Or Cookie Too Large
Parts of our website are using Cloudflare Pages. Pages seems to be able to handle longer URLs, which is also odd, because I thought Pages used Workers internally, and Workers are documented to have a 16k URL limit.
The Cloudflare documentation for 4XX errors states:
414 URI Too Long (RFC7231)
Refusal from the server that the URI was too long to be processed. For example, if a client is attempting a GET request with an unusually long URI after a POST, this could be seen as a security risk and a 414 gets generated.
Cloudflare will generate this response for a URI longer than 32KB
Overall the situation seems inconsistent, so I was hoping for some clarification (and hopefully to be able to use the 32k URL length again).
Edit: the issue seems to happen only with the load balancer, not generally with orange-clouded servers.
OK, I figured out the problem: it was not related to load balancing. I had a page rule with “cache everything” on the load balancer's URL.
I removed the “cache everything” page rule and it works now.
The overall behaviour is still not really clear to me.
Although I removed the “cache everything” rule, what happens if a response is cached for a URL longer than 16k? Can this happen, and will it lead to generic 400 errors again?
Overall, Cloudflare's URL length limits are not really transparent; maybe this could be improved in the documentation?
16K characters is very long for a URL. What are you doing that requires this?
It’s a GET request that contains the parameters required for the server to calculate the correct response. On average most requests are <1k, but some users like to create really complex problems. So this is probably <0.1% of requests, but it regularly creates problems because it suddenly breaks.
We do have logic in place that ensures users cannot go beyond 32k, but we don’t have anything for 16k, as we thought CF’s limit was 32k.
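A minimal sketch of such a guard, assuming the two thresholds discussed in this thread (~16k observed on the cached/load-balanced path, 32k per the documented 414 behaviour); the function and constant names are just illustrative:

```javascript
// Illustrative URL-length guard. The thresholds are assumptions taken
// from this thread: ~16 KB observed with "cache everything" / load
// balancing, 32 KB per Cloudflare's documented 414 behaviour.
const SOFT_LIMIT = 16 * 1024; // may trigger generic 400s on some proxied paths
const HARD_LIMIT = 32 * 1024; // Cloudflare documents a 414 beyond this

function classifyUrlLength(url) {
  const len = url.length; // chars ≈ bytes for plain-ASCII query strings
  if (len > HARD_LIMIT) return "reject"; // would get a 414 from Cloudflare
  if (len > SOFT_LIMIT) return "warn";   // risky on cached / load-balanced routes
  return "ok";
}
```

With a guard like this, the <0.1% of oversized requests could be rejected (or warned about) client-side before they ever hit the proxy.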
There are a few reasons that we cannot easily switch to a POST request and add the parameters to the body.
While I agree the use case is exotic, I wouldn’t say there is anything wrong with it, and it conforms to the specification.
I’m also seeing a ~32K limit for URLs in both inbound and outbound Workers requests. Not sure why the docs say otherwise - perhaps they’re just out of date.
If one of these reasons is the lack of caching for POST requests, you could create a Worker and use the Cache API.
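A rough sketch of that approach, assuming a Workers module-syntax handler: derive a short synthetic GET cache key from a hash of the POST body, then read/write through the Cache API. The FNV-1a hash and all names here are illustrative, not Cloudflare's method; a real deployment would likely use `crypto.subtle.digest` for fewer collisions.

```javascript
// 32-bit FNV-1a hash, returned as hex (deterministic for a given body).
function fnv1a(text) {
  let h = 0x811c9dc5;
  for (let i = 0; i < text.length; i++) {
    h ^= text.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16);
}

// Build a short synthetic URL to use as the cache key,
// dropping any long query string from the original request.
function cacheKeyFromBody(url, bodyText) {
  const u = new URL(url);
  u.search = "";
  u.searchParams.set("bodyhash", fnv1a(bodyText));
  return u.toString();
}

// Hypothetical Worker entry point (module syntax).
const worker = {
  async fetch(request, env, ctx) {
    if (request.method !== "POST") return fetch(request);

    const body = await request.clone().text();
    const key = new Request(cacheKeyFromBody(request.url, body), { method: "GET" });
    const cache = caches.default;

    const hit = await cache.match(key);
    if (hit) return hit;

    const response = await fetch(request);
    // Store a copy; ctx.waitUntil lets the write finish after the response is sent.
    ctx.waitUntil(cache.put(key, response.clone()));
    return response;
  },
};
```

This would also sidestep the URL-length problem entirely, since the cache key stays short no matter how large the body is.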
This topic was automatically closed 15 days after the last reply. New replies are no longer allowed.