I’ve noticed that Cloudflare returns “Error 520 - Web server is returning an unknown error” when my Cookie header’s size is around 4 KB. Which is strange, since the official limit is 16 KB. The other headers are standard, minuscule ones, so they don’t contribute much to the overall header size.
It’s weird that Cloudflare doesn’t even notify you about it; I stumbled upon this purely by chance. And the server error logs are empty, because obviously there are no errors from the server’s perspective.
But more importantly: the website is now unreachable for some users! I’ve implemented some code to shrink the cookies, but users whose cookies have already exceeded this limit can’t access the website to have their cookies reorganized! They basically have to clear their cookies manually, which the majority of people won’t think of when they see the error screen!
So a lot of questions here:
First and foremost: what to do with the users who are effectively banned from the website?
Why does this happen even though the headers’ size is nowhere near 16 KB?
Is there a way to log this behavior somehow to be aware of such situations?
Investigate whether your origin web server silently rejects requests when the cookie is too big.
I believe NGINX’s default max header size is 8 KB, so if that’s what you’re running on your server, you might want to double-check that limit and raise it if needed.
Alternatively, deploy a Cloudflare Worker that intercepts requests to your origin web server, and use it to trim or rewrite the user’s cookies to get them below 4 KB.
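A minimal sketch of what such a Worker could look like, assuming a 4096-byte budget to match the limit you’re hitting (the trimming policy here — keep the first cookies that fit — is just a placeholder; which cookies are safe to drop is application-specific):

```javascript
// Hypothetical budget matching the ~4 KB origin limit; adjust as needed.
const MAX_COOKIE_BYTES = 4096;

// Keep whole "name=value" pairs, in order, until the budget is spent.
function trimCookie(cookie, maxBytes) {
  const kept = [];
  let size = 0;
  for (const part of cookie.split("; ")) {
    const extra = part.length + (kept.length ? 2 : 0); // "; " separator
    if (size + extra > maxBytes) break;
    kept.push(part);
    size += extra;
  }
  return kept.join("; ");
}

const worker = {
  async fetch(request) {
    const cookie = request.headers.get("Cookie") || "";
    if (cookie.length <= MAX_COOKIE_BYTES) return fetch(request);
    // Rebuild the request with a trimmed Cookie header before it
    // reaches the origin.
    const headers = new Headers(request.headers);
    headers.set("Cookie", trimCookie(cookie, MAX_COOKIE_BYTES));
    return fetch(new Request(request, { headers }));
  },
};
// In an actual module-syntax Worker you would add:
// export default worker;
```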
The only thing I can think of at the moment is that your web server is silently dropping requests when the cookie size is too big. Try deploying a different web server on an unused subdomain, and check whether that one also rejects requests when the user has large cookies.
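You can also reproduce it synthetically without waiting for a user with bloated cookies — send an artificially large cookie straight at the origin and watch the status code (the hostname below is a placeholder):

```shell
# Build a ~5 KB cookie value and send it directly to the origin,
# bypassing Cloudflare (replace origin.example.com with your server).
BIG=$(head -c 5000 /dev/zero | tr '\0' 'a')
curl -s -o /dev/null -w '%{http_code}\n' \
  -H "Cookie: test=$BIG" \
  "https://origin.example.com/"
# NGINX at its defaults typically answers 400 "Request Header Or
# Cookie Too Large" over HTTP/1.1, or resets the stream over HTTP/2.
```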
Cloudflare does have Origin Service Level Monitoring, which will notify you when an increase in origin server errors is observed, but unfortunately that is limited to Enterprise customers at the moment.
I’m not aware of another method to be automatically notified of situations like this.
This was indeed the issue! For future reference: raising the relevant NGINX limits solved it.
The default limit for a single header is actually 4 KB in NGINX, so raising it explicitly helped. It is indeed weird that no errors were written to the error log.
Hm, I guess it has something to do with HTTP/2: I can’t include a link to the docs (new account), but the directive that did the trick is http2_max_field_size, which has a default value of 4 KB.
The docs say those directives are obsolete, but it worked nonetheless. I mean, those directives were not explicitly set, so the server was operating with their defaults, and “large_client_header_buffers” wasn’t taken into account for some reason.
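For anyone landing here later, a sketch of the directives discussed above (the values are illustrative, not a recommendation — pick limits that fit your cookie sizes):

```
http {
    # HTTP/1.x request header buffers (default: 4 buffers of 8k).
    large_client_header_buffers 4 16k;

    # Marked obsolete in newer NGINX releases, but the per-field
    # default of 4k on http2_max_field_size was what rejected the
    # large cookies here.
    http2_max_field_size  16k;
    http2_max_header_size 32k;
}
```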