Could this be a server setting that rate-limits based on source IP? Since Cloudflare will proxy back to the origin from a limited set of IPs, it's possible that a setting, or one of the local firewall products cPanel hosts love to use, has been updated and is enforcing this.
Do you see HTTP requests from CF hitting your origin server in Wireshark?
We are not using cPanel or any firewall beyond firewalld, and no changes were made when the problem occurred.
It is clearly related to routing of some sort, given that it affected hundreds of people across multiple providers simultaneously and then, just as simultaneously, seems to have cleared up, with no information from Cloudflare shedding light on what is actually happening.
The escalation is ongoing; so far this looks like an origin issue, and our post-mortem investigation continues. Once complete, I’ll share details here. In reviewing the tickets, I believe the issue has now resolved (since around 9pm UTC), and the team is no longer seeing new issues reported.
We have not been able to link this to the Kinsta issue so far. Nonetheless, based on the suggestions here, the team has asked our engineers to investigate further as part of their post-mortem.
Have you verified you are receiving no packets from CF?
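One quick way to check this on the origin itself is a tcpdump capture scoped to Cloudflare's egress ranges. This is a sketch only: the two CIDRs below are examples from Cloudflare's published list, not the complete set, so pull the current ranges from cloudflare.com/ips before relying on it.

```shell
# Two example Cloudflare ranges (NOT the full list - fetch the live list first).
CF_RANGES="173.245.48.0/20 103.21.244.0/22"

# Build a tcpdump filter like:
#   (src net A or src net B) and (port 80 or port 443)
FILTER=""
for net in $CF_RANGES; do
  FILTER="${FILTER:+$FILTER or }src net $net"
done
FILTER="($FILTER) and (port 80 or port 443)"
echo "$FILTER"

# Run the capture on the origin (requires root). If nothing appears while
# 520s are occurring, the requests never reached you:
#   tcpdump -ni any "$FILTER"
```

If packets do arrive but the origin never answers, that points back at the server or a local firewall rather than routing.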
KnownHost also runs their own DDoS system, so I don't think anything is clear at this point.
Interestingly, Cloudflare's new IPs went into effect today. Have they definitely been whitelisted against concurrency rate limits?
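If new Cloudflare egress ranges weren't re-added to the host's allow list, traffic from those IPs would fall through to whatever default rate limiting applies. A minimal sketch of refreshing that list under firewalld follows; the two CIDRs are examples only (check Cloudflare's published list for the real set), and the commands are echoed rather than executed so nothing is changed by running it as-is.

```shell
# Example ranges only - replace with the current list from
# https://www.cloudflare.com/ips-v4 before applying.
NEW_CF_RANGES="104.16.0.0/13 104.24.0.0/14"

for net in $NEW_CF_RANGES; do
  # Printed for review; drop the 'echo' (and run as root) to actually apply.
  echo firewall-cmd --permanent --zone=trusted --add-source="$net"
done
# Then reload to activate: firewall-cmd --reload
```

Whether the "trusted" zone is the right target depends on the host's firewalld layout; the point is that every newly published range needs the same treatment the old ones had.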
No, but that was never the issue. The issue was intermittent 520 errors, and it seems to have been resolved already.
At this point I’d just like to know what happened. I’ve been in close contact with the KnownHost folks trying to track down the issue, which is now moot since it somehow resolved without them doing anything.
Origin issue? Where’s the evidence to suggest this?
This started happening around 6am UTC (possibly earlier). How can you say this wasn’t related to Kinsta with a straight face? The issue “automagically” resolved when Kinsta indicated that you had rolled back an update at your Miami data center.
You’re not doing yourself any favors by continually redirecting and blaming the server.
All possible settings on our end were checked and triple-checked. Our DDoS mitigation system was not in play for any of the impacted individuals.
KnownHost made zero changes prior to the Cloudflare issues, and zero changes when it began working correctly again. We’re awaiting additional information from Cloudflare like everyone else, but I cannot see how this wasn’t related to changes implemented by CF.