The Cloudflare 1.1.1.1 announcement indicated strong use of DNSSEC-based negative caching.
In general, this is a good idea.
However, there’s a common use case where it’s not: ACME challenges.
Let’s Encrypt (I’m a customer) uses the ACME protocol to issue “SSL” (TLS) certificates. One validation method is a DNS challenge - i.e. can you place a special token in your DNS on demand?
If they probe too early, your negative caching kicks in - and it takes about 15 minutes to clear, extending an operation that should only take a few seconds. These tokens don’t persist - they are created, live for a minute or two, and disappear. Rather a worst case for strong negative caching.
Fortunately, they are easy to identify - they have the RFC-specified (RFC 8555) form “_acme-challenge.hostname.example.org”.
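To make the identification concrete, here is a minimal sketch in Python; the function name and the lowercase/no-trailing-dot normalization are my own illustrative choices, not anything a resolver mandates. The “_acme-challenge.” label prefix itself is what RFC 8555 specifies.

```python
# Sketch: recognizing ACME DNS-01 challenge names by their RFC 8555 label
# prefix. Assumes query names are passed as strings; trailing root dots
# are stripped and case is ignored, since DNS names compare case-insensitively.
ACME_LABEL = "_acme-challenge."

def is_acme_challenge(qname: str) -> bool:
    """True if the query name is an ACME DNS-01 challenge record name."""
    return qname.rstrip(".").lower().startswith(ACME_LABEL)
```

A resolver could apply this check at negative-cache insertion time, before deciding on a TTL.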
It would be a community service if you treated these specially in your negative caching - turn it off, reduce the negative cache time to a more reasonable number (say, 5 mins or less), or do something adaptive.
A 5-minute negative cache time is consistent with the typical minimum TTL at many providers (and hence on these records) - and would still give you reasonable DoS protection while not inconveniencing this use case.
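The policy suggested above could be sketched like this; the 5-minute cap and the zero TTL for challenge names are the illustrative values from this post, not values any real resolver prescribes, and `negative_ttl` is a hypothetical helper name.

```python
# Sketch of the proposed negative-caching policy: cap negative TTLs at
# 5 minutes generally, and skip negative caching entirely for ACME
# challenge names. soa_minimum is the zone's SOA-derived negative TTL.
NEG_CAP = 300       # 5 minutes, in line with common provider minimum TTLs
ACME_NEG_TTL = 0    # don't negatively cache ACME challenge lookups

def negative_ttl(qname: str, soa_minimum: int) -> int:
    """Choose how long to cache an NXDOMAIN/NODATA answer for qname."""
    if qname.rstrip(".").lower().startswith("_acme-challenge."):
        return ACME_NEG_TTL
    return min(soa_minimum, NEG_CAP)
```

An adaptive variant might instead start with a short TTL and back off only for names that stay absent across repeated queries.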
By the way, the issue isn’t so much the Let’s Encrypt validation servers as the fact that clients check the public DNS (e.g. 1.1.1.1 or competitors) to see whether the tokens should be visible to LE yet. It’s pretty much a polled process, since propagation times vary dramatically…
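That polling behavior looks roughly like the sketch below. `lookup_txt` is a stand-in for a real DNS query against a public resolver (e.g. via dnspython); the function name, timeout, and interval are illustrative assumptions, not any specific ACME client’s implementation.

```python
import time

# Sketch of client-side polling: keep querying public DNS for the
# challenge TXT record until the token appears or a deadline passes.
# lookup_txt(name) should return the TXT strings for a name (empty
# list if absent); sleep is injectable so the loop can be tested.
def wait_for_token(lookup_txt, name, token,
                   timeout=600, interval=5, sleep=time.sleep):
    """Poll until `token` is visible in TXT records for `name`."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if token in lookup_txt(name):
            return True
        sleep(interval)
    return False
```

If the resolver being polled has negatively cached the name from an early probe, every iteration of this loop returns empty until that cache entry expires - which is exactly the 15-minute stall described above.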
Off the top of my head, most other verification tokens (e.g. Google API and similar TXT records) tend to be long-lived - once installed, they persist. But it’s possible that other token-like records could use similar special treatment…