That’s great!
That part seems less great, though I believe I have a plausible explanation for it:
DNS records do have a TTL (Time To Live), as mentioned above. However, when you are scanning domains (e.g. to verify they are set up correctly), how often do you actually want to re-scan?
You could always choose to follow the actual TTL, but that adds other caveats:
If some random person or organisation set their TTL to 60 seconds (1 minute), you would end up scanning that specific domain (or its records) about 1'440 times per day, 10'080 times per week, or roughly 524'160 times per year.
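Just to illustrate where those numbers come from, here is a rough back-of-the-envelope sketch (Python, purely for illustration; the 60-second TTL is the example value from above):

```python
# Hypothetical calculation: how many re-scans a strictly TTL-driven
# scanner would perform for a single record.
ttl_seconds = 60  # example TTL from above (1 minute)

scans_per_day = 24 * 60 * 60 // ttl_seconds   # 1440
scans_per_week = scans_per_day * 7            # 10080
scans_per_year = scans_per_week * 52          # ~524160 (the rough figure above)

print(scans_per_day, scans_per_week, scans_per_year)
```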
The question is: would there ever be any meaningful reason to do that?
With that many scans, it would likely generate far more useless Internet traffic than it would actually help.
As such, many of the organisations I know that rely on (re)checking domain-related things in the DNS system do not follow the actual TTL directly, but delay even further, to avoid (too much) wasted traffic and, at the same time, to play nicely with other Internet properties.
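I have no insight into how any particular scanner implements this, but a common approach (sketched below, with a completely made-up 6-hour floor) is simply to clamp the TTL to a minimum re-check interval:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy: never re-check faster than some floor, no matter
# how low the record's TTL is set. The 6-hour value is invented here,
# purely to illustrate the idea of delaying beyond the TTL.
MIN_RECHECK = timedelta(hours=6)

def next_recheck(ttl_seconds: int) -> datetime:
    delay = max(timedelta(seconds=ttl_seconds), MIN_RECHECK)
    return datetime.now(timezone.utc) + delay

# A 60-second TTL still only gets re-checked after the 6-hour floor.
print(next_recheck(60))
```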
My guess, without having access to the programming code to verify the exact procedures, would be that Cloudflare scans again after these 7 days, and if things are then normal (e.g. as expected), the domain is set back to Active.
I guess most would be in the same boat as you here, regarding that thought.
@cloonan Any chance the above could be escalated, either to find a fix (if required), or at least to confirm the plausible reason I gave above?