Cannot resolve

#1 doesn’t resolve using
Can someone confirm?


Switched to Google Public DNS, and it resolves. Too buggy for me.


I suspect their servers produce responses that exceed 4096 bytes (!), which is the maximum response size many servers and libraries reasonably tolerate.

Just tested: a simple A query without any options returns no less than 3976 bytes.

Adding options, or switching to a query type such as ANY, likely requires more than 4096 bytes – but instead of returning a truncated response, the server just… doesn’t return any response at all.

But worse… With a shorter advertised payload size, it returns a truncated response without the TC (truncated) bit set.
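For anyone who wants to reproduce this at the packet level, here is a minimal sketch of the two pieces involved: building a query whose EDNS0 OPT record advertises a given UDP payload size, and checking whether a reply has the TC bit set. Stdlib only; the function names are my own, not from any DNS library.

```python
import struct

def build_query(qname, edns_payload=4096, qtype=1):
    """Build a bare DNS query (default: A record) with an EDNS0 OPT record
    advertising `edns_payload` bytes. Illustrative sketch, stdlib only."""
    # Header: fixed ID, RD flag set, 1 question, 1 additional (the OPT RR).
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 1)
    labels = b"".join(
        bytes([len(l)]) + l.encode() for l in qname.rstrip(".").split(".")
    )
    question = labels + b"\x00" + struct.pack("!HH", qtype, 1)  # QTYPE, QCLASS=IN
    # EDNS0 OPT pseudo-RR (RFC 6891): root name, TYPE=41,
    # CLASS = advertised UDP payload size, TTL = 0, RDLEN = 0.
    opt = b"\x00" + struct.pack("!HHIH", 41, edns_payload, 0, 0)
    return header + question + opt

def tc_bit_set(message):
    """True if the TC (truncated) bit – 0x0200 of the 16-bit flags field – is set."""
    return bool(struct.unpack("!H", message[2:4])[0] & 0x0200)
```

Point `build_query()`’s output at the affected nameservers over UDP and compare `tc_bit_set()` against the actual response length: a reply that stops just under the advertised size with the bit clear is exactly the broken behaviour described above.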


I think most of their IPv6 nameservers drop large responses. And most of their responses are large.

So if a resolver is unlucky, or prefers IPv6, it might retry over IPv6 for a few seconds before trying IPv4 – or perhaps retry with a smaller EDNS size.


And retrying with a smaller EDNS size is unreliable, because the TC bit is not set in their truncated responses.

Wondering what a decent workaround would be. Retry directly over TCP instead of reducing the advertised payload size? This is not great.
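That workaround could be sketched as a transport decision: since the TC bit can’t be trusted here, fall back to TCP on a timeout or on a suspiciously large reply, rather than shrinking the advertised size. Purely illustrative – the function name and the 512-byte slack are my own assumptions, not any resolver’s actual logic:

```python
def next_step(got_reply, tc_bit, resp_len, advertised):
    """Decide what to do after a UDP query. Illustrative sketch only."""
    if not got_reply:
        return "retry-over-tcp"  # silent drop: server may choke on large UDP
    if tc_bit:
        return "retry-over-tcp"  # honest truncation: standard TCP fallback
    if resp_len >= advertised - 512:
        # Suspiciously close to our advertised limit; with broken servers
        # the reply may be silently truncated without the TC bit set.
        return "retry-over-tcp"
    return "accept"
```

The cost is an extra round trip and a TCP handshake on every oversized answer, which is why it’s “not great” – but it avoids ever acting on a silently truncated reply.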


I don’t know. I can say that Unbound seems to resolve it reliably – even with QNAME minimisation and 0x20 randomisation – though it takes 1-2 seconds from a cold cache.

I’m not sure if it’s luck, or different fallback logic, or different retry logic, or what.

Is it just me, or does kresd strongly prefer IPv6? Unbound (by default) picks IPv4 or IPv6 servers randomly. Maybe kresd tries 2 or 3 of the IPv6 servers before hitting an internal timeout and giving up, or something like that.

Whereas Unbound will probably try one of the IPv4 servers quickly, and it’s incredibly persistent anyway.

(I have no idea how kresd works. For that matter, I have almost no idea how Unbound works…)

Edit: PowerDNS and Google seem to have no problem resolving it either.

Edit: Though Google considers the zone insecure.


kresd has about a 20% bias towards IPv6; the problem is the oversized DNSKEY response over IPv6. We’ll add an override to remove the bias for their nameservers in the next upgrade.
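For readers curious what a bias plus per-zone override looks like, here is a rough model – my own sketch, not kresd’s actual selection code, and I’m reading “20% bias” as 20 extra percentage points of probability on the IPv6 pool:

```python
import random

def pick_nameserver(v4, v6, rng, v6_bias=0.2, unbiased=False):
    """Illustrative only: pick an address with extra weight on the IPv6
    pool, unless a per-zone override (`unbiased=True`) is in effect."""
    if unbiased or not v4 or not v6:
        return rng.choice(v4 + v6)
    # IPv6 gets its fair share of probability plus the bias.
    p_v6 = len(v6) / (len(v4) + len(v6)) + v6_bias
    return rng.choice(v6) if rng.random() < p_v6 else rng.choice(v4)
```

With one address of each family this picks IPv6 about 70% of the time; the override drops it back to a uniform choice, which is the fix described above.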


I forgot to update this. It should be resolving now, let me know if you’re still seeing issues.