Issue with Googlebot Crawling Due to Robots.txt Block

What is the name of the domain?

nben[.]com[.]np

What is the issue you’re encountering?

Googlebot is being blocked from crawling my website, despite having a robots.txt file that allows everything (User-agent: * Allow: /). Google Search Console reports an error stating “Blocked by robots.txt,” and the page fetch fails. The crawl time is shown as Mar 3, 2025, 8:46:39 AM, with indexing status marked as N/A.
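For reference, the allow-all file referred to above contains only these two directives:

```
User-agent: *
Allow: /
```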

What steps have you taken to resolve the issue?

I have verified that the robots.txt file is publicly accessible and correctly allows Googlebot (checked with curl -A "Mozilla/5.0" -s -L https://nben.com.np/robots.txt). I have also checked my Cloudflare settings, but I suspect a firewall rule or security setting may be blocking Googlebot’s access. I would appreciate assistance in identifying any Cloudflare settings or rules that might be causing this.
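For anyone reproducing this, the same fetch can be repeated with Googlebot's documented user-agent string; a 403 or a Cloudflare challenge page on that request, where the browser-style request succeeds, points at a firewall or bot rule rather than robots.txt. Note that Cloudflare verifies bots by IP as well as user-agent, so a spoofed user-agent test is only indicative:

```sh
# Browser-style fetch (the check already run above)
curl -A "Mozilla/5.0" -s -L https://nben.com.np/robots.txt

# Same fetch with Googlebot's documented user-agent string;
# print only the final HTTP status code for comparison
curl -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
  -s -o /dev/null -w "%{http_code}\n" -L https://nben.com.np/robots.txt
```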

Screenshot of the error

Hi,

Please try the following:

  • In DNS settings, change nben.com.np from Proxied (orange cloud) to DNS only (grey cloud).
  • Wait a few minutes and retry Googlebot verification.
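If crawling works with the grey cloud, Cloudflare was blocking the request, and the proxy can usually be re-enabled with a WAF custom rule that exempts verified bots. A sketch, assuming the current dashboard layout (Security → WAF → Custom rules) and Cloudflare's rules-language field cf.client.bot, which is true for verified good bots such as Googlebot:

```
Rule name:  Allow verified bots
Expression: (cf.client.bot)
Action:     Skip (select the security features the rule should bypass)
```

The Security → Events log is also worth checking for blocked requests with a Googlebot user-agent; it shows which rule or setting fired.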

If the issue persists with DNS only, re-check your robots.txt file, as the issue is likely on Google’s side or on your origin server.
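To rule out the origin, any request claiming to be Googlebot in the origin server's access logs can be verified with the reverse-then-forward DNS check that Google documents. The IP below is only an example from Google's published crawler range, and the output in the comments is approximate:

```sh
# Reverse lookup: a genuine Googlebot IP resolves under googlebot.com
host 66.249.66.1
# -> ... domain name pointer crawl-66-249-66-1.googlebot.com.

# Forward lookup of that hostname must return the same IP
host crawl-66-249-66-1.googlebot.com
# -> crawl-66-249-66-1.googlebot.com has address 66.249.66.1
```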
