Is it okay to block Googlebot?

I saw lots - thousands - of very suspicious queries to my website’s /search endpoint, and was surprised that Cloudflare appeared to be letting a fake Googlebot access my site. But then I was even more surprised to see that it really was Google! Here’s an example (some are much worse: online pharmacy, etc):

Query string:
User agent: Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.90 Mobile Safari/537.36 (compatible; Googlebot/2.1; +
IP address: United States

Here’s another one, this time with the payload in the path rather than the query string, also Googlebot:

Query string: (empty)

I should add that query strings like these cannot naturally arise through use of my site. Queries are POSTed to my server; even so, “q” is a parameter that can be made to produce results if you craft the request yourself. If you’re a hacker…

It had never occurred to me to disallow /search in robots.txt, but now I have done that. I also respond with a soft 406 error (“Not Acceptable”) if you attempt to add your own “q”.
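The soft-406 behaviour described above can be sketched framework-agnostically. This is a minimal illustration, not the poster's actual code; the function name `handle_search` and the sample URL are hypothetical:

```python
# Sketch: reject hand-crafted "q" query strings on /search with a soft 406,
# since legitimate searches on this site are only ever POSTed.
from urllib.parse import urlparse, parse_qs

def handle_search(method: str, url: str) -> int:
    """Return the HTTP status code to send for a /search request."""
    query = parse_qs(urlparse(url).query)
    if method == "GET" and "q" in query:
        # 406 Not Acceptable: query strings never arise through normal use
        return 406
    return 200

# A crawled URL with an injected query string is refused:
print(handle_search("GET", "/search?q=suspicious+terms"))  # prints 406
```

A normal POSTed search (with no query string) still returns 200, so real users are unaffected.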

Meanwhile, I continue to use a Firewall rule to issue a JS Challenge against this type of search, and will continue to catch Googlebot until it rereads my robots.txt.
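For reference, the disallow rule mentioned above is a one-line addition. A minimal robots.txt sketch (assuming the site has no other crawler rules) might look like:

```
User-agent: *
Disallow: /search
```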

Is this the correct way to handle this case?


Googlebot might find those URLs if someone posted links to them somewhere else - on your own site, on discussion forums, etc.

Adding those paths to robots.txt and blocking the search script from Googlebot is OK in this case.


That’s what I guessed about the origin of the URLs, though I’ve caught over 6,000 so far - that’s a crazy list of links for Google to follow. Our site is legitimate; there would never be links like that pointing at it, and those queries return no results either!

It feels odd to be blocking Google… thanks for the confirmation.


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.