Hello Community,
I am looking for help with an error that Moz’s DotBot is getting when it attempts to crawl our site. I have created a Firewall Rule that allows the user agent access, and there was a firewall event in my feed showing that the request was allowed, yet the support team at Moz sent me a 403 error that DotBot received at the exact same time.
Moz’s support team also sent me the 200 OK response that their other crawler, Rogerbot, received when it crawled the site. They have also stated that the IP addresses for their crawlers change constantly, so whitelisting by IP wouldn’t be reliable; they suggested whitelisting the user agent instead, which is what I did (a rough sketch of the rule is below).
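For reference, this is roughly what my allow rule looks like. I’m assuming Cloudflare-style firewall expression syntax here, matching on the "DotBot" substring in the user agent, with the rule action set to Allow:

```
(http.user_agent contains "DotBot")
```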
Does anyone have experience whitelisting DotBot? Any assistance is appreciated.