I am a publisher and I activated Super Bot Fight Mode on the Business plan.
Many of the blocked/challenged user agents belong to ad networks and measurement tools, and blocking them might reduce my ad revenue. For example:
IAS Crawler, admantx, moatbot, proximic, and thetradedesk are verification/evaluation tools that affect my site's quality score and ad requests.
AmazonAdBot, GumGum-Bot, and CriteoBot belong to ad networks that might stop serving ads if they are blocked.
Can a legitimate user agent be faked by bad bots? If so, how can I determine whether a request is good or bad?
How can I know whether a blocked request is a bad bot that should stay blocked, or a good bot that simply was not whitelisted?
If I need, for example, to allow an ad network's requests, should I use a firewall rule or an IP access rule, and what parameters are required to reliably identify that ad network's requests?
Honestly? It's not viable. Maintaining IP lists is exhausting and the lists are volatile: vendors can change IPs at any time, and in many cases they share IP ranges with cloud or hosting providers that anybody can use to fake a bot.
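To make that concrete: in a firewall rule, about the only parameters you have to identify a vendor are the user agent plus something like the source ASN. A sketch of such an allow/skip rule expression follows; note that `12345` is a placeholder, not any vendor's real ASN, and `http.user_agent` / `ip.geoip.asnum` are the standard Cloudflare rule-language fields:

```
(http.user_agent contains "CriteoBot" and ip.geoip.asnum eq 12345)
```

The user-agent half is trivially spoofable on its own, and the ASN half is exactly the brittle, vendor-controlled detail described above — which is why this approach doesn't scale across dozens of ad-tech crawlers.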
The best scenario is the Enterprise plan, which lets you have bot management tweaked to your needs. However, that's not affordable for most businesses.
My suggestion is to set SBFM to allow all requests; you will save yourself a lot of the headaches that arise from the lack of flexibility in the current bot fight mode.
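That said, if you ever need to check whether a specific blocked request really came from a vendor's crawler rather than a spoofed user agent, the standard technique (where the vendor supports it — Google and Bing document this for their crawlers; many ad-tech crawlers publish IP lists instead) is forward-confirmed reverse DNS. A minimal sketch, where the allowed-suffix list is illustrative and each vendor documents its own domains:

```python
import socket

def hostname_matches(hostname, allowed_suffixes):
    """Check the PTR hostname against the vendor's published domains."""
    return any(hostname == s or hostname.endswith("." + s)
               for s in allowed_suffixes)

def verify_crawler_ip(ip, allowed_suffixes):
    """Forward-confirmed reverse DNS: reverse-resolve the IP, check the
    domain, then forward-resolve the hostname and confirm it maps back
    to the same IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)   # reverse (PTR) lookup
    except socket.herror:
        return False                                # no PTR record: fail
    if not hostname_matches(hostname, allowed_suffixes):
        return False                                # wrong domain: spoofed UA
    try:
        forward_ips = socket.gethostbyname_ex(hostname)[2]  # forward lookup
    except socket.gaierror:
        return False
    return ip in forward_ips                        # must map back to same IP
```

For example, `verify_crawler_ip("66.249.66.1", ["googlebot.com", "google.com"])` applies Google's documented check; an attacker can fake the user agent but cannot make your resolver return a `googlebot.com` PTR record for their IP.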