“Manage bot traffic with robots.txt” does not block bots on inner pages. The feature creates its robots.txt only for the homepage, NOT for inner pages.
For the root, everything is fine. But when bots visit inner pages and don’t find any robots.txt there, what rules will they respect?
I assume they will not look at the domain’s root if they visit inner pages directly, e.g. from search results.
Bots either follow robots.txt or they don’t. If they follow it, they check for the existence of the file and follow its directives when they visit a website. Cloudflare’s own robots.txt is an example: https://www.cloudflare.com/robots.txt
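To illustrate the point: a single robots.txt at the root governs every URL on the host. Here is a minimal sketch using Python’s standard-library `urllib.robotparser`; the bot names and rules are hypothetical, not what Cloudflare actually generates:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content: block one bot site-wide.
robots_txt = """\
User-agent: BadBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# The one root-level file covers the homepage AND every inner page;
# no per-page robots.txt is needed or consulted.
print(rp.can_fetch("BadBot", "https://example.com/"))            # False
print(rp.can_fetch("BadBot", "https://example.com/inner/page"))  # False
print(rp.can_fetch("OtherBot", "https://example.com/inner/page")) # True
```

A compliant crawler fetches this file once per host and applies it to all paths, which is why a root-only file is sufficient.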
So the problem (bug) I reported: the new “Manage bot traffic with robots.txt” feature does not block bots on inner pages, because it does not create a robots.txt for them at all.
No, you haven’t. You have found an example where robots.txt is returned unnecessarily beyond the site root.
I am sorry you misunderstood how robots.txt works. I have tried to explain it and verified your robots.txt conforms to the standard.
If you want a robots.txt for every page/file/directory, you can certainly create them yourself. But the fact that Cloudflare isn’t doing that for you is not a bug.
Rather than downvoting me because I’ve indicated it isn’t a bug and the file Cloudflare produced for you is formatted correctly and protects the entire site, let’s try this:
Here is the RFC that defines how robots.txt works.
Section 2.3 is quite clear as to where crawlers are to look for and obtain the file.
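To sketch what that section prescribes: a crawler derives the robots.txt location from the scheme and authority of whatever URL it is about to fetch, always using the path `/robots.txt`. The helper name and example URLs below are mine, purely for illustration:

```python
from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url: str) -> str:
    """Build the robots.txt URL for any page: keep only the
    scheme and authority, and use the fixed path /robots.txt."""
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

# A bot landing on an inner page directly from search still
# resolves robots.txt at the site root:
print(robots_url("https://example.com/blog/2024/post?ref=search"))
# -> https://example.com/robots.txt
```

This is why no robots.txt is ever looked up at `/inner/page/robots.txt`: the standard simply doesn’t define one there.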