1- Is Crawler Hints a push or a pull service from the search engines' point of view? Would search engines still crawl websites as usual?
Here’s an excerpt from the docs:

> As a result, we are able to see trends in the way bots access web resources. That visibility allows us to be proactive about signaling which crawls are required vs. not.
Does this mean that when a search engine bot visits a URL, CF will somehow signal whether the content has changed since the last visit? For that to work, I suppose CF must have some collaboration with the search engines. Is there a list of search engines that have partnered with Crawler Hints?
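For reference, here’s my mental model of a push-style change signal, sketched on top of the public IndexNow protocol (the api.indexnow.org endpoint, key, and payload below come from IndexNow’s published spec, and example.com is a placeholder; whether Crawler Hints actually works this way is exactly what I’m asking):

```ts
// Sketch of a push-style change notification, modeled on the public
// IndexNow protocol. Nothing here is confirmed Crawler Hints behavior.
async function notifySearchEngines(urls: string[]): Promise<void> {
  const body = {
    host: "example.com",                    // hypothetical site
    key: "0123456789abcdef",                // key proving ownership of the host
    keyLocation: "https://example.com/0123456789abcdef.txt",
    urlList: urls,                          // URLs whose content changed
  };
  const res = await fetch("https://api.indexnow.org/indexnow", {
    method: "POST",
    headers: { "Content-Type": "application/json; charset=utf-8" },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`IndexNow ping failed: HTTP ${res.status}`);
}

// Example: tell participating engines that these two pages changed.
await notifySearchEngines([
  "https://example.com/blog/post-1",
  "https://example.com/products/42",
]);
```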
2- What happens to traditional sitemaps? Will they be hidden from search engines? Do we still need sitemaps and sitemap pings?
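For comparison, the traditional workflow I have in mind is just a GET asking an engine to re-fetch the sitemap, after which the engine still decides when (and whether) to crawl; the ping endpoints below are the classic Google/Bing ones, and this is the mechanism I’m wondering whether Crawler Hints replaces:

```ts
// Traditional sitemap ping: a plain GET asking a search engine to re-fetch
// the sitemap. Crawling remains entirely at the engine's discretion.
async function pingSitemap(sitemapUrl: string): Promise<void> {
  const endpoints = [
    `https://www.google.com/ping?sitemap=${encodeURIComponent(sitemapUrl)}`,
    `https://www.bing.com/ping?sitemap=${encodeURIComponent(sitemapUrl)}`,
  ];
  for (const endpoint of endpoints) {
    const res = await fetch(endpoint);
    console.log(`${endpoint} -> HTTP ${res.status}`);
  }
}

await pingSitemap("https://example.com/sitemap.xml");
```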
Unfortunately, the CF docs have very little information about Crawler Hints. Any insights would be helpful.