How to protect sign-in-only pages while allowing Google's spiders?

I am an embedded C developer, new to web development.

I have been asked to make a catalogue site for bespoke art. I have conflicting requirements: I need routes like domain.tld/artists/:work to be (a rough sketch of the decision logic follows the list):

  • viewable only by signed-in users
  • crawlable by search-engine spiders, so we can benefit from SEO
  • not scrapable by bad/attacker bots, so as to keep server and bandwidth costs to a minimum

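To make that concrete, the per-request decision I am after looks roughly like this (a sketch only; the type and field names are placeholders I made up, not an existing API):

```ts
// Placeholder sketch of the access decision for /artists/:work requests.
type Verdict = "allow" | "require-login" | "challenge";

interface RequestFacts {
  hasValidSession: boolean;   // e.g. a session cookie validated by the backend
  isVerifiedGoodBot: boolean; // e.g. Googlebot confirmed, not just a user-agent claim
  looksAutomated: boolean;    // e.g. a botd.js-style signal or an IP reputation score
}

function decide(facts: RequestFacts): Verdict {
  if (facts.hasValidSession) return "allow";    // requirement 1: signed-in users
  if (facts.isVerifiedGoodBot) return "allow";  // requirement 2: crawlable for SEO
  if (facts.looksAutomated) return "challenge"; // requirement 3: keep bad bots out cheaply
  return "require-login";                       // humans without an account
}
```
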
I reckon this is doable (please correct me if I am wrong) by putting a proxy server in front that runs something like botd.js (by FingerprintJS). But since it is better to write less code, I am wondering how I can integrate Cloudflare's own bot-detection offerings into my work instead. Are all these offerings baked into Cloudflare Workers? Is there a tutorial on leveraging them with a quasi-ready backend?
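
For context, my current mental model is a Worker sitting in front of the origin, roughly like the sketch below. This is not code I have running: as far as I understand, request.cf.botManagement is only populated on plans with Cloudflare Bot Management, and the cookie name, /login path, and /artists/ prefix are placeholders from my own app.

```ts
// Sketch of a route-gating Worker (module syntax). Assumptions are noted inline.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    // Only gate the catalogue routes; everything else passes straight through.
    if (!url.pathname.startsWith("/artists/")) {
      return fetch(request);
    }

    // 1. Signed-in users: forward to the origin, which validates the session.
    //    "session" is a placeholder cookie name.
    const cookies = request.headers.get("Cookie") ?? "";
    if (cookies.includes("session=")) {
      return fetch(request);
    }

    // 2. Verified search-engine bots: serve the page so we keep the SEO benefit.
    //    As far as I know, request.cf.botManagement is only present when
    //    Cloudflare Bot Management is enabled on the zone.
    const cf = (request as any).cf;
    if (cf?.botManagement?.verifiedBot) {
      return fetch(request);
    }

    // 3. Everyone else (humans without an account, unverified bots): sign-in page.
    return Response.redirect(
      `${url.origin}/login?next=${encodeURIComponent(url.pathname)}`,
      302,
    );
  },
};
```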

The sign-in requirement (viewable only by signed-in users) is probably the most difficult part. Do you already have this locked down?

The rest can be handled by a Firewall Rule that issues a JS Challenge or CAPTCHA, but NOT for known (good) bots.
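
For example (a sketch only; adjust the path to match your routes), a Firewall Rule with the action set to JS Challenge and an expression along these lines would cover the catalogue pages while exempting Cloudflare's known good bots via the cf.client.bot field:

```
(http.request.uri.path contains "/artists/" and not cf.client.bot)
```

Because of the "not cf.client.bot" part, verified crawlers such as Googlebot are never challenged, while everything else without a session has to pass the JS Challenge (or a CAPTCHA, if you prefer that action).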
