Creating LogPush Jobs with Terraform fails for a specific zone
error: Error: error creating logpush job for zone (xxx1e724e4cafxxxx1a2564xxxxbce55): HTTP status 403: creating a new job is not allowed: error getting jobs to check allowance (1004)
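For context, this is roughly the configuration shape that triggers the error: a minimal sketch assuming the cloudflare_logpush_job resource from the Cloudflare Terraform provider, with placeholder zone IDs, bucket name and region, where two zones point at the exact same destination URI.

# First zone's Logpush job claims the destination.
resource "cloudflare_logpush_job" "zone_a" {
  zone_id          = var.zone_a_id
  name             = "zone-a-http-requests"
  dataset          = "http_requests"
  destination_conf = "s3://example-logs-bucket/cloudflare?region=us-east-1"
  enabled          = true
  # ownership_challenge may also be required for S3/Azure destinations;
  # the token comes from the challenge file Cloudflare writes to the bucket.
}

# Second zone reuses the identical destination_conf -- this is the job
# that fails to create with the 1004 error above.
resource "cloudflare_logpush_job" "zone_b" {
  zone_id          = var.zone_b_id
  name             = "zone-b-http-requests"
  dataset          = "http_requests"
  destination_conf = "s3://example-logs-bucket/cloudflare?region=us-east-1"
  enabled          = true
}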
In Terraform, when you specify the exact same destination for a Logpush job as another zone in your Cloudflare account, the job creation fails: you cannot send Logpush output from different zones to the same Azure container / AWS S3 bucket. Cloudflare requires each zone's Logpush job to have a distinct destination URI, so you have to separate the jobs by different folders/blobs/containers, different S3 buckets, or different folder structures (a Terraform sketch of that separation is at the end of this post). This makes log aggregation harder, because for each Cloudflare zone you now have to programmatically go into a different bucket/folder structure to process the logs ingested into your SIEM / log aggregator. Here is the point I wrote to our Cloudflare CSM:
This is actually a big problem for end customers: we are not able to push logs from all zones to a single storage URI.
Your current design effectively treats the destination URI as a unique key, so the API returns a duplicate/conflict error,
because each zone's destination URI must be different.
But in my case,
I have multiple Cloudflare zones that all need to push logs to one specific URI.
My own internal ETL automation then goes into that destination folder/blob/AWS S3 bucket,
picks up the logs from all the zones that use Logpush, and ingests them into my SIEM engine or log aggregator.
I have now discovered / been advised that I need a separate container/folder with a separate destination URI per zone.
This means I now have to build a more complex ETL process to pull logs out of different folders/blobs/S3 buckets,
instead of out of a single place, as would be the case if I could enter the same destination URI in the Logpush configuration of multiple zones.
Why is this the case with Cloudflare? Why are we really unable, within a single company, to push logs from all zones to one place
(a single Azure blob container or a single AWS S3 bucket) without separation by folders?