Terraform Logpush to Azure Storage 403: creating a new job is not allowed: error getting jobs to check allowance (1004)

Creating LogPush Jobs with Terraform fails for a specific zone

Error: error creating logpush job for zone (xxx1e724e4cafxxxx1a2564xxxxbce55): HTTP status 403: creating a new job is not allowed: error getting jobs to check allowance (1004)

Example Terraform configuration for Cloudflare Logpush to Azure below:

resource "cloudflare_logpush_ownership_challenge" "ownership_challenge" {
  zone_id          = var.zone_id
  destination_conf = var.storage_container_sas
}

data "http" "blobchallangedata" {
  url = var.container_sas_token

  # The challenge file must be written before it can be fetched.
  depends_on = [
    cloudflare_logpush_ownership_challenge.ownership_challenge
  ]

  request_headers = {
    Accept = "application/json"
  }
}

output "challangekey" {
  value     = data.http.blobchallangedata.body
  sensitive = true
}

resource "cloudflare_logpush_job" "job" {
  enabled             = var.enabled
  zone_id             = var.zone_id
  dataset             = "http_requests"
  logpull_options     = var.logoptions
  destination_conf    = var.storage_container_sas
  ownership_challenge = data.http.blobchallangedata.body
}

But I am getting the error above. Any ideas? Thanks, all.

Using Terraform, I get the following error:

│ Error: error creating logpush job for zone (0fe1c448e82045753682ef8967dca3cc): HTTP status 400: pq: duplicate key value violates unique constraint “jobs_destination_fingerprint_unique” (1002)

Here is an update on why this error happens.

When, in Terraform, you specify the exact same destination for a Logpush job as another zone in your Cloudflare account, the job creation fails: you cannot send Logpush output from different zones to the same Azure container / AWS S3 bucket. You need to separate the Logpush jobs into different folders/blobs/containers/S3 buckets, or folder structures with different destination URIs. This is a Cloudflare requirement, and it makes log aggregation even more difficult, since for each zone on Cloudflare you now have to go programmatically into different buckets/folder structures to process the logs ingested into your SIEM / log aggregator. Here is a point written to our Cloudflare CSM:

Actually, this is a big problem for end customers: we are not able to push all zone logs to a single storage URI.

Your current design makes the API return a duplicate-key error,

because each zone's destination URI must be different.

But in my case:

I have multiple Cloudflare zones that need to push logs to a specific URI.

I then use my own internal ETL automation to go into my destination folder/blob/AWS S3 bucket

and pick up the logs from all the zones that are using Logpush, ingesting them into my SIEM engine or log aggregator.

Now I have discovered / been advised that I need a separate container/folder with a separate destination URI per zone.

This means I now have to build a complex ETL process to pull logs out of different folders/blobs/S3 buckets,

instead of a single place, as would be the case if I were able to use the same destination URI in the Logpush jobs of multiple zones.

Why is this the case with Cloudflare? Why are we really unable, within a single company, to push logs from all zones to a single place

"Single azure blob container " or a single aws bucket without separation by folders?

Are the log files not randomly named per zone?
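For anyone hitting the 1002 error above, one workaround, sketched here under assumptions, is to keep a single shared container but give every zone its own folder, so each job's `destination_conf` is unique. The variables `var.zone_ids`, `var.container_base_url`, and `var.sas_query` are illustrative and not from the original configuration; verify the `ownership_challenge_filename` attribute against your Cloudflare provider version.

```hcl
# Sketch under assumptions: one shared Azure container, one folder per
# zone, so every job has a unique destination_conf and the duplicate-key
# error (1002) is avoided.

resource "cloudflare_logpush_ownership_challenge" "per_zone" {
  for_each = var.zone_ids # e.g. { "zone-a" = "0fe1c448..." }

  zone_id          = each.value
  destination_conf = "${var.container_base_url}/${each.key}?${var.sas_query}"
}

# Fetch each zone's challenge token from the blob Cloudflare wrote.
data "http" "challenge" {
  for_each = var.zone_ids

  url = "${var.container_base_url}/${each.key}/${cloudflare_logpush_ownership_challenge.per_zone[each.key].ownership_challenge_filename}?${var.sas_query}"

  request_headers = {
    Accept = "application/json"
  }
}

resource "cloudflare_logpush_job" "per_zone" {
  for_each = var.zone_ids

  enabled             = true
  zone_id             = each.value
  dataset             = "http_requests"
  destination_conf    = "${var.container_base_url}/${each.key}?${var.sas_query}"
  ownership_challenge = data.http.challenge[each.key].body
}
```

With this layout the downstream ETL still reads a single container, just iterating over one folder per zone.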
