Cloudflare WARP in Docker: "Failed to start firewall"

I am trying to get warp-cli working inside a Docker container. The container is built from a slim Debian Buster base image and uses s6 as a process supervisor. The .deb installer is downloaded and installed in the image, and warp-svc is configured to run as an s6 service. The container has the NET_ADMIN capability and uses host networking. When I try to connect with warp-cli, the service appears unable to start its firewall. The following is an excerpt from the warp-svc logs (a rough sketch of the container setup follows the excerpt):

...
2023-03-26T15:36:43.507Z  INFO main_loop: firewall: Firewall starting
2023-03-26T15:36:43.507Z  INFO warp::warp_service::ipc_loop: IPC connection ended
2023-03-26T15:36:43.509Z  INFO warp::warp_service::ipc_loop: IPC: new connection privileged=true process_name="/bin/warp-cli" pid=233
2023-03-26T15:36:43.512Z  WARN main_loop: firewall::linux: Failed to set firewall rules via stdin. Retrying using temporary file exit_code=ExitStatus(unix_wait_status(256))
2023-03-26T15:36:43.519Z ERROR main_loop: firewall::linux: Failed to start firewall with exit code: exit status: 1
2023-03-26T15:36:43.519Z DEBUG main_loop: firewall: Firewall allow private IPs
2023-03-26T15:36:43.526Z  WARN main_loop: firewall::linux: Failed to set firewall rules via stdin. Retrying using temporary file exit_code=ExitStatus(unix_wait_status(256))
2023-03-26T15:36:43.533Z ERROR main_loop: firewall::linux: Failed to start firewall with exit code: exit status: 1
2023-03-26T15:36:43.533Z  INFO main_loop: warp::warp_service: New User Settings
2023-03-26T15:36:43.533Z DEBUG main_loop: warp::warp_service::ipc_handlers: Sending IPC update: SettingsUpdated
2023-03-26T15:36:43.533Z DEBUG main_loop: warp::warp_service::ipc_handlers: Ipc Broadcast ResponseUpdate: SettingsUpdated
2023-03-26T15:36:43.533Z DEBUG main_loop: warp::warp_service: update_settings: no restart required
2023-03-26T15:36:43.533Z DEBUG main_loop: firewall: Firewall allow private IPs
2023-03-26T15:36:43.542Z  WARN main_loop: firewall::linux: Failed to set firewall rules via stdin. Retrying using temporary file exit_code=ExitStatus(unix_wait_status(256))
2023-03-26T15:36:43.552Z ERROR main_loop: firewall::linux: Failed to start firewall with exit code: exit status: 1
2023-03-26T15:36:43.552Z  WARN main_loop: warp::warp_service: Disconnected, but reason unknown net_info=IPv4: [enp0s3; 10.0.2.15; Ethernet]; DNS servers:;   172.30.32.3:53; 
2023-03-26T15:36:43.552Z DEBUG main_loop: firewall: Firewall allow private IPs
2023-03-26T15:36:43.560Z  WARN main_loop: firewall::linux: Failed to set firewall rules via stdin. Retrying using temporary file exit_code=ExitStatus(unix_wait_status(256))
2023-03-26T15:36:43.570Z ERROR main_loop: firewall::linux: Failed to start firewall with exit code: exit status: 1
2023-03-26T15:36:43.570Z  INFO main_loop: warp::warp_service: captive_portal_fw_until: Indefinitely
2023-03-26T15:36:43.570Z DEBUG main_loop: warp::warp: Using auto fallback: true
2023-03-26T15:36:43.571Z DEBUG main_loop: warp::warp: Current Network: IPv4: [enp0s3; 10.0.2.15; Ethernet]; DNS servers:;   172.30.32.3:53; 
2023-03-26T15:36:43.572Z  INFO main_loop: warp::warp: Initiate WARP connection
2023-03-26T15:36:43.573Z DEBUG main_loop: firewall: Firewall allow tunnel
2023-03-26T15:36:43.580Z  WARN main_loop: firewall::linux: Failed to set firewall rules via stdin. Retrying using temporary file exit_code=ExitStatus(unix_wait_status(256))
2023-03-26T15:36:43.588Z ERROR main_loop: firewall::linux: Failed to start firewall with exit code: exit status: 1
...
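
For context, this is roughly how the container is put together. This is a simplified sketch rather than my exact files; the package URL and the s6 service path below are placeholders:

    # Dockerfile (sketch)
    FROM debian:buster-slim

    # Download and install the WARP .deb; the exact package path is a placeholder
    RUN apt-get update \
     && apt-get install -y --no-install-recommends curl ca-certificates \
     && curl -fsSL -o /tmp/cloudflare-warp.deb "https://pkg.cloudflareclient.com/<path-to-warp-deb>" \
     && apt-get install -y /tmp/cloudflare-warp.deb \
     && rm -rf /var/lib/apt/lists/* /tmp/cloudflare-warp.deb

    # s6 run script that execs warp-svc (path follows the usual s6-overlay layout)
    COPY rootfs/etc/services.d/warp-svc/run /etc/services.d/warp-svc/run

The container is then started with host networking and the NET_ADMIN capability:

    docker run --cap-add NET_ADMIN --network host my-warp-image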

How can I go about solving this issue? Is there a dependency missing?
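
In case it helps narrow this down, these are the kinds of checks that can be run inside the container to see whether the firewall tooling is present and usable at all. I am not sure which backend warp-svc shells out to, so this is a guess at a possible missing dependency rather than a known requirement:

    # Is either firewall front-end installed in the image?
    command -v nft || echo "nft not found"
    command -v iptables || echo "iptables not found"

    # Can the container actually read and modify rules? (needs root and NET_ADMIN)
    nft list ruleset
    iptables -L -n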

Thank you

Hi @sss, I realise it's been a while since you reported this issue, so hopefully you found a solution in the meantime!

I'm using Podman Desktop on macOS.

I have had exactly the same problem myself over the last couple of days. Initially I worked around it by just running the container with the --privileged flag, and everything seemed fine. Then my Podman machine had a problem and I had to delete and recreate it. After that, when I rebuilt the container, I found myself with exactly the same issue you have, even when running with the --privileged flag, and I have absolutely no idea what else has changed.
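
For reference, this is roughly the shape of the command I have been using; the image name is a placeholder and I may have had other flags I have since forgotten:

    # Worked initially, then stopped working after the Podman machine was recreated
    podman run --privileged --name warp my-warp-image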

Did you figure this out? Thanks!

Hello @DevOpsFu,

Unfortunately I was unable to find a solution to this issue.

Damn, that sucks! Like I said, I did have it working yesterday, so it is at least possible; I just don't know what I changed! I will keep at it and I'll be sure to reply to this thread if I do figure it out.

I have a solution, of sorts! The reason my container stopped working wasn't anything that had changed on my machine, but some changes I made on the Cloudflare side, combined with what appear to be cached settings in the container confusing things further.

If your WARP client device profile is set to Gateway with WARP (the default), that is what causes warp-svc to fail when setting the firewall rules. If you set it to Secure Web Gateway without DNS Filtering instead, you still get full routing into your Zero Trust network, just without the traffic inspection that the full Gateway mode provides. With that setting, warp-svc does not appear to try to update the local firewall, and the service starts up correctly.
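
A quick way to see what the daemon has actually picked up is to ask the client from inside the container. These are standard warp-cli subcommands, nothing specific to my setup:

    # Current connection state
    warp-cli status

    # Settings the daemon is currently applying (should reflect the device profile)
    warp-cli settings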

Things to note:

  • Assuming you're using a Service Token to connect, you will need a matching Device Profile. You can create one with a rule expression of User Email is non_identity@<team>.cloudflareaccess.com.
  • I found that, while I was changing profile settings to narrow this down, I had to delete the /var/lib/cloudflare-warp/conf.json file between connection attempts. The profile settings seem to be cached there and used before any profile updates are pulled from Cloudflare's API, so if the service has already cached the "bad" config, it never gets to the point where it pulls down the updated profile. (My reset sequence is sketched after this list.)
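
The reset I ended up doing between attempts looked roughly like this. The s6 service directory path is a guess based on a typical s6-overlay layout, so adjust it to wherever warp-svc lives in your image:

    # Stop warp-svc, clear the cached profile, bring the service back up
    s6-svc -d /var/run/s6/services/warp-svc
    rm -f /var/lib/cloudflare-warp/conf.json
    s6-svc -u /var/run/s6/services/warp-svc

    # Then try connecting again
    warp-cli connect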

So far this setup has given me everything I need, which is private connectivity to a couple of internal endpoints in my hosted infrastructure. I hope this helps someone else too.