False positive 403 errors from automated monitoring services

Topic summary

  • Issue: Automated uptime checks via Wormly are triggering false 403 (Forbidden) responses instead of the expected 200 (OK) across multiple Shopify stores.

  • Timeline: Began in the last two weeks; most intense on July 26–27, with intermittent 403s cycling throughout the day. Similar events recurred the following week.

  • Impact: Stores are repeatedly flagged as offline, then cleared hours later, leading to multiple false outage alerts.

  • Vendor feedback: Wormly attributes the behavior to Shopify.

  • Request for input: Looking for confirmation from others experiencing the same and for mitigation ideas (e.g., setting a specific User-Agent, cookies, or custom HTTP headers in monitoring requests).

  • Status: No resolution reported; seeking community guidance and potential configuration workarounds.


We use Wormly to automatically monitor and alert us to any downtime or outages for the sites we look after. In the last two weeks we have received multiple false-positive 403 errors:

Error: Got response code 403, expected 200

When these occur, they affect all of the Shopify stores being monitored. On the 26th and 27th of July these false positives occurred throughout the entire 24-hour period. Sites would be reported as offline and then cleared a few hours later, only for the process to repeat itself throughout the day.

We’ve seen this occur a few more times in the last week. When we contacted Wormly, we were told this is a Shopify issue, and we’re wondering whether anyone else has seen it over the last few weeks. Perhaps there is a User-Agent, cookie, or custom HTTP request header we could set in the monitoring requests to avoid these false positives?
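Nothing in this thread confirms which headers (if any) would help, but for anyone wanting to test the idea outside their monitoring vendor, here is a minimal sketch of a probe that sends a custom User-Agent and extra headers and checks for the expected status code. The `UptimeBot/1.0` agent string and the URL are placeholders, not anything Wormly or Shopify actually uses:

```python
# Sketch of an uptime probe with a configurable User-Agent and headers.
# Assumption: a plain HTTP GET is enough to reproduce the monitoring check.
import urllib.error
import urllib.request


def build_probe_request(url, user_agent, extra_headers=None):
    """Build a GET request carrying a custom User-Agent plus any extra headers."""
    headers = {"User-Agent": user_agent, "Accept": "text/html"}
    if extra_headers:
        headers.update(extra_headers)
    return urllib.request.Request(url, headers=headers, method="GET")


def probe_ok(req, expected=200, timeout=10):
    """Return True if the response status matches the expected code."""
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status == expected
    except urllib.error.HTTPError as err:
        # A 403 lands here; compare it against the expected code too.
        return err.code == expected


if __name__ == "__main__":
    # Placeholder URL and agent string; swap in your own store and identifier.
    req = build_probe_request(
        "https://example.com/",
        "UptimeBot/1.0",
        extra_headers={"Accept-Language": "en-GB"},
    )
    print("up" if probe_ok(req) else "down")
```

Running this from a second vantage point when Wormly reports a 403 would at least show whether the block is tied to the monitoring service's requests or affects all clients.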