Issue with blocked internal resources in robots.txt

Topic summary

A user questioned whether blocking the URL https://infraredheatingsupplies.com/checkouts/internal/preloads.js?locale=en-GB in robots.txt is correct.

Resolution confirmed:

  • Blocking this resource is expected and correct behavior
  • The file is a backend checkout-related script with no indexable content
  • Search engines don’t need to crawl checkout infrastructure files
  • Only content pages (products, blogs, etc.) should be crawled (see the sketch of typical rules below)
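
For context, here is a minimal sketch of the kind of robots.txt rule behind this behavior (illustrative only; the exact file varies by platform and store):

    # Illustrative sketch, not the store's actual file
    User-agent: *
    # A single rule on the /checkouts/ prefix covers both the
    # /checkouts/internal/... and /checkouts/external/... resources
    Disallow: /checkouts/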

Additional clarification:

  • The same blocking applies to external checkout resources (URLs with /external/ instead of /internal/)
  • This is standard practice for e-commerce platforms

The issue was resolved with confirmation that no action is needed. A reference to Google’s robots.txt documentation was shared for further context.


Hi,

Is it correct that the URL below is blocked? https://infraredheatingsupplies.com/checkouts/internal/preloads.js?locale=en-GB

Hi,

Yes, this is correct and expected. I had a similar issue recently; it just means that search engine bots are blocked from accessing that file, which is fine because you generally only want bots to crawl pages with content, such as blogs, products, etc.
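
If you want to verify it yourself, here is a minimal sketch using Python's standard-library robots.txt parser (urllib.robotparser); the result depends on the store's live file:

    from urllib.robotparser import RobotFileParser

    # Fetch and parse the store's live robots.txt
    rp = RobotFileParser("https://infraredheatingsupplies.com/robots.txt")
    rp.read()

    # False means compliant crawlers are blocked from this file
    url = "https://infraredheatingsupplies.com/checkouts/internal/preloads.js?locale=en-GB"
    print(rp.can_fetch("*", url))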

Hope that makes sense?

Regards

peppergray.shop

Hi,

Thank you for your reply. So to confirm: this page shouldn't be crawled and doesn't have any content; it's just a backend page for the checkout (or linked to the checkout), so it doesn't need to be indexed?

That’s right. No need for it to be crawled.

Hi,

Thank you for confirming. I assume it's the same with blocked external resources in robots.txt for the same URL, but with /external/ instead of /internal/?

Yes, the same for external.
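
The same check applies to the external variant; a minimal sketch, assuming the external URL differs only in that path segment as you describe:

    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://infraredheatingsupplies.com/robots.txt")
    rp.read()
    # The /external/ variant of the same preloads.js URL
    print(rp.can_fetch("*", "https://infraredheatingsupplies.com/checkouts/external/preloads.js?locale=en-GB"))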

If your issue is resolved, can you mark this thread as answered? Thanks and good luck with your store.

I found more insight on this question here…

https://developers.google.com/search/docs/crawling-indexing/robots/intro