Search engines only let you search content that is in their index; if something does not show up, that does not mean it does not exist.
To find websites that are not indexed by Google, a good starting point is to try other search engines such as DuckDuckGo, Bing, and Yandex and look for the material there.
Content that is not indexed today will not necessarily stay unindexed forever. And not everything should be indexed: consider how useful it would be if every /checkout/ page were indexed! You'd have an incredibly clogged search engine that returned utterly irrelevant results.
Some businesses and sectors purposefully keep pages out of the index, for example development environments (dev.website.com) or sensitive reports, either with a meta tag such as <meta name="robots" content="noindex"> or with a robots.txt file:

robots.txt
User-agent: *
Disallow: /
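To see what a rule like this means for crawlers, here is a minimal sketch using Python's standard-library robots.txt parser (urllib.robotparser). The file contents are inlined rather than fetched from a real site, and the URLs are placeholders.

```python
# Sketch: checking the "block everything" robots.txt above with
# Python's standard-library parser.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# With "Disallow: /" for every user agent, no compliant crawler
# may fetch any path on the site.
print(parser.can_fetch("Googlebot", "https://dev.website.com/"))          # False
print(parser.can_fetch("bingbot", "https://dev.website.com/report.pdf"))  # False
```

Note that robots.txt is only a request: well-behaved crawlers honor it, but it is not an access control.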
You can even block specific search engines while staying indexed on others! For example:
robots.txt
# Disallowing Googlebot from indexing any of the website
User-agent: Googlebot
Disallow: /

# Disallowing Bingbot from crawling the full website, except /data-you-want-bing-to-see and .js / .css files
User-agent: bingbot
Allow: /*.js$
Allow: /*.css$
Allow: /data-you-want-bing-to-see
Disallow: /
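The per-bot behavior above can be verified with the same standard-library parser. One caveat: urllib.robotparser does not implement wildcard patterns like /*.js$, so this sketch only exercises the plain-path rules; the site URL is a made-up placeholder.

```python
# Sketch: different crawlers get different answers from the same
# robots.txt, depending on which User-agent entry matches them.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: Googlebot
Disallow: /

User-agent: bingbot
Allow: /data-you-want-bing-to-see
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("Googlebot", "https://example.com/any-page"))                 # False
print(parser.can_fetch("bingbot", "https://example.com/data-you-want-bing-to-see"))  # True
print(parser.can_fetch("bingbot", "https://example.com/private"))                    # False
```

The Allow line is listed before the Disallow line so that the more specific exception wins for bingbot.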
Alternatively, create schema markup for your business and implement it on your site:
https://incrementors.com/tools/local-business-schema-generator/
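Tools like the generator above produce JSON-LD structured data. As an illustration, here is a minimal sketch that builds a Schema.org LocalBusiness block; every business detail in it is a made-up placeholder, not output from that tool.

```python
# Sketch: a minimal LocalBusiness JSON-LD block. All values below
# are placeholders for illustration only.
import json

local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Bakery",            # placeholder name
    "url": "https://example.com",        # placeholder URL
    "telephone": "+1-555-0100",          # placeholder phone
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 Main St",
        "addressLocality": "Springfield",
        "postalCode": "00000",
    },
}

# The serialized result goes into the page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(local_business, indent=2))
```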
Or check these resources:
WebCrawler Web Search
Crawler.com
10 Search Engines to Explore the Invisible Web