GSC: 1000 pages affected by "Crawled – currently not indexed"

Topic summary

  • Issue: A multi-product store reports 900+ URLs in Google Search Console (GSC) with status “Crawled – currently not indexed” and asks if there’s a bulk way to request indexing instead of doing “Test live URL” → “Request indexing” for each page.

  • Context/terms: GSC is Google Search Console. “Crawled – currently not indexed” means Google has fetched the page but chose not to add it to the search index (at least for now).

  • Responses: One reply advises that not everything crawled should be indexed; often non-indexed pages lack unique value compared to other versions on the site (e.g., duplicates or thin variants).

  • Outcomes/decisions: No bulk indexing method or alternative workflow was provided. The focus shifted to assessing page uniqueness/value rather than forcing indexing.

  • Status: Unresolved/ongoing. The question about a bulk indexing request remains unanswered. Implicit next step is to improve unique content or differentiate similar pages.

  • Notes: A screenshot of the GSC report was shared; it’s illustrative but not essential to the discussion’s substance.

Summarized with AI on January 17. AI used: gpt-5.

Hi there,

I've had a multi-product store for a few months and am currently studying the Google Search Console (GSC) reports.

I'm going through the issues listed there step by step. One of them shows more than 900 pages with the status "Crawled – currently not indexed".

I understand I could handle this page by page, using "Test live URL" and then "Request indexing".

However, that would be quite a lot of work for 900 pages.

Is there another way, such as a bulk indexing request for all pages at once?

Thank you in advance.

Not everything that is crawled should necessarily be indexed. You will often find that the pages left unindexed have no unique value compared to another version of the page on the website (for example, near-duplicate or thin product variants).
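For reference, Search Console offers no bulk "Request indexing" action for ordinary pages; the supported way to point Google at many URLs at once is an XML sitemap submitted in GSC (though, as the reply above notes, a sitemap does not force indexing of low-value pages). A minimal sketch, assuming you have your 900 product URLs in a Python list (the example URLs below are placeholders), that generates such a sitemap:

```python
from xml.etree import ElementTree as ET

def build_sitemap(urls):
    """Build a minimal XML sitemap (sitemaps.org 0.9 schema) from absolute URLs."""
    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for u in urls:
        url_el = ET.SubElement(urlset, "url")
        # <loc> is the only required child element per the sitemap protocol
        ET.SubElement(url_el, "loc").text = u
    return ET.tostring(urlset, encoding="unicode", xml_declaration=True)

# Hypothetical store URLs for illustration only
sitemap = build_sitemap([
    "https://example.com/products/widget-1",
    "https://example.com/products/widget-2",
])
print(sitemap)
```

The resulting file would be uploaded to the site root and submitted under Sitemaps in GSC. This surfaces the URLs to Google in one step, but whether they get indexed still depends on their perceived value.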