We're having some problems with Google's web crawler. Google tells me it couldn't see the whole website, even though we (and our customers) can see it fine. They say it has something to do with the crawler, and they asked me to do the following:
Kindly update the robots.txt file on your web server to allow Google's crawler to fetch the provided landing pages and images. In order for us to access your whole site, ensure that your robots.txt file allows both user-agents Googlebot-image (used for images) and Googlebot (used for web pages) to crawl your site. You can do this by changing your robots.txt file as follows:
User-agent: Googlebot
Disallow:

User-agent: Googlebot-image
Disallow:
So I just copy/pasted that snippet into the robots.txt file, but now I'm wondering if I have done this the right way. If you scroll to the bottom of the following link you can see it's included now. Can somebody tell me if this is the right way to do it?
You don't want Google to crawl the _whole_ site. For example, the cart page isn't something you want or need indexed, given that it only matters to a single customer. Was there a more specific warning about what the crawler couldn't access? It's possible this is a warning you can safely ignore.
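As a rough sketch of that idea: a robots.txt that lets both of Google's crawlers fetch everything except customer-specific pages could look like the below. The /cart and /checkout paths are just examples here, not something from your site; swap in whatever paths actually apply.

```
# Allow Google's crawlers everywhere except customer-specific pages.
# /cart and /checkout are example paths - replace with your own.
User-agent: Googlebot
Disallow: /cart
Disallow: /checkout

User-agent: Googlebot-Image
Disallow: /cart
Disallow: /checkout
```

An empty `Disallow:` line (like in Google's snippet) means "block nothing", so the version they sent you allows full access; the version above only narrows that for pages that shouldn't be indexed.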
★ I jump on these forums in my free time to help and share some insights. Not looking to be hired, and not looking for work. http://freakdesign.com.au ★