Googlebots Ignoring Robots.txt?
I have a site with the following robots.txt in the root:
    User-agent: *
    Disabled: /

    User-agent: Googlebot
    Disabled: /

    User-agent: Googlebot-Image
    Disallow: /
And pages within this site are getting crawled by Googlebot all day long. Is there something wrong with my file or with Google?
It should be `Disallow`, not `Disabled`. `Disabled` is not a valid robots.txt directive, and crawlers skip lines they don't recognize. Because your file has a specific `User-agent: Googlebot` group, Googlebot uses that group instead of the `*` group; once the invalid `Disabled` line is ignored, that group contains no rules at all, so Googlebot is allowed to crawl everything. Only the `Googlebot-Image` group, which uses the correct directive, actually blocks anything. The file should read:

    User-agent: *
    Disallow: /

    User-agent: Googlebot
    Disallow: /

    User-agent: Googlebot-Image
    Disallow: /

(The last two groups are now redundant, since `User-agent: *` already covers every crawler, but they do no harm.)
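If you want to sanity-check a robots.txt before deploying it, Python's standard-library `urllib.robotparser` can parse the rules and answer can-fetch queries. A minimal sketch follows; the test paths are made-up examples, and the parser is only a rough approximation of Googlebot's group-matching behavior, so treat it as a quick check rather than an exact emulation:

    from urllib import robotparser

    # The corrected rules from above, parsed directly
    # instead of fetched from a live site.
    RULES = """\
    User-agent: *
    Disallow: /

    User-agent: Googlebot
    Disallow: /

    User-agent: Googlebot-Image
    Disallow: /
    """

    rp = robotparser.RobotFileParser()
    rp.parse(RULES.splitlines())

    # Every crawler should now be blocked; each call prints False.
    print(rp.can_fetch("Googlebot", "/any/page.html"))
    print(rp.can_fetch("Googlebot-Image", "/images/logo.png"))
    print(rp.can_fetch("SomeOtherBot", "/"))

To test the file already deployed on your site, replace the inline string with `rp.set_url("https://example.com/robots.txt")` followed by `rp.read()`, substituting your own domain for the placeholder.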