I have a few questions about this robots.txt file.
User-agent: *
Disallow: /administrator/
Disallow: /css/
Disallow: /func/
Disallow: /images/
Disallow: /inc/
Disallow: /js/
Disallow: /login/
Disallow: /recover/
Disallow: /Scripts/
Disallow: /store/com-handler/
Disallow: /store/img/
Disallow: /store/theme/
Disallow: /store/StoreSys.swf
Disallow: config.php
This is going to block crawlers from all files inside each folder, right? Or do I have to add an asterisk at the end of each folder name?
I think this should do it, but I'm not sure whether I have to add

Allow: /

right after User-agent: *. I suppose it isn't needed. Is anything wrong with this robots file?
PS: If someone can suggest a validation app for local use, I would be glad.
It's fine as is, if I understand what you want. Disallow matches by path prefix, so no trailing asterisk is needed. E.g.

/css/main.css
/store/img/photo.jpg

are both blocked, but

/store/index.html

is allowed. Note that Allow is a less widely supported extension designed only to counter a previous Disallow. You might use it if, for instance, despite your

Disallow: /images/

you decide you want a particular image (say, /images/logo.png) allowed. So:

Allow: /images/logo.png
Disallow: /images/

All other images remain blocked. See http://www.searchtools.com/robots/robots-txt.html for more info, including a list of checkers.
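For the PS: if you want something you can run locally, a minimal sketch using Python's standard-library urllib.robotparser is below. The example.com host and the test URLs are placeholders; paste in your full rule set and your own paths.

# check_robots.py -- test robots.txt rules locally using only the
# Python standard library; no third-party validator needed.
from urllib.robotparser import RobotFileParser

# A few of your rules, pasted inline (you could also read the real file).
ROBOTS_TXT = """\
User-agent: *
Disallow: /css/
Disallow: /images/
Disallow: /store/img/
Disallow: /store/StoreSys.swf
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Hypothetical URLs to test; replace with paths from your own site.
for url in [
    "http://example.com/css/main.css",
    "http://example.com/images/logo.png",
    "http://example.com/store/index.html",
]:
    verdict = "allowed" if parser.can_fetch("*", url) else "blocked"
    print(url, "->", verdict)

Recent Python versions of robotparser also understand Allow lines, so it can test the Allow/Disallow combination above as well.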