Block 100s of URLs from Search Engines Using robots.txt
I have about 100 pages on my website which I don't want indexed by Google. Is there any way to block them using robots.txt? It would be very tiresome to edit each page and add a noindex meta tag.
All the URLs I want to block look like:
. . .
Not sure, but would adding something like the following work?

User-agent: *
Disallow: /index-*.html
Yes, it will work using wildcards. Reference: https://geoffkenyon.com/how-to-use-wildcards-robots-txt
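To see why the rule above catches all those pages, here is a minimal sketch of how Google-style robots.txt wildcard matching behaves: `*` matches any run of characters and the rule matches as a prefix of the URL path. The function name and example paths are hypothetical, not from any library; note that Python's built-in `urllib.robotparser` does not handle `*` wildcards, which is why this is written by hand.

```python
import re

def robots_pattern_matches(pattern: str, path: str) -> bool:
    """Check whether a URL path matches a robots.txt rule using
    Google-style wildcards: '*' matches any characters, and a
    trailing '$' anchors the match to the end of the path.
    Matching is otherwise prefix-based, as in robots.txt."""
    regex = re.escape(pattern).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        regex = regex[:-2] + "$"  # turn escaped '$' back into an end anchor
    return re.match(regex, path) is not None

# Pages like /index-42.html are blocked by "Disallow: /index-*.html",
# while unrelated pages are not.
print(robots_pattern_matches("/index-*.html", "/index-42.html"))  # True
print(robots_pattern_matches("/index-*.html", "/about.html"))     # False
```

Because matching is prefix-based, `/index-*.html` would also block `/index-42.htmlx`; append `$` to the rule (`/index-*.html$`) if you need it to match only URLs that end in `.html`.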