Blocking ?page= In Robots.txt
Basically, Google is trying to index thousands of article URLs that differ only in their ?page= parameter. The URLs range up to ?page=99 because of my pagination and infinite scroll.

How can I target just the ?page= part of the URL in my robots.txt file so that Google does not index anything with a page number?

Not sure if this is the correct place to ask this question, but I am having a hard time finding an answer. Thanks.
For Google, it is preferable to do this through Google Webmaster Tools: go to Crawl → URL Parameters, add a parameter named page, and choose its effect as "Paginate, Crawl only".

Read more in the Search Console Help article "Learn the impact of duplicate URLs".
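If you do want to handle this in robots.txt itself, Googlebot supports the * wildcard in Disallow rules, so you can match the page query parameter wherever it appears. A minimal sketch (assuming the parameter can occur on any path, either as the first or a later query parameter):

```
# Block crawling of any URL carrying a page= query parameter
User-agent: *
Disallow: /*?page=
Disallow: /*&page=
```

Keep in mind that robots.txt only prevents crawling, not indexing; a blocked URL that is linked from elsewhere can still show up in results. If the goal is to keep paginated URLs out of the index entirely, a `<meta name="robots" content="noindex">` tag on those pages is the more reliable tool.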