Blocking ?page= In Robots.txt

- 1 answer

Basically, Google is trying to index thousands of article URLs that all look something like this:


The URLs range from ?page=1 to ?page=99 due to my pagination and infinite scroll.

How can I match just the ?page= part of the URL in my robots.txt file so Google does not index anything with a page number?

I'm not sure if this is the correct place to ask, but I am having a hard time finding an answer. Thanks.



For Google, preferably do it through Google Webmaster Tools (Search Console): go to Crawl -> URL Parameters:


Add a parameter named page, set its effect to Paginates, and under the crawl option choose to crawl only URLs with value=1.

Read more in the Search Console Help article "Learn the impact of duplicate URLs".
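If you also want to handle this directly in robots.txt, Google's crawler supports the * wildcard in Disallow rules, so a rule like the following blocks crawling of any URL whose path contains ?page= (this sketch assumes your pagination parameter is literally named page):

```
User-agent: *
Disallow: /*?page=
```

Two caveats: the * wildcard is a Google/Bing extension rather than part of the original robots.txt specification, and robots.txt blocks crawling rather than indexing, so URLs that are already indexed may take some time to drop out of the results.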