Disallow Query Strings in robots.txt for Only One URL

So I have one URL that could be indexed with various query strings. I definitely want to keep the base URL indexed, but none of its query-parameter variants. I would like query parameters indexed on other pages, just not this one, so a catch-all for all pages will not work. Secondarily, I am rewriting URLs to remove trailing slashes; would this rule catch those as well?

Does this work as a solution to my issue?

Disallow: /hatching?*

I have heard this only works for Google's crawlers... is there a more robust solution that works for all crawlers?

Thanks for any help! It is greatly appreciated.



User-agent: *
Disallow: /hatching?
Disallow: /hatching/

This robots.txt blocks crawling of every URL whose path (including its query string) starts with /hatching? or /hatching/, so for example:

  • /hatching?
  • /hatching?foo=bar
  • /hatching/
  • /hatching/foo
  • /hatching/?foo=bar

It uses only features from the original robots.txt specification (literal prefix matching, no wildcards), so all conforming bots should be able to understand it. The bare /hatching URL stays crawlable, because neither rule is a prefix of that path.
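Under the original specification, a Disallow value is compared as a plain character-by-character prefix against the URL's path plus query string. A minimal sketch of that matching logic (the `is_blocked` helper is hypothetical, just to illustrate the prefix rule, not part of any real crawler):

    def is_blocked(url_path: str) -> bool:
        """Original-spec robots.txt matching: a URL is disallowed when its
        path (with any query string attached) starts with a Disallow value."""
        disallow_rules = ["/hatching?", "/hatching/"]
        return any(url_path.startswith(rule) for rule in disallow_rules)

    # The base URL stays crawlable; query-string and slash variants do not.
    assert not is_blocked("/hatching")
    assert is_blocked("/hatching?foo=bar")
    assert is_blocked("/hatching/")
    assert is_blocked("/hatching/?foo=bar")

Note this is why no wildcard is needed here: since matching is by prefix, /hatching? already covers every query-string variant of that one URL, while leaving /hatching itself and every other page untouched.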