I Have A 302 Redirect Pointing To Www. But Googlebot Keeps Crawling Non-www URLs

- 1 answer

Do you know if it is possible to force the robots to crawl the www URLs and not the non-www ones? In my case, I have a web app with cached URLs enabled (to view the HTML code), but only on www.

So, when the robots crawl the non-www URLs, they get no data.

The redirect (non-www > www) is automatic on Nginx, but with no results.

I should add that in my sitemap, the URLs all have www.

My Nginx redirect:

server {
  listen *:80;

  server_name ;

  location / {
    # Matches a bare two-label host (no www)
    if ($http_host ~ "^([^\.]+)\.([^\.]+)$") {
      # `redirect` issues a 302 (temporary) redirect
      rewrite ^/(.*)$ http://www.$http_host/$1 redirect;
    }
  }
}


Do you have any ideas?

Thank you!



If you submitted a sitemap with the correct URLs a week ago, it seems strange that Google keeps requesting the old ones.

Anyway - you’re sending the wrong status code in your non-www to www redirect. You are sending a 302 but should be sending a 301. Philippe explains the difference in this answer:

Status 301 means that the resource (page) is moved permanently to a new location. The client/browser should not attempt to request the original location but use the new location from now on.

Status 302 means that the resource is temporarily located somewhere else, and the client/browser should continue requesting the original URL.
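
Concretely, in the Nginx config above, the `redirect` flag on the `rewrite` directive is what produces the 302; swapping it for `permanent` (or using a `return 301`) sends the 301 instead. A minimal sketch, assuming the usual non-www to www setup (`example.com` is a placeholder for your bare domain):

```nginx
server {
  listen *:80;
  server_name example.com;   # placeholder: the bare (non-www) domain

  # `return 301` sends a permanent redirect; with `rewrite`, the
  # equivalent is the `permanent` flag instead of `redirect` (302)
  return 301 http://www.$host$request_uri;
}
```

Using a dedicated `server` block with `return` also avoids the `if` + regex match entirely, which is the form the Nginx documentation recommends for host-level redirects.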