Serve Different Page To Google Bot Crawler

I have an SPA with a lot of images in it. I want to expose those images to search engines. So I want to create "special" pages that will only be seen by the bot. The pages will contain metadata about the images.

Is it possible to make googlebot crawl one page but index it as another?



You can serve a page that ONLY the Google bot sees.

How it works:

You basically set up a server that behaves like a client's browser and "sits" between your "real server" (the one that delivers the HTML and assets: JS/CSS/images) and the crawler bot. This server is called a pre-render server, and it only serves bots, not real clients, because requests are routed to it by URL mapping: the URL looks like any of your pages' URLs but with some special addition at the end (probably).
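A minimal sketch of the routing idea, assuming a Node.js server; the user-agent list and the pre-render service URL in the comment are illustrative, not part of any spec:

```javascript
// Known crawler user-agent substrings (illustrative, not exhaustive).
const BOT_AGENTS = ["googlebot", "bingbot", "yandexbot", "baiduspider"];

// Decide whether a request should be routed to the pre-render server.
function isBot(userAgent) {
  if (!userAgent) return false;
  const ua = userAgent.toLowerCase();
  return BOT_AGENTS.some((bot) => ua.includes(bot));
}

// In an Express-style middleware, a bot request would then be proxied to
// the pre-render server instead of the normal SPA entry point:
//
//   app.use((req, res, next) => {
//     if (isBot(req.headers["user-agent"])) {
//       proxyTo("http://prerender.example.com/" + req.url, res); // hypothetical proxy helper
//     } else {
//       next(); // serve the regular SPA
//     }
//   });
```

Real clients never hit the pre-render path, so they keep getting the normal JavaScript app.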

The pre-render server acts like a browser: it parses and executes the JavaScript, and only serves the page once it is "ready". You need to carefully trigger a "ready" signal somewhere in your code, after all the AJAX calls have returned and the content has settled down. Only when that signal fires does the pre-render server forward the rendered content to the bot, so the bot sees a "static page", "fed to it with a spoon".
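By way of example, Prerender.io implements this "ready" signal as a flag on `window` (`window.prerenderReady`); other pre-render services may use a different convention. A sketch, where `fetchImages` and `render` stand in for hypothetical app code:

```javascript
// Tell the pre-render server the page is not ready to be snapshotted yet.
// (window.prerenderReady is the flag Prerender.io waits for; other
// pre-render services may use a different mechanism.)
function markNotReady(global) {
  global.prerenderReady = false;
}

// Call this once all AJAX requests have returned and the DOM has settled.
function markReady(global) {
  global.prerenderReady = true;
}

// Typical SPA usage (fetchImages and render are hypothetical app functions):
//
//   markNotReady(window);
//   fetchImages().then((images) => {
//     render(images);
//     markReady(window); // pre-render server now snapshots the page
//   });
```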


In order to make your AJAX application crawlable, your site needs to abide by a new agreement. This agreement rests on the following:

  1. The site adopts the AJAX crawling scheme.

  2. For each URL that has dynamically produced content, your server provides an HTML snapshot, which is the content a user (with a browser) sees. Often, such URLs will be AJAX URLs, that is, URLs containing a hash fragment, for example a URL where #key=value is the hash fragment. An HTML snapshot is all the content that appears on the page after the JavaScript has been executed.

  3. The search engine indexes the HTML snapshot and serves your original AJAX URLs in search results.
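Under that scheme (Google's AJAX crawling scheme, which has since been deprecated), the crawler rewrote hash-bang "pretty" URLs into an "ugly" URL with an `_escaped_fragment_` query parameter, and your server answered that URL with the HTML snapshot. A sketch of the mapping, for illustration only:

```javascript
// Map a hash-bang ("pretty") URL to the "ugly" URL the crawler requests,
// per Google's (now-deprecated) AJAX crawling scheme:
//   http://example.com/#!key=value -> http://example.com/?_escaped_fragment_=key=value
function toEscapedFragmentUrl(prettyUrl) {
  const bangIndex = prettyUrl.indexOf("#!");
  if (bangIndex === -1) return prettyUrl; // no hash-bang, nothing to rewrite
  const base = prettyUrl.slice(0, bangIndex);
  const fragment = prettyUrl.slice(bangIndex + 2);
  // The scheme required escaping a few reserved characters in the fragment.
  const escaped = fragment
    .replace(/%/g, "%25")
    .replace(/#/g, "%23")
    .replace(/&/g, "%26")
    .replace(/\+/g, "%2B");
  const separator = base.includes("?") ? "&" : "?";
  return base + separator + "_escaped_fragment_=" + escaped;
}
```

Your server would detect `_escaped_fragment_` in the query string and return the pre-rendered snapshot for the corresponding page.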

This technique isn't so easy to set up, but it is possible.