Serve Different Page To Google Bot Crawler
I have an SPA with a lot of images in it. I want to expose those images to search engines. So I want to create "special" pages that will only be seen by the bot. The pages will contain metadata about the images.
Is it possible to make Googlebot crawl one page but index it as another?
You can serve a page that ONLY Googlebot sees.
How it works:
You basically set up a server that renders pages the way a client's browser would, and it "sits" between your real server, which delivers the HTML and assets (JS/CSS/images), and the crawler bot. This server is called a pre-render server, and it only sends data to bots, not to real clients, because requests are routed to it by URL: the URL looks like any of your pages' URLs but with some special addition at the end (probably).
You place a "page ready" command somewhere in your code, after all the AJAX calls have finished and the content has "settled down". Only when that command is called does the pre-render server forward the content to the bot, so the bot sees a "static page", "fed to it with a spoon".
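The routing described above can be sketched as a small piece of server-side middleware. Everything here (the snapshot host name, the bot list, the Express-style middleware shape) is an illustrative assumption, not any particular product's API:

```javascript
// Hypothetical pre-render server that holds the rendered snapshots.
const SNAPSHOT_HOST = 'https://prerender.example.com';

// Rough User-Agent check for common crawlers; real setups often also
// inspect the query string or maintain a more complete bot list.
const BOT_PATTERN = /googlebot|bingbot|yandexbot|baiduspider|duckduckbot/i;

// Pure helper: decide from the User-Agent header whether the request
// comes from a crawler.
function isBot(userAgent) {
  return BOT_PATTERN.test(userAgent || '');
}

// Express-style middleware: crawlers are sent to the pre-rendered
// snapshot, real browsers fall through to the normal SPA handler.
function prerenderMiddleware(req, res, next) {
  if (isBot(req.headers['user-agent'])) {
    // In practice you would proxy the snapshot's response body back;
    // a redirect is the simplest placeholder for this sketch.
    res.writeHead(302, { Location: SNAPSHOT_HOST + req.url });
    res.end();
    return;
  }
  next();
}

module.exports = { isBot, prerenderMiddleware };
```

Hosted services such as Prerender.io implement the "page ready" signal as a `window.prerenderReady` flag: the page sets it to `false` on load and flips it to `true` once its AJAX content has settled, and only then is the snapshot captured.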
In order to make your AJAX application crawlable, your site needs to abide by a new agreement. This agreement rests on the following:
The site adopts the AJAX crawling scheme.
The search engine indexes the HTML snapshot and serves your original AJAX URLs in search results.
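This "agreement" is Google's AJAX crawling scheme (since deprecated by Google, but it still illustrates the idea): a hash-bang URL like `https://example.com/#!photos/42` is fetched by the crawler as `https://example.com/?_escaped_fragment_=photos/42`, and your server answers that query-string form with the HTML snapshot. A minimal sketch of the URL mapping, with the exact encoding being an assumption of this sketch:

```javascript
// Map a hash-bang ("pretty") AJAX URL to the _escaped_fragment_ form
// that the crawler requests under the AJAX crawling scheme.
function toEscapedFragmentUrl(hashBangUrl) {
  const [base, fragment] = hashBangUrl.split('#!');
  if (fragment === undefined) return hashBangUrl; // not an AJAX URL, leave as-is
  const sep = base.includes('?') ? '&' : '?';
  return `${base}${sep}_escaped_fragment_=${encodeURIComponent(fragment)}`;
}
```

On the server side, any request carrying `_escaped_fragment_` is the signal to return the pre-rendered snapshot instead of the empty SPA shell.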
This technique isn't so easy to set up, but it is possible.