AngularJS SEO - Once And For All

- 1 answer

I'm working on a big project with 15 sub-sites and 13 different schema pages. Currently the site uses ui.router for all pages, and my data is fetched via Angular $http requests. After tests and trials in Search Console, it looks like Google doesn't see any of my pages except the home page, and the data from the $http requests doesn't show up. What am I doing wrong?

What I'm doing so far is:

Set the base tag in the <head>:

<base href="/" />

Create an .htaccess file:

RewriteEngine On 
Options FollowSymLinks

RewriteBase /

RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ /#/$1 [L]

Add to app.config:


Example from my app.config:

function createState(name) {
    return {
        url: '/' + name + '/:id',
        templateUrl: 'templates/pages/' + name + '.html',
        controller: 'singlePage',
        resolve: {
            pageData: function(getData, $stateParams) {
                var params = $stateParams;
                // the page type matches the state name
                params.type = name;
                return getData.getPageData(params.type, params);
            }
        }
    };
}

$stateProvider
    .state('info', createState('info'))
    .state('news', createState('news'))
    .state('event', createState('event'));



Why does the google crawler not follow my links / state changes created by UI Router?

Well, the Google crawl bot is able to execute JavaScript (this feature was implemented not long ago). But the bot still crawls URLs the same way it always has: it checks the href attribute of all the a-tags in your HTML markup and follows them. If you are using the JavaScript state-change functionality provided by ui.router, the bot will never be able to follow these links. It also does not recognize HTML5 URL route changes, so no pages will be crawled/indexed.
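To illustrate the point, here is a toy link extractor (my own sketch, regex-based for brevity, not Googlebot's actual logic) that collects only href attributes, the way a non-JavaScript crawler would. An anchor that relies solely on ui-sref gives it nothing to follow:

```javascript
// Toy link extractor: collects href values the way a non-JS crawler would.
// Regex-based for illustration only; real crawlers parse HTML properly.
function extractHrefs(html) {
    var hrefs = [];
    var re = /<a\s[^>]*href="([^"]*)"/g;
    var m;
    while ((m = re.exec(html)) !== null) {
        hrefs.push(m[1]);
    }
    return hrefs;
}

// An anchor using only ui-sref exposes no href to the crawler:
extractHrefs('<a ui-sref="info({id: 1})">Info</a>');  // []
// A plain anchor is followable:
extractHrefs('<a href="/#/info/1">Info</a>');         // ['/#/info/1']
```

This is why a plain, crawlable URL (a real href, or an entry in a sitemap as described below) has to exist for every state you want indexed.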

You can counteract that with some basic SEO measures, but there are still some limitations you need to deal with. Some of these limitations are:

  • Social content provided by meta tags: sharing a page on Facebook using og:image etc. will not work with AngularJS E2E binding.
  • A title tag populated via E2E binding will not be recognized by social media sharing.

How do you make the crawl bot index your pages? This is pretty easy: just create a sitemap.xml including all your URLs, upload it to your web server, and register it using Google Webmaster Tools. The Google bot will then crawl all the URLs you provided in your sitemap.xml and finally index your pages/URLs! =)

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <!-- one <url> entry per page URL, e.g.: -->
    <url><loc>http://www.example.com/#/info/1</loc></url>
</urlset>

We did this and it works very well. You can create your sitemap.xml manually. We went a step further and automated this: the XML and the ui.router routes are both generated on the backend side of our web applications. We have a JSON configuration file in which we define all our routes, and a script creates the XML and the JavaScript ui.router states automatically.
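A minimal sketch of that generation step (the config shape with name and ids fields, the buildSitemap name, and the base URL are my own assumptions for illustration; the answer's actual backend script is not shown):

```javascript
// Sketch: build sitemap.xml content from a routes configuration.
// The { name: ..., ids: [...] } shape is a hypothetical config format.
function buildSitemap(routes, baseUrl) {
    var urls = [];
    routes.forEach(function(route) {
        route.ids.forEach(function(id) {
            // One <url> entry per crawlable page, e.g. /#/info/1
            urls.push('    <url><loc>' + baseUrl + '/#/' + route.name + '/' + id + '</loc></url>');
        });
    });
    return '<?xml version="1.0" encoding="UTF-8"?>\n' +
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
        urls.join('\n') + '\n' +
        '</urlset>';
}

var xml = buildSitemap(
    [{ name: 'info', ids: [1, 2] }, { name: 'news', ids: [7] }],
    'http://www.example.com'
);
```

The same routes array can feed a second generator that emits the matching .state(...) registrations, which keeps the sitemap and the ui.router configuration in sync.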

This is the result of what we did:

If you want to build a nicely SEO/social-optimized page, don't use an SPA framework like AngularJS. I would also not recommend creating a precompiler; it makes no sense to build an SPA application and then precompile it. Rather than writing a precompiler, you should go back to the roots and use PHP, Node.js, Java, etc. to create a server-rendered web application.