Why does Google's robots.txt Tester report errors and say the file is not valid?


As you can see in the image below, the Google Webmaster Tools robots.txt Tester reports 9 errors, but I don't know what the problem is or how to fix it.

Please help me figure out what's wrong.



That is a valid robots.txt, but you've got a UTF-8 BOM (\xef\xbb\xbf) at the beginning of the file. That's why there's a red dot next to 'User' on the first line. The BOM tells browsers and text editors to interpret the file as UTF-8, whereas robots.txt is expected to contain only ASCII characters.

Convert your text file to ASCII and the errors will go away. Alternatively, copy everything after the red dot and paste it back in.
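If you'd rather do the conversion in code, here is a minimal sketch of stripping a leading BOM from the raw bytes. The sample bytes below are taken from the question's file; apply the same function to the bytes of your own robots.txt before re-saving it.

```python
BOM = b'\xef\xbb\xbf'  # the UTF-8 byte order mark

def strip_bom(data: bytes) -> bytes:
    # Drop a leading UTF-8 BOM if present; leave all other bytes untouched.
    return data[len(BOM):] if data.startswith(BOM) else data

# Example: the first line of the broken file from the question.
sample = b'\xef\xbb\xbfUser-agent: *\r\n'
print(strip_bom(sample))  # b'User-agent: *\r\n'
```

To fix a file on disk, read it with `open(path, 'rb')`, pass the bytes through `strip_bom`, and write the result back with `open(path, 'wb')`.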

I tested this on the live version; here's the result, shown as raw bytes:

\xef\xbb\xbfUser-agent: *\r\nDisallow: /en/news/iranology/\r\nDisallow:
/en/tours-services/tour-a-whistle-stop-tour\r\nDisallow: /en/to

You can clearly see the BOM at the beginning. Browsers and text editors will ignore it, but it may interfere with a crawler's ability to parse the robots.txt. You can check the live version with this Python script:

import urllib.request

# Fill in your robots.txt URL here; .read() returns the raw response bytes
raw = urllib.request.urlopen('').read()
print(raw.startswith(b'\xef\xbb\xbf'))  # True when a UTF-8 BOM is present


If you're able to install Notepad++, its Encoding menu lets you convert the file and save it in a different encoding, including UTF-8 without BOM.