How to prevent search engines from crawling your website?

The answer is to create a robots.txt file in the root of your web directory and add the directives below to it.

User-agent: *
Disallow: /

You can read more about the Robots Exclusion Protocol, which defines how these directives are interpreted by crawlers.
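If you want to confirm what these directives actually block, here is a minimal sketch using Python's standard urllib.robotparser module; the example.com URLs are placeholders.

```python
# Sketch: verify that "User-agent: * / Disallow: /" blocks all crawlers,
# using Python's built-in robots.txt parser (no network access needed).
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Parse the rules directly instead of fetching them over HTTP.
rp.parse([
    "User-agent: *",
    "Disallow: /",
])

# Any user agent is denied access to any path under the site.
print(rp.can_fetch("Googlebot", "https://example.com/"))       # False
print(rp.can_fetch("*", "https://example.com/some/page.html"))  # False
```

Note that robots.txt is advisory: well-behaved crawlers honor it, but it is not an access control mechanism.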


Written by kurinchilamp



