Setting Up the Default Robots.txt File for Blogger
andrey · Wednesday, June 24, 2015

A few posts back I shared a tutorial on setting the search preferences in Blogger, and at the end of that post I promised to cover the Blogger robots.txt in an upcoming post. So today I bring you a complete tutorial on the custom robots.txt file. It is a simple text file (not HTML) that tells search engine bots which pages to index and which to skip. Search engine bots and other crawlers usually obey these instructions.
This file is an important factor in SEO: with its help you can block any of your blog's irrelevant URLs, such as label pages, demo pages, or any other page that does not need to be indexed by search engines.

Setting the Robots.txt File in Blogger


In Blogger, the default robots.txt file looks like this:
User-agent: Mediapartners-Google
Disallow:
User-agent: *
Disallow: /search
Allow: /
Sitemap: http://example.blogspot.com/atom.xml?redirect=false&start-index=1&max-results=500
First, I want to explain each term of this file, because it is very important to know what you are doing in your blog. If you make a mistake when adding this file, search engines can ignore your blog or reduce its ranking. Each term of the robots.txt file is explained below.
User-agent: Mediapartners-Google
Mediapartners-Google is the name of the Google AdSense crawler, and this rule is only useful for blogs that run Google AdSense. It helps AdSense display ads matched to your niche. If you are not using Google AdSense on your blog, you do not need to keep this line.
User-agent: *
When you mark User-agent with an asterisk (*), it means the rules that follow apply to all crawling bots, such as Googlebot, Bingbot, and the bots of other search engines and directories.
Disallow: /search
This rule blocks every URL that has the keyword search right after the domain name. In Blogger, all label and search-page URLs have this structure, so this one line blocks all label and search pages from crawling.
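For example, on a blog at example.blogspot.com (a placeholder address), this rule would block URLs such as:
http://example.blogspot.com/search/label/SEO
http://example.blogspot.com/search?q=robots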
Allow: / 
This rule refers to the homepage and invites the search engine spiders to crawl and index your blog's home page.
Sitemap:
The Sitemap line helps show your content to search engines: it invites the search bots to crawl your blog posts quickly, and it notifies the search engines when you make changes to the blog.
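The default feed URL covers the first 500 posts. If your blog has more than that, a common workaround (not something Blogger adds for you) is to list additional Sitemap lines with a shifted start index; example.blogspot.com is a placeholder:
Sitemap: http://example.blogspot.com/atom.xml?redirect=false&start-index=1&max-results=500
Sitemap: http://example.blogspot.com/atom.xml?redirect=false&start-index=501&max-results=500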

How to Disallow a Post or Page Using the Robots.txt File?

If you want to keep search engines from crawling a particular page or post, you can do it with the custom robots.txt file: under the User-agent: * line, add a Disallow: rule followed by the post or page URL without the domain name, like below.
For a post:
Disallow: /2015/04/your-post-url.html
For a page:
Disallow: /p/your-page-url.html
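Putting it together, a complete custom file that blocks one post and one page (the two URLs and the blog address below are placeholders for your own) would look like this:
User-agent: Mediapartners-Google
Disallow:
User-agent: *
Disallow: /search
Disallow: /2015/04/your-post-url.html
Disallow: /p/your-page-url.html
Allow: /
Sitemap: http://example.blogspot.com/atom.xml?redirect=false&start-index=1&max-results=500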

How to Generate a Robots.txt File?

After reading this tutorial, I hope you will be able to create a robots.txt file yourself. It is very easy, and you do not need to learn any special language or technique: you can write the file by hand, use an online robots.txt generator, or script it as sketched below.
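Here is a minimal sketch in Python that writes a Blogger-style robots.txt; the blog address and blocked paths are placeholder values you should replace with your own.

# Minimal sketch: build a Blogger-style robots.txt file.
# blog_url and blocked_paths are placeholders; replace them with your own.
blog_url = "http://example.blogspot.com"
blocked_paths = ["/search", "/p/your-page-url.html"]

lines = ["User-agent: Mediapartners-Google",  # AdSense bot; drop these two lines if you don't use AdSense
         "Disallow:",
         "",
         "User-agent: *"]  # all other crawlers
lines += ["Disallow: " + path for path in blocked_paths]
lines += ["Allow: /",  # let crawlers index the homepage
          "Sitemap: " + blog_url + "/atom.xml?redirect=false&start-index=1&max-results=500"]

with open("robots.txt", "w") as f:
    f.write("\n".join(lines) + "\n")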

Adding a Custom Robots.txt File in Blogger

Once you have created your robots.txt file, you can easily add it in Blogger (and verify the result afterward; see the check after these steps).
  • Go to your Blogger dashboard.
  • Select Settings > Search preferences > Crawlers and indexing > Custom robots.txt > Edit > Yes.
  • Paste your robots.txt code in it.
  • Click the Save changes button.
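After saving, you can confirm that the file is live and the rules behave as expected. A quick check using Python's standard-library robotparser (my own suggestion, not part of Blogger's UI; the address is a placeholder):

from urllib import robotparser

# Fetch the live robots.txt and test URLs against its rules.
rp = robotparser.RobotFileParser("http://example.blogspot.com/robots.txt")
rp.read()
print(rp.can_fetch("*", "http://example.blogspot.com/search/label/SEO"))  # expect False
print(rp.can_fetch("*", "http://example.blogspot.com/"))                  # expect True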
I hope that after reading this article you will be able to set a custom robots.txt in Blogger and understand its terms. This is a great way to get your blog indexed in search engines and bring in traffic, but I suggest again that you use this option with caution, because if you add anything wrong, search engines could ignore your blog. If you need more help, leave your question in the comments box below.
by Jillur Rahman

Jillur Rahman is a web designer. He enjoys making Blogger templates and always tries to make modern, 3D-looking templates. You can buy his templates on Themeforest.
