12/24/2009
Did you know that Google indexes well over 8 billion web pages?
Before these pages are placed in Google's "index", they're each crawled by a spider known as Googlebot.
Unfortunately, many webmasters don't know how this virtual robot works. In fact, Google uses a number of spiders to crawl the Web, and you can catch them at work by examining the log files we provide every month.
This article will look at some of the most important "Google spiders" and their purpose.
Let's start with Googlebot.
Googlebot
Googlebot is the search bot used by Google to scour the web for new pages. It has two versions: deepbot and freshbot. Deepbot is a deep crawler that tries to follow every link on the web and download as many pages as it can for the Google index. It also examines the internal structure of a site, giving the index a complete picture.
Freshbot, on the other hand, is a newer bot that crawls the web looking for fresh content. It was introduced to take some of the pressure off deepbot: it revisits pages already in the index and crawls them for new, modified, or updated content. In this way, Google is better equipped to keep up with the ever-changing Web.
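Both crawlers announce themselves in the user-agent field of your server's access log. As a rough sketch (the log lines and exact bot names here are assumptions; adjust them to your own server's log format), you could tally Google's visits like this:

```python
from collections import Counter

# Check more specific names first, since "Googlebot" is a substring
# of "Googlebot-Image".
GOOGLE_BOTS = ("Googlebot-Image", "Mediapartners-Google", "Googlebot")

def count_google_hits(log_lines):
    """Tally visits whose user-agent string names a Google spider."""
    counts = Counter()
    for line in log_lines:
        for bot in GOOGLE_BOTS:
            if bot in line:
                counts[bot] += 1
                break
    return counts

# Illustrative Apache-style log lines (not real traffic):
sample = [
    '66.249.66.1 - - [24/Dec/2009] "GET / HTTP/1.1" 200 "Mozilla/5.0 (compatible; Googlebot/2.1)"',
    '66.249.66.2 - - [24/Dec/2009] "GET /logo.gif HTTP/1.1" 200 "Googlebot-Image/1.0"',
    '10.0.0.5 - - [24/Dec/2009] "GET / HTTP/1.1" 200 "Mozilla/4.0"',
]
print(count_google_hits(sample))
```

Running this over a month of logs gives a quick picture of how often each spider is stopping by.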
This means that the more you update your web site with new, quality content, the more the Googlebot will come by to check you out.
If you'd like to see the Googlebot crawling around your web property more often, you need to obtain quality inbound links.
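One caveat when reading your logs: any visitor can claim to be Googlebot simply by spoofing its user-agent string. Google's recommended check is a reverse DNS lookup on the visitor's IP, followed by a forward lookup to confirm the match. A minimal sketch (real lookups require network access, so treat this as illustrative):

```python
import socket

def looks_like_google_host(host):
    """Genuine Googlebot hosts resolve under googlebot.com or google.com."""
    return host.endswith((".googlebot.com", ".google.com"))

def is_real_googlebot(ip):
    """Verify a claimed Googlebot visit: reverse-DNS the IP, check the
    domain, then forward-resolve the host to confirm it maps back."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)          # reverse lookup
    except socket.herror:
        return False
    if not looks_like_google_host(host):
        return False
    try:
        return ip in socket.gethostbyname_ex(host)[2]  # forward lookup
    except socket.gaierror:
        return False
```

The domain check alone is not enough, since an attacker can point reverse DNS for their own IP anywhere; the forward lookup closes that loophole.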

Connect with us or request a quote.

WEBPRO
Since 1994, WEBPRO has perfected Front Page Marketing that drives more qualified traffic!