How does Google rank a page without links?

Google’s Webmaster Help channel dealt with this question in a recent video.

When Google encounters a page without links, it judges the page by its content and the keywords it finds. The first instance of a keyword indicates the subject matter, and several repetitions confirm that the page is about that subject. However, there comes a point where the repetition is judged as keyword stuffing and has a negative effect. If the page covers a very niche topic or phrase, it has the potential to rank better for that non-competitive keyword than for a highly searched phrase.

Unnatural outbound links

Last week we got a webmaster email telling us that Google thinks we have been selling links on three of our blog posts. This was a surprise, as we had not participated in any link selling on these blogs, though we had accepted blog posts from MyBlogGuest.com. We had also noticed that our PageRank disappeared in the December PR update.

Actions to remove links

We could not go through three years of blog posts to remove links that we suspected were in contravention of Google's guidelines, nor could we nofollow each one individually. Since the blogs run on WordPress, our solution was to use a plugin to nofollow all external links.
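For anyone unfamiliar with the mechanism, what such a plugin effectively does is add a rel="nofollow" attribute to every outgoing anchor tag, which tells search engines not to pass ranking credit through the link. A minimal sketch, with a placeholder URL:

<a href="http://example.com/some-post" rel="nofollow">anchor text</a>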

WMT reconsideration request

Following the implementation of the nofollow plugin, we submitted a reconsideration request through Webmaster Tools explaining what we had done to address Google's concerns. We now wait and hope for positive feedback from big G.

Update: We see from this post on Search Engine Land that MyBlogGuest has recently been hit in the Google blog network clampdown. This is a surprise, since we were given to understand that guest blogging was an accepted practice.

Local Citation SEO boosts Visibility by 179%

An agency has shown that some basic Google Places optimization increased visibility by 179% across 315 businesses in the US.

The steps followed by the agency included:

  • Creating a custom business description for each location
  • Adding more content such as photos and videos
  • Removing duplicate listings on Google and other directories
  • Amending listings for long- and short-tail keywords

See the results reported at Search Engine Land.

Robots.txt file – Do I Need It?

A robots.txt file tells search engines which parts of your site not to crawl. It is a plain text file placed in the top-level directory of your site. It is useful for restricting parts of your site that you do not want crawled by certain search engines. Examples include:

Allow all crawlers

User-agent: *
Disallow:
Other examples include the following:

Disallow all crawlers

User-agent: *
Disallow: /

Restrict folder

User-agent: *
Disallow: /private/

Restrict file to all robots

User-agent: *
Disallow: /directory/file.html
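
The User-agent line can also name a single crawler rather than using the wildcard, so that the rule applies only to that bot; a minimal sketch, where Googlebot is a real crawler name but the /archive/ path is a placeholder:

Restrict a folder for one crawler

User-agent: Googlebot
Disallow: /archive/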

Some may wonder whether it is worth having a robots.txt file at all if you want to give search engine crawlers unrestricted access. Matt Cutts has answered in his weekly digest that it is useful to have the file even if all it does is disallow nothing.
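
If you do keep a wide-open file, it can still do useful work by pointing crawlers at your XML sitemap with the Sitemap directive, which the major search engines support; a minimal sketch, with a placeholder URL:

User-agent: *
Disallow:
Sitemap: http://www.example.com/sitemap.xml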
