Learn About Robots.txt with Interactive Examples

on Monday, January 7, 2013


One of the things that excites me most about the development of the web is the growth in learning resources. When I went to college in 1998, it was exciting enough to be able to search journals, get access to thousands of dollars' worth of textbooks, and download open source software. These days, technologies like Khan Academy, iTunesU, Treehouse and Codecademy take that to another level.
I've been particularly excited by the possibilities for interactive learning we see coming out of places like Codecademy. It's obviously most suited to learning things that look like programming languages - where computers are naturally good at interpreting the "answer" - which got me thinking about what bits of online marketing look like that.
The kinds of things that computers are designed to interpret in our marketing world are:
  • Search queries - particularly those that look more like programming constructs than natural language queries such as [site:distilled.net -inurl:www]
  • The on-site part of setting up analytics - setting custom variables and events, adding virtual pageviews, modifying e-commerce tracking, and the like
  • Robots.txt syntax and rules
  • HTML constructs like links, meta page information, alt attributes, etc.
  • Skills like Excel formulae that many of us find a critical part of our day-to-day job
I've been gradually building out codecademy-style interactive learning environments for all of these things for DistilledU, our online training platform, but most of them are only available to paying members. I thought it would make a nice start to 2013 to pull one of these modules out from behind the paywall and give it away to the SEOmoz community. I picked the robots.txt one because our in-app feedback is showing that it's one of the ones from which people learned the most.
Also, despite years of experience, I discovered some things I didn't know as I wrote this module (particularly about precedence of different rules and the interaction of wildcards with explicit rules). I'm hoping that it'll be useful to many of you as well - beginners and experts alike.

Interactive guide to Robots.txt

Robots.txt is a plain-text file found in the root of a domain (e.g. www.example.com/robots.txt). It is a widely-acknowledged standard and allows webmasters to control all kinds of automated consumption of their site, not just by search engines.
In addition to reading about the protocol, robots.txt is one of the more accessible areas of SEO since you can access any site's robots.txt. Once you have completed this module, you will find value in making sure you understand the robots.txt files of some large sites (for example Google and Amazon).
For each of the following sections, modify the text in the textareas and see them go green when you get the right answer.

Basic Exclusion

The most common use-case for robots.txt is to block robots from accessing specific pages. The simplest version applies the rule to all robots with a line saying User-agent: *. Subsequent lines contain specific exclusions that work cumulatively, so the code below blocks robots from accessing /secret.html.
Add another rule to block access to /secret2.html in addition to /secret.html.
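Assuming the starting file blocks only /secret.html, the completed rule set would look like this (each Disallow: line adds to the previous ones):

```
User-agent: *
Disallow: /secret.html
Disallow: /secret2.html
```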

Exclude Directories

If you end an exclusion directive with a trailing slash ("/") such as Disallow: /private/ then everything within the directory is blocked.
Modify the exclusion rule below to block the folder called secret instead of the page secret.html.
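Assuming the same starting file, the folder version of the rule reads as follows; note the trailing slash, which makes the pattern match everything inside the directory:

```
User-agent: *
Disallow: /secret/
```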

Allow Specific Paths

In addition to disallowing specific paths, the robots.txt syntax allows for allowing specific paths. Note that allowing robot access is the default state, so if there are no rules in a file, all paths are allowed.
The primary use for the Allow: directive is to over-ride more general Disallow: directives. The precedence rule states that "the most specific rule based on the length of the [path] entry will trump the less specific (shorter) rule. The order of precedence for rules with wildcards is undefined."
We will demonstrate this by modifying the exclusion of the /secret/ folder below with an Allow: rule allowing /secret/not-secret.html. Since this rule is longer, it will take precedence.
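The resulting file would look like this; the Allow: path is longer (more specific) than the Disallow: path, so it takes precedence for that one page:

```
User-agent: *
Disallow: /secret/
Allow: /secret/not-secret.html
```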

Restrict to Specific User Agents

All the directives we have worked with have applied equally to all robots. This is specified by the User-agent: * that begins our commands. By replacing the *, however, we can design rules that only apply to specific named robots.
Replace the * with googlebot in the example below to create a rule that applies only to Google's robot.
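Assuming the example file blocks /secret.html, the googlebot-only version would read:

```
User-agent: googlebot
Disallow: /secret.html
```

With this file in place, Google's crawler skips /secret.html, while every other robot is left with no rules at all and may crawl the whole site.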

Add Multiple Blocks

It is possible to have multiple blocks of commands targeting different sets of robots. The robots.txt example below will allow googlebot to access all files except those in the /secret/ directory and will block all other robots from the whole site. Note that because there is a set of directives aimed explicitly at googlebot, googlebot will entirely ignore the directives aimed at all robots. This means you can't build up your exclusions from a base of common exclusions. If you want to target named robots, each block must specify all its own rules.
Add a second block of directives targeting all robots (User-agent: *) that blocks the whole site (Disallow: /). This will create a robots.txt file that blocks the whole site from all robots except googlebot which can crawl any page except those in the /secret/ folder.
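The completed file would look like this, with a blank line separating the two blocks:

```
User-agent: googlebot
Disallow: /secret/

User-agent: *
Disallow: /
```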

Use More Specific User Agents

There are occasions when you wish to control the behavior of specific crawlers such as Google's Images crawler differently from the main googlebot. In order to enable this in robots.txt, these crawlers will choose to listen to the most specific user-agent string that applies to them. So, for example, if there is a block of instructions for googlebot and one for googlebot-images then the images crawler will obey the latter set of directives. If there is no specific set of instructions for googlebot-images (or any of the other specialist googlebots) they will obey the regular googlebot directives.
Note that a crawler will only ever obey one set of directives - there is no concept of cumulatively applying directives across groups.
Given the following robots.txt, googlebot-images will obey the googlebot directives (in other words, it will not crawl the /secret/ folder). Modify this so that the instructions for googlebot (and googlebot-news etc.) remain the same but googlebot-images has a specific set of directives meaning that it will not crawl the /secret/ folder or the /copyright/ folder:
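One possible solution is shown below. Note that the /secret/ rule has to be repeated in the googlebot-images block, because directives are never applied cumulatively across blocks:

```
User-agent: googlebot
Disallow: /secret/

User-agent: googlebot-images
Disallow: /secret/
Disallow: /copyright/
```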

Basic Wildcards

Trailing wildcards (designated with *) are ignored so Disallow: /private* is the same as Disallow: /private. Wildcards are useful however for matching multiple kinds of pages at once. The star character (*) matches 0 or more instances of any valid character (including /, ?, etc.).
For example, Disallow: news*.html blocks:
  • news.html
  • news1.html
  • news1234.html
  • newsy.html
  • news1234.html?id=1
But does not block:
  • newshtml (note the lack of a ".")
  • News.html (matches are case-sensitive)
  • /directory/news.html
Modify the following pattern to block only pages ending .html in the blog directory instead of the whole blog directory:
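Assuming the starting rule is Disallow: /blog/, a wildcard version that matches only the .html pages within the directory would be:

```
User-agent: *
Disallow: /blog/*.html
```

(Without a $ anchor, which is introduced below, this pattern also matches URLs such as /blog/post.html?page=2, since they still contain ".html".)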

Block Certain Parameters

One common use-case of wildcards is to block certain parameters. For example, one way of handling faceted navigation is to block combinations of 4 or more facets. One way to do this is to have your system add a parameter to all combinations of 4+ facets such as ?crawl=no. This would mean for example that the URL for 3 facets might be /facet1/facet2/facet3/ but that when a fourth is added, this becomes /facet1/facet2/facet3/facet4/?crawl=no.
The robots rule that blocks this should look for *crawl=no (not *?crawl=no because a query string of ?sort=asc&crawl=no would be valid).
Add a Disallow: rule to the robots.txt below to prevent any pages that contain crawl=no being crawled.
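A leading wildcard lets the rule match the parameter wherever it appears in the URL:

```
User-agent: *
Disallow: /*crawl=no
```

This blocks both /facet1/facet2/facet3/facet4/?crawl=no and a URL like /products?sort=asc&crawl=no, because the * absorbs everything before the crawl=no fragment.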

Match Whole Filenames

As we saw with folder exclusions (where a pattern like /private/ matches paths of files contained within that folder, such as /private/privatefile.html), by default the patterns we specify in robots.txt are happy to match only a portion of the filename and allow anything to come afterwards, even without explicit wildcards.
There are times when we want to be able to enforce a pattern matching an entire filename (with or without wildcards). For example, the following robots.txt looks like it prevents jpg files from being crawled but in fact would also prevent a file named explanation-of-.jpg.html from being crawled because that also matches the pattern.
If you want a pattern to match to the end of the filename, end it with a $ sign, which signifies "line end". For example, modifying an exclusion from Disallow: /private.html to Disallow: /private.html$ would stop the pattern matching /private.html?sort=asc and hence allow that page to be crawled.
Modify the pattern below to exclude actual .jpg files (i.e. those that end with .jpg).
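Anchoring the pattern with $ restricts it to URLs that actually end in .jpg:

```
User-agent: *
Disallow: /*.jpg$
```

Unlike a bare Disallow: /*.jpg, this no longer matches a page like /explanation-of-.jpg.html, because that URL does not end at the .jpg.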

Add an XML Sitemap

The last line in many robots.txt files is a directive specifying the location of the site's XML sitemap. There are many good reasons for including a sitemap for your site and also for listing it in your robots.txt file. You can read more about XML sitemaps here.
You specify your sitemap's location using a directive of the form Sitemap: <path>.
Add a sitemap directive to the following robots.txt for a sitemap called my-sitemap.xml that can be found at /my-sitemap.xml.
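The result would look something like this (the Disallow: line is just a placeholder for whatever rules the file already contains; in live files the sitemap location is normally given as an absolute URL):

```
User-agent: *
Disallow: /secret/

Sitemap: /my-sitemap.xml
```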

Add a Video Sitemap

In fact, you can add multiple XML sitemaps (each on their own line) using this syntax. Go ahead and modify the robots.txt below to also include a video sitemap called my-video-sitemap.xml that lives at /my-video-sitemap.xml.
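Each sitemap simply goes on its own line:

```
Sitemap: /my-sitemap.xml
Sitemap: /my-video-sitemap.xml
```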

What to do if you are stuck on any of these tests

Firstly, there is every chance that I've made a mistake and that my JavaScript tests fail to grade some correct solutions the right way. Sorry if that's the case - I'll try to fix them up if you let me know.
Whether you think you've got the answer right (but the box hasn't gone green) or you are stuck and haven't got a clue how to proceed, please just:
  1. Check the comments to see if anyone else has had the same issue; if not:
  2. Leave a comment saying which test you are trying to complete and what your best guess answer is
This will let me help you out as quickly as possible.

Obligatory disclaimers

Please don't use any of the robots.txt snippets above on your own site - they are illustrative only (and some would be a very bad idea). The idea of this post is to teach the general principles about how robots.txt files are interpreted rather than to explain the best ways of using them. For more of the latter, I recommend the following posts:
I hope that you've found something useful in these exercises whether you're a beginner or a pro. I look forward to hearing your feedback in the comments.

Google Panda Updates

on Sunday, December 23, 2012

Google Panda Updates:
December 21, 2012 Panda #23
November 21st Panda #22
November 5th Panda #21
September 27th Panda #20
September 18th Panda 3.9.2
August 20th Panda 3.9.1
July 24th Panda 3.9
June 25th Panda 3.8
June 9th Panda 3.7
April 27th Panda 3.6
April 19th Panda 3.5
March 23rd Panda 3.4
February 26th Panda 3.3
January 15th Panda 3.2
November 18th Panda 3.1
October 19/20th Panda 2.5.3
October 13th Panda 2.5.2
October 9th Panda 2.5.1
September 28th Panda 2.5
August Panda 2.4
July 22nd Panda 2.3
June 18th Panda 2.2
May 9th Panda 2.1
April 11th Panda 2.0
February 24th Panda 1.0

Need a Disposable Email Address? Try These Great Services

on Thursday, December 20, 2012


Disposable email addresses are not a new idea. In fact, they've been around almost as long as email has (well, almost). In case you've never used one, disposable emails are fake email addresses you can give to people you don't trust, to services you suspect of sharing your address, and in any situation where you don't want to use your real address. Depending on the service you use, you can either read emails received at this address on a Web interface, or receive them in your real inbox.
There are dozens of such services, and most are really easy to use. It’s just a matter of finding the one which offers the right features for you, and yes, there are all kinds of different features and perks to be had. So if you need a confidential disposable email address that you don’t want tracked, here are some excellent options you can try.
Important note: never use these services for sensitive information or for emails containing usernames and passwords you want to protect. While they are reasonably private, they are still not as secure as your regular email account.

GuerrillaMail

GuerrillaMail provides more than just a disposable email address. In fact, the email address you get from GuerrillaMail never expires, so it’s actually not disposable – it’s even better. To use GuerrillaMail, all you have to do is choose your inbox ID, which would be your email address, and you’re ready to go. You can have any username you want, and choose between seven different domains. When you receive an email to your new address, it will automatically appear on your email page. You can reply from within GuerrillaMail, and even attach files. GuerrillaMail acts as a spam filter too, so spam shouldn’t even reach your inbox, and all email is deleted within 1 hour.
Since your address lasts forever (unless you tell it to forget you), you can use it again and again. All you have to do is enter the same email to view your inbox, so anyone who has it can view it. GuerrillaMail solves this problem by using aliases when sending out emails, so your recipients can’t access your GuerrillaMail. The downside is that sometimes emails I sent out from GuerrillaMail simply never arrived.
In a nutshell: Read, reply and attach files, permanent address with alias, emails deleted in 1 hour.

FakeInbox

FakeInbox is slightly better looking than is common for these services, and offers fake email addresses that self-destruct after 60 minutes. All the provided addresses are in the same domain – fakeinbox.com – and you can either choose your own username or let the service generate a random one for you.  You can now give this address to anyone, and receive incoming emails on FakeInbox’s interface. You can also reply from here, and those are received instantly. A countdown at the bottom of the page shows you how much time you have left with this address. Done with it? You can delete it before your time is up. Want an extra 60 minutes? You can get that as well.
In a nutshell:  Read and reply, address and inbox deleted in 60 minutes (can extend).

MailCatch

MailCatch is very similar to GuerrillaMail, but with a cherry on top. To start, MailCatch makes it very clear that you can give out a mailcatch.com email address before you even access the website. Just think up some random username, and then point your browser at username.mailcatch.com to check your email. As with GuerrillaMail, all a person needs in order to access this mailbox is your username, and there are no aliases here, so you really need to take that into account. MailCatch is read only, meaning you can't reply to emails, only read them. What you can do, though, for 2€ a month, is set up forwarding or POP3 access to your MailCatch address, so you don't have to use the Web interface to see your emails. All emails on MailCatch are deleted after a short time (several hours to several days).
In a nutshell: Read only, permanent address (no aliases), emails deleted after several hours to days, unique URL for your inbox, premium users can set up forwarding and POP3.

AirMail

AirMail is a disposable email address service with some nice perks. Unlike other services, you can only generate random emails, with no control over what they’ll be. You can, however, generate one after the other until you find something you like. There seem to be at least 5 different domains in use. Once you find an address you like, you can give it to whoever you want, and receive the emails to AirMail’s interface. According to the AirMail website, you can view HTML emails, although I couldn’t see an image I sent myself. AirMail also claims to strip hidden codes from emails, thus preventing third parties from learning information about your browser and IP.
The nice thing about AirMail is that it feels almost like regular email. You get a sound notification upon arrival of a new email, and you can check your emails from any browser on any computer by copying your inbox’s unique URL. The address will last as long as you keep checking it. If you stop, everything will self destruct after 24 hours.
In a nutshell: Read only, address and inbox deleted after 24 hours, can access with unique URL, sound notifications, HTML viewer and code stripping.

EasyTrashMail

EasyTrashMail takes a slightly different approach to disposable emails, and for this one to work, you need to provide your real email address. On the upside, you don't have to use a clunky Web interface - all emails sent to your disposable address are automatically forwarded to your regular inbox. So how does it work? You provide the service with your real email address, and choose how much time you want your disposable address to exist. There are many options to choose from, ranging from 15 minutes to 1 month. Once you click on "Create" you receive a random email address at easytrashmail.com, and for as long as it's up, you'll receive anything sent to it directly in your inbox, without having to disclose your real address.
In a nutshell: need to provide real email, no Web interface, address can last from 15 minutes to 1 month.

Mailnesia

Aside from its clever name, Mailnesia comes with several other cool features. Mailnesia is similar to GuerrillaMail and MailCatch in that it never really expires, only deletes email, and you can access the inbox simply by having the username. Like GuerrillaMail, you can set an alias for your inbox, so you don’t have to go around giving access to everyone who sees your address. There are several things that set Mailnesia apart: it comes in nine different languages, full HTML support, and the best part: auto clicking on confirmation/activation emails, which are what you’d use this address for most commonly. This means that if you use this address to sign up to a service or a website, Mailnesia will automatically click the confirmation link for you, without you doing anything. Pretty cool!
In a nutshell: Read only, permanent address with alias, emails deleted after several hours to days, comes in 9 languages, auto clicking on confirmation emails.

Some more services are

http://www.mailinator.com/
http://e4ward.com/
http://www.spaml.de/
http://www.tempinbox.com/
http://notsharingmy.info/
http://www.yopmail.com/en/
http://hidemyass.com/
http://www.bugmenot.com/
http://10minutemail.com/10MinuteMail/index.html
http://www.fakemailgenerator.com/

The Link building finding strategy

on Thursday, December 13, 2012

Once you have identified your keywords, you will want to pair them with prospecting phrases. These are searches to use in Google or Bing to find relevant resource and links pages like "intitle:resources" or "inurl:links." Below is a list of prospecting phrases you can use to help find relevant linking pages.

site:.gov
links
resources
intitle:links
intitle:resources
intitle:sites
intitle:websites
inurl:links
inurl:resources
inurl:sites
inurl:websites
"useful links"
"useful resources"
"useful sites"
"useful websites"
"recommended links"
"recommended resources"
"recommended sites"
"recommended websites"
"suggested links"
"suggested resources"
"suggested sites"
"suggested websites"
"more links"
"more resources"
"more sites"
"more websites"
"favorite links"
"favorite resources"
"favorite sites"
"favorite websites"
"related links"
"related resources"
"related sites"
"related websites"
intitle:"useful links"
intitle:"useful resources"
intitle:"useful sites"
intitle:"useful websites"
intitle:"recommended links"
intitle:"recommended resources"
intitle:"recommended sites"
intitle:"recommended websites"
intitle:"suggested links"
intitle:"suggested resources"
intitle:"suggested sites"
intitle:"suggested websites"
intitle:"more links"
intitle:"more resources"
intitle:"more sites"
intitle:"more websites"
intitle:"favorite links"
intitle:"favorite resources"
intitle:"favorite sites"
intitle:"favorite websites"
intitle:"related links"
intitle:"related resources"
intitle:"related sites"
intitle:"related websites"
inurl:"useful links"
inurl:"useful resources"
inurl:"useful sites"
inurl:"useful websites"
inurl:"recommended links"
inurl:"recommended resources"
inurl:"recommended sites"
inurl:"recommended websites"
inurl:"suggested links"
inurl:"suggested resources"
inurl:"suggested sites"
inurl:"suggested websites"
inurl:"more links"
inurl:"more resources"
inurl:"more sites"
inurl:"more websites"
inurl:"favorite links"
inurl:"favorite resources"
inurl:"favorite sites"
inurl:"favorite websites"
inurl:"related links"
inurl:"related resources"
inurl:"related sites"
inurl:"related websites"
list of links
list of resources
list of sites
list of websites
list of blogs
list of forums

25 Best Free Classifieds Websites to post Free Ads Online

on


If you are looking for websites to post ads on, or if you are starting a small advertising campaign, then you should go through the list below. The classified websites listed below will not charge you anything to post ads, and you don't even have to register. Your ads will be approved instantly and will be live on the internet in no time.

These are all global websites, with a few exceptions, and all are highly trusted, used and appreciated by internet users around the world. Whether you are an ad seeker, advertiser, online affiliate marketer, blogger or anyone else interested in posting an ad, you can't afford to miss the websites mentioned below.


With no registration or signup required, and total control over your ads, you can kick-start your ad campaign and reach the right audience by selecting a country and region; you can also put your ad into the correct category to maximize its visibility.

1. Craigslist - No.1 Classified Ad Service in the World.
2. Quikr - Post Interesting Ads with Pictures for anything.
3. Gumtree - UK based Ad service for everything.
4. ClickIndia - India Based Ad service.
5. OLX - Buy and Sell anything you want. Put Ads for anything.
6. eBayClassifieds - eBay's Classifieds.
7. Adsglobe - India Based, Merchandise, Autos, Real Estate Ads etc.
8. Oodle - Post Ads for Cars, Homes and Rentals.
9. Sell - A MarketPlace to put up Ads and sell.
10. ClassifiedAds - Items for Sale, Personal, Pets, Homes etc.
11. Click.in - Real Estate, Property, Jobs, Education Ads etc.
12. IndiaList - Real Estate, Pets, Automobile Ads.
13. Locanto - Ads for jobs, housing, Dating etc.
14. Global Free Classified Ads - Ads-Sale for everything.
15. Classifieds Live - Buy and Sell anything here.
16. Bechna - For selling Real Estate Properties, Travel Offers, Community Offers etc.
17. Khojle - Cars and Bikes Ads, Jobs, Matrimonial Ads etc.
18. Adpost - Furniture, Music Instruments, Clothing, Travel Ads etc.
19. Adoos - Vehicles, Jobs, Real Estate and Community Ads.
20. Vivastreet - Buy Sell Ads, Vehicles, Pets, Jobs etc.
21. WebIndia123 - Global Service for Ads, Yellow Pages, etc
22. iNetGiant - Vehicles, Pets, Services, Jobs, Property etc.
23. Vast - Classified Ads for Cars.
24. Yakaz - Local Ads for Sale, Services, Rentals etc
25. Daype - Best suited for US locals.

We hope this list will help you. Select the one that suits you best, choose a category and post your classified ad - or simply put your ad on all of these websites; after all, it's free and doesn't require signup or registration.

Social bookmarking sites list

on Wednesday, December 12, 2012

Social bookmarking sites list with high PageRank


  1. http://www.delicious.com
  2. http://www.chime.in
  3. http://www.digg.com
  4. http://www.del.icio.us
  5. http://www.connotea.org
  6. http://www.reddit.com
  7. http://www.citeulike.org
  8. http://www.slashdot.org
  9. http://www.technorati.com
  10. http://www.stumbleupon.com
  11. http://www.google.com/bookmarks/
  12. http://www.blinklist.com
  13. http://www.bibsonomy.org
  14. http://www.popurls.com
  15. http://www.librarything.com
  16. http://www.in.gr
  17. http://www.newsvine.com
  18. http://www.fark.com
  19. http://www.diigo.com
  20. http://www.nowpublic.com
  21. http://www.folkd.com
  22. http://www.tagza.com
  23. http://www.jumptags.com
  24. http://www.hotklix.com
  25. http://www.designfloat.com
  26. http://www.segnalo.alice.it
  27. http://www.disinfo.com
  28. http://www.tags.library.upenn.edu
  29. http://www.icerocket.com
  30. http://www.scoopit.co.nz
  31. http://www.gather.com
  32. http://www.metafilter.com
  33. http://www.pusha.se
  34. http://www.faves.com
  35. http://www.sphinn.com
  36. http://www.meneame.net
  37. http://www.youmob.com
  38. http://www.smallbusinessbrief.com
  39. http://www.kwoff.com
  40. http://www.dzone.com
  41. http://www.care2.com
  42. http://www.kuro5hin.org
  43. http://www.ekudos.nl/
  44. http://www.bizsugar.com
  45. http://www.webshare.co
  46. http://www.social-bookmarking.net
  47. http://www.a1-webmarks.com
  48. http://www.jeteye.com
  49. http://www.ikeepbookmarks.com
  50. http://www.shetoldme.com
  51. http://www.netvouz.com
  52. http://www.wirefan.com
  53. http://www.linkagogo.com
  54. http://www.fwisp.com
  55. http://www.colivia.de
  56. http://www.clipclip.org
  57. http://www.panodigg.com
  58. http://www.fizzy.com/
  59. http://www.diglog.com
  60. http://www.swik.net
  61. http://www.diggita.it
  62. http://www.url.org
  63. http://www.superuse.org
  64. http://www.designbump.com
  65. http://www.newsmeback.com
  66. http://www.dropjack.com
  67. http://www.it.toolbox.com/blogs/it-blogs/
  68. http://www.bookmarks.oneindia.in
  69. http://www.buddymarks.com
  70. http://www.topix.lt
  71. http://www.caitax.net
  72. http://www.b2blocal.net
  73. http://www.babblestorm.co.uk
  74. http://www.unalog.com
  75. http://www.linkswarm.com
  76. http://www.dotpoch.com
  77. http://www.cloudytags.com
  78. http://www.spletarna.net
  79. http://www.worldpharmanews.net
  80. http://www.medical-articles.net
  81. http://www.bookmarks.excite.co.uk
  82. http://www.bookmark-manager.com
  83. http://www.mylinkvault.com
  84. http://www.wists.com
  85. http://www.blogospherenews.com
  86. http://www.mystickies.com
  87. http://www.corank.com
  88. http://www.zoomit.ca
  89. http://www.web.smashits.com
  90. http://www.myhq.com
  91. http://www.sitejot.com
  92. http://www.shoutwire.com
  93. http://www.malaysiastory.com
  94. http://www.oyax.com
  95. http://www.design.fr
  96. http://www.de.lirio.us
  97. http://www.marathikatta.com
  98. http://www.dealigg.com
  99. http://www.dnhour.com
  100. http://www.stylid.org
  101. http://www.digg.design.fr
  102. http://www.pfbuzz.com
  103. http://www.dictio.es
  104. http://www.phigita.net
  105. http://www.fcc.com
  106. http://www.hobbymedia.it
  107. http://www.blogengage.com/
  108. http://www.bibsonomy.org
  109. http://www.shadows.com/
  110. http://www.blinklist.com/
  111. http://www.spurl.net/
  112. http://del.icio.us/
  113. http://www.netvouz.com/
  114. http://de.lirio.us/
  115. http://www.diigo.com/
  116. http://www.hanzoweb.com
  117. http://www.furl.net/
  118. http://wink.com/
  119. http://www.digg.com/
  120. http://www.markaboo.com
  121. http://www.ekudos.nl/
  122. http://clipclip.org/
  123. http://www.tallstreet.com/
  124. http://www.goesby.com/
  125. http://www.portachi.com/
  126. http://www.otavo.com/
  127. http://www.kaboodle.com/
  128. http://www.wirefan.com/
  129. http://www.wists.com/
  130. http://www.reddit.com/
  131. http://www.linkatopia.com/
  132. http://www.optimiz.us/
  133. http://www.linklog.nl/
  134. http://www.citeulike.org/
  135. http://www.getboo.com/
  136. http://www.tabmarks.com/
  137. http://www.zurpy.com/
  138. http://www.indiamarks.com/
  139. http://clipmarks.com/
  140. http://www.meme-stream.com/
  141. http://www.linkagogo.com/
  142. http://www.bookkit.com/
  143. http://segnalo.alice.it/
  144. http://www.lilisto.com/
  145. http://www.eigology.com/***ear/
  146. http://h2obeta.law.harvard.edu
  147. http://www.bmaccess.net/
  148. http://www.urlex.info/
  149. http://www.web-feeds.com
  150. http://www.librarything.com/
  151. http://www.rojo.com/
  152. http://www.linkblog.com.br
  153. http://buddymarks.com/
  154. http://www.oyax.com/
  155. http://ez4u.net/
  156. http://ww2.ikeepbookmarks.com/
  157. http://www.mobleo.net/
  158. http://www.startaid.com
  159. http://www2.myhq.com/
  160. http://www.zoogim.com/
  161. http://www.mylinkvault.com/
  162. http://bwsmith.com
  163. http://www.mypip.com/home/
  164. http://aboogy.com
  165. http://www.putvote.com
  166. http://www.searchles.com/
  167. http://www.shoppersbase.com/
  168. http://www.sitejot.com/
  169. http://www.technorati.com
  170. http://socialbookmarking.org/
  171. http://www.ambedo.com/
  172. http://www.netscape.com/
  173. http://www.icio.de/
  174. http://newskicks.com/submit.php
  175. http://www.fark.com
  176. http://www.bookmarktracker.com/
  177. http://rollyo.com/
  178. http://socialbookmarkingsite.com/
  179. http://www.pixelmo.com/
  180. http://www.butterflyproject.nl/
  181. http://myseoblog.net/2008/02/26/7/
  182. http://yoorl.com