Sunday, July 30, 2006

Deep Linking from RSS

One of FeedJournal's more distinctive and perhaps controversial features is that it can filter out the meat of an article published on the web. How does it accomplish this? FeedJournal has four ways of retrieving the actual content for the next issue:
1. Actual Content
2. Linked Content
3. Rewritten Link
4. Filtered Content

By applying these functions it is possible to scoop, or extract, the meat of almost any web-published article. Of course, this only needs to be done once per feed. To my knowledge, FeedJournal is the only aggregator that offers the functionality described in the last three modes.

Is this legal, you ask? Wouldn't a site owner require each user to actually visit the web site to read the content and click on all those fancy ads sprinkled all over? Well, my stance is that if the content is freely available on the web, I am free to do whatever I want with it for my own purposes. Keep in mind that we are not actually republishing the site's content; we are only filtering it for our own use. Essentially, I think of this as a pop-up or ad blocker running in your browser.

What is interesting to note is that some web sites have tried to include in their copyright notice a paragraph limiting the usage of their content. Digg.com, for example, initially had a clause in its copyright notice effectively prohibiting RSS aggregators from using its RSS feeds! That clause has since been removed. As long as FeedJournal is used for personal use, and the issues are not sold or made available publicly, I do not see any legal problems with deep linking.
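The post names the four modes but not their mechanics, so here is a minimal sketch of how such a dispatch might look. All names, signatures, and the assumed semantics of each mode (feed text as-is, fetch the link, rewrite the link before fetching, fetch then extract) are hypothetical illustrations, not FeedJournal's actual implementation:

```python
from enum import Enum
from urllib.request import urlopen


class RetrievalMode(Enum):
    ACTUAL_CONTENT = 1    # use the text embedded in the feed item itself
    LINKED_CONTENT = 2    # fetch the full page the item links to
    REWRITTEN_LINK = 3    # transform the link (e.g. to a print version), then fetch
    FILTERED_CONTENT = 4  # fetch the linked page, then extract only the article body


def retrieve(item, mode, rewrite=None, extract=None):
    """Return the article text for one feed item under the given mode.

    `item` is a dict with the usual RSS fields ("description", "link");
    `rewrite` is a URL-transforming callable, `extract` a body-filtering one.
    """
    if mode is RetrievalMode.ACTUAL_CONTENT:
        return item["description"]

    url = item["link"]
    if mode is RetrievalMode.REWRITTEN_LINK and rewrite is not None:
        url = rewrite(url)

    html = urlopen(url).read().decode("utf-8", errors="replace")

    if mode is RetrievalMode.FILTERED_CONTENT and extract is not None:
        return extract(html)
    return html
```

Configured once per feed, a call like `retrieve(item, RetrievalMode.FILTERED_CONTENT, extract=my_scoop_filter)` would then yield just the article body for every new item in that feed.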
Posted by Jonas Martinsson at 19:51
Edited on: Sunday, July 30, 2006 20:06
Categories: Made In Express Contest, My products