Feedster won’t be bought [updated]
**I updated the story, as it was written very late last night and didn’t make sense in some parts. You should re-read.**
First off, I don’t have anything against Feedster. Scott’s a great guy. But I want to dispel some talk that Feedster will be bought.
Dave seems to think Google or Yahoo will snap Feedster up. “Isn’t it obvious that either Google or Yahoo will buy Feedster so their search engine can understand RSS.”
OK, let’s think this through. Why would corporations with multi-billion-dollar market caps need to buy a little PHP code to learn how to parse standard XML? None of that code could be used in a production environment. Feedster is already slow, and it’s only getting a handful of searches a day.
With the number of PhDs they have floating around the Googleplex, an enterprise-level parser could be written in days. Google already knows how to search, and they can scale like nobody’s business.
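To put the “days” claim in perspective, here’s a minimal sketch of basic RSS 2.0 parsing using nothing but the Python standard library. The feed URL is hypothetical, and this is a toy, not the enterprise-level version:

```python
# A toy RSS 2.0 parser built on the Python standard library alone.
# The point is scale: the parsing itself is a handful of lines.
import urllib.request
import xml.etree.ElementTree as ET

def fetch_items(feed_url):
    """Download a feed and yield (title, link, description) tuples."""
    with urllib.request.urlopen(feed_url) as response:
        tree = ET.parse(response)
    # RSS 2.0 nests <item> elements under <channel> inside the root <rss>.
    for item in tree.getroot().iter("item"):
        yield (
            item.findtext("title", default=""),
            item.findtext("link", default=""),
            item.findtext("description", default=""),
        )

# Hypothetical feed URL -- substitute any real RSS 2.0 feed.
for title, link, _ in fetch_items("http://example.com/index.xml"):
    print(title, "->", link)
```

Error handling, encodings, and malformed feeds are where the real work is, but the format itself is just XML.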
Blogger was bought because it had a million users. Feedster is kick-ass, but it’s only had 623,925 searches (not users) since it debuted. Still popular, but in web land that’s not that impressive.
The next logical argument is, “But Feedster has a ton of posts in its database… that’s why Google wants it!” So? The exact number (at this time) is 3,309,619 posts from 64,468 feeds. That’s a hell of a lot of content, but compared to what’s out there it’s nothing. Technorati tracks 681,879 blogs, more than ten times the number of feeds Feedster watches. What do you bet Google could get 750,000 feeds in a few wide scrapes that no one besides them can afford to do?
But here is the real problem: what good is RSS? It’s killer for weblogs, but not much else. Sites like C|Net, MacMinute and Slashdot don’t want to give away their content without advertising, and so far RSS hasn’t handled ads well. So basically what RSS would give Google is a listing of “what’s new.” Isn’t that what Freshbot does? The beauty of Freshbot is that it doesn’t rely on anything but the page not 404’ing. Not on some worthless spec. Not on someone including the full post. Not on someone making sure to use “guids” because “link” is so ’90s.
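For what it’s worth, the guid is there so an aggregator can tell a genuinely new item from an edited one whose link hasn’t changed. A sketch of that dedup logic, assuming items already parsed into plain dicts of element text (the dict keys are illustrative, not any real library’s API):

```python
def item_id(item):
    """Identity for deduping: prefer guid, fall back to link, then title.

    Assumes each item is a plain dict of element text; the keys here
    are illustrative, not a real library's API.
    """
    return item.get("guid") or item.get("link") or item.get("title")

def new_items(items, seen):
    """Yield only items whose id hasn't been seen; updates `seen` in place."""
    for item in items:
        key = item_id(item)
        if key is not None and key not in seen:
            seen.add(key)
            yield item
```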
About the only thing using RSS would save is [a little] bandwidth. Think about it this way: everything that is in an RSS feed is on a web page, but not everything on a web page is in an RSS feed. The whole idea of an RSS search engine is a joke. If you *really* have to find out within the hour how Mrs. Johnson’s cat is doing, just subscribe to her feed. Since the end user cares only about getting the info, and RSS isn’t human-readable (for the end user, anyway), sucking down RSS feeds is a waste of time. This weblog was indexed in Google within 24 hours. That’s faster than Technorati AND Feedster.
Googlebot sucks down RDF already. Don’t believe me? Check your logs. You freaking Google bashers are going to point out, “See, they don’t know how to handle RSS, you bastard,” but I think it’s just phase 1. They already index it; the next step is to change some search algorithms and figure out what to do with it.
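If you want to check for yourself, something like this quick scan will do it. The log path and feed extensions are assumptions (Apache combined format at the usual location); adjust for your own server:

```python
# Scan an Apache combined-format access log for Googlebot requests
# to feed files. Log path and feed extensions are assumptions --
# adjust for your own server.
FEED_SUFFIXES = (".rdf", ".rss", ".xml")

with open("/var/log/apache2/access.log") as log:
    for line in log:
        if "Googlebot" not in line:
            continue
        parts = line.split('"')
        if len(parts) > 1:
            # parts[1] is the quoted request line: 'GET /index.rdf HTTP/1.0'
            request = parts[1].split()
            if len(request) >= 2 and request[1].endswith(FEED_SUFFIXES):
                print(line.rstrip())
```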
For the record, I hope Feedster does get bought. Scott and Co. deserve some cash (they work like dogs). I bought an ad to show my support (so stop typing that hate mail). The ad has been pretty much a donation to Feedster, as I’ve gotten 1 click out of 4,500 views, but it’s a donation I don’t mind giving.
A service like Feedster fills a need when you want to read about a topic on any weblog that discusses it. I’ve been catching up on JUnit lately for a magazine column, and the easiest way to get current was to subscribe to Feedster’s RSS feed for the search term “JUnit.” I ended up reading around three dozen weblog entries from programmers and others with opinions or news about JUnit, none of whom I was previously subscribed to in an aggregator.
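That workflow is a few lines with Mark Pilgrim’s Universal Feed Parser. The Feedster search-feed URL below is illustrative, not the verified format; check the site for the real one:

```python
# Poll a search-term feed with the Universal Feed Parser
# (pip install feedparser). The Feedster URL is illustrative --
# check the site for the real search-feed format.
import feedparser

feed_url = "http://feedster.com/search.php?q=JUnit&type=rss"  # hypothetical
feed = feedparser.parse(feed_url)

for entry in feed.entries:
    # Each matching weblog post appears as an entry, whether or not
    # you subscribe to the source blog itself.
    print(entry.get("title", "(untitled)"), "->", entry.get("link", ""))
```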
It’s also more granular than a page-based search engine like, uh, the rest of them.
What’s granular in a search engine?