A Technical Solution to the RSS Problem

My RSS post generated several interesting emails, so for anyone interested, I've decided to post the idea here. I don't know if it is technically feasible just yet. That is what I was trying to get funding to investigate. It is certainly difficult either way, but I honestly believe I could make it work. I've always tried to keep ideas like this to myself, but I've got enough good ones now to let some go. And I think it's going to be a while before there is big money in RSS. By the way, if you are from Google, I'd love to work on fixing search in general with this method :-)

There are two problems with RSS. 1) I subscribe to something I think I will like, for instance Forbes management news, and they put out 8 articles a day, of which only 1 or 2 interest me. Multiply the other 6 or 7 by 200 feeds and I am still wading through lots of crap. 2) There may be a feed I don't want to subscribe to, like Ladies Home Journal, which may still put out 2-3 articles a year that interest me. I would like those articles to make it to me. If you can solve those two problems, you have built a great RSS reader.

My solution was to build a society of autonomous agents. The agents will each have very simple roles. A handful of major agents harvest links and sort them out to the next level of agents, which specialize in broad categories. So for instance there is an agent that carries all the political posts, regardless of which feed they came through. There is an agent that carries all the business feeds, etc. etc. At every level, the agents get more and more specialized, and their number increases. At the bottom, right before the user-agent level, you have agents that contain feeds on very specialized topics, regardless of where they came from, and many feeds are carried by more than one agent at this level.
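Here's a minimal sketch of what that hierarchy might look like. Everything here is invented for illustration: the post doesn't say how an agent decides what a link is about, so simple keyword matching stands in for whatever classifier the real agents would use, and the category names and keyword lists are made up.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    title: str
    body: str
    source_feed: str

@dataclass
class CategoryAgent:
    name: str
    keywords: set
    children: list = field(default_factory=list)  # more specialized agents
    posts: list = field(default_factory=list)

    def accepts(self, post):
        text = (post.title + " " + post.body).lower()
        return any(k in text for k in self.keywords)

    def route(self, post):
        # Keep the post, then pass it down to every matching child agent.
        # A post can travel down more than one branch of the hierarchy.
        self.posts.append(post)
        for child in self.children:
            if child.accepts(post):
                child.route(post)

# Two broad agents, one with a more specialized child underneath it.
business = CategoryAgent("business", {"market", "earnings", "management"})
business.children.append(CategoryAgent("small-business", {"startup", "small business"}))
politics = CategoryAgent("politics", {"senate", "election", "policy"})

top_level = [business, politics]

def harvest(post):
    # The harvester hands each link to every broad agent that wants it,
    # regardless of which feed the link came from.
    for agent in top_level:
        if agent.accepts(post):
            agent.route(post)
```

The key property is that routing ignores the source feed entirely: a startup article from Forbes and one from Ladies Home Journal would land in the same specialized agent.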

Each user has an agent that learns about him/her. Every post that is passed to the user can be rated on a scale of 1-10, if the user chooses. Over time this agent learns what the user likes.

The agent learns where to go to get feeds that match the user's keywords and feed subscriptions, but also has a "wander" function such that it spends spare time interfacing with other agents at various levels, including the agents of other users, to see if it finds anything the user might like. It learns over time which agents to talk with to get the best information.
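The post only says the user agent "learns what the user likes" from 1-10 ratings, without specifying a model. As a hypothetical sketch, one of the simplest possibilities is a running average rating per keyword and per source feed; everything below (the class, the midpoint default) is an assumption, not the author's design.

```python
from collections import defaultdict

class UserAgent:
    """Learns a user's tastes from the 1-10 ratings they give posts."""

    def __init__(self):
        # keyword or feed name -> list of ratings seen so far
        self.ratings = defaultdict(list)

    def rate(self, post_keywords, source_feed, score):
        # score is the user's 1-10 rating of one post; credit it to
        # every keyword the post carried and to the feed it came from.
        for key in list(post_keywords) + [source_feed]:
            self.ratings[key].append(score)

    def affinity(self, key):
        # Average rating for this keyword or feed; 5.5 (the midpoint
        # of the 1-10 scale) for anything the user hasn't rated yet.
        r = self.ratings.get(key)
        return sum(r) / len(r) if r else 5.5

agent = UserAgent()
agent.rate({"management"}, "forbes", 9)
agent.rate({"celebrity"}, "lhj", 2)
```

An agent like this could drive the "wander" function too: when visiting another agent, it asks for posts whose keywords have high affinity scores.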


All the links the agent finds are put on a list and given an initial ranking of 1. A whole other society of agents exists whose members do nothing but scroll through a group of users and adjust the ranking of each article. So for instance, an agent comes along and says "hmmm… this contains a lot of business related words, and rob_business tends to like articles/posts like that, so I'm going to increase the score of this article by 1." Another agent may say "this is from a site that rob_business typically doesn't like, even when they post about topics he does like, so I'm going to decrease its score by 1." Another agent may come along and say "70% of our system users have read this article, and rob_business likes to see articles that are popular, so I'm going to increase the score of this by 2.1." Then add in a function that causes the score of each article to naturally decay over time, because older news is less relevant and interesting. At any time, I can log in to the site and view my top 20 (or however many I want) ranked links, and they should be different than everyone else's top 20. Rather than having a complex algorithm, it's the culmination of the votes of many simple agents that determines what the score of a post is.
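The three example agents above, plus decay, can be sketched as small scoring functions. The specific rules mirror the post's examples, but the data layout, the exponential decay, and the 24-hour half-life are all assumptions made for illustration.

```python
import math

def keyword_agent(article, user):
    # "+1 if this contains topics the user tends to like"
    return 1.0 if article["topic"] in user["liked_topics"] else 0.0

def source_agent(article, user):
    # "-1 if it's from a site the user typically doesn't like"
    return -1.0 if article["source"] in user["disliked_sources"] else 0.0

def popularity_agent(article, user):
    # Popular articles get a boost proportional to their read share,
    # but only for users who like seeing what's popular
    # (e.g. 70% read share -> +2.1, matching the post's example).
    return 3.0 * article["read_fraction"] if user["likes_popular"] else 0.0

SCORING_AGENTS = [keyword_agent, source_agent, popularity_agent]

def score(article, user, age_hours, half_life_hours=24.0):
    s = 1.0  # every harvested link starts with a ranking of 1
    for agent in SCORING_AGENTS:
        s += agent(article, user)
    # Exponential decay: the score halves every half_life_hours,
    # so older news naturally sinks down the list.
    return s * math.exp(-math.log(2) * age_hours / half_life_hours)
```

No single function here is smart; the ranking comes out of summing many small, independent votes, which is the point of the design.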

I think this can be applied in a broad sense to search in general. The problem with web searches is that you want the information right then. This society of agents can be given time to work, because I may only check my feeds once or twice a day. My hope is that they learn to pass around links the way people do with each other. For instance, the way I recommend a book or movie to a friend with similar interests, or tell my sister that she won't like a certain band that I like because I know her tastes.

There are two problems with this approach. First, it could turn out to be very computationally intensive to have all these agents running around. Or maybe not. I don't know. The second problem is the more general problem with this type of distributed A.I. solution. People don't like them because we tend to think top-down and this is a bottom-up design. So we don't really know how successful it will be at the top level until we try it. My hope is to discover that personalization is an emergent property of this agent-based society.