
A recent comment here mentioned search in early browsers (1991ish). The browser would fetch all links from the current page, n levels deep, in the background, and use that to build a local index.
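Roughly, that background crawl could look like the following (a minimal Python sketch using only the standard library; the fixed depth limit, the word-level inverted index, and the example.com start URL are my own assumptions, not how the early browsers actually did it):

    import re
    from collections import defaultdict
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkAndTextParser(HTMLParser):
        """Collects href targets and visible text from one HTML page."""
        def __init__(self):
            super().__init__()
            self.links, self.text = [], []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href")
                if href:
                    self.links.append(href)
        def handle_data(self, data):
            self.text.append(data)

    def crawl(start_url, depth=2):
        """Fetch start_url and linked pages up to `depth` levels deep,
        building a local inverted index: word -> set of URLs."""
        index, seen, frontier = defaultdict(set), set(), [(start_url, 0)]
        while frontier:
            url, level = frontier.pop(0)
            if url in seen or level > depth:
                continue
            seen.add(url)
            try:
                html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
            except Exception:
                continue
            parser = LinkAndTextParser()
            parser.feed(html)
            for word in re.findall(r"[a-z0-9]+", " ".join(parser.text).lower()):
                index[word].add(url)
            for href in parser.links:
                frontier.append((urljoin(url, href), level + 1))
        return index

    index = crawl("https://example.com", depth=1)
    print(sorted(index.get("example", ())))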

I wonder if something like that could work today, only with the index being shared across the user base.

The benefit would be that it’s a decentralized system. No giant infrastructure that needs to be paid for by a big corporation; the infrastructure needs would basically be outsourced to millions of devices. And for websites, users and crawlers would be the same thing, which is to say you cannot block one without also blocking the other.
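One way the shared index could work is a DHT-style scheme: every client hashes a term to decide which peer is responsible for its postings, and peers merge whatever different users' crawls contribute. A rough sketch, with the peer ids and the hashing choice purely illustrative:

    import hashlib
    from collections import defaultdict

    def responsible_peer(term, peers):
        """DHT-style assignment: hash the term and pick a peer deterministically,
        so every client agrees on who stores which postings."""
        digest = int(hashlib.sha256(term.encode()).hexdigest(), 16)
        return peers[digest % len(peers)]

    def merge_postings(local, remote):
        """Union the posting lists contributed by different users' crawls."""
        merged = defaultdict(set, {t: set(urls) for t, urls in local.items()})
        for term, urls in remote.items():
            merged[term] |= set(urls)
        return merged

    peers = ["peer-a", "peer-b", "peer-c"]          # hypothetical node ids
    print(responsible_peer("decentralized", peers)) # same answer on every client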

It could also add feedback mechanisms: active ones, such as commenting on pages and discussing them as we do on HN, but also passive ones, such as tracking how long the user interacted with a page, to score the value of pages/domains and improve the ranking algorithm.
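The passive signal could be as simple as an exponential moving average of dwell time per URL, used as a ranking boost. A sketch, with the smoothing factor and the cap on counted seconds being arbitrary choices:

    from collections import defaultdict

    class DwellTimeScorer:
        """Passive feedback: keep an exponential moving average of how long
        users stay on each page and use it to order results."""
        def __init__(self, alpha=0.2, cap=300.0):
            self.alpha, self.cap = alpha, cap
            self.score = defaultdict(float)
        def record_visit(self, url, seconds_on_page):
            s = min(seconds_on_page, self.cap) / self.cap   # normalise to 0..1
            self.score[url] = (1 - self.alpha) * self.score[url] + self.alpha * s
        def rank(self, urls):
            return sorted(urls, key=lambda u: self.score[u], reverse=True)

    scorer = DwellTimeScorer()
    scorer.record_visit("https://example.com/good", 240)
    scorer.record_visit("https://example.com/bounce", 3)
    print(scorer.rank(["https://example.com/bounce", "https://example.com/good"]))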


