
Filesystems often aren't very efficient at handling lots of small files.

If they could handle compressed archives transparently, then an archive of small files, perhaps extended from the old Windows URL= style .url files, might work. A minimal sketch of reading such a file with Python follows below.
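
For what it's worth, a classic .url file is just INI text, so parsing one takes nothing beyond the standard library. A minimal Python sketch (the example.url path is illustrative):

    from configparser import ConfigParser

    # A classic Windows .url file is plain INI text:
    #
    #   [InternetShortcut]
    #   URL=https://news.ycombinator.com/
    #
    # An extended format could simply add more keys to that section.
    shortcut = ConfigParser()
    shortcut.read("example.url")  # illustrative path
    print(shortcut["InternetShortcut"]["URL"])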

An SQLite file also sounds like a great way of handling URLs, which is what Firefox does:

https://stackoverflow.com/questions/464516/firefox-bookmarks...
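
For illustration, here's a rough sketch of pulling bookmarks out of places.sqlite with Python's built-in sqlite3 module. The moz_bookmarks/moz_places join reflects Firefox's schema as I understand it, so treat the column names as an assumption to verify against your own profile:

    import sqlite3

    # Open a copy of the Firefox profile database (Firefox locks the
    # live file while running, so query a copy).
    con = sqlite3.connect("places.sqlite")

    # Bookmark entries live in moz_bookmarks; the URL itself is stored
    # in moz_places and joined in via the fk column. type = 1 marks an
    # actual bookmark (as opposed to a folder or separator).
    rows = con.execute("""
        SELECT b.title, p.url
        FROM moz_bookmarks AS b
        JOIN moz_places AS p ON p.id = b.fk
        WHERE b.type = 1
    """).fetchall()

    for title, url in rows:
        print(title or "(untitled)", url)

    con.close()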



Efficiency/performance questions would matter if we were processing thousands of such files per second, but that isn't the case, is it? We read/write these .url files at a pace of maybe 1 file per second at most, even when browsing fast and saving many bookmarks in a short time.

IMHO filesystem efficiency questions never arise for a single user's bookmarks. If one day you want to do some data mining on your 10k bookmarks, it will probably take < 1 second, even if done with Python, as in the sketch below.
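
For example, a back-of-the-envelope scan of that kind, assuming a hypothetical bookmarks/ directory full of .url files, is just a few lines:

    import time
    from pathlib import Path

    start = time.perf_counter()

    # Walk a hypothetical bookmarks/ directory of .url files and
    # collect every URL= line; even ~10k small files should finish
    # well under a second on typical hardware.
    urls = []
    for path in Path("bookmarks").rglob("*.url"):
        for line in path.read_text(errors="replace").splitlines():
            if line.startswith("URL="):
                urls.append(line[4:])

    print(f"{len(urls)} URLs in {time.perf_counter() - start:.3f} s")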

Do you see a real-life situation in which reading a .url file in 1 µs instead of 100 µs would make any difference?

(If you're speaking about search/querying, then the OS search feature does it for us)



