
Derive your encryption key from the contents of the file and a "convergence key". The "convergence key" can then be null for global convergence, a shared secret for a privately shared convergence, or a random nonce for no convergence. The derived encryption key is stored the same way in every case. When encrypting a file, clients trade off using more space against the risk of losing the file if the server is required to remove a shared ciphertext. The server never knows the difference.
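A minimal sketch of that derivation in Python, assuming HMAC-SHA-256 as the key-derivation function (the comment doesn't pin one down):

    import hashlib
    import hmac
    import os

    def derive_key(plaintext: bytes, convergence_key: bytes = b"") -> bytes:
        # HMAC over the file contents, keyed by the convergence key.
        # Identical (plaintext, convergence_key) pairs yield identical keys,
        # hence identical ciphertexts, so the server can dedupe them.
        return hmac.new(convergence_key, plaintext, hashlib.sha256).digest()

    data = b"contents of the file"
    shared_secret = os.urandom(32)  # distributed out-of-band to friends

    global_key = derive_key(data)                  # null key: global convergence
    group_key = derive_key(data, shared_secret)    # shared secret: private convergence
    unique_key = derive_key(data, os.urandom(32))  # random nonce: no convergence

From the server's side all three cases look identical: an opaque key-encrypted ciphertext, deduped whenever two uploads happen to collide.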


This could even be done by the user before storing the file on the cloud service, and finding duplicates would be trivial server-side. (Though I don't see much incentive for a person to do this, since it only benefits the host.) For example, in the Mega interface, a user could specify the length of the convergence key (a random salt whose length inversely affects the likelihood of de-duplication on the host), with a default length of 0; a sketch of this appears below. The salt would then be part of the "key" proper, as those bits are required to access the original file.
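A sketch of that interface, assuming SHA-256 over salt-plus-contents as a hypothetical derivation (the stored "key" is the salt concatenated with the derived key):

    import hashlib
    import os

    def make_file_key(plaintext: bytes, salt_len: int = 0) -> bytes:
        # salt_len = 0 reproduces plain convergent encryption (full dedup);
        # each added salt byte cuts the odds of a server-side match by 256x.
        salt = os.urandom(salt_len)
        enc_key = hashlib.sha256(salt + plaintext).digest()
        # The salt travels with the key proper, since those bits are
        # required to re-derive the key and access the original file.
        return salt + enc_key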


And it should be done such that the server treats everything the same. The incentive comes from deduped files counting less against storage quotas and from not having to spend time uploading the file. I'm just commenting on the general approach here, not on its applicability to any particular type of service.

But your 'random salt' idea suffers from an attacker just generating all possible encryptions of a suspected plaintext, since a short salt leaves only a small number of possibilities. The "convergence key" is instead a full security-parameter-length key that you can pass around to your friends, so that your files will dedupe with theirs while not being susceptible to confirmation attacks by others.
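To make the attack concrete, here is a sketch of how an attacker could confirm a guessed plaintext under the short-salt scheme, assuming (hypothetically) that the server's dedup identifier is a hash of the derived key:

    import hashlib
    from itertools import product

    def confirm_guess(guess: bytes, observed_id: bytes, salt_len: int) -> bool:
        # With a short salt there are only 256**salt_len candidates, so an
        # attacker can enumerate every possible encryption of the guess.
        for salt in map(bytes, product(range(256), repeat=salt_len)):
            key = hashlib.sha256(salt + guess).digest()
            if hashlib.sha256(key).digest() == observed_id:
                return True
        return False

With a full 32-byte convergence key the same loop would need 2^256 iterations, which is why a security-parameter-length key resists confirmation attacks while a short user-chosen salt does not.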



