Hacker News

Could you elaborate on the reasoning behind using byte length instead of bit length?

Most of the time when I use fixed-width int types I'm trying to create guarantees for bitwise operators, so to me it makes the most sense to name types at the bit level.



We almost always talk in bytes. When reasoning about alignment it's bytes; when reading serial IO or a file it's bytes. I hardly ever think in bits, and when I do, I think in hex, not decimal.

I also like that it makes all the type names the same width (notably U1/U2 vs u8/u16).



