
> carrying bit length in the name is critical

I beg to disagree. In D:

    byte - 8 bits
    short - 16 bits
    int - 32 bits
    long - 64 bits
absolutely nobody is confused about this.


Because we've used those names since forever, but really it's archaic, arbitrary cruft. Nothing apart from maybe "byte" makes sense here; the rest is historic accident. Might as well have called the rest timmy, britney and hulk.


> Nothing apart from maybe "byte" makes sense here

Lest we forget: https://web.archive.org/web/20170403130829/http://www.bobbem...


Java uses exactly the same, and has a huge developer mindshare. While rooted in historic accidents, it's well-established.


Of course plenty of people are confused; the mental overhead of "short/long" just makes no sense. It's yet another bad design from the past, carefully preserved.


Haven't run into a confused one yet, and D has been around 20 years.


That's just an indication of who you run into. If you've never met anyone noticing or getting confused by this pretty obvious flaw, that says more about your sample than about the flaw.


When we move into the 128-bit CPU era, will we call 128-bit integers "super long"? Maybe "elongated". Maybe "huge"?

Or, you know, we could just name them all by bit length and completely future-proof this system.


Why would we ever need 128-bit CPUs? I remember the PS2 had something like that (with details and caveats I don't understand), but subsequent games consoles went back to a more usual register size: https://en.wikipedia.org/wiki/128-bit_computing


All I know is we keep having this issue with saying "nah, this is it. Nobody will ever need more than this." And then inevitably the time comes when we need more.


Back in the 80s, 16 bit programmers knew that 32 bit code was coming, so they carefully crafted the code to be portable to 32 bits.

Of course, none of it worked on 32 bit machines because the programmers had never written 32 bit code before and did the portability measures all wrong.


We used to call 16 bytes a paragraph, so the nostalgic geek in me would love to see 'para' catch on. I never thought I'd be slinging around whole paragraphs of memory in registers!


We call them "cent" and "ucent" in D :-)


In that case D should probably start to have an internal conversation about what they're going to call 128 bits, because it's going to become a thing sooner or later.

stdint's naming scheme already covers that though: (u)int128_t. (Though stdint.h itself only defines those intN_t types for widths the implementation actually supports.)


it has defined the type for a very long time: it's cent and ucent. It hasn't been fully implemented yet, but the name is already settled, forever.


That's really interesting, and for me a totally unexpected name; I'd never seen that nomenclature before. It would be interesting to see how consensus around it was reached. But hey, we gotta call it something! (Just not DoubleQuadWord, please...)


So will the 256-bit one be dollar or euro?


We'll think of something. Perhaps `bright`?


and

    ubyte - 8 bits
    ushort - 16 bits
    uint - 32 bits
    ulong - 64 bits
    ucent - 128 bits
    
    float - 32 bits
    double - 64 bits
    real - maximum precision hardware allows (80 bits on x87).


Wait, isn't long 32 bits and long long 64 bits?



