The IPv6 Numeric IP Format Is a Usability Problem (zerotier.com)
323 points by LukeLambert on Feb 19, 2016 | hide | past | favorite | 196 comments


The author suggests using deadbeef.1 instead of dead:beef::1. But his scheme cannot work. If you see deadbeef.ad, there is no way to tell whether it refers to his IPv6 notation or to a domain under the .ad TLD (many other ccTLDs are also valid hexadecimal numbers). And you can't replace the dot with a colon (because many of his other complaints were caused by colons). You can't use a character other than dot or colon either, because so much network software is written assuming IPs/hostnames can only contain alphanumeric, dash, period, or colon characters that introducing a new character would be too painful.

So get over it. IPv6 is not meant to be usually exposed to end users. Use hostnames. Use DNS, or mDNS or LLMNR on small networks without a resolver. Etc.


It helps little to say it wasn't designed to be exposed, because it will be. We have DNS now, and we still deal with raw IPv4 addresses all the time, end users included. It's not like we have a choice most of the time...


Most of the reasons why you have to type v4 addresses are irrelevant in a v6 network. In a small to medium network you just plug in the router and you're done. No picking a suitable private network, no configuring a DHCP server, no typing in manual router addresses. SLAAC is very nice like that.

If you need to connect from machine to machine in the lan, use mdns and host names. This already works out of the box even in v4.


Developers deal with raw ipv4s, but end users? Unless you're setting up a wireless router, when does that actually happen?


Not all, but many end users frequently need to input IP addresses when any type of connection is being made on a private network that isn't using DNS.


And that's when having a colon every few characters helps you keep track of where you are while typing that long string.


Exactly. There wouldn't be any need for the colons if the addresses weren't intended to be looked at or typed by humans.


The article has an edit at the end suggesting double-dot to avoid the confusion.

> IPv6 is not meant to be usually exposed to end users.

"You're holding it wrong". This is a crap argument against the article. Network admins have to manipulate this stuff all the time, and the article is written from the point of view of a netadmin, not an enduser. It's trying to describe both a problem (lack of client uptake despite support; struggles with human comprehension) and a potential solution. The idea that humans would never have to look at an ip6 address has proven to be a fantasy.


Agreed. This is why IPv6 is a textbook case in how not to define a standard.


I admit I look at IPv6 and cringe at the thought of implementing it.


The double-dot suggestion is a knee-jerk reaction without any thought put into it. And it's a non-starter: IPv6 has been an established and widely deployed standard for 10+ years. Ranting about how it could have been done differently will achieve nothing apart from damaging the author's reputation.


What is the rational reason, if any, for gripes like these? The time to have this discussion would have been in like 1993 or so. Now, IPv6 is what we have, and the standards are what they are, flaws and all.

The only reason I can think of is psychological: People don’t want to learn new things, so they find reasons to dislike the new thing to be able to pretend they don’t need to learn it.

Also, the double-click argument is crap for two reasons: Firstly, it can be fixed by configuring your local software, and secondly, IPv4 addresses also had this so-called problem.

> IPv6 is still in the early stages of adoption

It really, really isn’t. It might look that way to you, in the US, at your home endpoint, but move to the backbone or outside the US and you get a very different picture. ARIN in the US just happened to be the last of the RIRs (except AFRINIC in Africa) to run out of IPv4 addresses, so the US was able to put off switching for longer than most, and the whole of the US is now consequently behind the curve.


> What is the rational reason, if any, for gripes like these? The time to have this discussion would have been in like 1993 or so. Now, IPv6 is what we have, and the standards are what they are, flaws and all.

The rational reason is that despite tons of "we have to adopt IPv6 or the world will end", people aren't learning it. Trying to understand the underlying reasons is worthwhile. The shittiness of the representation is one of those reasons.

(I agree with the people who say "making the parsing namespace even more complex won't help anyone" though)


It no longer matters if some people want to learn IPv6 or not. The world at large has adopted IPv6, by necessity. Those who refuse to follow with the times will, sooner or later, be replaced by those who are happy to learn new things. It’s not like the world will suddenly one day decide to give up and go back to IPv4. Adapt or die.


You seem to be confusing the difficulty of "learning" something which is trivially easy with the difficulty of using something that is designed in an obfuscated way that makes it hard to use.

This situation is the latter. That's the argument of the article.


You seem to be confusing the difficulty of "learning" something with the difficulty of "unlearning" something to learn something new.

I don't see any actual argument in the article, just a narrow-minded rant that fails to see farther than its own nose.


Hardly. IPv6 is straightforward for any network engineer with experience running existing networks. It's IPv4, but you get to design the address space to logically suit your infrastructure and needs.


The article is rather against the use of a colon at all, and that is not obfuscation.


Everyone wants to have the larger address space that ipv6 provides. All the other stuff rolled into it and the obvious representation shortfalls etc are very unfortunate.


There isn't a clean way to solve representation of a 128-bit number. It's a big number. It'll be big to represent.

I am curious what other problems you think it has, that don't just come along with the bigger address space?

The realistic perspective here is still essentially the cost one: most people who have IPv4 hardware see little reason to move at all, until they have a problem.


They could have used base 36 and made it 16 characters. That would have made it at least feasible to remember.


I like this idea. A lot. For other readers:

https://en.wikipedia.org/wiki/Base36

I'd also want optional punctuation for formatting, for reading and data entry, just like with phone numbers. Maybe looking like this:

0123-ABCE-4567-FGHI
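As a length sanity check (a quick sketch; the helper function and address are just illustrations), a full 128-bit value actually needs 25 base-36 digits rather than 16:

```python
def to_base36(n):
    # Repeated divmod; digits are 0-9 then a-z.
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    out = ""
    while n:
        n, r = divmod(n, 36)
        out = digits[r] + out
    return out or "0"

addr = int("deadbeef000000000000000000000001", 16)
encoded = to_base36(addr)
print(len(encoded))  # 25 digits for an address with the high bits set
```

Addresses with many leading zero bits encode shorter, of course, just as with the hex short form.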


I'm concerned about privacy implications. Let's say that I'm a dissident somewhere. I'm using VPN services, JonDonym, Tor, etc to help obscure my identity. But then maybe WebRTC or whatever reveals the device IPv6.

I'm sure that it's possible to generate a new IPv6 for each session. But it's yet another pitfall for the unwary. For now, I just disable and firewall.


So you're behind a NAT for ipv4 for so long that you have forgotten that your computer is supposed to have a public IP address.

If you are a dissident or even a citizen concerned about privacy, you'd better be a part of the solution and not the problem as you are now by refusing to deal with ipv6.


What you're describing already exists. RFC 4941 defines a standard method for regularly "rotating" IPv6 addresses on a client machine, which is implemented by most major operating systems.
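On Linux, for instance, this is exposed through sysctl configuration (a sketch; the use_tempaddr keys are the standard Linux ones, but defaults and persistence vary by distribution):

```shell
# Generate RFC 4941 temporary addresses and prefer them for outgoing traffic.
# 0 = disabled, 1 = generate but don't prefer, 2 = generate and prefer.
sysctl -w net.ipv6.conf.all.use_tempaddr=2
sysctl -w net.ipv6.conf.default.use_tempaddr=2
```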


Rotating IPs are a nightmare for network admins: http://blog.bimajority.org/2014/09/05/the-network-nightmare-...


> UPDATE (2014-09-06): As Stéphane Bortzmeyer was the first to point out, RFC 7217 addresses all of my issues with “privacy” addresses. Let implementation come soon!


Very cool! Thanks :)


People are learning and using it just fine. Yes, it looks a bit different but that's about it. Everything works just the same except building properly routed networks is a lot easier with IPv6.


So, why not fix these problems? Go through the RFC process, if it doesn't collide with anything, it'll make it into the standard, IPv6.1 or whatever, and eventually things will support it. Sure, it might take ~10 years for all freezers and forgotten L3 switches to support it, but why not start at all?

We can change these things more easily than we can change our visual pattern-matching engine. (Not that we can't train it to perform better, but that just means we'll still make more mistakes and be less efficient than with this small change.)

Fixed-width fonts and indentation are good things; the majority of programmers use them for a reason. Why not go for sane ergonomics in network-related stuff too?


> Sure, it might take ~10 years for all freezers and forgotten L3 switches to support it, but why not start at all?

I seriously doubt that any change in notation would be beneficial enough to be worth the 10 years of work, incompatibilities and bugs.

You don’t see people making this kind of fuss about MAC addresses; that’s because people know that they aren’t supposed to memorize them, and we treat them accordingly. That’s the thing about IPv6 - you have to realize that you have to let go of the notion of memorizing IP addresses, just as it would never occur to you to know your MAC address by heart. Use the DNS; eliminating the need to memorize IP addresses is what the DNS is meant for.


MAC addresses are actually worse: some software uses colons as separators, and some uses dashes. IPv6 addresses are ungainly, and have two forms (canonical short and canonical full), but you can usually put either form wherever an address is needed, and there's good library support. I guess that's a benefit of multiple decades of adoption; by the time I get to use it, everything works already. :)
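For illustration, Python's ipaddress module (a sketch) shows the two canonical IPv6 forms, plus a one-line normalizer for the MAC separator mess:

```python
import ipaddress

# RFC 5952 short form vs the fully exploded form of the same address.
a = ipaddress.IPv6Address("dead:beef:0000:0000:0000:0000:0000:0001")
print(str(a))      # dead:beef::1
print(a.exploded)  # dead:beef:0000:0000:0000:0000:0000:0001

# MAC separators: some tools emit dashes, some colons; normalizing is trivial.
def norm_mac(mac):
    return mac.replace("-", ":").lower()

print(norm_mac("AA-BB-CC-DD-EE-FF"))  # aa:bb:cc:dd:ee:ff
```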


> So, why not fix these problems?

They're individual preferences not problems. We can't realistically have a bunch of different notation standards to make everyone happy.


True, so why not make a good notational standard so that everyone is at least not unhappy?


I'm a network engineer and I work with IPv6 often these days. I think it's a good notational standard. I don't care about the difference between dots and colons. That's much more of a programmer's-fetish type of thing. As a general rule, I think network engineers, who actually have to work with this stuff, just want consistency. If IPv8 wants to replace colons with middle-finger emoji, sure, OK. Whatever, I don't care. Just let me know. I'll figure it out.


Most IT operations people within corporate networks use ips instead of names. And unix admins only trust ips in their host firewalls. So many things rely on the ip address that people get to know that 10.1.0.0 is headquarters and 10.1.3.0 is marketing and 10.4.0.0 is Seattle .. etc etc.

I'm not saying that is a good thing or not, I'm just saying it IS.

It would be a pain to use ipv6 internally.


> It would be a pain to use ipv6 internally.

So.. Have you ever actually used ipv6?

With ipv6 your organization gets a prefix assigned, usually a /32 or a /48

So you would get assigned a prefix like 2626:32:400::/48

Compare with ipv4, where if you were lucky you had a /16 with 65,536 addresses, and "simple" /24 subnetting gave you 256 subnets of 256 hosts each.

With ipv6 you minimally have a /48, which gives you 65,536 /64 subnets.

So you can just start subnetting things like 2626:32:400:0::/64 is headquarters, 2626:32:400:1::/64 is marketing. And you never have to worry about running out of address space on each subnet, since each subnet has room for 18,446,744,073,709,551,616 addresses.

And since there is room for 65,536 subnets, you can easily do things like immediately carve that up into blocks of 512 subnets for every geographical location and project... once. And not end up with "10.1.3.0 is marketing, but also 10.1.58.0 because we ran out of room"

So really, if you can type 192.168.x.y, you can type 2626:32:400:x::y.
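The subnetting described above can be sketched with Python's ipaddress module (using the hypothetical 2626:32:400::/48 prefix from the comment):

```python
import ipaddress

org = ipaddress.ip_network("2626:32:400::/48")

# Carve the /48 into /64s: one subnet per site or department.
subnets = org.subnets(new_prefix=64)
hq = next(subnets)         # 2626:32:400::/64
marketing = next(subnets)  # 2626:32:400:1::/64

print(hq, marketing)
print(org.num_addresses)   # 2**80 addresses in the /48
```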


In fact it's even easier. In the IP address you can embed an identifier for which office/datacenter, which department, which VLAN, etc. This makes it much more powerful and expressive.


I know that some network configurations will still have to use actual IPv6 addresses but for the most part you could save yourself a lot of pain by just using DNS and hostnames.


When I speak to our network guys to get a firewall entry they don't want to hear about names. Their network diagrams don't have names. Using names to them is unnecessary cognitive load.


What would happen if your current ISP ran into problems and you had to switch to a different ISP with a new range of addresses (assuming you don't have provider-independent addresses and aren't using BGP)?


They'd NAT. And they're probably already NATed, so it would just be a few changes on the outside of the firewall.


Good luck NATing v6


That's already done. If you're big enough that renumbering is a concern, but small enough that you don't have your own assigned prefix, you use unique local addresses (fc00::/7) and NAT them to your providers space. (Which works better than it sounds - because you have enough space to nat one-to-one, the translation mechanism doesn't need to keep state. Simply replace one prefix with another, whack a new checksum in and send it along)
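The stateless prefix swap described above (the idea behind NPTv6, RFC 6296) can be sketched like this. Note that a real implementation also keeps the translation checksum-neutral, which this toy version skips, and all prefixes here are example values:

```python
import ipaddress

def translate(addr, from_net, to_net):
    # Keep the host bits, replace the network prefix; no per-flow state needed.
    a = ipaddress.IPv6Address(addr)
    f = ipaddress.ip_network(from_net)
    t = ipaddress.ip_network(to_net)
    host_bits = int(a) & ((1 << (128 - f.prefixlen)) - 1)
    return ipaddress.IPv6Address(int(t.network_address) | host_bits)

# ULA inside, provider prefix outside.
print(translate("fd00:1234:5678::42", "fd00:1234:5678::/48", "2001:db8:aaaa::/48"))
# 2001:db8:aaaa::42
```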


There's nothing about NAT that makes it IPv4-specific. In fact, it's implemented for IPv6 in netfilter (Linux) and PF (OpenBSD). For IPv6, there's also NPT (Network Prefix Translation), though I'm not sure how widespread it is.

Now, the fact that you can NAT IPv6, doesn't mean that you should. Specifically, if you NAT your IPv6 prefix because that's what you do with your IPv4 block, then you're doing something wrong.


I would hope that the firewall guys are using a layer of abstraction in their rules rather than just putting the IPs in each rule individually. Network and protocol objects are fantastic, because you can assign names to network resources and then write rules that refer to hosts and networks by human names.

As an added bonus, if a resource changes addresses you just update the object and all of your rules are updated.


Such IT operations people and “unix admins” have chosen not to adopt the new thing slowly when they had the chance, so instead, they will now have the pain of adopting it quickly. This is entirely on them. Having that attitude towards IP addresses is not a practical attitude with IPv6 - you can do it, but it’s impractical. It is the attitude itself, not IPv6, which is the problem. This was going to be a problem whatever notation was used - there are simply too many bits in an address for it to be practical to deal with and memorize lots of raw IP addresses anymore.


The majority of people will use IPv6 transparently through DNS. I'm not sure why typing an address into a browser or even a terminal seems like the more common use case.


Hear, hear.

The fact that IPv6 addresses are so unruly is a blessing in disguise. Use DNS, /etc/hosts, bonjour, .ssh/config, whatever. Use names, stop using addresses directly, even with IPv4.


With IPv4, if DNS isn't working, it is very useful to be able to enter an IP address directly to check if the problem is with DNS or with the network.

(But then again, that is no reason not to adopt IPv6! Some things are going to get harder, but so many, many things are going to get easier that it is easily worth the tradeoff.)


And for those few times, you can still enter an IP directly. :)

For those who like to memorise IP addresses, my favorite ping target in IPv4 is 8.8.8.8, but in IPv6 I like using 2600::1. Shorter, and more fun! (2600 as in the Hacker Quarterly/Hope.net, or the old 2600 Hz hack; the netblock is owned by Sprint, but I guess that is à propos..)


Yes, you can. But it is hard to deny that if there is a specific server/host you are trying to access, its address is likely to be a lot harder to memorize than that.

(And like I said, that should not be considered a reason to avoid adopting IPv6. The only thing I currently dislike about IPv6 is that my ISP does not give me a static network address.)


That's odd. What ISP is it? This goes against RFC recommendations (e.g. https://tools.ietf.org/html/rfc6177, which suggests assigning between a /64 and a /48).

My ISP (DSL with teksavvy.com in Canada) offers a dynamic-ish /64 with SLAAC, then a static /56 subnet over DHCPv6. I know some people who had issues with their /56 subnet resetting, but that was usually solved by contacting tech support.


Hmm, realized this after posting: my comment on /64 to /48 is rather off-topic, and the reason why the ISP allocates dynamically might be that they are using 6rd [1].

From what little I understand of it, 6rd derives an IPv6 subnet from the IPv4 address. So unless your v4 address is static, your v6 subnet will be dynamic. Some cable providers in Canada (Videotron) are using this. I hope they get rid of it soon, because it's really clunky!

[1] https://en.wikipedia.org/wiki/IPv6_rapid_deployment


Seems Google has been able to snag some easy-to-use IPv6 addresses for their DNS servers as well.

    2001:4860:4860::8888
    2001:4860:4860::8844


Sprint.net has the IPv6 address 2600::, which responds to ping. It's super helpful for testing v6 connectivity, as well as being an awesome reference.


Sadly, it's a lot easier to have my router always assign my NAS 192.168.0.2 than it is to configure DNS.


Make a homelab and set up local DNS. It's not that complicated if you don't want to get fancy. You can also team it up with DHCP using tools like dnsmasq, which is lightweight and works on everything from Raspberry Pis and routers with flashed firmware to a whole computer or VM on a larger host.

You can do it. You should do it. Why haven't you done it yet?
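A minimal dnsmasq setup really is short. As a sketch (all names and addresses below are made up for illustration):

```
# /etc/dnsmasq.conf (sketch)
domain=home.lan
local=/home.lan/        # answer home.lan queries locally, don't forward them
expand-hosts            # append the domain to simple hostnames
dhcp-range=192.168.0.50,192.168.0.150,12h
# Pin the NAS to a fixed address by MAC, so "nas.home.lan" always resolves:
dhcp-host=aa:bb:cc:dd:ee:ff,nas,192.168.0.2
```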


Not saying what I'm doing is right, but giving my NAS a static IP works on any device connected to the network. My router doesn't support manual DNS entries and I don't want to go through the trouble of flashing OpenWRT just to do something.

You can name off tools that make it easier all you want, but how is the average user supposed to know that? When I set up my NAS, I wanted it to be accessible from the same address no matter what. I knew (at the time) about static IPs and manual DNS entries. So I went to my router's configuration and it didn't support manual DNS entries. So I opted for a static IP, and it worked. Sure, it's a kludge and not future proof, but I don't care. That's the problem. The "solutions" only work if you both know about them and care enough to do it the right (instead of the easy) way


Because it is work, and then it will require administration. Which is more work. Computers are smart enough to handle this, why can't everything on a subnet just have (for example) zeroconf?


We have Macs, so I just use zeroconf/Bonjour names. Never had to configure anything.


From personal experience, most home routers today take the computer name and add it as a .local DNS entry valid only on the local network (.local being a reserved TLD).


/etc/hosts and .ssh/config scale very, very poorly, whereas the raison d'être of IPv6 is to scale very, very well.


Those are for "home use", for other stuff you use DNS. DNS is what they call "internet scale".


Interesting, and perhaps ironic in this context: the original HOSTS.TXT was the "internet scale" solution of its time, in that it provided names for the entire network: https://en.m.wikipedia.org/wiki/Hosts_(file)


The majority never see IPs at all, in which case this is a non-issue. This is an issue for network people and developers.

All the alternatives are either unreliable and slow (bonjour/mDNS) or require manual setup prior to use. Given that machines, networks, routes, etc. are all becoming increasingly ephemeral, in the end all you end up with is a DNS zone, .ssh/config, or hosts file with hundreds or thousands of stale entries for things that existed for five minutes. In some environments there are nice systems for naming things and IP address management, but these are hard to set up and maintain and aren't feasible in really heterogeneous settings.

I guess Martin Fowler was right: there are two hard things in CS, cache invalidation and naming things. This is naming things.


> All the alternatives are either unreliable and slow (bonjour/mDNS)

I’ll agree with you on that, in certain cases. In my experience, OS X-to-OS X always works seamlessly, but Windows-to-OS X is much more annoying. It works around 70% of the time.

Say the hostname of the Mac on the local network is lorems-mac-mini.local. Then say I want to connect to this Mac over various services from my Windows computer (file sharing via Windows Explorer, vnc via TightVNC, nx via NoMachine, etc.). 70% of the time, providing lorems-mac-mini.local as the hostname works. The other 30% of the time, the same programs which worked just fine with the hostname as lorems-mac-mini.local all of a sudden won’t be able to find it on the network anymore unless the same hostname is entered without the .local part. Then, sometimes, neither solution works and the Windows computer can’t find the Mac at all unless I enter its IP address, which magically works.

Frankly, it became annoying enough that I now just enter the IP addresses of devices on my local network that I want to connect to now instead of their hostnames, since I know it’ll always work.

…in that sense, I guess I have to agree with your overall sentiment.


> This is an issue for network people and developers.

So, the problems mentioned in the post essentially were...

    * ambiguity specifying ports
    * software not recognizing addresses
    * length to type (3-32+7 characters)

Looking past DNS, the first is only really an issue in web browsers, which I doubt most people will actually use for raw addresses. The second is arguably a software issue, not a format issue. The third genuinely seems marginally debatable. Still, the amount of code needed to translate from one format to another would be trivial. In the amount of time it took to write the post, someone could probably have just written the code instead. There are tons of plugin mechanisms for this sort of thing (shell, editor, browser, etc.).
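On the port-ambiguity point specifically, the bracket notation from RFC 3986 already resolves it in URLs, and standard libraries parse it (a quick Python illustration; the address is the article's example):

```python
from urllib.parse import urlsplit

# Brackets separate the colon-laden address from the port.
u = urlsplit("http://[dead:beef::1]:8080/path")
print(u.hostname)  # dead:beef::1  (brackets stripped)
print(u.port)      # 8080
```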

> Given that machines, networks, routes, etc. are all becoming increasingly ephemeral

MPTCP addresses some of these issues.


In theory, a change to the text representation would be significantly easier to deploy than IPv6 itself, because it doesn't require a change to the wire format.

An end user could tweak the text representation for their own tools, while continuing to exchange packets with the rest of the Internet.
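As a sketch of how cheap such a local re-skin would be (the dotted 32-bit-group notation below is made up purely for illustration, not a proposal from the article):

```python
import ipaddress

# A purely local alternative notation: four dot-separated 32-bit hex groups.
# The wire format is untouched; only the display/entry form changes.
def to_alt(addr):
    n = int(ipaddress.IPv6Address(addr))
    return ".".join(f"{(n >> s) & 0xffffffff:08x}" for s in (96, 64, 32, 0))

def from_alt(s):
    n = 0
    for part in s.split("."):
        n = (n << 32) | int(part, 16)
    return ipaddress.IPv6Address(n)

alt = to_alt("dead:beef::1")
print(alt)            # deadbeef.00000000.00000000.00000001
print(from_alt(alt))  # dead:beef::1
```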


> > IPv6 is still in the early stages of adoption

> It really, really isn’t. It might look that way to you, in the US, at your home endpoint, but move to the backbone or outside the US and you get a very different picture. ARIN in the US just happened to be the last of the RIRs (except AFRINIC in Africa) to run out of IPv4 addresses, so the US was able to put off switching for longer than most, and the whole of the US is now consequently behind the curve.

The US may have been slow to start, but is probably ahead of the curve now. AT&T (6rd, but still) and Comcast have a large amount of residential users that are IPv6 enabled; T-Mobile, Verizon, AT&T and Sprint all support it on wireless too (subject to apns and access technology).


I suspect it's more about mind share in the IT profession rather than IPv6 support by ISPs, as they started to roll out when all new equipment had IPv6 support by default and a subset of customers were asking for it.

In Europe for example, the IT profession has been bombarded with news on how IPv4 was running out, then it ran out, and then all problems because it ran out. Network courses have been teaching IPv6 for a long time, government has issued mandates to use it (and governments are not known to be fast on those issues...), and IT conferences used to talk about it to the point where it's such old news that it is no longer worth talking about.


If user access to Google via IPv6 is considered a good tool of measurement, then the US actually has the highest rate of IPv6 adoption. https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...

Globally (again using user access to Google as the measurement), the adoption rate is still less than 10%. One could argue this is still the early stages of adoption.


> then The US actually has the highest rate of IPv6 adoption.

Actually, according to these statistics, the adoption rate in Belgium is much higher: 40.39%.


If you're really adventurous, you could just use Braille, which has 256 Unicode symbols. Ahem

ip6emoji("fe8000000000000003ceecdfffe30c27",Char(0x2800)) => "⣾⢀⠀⠀⠀⠀⠀⠀⠃⣎⣬⣟⣿⣣⠌⠧"

Then:

   deadbeef000000000000000000000001 
   2607f2f8a36800000000000000000002
   fe8000000000000003ceecdfffe30c27
   fe800000000000000000000000000001
   2607f8b040078090000000000000200e 
Becomes:

   ⣞⢭⢾⣯⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠁
   ⠦⠇⣲⣸⢣⡨⠀⠀⠀⠀⠀⠀⠀⠀⠀⠂
   ⣾⢀⠀⠀⠀⠀⠀⠀⠃⣎⣬⣟⣿⣣⠌⠧
   ⣾⢀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠁
   ⠦⠇⣸⢰⡀⠇⢀⢐⠀⠀⠀⠀⠀⠀⠠⠎

At first I was just playing around, but after a bit it begins to resemble one of those binary clocks. It even becomes somewhat natural to read. Might actually use this for myself... something nice about the 2x4 bit block patterns. 64bit pointer addresses?
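For the curious, the mapping is just one Braille pattern (U+2800 plus the byte value) per byte of the address; a Python equivalent of the ip6emoji call above might look like:

```python
def ip6_braille(hex32):
    # One glyph per byte: a Braille pattern encodes 8 dots = 8 bits.
    return "".join(chr(0x2800 + int(hex32[i:i + 2], 16)) for i in range(0, 32, 2))

print(ip6_braille("fe8000000000000003ceecdfffe30c27"))
# ⣾⢀⠀⠀⠀⠀⠀⠀⠃⣎⣬⣟⣿⣣⠌⠧
```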


That results in a different usability problem. Installed fonts are very nonstandard.

At the moment I happen to be on an up-to-date version of Windows 7 and here's what it looks like:

http://imgur.com/U5vZFRQ

*edit: though it appears to render properly on Debian


>That results in a different usability problem. Installed fonts are very nonstandard.

And some users change their default font and don't even use installed fonts and override preferences because they're in charge of how things display on their computer.

I use Sofia-Pro and this is what I see: http://i.imgur.com/8PoNfdJ.png


Your browser/OS should be using font substitution¹ though when it encounters a character that the current font doesn’t have a glyph for. Font substitution is pretty much necessary since it’s currently impossible for one font to hold Unicode’s repertoire of 110,000+ characters; even the OpenType format is limited to a maximum of 65,536 glyphs.

Assuming that it is using font substitution (as it should be whether or not you’ve chosen Sofia Pro as your browser’s default sans-serif font… unless you’ve also changed some other settings too), then the reason those characters show up as question marks is because you don’t have any fonts installed containing glyphs for those characters. (I’d recommend Everson Mono² or Symbola³).

――――――

¹ — https://en.wikipedia.org/wiki/Font_substitution

² — https://en.wikipedia.org/wiki/Everson_Mono

³ — https://web.archive.org/web/20150625020428/http://users.teil...


>Assuming that it is using font substitution (as it should be whether or not you’ve chosen Sofia Pro as your browser’s default sans-serif font… unless you’ve also changed some other settings too),

Yes, actually. The only two fonts my browser is permitted to use are Sofia Pro and Meiryo. My point being nothing is wrong with alphanumeric representations of hexadecimal. Short of glyph fonts like Webdings, every font supports alphanumerics - even CJK fonts.

Some users change settings - heavily so. The "safest default" should be the assumption. Changing something that works for most people to work only for "people with a proper supporting font installed" is breaking the web as far as any devs should be concerned.


Doesn't work on my main up-to-date Win8 computer either.

(Though interestingly, when I open this page up with links2 on my VPS, accessed through a terminal on the iPhone using a monospace font, it renders fine. It doesn't render fine if I view the same thing with PuTTY on the Win8 computer. I guess font support on Windows is bad in general.)


"up-to-date version of windows7" reads like an oxymoron.


Why? Windows 7 gets regular updates. Windows 8/8.1/10 are not updates to Windows 7, it is a different operating system.

Edit: Not sure why all the downvotes, I honestly don't see how "up to date Windows 7" is an oxymoron.


> Why? Windows 7 gets regular updates. Windows 8/8.1/10 are not updates to Windows 7, it is a different operating system

I'll address this point, the sibling comment already addressed the other point (it wasn't meant as a criticism, by the way).

It's true that Windows 7 still get updates. However, these are mostly security updates, or bugfixes. We don't expect it to get new features. In this case, unicode font rendering. It is likely (but not guaranteed) that a user running Windows 10 would see a much better result.


But then that user would be running Windows 10, which means they get to pay Microsoft for the privilege of having their data harvested and having an advertising ID assigned to their device.

Why any sane individual willingly chooses windows 10 is beyond me.

edit: Can't reply, but turning off the "phone home features" doesn't actually turn them off. It still phones home and you still get assigned an advertising ID. Sure, I could block it at the firewall potentially, but this is about taking an ideological stance.


It looks just like yours on Win 10 under Chrome. I was curious and tried in Edge and it renders fine.

Also, the settings in Win 10 allow you to turn off a lot of the phone home type stuff. Other than that, my opinion is that 10 mostly feels like a more polished 7.


Windows 10 on Chrome here, I see the same as the Windows 7 user above.

Actually, it renders perfectly on Edge; so the problem is probably with Google Chrome.


Upvoted because downvotes without reasoning is pointless.

In regards to why I think you may being downvoted:

> reads like an oxymoron

The parent was being a bit facetious. We all know they are different operating systems. But it's far less likely that an operating system that no longer receives "feature" updates would be up-to-date on its fonts.

In any case, an oxymoron is (per Google):

> a figure of speech in which apparently contradictory terms appear in conjunction (e.g., faith unfaithful kept him falsely true ).

Which I think the parent accurately described. (It's apparently contradictory, but has meaning.)


I parsed that as Conway's Life. Which is odd, because I've not played with, nor seen it, since about 1993! I'll be plumbing that into Life to see what happens.


hehe, now I'm going to envision a giant Conway's game of life being played every time I see a room full of IoT devices on IPv6.

Wonder if there would be an interesting way to visualize a cascading failure of the kind that brought down AWS in past years. Which of course makes one wonder if there would be any kind of useful calculus for a series of such glyphs.

PS: according to wikipedia there is a calculus of communicating systems of sorts: https://en.wikipedia.org/wiki/Calculus_of_communicating_syst...


I like where you're going with this. By using a base-256 representation you shrink hex's 32 characters down to 16, almost the same as IPv4's dotted-decimal representation of 15 characters: 255.255.255.255. But obviously you don't address the typeability issue.

Without using the shift key it's easy to type base 32 numbers. That comes out to an average of 26 characters per IPv6 address.

So Hex

  deadbeef000000000000000000000001 
  2607f2f8a36800000000000000000002
  fe8000000000000003ceecdfffe30c27 
  fe800000000000000000000000000001 
  2607f8b040078090000000000000200e 
Becomes base 32:

  6ULMVEU0000000000000000001
  160VPFH8R80000000000000002
  7UG000000000007JNCRVVU6317
  7UG00000000000000000000001
  160VSB0G07G28000000000080E
If we are willing to use the shift key we could move up to Base 64 and get it down to an average of 21 characters. (Anyone know why the standard base 64 alphabet starts at A instead of 0?):

  3q2+7wAAAAAAAAAAAAAAAQ
  Jgfy+KNoAAAAAAAAAAAAAg
  /oAAAAAAAAADzuzf/+MMJw
  /oAAAAAAAAAAAAAAAAAAAQ
  Jgf4sEAHgJAAAAAAAAAgDg
Now if we are willing to use unicode characters, why stop at base 256? Unicode has 95,000 characters so we could use a base 65,536 number and cut it down to 8 characters, now we're talking! Unfortunately to get it down to 4 characters would require 4,294,967,296 different characters. Even the extended unicode set won't get us there.

But maybe an alphabet with emoji could be practical. You could have smiley or sad faces in your IP address. All kinds of possibilities with that. Though software such as this HN website would have to be fixed to be able to display it. But you know, a long term project.


There's an RFC for IPv6 addresses in Base85: https://tools.ietf.org/html/rfc1924


> 7. Implementation Issues

> Many current processors do not find 128 bit integer arithmetic, as required for this technique, a trivial operation. This is not considered a serious drawback in the representation, but a flaw of the processor designs.

Oh, IETF, please never change.


Why, thanks! The original inspiration is probably related to http://what3words.com/. Mainly I thought it'd be "mad scientist" thing but it actually is fun and possibly useful in some contexts. As for usability, you could use text substitution [1] and have something like `ip174` change into `⢮`. Setting up 256 mappings isn't too terrible, especially since there is a system to Braille dot ordering.

[1]: http://lifehacker.com/162484/save-time-with-text-substitutio...

Base 32 seems like a good half-way solution of length reduction vs glyph complexity, as Base 64 or Base 85 (or whichever the RFC suggests) include too many distracting characters (IMHO).

Of course, Chinese/Japanese speakers have a natural advantage here (Katakana and Hiragana both fall short of a contiguous 64-character mapping):

ip6emoji("fe8000000000000003ceecdfffe30c27",Char(0x3300),stride=8) => "㏰㌀㌏㏷"

( though hopefully that's not some form of insult in Chinese! ;) )


I like the base32 one. If you put two colons in front then you can even copy/paste and it distinguishes it from other text as an ipv6.

  ::6ULMVEU0000000000000000001
  ::160VPFH8R80000000000000002
  ::7UG000000000007JNCRVVU6317
  ::7UG00000000000000000000001
  ::160VSB0G07G28000000000080E


For reference, it looks like this on Safari v9.0.2 on OS X v10.11.2 'El Capitan':

http://f.cl.ly/items/0E182U1p3r430M073w2i/braille.png

Seems like the font that OS X’s text rendering system substitutes for the braille characters (U+28E3, etc.) is Apple Braille Regular over the other fonts installed that also have glyphs for those codepoints (Apple Symbols, Everson Mono (font I installed myself), and Symbola (font I installed myself)).

As an aside (and yes, I’m copying part of a post I made more than a year ago¹; I’m still interested in knowing the answer to this!) I’m pretty interested on how OS X and Windows decide on which font to use when there are multiple fonts installed containing the required glyph. For example, I have two other fonts on my OS X system that have a glyph for U+2705 (Everson Mono and Symbola), but OS X always seems to consistently pick Apple Color Emoji’s glyph. Maybe OS X’s text rendering system goes through the fonts in alphabetical order and uses the first one it finds containing the required glyph? It would be great if end users could have a bit more control over the font substitution process. I know it’s possible to do in some text editors like Emacs², but I believe that programs like that use their own text rendering systems instead of that supplied by the OS (could be wrong though).

――――――

¹ — https://news.ycombinator.com/item?id=8865067

² — http://stackoverflow.com/questions/6491202/overriding-emacs-...


I guess, if you make sure to always use a monospace version of Braille (I'm not sure that makes sense; it may be a requirement that Braille be monospaced) and have really good whitespace awareness. That output gives me a very good quick general feel of an address, but very poor fidelity.


Finally, someone said it! I've always felt that the biggest hurdle in IPv6 adoption is the complicated address notation.


Better not tell this guy about abbreviating ipv4 addresses. http://127.1/ or http://2130706433/ might blow his mind.


If the colons make you sad, don't worry, the addresses are represented with dots in some places. For example, DNS:

    $ dig -x 2600:3c03::f03c:91ff:fe93:50b0

    ; <<>> DiG 9.9.5-3ubuntu0.7-Ubuntu <<>> -x 2600:3c03::f03c:91ff:fe93:50b0
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40052
    ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

    ;; QUESTION SECTION:
    ;0.b.0.5.3.9.e.f.f.f.1.9.c.3.0.f.0.0.0.0.0.0.0.0.3.0.c.3.0.0.6.2.ip6.arpa. IN PTR

    ;; ANSWER SECTION:
    0.b.0.5.3.9.e.f.f.f.1.9.c.3.0.f.0.0.0.0.0.0.0.0.3.0.c.3.0.0.6.2.ip6.arpa. 18272 IN PTR itchy.jrock.us.

    ;; Query time: 0 msec
    ;; SERVER: 127.0.1.1#53(127.0.1.1)
    ;; WHEN: Fri Feb 19 22:34:49 EST 2016
    ;; MSG SIZE  rcvd: 118


Another potential problem with the sometimes-length-varies aspect to IPv6 addresses is that serious software bugs can be hidden. An array allocated to an insufficient size may work for quite a long time with the vast majority of addresses that take advantage of shortening tricks like "abcd::", and fail only when presented with an address string that uses the maximum possible IPv6 address length.

I think this article has a lot of really practical ideas that would help a lot.

I suppose the only other thing I’d want to allow in an IPv6 address is a Perl-like underscore anywhere for visual separation that acts like a comment; e.g. Perl lets you say things like 1_000_000 to mean 1000000. The article suggests a single dot but I think that could still be combined with visual underscores for things like "dead_beef_._0001".


An IPv6 address has 16 bytes and that's how to store them, period. For parsing / generating a string representation there's RFC5952 and libraries in every language that implement it.
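Python's standard ipaddress module is one such library, and it illustrates the storage/presentation split nicely:

```python
import ipaddress

# Store and compare addresses as their 16 raw bytes; only render the
# canonical (RFC 5952-style) text form at the edges of the program.
a = ipaddress.IPv6Address("dead:beef:0000:0000:0000:0000:0000:0001")
print(a.packed.hex())  # deadbeef000000000000000000000001 (the 16 bytes)
print(a.compressed)    # dead:beef::1
print(a.exploded)      # dead:beef:0000:0000:0000:0000:0000:0001
```

Comparing `packed` values sidesteps every "is dead:beef::1 the same as DEAD:BEEF:0:0:0:0:0:1?" question entirely.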


> Another potential problem with the sometimes-length-varies aspect to IPv6 addresses is that serious software bugs can be hidden. An array allocated to an insufficient size may work for quite a long time with the vast majority of addresses that take advantage of shortening tricks like "abcd::", and fail only when presented with an address string that uses the maximum possible IPv6 address length.

If the array you are referring to is the human-readable buffer used to store ipv6 addresses, I would assume the 'bugs' you are talking about are entirely similar with ipv4.


Formats like this are a great place to apply fuzz testing. It's easy to write automated test code that generates IPv6 addresses which at least look valid according to the BNF specification. That way you can get pretty good test coverage without having to manually figure out all the possible edge cases.
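A minimal sketch of that idea, generating addresses with a randomly placed "::" compression and feeding them to a parser (the stdlib ipaddress parser stands in here for whatever code is under test):

```python
import ipaddress
import random

def random_ipv6_string(rng: random.Random) -> str:
    """Generate a syntactically valid IPv6 string, sometimes using
    the '::' zero-compression shortcut at a random position."""
    groups = ["%x" % rng.randrange(0x10000) for _ in range(8)]
    if rng.random() < 0.5:
        start = rng.randrange(8)
        end = rng.randrange(start + 1, 9)  # replace groups[start:end]
        return ":".join(groups[:start]) + "::" + ":".join(groups[end:])
    return ":".join(groups)

rng = random.Random(2016)
for _ in range(10_000):
    s = random_ipv6_string(rng)
    ipaddress.IPv6Address(s)  # raises ValueError if the parser rejects it
```

A real fuzzer would also generate deliberately invalid strings (double "::", too many groups) to check that the parser rejects them cleanly rather than overrunning a buffer.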


The issue with the dot alone is that in some cases it's indistinguishable from a domain name.

"dead.beef" is bound to exist now that ICANN is going crazy with top-level domains.


Someone on Reddit suggested two dots.


Which of course doesn't help save keystrokes…


Sure it does:

: == two keystrokes using two separate fingers

. == one keystroke using one finger

.. == ~1.25 (ish) keystrokes, since your finger has to move to the dot once and then tap twice and only one finger is involved

:: == 3 keystrokes using two fingers


I respectfully disagree. In normal typing, I don't consider a capital "I" to be significantly more taxing than a lowercase "i". Relatedly, I think there's a reason that I've never heard an argument about why we should prefer square brackets over parentheses (square brackets require no shift while parentheses do).

Right off the bat, I notice that when I'm quoting something using double quotes or placing something in parentheses or curly braces, I don't even have to think about pressing shift. I type them just as quickly as any other punctuation. So I don't think there's much validity to this argument based solely on keystrokes.


> square brackets require no shift while parentheses do

On a German keyboard, parentheses require a shift, but to get square brackets, one needs to hit AltGr (or "Option" on a Mac keyboard) which quickly gets annoying if one has to do it a lot.

Except for that, I agree with you.


But I thought "We're all living in Amerika, Amerika ist wunderbar"...


> Relatedly, I think there's a reason that I've never heard an argument about why we should prefer square brackets over parentheses (square brackets require no shift while parentheses do).

FWIW, I switch parens and square brackets for exactly that reason. It'd be great if distros offered this as an option, but I know my way around xkb enough to do this much.


I'm an outlier-- I hate chording so much that I use Caps Lock instead of the Shift key. (I'm in the company of the world's fastest typer...)

I haven't gone so far as to hack the system to allow Caps Lock to enable the colon and other punctuation, but I've long considered it.

...So for me, two periods is a million times better than a colon.


I get downvoted every time I mention this, but I'll live with it-- I want to recruit people to typing with caps lock (especially vim users, already used to modal typing).


On /some/ keyboard layouts that may be true, but remember, on different layouts, you get different results.

On German layout, for example, `:` is on `Shift` + `.` – and therefore just as easy to use as `.`

(Sorry for the formatting, but this damn page doesn’t have any useful formatting syntax, or escaping. If you want to enjoy the formatting properly, just use a userscript to run a markdown parser over this page).


If your timing is extremely precise, it can save keystrokes... :)


I wonder if some sort of base64 encoding wouldn't have been better.

"dead:beef:0000:0000:0000:0000:0000:0001"

becomes

"3q2+7wAAAAAAAAAAAAAAAQ"

Which sucks because of the non-alphanumeric characters and the long run of A's, but one gets the idea: to a general user, hex encoding might as well be Hungarian.


https://www.ietf.org/rfc/rfc1924.txt

(base 85 representation of IPv6 addresses)


Base85 naturally encodes 32 bits at a time into 5 characters, why is this using 128 bit math? I can't tell if it's a joke, with that date.

Edit: The commentary at the end suggests more of a joke, even though "It may be expected that future processors will address this defect, quite possibly before any significant IPv6 deployment has been accomplished." wasn't exactly false. I'm not sure why you linked an intentionally-bad RFC for a reasonable concept?


April Fool's RFCs are a bit of a tradition[0]. I'm a fan of the proposal for IPoAC[1].

[0] https://en.wikipedia.org/wiki/April_Fools'_Day_Request_for_C...

[1] https://tools.ietf.org/html/rfc1149


Oh I understand why the RFC exists. But they should have put a bit more effort into it, and joveian should have made clear that it was a low-effort RFC, joke or not.

I'll be more clear about my earlier post. I realized the RFC itself was put out as a joke, but I couldn't really tell if the 128 bit math was bad on purpose or out of laziness. Or what the RFC author actually thought about using such a compact representation.


I am curious about the background behind it and the author's opinion of the basic idea as well. I think the 128-bit math part was just intended to invent context for a jab at standards that assume recent hardware and not really intended to make sense in context. I admit it was a low effort comment on my part :/.

It seems like something along those lines could be a good idea, although in practice I think 22 URL-safe base64 characters with four error correction bits would be a better representation. Looking at Wikipedia's nice base64 page, one possibility would be to use '-' and '_' for the two non-alphanumeric characters and allow the longest run of zeros ('A's) to be changed to '~'. Automatic error detection seems like a really good idea whenever humans are forced to interact with 128-bit numbers, but then you can't easily generate subnet masks by hand.

In general, I think avoiding interacting with them as much as possible is the most important step. At this point, it would take a while for any alternate representation to be widely supported by software even if there was wide agreement that it was a good idea. OTOH, a general "least bad" compact representation of larger numbers with error detection could potentially be useful for other things (even if it doesn't get used for IPv6), such as ECC public keys.
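A quick sketch of the URL-safe variant (without the proposed error-correction bits or the '~' run-compression) using the stdlib:

```python
import base64

# 16 address bytes -> 22 URL-safe base64 characters: '-' and '_'
# replace '+' and '/', and the two '=' padding chars are dropped.
addr = bytes.fromhex("deadbeef000000000000000000000001")
text = base64.urlsafe_b64encode(addr).rstrip(b"=").decode()
print(text)       # 3q2-7wAAAAAAAAAAAAAAAQ
print(len(text))  # 22
```

Decoding just re-appends the padding: `base64.urlsafe_b64decode(text + "==")` recovers the original 16 bytes.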


My Dad told me once, every good idea has already been thought of by someone else.


Anything that would make addresses case sensitive would I think be a terrible idea.


Aaand that's why we have DNS. Solved decades ago, next.


As someone who's used IPv6 at home as a consumer for probably at least two years, I can tell you I have had to type an IPv6 address into my browser exactly zero times. I think I've maybe had to type one out once when configuring my router (but probably didn't, as I use DHCP for address assignment internally to my home network and to get an external address). I couldn't even tell you what the numeric address format is, because I literally never have to know it.


Not solved on LANs, ad-hoc local networks, virtual networks, or when DNS is down or broken.

This is not really an end-user issue but it is a serious usability issue for IT admins and developers. It's very very common to schlep around raw IPs constantly when messing with networks and I don't see that going away. It's also very important to be able to visually parse IPs when understanding the topology of a network, writing firewall rules or routes, etc.

This is a DX (developer experience) issue more than a UX (user experience) issue.

Edit: three specific problems with DNS:

(1) What happens when things are not configured yet?

(2) OSes are designed to have one DNS server but people belong to many networks either at once (local + VPN + virtual + ...) or serially via mobility. In reality you need many DNS servers, but then how do you deal with naming conflicts?

(3) DNS is dependent on IP so you can't use DNS to debug DNS issues. It's a circular dependency.

In practice IP schlep is very common.


Hostname discovery (mdns), etc solves it on local networks. If DNS is down or broken, it would be difficult for me to "use the internet", and I'd have to copy and paste a lot anyways.


So, run yet another service with a poorly thought out RFC and add another code base to my vulnerability monitoring?


At home/small business scale, yes. At enterprise scale, you shouldn't be relying on IP addresses for clients for anything important.


mDNS is slow and unreliable and on large networks it doesn't scale. I personally don't find it very useful since half the time it barely works even on wired LANs let alone big WiFi or distributed networks. It's also prone to naming conflicts (is this linux-1, linux-2, or linux-3?) and other OS configuration issues and is not secure.


On a network like that, wouldn't DDNS solve the problem?

I mean DDNS where the DHCP server tells the DNS server which IP has what hostname, not e.g. dyndns.


I don't think people trust DDNS enough in corporate networks and any security sensitive network.


> ad-hoc local networks

Actually, that's not a problem there, since the people actually using ad-hoc networks have no idea about IPv4 either and just copy addresses around. You could also create a local address with the fd prefix plus 40 bits, which could be just fdff:ffff:ffff:ffff::1, and that isn't really hard to remember.


Or, you know, when your friend is hosting a StarCraft game over TCP and you have to write their address. People don't have DNS addresses and even if your ISP has a mapping by default (usually to addr-x-y-z-w.isp.com), the OS doesn't show it.

On the other hand, DNS could fix this - just have a TLD of ip6 and have it resolve all the examples in the article. It would require no changes to current software and will work transparently. I.e. you'd enter http://deadbeef.ip6:1234 and when the ip6 TLD servers receive a request for deadbeef.ip6, they will reply with dead:beef:0:0:0:0:0:0. Similarly with deadbeef.1.ip6 and so on. You could easily implement this in the OS too without much hassle and not even need servers on the internet to do it.


This seems like a really cool idea.

http://deadbeef.ip6 would help people start using it.

If local DNS could be set up to auto-translate these names to IPv6, it could help this notation gain traction.


Using a special TLD for IP6 addressing is the sanest proposal I've read here so far. Add the ip4 TLD while we're at it.

I just had this dystopian vision where a non-profit operated .ip6 to work as discussed then went defunct and a domain-grabber (named Network Solutions) bought it. Everybody scrambled to patch their recursive resolvers real fast :-)


If it catches on, I expect it to be built into every dns library so it does not even hit the network. The idea of grabbing the TLD and then redirecting to a middleman that inserts ads sounds quite profitable though...


Hosting Starcraft on a port open to the Internet sounds like a terrible idea. If you actually want to host a public server, then getting a dynamic domain name is not hard. If it's a private game, you're better off using a VPN, over which a broadcast discovery protocol can be used.

In any case, if you really need to, what's the problem with copy-pasting an address?


I don't even get why we have IP addresses anymore. Just use DNS duh.


DNS is centralized; IPs are not. When the USA decides to block or take down your domain name (because they can), all you have left is your IP. They can't take your IP.


Meet https://www.arin.net/ and friends.


Well, one reason not to is limited hardware.

In an embedded system where memory is at a premium and you may have serious constraints on how long things take (e.g. for timing purposes), the ability to hard-code an address or have it entered in some way saves you from having to support an entire DNS layer in that system.


I think the joke is that DNS just hides IP addresses, it doesn't do away with them. No IP address = no DNS.


Finally, thank you. Someone with some sense around here.


Wow!


Most of these issues mentioned are created by trying to treat IPv6 like IPv4 instead of adopting modern techniques for IP address management, automation, named objects in network device configs, etc. I've only been using IPv6 in production for about a year and it's already second nature to me.



the double-click thingy can be remedied on xorg by adding 58:48 to the X resources, e.g:

XTerm*VT100.charClass: 33:48,35:48,37:48,42:48,45-47:48,64:48,95:48,126:48,43:48,58:48


> Then there’s how the :’s are used. For a full-length un-shortened IPv6 address, they are supposed to appear every 16 bits like:

> dead:beef:0000:0000:0000:0000:0000:0001

> I’m sure there was a reason for this choice, but to us after using IPv6 for years it still seems utterly arbitrary.*

If I had to guess, I would say they're there to chunk things up for reading aloud.

"Read me that address off the console."

"Okay, d-e-a-d..."

"Got it."

"...b-e-e-f..."

"Yep."

"...a bunch of zeroes, then 1."

They also make it harder to lose your place when reading it back.


What a bunch of nonsense!

The colon is a non-starter? Not everyone uses a QWERTY layout (the AZERTY layout has direct access to the colon), and just as IPv4 fields can be smart and automatically add a . after 3 characters or on a press of the left arrow (Windows has been doing this for 15+ years), IPv6 fields can automatically add a : when needed.

Omitting leading zeros is not mandatory, you can input all those zeros if you so choose (turns out the author actually does).

Better blobs so double-clicking selects them? Well, maybe try triple-clicking then; in my shell with default settings, double-clicking an IPv6 address already selects it. Also, separating fields of 4 characters improves readability. IPv4 also separates fields, but I don't see the author criticizing that.

Why not re-use the dot from IPv4 notation? Because it would add unnecessary complexity; also, 17 years later is a bit too late to ask for such a drastic change to an established standard.

Lastly, if you find the : unappealing, why not code your tools to show them as . and, while at it, add a layer in your code that translates your preferred way of displaying IPv6 into the actual one?

All I take from this post is that zerotier is probably incompetent, refractory to ipv6 and is certainly whiny about non-issues.


The FIRST problem is that IPv6 wasn't designed to be backwards-compatible with IPv4.

That is the MAIN reason why its deployment and adoption rate has been a long clusterfuck.


The problem isn't that IPv6 isn't backwards compatible. The problem is that IPv4 is not forwards compatible.

Ars Technica has an article about it: http://arstechnica.com/business/2016/01/ipv6-celebrates-its-...


I wonder if it couldn't still be accomplished at this late hour. An RFC to reserve 32 bits of an IPv6 address along with a logical (read: easy to remember) remaining 96 bits might be in order.


In theory - and if I remember correctly - you can embed IPv4 addresses into IPv6 addresses.

The prefix should be all zeroes, so you get something like: ::12.123.99.222 Which is not that hard to remember. :P
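That embedding is still understood by parsers today, e.g. Python's stdlib:

```python
import ipaddress

# IPv4-compatible form: the parser accepts a dotted quad at the end,
# though the canonical rendering goes back to hex groups.
a = ipaddress.IPv6Address("::12.123.99.222")
print(a)              # ::c7b:63de (same 128-bit value, hex-rendered)

# The IPv4-mapped form (::ffff:a.b.c.d) is the one actually used in
# practice, and the embedded IPv4 address can be recovered from it:
m = ipaddress.IPv6Address("::ffff:129.144.52.38")
print(m.ipv4_mapped)  # 129.144.52.38
```

Note the trade-off the thread is circling: the dotted form is easy to remember, but the canonical text representation won't preserve it.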


Could they have modified IPv4 in a backwards-compatible way, allocating more address space in the IP header, for example? And including perhaps a version flag?

In fact I'm not sure why it wasn't done that way to begin with, unless IPv6 fixes a bunch of other problems I was not aware of


Yeah, they really didn't think though the transition when designing IPv4.


The proposal says, "A nice de-facto standard would be to print the dot at the route netmask boundary". This will not work consistently, because one does not (typically) know the netmask of a remote host, so IPv6 addresses will be written differently locally and remotely (e.g., the address of a local DHCPv6 host in a /112 written as 20010DB80000000000000000..1 locally and remotely, perhaps, as 20010DB800000000..1 if SLAAC is incorrectly assumed). Thus they often will not match, complicating, for instance, help desk calls correlating a user's client IP address with server-side logs. One solution is to always do [zero] compression [only] at the longest run of zeroes, which is what the existing IPv6 address syntax does, i.e., consistently canonical behavior. Aside: this work is related to reverse-engineering IPv6 netmasks remotely: http://conferences2.sigcomm.org/imc/2015/papers/p509.pdf


Thank you! Finally, someone else is saying what I've been thinking.


Making things worse are vanity IPv6 addresses:

     2001:4b10:bbc::1

     2a03:2880:2110:df07:face:b00c:0:1


    www.sprint.net has address 208.24.22.50
    www.sprint.net has IPv6 address 2600::


Sheesh. Best I recall, they chose this pattern because it matched the notation used for MAC addresses. And an IPv6 network can in essence self-assemble by using said MAC as the basis for the IPv6 address.

Edit:

BTW, don't most home routers etc take a hostname and add it to a .local DNS domain stored on the router?


I've mostly avoided IPv6 because AWS uses IPv4 and it works fine.

But yeah, whenever I see an IPv6-format address, it takes way too long to parse it out. Unless you were a network engineer at some point, it's not going to become second nature any time soon.


I don't see any of it as an issue.

String representations of IPv4 addresses aren't all of equal length either.

IPv6 can't be shortened into, for example, dead.beef.de, because it's ambiguous as to whether that would be a domain name or an IPv6 address. Likewise, other suggestions make it ambiguous with an IPv4 address, or even if not technically ambiguous, likely to break some existing code.

Raw IP's aren't exposed to the masses often anyway, so the bulk of the downsides of the current compromise should be constrained to just technical people. They will just have to figure it out.


I deal with IPv6 day in and day out and I don't share the same confusion and annoyances the author does. Sure, there's a learning hurdle, but once you're over it, it's fairly smooth.


The proposal doesn't mention it, but part of the existing IPv6 address syntax is that, for transition mechanisms, it offers the option to embed a dotted format IPv4 address syntax in the end of an IPv6 address: https://tools.ietf.org/html/rfc4291#section-2.2

Examples: 2001:DB8::13.1.68.3 and ::FFFF:129.144.52.38

In the suggested format, we'd lose this convenient transition feature and are left with: 20010DB8..d014403 and ..FFFF81903426

This is less clear than the existing method with the colons and dots.

If one decides, in this proposal, to support trailing IPv4 addresses as is currently supported, e.g., for IPv6-mapped-IPv4 addresses, some pretty ugly things happen: 20010DB8..13.1.68.3 and ..FFFF129.144.52.38

Is .13.1.68.3 a typo of an IPv4 address with missing whitespace or is it an IPv6 address with an IPv4 address embedded at the end?

And what about parsing "FFFF129"? Are we really going to change base from 16 to 10 between the "F" and the "1"?

Or are we going to introduce another separator, e.g., a leading "." on IPv4 addresses?

..FFFF.129.144.52.38

Or are we going to require that the ".." be the separator there?

0000:0000:0000:0000:0000:FFFF..129.144.52.38

Or are we going to use the existing format for IPv6 addresses that embed IPv4 addresses, and another format only otherwise? (Hint: of course one must support the existing 20-year-old format.)

To add another historic complication for some implementations: https://en.wikipedia.org/wiki/Dot-decimal_notation#IPv4_addr...

a leading zero on an IPv4 address octet meant that the byte is specified in octal.

One can easily argue that the best solution is to use a different special character, e.g., ":", for IPv6 rather than the IPv4 "." because leading zeroes had a special meaning in IPv4 syntax.

All in all, it looks to me like the early IPv6 community thought about the address syntax... a lot.


Actually the IPv6 address allows you to put a lot of useful detail into your address scheme. If this is supposed to be a complaint from a network admin he simply doesn't know how to plan an IPv6 deployment properly.

Edit: he hasn't even mentioned zone IDs represented by a % which would make him even more angry if he had to figure them out

tl;dr use mdns. You should never have to type an IP. Yes the mdns software sucks and has a huge attack surface because it's bloated.


What's the point of using non standard ports with IPv6? If a machine can have a million different IPv6, why would one even bother using a non standard port?


Because we have decades worth of existing software and existing standards which count on there being ports. We have standard ports for different software applications, we have changing port number on the same machine to reach a different service -- if we eliminated ports it wouldn't just replace IP addresses of one type with a new type, it would ALSO change lots of other things from the networking layer all the way up to the application layer.


I am not suggesting eliminating ports. The author describes having to specify a non standard port in the browser as a major annoyance. In a world where you have infinite IPs you would rather use another IP than listen to https on a non standard port.


Except we are heading in that direction anyway with software containers and efforts to give full networking support to them.

This is not a bad thing either - when all apps act like they're distributed, or could be, we'll get a lot better tooling for actually writing them.


Kind of like switching an addressing format?


One common reason is that binding to small ports requires root. If you are using a nonstandard port you would typically front that with a load balancer on the standard port. But if you are trying to debug the server itself, you might need to type in [abcd::1234]:8443 or something. This is not a concern for end users, but could be for sysadmins.


On Linux, you can use capabilities to avoid running as root. In the case of binding to ports lower than 1024, this would be by enabling the CAP_NET_BIND_SERVICE capability.


You can have millions if you manually configure them, but DHCPv6 and SLAAC will not grab several addresses for you.


That's no different from manually configuring non standard ports.


No, it's really not. You can ask the OS to allocate a new port when you start up a piece of software and then advertise that port via whatever coordination protocol you have.

If you're grabbing an IP address you have to, at least in theory, coordinate with other machines on the network to make sure it's not already taken. It's not the same.


God knows what the standard will be when we start using IPv6, but I guess a datacentre would allocate each server a prefix of a few thousand ips. The server is then free to use any of them without any risk of collision. The point of this vast address space is precisely to enable the infrastructure to be a lot more stateless (as in not worrying about individual IPs).


> You can ask the OS to allocate a new port when you start up a piece of software and then advertise that port via whatever coordination protocol you have.

As long as the coordination protocol isn't "output a wrongly formatted URL to the terminal and have the user cut and paste it into their browser", the coordination protocol should be able to handle it.


Interesting; Mac OS double-click highlighting doesn't actually handle all the examples given in the article. e.g. deadbeef00000000.1 works, but deadbeef.1 doesn't. I guess the first segment needs to contain a digit, which perhaps triggers a mode where the period is interpreted as a decimal point.


"Last but not least, nearly all graphical terminals refuse to highlight IPv6 addresses with a simple double-click. This issue might not have existed in the mid-late 1990s"

Yes, we'd barely introduced fire then, we certainly didn't have the technology to double click to highlight a word...


I don't see any problems with the current IPv6 apart from backward incompatibility with IPv4. If IPv6 is difficult to start with, I would recommend looking at MAC addresses, Wi-Fi addresses, and Bluetooth addresses first, and then you will understand more about IPv6.


As I understand it a lot of work went into making SLAAC in order to overcome the hassle of having to manually handle these IPv6 addresses. The idea is that you shouldn't usually have to type in a full IPv6 address by hand.


Good points, just 25 years too late. This shouldn't be a post in 2016.


Imagine reading IPv4 addresses over a crummy radio on a loud manufacturing plant floor while troubleshooting connectivity issues. Now imagine reading an IPv6 address in the same conditions.


You do what the US Army has been doing on the battlefield for many, many decades: use the NATO phonetic alphabet. https://en.wikipedia.org/wiki/NATO_phonetic_alphabet
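Conveniently, the six hex letters a-f all have distinct NATO words, so a readback helper is trivial (a toy sketch):

```python
# Spell a hex IPv6 address for radio readback: letters get their NATO
# phonetic words, decimal digits are read as-is, colons become "colon".
NATO = {"a": "alfa", "b": "bravo", "c": "charlie",
        "d": "delta", "e": "echo", "f": "foxtrot", ":": "colon"}

def spell(addr: str) -> str:
    return " ".join(NATO.get(ch, ch) for ch in addr.lower())

print(spell("dead:beef::1"))
# delta echo alfa delta colon bravo echo echo foxtrot colon colon 1
```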


An upper estimate of the human population is 7.4 billion. With an average of 5 devices per person, that is 37 billion. In decimal that is 11 digits. In hex (89D5F3200) that is 9 digits.


You can already have dots in IPv6 addresses if it's an IPv4-mapped/compatible address (e.g. ::FFFF:129.144.52.38)


How about base32 with semicolons for separators?

No case issues, semicolons don't appear in dns or ipv4, no shift key required.


People complain too much about anything...


"Yes, this is very likely a pointless bunch of gripes."

The article should have started with this. Could have saved me countless seconds of skimming the article while summarizing in my head "boo hoo, I haven't figured out how to make my workflow any better after 2 years."


You can write everything as you like in DNS. IPv6 is not a problem.


This is satire, right?


If only...


No mention of ipv6buddy.com in the comments is a real shame


> To fix the ambiguity, brackets were introduced

Literals were introduced because an email host is parsed first as a "Domain" (for any non-literal), then as a literal, which defaults to IPv4 ([127.0.0.1]); a literal prefix was later added for IPv6 and any future registered protocol ("[IPv6:::]").

the order for parsing for a URI is:

// host = IP-literal / IPv4address / reg-name

// IP-literal = "[" ( IPv6address / IPvFuture ) "]"

IPv6 just happens to use a colon, which conflicts with the port delimiter in a URI's authority, so it's a literal and not a registered name:

// [ userinfo "@" ] host [ ":" port ]
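Python's `urllib`, for instance, resolves that conflict exactly as the grammar prescribes: the brackets separate the colons inside the IPv6 literal from the single colon that delimits the port.

```python
from urllib.parse import urlsplit

# Brackets disambiguate the address colons from the port colon
u = urlsplit("http://[dead:beef::1]:8080/path")
print(u.hostname)  # -> dead:beef::1
print(u.port)      # -> 8080
```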

> why not re-use the dot from IPv4 notation

because you have conflicts from "0.0 -> 0.0.0.0" to "255.16777215 -> 255.255.255.255"

0-9 conflicts with an IPv4 decimal

a-f conflicts with GTLDs

the only reason your blobs don't have a conflict with an IPv4 Historic is because hexadecimal notation starts with 0x
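Those historic shorthands are still alive in the classic `inet_aton()`, which Python's `socket` module exposes on most platforms: a trailing component fills all the remaining bytes of the address.

```python
import socket

# Legacy "IPv4 Historic" shorthand as parsed by the classic inet_aton():
# the last component is widened to fill the remaining bytes.
print(socket.inet_ntoa(socket.inet_aton("127.1")))         # -> 127.0.0.1
print(socket.inet_ntoa(socket.inet_aton("255.16777215")))  # -> 255.255.255.255
```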

> try double clicking on those

try double clicking on any of these valid characters from "reg-name"

// unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"

// sub-delims = "!" / "$" / "&" / "'" / "(" / ")" / "*" / "+" / "," / ";" / "="

or these from IPvFuture

// IPvFuture = "v" 1*HEXDIG "." 1*( unreserved / sub-delims / ":" )

// unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"

// sub-delims = "!" / "$" / "&" / "'" / "(" / ")" / "*" / "+" / "," / ";" / "="

if you want to develop your own "standard" either use the literal IPvFuture, or use a Registered Name

Non-literal IPv4, IPv4 Historic, and domain names are valid registered names, but domain names aren't even part of the URI standard.

the only reason you would have conflicts with domain names is because they're de facto parsed after an IP, so a double dot would probably be discarded as invalid, which is why punycode exists for unicode

if at that point you didn't have any conflicts it would be a registered name, but you wouldn't have any way to resolve them

Lastly, if you want to fix the non-issue of double-clicking, use a registered name; if you choose to use underscores, you may have conflicts with DNS.

edit: trying to figure out newline parsing


> ipv6 just happens to use a colon which conflicts with the port delimiter from authority in a URI

This is exactly what the article means by

> To fix the ambiguity, brackets were introduced

The addition of brackets disambiguates the grammar.

> the order for parsing for a URI is:

> // host = IP-literal / IPv4address / reg-name

No, that's a part of the grammar; it only means that a host is either an IP-literal, an IPv4address, or a reg-name; it does not imply any sort of ordering to those rules. Normally, the grammar should be unambiguous. Unfortunately here, the grammar for IPv4address and reg-name actually are ambiguous; I'll get to that.

> the only reason you would have conflicts with domain names is because they're de facto parsed after an IP

It's not de facto; it's in the same standard:

> The syntax rule for host is ambiguous because it does not completely distinguish between an IPv4address and a reg-name. In order to disambiguate the syntax, we apply the "first-match-wins" algorithm: If host matches the rule for IPv4address, then it should be considered an IPv4 address literal and not a reg-name.
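That first-match-wins rule can be sketched in a few lines. This is an illustrative classifier, not a full RFC 3986 parser; `ipaddress.IPv4Address` happens to match the strict four-octet `IPv4address` grammar (it rejects shorthands like `127.1`):

```python
import ipaddress

def classify_host(host: str) -> str:
    # First-match-wins (RFC 3986 sec. 3.2.2): an IP-literal is
    # unambiguous thanks to brackets; anything matching IPv4address
    # is an address literal, never a reg-name.
    if host.startswith("[") and host.endswith("]"):
        return "IP-literal"
    try:
        ipaddress.IPv4Address(host)
        return "IPv4address"
    except ValueError:
        return "reg-name"

print(classify_host("127.0.0.1"))        # -> IPv4address
print(classify_host("example.ad"))       # -> reg-name
print(classify_host("[dead:beef::1]"))   # -> IP-literal
```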



