While this is technically true, it doesn't seem like that helpful a response to say "well you were getting benefits, just not this one". Until now you've had to know to ask for parallelism; it should have been there by default, and soon (I hope) it will be.
Oh, I agree 100%. I was just as confused as he was when I first started using Go, and I'm very excited that this change might finally happen.
However, I think calling my response "technically true" is selling Go's concurrency primitives a little short. They enable a really great programming model, which makes it much easier to express certain ideas in code. And I/O-bound programs benefit even if GOMAXPROCS=1.
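A minimal sketch of what I mean (the delays are made-up stand-ins for network or disk waits): even pinned to GOMAXPROCS=1, the goroutines' waits overlap, so the whole thing takes roughly as long as the slowest "request" rather than the sum of all of them.

    package main

    import (
        "fmt"
        "runtime"
        "sync"
        "time"
    )

    func main() {
        runtime.GOMAXPROCS(1) // only one OS thread executes Go code at a time

        // Stand-ins for I/O-bound work (network calls, disk reads); the delays are arbitrary.
        delays := []time.Duration{300 * time.Millisecond, 200 * time.Millisecond, 250 * time.Millisecond}

        start := time.Now()
        var wg sync.WaitGroup
        for i, d := range delays {
            wg.Add(1)
            go func(i int, d time.Duration) {
                defer wg.Done()
                time.Sleep(d) // the goroutine is parked while "waiting on I/O"
                fmt.Printf("request %d done after %v\n", i, d)
            }(i, d)
        }
        wg.Wait()

        // Finishes in ~300ms, not ~750ms: the waits overlap even though
        // nothing ever runs in parallel.
        fmt.Println("total:", time.Since(start).Round(10*time.Millisecond))
    }

Swap the sleeps for real network calls and the same shape holds: the runtime parks goroutines blocked on I/O and runs whichever one is ready.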
I didn't understand that subtlety when I first started with Go, and it seems like the GP commenter (and probably other HN readers) didn't either. "Concurrency is not parallelism" is very helpful in explaining that, which is why I linked to it.
What I do not want is for a programming language to advertise features that are subject to confusion in common parlance and then say, "I did what I said, you just weren't smart enough to understand what I said".
The debate over concurrency vs parallelism hit its peak in the mid 1990s, well before Go came along. I remember having arguments with fellow students (and the occasional professor) at university about how the distinction affects memory allocation (stack vs heap), virtual memory layout (the thread vs coroutine stack being one of the bigger issues; Linux's early introduction of mremap was particularly handy), the OS scheduler, etc. It's not just a Go / language-feature thing.
You aren't wrong, but conflating them does make the topic difficult to discuss. At least within the programming field it's useful to have two terms with distinct meanings. Consider the terms accuracy and precision [0], where again the popular understanding is that they're (essentially) synonymous, but within the scientific and engineering world that uses them the distinction is critical.
EDIT: Verification and validation [1] are another pair of terms where the distinction is important for industry, but popular understanding often mixes them up.
I agree that if you look at it simply as the number of cores used, then the distinction is not very interesting. But a better way to look at it is as follows:
* Parallelism is a feature of the algorithm, chosen to make it run faster. E.g. you can use a parallel algorithm to factor a matrix. From a technical standpoint, the challenge of parallelism is how to deconstruct the problem so that each processing unit will be able to take on a share of the effort -- parallelism implies cooperation.
* Concurrency is a feature of the problem. E.g. many users using Facebook at once. If at any one time you have 10K requests, then your concurrency level is 10K, no matter how many cores are used. The challenge of concurrency is how to allocate computing resources among the competing requests -- concurrency implies competition.
Parallelism is orderly; concurrency is messy. Parallelism is a choice; concurrency isn't.
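To make that concrete, here's a toy Go sketch (the function names and numbers are mine, purely for illustration): the first half deconstructs one job so that cooperating workers each take a share; the second rations a fixed pool of slots among independent, competing requests.

    package main

    import (
        "fmt"
        "sync"
    )

    // Parallelism: one problem, deconstructed so each worker takes a share.
    // (Assumes len(xs) divides evenly by workers, to keep the sketch short.)
    func parallelSum(xs []int, workers int) int {
        var wg sync.WaitGroup
        partial := make([]int, workers)
        chunk := len(xs) / workers
        for w := 0; w < workers; w++ {
            wg.Add(1)
            go func(w int) {
                defer wg.Done()
                for _, x := range xs[w*chunk : (w+1)*chunk] {
                    partial[w] += x // each worker writes only its own slot
                }
            }(w)
        }
        wg.Wait()
        total := 0
        for _, p := range partial {
            total += p
        }
        return total
    }

    // Concurrency: many independent requests competing for limited capacity.
    func serve(requests <-chan int, capacity int) {
        sem := make(chan struct{}, capacity) // at most `capacity` in flight
        var wg sync.WaitGroup
        for req := range requests {
            wg.Add(1)
            sem <- struct{}{} // acquire a slot (the competition happens here)
            go func(req int) {
                defer wg.Done()
                defer func() { <-sem }() // release the slot
                fmt.Println("handling request", req)
            }(req)
        }
        wg.Wait()
    }

    func main() {
        xs := make([]int, 1000)
        for i := range xs {
            xs[i] = i
        }
        fmt.Println("sum:", parallelSum(xs, 4)) // cooperation on one problem

        reqs := make(chan int)
        go func() {
            for i := 0; i < 10; i++ {
                reqs <- i
            }
            close(reqs)
        }()
        serve(reqs, 3) // ten requests competing for three slots
    }

The parallel half only exists because you chose to split the work; the concurrent half has no such choice -- the requests show up whether you like it or not.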
It wasn't very long ago (before the multicore era) that essentially nothing running on personal computers was truly parallel. Yet multithreaded applications (concurrency) worked just fine.
The advantage of making the distinction is that you can discuss the subtly different semantics that often arise in parallel, concurrent, and distributed computing. If you equate them, you can only discuss the topic at a limited level of detail, which makes it harder to understand the subtle differences between different computational models.
Take, for instance, the subtle difference between MIMD and SIMD execution. The former allows for concurrent execution, whereas the latter does not, yet both are parallel execution models. This isn't theoretical either, as GPUs are essentially SIMD machines in their execution.
Interestingly, there is one example of parallelism without concurrency on just one processor and just one core: vector instructions. It would take a breathtakingly good compiler, though, to emit vector instructions that run two goroutines in parallel.
You can still benefit from the "claims of the language", even if code isn't running in parallel.