Goroutines: The concurrency model we wanted all along (jayconrod.com)
159 points by ingve on July 7, 2023 | hide | past | favorite | 142 comments


For a few years, I found myself appreciating Go's design and models more and more. Right now, a bit bored of working in the same paradigm for a decade, I've been trying to find new approaches but find myself rather trapped: after each novel exploration, it turns out the current approach really does seem to work better.

Go's whole concurrency model of Communicating Sequential Processes is cool, although seems to be the aspect I question the most. Erlang's Actor Model does seem a bit superior and Go can introduce footguns at points, but I've yet to really investigate the alternatives as much as I'd like.

Also, Beej's guide to network programming https://beej.us/guide/bgnet/html/ mentioned in the beginning really is quite fantastic.


I don't mean this in a sarcastic way but Go was explicitly designed for the lowest common denominator developer. Kernighan said it himself.

That's the problem with Go in general I think. Much like my experience with Ruby on Rails it seems like once you want to go venture outside the safety of the walled garden you can't. The language is specifically designed to keep you as safe as possible. Great for bottom-tier devs and unmotivated people working on corporate TPS reports. Bad for the motivated engineer. Opportunities like this (to introduce a new idea into the go concurrency ecosystem) are basically impossible.


> once you want to go venture outside the safety of the walled garden you can't. The language is specifically designed to keep you as safe as possible.

> Bad for the motivated engineer.

Sounds to me like it's really just bad for the engineer who wants to show off how smart they are through the overcomplicated custom abstractions they build.

Condescending response to condescending comment aside, I understand that those custom abstractions can occasionally be great and have their place, but in practice their value usually starts reversing (yes, reversing, not diminishing) once you need to onboard many engineers frequently (junior or senior alike), which is the case for a big part of tech projects (think big tech, high growth startups, but also open source projects where you want first-time contributors to be able to dive in and solve their issue quickly).

And I'm saying this as a motivated engineer who loves playing with various languages but uses Go for most serious things.


For serious work, Java has a bigger list of features, both on the primitive side and the abstract, that let you handle a lot more use cases than Go simply because Go doesn't have the tools for them. So long as you stay away from Spring Boot and Hibernate, Java doesn't have to be all about BeanFactories.

(Funny aside - somehow, Google managed to turn Rust into abstraction hell: https://fuchsia.googlesource.com/fuchsia/+/refs/heads/main/s...)


But do you really need those features? Or is it just force of habit?

Learning how to express oneself consistently and with no more fluff than is strictly needed is much harder than people think. I admire people who can solve complex problems by simple means. I'm not terribly impressed by people whose obsessions drive them to write code with low readability.

What features does Java have, and Go lack, which represent a real problem? With emphasis on the word "real".


Being able to tune the JVM properties for GC lets you build a reliable RTOS in Java.

Checked exceptions offer stronger guarantees of error handling.

Generics (something they did add to Go after years) let you build reusable methods without relying on a ton of boilerplate.

None of these features cause the indigestibility of the bloated Java frameworks.


I've written quite a bit of software that has to deliver consistent latency and low jitter in Java and that isn't easy to do. It usually comes with big limitations as to what you can do. It is not just a matter of "tuning" the GC properties. It takes a lot more than that.

Java is no more suited for realtime applications than Go. I work a bit on industrial realtime systems that actually do make certain guarantees, and neither the JVM nor the Go runtime are anywhere near anything you would want to touch with a ten foot pole for those kinds of applications.

And if by RTOS you mean "embedded" then no, Java isn't suitable for that either since the runtime is huge and it typically needs a bit of RAM headroom to work properly.

Checked exceptions only guarantee that someone will have to write code that catches those exceptions. It does not come with any guarantee that they will do anything meaningful with them. You still have to make sure that errors are handled by having code review practices that ensure error handling.

It is no more a guarantee that a checked exception will be handled properly than it is a certainty that errors returned from function calls in Go will be heeded. This is engineering reality vs academic fantasy.

I kind of agree on generics although, as you point out, Go has generics now. That being said, when sensibly applied, you will spend more time using generic types (like collection types etc) than creating them. Generics is something one typically wants to use sparingly since writing generic code is a lot harder than people like to think. I've seen a few projects where the code ended up being extremely hard to reason about because someone went overboard with generics in the design.

I don't know yet if Go's approach to generics is good. I haven't used it enough (I don't need it that often), and it doesn't exactly seem like people are going overboard using it. Yet.

This is also the first time I've had someone make the argument that Go has more boilerplate than Java.


There is actually military hardware out there that runs on the JVM and not Go. That's a far stronger argument for its viability.

>This is also the first time I've had someone make the argument that Go has more boilerplate than Java.

Typical Java code ends up with more boilerplate than Go. Java gives tools to avoid boilerplate, but these are never used applicably in the enterprise environment which has a thirst for standardization.


Actual examples of actual realtime systems where the JVM is used would be a strong argument. Vague references to usage in military hardware isn't.

Neither Java nor Go are languages you'd pick for realtime systems that require sensible guarantees. And why on earth would you use whether they are suitable for RT systems as any sort of metric? It is like judging a spoon by how good it is at being a fork.

> Typical Java code ends up with more boilerplate than Go. Java gives tools to avoid boilerplate, but these are never used applicably in the enterprise environment which has a thirst for standardization.

So you miss generics, even though Go has generics, because it avoids boilerplate, but Java has more boilerplate than Go, so something something enterprise standardization or lack thereof?

Uhm...


Here's a link I found about Java RTOS in military software: https://www.militaryaerospace.com/computers/article/16708321....

>So you miss generics, even though Go has generics, because it avoids boilerplate, but Java has more boilerplate than Go, so something something enterprise standardization or lack thereof?

Java itself forces a limited amount of boilerplate. J2EE and Spring Boot and Hibernate and Swagger force a lot of boilerplate. Using Go itself means writing a lot of boilerplate, regardless of what libraries are included.


"here's a link I found"? So you made a claim, then you googled for things that might support your view, then posted them after skimming them?

I don't think we need to discuss further.


I think my problem really comes down to the fact that experimentation with Go is fundamentally impossible. It's like trying to experiment with another highly opinionated framework.

Its usefulness for quickly producing something that works well is great. The second you want to take some time to explore, it simply punches you in the face and sends you to the back of the line. In some sense, I guess, Go just offends me.


Experiment with what, explore what?

Your comment is very general, and doesn't make much sense - Go is just fine for prototyping.

If you mean PL research and being able to experiment with custom methods of abstraction and DSLs then yes, indeed, there's not much to do. Which, as I mentioned above, is a feature in most cases.


I got downvoted because I guess "experiment" was too loose a term. Indeed, I mean being able to modify the system to suit you. Go, instead, is a system that molds you in its image. Which is why I believe it's the next iteration of Java and not something like C or C++. It's simply not "open" enough to create a novelty that enhances it in a way that Google approves of.


I think you're incorrect.

It's good for the motivated engineer, because instead of running off and creating architecture astronautical wonders of over-engineering, they are getting down to business and making progress to a goal.

Your bottom-tier engineers aren't going to be any better in Go than they are going to be in Java.


> instead of running off and creating architecture astronautical wonders of over-engineering

Maybe in theory, in practice you get people writing their own implementation of generics with codegen and Canadian aboriginal syllabics…


Did that happen more than once?


That particular example of course no, but people writing overengineered beasts to overcome go's limitation happens all the time.

Another famous example is how k8s, dealt with the lack of generics: https://medium.com/@arschles/go-experience-report-generics-i...


> That particular example of course no, but people writing overengineered beasts to overcome go's limitation happens all the time.

"happens"???

Yeah, it happened all the time, but for the last 14 months or so Go has had a serviceable implementation[1] of generics.

So, sure, you could argue that projects used to implement hacky workarounds for generics, but that argument gets less relevant as time goes on.

That argument is irrelevant to anyone starting a project today in Go. Probably also irrelevant to the type of microservices projects that I mostly see in Go, because any hack workaround can be replaced gradually by the language generics.

[1] Unless you are referring to some deficiency of the generics implementation in Go that I am ignorant of.


Generics is just one famous example, but go is so opinionated about everything, that there are countless others, from error handling to package management or concurrency primitives.

It turns out that if you refuse to give the users things you think are heretical, the users will walk around your arbitrary limitations, and the result will just be worse for everyone.


> It turns out that if you refuse to give the users things you think are heretical, the users will walk around your arbitrary limitations, and the result will just be worse for everyone

You are correct, no "but..." qualifier.

It's a trade-off, and the language curators have to balance the frequency of users' hacks against cognitive load for all users.

You have to draw the line somewhere when adding language features, and it is totally reasonable for a language that advertises itself as a low-cognitive-load language to limit itself to a subset of features that are useful for its target users and ignore the rest of PL features.

Just compare how awful it is in other languages to start an async routine and read its results when it is complete, versus Go, where you just read a channel you passed to the routine.

That single thing alone shows that Go's creators want to deliver stuff to users who value producing solutions over language lawyering.

Almost any program you could write in C++ or Rust will be more readable in Go, with little noticeable performance degradation.


> Just compare how awful it is in other languages to start an async routine and read its results when it is complete

Excuse me, what?

“Starting an async routine and getting the result” in languages with async/await is just:

    // in JavaScript
    let result = await doSomething();
    // or in Rust:
    let result = do_something().await;
Please tell me how spawning a goroutine + a channel[1] is more convenient than this. Now start adding a timeout and a cancellation to your goroutine code and see how messy it gets, whereas in Rust it comes for free.

[1] a necessarily bounded one, because Commander Pike said “f*ck you”.


A walled garden prevents you from doing a lot of things, not just over architecting. It inhibits the top tier engineers as much as the bottom tier ones.


> A walled garden prevents you from doing a lot of things, not just over architecting. It inhibits the top tier engineers as much as the bottom tier ones.

Up to a point, depending on the limitation.

After all, writing raw assembly removes almost all limitations and yet people still argue for introducing limitations in higher level languages.

Static typing is a limitation, and yet more people argue for it than against it.

Structured control is a limitation, and yet people prefer that to raw `goto`s.

Lack raw memory access is a limitation, and yet people argue against raw memory access in favour of alternatives (such as byte arrays).

Top-tier engineers don't feel constrained by static typing, structured program controls, raw memory access or any one of the multitudes of limitations that popular languages define as their "walled garden".

There's no clear line marking the boundary between a constraint that is helpful and one that is not. It's more of a gradual spectrum with assembly on one side and Lisp on the other, and all other languages somewhere in between.

I'm largely in agreement that a language that prevents the programmer doing stupid things also prevents them doing clever things, but I feel that certain particular limitations improve the maintainability and readability of code.

Those languages that have a kitchen-sink approach to features are notorious for being difficult to maintain. Most large C++ projects, for example, literally have a defined subset of C++ that they will approve in PRs, and they are still considered difficult to maintain.

IMHO (really emphasising the "humble" in that acronym), and as someone who's playing around designing a language (again!), a good language these days should be defined by what it doesn't let you do, not what it lets you do.

Lets you pick one of three different ways to declare a function? Bad Language! No cookie for you

A function declaration always looks the same no matter where it appears? Good! That's a limitation I actually want!


What you're getting at is the presence and lack of features leads to an idiomatic version of how one should code in a language. Go has very few features, leading to one obvious way to code. Python has some well implemented features and a lot of clunky and badly implemented ones, leading to the same pressure for one way to code. C++ has far too many that overlap in confusing ways, leading to no clear way to code.

The thing about Java is that idiomatic Java should be like the Java standard library, instead of being like J2EE and Spring Boot.


This is a bit like listening to racing drivers blaming their car.

Could you provide us with some concrete examples? At the very least so it is possible to determine if you are trying to do something that Go clearly isn't suited for.


Copy pasting code definitely helps you grow as an engineer.. also, the more often you write err != nil, the better the code will be, right?


I didn't assert either of those things.


Generics, although kept simple, definitely reduced the amount of copy-pasting in Go. I actually think Go's generics system is the best I've seen, because they managed to provide enough features to remove the stupid parts of the language while not changing the general feeling or encouraging devs to build abstraction monsters.


I don't think that it's bad for the motivated engineer; it's just a trade-off between expressiveness and simplicity.

I personally find myself choosing go because I feel I am way more productive with the language. By keeping me constrained, I have less choices to make on how I solve the problem, and focus more on solving the problem.


I agree about the constraints. That was my biggest complaint when I learned Scala. Every single thing I built there were 10 different ways I could do it and it really slowed me down. I'm sure if I stuck with the language I would get a feel for the idiomatic ways over time but... I'd rather just make stuff.


I suspect I would agree with this today, but just for the contrast. At the time (about 12 years ago now I think) coming from Java it felt like someone lifted the ceiling, suddenly I no longer felt like always bumping into the lid of the jar when trying to abstract away repetitive code.


Then go with Java, which is just an all-around better language.


>I don't mean this in a sarcastic way but Go was explicitly designed for the lowest common denominator developer. Kernighan said it himself.

What's the issue with making Go good for the lowest common denominator developer? This is fantastic for businesses as they can get new developers productive within a few days. It's great for the ecosystem because the barrier of entry is low.

Just because it's made simple and easy to pick up doesn't mean you can't write complex software with it. It's a programming language, you can write whatever you want with it. If it makes writing complex software easier, then that's a good thing. The only reason why someone would think that's not a good thing is their ego because $FAVOURITE_LANGUAGE has 10x as many language features and 10x as many ways to do the exact same thing.

What is this walled garden you think exists that prevents motivated engineers from doing good work? I like to think of myself as motivated and Go is the reason for that. It made writing complicated concurrent software so much easier for me, and it made reasoning about them so much easier. I do not understand this criticism of Go.


I can't help but feel that the reason why new developers reach parity with existing ones so quickly is not the productivity of Go, but that the productivity ceiling has been intentionally lowered so that everyone is equally constrained. Having to keep reading reinvented wheels because the language is intentionally poorly expressive is a burden on getting work done. There may be only one way to do a thing, and everyone has to keep doing it, rather than being able to write it once and build an abstraction.

Compare, say, Java 5 code written with raw servlet implementations and JDBC calls versus modern Spring code. The implementation of the former is easier to understand because everything is very explicit, but the intent of the latter is far easier to understand, since the boilerplate is mostly gone, allowing the actual business logic (what you care about 90% of the time) to be obvious. And you can of course see the implementation if you need to.


Every time I hear “our experts’ code can look like our beginners’ code” I cringe, because experts’ time is much more valuable and they should be communicating concisely. A beginner’s job is to become an expert.


This is the central fallacy of developer culture: that complexity is a sign of intelligence and that complex systems are superior.

Simplicity is much harder than complexity. Anyone can add and anyone can subtract recklessly, but it takes genius to subtract without losing too much in the process. The genius is in identifying the essential complexity in a problem and finding ways to dispense with incidental complexity.

I've personally been using Rust more lately for various reasons, but I really do like Go and think its choices are excellent for its niche. In any case simplicity can be practiced in any language by choosing the most straightforward way to achieve things and avoiding over-engineering. Rust has a lot of features and paradigms but I can be productive in it by remembering that just because it's there doesn't mean you always have to use it.


> Rust has a lot of features and paradigms but I can be productive in it by remembering that just because it's there doesn't mean you always have to use it.

The problem with kitchen sink languages is that someone on the team will use a feature you don't know about.

If enough people on a big enough team do that the project inexorably moves towards the most complex representation of any idea.

On solo projects, some of the most complex languages can be used by effectively by a newcomer. On any other project with the same stack, you need to add in significant ramp up time.


The problem with complex languages is that while you can choose to eschew complexity in the code you write, you very often have to deal with the complexity in the libraries you use. So in languages like Rust and C++ you tend to have a high degree of complexity (at least by way of high degrees of abstraction) in the standard library and the wider ecosystem because abstraction (and thus complexity) are valued and idiomatic in their respective ecosystems—it makes it very hard to opt out of. For example, at least for a good while, there wasn’t a great JSON parser library except serde which is immensely complex and I would spend a lot of time troubleshooting errors in the macro expansions for things that would Just Work in Go’s encoding/json (of course, there are plenty of legitimate grievances with encoding/json, but these are mostly orthogonal to the simplicity/complexity discussion).


> This is the central fallacy of developer culture: that complexity is a sign of intelligence and that complex systems are superior

There are two main competing fallacies in developer culture: the first one is that complexity is a good thing, the second one is that complexity can be avoided.

They are both equally wrong. Complexity is a curse, but it's the curse of the real world, and if you try to hide it under a rug and call that “simplicity”, then everybody is gonna have a bad time. This blog post is a good discussion of that topic:

https://fasterthanli.me/articles/i-want-off-mr-golangs-wild-...


People should really watch Hickey's talk: simple is not easy. Plenty (if not most) of problems are complex (otherwise why would we bother with computers?), and complexity can't be reduced just by “taking it apart”. The composition of two functions can have an entirely different complexity than the two have separately.

Abstraction is the only way we can combat complexity (and as mentioned, most of that complexity is essential that can’t be reduced further), so if a given language sucks at it, it sucks.


I agree that there is a lot of essential complexity in the problems we solve with computers, but I disagree that most complexity in software is essential. I think most complexity in software is incidental, existing to allow interoperability with other software or support legacy software and systems.

If one could hypothetically rewrite the whole stack from the ground up with a 100% consistent set of APIs, interoperability standards, protocols, naming conventions, etc. and only one implementation of each core function, there would be an absolutely massive reduction in code size and complexity.

Not that you could do this in the real world since the labor and cognitive load required would be astronomical, but consider it as a thought experiment.

I have wondered if in the long term AI LLMs might be a way out. I wonder if a suitably powerful code-comprehending LLM (much better than current ones) could do what I just described.

"Here is the source code to an entire Linux distribution with all its applications. Re-implement this entire code base in language X while preserving all functionality with a consistent style, minimal code reuse, maximum parsimony, and maximum performance. Omit all functions and abstractions not required to achieve the objective of each software package or interface..."

(Three months and 10,000 GPUs later...)


>Abstraction is the only way we can combat complexity

Abstraction just hides existing complexity and adds another layer of complexity.

Using a FactoryFactoryFactory isn't going to eliminate complexity.


As someone else here said, it may not be Kernighan who said that, but Rob Pike.

And he didn't say lowest common denominator developer, he said something like junior Google developer.

And IIRC he said it somewhere in his blog, called command-central.blogspot.com or some such.


He was also contrasting it with C++ which has many dozens of features and a developer has to understand the interplay between all of them to use the language effectively.


Right, I remember, and in this case, "a developer" applied to not just future junior Googlers, but also him and some of his project mates, and in that same post he said they hated or at least disliked those issues with C++.


Not sure if this post below is the same as that post mentioned above, since it's some years since I first read the above one, but the one below is interesting anyway:

Less is exponentially more:

https://commandcenter.blogspot.com/2012/06/less-is-exponenti...


One thing 4 decades of writing code has taught me is that when experienced programmers get cocky, code quality suffers, because a) cocky programmers tend to not be as smart as they think, yet take risks that will sooner or later trip up even themselves, and b) nobody but you gives a shit how clever your code is; clever code is really just a burden and a liability.

It is not a problem that Go enables more developers to perform well. It is a clear advantage. Thinking it is a problem says something about a lack of maturity.


This is so condescending, but I'm glad you're not pretending to have some other motivation. Go was designed for new devs (per Pike; I don't think Kernighan ever said this) and that is good for all devs, even those with overinflated egos. Making your software easy to understand and think about is good, and effective engineers will get more done working with simple code than with complex code that memorializes their useless brainpower.


Don't shoot the messenger, the condescension is Go and Pike's. He pretty much explicitly said he wrote Go to solve the problem of programmers who he considered too stupid to program anything else properly, and his behaviour (both in personal conduct and decision making) has continued this trend of treating his users like idiot sheep who can't be trusted with any language features much beyond 80s imperative programming. It all feels a bit "old man spiteful at kids these days"


Rob Pike once told me I was a moron (not those exact words, but that was the intent) when I asked a question in a Golang performance Slack about how big a Golang map is in memory (as in the total bytes used by it internally).

He went to a previous commit from the one I had linked, saw my stupid attempt at estimating it, and proceeded to go on a rant in which he simultaneously called me an idiot and pointed to the answers in the runtime code. Pretty entertaining.

Of course, the code was stupid because I wrote it knowingly so, just so I could fill the function with compilable code and go debug something else. I guess not writing a comment is on me for not knowing Rob Pike was going to go out of his way to find it the next day.

Fun day at the office, that one. So yeah, I am not the greatest fan of Go or him either. But that was from way before when I had to write a JSON-LD parsing library and cast stuff to `interface{}` everywhere.

I write personal stuff in Rust now, but would happily do it in any other language with a proper toolchain and type system. Go has only one of those, so I don't dislike it. Just don't really enjoy it.


The Rob Pike quote is accurate and not condescending. If it ruffles your feathers, I think that may have more to do with your ego than with Rob Pike.


That’s a curious assertion in that oft maligned quote.

I agree that it’s not meant to be condescending.

Go is very much AT&T Limbo from plan 9, with loose ends wrapped up and refined on Google’s dime, since Google brought Bell Labs people and their projects to Google, and the quote is therefore a purposefully misleading retrofitting of a history and design predating any concern about noob hires at Google.


p.s. (since i can't edit) Go compiles to native instead of to a vm like Limbo did. evolution of the idea...


There’s a time and place for everything. Not everyone works on a corporate team needing to KISS their software to death because they probably won’t even be working on it 18 months later. Software is a creative expression for some people, and useless brainpower for you. I’m not one to judge.


Speaking of overinflated egos. While there's no question Pike, Thompson, and K are a cut above, I think they made assumptions that engineers cannot be trusted.

Garbage collection is another such example. Compare that to e.g. Rust's model, which hoists the understanding of lifetimes onto the engineer while encumbering the compiler with the dirty work. Whereas Golang asks the engineer to do neither (granted, Golang came later, so maybe they didn't think it was possible).


> I think they made assumptions that engineers cannot be trusted.

I disagree, if you don't think engineers can be trusted, you wouldn't want to trust engineers to remember to close files or connections, you'd have some language feature to do that automatically.

C++ and Rust solved this with RAII and destructors.

In my opinion, this is on the same level as "just remember to check bounds" or "just remember to free memory".


The languages target different use-cases, and Rust's ownership model does indeed slow you down versus not having to think about it at all.

Esp. closures, which Go uses a lot all over the place, are quite cumbersome in Rust.

(I like both languages a lot)


Go doesn’t have immutability or enforced synchronization, and something as simple as a read/write conflict in a map can panic. If your language (like most) has shared mutable state, you must think about ownership, and it’s better for the compiler to check your work reliably.


You must think about synchronization, really, not necessarily ownership.

What you're saying is true, but it's still not an obvious choice - there's still a tradeoff.

The (hypothetical, e.g. Rust) compiler can check only a subset of correct programs, which means that many correct programs can't be successfully checked. This is all good if it only affects edge cases, it's not that good if it affects common cases.

It additionally becomes less of an issue (in Go) if you avoid shared mutable state and use channels for complex cases.

So all in all I agree the compiler checking your synchronization is nice, but it's a tradeoff (at least for now), and with the current state of it I think most projects will be just fine without it. On the other hand, there are projects that should definitely use languages that allow for static analysis like that.


Yeah, I’m thinking of ownership as exclusive access handed off in sync, and disposal as one more duty of the final owner.

I would feel a lot more comfortable with channels if a linter would verify that every message is either mutex protected or a newly created graph (maybe a deep copy) never accessed again by the sender. Go already does some escape analysis to minimize heap allocations, but doesn’t flag a goroutine receiving nested maps or array slices that may be shared.


You’re making an argument in favor of Go: Go only makes you think about ownership in proportion to the amount of shared mutable state in your program. Many Go programs don’t have any shared mutable state and the ones that do often limit this state to a small kernel that manages the ownership details. In Rust, every single program has to deal with ownership, and the burden is proportional to that of the total amount of state in the program (not just the shared mutable bit).

Further still, “it’s better for the compiler to check your work reliably” is limited to the shared mutable state that is completely in the purview of a single process—if you have a networked resource or a file that is accessed by multiple processes, rustc won’t save you. In Go’s niche (web services, daemons, etc), this is the overwhelming majority of all state.


> networked resource or a file that is accessed by multiple processes

Leases or transactions save us in that case. Transactional memory has been offered, but hasn’t gone mainstream.


Yes, there are solutions to those problems, but the point is that rustc doesn’t force you to use them. For example, rustc will happily let you write to a file without first ensuring that the process has exclusive write access to the file. It is no safer than Go in this regard.


I'm not sure what you mean by closures being cumbersome in rust.

Rust uses closures very heavily, for things like iterators, spawning threads, async tasks, and lots of other things. I've never felt them to be cumbersome at all.

Actually, for me, closures in rust are easier to use than in any other language I have used besides for various lisps!

https://doc.rust-lang.org/book/ch13-01-closures.html


When I say closure I mean anonymous functions that capture their environment, esp. in a mutable way.

Those work fine in Rust for simple use-cases, but it gets annoying quickly.

With a garbage collected language like Go, closures can freely capture stuff, use it, and then once no closure (or any other code, fwiw) references it anymore, it gets freed.

Escape analysis and garbage collection make even more complex use cases (like closures that outlive their scope, where lifetimes would be hell) trivial.


This seems like a silly argument. By the same token, we can argue that Rust doesn’t think engineers can be trusted because it pushes a bunch of type and memory safety onto the compiler instead of letting the engineers manage it as their ancestors did. Of course, they were right to do so—we shouldn’t trust engineers with all of these tedious, error-prone processes. And while Rust solves for memory safety with borrow checking, that turns out to be a pretty limited solution from a productivity perspective (or at least that seems to be the majority opinion among people who have extensive experience with both languages) compared to garbage collection—sometimes it’s appropriate to trade productivity for performance/correctness, but many times it isn’t.


Even if they can be trusted (I wouldn't trust myself), GC is a big time saver and I'd say well worth the trade-off. There's far more performance to be had elsewhere (the compiler), but wherever that or manual memory management is needed, there's a language to fit the need.


> I think they made assumptions that engineers cannot be trusted.

Having spent ~2 decades working with engineers, I'm inclined to agree... Past, present, and future self included!


Hot-take from a bottom-tier dev that thinks running off and creating a bunch of bullshit nobody else can support is a good idea^tm.


I wouldn't call myself a bottom tier dev. I've done plenty of Go programming in a corporate-ish environment. It's restrictive, which is nice, because the ceiling for experts is basically on the floor. It's the Ruby on Rails of programming languages. A perfect walled garden, led by a benevolent dictator (Google), who will guide us all to the promised land.

Would I recommend Go to someone as a first language? Absolutely not. Would I professionally recommend Go to a company in order to lower the skill ceiling to the floor, keep seniors+ in check, and keep juniors safe? Yes. That is all it is good at. Everything else it offers another language does better.

I have about a decade and a half of experience as a software engineer. In almost every case, over a long enough time period, a centrally dictated constrained system is almost never ideal. I've ported more companies off Ruby on Rails onto something sensible than I can keep count of. I imagine I will be able to charge $150/hr to do that for companies who bought into Go in about 10 years as well. Once you need to reach outside that box to do a very bespoke task your company needs you're stuck. With RoR I found this to be the case with ActiveRecord. With Golang I imagine it'll either be the concurrency model or the stiff parameters for nearly every aspect of the language. My present company is already at that stage. Their answer was to build around it. So now we have a hodge-podge of tooling in a couple languages and all benefits of a centrally governed language have gone out the window.

These days I generally recommend most people run Python as it seems the most future proof. I've also recommended a few companies move back to Java due to constraints Python could not solve for.


If I could only collaborate with and work on stuff built by better programmers then me, I wouldn't mind grokking whatever language/tooling to do it.

Since I'm probably very average, I'd rather be safe from having to deal with the mess left by below-average work. So for me it goes Go for work, and a lot of Forth, Lisp (Scheme and Racket mostly), Elixir/Erlang on my own time. Whatever doesn't need to touch other people's work can be as malleable as possible. If it gets publicly pawed at, I need carbon steel rails thank you very much.


I don’t think this is a bad thing. Most developers think of themselves as more advanced than they are and end up making a mess trying to do abstraction beyond both their ability to do it well and beyond what the problem requires. And even the truly advanced developers very often prefer simpler tooling because it allows them to concentrate their resources on the complexities of the problem and not those of the tool.

Further, the idea that Go is somehow bad for motivated engineers doesn’t match my experience and it doesn’t explain the proliferation of interesting tools and services (much of the container / devops ecosystem). I think your mistake is assuming that the only kind of motivation/creativity/etc involves tinkering with language runtimes and that someone who is building original things like Docker, Terraform, or Kubernetes must be “unmotivated”.


Pike said that.


The thing I find weird about Erlang's design is that actor mailboxes have no backpressure mechanism, which seems like a pretty big footgun. Go channels have their own quirks, but requiring them to be bounded was a good design choice.


And Elixir introduced GenStage to help with this [0]. The thing that I love about erlang's actor model over Go (that for me a fatal flaw with Go and CSP) is the "spooky action at a distance" issue. It's much easier to reason locally within an erlang project, in my opinion, versus a Golang project, since once a channel is created, it's often very difficult to trace its usage.

[0] https://elixir-lang.org/blog/2016/07/14/announcing-genstage/


BEAM has some backpressure; it used to have more.

If you send to a remote process, and the dist buffer is full, you'll be suspended. (Or you can send with no_suspend, and drop rather than suspend).

When you send to a local process, you need to lock that process's mailbox, which offers a little bit of backpressure if there's contention. Doesn't provide backpressure when the recipient process is just ignoring its mailbox, though.

IIRC, earlier releases, maybe around r12-r16 would penalize processes that sent to a local process with a big mailbox, but I think that was removed because it didn't really work as envisioned, so better not to have the complexity.

There's ways to build systems that are resilient to large mailboxes, but it's not necessarily simple.


The best effort I've seen to get around this is https://github.com/lpgauth/shackle

It was motivated by a need to eliminate OOM errors and supports large scale operation at very low latency.


> Go channels have their own quirks, but requiring them to be bounded was a good design choice.

Making it the default is a good design choice, but mandating them isn't: sometimes you don't want backpressure and buffering is what you want, and with no unbounded channel, you easily end up with an unbounded number of goroutines instead… (it happened to me once earlier this year)


Ponylang has backpressure for its actors.


Haskell threads have some similarities to Goroutines. They are also real cheap and there's no problem creating lots of threads (M:N scheduling behind the scenes). There's also no function color problem with the threads.

There's one special thing that Haskell threads have that I think I've only seen in Erlang: it's safe to kill threads. Things will be cleaned up properly. You can safely expect that Haskell code created by someone else has been written to not go into a bad state if it's suddenly cancelled.

For example, you can write a generic timeout function that works by launching the work in a new thread, then wait N seconds, then kill that thread if it's still running. You don't need to make any kind of signalling thing where you send "please stop" message to a Goroutine.

The article talks about Goroutines in context of serving lots of requests. I like to think Haskell's threads and the fact that you can kill them safely makes it much easier to develop servers or job orchestration with simpler code.


Yes! People malign async exceptions but the fact that you can zap a GHC thread is genius.

I am mostly a Haskeller (10 years now) but I used Go at a startup once. It was shocking how often we leaked goroutines. Whereas in Haskell, if you use async it's hard to.

Haskell exceptions are tricky (and there are plenty of bloggers who make a niche living acting like they're so hard to deal with you need to hire them) but in 2023 it's trivial. Just use the unliftio wrappers and you're golden.


To be fair, there have been plenty of bugs in Haskell libraries wrt exception safety. Cancellation and asynchronous exceptions are actually really hard in practice. But yes, actually having the possibility can make some code a lot easier to write and think about.


the flipside is async exceptions mean you can always kill a thread

if a goroutine doesn't select on a chan, you're screwed

and don't get me started on how bad Go gets in terms of exception safety. I've seen defer go wrong in so many ways. Unrestricted mutability is mostly to blame.


My experience with Goroutines is that eventually you run into the limits of working with them and your code explodes in size. Your

    for item := range channel {
        // code goes here
    }
becomes

    for {
        var item Item
        var ok bool
        select {
        case <-ctx.Done():
            break
        case item, ok = <-channel:
            if !ok {
                break
            }
        }
 
        // code goes here
    }
(If you have a solution to this pattern, please do share! I would _love_ a better way.)


What's the limit here?

That code is written like that because it's the only way to write code like that. You have a channel producing items, it will produce items for as long as the channel is open. You have a context to propagate cancellation.

You code gets bigger but it doesn't explode in size as this boilerplate is static in size regardless of what computation you're doing with those items.

You can improve that code by the way by fixing a bug and reducing the lines.

    L:
        for {
            select {
            case <-ctx.Done():
                break L
            case item, ok := <-channel:
                if !ok {
                    break L
                }

                // code goes here
            }
        }
If you think there's a way to write that code any more concise than that, then there isn't. I don't see how this is an issue. The fact that there's pretty much one way to write this code makes it so much easier to parse when reading code that uses this pattern. To me, that's great.


alternatively:

    var errClosed = errors.New("closed")

    func doAThing[T any](ctx context.Context, c chan T) (T, error) {
        // the generics are not necessary, but they're also not awful for
        // DRYing up this type of a select.
        select {
        case <-ctx.Done():
            var empty T
            return empty, fmt.Errorf("could not doAThing: %w", ctx.Err())
        case item, ok := <-c:
            if !ok {
                var empty T
                return empty, fmt.Errorf("channel of type %T is done: %w", empty, errClosed)
            }
            return item, nil
        }
    }

    // ... and later...
    for {
        item, err := doAThing(ctx, channel)
        if err != nil {
            break
        }
        // code goes here
    }
I dislike labeled loops, naked returns, and else statements, so this is the pattern I use.


No. Just no.


Generics!

    func streamingApply[T any](ctx context.Context, ch <-chan T, fn func(T)) {
        for {
            select {
            case <-ctx.Done():
                return
            case item, ok := <-ch:
                if !ok {
                    return
                }
                fn(item)
            }
        }
    }

For everyone that's complained about generics being introduced to Go... it really does help clean up this type of repetition.


If your problem fits, you can sometimes keep the former version in play with the pipeline pattern, where child goroutines are stopped when the parent goroutine closes the channel in a defer when it's exiting.

That being said, my experience is that anything moderately sophisticated requires the use of for/select. In my opinion, that's fine, because select gives you so much power the somewhat more complex form is worth it.


It does become a channel mess. The trick I use is to code everything using the same patterns (down to the same characters, naming conventions, line breaks, etc.) as much as I can, so I can use the text editor to work on (refactor) that mass. It's definitely the least fun part, and why I'll still use Clojure and its core.async module whenever it makes sense (lots of different data). That way, I macro myself out of that problem.

Now if you cant use another language, change project. Go is not perl nor python, it's a sophisticated C. Use a sophisticated C for the wrong project / product and you'll want to kill yourself.


It's more like a toothless C. In that it knows a bunch of tricks already, you can't teach it anything new, it can't bite you, and it's mostly harmless.


Thumb up to that.


If your loop body is completely CPU-bound, then either adding a select statement, or a separate goroutine that awaits cancellation and closes the channel like collinvandyck76 suggested, will work. But commonly the body will make other IO-bound calls to which the context gets passed, so you can just handle the error.

    for item := range channel {
        _, err := process(ctx, item)
        if err != nil {
            return err
        }
    }


Do you typically need to check both `Done` and `ok`? The latter is only false when the channel is empty and closed, so unless multiple routines are pulling from that channel, you should only need one, should you not?


You absolutely should, because context cancellation does not imply a closed channel and vice versa.


Oh, duh. I shouldn't try to read code past midnight...

Thank you!


IMO JVM + Loom is the better runtime and with Kotlin it has the superior language also.

Before Loom there was often a case to pick Go just for goroutines but that time has passed for me at least.

Go still has its place in my toolbox, namely shuffling bytes from one fd to another when such a task is called for in an environment where the JVM would be too cumbersome, or when I just want redistributable binaries for one reason or another. (Though Rust is increasingly consuming these use cases for me, I really don't like async Rust.)

The call out that Loom doesn't currently do forced preemption/time-slicing is notable though, for now you should make use of platform threads for CPU-heavy parallelism if you have tasks that simply won't yield. The fact you can still just use platform threads whenever you need them largely obviates that from being impactful.


>Before Loom there was often a case to pick Go just for goroutines but that time has passed for me at least

Not true in .Net world as async and Tasks were available since long time ago.


Which have absolutely nothing in common with Loom.


It had the superior language even with Java 1.1.


    Java is conspicuously absent from the list of languages above. Until now, you've had to either spawn an unreasonable number of threads or deal with Java's particular circle of callback hell. Happily, JEP 444 adds virtual threads, which sound a lot like goroutines. Virtual threads are cheap to create. The JVM schedules them onto platform threads (real threads the kernel knows about). There are a fixed number of platform threads, generally one per core. When a virtual thread performs a blocking operation, it releases its platform thread, and the JVM may schedule another virtual thread onto it. Unlike goroutines, virtual thread scheduling is cooperative: a virtual thread doesn't yield to the scheduler until it performs a blocking operation. This means that a tight loop can hold a thread indefinitely. I don't know whether this is an implementation limitation or if there's a deeper issue. Go used to have this problem until fully preemptive scheduling was implemented in 1.14.
TIL, I would've thought from that description that virtual threads and goroutines were identical, but looks like I have an outdated mental model of goroutines from the <= 1.14 era.


Well articulated.

Few people realized what callback hell was. Web devs got lucky they added await to JS.

I always appreciated Go for its coroutines. Does make for simpler code.

You know a language has done well when the biggest consistent argument is that it’s “boring”


I love Go, but I find the following things contradictory in the design.

"Don't communicate by sharing memory, share memory by communicating" from go proverbs[0].

And in the article, "When you create a goroutine, you're essentially allocating a closure and adding it to a queue in the runtime."

Because of this it becomes very easy to share memory by mistake and you end up with bugs. On the other hand Elixir's design makes this kind of thing very hard to do by mistake.

[0] - https://go-proverbs.github.io/


The StreamObserver API came at a time (2015) when it seems liked RxJava was going to take over. That didn't end up happening, but the API is still around. While it is more cumbersome, some things are /impossible/ to do with the Go style blocking. For example, try cancelling out of a Recv() call. The only way is to tear the entire Stream down. Goroutines never successfully married select {} and sync.Cond, or context Cancel. These are needed to successfully back out of a blocking statement. Unfortunately, that can't be done, and a goroutine that blocks is really stuck there. The only saving grace is that goroutines are relatively cheap (2-4K of memory?), and it's okay if a few O(100K) of them get stuck.


As I recall, cancelling out of a Recv() as in a network read can be achieved by setting an expiry time in the past.

i.e. on a net.Conn one can use SetReadDeadline() to unblock/cancel a Read().


It's not without its downsides.

The fact that IO blocks, and also doesn't work with the select statement, means you need to spin up a mess of extra goroutines in some situations. There was some good prior discussion of this here: https://news.ycombinator.com/item?id=13331284


I am still waiting for people to discover Concurrent ML. Whenever pthreads becomes a hassle I reach for CML and it hasn't failed me yet. It is pretty similar to Go's concurrency model, but with dynamic selects that don't suck. And it is over 30 years old.


Thanks for this article and to ingve for submitting it.

Concurrency and async is my favourite topic.

I think event loop programming is very interesting, and Goroutines mean every process is effectively an event loop. So I like server programming. I have an epoll server that multiplexes multiple sockets per thread which is a bit more scalable than a thread per socket. I like nodejs libuv, but I think we can go further.

I wrote a very simple toy lightweight 1:M:N (1 scheduler:M kernel threads:N lightweight threads) thread scheduler in C, Rust and Java.

https://github.com/samsquire/preemptible-thread

It works on the principle that hot loops can be interrupted BY ANOTHER THREAD (the scheduler thread) on a timer by setting a loop to its limit, to give lightweight threads a chance to execute.

What I think I want today though is an extremely rich process/concurrency API that resembles a stream API but for processes. For example, we should be able to create pipelines that can be paused, resumed, forked, merged, drop_while, iterate_until and whatever else would be useful.


Try Elixir?


> but it marks all your memory as copy-on-write. Each write to a copy-on-write page causes a minor page fault, a small delay that's hard to measure

I'm really surprised to hear this called out as problematic. It's actually one of the most efficient aspects of fork() IMHO. COW is almost ideal is it not?

The real problem with fork() is that the entire virtual address space has to be cloned. In practice this means that it's hard to avoid overallocating/overcommitting physical memory, which in Linux anyway leads to OOM of random processes. That is truly awful.

He also goes on to note that threads are lighter than processes, but the author neglects to mention clone() in Linux. I think that an article written in 2023, hailing goroutines as an alternative to multiprocess server architecture, is severely lacking.

> Threads do still have a substantial cost though, and you'll run into scaling problems if you create a new thread for every connection

Depends on the threading model, doesn't it? ref. Solaris

I appreciate this article but it lacks quite a lot of depth. Reads like a first year student's inner monologue. Which is great, but I'm surprised it has gained HN traction.


Async function coloring is a worse disaster than "goto" of the previous generation and go is one of the very few languages that didn't fall into that trap.


I thought this would be pro-implicit green threading, of which only Go and Haskell do successfully to my knowledge. Instead it seems to support async/await, which are very different takes.


I really like this guy's writing style. Very easy to digest, nice short sentences that are easily understood.


It's curious that the article doesn't mention structured concurrency even once, despite it being the model du jour.


What is the benefit of using greenthreads+channels over, say, greenthreads+channels+transactions?


Generics do exist though?


goroutines are just m:n threads with message passing queues. please correct me if I'm missing some obvious detail that makes them unique in any way. in that case, I've been using this model for a couple decades, as they are similar to green threads. They are in fact a wonderful model for concurrency but I think they leave a number of expert-programmer aspects of threads off the table.


The message passing queues following CSP semantics regarding synchronization is the big deal. If you only ever use buffered queues then you're missing half of the story.

Occam is probably the most important totally forgotten thread of programming language development.


The message passing queues in CSP aren't any different from buffered queues, right? I've been asking people this for some time and never have heard a convincing "no, and here's why they are different". There are CSP implementations in C++ and Python and their implementations seem to be buffered queues that block if there are no readers.


CSP doesn’t have queues, only blocking channels. Some implementations may provide them on top, as go does.

I would suggest reviewing the Hoare book on CSP, it’s very readable.


I have read it (along with lots Russ Cox wrote about CSP and Go, such as https://swtch.com/~rsc/thread/) and McIlroy (https://www.cs.dartmouth.edu/~doug/sieve/sieve.pdf) . From what I can tell, CSP channels are semantically equivalent to a particular subtype of queues and use the same OS primitives.

I'm not an OS expert but I've worked with distributed systems and threads for decades, and nobody I've asked has disagreed that Go's implementation of CSP is an M:N thread model with (blocking) queues.


CSP doesn't need queues (buffered channels) as a primitive because you can create them. Here's a Queue in CSP from the CSP book I linked elsewhere:

  BUFFER = P⟨⟩
    where
      P⟨⟩ = left?x → P⟨x⟩
      P⟨x⟩⌢s = (left?y → P⟨x⟩⌢s⌢⟨y⟩
                | right!x → Ps )
If data can be read from left it is and it's enqueued, if data can be written to right (because a reader is available) it dequeues it. But neither the read nor the write block the entire process.

Go's buffered channels act in the same way so end up offering a shortcut versus pure CSP. There are other ways that Go deviates from CSP (shared memory, among others) but with channels it's not deviating, just optimizing an existing behavior.


Not really forgotten because Phil Winterbottom was familiar with it and went on to work on Go.


Can you elaborate or provide sources on what you mean by "CSP semantics" here?


They mean communicating sequential processes. Here's a book on the topic by C.A.R. Hoare: http://www.usingcsp.com/


"Just m:n threads" is serious downplaying, most operating systems and languages that had that retired it eventually. The go implementation of goroutines is really really good.


I'm not saying their implementation isn't good, it is good. And probably message-passing is the right concurrency primitive to expose at a language level for people who want to take advantage of multi-core machines running concurrent code.

I'm not downplaying anything; I'm expressing merely that a title like "Goroutines: The concurrency model we all..." makes it sound like Go invented something new, but afaict, goroutines are just a well-engineered implementation of already well-understood concurrency principles.


100%, as I was reading the article I was remembering the different types of concurrency mechanisms I've used and I think it's helpful to have something like goroutines as a first-class language feature but it's not new in any way.

When it comes to concurrency I think it's good to go through the curriculum:

- main loop + interrupt handler

- processes (fork)

- multi-core concerns

- threads (now is a good time to learn about mutex, semaphore, critical section)

- lightweight threads (fiber, coroutines, green threads): learn how they are different

- async/await statemachines written by the compiler

Ok, now we have the background info to make wise choices and call things by their generalized names to avoid holy wars.


m:n threads is the basis of every non-embedded operating system, so I wouldn't say that model's been retired or is even under plausible threat. Loom and such are refitting Java with it, for example. The only problem with m:n is that it requires a thread-aware runtime to fully implement the abstraction.


Linux (or glibc I guess) and Solaris both rolled back their m:n threading implementations over a decade ago. It's a really hard problem, and you kind of need to own the whole stack. The only other system with a comparable implementation is Erlang.


linux is m:n today and always has been, so I guess I don't know what you're talking about.


IMO thread pools are anti-pattern in several ways. They don't globally load-balance properly and can be exhausted. I think the Go concurrency model with many lightweight threads is really nice. The only issue I have is that race conditions in Go code can violate the type system and cause horror-crashes. I've never used Go in production but have talked to outfits who do and have confirmed that race conditions are a serious issue.


One of my biggest pet peeves are race conditions in tests. They typically only show up when run in CI, and are impossible to reproduce locally. I have a hunch this is because of resource-starved VMs running on over provisioned hardware, which messes with time in unpredictable ways, making tests flaky. Sure, tests shouldn't rely on time, and these failures sometimes point that out for us to fix, but it's very annoying to troubleshoot when you can't reproduce it locally. I've tried everything from running in resource-limited containers to similarly spec'd VMs, and wasn't able to.

We could use a time-faking library, but sometimes this is unavoidable if some dependency, whether from a 3rd party or in stdlib, depends on reliable timing.

I'm wondering how other Go devs approach this.


Have you tried running your tests with the race detector enabled?

https://go.dev/doc/articles/race_detector https://go.dev/blog/race-detector

It only logs when a race condition happens, and it may have a non-negligible impact on performance, but it should print useful info to aid in debugging.


Oh, of course, I always run with -race enabled. But this is what I mean, the races are only detected in CI, not locally. Sometimes the stack traces point to a simple fix in the test itself, but often the issue is much more difficult to troubleshoot, especially since it can't be reproduced locally. It's also frustrating since this behavior is intermittent, and most of the time tests run fine in CI, but there are "troublesome" days when the issues happen much more frequently, which is why I think this is so dependent on the CI infrastructure (GitHub Actions FWIW).

A solution would be to run CI on self-managed nodes that behave more reliably, but that has maintenance costs that we don't want to deal with.


I see. My experience with detecting and fixing races using that tool, evidently applied to much simpler code, was basically getting notified that some variable was being written without synchronization by multiple goroutines, protecting it with a mutex, and being good to go. That's why I assumed linearity between running it and coming to a solution.


Sorry, race conditions? You really have to go out of your way to implement those.


What are the expert-programmer aspects you're thinking of?


Probably that it doesn't mix well when you need to pin work to a native/specific thread. This can matter when that thread is used across runtimes or with single threaded concurrency designs like GUI threads for qt or gtk.



shared memory, https://go.dev/blog/codelab-share although of course go added thread primitives. it's just that the language creators explicitly wanted to steer people away from the "subtle" details of multithreading.



