
I find myself in this weird corner when it comes to async rust.

The guy's got a point in that doing a bunch of Arc, RwLock, and general sharing of state is going to get messy. Especially once you are sprinkling 'static all over the place, it infects everything, much like colored functions. I did this whole thing once back when I was starting off where I would Arc<RwLock> stuff, and try to be smart about borrow lifetimes. Total mess.

But then rust also has channels. When you read about it, it talks about "messages", which to me means little objects. Like a few bytes little. This is the solution, pretty much everything I write now is just a few tasks that service some channels. They look at what's arrived and if there's something to output, they will put a message on the appropriate channel for another task to deal with. No sharing objects or anything. If there's a large object that more than one task needs, either you put it in a task that sends messages containing the relevant query result, or you let each task construct its own copy from the stream of messages.
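
For concreteness, a minimal sketch of that shape using tokio's mpsc channels (the message type, channel size, and task bodies here are invented for illustration):

    use tokio::sync::mpsc;

    #[tokio::main]
    async fn main() {
        let (tx, mut rx) = mpsc::channel::<u32>(64);

        // Producer task: owns its own state, shares nothing.
        tokio::spawn(async move {
            for i in 0..10 {
                tx.send(i).await.unwrap();
            }
        });

        // Consumer task: looks at what's arrived and reacts.
        while let Some(msg) = rx.recv().await {
            println!("got {msg}");
        }
    }

Each task owns its data outright, so no Arc or RwLock appears anywhere.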

And yet I see a heck of a lot of articles about how to Arc or what to do about lifetimes. They seem to be things that the language needs, especially if you are implementing the async runtime, but I don't understand why the average library user needs to focus so much on this.



I find the criticisms a little strange - async doesn’t imply multithreaded, and you don’t need to annotate everything shared with magic keywords if you’re async within the same thread because there’s no sharing. Only one future at a time is running on the thread and they’re within the same context.

When moving between threads I do what you suggest here and use channels to send signals rather than having a lot of shared state. Sometimes there is some crucial global state that’s easier to just directly access, but then I just write a struct that manages all the Arc/RwLock or whatever other exclusive access mechanism I need for the access patterns. From the caller’s point of view everything is just a simple function call. When writing the struct I need to be thoughtful about sharing semantics, but it’s a very small struct and I write it once and move on.
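
A minimal sketch of that kind of wrapper (the struct and its contents are invented for illustration):

    use std::sync::{Arc, RwLock};

    // All of the sharing details live in this one small struct.
    #[derive(Clone)]
    struct SharedConfig {
        inner: Arc<RwLock<String>>,
    }

    impl SharedConfig {
        fn new(value: String) -> Self {
            Self { inner: Arc::new(RwLock::new(value)) }
        }

        // Callers see a plain function call; no locks at the call site.
        fn get(&self) -> String {
            self.inner.read().unwrap().clone()
        }

        fn set(&self, value: String) {
            *self.inner.write().unwrap() = value;
        }
    }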

I also don’t understand their concern about making things Send+Sync. In my experience almost everything is easily Send+Sync, and the things that aren’t either shouldn’t or couldn’t be.

I get that sometimes you just want to wear sweatpants and write code without thought of the details, but most languages that offer that out of the box don’t really offer efficient concurrency and parallelism. And frankly you rarely actually need those things even if the “but it’s cool” itch is driving you. Most of the time a nodejs-esque single threaded async program is entirely sufficient, and a lot of the time async isn’t even necessary or particularly useful. But when you need all these things, you probably need to hike up your sweatpants and write some actual computer code - because microseconds matter, profiled throughput is crucial, and nothing in life that’s complex is easy, and anyone selling you otherwise is lying.


> Sometimes there is some crucial global state that’s easier to just directly access, but then I just write a struct that manages all the Arc/RwLock or whatever other exclusive access mechanism I need for the access patterns. From the caller’s point of view everything is just a simple function call. When writing the struct I need to be thoughtful about sharing semantics, but it’s a very small struct and I write it once and move on.

This is a recurring pattern I've started to notice with Rust: most things that repeatedly feel clunky, or noisy, or arduous, can be wrapped in an abstraction that allows your business logic to come back into focus. I've started to think this mentality is essential to any significant Rust project.


Yeah it was a bit of a block for me as well, I don’t know where it came from, but I resisted wrapping things. Reality is breaking things up into crates is encouraged anyway, and just abstracting complexity away is Not That Hard, and can usually be pretty small and concise to boot.

I think I’m used to other languages providing a lot of these abstractions or having some framework that manages it all. The frameworks in rust tend to be pretty low level (with a few notable exceptions), so perhaps that’s where it comes from.


Well for one, creating abstractions always comes with a tradeoff, so it's good to have some basic skepticism around them. But Rust embraces them, for better and worse. It equips you to write extremely safe and scalable abstractions, but it's also designed in a way that assumes you're going to use those capabilities (mainly, being really low-level and explicit by default), and so you're going to have a harder time if you avoid them.

Another thing, for me, was that I came from mostly writing TypeScript, which is the opposite: the base language is breezy without abstractions, and the type system equips you to strongly-type plain data and language features, so you'll have a great time if you stick to those

But yeah, it's been interesting to see how different the answers to these questions can be in different languages!


Rust embraces abstractions because Rust abstractions are zero-cost. So you can liberally create them and use them without paying a runtime cost.

That makes abstractions far more useful and powerful, since you never need to do a cost-benefit analysis in your head, abstractions are just always a good idea in Rust.


"Zero-cost abstractions" can be a confusing term and it is often misunderstood, but it has a precise meaning. Zero-cost abstractions doesn't mean that using them has no runtime cost, just that the abstraction itself causes no additional runtime cost.

These can also be quite narrow: Rc is a zero-cost abstraction for refcounting with both strong and weak references allocated with the object on the heap. You cannot implement that same thing more efficiently, but you can implement something different but similar that is both faster and lighter than Rc. You can make a CheapRc that only has strong counts, and that will be both lighter and faster by a tiny amount, or a SeparateRc that stores the counts separately on the heap, which offers cheaper conversions to/from Rc.


I am very aware of the definition of zero-cost.

We're talking about the comparison between using an abstraction vs not using an abstraction.

When I said "doesn't have a runtime cost", I meant "the abstraction doesn't have a runtime cost compared to not using the abstraction".

If you want your computer to do anything useful, then you have to write code, and that code has a runtime cost.

That runtime cost is unavoidable, it is a simple necessity of the computer doing useful work, regardless of whether you use an abstraction or not.

Whenever you create or use an abstraction, you do a cost-benefit analysis in your head: "does this abstraction provide enough value to justify the EXTRA cost of the abstraction?"

But if there is no extra cost, then the abstraction is free, it is truly zero cost, because the code needed to be written no matter what, and the abstraction is the same speed as not using the abstraction. So there is no cost-benefit analysis, because the abstraction is always worth it.


The way you used it in your parent comment didn't make it clear that you were using it properly, hence my clarification. I'm honestly still not sure you've got it right, because Rust abstractions, in general, are not zero-cost. Rust has some zero-cost abstractions in the standard library and Rust has made choices, like monomorphization for generics, that make writing zero-cost abstractions easier and more common in the ecosystem. But there's nothing in the language or compiler that forces all abstractions written in Rust to be free of extra runtime costs.


I never said that ALL abstractions in Rust are zero-cost, though the vast majority of them are, and you actually have to explicitly go out of your way to use non-zero-cost abstractions.


Are you sure about that?

>Rust embraces abstractions because Rust abstractions are zero-cost. So you can liberally create them and use them without paying a runtime cost.

>you never need to do a cost-benefit analysis in your head, abstractions are just always a good idea in Rust

Again though, and ignoring that, "zero-cost abstraction" can be very narrow and context specific, so you really don't need to go out of your way to find "costly" abstractions in Rust. As an example, if you have any uses of Rc that don't use weak references, then Rc is not zero-cost for those uses. This is rarely something to bother about, but rarely is not never, and it's going to be more common the more abstractions you roll yourself.


There's always a complexity cost even when there isn't a runtime cost. It just so happens that in Rust, the benefits tend to outweigh the costs


The whole point of an abstraction is to remove complexity for the user.

So I assume you mean "implementation complexity" but that's irrelevant, because that cost only needs to be paid once, and then you put the abstraction into a crate, and then millions of people can benefit from that abstraction.


You've got a very narrow view that I'd encourage you to be more open-minded about

No abstraction is perfect. Every abstraction, when encountered by a user, requires them to ask "what does this do?", because they don't have the implementation in front of their eyes

This may be an easy question to answer- maybe it maps very obviously to a pattern or domain concept they already know, or maybe they've seen this exact abstraction before and just have to recall it

It may be slightly harder- a new but well-documented concept, or a concept that's intuitive but complex, or a concept that's simple but poorly-named

Or it may be very hard- a badly-designed abstraction, or one that's impossible to understand without understanding the entire system

But the simplest, most elegant, most intuitive abstraction in the world has nonzero cognitive cost. We abstract despite the cost, when that cost is smaller than the cost of not abstracting.


Even the costs you are talking about are a one-time cost to read the documentation and learn the abstraction. And the long-term benefits of the abstraction are far greater than the one-time costs. That's why we create abstractions, because they are a net gain. If they were not a net gain, we would simply not create them.


The whole point of abstraction is to replace the need of understanding all the details of the implementation with a more general and simpler concept. So while the abstraction itself may have a non zero cognitive cost for the end user, this cost should be lower than the cognitive cost of the full implementation that the abstraction hides. Hence the net cognitive cost of proper abstraction is negative.

Abstractions allow systems to scale. Without them, it would be impossible to work on a system that's 1M lines of code long, because you'd have to read and understand all 1M lines before doing anything.


> abstractions are just always a good idea

The "zero-cost" phrase is deceptive. There's a non-zero cognitive cost to the author and all subsequent readers. A proliferation of abstractions increases the cost of every other abstraction further due to complex interactions. This is true in all languages where the community has embraced the idea of abstraction without moderation.


Well, the intent of an abstraction is it comes at a non zero cost to the author but a substantial benefit to the user/reader. If it’s a cost to everyone why are you doing it at all?

Rust embraces zero to low cost abstraction at the machine performance level, although to get reflective or runtime adaptive abstractions you end up losing some of that zero cost as you need to start boxing and moving things into heaps and using vtables, etc. IMO this is where rust is weakest and most complex.


> There's a non-zero cognitive cost to the author and all subsequent readers.

No, the cognitive cost of a particular abstraction relative to all other abstractions under consideration can be negative.

The option of not using any abstraction doesn’t exist. If you disagree with that then I think we have to go back one step and ask what an abstraction even is.


It also often makes debugging harder.


> async doesn’t imply multithreaded

Async the keyword doesn’t, but Tokio forces all of your async functions to be multi thread safe. And at the moment, tokio is almost exclusively the only async runtime used today. 95% of async libraries only support tokio. So you’re basically forced to write multi thread safe code even if you’d benefit more from a single thread event loop.

Rust async’s set up is horrid and I wish the community would pivot away to something else like Project Loom.


No, tokio does not require your Futures to be thread-safe.

Every executor (including tokio) provides a `spawn_local` function that spawns Futures on the current thread, so they don't need to be Send:

https://docs.rs/tokio/1.32.0/tokio/task/fn.spawn_local.html
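
A minimal sketch of how that looks (note that tokio's spawn_local must run inside a LocalSet; the Rc payload is just an invented stand-in for !Send data):

    use std::rc::Rc;
    use tokio::task::LocalSet;

    #[tokio::main]
    async fn main() {
        let local = LocalSet::new();
        local.run_until(async {
            // Rc is !Send, so this future couldn't be tokio::spawn'ed,
            // but it can be spawned onto the current thread.
            let data = Rc::new(42);
            tokio::task::spawn_local(async move {
                println!("{data}");
            }).await.unwrap();
        }).await;
    }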

I have used Rust async extensively, and it works great. I consider Rust's Future system to be superior to JS Promises.


So you’re stuck choosing a single CPU or having to write Send and Sync everywhere. There’s a lot of use cases where you would want a thread-per-core model like Glommio to take advantage of multiple cores while still being able to write code like it’s a single thread.

> I have used Rust async extensively, and it works great. I consider Rust's Future system to be superior to JS Promises.

Sure, but it’s a major headache compared to Java VirtualThreads or goroutines


> So you’re stuck choosing a single CPU or having to write Send and Sync everywhere. There’s a lot of use cases where you would want a thread-per-core model like Glommio to take advantage of multiple cores while still being able to write code like it’s a single thread.

thread_local! exists, and you can just call spawn_local on each thread. You can even call spawn_local multiple times on the same thread if you want.

You can have some parts of your programs be multi-threaded, and then other parts of your program can be single-threaded, and the single-threaded and multi-threaded parts can communicate with an async channel...

Rust gives you an exquisite amount of control over your programs, you are not "stuck" or "locked in", you have the flexibility to structure your code however you want, and do async however you want.

You just have to uphold the basic Rust guarantees (no data races, no memory corruption, no undefined behavior, etc.)

The abstractions in Rust are designed to always uphold those guarantees, so it's very easy to do.


> Rust gives you an exquisite amount of control over your programs

It does.

Problem is that there isn't the documentation, examples etc to help navigate the many options.


> So you’re stuck choosing a single CPU or having to write Send and Sync everywhere. There’s a lot of use cases where you would want a thread-per-core model like Glommio to take advantage of multiple cores while still being able to write code like it’s a single thread.

No you're not: you spawn a runtime on each thread and use spawn_local on each runtime. This is how actix-web works, and it uses tokio under the hood.

https://docs.rs/actix-rt/latest/actix_rt/
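
A rough sketch of that pattern with plain tokio (thread count and task bodies invented; actix-rt packages up something similar):

    use std::rc::Rc;
    use tokio::task::LocalSet;

    fn main() {
        // One single-threaded runtime per OS thread, each running !Send tasks.
        let handles: Vec<_> = (0..4).map(|core| {
            std::thread::spawn(move || {
                let rt = tokio::runtime::Builder::new_current_thread()
                    .enable_all()
                    .build()
                    .unwrap();
                let local = LocalSet::new();
                local.spawn_local(async move {
                    let data = Rc::new(core); // !Send data never leaves this thread
                    println!("thread {data}");
                });
                rt.block_on(local); // runs until all local tasks finish
            })
        }).collect();

        for h in handles {
            h.join().unwrap();
        }
    }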


Yea this is exactly what I do. It makes everything much cleaner.


How is the future system superior? Is this a case of the language's type constraints being better vs non-existent? Saying something is superior doesn't really add much.

I am genuinely asking because I have little formal background in CS, so "runtimes" and actual low-level differences between, for instance, async and green threads mystify me. E.g. what makes them actually different from the "runtime" perspective?


Wow, I've been using tokio for years and never knew about this. Thanks!


>but Tokio forces all of your async functions to be multi thread safe

While there are other runtimes that are always single-threaded, you can do it with tokio too. You can use a single-threaded tokio runtime and !Send tasks with LocalSet and spawn_local. There are a few rough edges, and the runtime internally uses atomics where a from-the-ground-up single-threaded runtime wouldn't need them, but it works perfectly fine, and I use single-threaded tokio event loops in my programs because the tokio ecosystem is broader.


So with another async runtime it's possible to write async Rust that doesn't need to be thread-safe??? Can you show some example?


You don't even need other runtimes for this. Tokio includes a single-threaded runtime and tools for dealing with tasks that aren't thread safe, like LocalSet and spawn_local, that don't require the future to be Send.


Every executor (including tokio) supports spawning Futures that aren't Send:

https://docs.rs/tokio/1.32.0/tokio/task/fn.spawn_local.html

There is a lot of misinformation in this thread, with people not knowing what they're talking about.


I really like the message passing paradigm. And languages like Erlang have shown that it's an excellent choice... for distributed systems. But writing code like that is a very different experience from, say, async JavaScript, which feels more like writing synchronous code with green threads (except you have to deal with function coloring as well). I believe people will try to write code in a way that is already familiar to them, leading them down the path of Arc and RwLock in Rust.


> But writing code like that is a very different experience from, say, async JavaScript,

I write a fair amount of code in Elixir professionally and this isn't how I view it.

There are some specific Elixir/Erlang bits of ceremony you need to do to set up your supervision tree of GenServers, but once that's done you get to write code that feels like single-threaded "ignore the rest of the world" code. Some of the function calls you're making might be "send a message and wait for a response" to GenServers etc., but the framework takes care of that.

I wrote some driver code for an NXP tag chip. Driving the inventory process is a bit involved, you have to do a series of things, set up hardware, turn on radio, wait a bit, send data, service the SPI the whole time in parallel. With the right setup for the hardware interface I just wrote the whole thing as a sequence, it was the simplest possible code you could imagine for it. And this at the same time as running a web server, and servicing hardware interrupts that cause it to reload the state of some registers and show them to each connected web session.


Go also uses goroutines and channels to facilitate message passing, or as they describe it, "sharing memory by communicating."

I imagine Rust to be a language far more similar to Go, in both use cases and functionality, than JS.


And in the end, almost everything ends up using Mutex, RWMutex, WaitGroup, Once, and some channels that exist only to ever be closed (like Context.Done), and only if you need to select around them.

It's great, but message passing it is not.


As a quite senior Go developer, I'd like to +1 this a ton. You're far more likely to have shocking edge cases unaccounted for when using channels. I consider every usage very, very carefully. Just like every other language, I think the ultimate solution is to build higher-level abstractions for concurrency patterns (e.g. errgroup) and, now that Go has generics, it's the right time to start building them.

If you haven't seen this paper, I bet you'll find at least one or two new bugs that you didn't know about: https://songlh.github.io/paper/go-study.pdf


The first one is indeed non-obvious, but the remaining snippets presented as bugs would not pass a review unless hidden inside 1k+ LOC PRs. Some are so blatantly obvious (seriously for loop and not passing current value as variable?) that I'm surprised that authors have listed them as if they're somehow special.


> for loop and not passing current value as variable

In most languages, the current for loop value is always accessed as a fresh variable, not a reference. The only languages where that's not the case that I know of are Go and Python (JavaScript used to also have this problem with for(var ...); it was fixed with for(let ...)). So if you don't regularly write Go, it's easy to make this mistake.


I still like channels because they may be a net reduction in the number of concurrency primitives in use, which complicates quantification in the paper - their taxonomy is great, though. Channels have some sharp corners.


Reducing the number of concurrency primitives does not imply reduction in complexity. On the contrary in fact, I've seen the messes created by golang in large production systems. Here's a good article: https://www.uber.com/blog/data-race-patterns-in-go/


If you choose to use Mutex, that's on you.

Rust gives you channels (both synchronous blocking channels and async channels), and they work great, there is nothing stopping you from using them.


I'm pretty sure the gp was talking about Go Mutex, not Rust Mutex.


Ah, my mistake.


Because all languages and developers assume that Erlang is only about message passing. And they completely ignore literally everything else: from immutable structures to the VM providing insane guarantees (like processes not crashing the VM, and monitoring)


> I imagine Rust to be a language far more similar to Go, in both use cases and functionality, than JS.

I mostly agree. But I would wager that for a significant amount of people their first exposure to "async" is JS and not any number of other languages. And when you try to write async Rust the same way as you might write async JS, things just aren't that pretty.


> But then rust also has channels. When you read about it, it talks about "messages", which to me means little objects. Like a few bytes little. This is the solution, pretty much everything I write now is just a few tasks that service some channels. They look at what's arrived and if there's something to output, they will put a message on the appropriate channel for another task to deal with. No sharing objects or anything. If there's a large object that more than one task needs, either you put it in a task that sends messages containing the relevant query result, or you let each task construct its own copy from the stream of messages.

The dream of Smalltalk and true OOP is still alive.


Why does Smalltalk constantly get credit for being true OOP? Simula was doing OOP long before Smalltalk. Most languages choose Simula style OOP, and reject the things that make Smalltalk different.

If you say Smalltalk is better OOP I might agree, but calling it "true" is not correct.


Because the term was coined by Alan Kay, who apparently later said he probably should have called it message oriented (paraphrasing).

There's also a written conversation you can find online where he disqualifies pretty much all of the mainstream languages of being OO.

A lot of people, like you, say that OO == ADTs. Or rather, whatever Simula, C++ and Java are doing. Some will say that inheritance is an integral part of it, others say it's all about interfaces.

But then there's people who say that Scheme and JavaScript are more object oriented than Java and C#. Or that when we're using channels or actors we're now _really_ doing OOP.

There's people who talk about patterns, SOLID, clean code and all sorts of things that you should be adhering to when structuring OO code.

Then there's people who say that OO is all about the mental model of the user and their ability to understand your program in terms of operational semantics. They should be able to understand it to a degree that they can manipulate and extend it themselves.

It's all very confusing.


> Because the term was coined by Alan Kay

This is pretty unlikely. See https://news.ycombinator.com/item?id=36879311.


> The term "object-oriented" was applied to a programming language for the first time in the MIT CSG Memo 137 (April 1976)

That's publications though. Alan Kay says he used it in conversation in 1967: http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay...

There's probably also a distinction to be made between "object-oriented" and "object-oriented programming".


The referenced research also considers the publications by Kay and his team (including his theses and the Smalltalk-72 and 76 manuals) and other uses of the term. I think Kay mixes things up in retrospect; in his 1968 thesis he used the terms "object language" and "object machine", but not "object-oriented"; imagine giving your new breakthrough method a name, but then not using that name anywhere in the publication; that seems unthinkable, especially with an accomplished communicator like Kay. The first time "object-oriented" appears in a publication of his or his team's is in 1978.


> Most languages choose Simula style OOP

Right, including Smalltalk 76 and 80 onwards themselves. Remember Kay's statement "actually I made up the term object-oriented and I can tell you I did not have C++ in mind, so the important thing here is I have many of the same feelings about Smalltalk" (https://www.youtube.com/watch?v=oKg1hTOQXoY&t=636s); the reason he refers to Smalltalk this way in his 1997 talk was likely the fact that Smalltalk-80 has more in common with Simula 67 than his brain child Smalltalk-72. Ingalls explicitly refers to Simula 67 in his 2020 HOPL paper.

> and reject the things that make Smalltalk different

Which would mostly be its dynamic nature (Smalltalk-76 can be called the first dynamic OO language) and the use of runtime constructs instead of dedicated syntax for conditions and loops (as is e.g. the case in Lisp). There are a lot of dynamic OO languages still in use today, e.g. Python. Also Smalltalk-80 descendants are still in use, e.g. Pharo.


Alan Kay is generally credited with coming up with the term "object-oriented", so for better or for worse, many people defer to his definition and his embodiment of ideas when looking for a strict definition of the term.


> many people defer to his definition and his embodiment of ideas when looking for a strict definition of the term.

I consider the definition e.g. used by IEEE as sufficiently strict, see e.g. https://ethw.org/Milestones:Object-Oriented_Programming,_196..., but - as you say - it's not the defintion used by Kay.


Honestly, though, that’s like crediting William Burroughs with Blade Runner.


Erlang often crops up in these conversations.


OOP was a shit idea that needed to die.


There's a difference between Smalltalk's concept of OOP and Java's. I'm not talking about Java's.


I remember picking up this sort of advice from a professor way back in college. It's a godsend. Structure the problem as data flowing between tasks and connect them up with queues, avoid sharing state. It's just a better way to deal with multithreading no matter what language you use.


There is a time and place for sharing state and data. However, it is extremely complex to make that work, and so if at all possible don't. In general the only time I can't use queues is when I'm writing the queue implementation (I've done this several times - turns out there are a number of different special cases in my embedded system where it was worth it to avoid some obscure downside to the queues I already had).

When you need the absolute best performance sharing state is sometimes better - but you need a deep understanding of how your CPUs share state. A mutex or atomic write operation is almost always needed (the exceptions are really weird), and those will kill performance so you better spend a lot of time minimizing where you have them.


I like this too.

I would also suggest looking into ring buffers and the LMAX Disruptor pattern.

There is also Red Planet Labs' Rama, which takes the data flow idea and uses it to scale.


Async is not a solution for data parallelism.


> But then rust also has channels. When you read about it, it talks about "messages", which to me means little objects. Like a few bytes little.

As a wise programmer once said, "Do not communicate by sharing memory; instead, share memory by communicating"


Ooh, that's very zen. Ah crap, I think I have a race condition.


Hoare Was Right.

(But if you're only firing up a few tasks, why not just use threads? To get a nice wrapper around an I/O event loop?)


Exactly. People are too afraid of using threads these days for some perceived cargo-cult scalability reasons. My rule of thumb is just to use threads if the total number of threads per process won't exceed 1000.

(This is assuming you are already switching to communicating using channels or similar abstraction.)


The performance overhead of threads is largely unrelated to how many you have. The thing being minimized with async code is the rate at which you switch between them, because those context switches are expensive. On modern systems there are many common cases where the CPU time required to do the work between a pair of potentially blocking calls is much less than the CPU time required to yield when a blocking call occurs. Consequently, most of your CPU time is spent yielding to another thread. In good async designs, almost no CPU time is spent yielding. Channels will help batch up communication but you still have to context switch to read those channels. This is where thread-per-core software architectures came from; they use channels but they never context switch.

Any software that does a lot of fine-grained concurrent I/O has this issue. Database engines have been fighting this for many years, since they can pervasively block both on I/O and locking for data model concurrency control.


The cost of context switching in "async" code is very rarely smaller than the cost of switching OS threads. (Exception is when you're using a GC language with some sort of global lock.)

"Async" in native code is cargo cult, unless you're trying to run on bare metal without OS support.


The cost of switching goroutines, Rust Futures, Zig async Frames, or fibers/userspace tasks in general is on the order of a few nanoseconds, whereas it's in the microsecond range for OS threads. This allows you to spawn tons of tasks and have them all communicate with each other very quickly (write to queue; push receiver to scheduler runqueue; switch out sender; switch to receiver), whereas doing so with OS threads would never scale (write to queue; syscall to wake receiver; syscall to switch out sender). Any highly concurrent application (think games, simulations, net services) uses userspace/custom task scheduling for similar reasons.


Nodejs is inherently asynchronous, and the JavaScript developers bragged during its peak years about how it was faster than Java for webservers despite only using one core, because a classic JEE servlet container launches a new thread per request. Even if you don't count this as a "context switch" and go for a thread pool, you are deluding yourself, because a thread pool is applying the principles of async with the caveat that tasks you send to the thread pool are not allowed to create tasks of their own.

There is a reason why so many developers have chosen to do application level scheduling: No operating system has exposed viable async primitives to build this on the OS level. OS threads suck so everyone reinvents the wheel. See Java's "virtual threads", Go's goroutines, Erlang's processes, NodeJS async.

You don't seem to be aware of what a context switch on the application level is. It is often as simple as a function call. There is no way that returning to the OS and running a generic scheduler (one that must handle any possible application workload, store all the registers, possibly flush the TLB if the OS makes the mistake of executing a different process first, and then restore all the registers) can be faster than simply calling the next function in the same address space.

Developers of these systems brag about how you can have millions of tasks active at the same time without breaking any sweat.


The challenge is that async colors functions and many of the popular crates will force you to be async, so it isn't always a choice depending on which crates you need.


Please excuse my ignorance, I haven't done a ton of async Rust programming - but if you're trying to call async Rust from sync Rust, can you not just create a task, have that task push a value through a mpsc channel, shove the task on the executor, and wait for the value to be returned? Is the concern that control over the execution of the task is too coarse grained?


Yes, you can do that. You can use `block_on` to convert an async Future into a synchronous blocking call. So it is entirely possible to convert from the async world back into the sync world.
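
For instance, a minimal sketch (fetch_value is an invented placeholder; the futures crate's lightweight executor is used here, though tokio's Runtime::block_on works the same way):

    // Hypothetical async function we want to call from sync code.
    async fn fetch_value() -> u32 { 42 }

    // Synchronous wrapper: blocks the current thread until the Future completes.
    fn fetch_value_blocking() -> u32 {
        futures::executor::block_on(fetch_value())
    }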


But you have to pull in an async runtime to do it. So library authors either have to force everyone to pull in an async runtime or write two versions of their code (sync and async).


There are ways to call both from both for sure, but my point is if you don't want any async in your code at all...that often isn't a choice if you want to use the popular web frameworks for example.


I can't both perform blocking I/O and wait for a cancellation signal from another thread. So I need to use poll(), and async is a nice interface to that.


99% of the use cases that ought to use async are server-side web services. If you're not writing one of those, you almost certainly don't need async.


Or desktop programs. Many GUI frameworks have a main thread that updates the layout (among other things) and various background ones.


Async and GUI threads are different concepts. Of course most GUIs have an event loop which can be used as a form of async, but with async you do your calculations in the main thread, while with GUIs you typically spin your calculations off to a different thread.

Most often when doing async you have a small number of tasks repeated many times, then you spin up one thread per CPU, and "randomly" assign each task as it comes in to a thread.

When doing GUI style programming you have a lot of different tasks and each task is done in exactly one thread.


Hmm I would say the concepts are intertwined. Lots of GUI frameworks use async/await and the GUI thread is just another concurrency pattern that adds lock free thread exclusivity to async tasks that are pinned to a single thread.


Async for GUIs is also nice. Not essential, but it allows you to simplify a lot of callback code.


Note that if you "just" write responses to queries without yielding execution, you don't need async, you just write sync handlers for an async framework. (Hitting DB requests in a synchronous way is not good for your perf though; you'd better have a mostly-read / well-cached problem.)


A particularly interesting use case for async Rust without threads is cooperative scheduling on microcontrollers[1]; this article also does a really good job of explaining some of the complications referenced in TFA.

[1]: https://news.ycombinator.com/item?id=36790238


Waiting asynchronously on multiple channels/signals. Heterogenous select is really nice.
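
For example, a sketch with tokio's select! (the channels and timeout are invented for illustration):

    use tokio::{select, sync::mpsc, time::{sleep, Duration}};

    async fn run(mut events: mpsc::Receiver<String>, mut shutdown: mpsc::Receiver<()>) {
        loop {
            select! {
                // Whichever heterogeneous source is ready first wins.
                Some(ev) = events.recv() => println!("event: {ev}"),
                _ = shutdown.recv() => break,
                _ = sleep(Duration::from_secs(30)) => println!("idle for 30s"),
            }
        }
    }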


It really is, but I still favour "unsexy" manual poll/select code with a lot of if/elseing if it means not having to deal with async.

I fully acknowledge that I'm an "old school" system dev who's coming from the C world and not the JS world, so I probably have a certain bias because of that, but I genuinely can't understand how anybody could look at the mess that's Rust's async and think that it was a good design for a language that already had the reputation of being very complicated to write.

I tried to get it, I really did, but my god what a massive mess that is. And it contaminates everything it touches, too. I really love Rust and I do most of my coding in it these days, but every time I encounter async-heavy Rust code my jaw clenches and my vision blurs.

At least my clunky select "runtime" code can be safely contained in a couple functions while the rest of the code remains blissfully unaware of the magic going on under the hood.

Dear people coming from the JS world: give system threads and channels a try. I swear that a lot of the time it's vastly simpler and more elegant. There are very, very few practical problems where async is clearly superior (although plenty where it's arguably superior).


> but I genuinely can't understand how anybody could look at the mess that's Rust's async and think that it was a good design for a language that already had the reputation of being very complicated to write.

Rust adopted the stackless coroutine model for async tasks based on its constraints, such as having a minimal runtime by default, not requiring heap allocations left and right, and being amenable to aggressive optimizations such as inlining. The function coloring problem ("contamination") is an unfortunate consequence. The Rust devs are currently working on an effects system to fix this. Missing features such as standard async traits, async functions in traits, and executor-agnosticism are also valid complaints. Considering Rust's strict backwards compatibility guarantee, some of these will take a long time.

I like to think of Rust's "async story" as a good analogue to Rust's "story" in general. The Rust devs work hard to deliver backwards compatible, efficient, performant features at the cost of programmer comfort (ballooning complexity, edge cases that don't compile, etc.) and compile time, mainly. Of course, they try to resolve the regressions too, but there's only so much that can be done after the fact. Those are just the tradeoffs the Rust language embodies, and at this point I don't expect anything more or less. I like Rust too, but there are many reasons others may not. The still-developing ecosystem is a prominent one.


I read comments like this and feel like I’m living in some weird parallel universe. The vast majority of Rust I write day in and day out for my job is in an async context. It has some rough edges, but it’s not particularly painful and is often pleasant enough. Certainly better than promises in JS. I have also used system threads, channels, etc., and indeed there are some places where we communicate between long running async tasks with channels, which is nice, and some very simple CLI apps and stuff where we just use system threads rather than pulling in tokio and all that.

Anyway, while I have some issues with async around future composition and closures, I see people with the kind of super strong reaction here and just feel like I must not be seeing something. To me, it solves the job well, is comprehensible and relatively easy to work with, and remains performant at scale without too much fiddling.


Honestly, this is me too. The only thing I’d like to also see is OTP-like supervisors and Trio-like nurseries. They each have their use and they’re totally user land concerns.


> It really is, but I still favour "unsexy" manual poll/select code with a lot of if/elseing if it means not having to deal with async.

> I fully acknowledge that I'm an "old school" system dev who's coming from the C world and not the JS world, so I probably have a certain bias because of that, but I genuinely can't understand how anybody could look at the mess that's Rust's async and think that it was a good design for a language that already had the reputation of being very complicated to write.

I'm in the same "old school" system dev category as you, and I think that modern languages have gone off the deep end, and I complained about async specifically in a recent comment on HN: https://news.ycombinator.com/item?id=37342711

> At least my clunky select "runtime" code can be safely contained in a couple functions while the rest of the code remains blissfully unaware of the magic going on under the hood.

And we could have had that for async as well, if languages were designed by the in-the-trenches industry developer, and not the "I think Haskell and Ocaml is great readability" academic crowd.

With async in particular, the most common implementation is to color the functions by qualifying the specific function as async, which IMO is exactly the wrong way to do it.

The correct way would be for the caller to mark a specific call as async.

IOW, which of the following is clearer to the reader at the point where `foo` is called?

Option 1: color the function

      async function foo () {
         // ...
      }
      ...
      let promise = foo ();
      let bar = await promise;

Option 2: schedule any function

      function foo () {
         // ...
      }

      let sched_id = schedule foo ();

      ...

      let bar = await sched_id;

Option 1 results in compilation errors for code in the call-stack that isn't async, results in needing two different functions (a wrapper for sync execution), and means that async only works for that specific function. Option 2 is more like how humans think - schedule this for later execution, when I'm done with my current job I'll wait for you if you haven't finished.


Isn't mixing async and sync code like this a recipe for deadlocks?

What if your example code is holding onto a thread that foo() is waiting to use?

Said another way, explain how you solved the problems of just synchronously waiting for async. If that just worked then we wouldn't need to proliferate the async/await through the stack.


> Said another way, explain how you solved the problems of just synchronously waiting for async.

Why? It isn't solved for async functions, is it? Just because the async is propagated up the call-stack doesn't mean that the call can't deadlock, does it?

Deadlocks aren't solved for a purely synchronous callstack either - A grabbing a resource, then calling B which calls C which calls A ...

Deadlocks are potentially there whether or not you mix sync/async. All that colored functions will get you is the ability to ignore the deadlock because that entire call-stack is stuck.

> If that just worked then we wouldn't need to proliferate the async/await through the stack.

It's why I called it a leaky abstraction.


Yes actually it is solved. If you stick to async then it cannot deadlock (in this way) because you yield execution to await.


> Yes actually it is solved. If you stick to async then it cannot deadlock (in this way) because you yield execution to await.

Maybe I'm misunderstanding what you are saying. I use the word "_implementation_type_" below to mean "either implemented as option 1 or option 2 from my post above."

With current asynchronous implementations (like JS, Rust, etc), any time you use `await` or similar, that statement may never return due to a deadlock in the callstack (A is awaiting B which is awaiting C which is awaiting A).

And if you never `await`, then deadlocking is irrelevant to the _implementation_type_ anyway.

So I am trying to understand what you mean by "it cannot deadlock in this way" - in what way do you mean? async functions can accidentally await on each other without knowing it, which is the deadlock I am talking about.

I think I might understand better if you gave me an example call-chain that, in option 1, sidesteps the deadlock, and in option 2, deadlocks.


I'm referring to the situation where a synchronous wait consumes the thread pool, preventing any further work.

A is synchronously waiting on B, which is awaiting C, which could complete but never gets scheduled because A is holding onto the only thread. It's a very common situation when you mix sync and async and you're working in a single-threaded context, like UI programming with async. Of course it can also cause starvation and deadlock in a multithreaded context as well, but the single thread makes the pitfall obvious.


That's an implementation problem, not a problem with the concept of asynchronous execution, and it's specifically a problem in only one popular implementation: Javascript in the browser without web-workers.

That's specifically why I called it a Leaky Abstraction in my first post on this: too many people are confusing a particular implementation of asynchronous function calls with the concept of asynchronous function calls.

I'm complaining about how the mainstream languages have implemented async function calls, and how poorly they have done so. Pointing out problems with their implementation doesn't make me rethink my position.


I don't see how it can be an implementation detail when fundamentally you must yield execution when the programmer has asked to retain execution.

Besides JavaScript, it's also a common problem in C# when you force synchronous execution of an async Task. I'm fairly sure it's a problem in any language that would allow an async call to wait for a thread that could be waiting for it.

I really can't imagine how your proposed syntax could work unless the synchronous calls could be pre-empted, in which case, why even have async/await at all?

But I look forward to your implementation.


> I don't see how it can be an implementation detail when fundamentally you must yield execution when the programmer has asked to retain execution.

It's an implementation issue, because "running on only a single thread" is an artificial constraint imposed by the implementation. There is nothing in the concept of async functions, coroutines, etc that has the constraint "must run on the same thread as the sync waiting call".

An "abstraction" isn't really one when it requires knowledge of a particular implementation. Async in JS, Rust, C#, etc all require that the programmer knows how many threads are running at a given time (namely, you need to know that there is only one thread).

> But I look forward to your implementation.

Thank you :-)[1]. I actually am working (when I get the time, here and there) on a language for grug-brained developers like myself.

One implementation of "async without colored functions" I am considering is simply executing all async calls for a particular host thread on a separate dedicated thread that only ever schedules async functions for that host thread. This sidesteps your issue and makes colored functions pointless.

This is one possible way to sidestep the specific example deadlock you brought up. There's probably more.

[1] I'm working on a charitable interpretation of your words, i.e. you really would look forward to an implementation that sidesteps the issues I am whining about.


That sounds interesting indeed.

I think the major disconnect is that I'm mostly familiar with UI and game programming. In these async discussions I see a lot of disregard for the use cases that async C# and JavaScript were built around. These languages have complex thread contexts so it's possible to run continuations on a UI thread or a specific native thread with a bound GL context that can communicate with the GPU.

I suppose supporting this use case is an implementation detail but I would suggest you dig into the challenge. I feel like this is a major friction point with using Go more widely, for example.


> and not the "I think Haskell and Ocaml is great readability" academic crowd.

Actually, Rust could still learn a lot from these languages. In Haskell, one declares the call site as async, rather than the function. OCaml 5 effect handlers would be an especially good fit for Rust and solve the "colouration" problem.


That’s how Haskell async works. You mark the call as async, not the function itself.


I think Rust’s async stuff is a little half baked now but I have hope that it will be improved as time goes on.

In the mean time it is a little annoying to use, but I don’t mind designing against it by default. I feel less architecturally constrained if more syntactically constrained.


I'm curious what things you consider to be half-baked about Rust async.

I've used Rust async extensively for years, and I consider it to be the cleanest and most well designed async system out of any language (and yes, I have used many languages besides Rust).


Async traits come to mind immediately, generally needing more capability to existentially quantify Future types without penalty. Async function types are a mess to write out. More control over heap allocations in async/await futures (we currently have to Box/Pin more often than necessary). Async drop. Better cancellation. Async iteration.


> Async traits come to mind immediately,

I agree that being able to use `async` inside of traits would be very useful, and hopefully we will get it soon.

> generally needing more capability to existentially quantify Future types without penalty

Could you clarify what you mean by that? Both `impl Future` and `dyn Future` exist, do they not work for your use case?

> Async function types are a mess to write out.

Are you talking about this?

    fn foo() -> impl Future<Output = u32>
Or this?

    async fn foo() -> u32

> More control over heap allocations in async/await futures (we currently have to Box/Pin more often than necessary).

I'm curious about your code that needs to extensively Box. In my experience Boxing is normally just done 1 time when spawning the Future.

> Async drop.

That would be useful, but I wouldn't call the lack of it "half-baked", since no other mainstream language has it either. It's just a nice-to-have.

> Better cancellation.

What do you mean by that? All Futures/Streams/etc. support cancellation out of the box, it's just automatic with all Futures/Streams.

If you want really explicit control you can use something like `abortable`, which gives you an AbortHandle, and then you can call `handle.abort()`
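
A small sketch with the futures crate's abortable helper (long_running is an invented placeholder):

    use futures::future::{abortable, Aborted};

    async fn long_running() { /* ... */ }

    async fn demo() {
        let (fut, handle) = abortable(long_running());
        handle.abort(); // can be called from anywhere that holds the handle
        // The wrapped future resolves to Err(Aborted) instead of running on.
        assert!(matches!(fut.await, Err(Aborted)));
    }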

Rust has some of the best cancellation support out of any async language I've used.

> Async iteration.

Nicer syntax for Streams would be cool, but the combinators do a good job already, and StreamExt already has a similar API as Iterator.


Re: existential quantification and async function types

It'd be very nice to be able to use `impl` in more locations, representing a type which need not be known to the user but is constant. This is a common occurrence and may let us write code like `fn foo(f: impl Fn() -> impl Future)` or maybe even eventually syntax sugar like `fn foo(f: impl async Fn())`, which would be ideal.

Re: Boxing

I find that a common technique needed to make abstractions around futures work is to Box::pin things regularly. This isn't always an issue, but it's frequent enough that it's annoying. Moreover, it's not strictly necessary given knowledge of the future type; it's again more a matter of Rust's minimal existential types.
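
For instance, a minimal sketch of the kind of boxing meant here (the alias mirrors the futures crate's BoxFuture; names invented):

    use std::future::Future;
    use std::pin::Pin;

    type BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + 'a>>;

    // The two async blocks have different anonymous types, so returning
    // either one requires erasing the type behind Box::pin.
    fn handler(flag: bool) -> BoxFuture<'static, u32> {
        if flag {
            Box::pin(async { 1u32 })
        } else {
            Box::pin(async { 2u32 })
        }
    }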

Re: async drop and cancellation.

It's not always possible to have good guarantees about the cleanup of resources in async contexts. You can use abort, but that will just cause the next yield point to not return and then the Drops to run. So now you're reliant on Drops working. I usually build in a "kind" shutdown with a timer before aborting in light of this.

C# has a version of this with its CancellationTokens. They're possible to get wrong and it's easy to fail to cancel promptly, but by convention it's also easy to pass a cancellation request and let tasks do resource cleanup before dying.

Re: Async iteration

Nicer syntax is definitely the thing. Futures without async/await also could just be done with combinators, but at the same time it wasn't popular or easy until the syntax was in place. I think there's a lot of leverage in getting good syntax and exploring the space of streams more fully.


> That would be useful, but I wouldn't call the lack of it "half-baked", since no other mainstream language has it either. It's just a nice-to-have.

Golang supports running asynchronous code in defers, as did Zig when it still had async.

Async-drop gets upgraded from a nice-to-have into an efficiency concern as the current scheme of "finish your cancellation in Drop" doesn't support borrowed memory in completion-based APIs like Windows IOCP, Linux io_uring, etc. You have to resort to managed/owned memory to make it work in safe Rust which adds unnecessary inefficiency. The other alternatives are blocking in Drop or some language feature to statically guarantee a Future isn't cancelled once started/initially polled.


> Golang supports running asynchronous code in defers, similar with Zig when it still had async.

So does Rust. You can run async code inside `drop`.


To run async in Drop in rust, you need to use block_on() as you can't natively await (unlike in Go). This is the "blocking on Drop" mentioned and can result in deadlocks if the async logic is waiting on the runtime to advance, but the block_on() is preventing the runtime thread from advancing. Something like `async fn drop(&mut self)` is one way to avoid this if Rust supported it.


You need to `block_on` only if you need to block on async code. But you don't need to block in order to run async code. You can spawn async code without blocking just fine, and there is no risk of deadlocks.


1) That's no longer "running async code in Drop", as it's spawned/detached and can semantically run outside the Drop. This distinction is important for something like `select`, which assumes all cancellation finishes in Drop.

2) This doesn't address the efficiency concern of using borrowed memory in the Future. You have to either reference count or own the memory used by the Future for the "spawn in Drop" scheme to work for cleanup.

3) Even if you define an explicit/custom async destructor, Rust doesn't have a way to syntactically defer its execution like Go and Zig do, so you'd end up having to call it at all exit points, which is error prone like C (it would result in a panic instead of a leak/UB, but that can be equally undesirable).

4) Is there anywhere one can read up on the work being done for async Drop in Rust? I was only able to find this official link, but it seems to still have some unanswered questions: https://rust-lang.github.io/async-fundamentals-initiative/ro...


Now you lose determinism in tear down though.


Ok, in a very, very rare case (so far never happened to me) when I really need to await an async operation in the destructor, I just define an additional async destructor, call it explicitly and await it. Maybe it's not very elegant but gets the job done and is quite readable as well.

And this would be basically what you have to do in Go anyways - you need to explicitly use defer if you want code to run on destruction, with the caveat that in Go nothing stops you from forgetting to call it, whereas in Rust I can at least have a deterministic guard that panics if I forget to call the explicit destructor before the object goes out of scope.
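
A minimal sketch of that explicit-destructor-plus-guard pattern (all names invented):

    struct Connection {
        closed: bool,
    }

    impl Connection {
        fn new() -> Self {
            Self { closed: false }
        }

        // Explicit async "destructor": call `conn.close().await` before the
        // value would go out of scope.
        async fn close(mut self) {
            // ... flush buffers, send a goodbye message, etc.
            self.closed = true;
        } // `self` drops here with the guard satisfied
    }

    impl Drop for Connection {
        fn drop(&mut self) {
            // Deterministic guard: panic loudly if close() was forgotten.
            assert!(self.closed, "Connection dropped without close().await");
        }
    }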

BTW async drop is being worked on in Rust, so in the future this minor annoyance will be gone


Yes I am aware of async drop proposals. And the point is not to handle a single value being dropped but to facilitate invariants during an abrupt tear down. Today, when I am writing a task which needs tear down I need to hand it a way to signal a “nice” shutdown, wait some time, and then hard abort it.


Actually, this "old school" approach is more readable even for folks who have never worked in the low-level C world. At least everything is in front of your eyes and you can follow the logic. Unless code leveraging async is very well-structured, it requires too much brain power to process and understand.


It's great! But there's nothing about it that requires futures.

It really annoys me that something like this isn't built-in: https://github.com/mrkline/channel-drain


That works for channels, but being able to wait on other asynchronous things is better. Timeouts, for instance.

We could imagine extending this to arbitrary poll-able things. And now we have futures, kind of.


> (But if you're only firing up a few tasks, why not just use threads? To get a nice wrapper around an I/O event loop?)

To get easier timers, to make cancellation at all possible (how to cancel a sync I/O operation?), and to write composable code.

There are patterns that become simpler in async code and much more complicated in sync code.


You cancel a sync IO op similarly to how you cancel an async one: have another task (i.e. an OS thread in this case) issue the cancellation. Select semantically spawns a task per case/variant and does something similar under the hood if cancellation is implemented.


You can do that, but then the logic of your cancellable thread gets intermingled with the cancellation logic.

And since the cancellation logic runs on the cancellable thread, you can't really cancel a blocking operation. What you can do is to let it run to completion, check that it was canceled, and discard the value.


Not sure I follow; the cancellation logic is on both threads/tasks: 1) the operation itself waiting for either the result or a cancel notification, and 2) the cancellation thread sending that notification.

The cancellation thread is generally the one doing the `select`, so it spawns the operation thread(s) and waits for (one of) their results (e.g., through a channel/event). The ones that lose the race are sent the cancellation signal and optionally joined if they need to be (e.g., if they use intrusive memory).
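
A sketch of that with plain OS threads (names are illustrative; note the losers only notice cancellation after their blocking call returns, which is the limitation discussed above):

    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::{mpsc, Arc};
    use std::thread;
    use std::time::Duration;

    fn main() {
        let cancel = Arc::new(AtomicBool::new(false));
        let (result_tx, result_rx) = mpsc::channel::<&str>();

        // Two racing operation threads, each watching the cancel flag.
        for (name, delay_ms) in [("fast", 10), ("slow", 1000)] {
            let cancel = Arc::clone(&cancel);
            let tx = result_tx.clone();
            thread::spawn(move || {
                thread::sleep(Duration::from_millis(delay_ms)); // the blocking "operation"
                if !cancel.load(Ordering::Relaxed) {
                    let _ = tx.send(name);
                }
            });
        }
        drop(result_tx);

        // The "select": take the first result, then signal the losers.
        let winner = result_rx.recv().unwrap();
        cancel.store(true, Ordering::Relaxed);
        println!("winner: {winner}");
    }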


He didn't say queues, though. CSP isn't processes streaming data to each other through buffered channels; it's one process synchronously passing one message to another. Whichever one gets to the communication point first waits for the other.


It is both.

Hoare's later paper introduced buffered channels to CSP.

So one can use it for synchronous passing or queued passing.
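
In Rust's std terms, a small sketch of the two styles: a zero-capacity sync_channel gives the classic rendezvous, while a buffered channel gives queued passing.

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        // Rendezvous (classic CSP): capacity 0, send blocks until a recv arrives.
        let (tx, rx) = mpsc::sync_channel::<u32>(0);
        let t = thread::spawn(move || rx.recv().unwrap());
        tx.send(1).unwrap(); // completes only once the receiver is there
        assert_eq!(t.join().unwrap(), 1);

        // Queued passing: sends complete without a waiting receiver.
        let (tx, rx) = mpsc::channel::<u32>();
        tx.send(2).unwrap();
        tx.send(3).unwrap();
        assert_eq!(rx.recv().unwrap(), 2);
    }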


The author does mention that you should probably stop at using threads and passing data around via channels... but then mentions the C10K problem and says that sometimes you need more... but does not answer the question that is begging to be asked: does using Rust async, with all its complications (Arc, cloning, Mutex, whatever), actually outperform threads/channels? Even if it does, by how much? It would be really interesting to know the answer. I have a feeling that threads/channels may be more performant in practice, despite the imagined overhead.


There isn't a good distributed concurrency benchmark in the TechEmpower Web Framework benchmarks, because the Multiple Queries and Fortunes test programs don't use any parallelism or concurrency primitives to win at fast SQL queries. https://www.techempower.com/benchmarks/#section=data-r21&tes...

From https://news.ycombinator.com/item?id=37289579 :

> I haven't checked, but by the end of the day, I doubt eBPF is much slower than select() on a pipe()?

Channels have a per-platform implementation.

- "Patterns of Distributed Systems (2022)" (2023) https://news.ycombinator.com/item?id=36504073


Threads cannot scale at all, because you're limited to the number of threads (which is usually quite small).

Async code can scale essentially infinitely, because it can multiplex thousands of Futures onto a single thread. And you can have millions of Futures multiplexed onto a dozen threads.

This makes async ideal for situations where your program needs to handle a lot of simultaneous I/O operations... such as a web server:

http://aturon.github.io/blog/2016/08/11/futures/

Async wasn't invented for the fun of it, it was invented to solve practical real world problems.
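
A sketch of that multiplexing, assuming tokio (the sleep is a stand-in for per-connection I/O):

    use std::time::Duration;

    #[tokio::main]
    async fn main() {
        // 100,000 concurrent tasks; each is a small state machine,
        // not an OS thread.
        let handles: Vec<_> = (0..100_000u32)
            .map(|i| tokio::spawn(async move {
                tokio::time::sleep(Duration::from_millis(10)).await;
                i
            }))
            .collect();

        for h in handles {
            h.await.unwrap();
        }
    }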


Threads, at least on Linux, are much more lightweight than you seem to think. Async Rust can scale better, of course, but you're exaggerating your case.


If your system cannot be decomposed away from shared mutable state, then you cannot avoid lifetime management and synchronization primitives.

Ultimately, it depends on your data model.


Channels with message passing have been around as a solid way of doing async and multithreading since forever. These systems are called actor-based systems. Erlang, which uses them at its core, is a good example. Then on the JVM there is Akka; Axon is another.


> If there's a large object that more than one task needs, either you put it in a task that sends messages containing the relevant query result, or you let each task construct its own copy from the stream of messages.

When you can guarantee sole ownership, why not put that exclusive pointer in the message? I’d think that this sort of compile-time lock would be an important advantage for the type system. (I think some VMs actually do this sort of thing dynamically, but I can’t quite remember where I read about it.)

On a multiprocessor, there's of course a balance to be struck between the overhead of shuffling the object's data back and forth between CPUs and the overhead of serializing and shuffling the queries and responses to the object's owning thread. But I don't think the latter approach always wins, does it? At least I can't tell why it obviously should.
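
A sketch of the exclusive-pointer version (names are illustrative): moving a Box through a channel transfers ownership, so the "lock" is enforced at compile time and there's no Arc or Mutex in sight.

    use std::sync::mpsc;
    use std::thread;

    struct Large {
        data: Vec<u8>,
    }

    fn main() {
        let (tx, rx) = mpsc::channel::<Box<Large>>();

        let worker = thread::spawn(move || {
            // The receiver now has sole ownership of the object.
            let mut obj = rx.recv().unwrap();
            obj.data.push(42);
            // It could hand ownership onward via another channel.
        });

        let obj = Box::new(Large { data: vec![0; 1024] });
        tx.send(obj).unwrap(); // `obj` is moved; the sender can't touch it again

        worker.join().unwrap();
    }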


That was the argument for Go. But Go is not used that way: people still share and lock stuff in Go. And Go is only safe against race conditions that break the memory model, not against all race conditions the way Rust is.


Rust protects against data races, not race conditions.

https://doc.rust-lang.org/nomicon/races.html


Go doesn't protect against even those, which is what the parent meant.


Go cannot catch data races at compile time (as Rust does), but it can catch a subset of data races at run time with the race detector. Go provides imperfect, opt-in protection.


Technically, you can break Go's memory model via race conditions: write to an interface value on one thread while reading it from another, and you may observe the old vtable pointer with the new data pointer, or the other way around. The same goes for slices with their data/length/capacity fields.


You don't need Rust for that. You can even do it in JavaScript.


how do you do bidirectional channels/rpc?

like “send request to channel A with message 123, make sure to get a response back from channel B exactly for that message”
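
The usual trick (a sketch assuming tokio; names are illustrative) is to skip "channel B" entirely and embed a one-shot reply channel in each request, so the response is tied to exactly that message:

    use tokio::sync::{mpsc, oneshot};

    struct Request {
        payload: u32,
        reply: oneshot::Sender<u32>,
    }

    #[tokio::main]
    async fn main() {
        let (tx, mut rx) = mpsc::channel::<Request>(16);

        // Server task: answer each request on its embedded reply channel.
        tokio::spawn(async move {
            while let Some(req) = rx.recv().await {
                let Request { payload, reply } = req;
                let _ = reply.send(payload * 2);
            }
        });

        // Client: send message 123 and await the response for exactly it.
        let (reply_tx, reply_rx) = oneshot::channel();
        tx.send(Request { payload: 123, reply: reply_tx }).await.unwrap();
        assert_eq!(reply_rx.await.unwrap(), 246);
    }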


this is what Go got right like 10 years ago


In the sense that green threads are easier, sure.

But green threads were not and are not the right solution for Rust, so it's kind of beside the point. Async Rust is difficult, but it will eventually be possible to use Async Rust inside the Linux kernel, which is something you can't do with the Go approach.


Rust Futures are essentially green threads, except much lighter-weight, much faster, and implemented in user space instead of being built into the language.

Basically, Rust Futures are what Go wishes it could have. Rust made the right choice in waiting and spending the time to design async right.


You're overstating your case. Rust's async tasks (based on stackless coroutines) and Go's goroutines (based on stackful coroutines) have important differences. Rust's design introduces function coloring (tentative solution in progress) but is much more suited for the bare-metal scene that C and C++ are famous for. Go's design has more overhead but, by virtue of not having colored functions, is simpler for programmers to write code for. Most things in computer science/programming involve tradeoffs. Also, Rust's async/await is built-in to the language. It's not a library implementation of stackless coroutines.
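
To make "stackless" concrete, a sketch (the `Ready` type is a made-up stand-in): the compiler lowers each async body to a state machine implementing std's Future trait, with one state per await point; a trivial body lowers to a single state.

    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    // Roughly what a trivial `async { 42 }` body becomes: a one-state
    // machine that completes on the first poll.
    struct Ready(Option<u32>);

    impl Future for Ready {
        type Output = u32;

        fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
            Poll::Ready(self.0.take().expect("polled after completion"))
        }
    }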


> Go's design has more overhead but, by virtue of not having colored functions, is simpler for programmers to write code for.

Colored functions are a debatable problem at best. I consider them a feature, not a bug: they make reasoning about programs easier at the expense of writing additional async/await keywords, which is really a very minor annoyance.

On the other hand, Go's need to use channels for trivial and common tasks like communicating the result of an async task, together with the lack of RAII and of proper cleanup signaling in channels (you can very easily deadlock if nothing is attached on the other end of the channel), plus no compile-time race detection: all of that makes writing concurrent code harder.
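
For contrast, the cleanup signaling Rust channels do give you, as a std sketch: dropping the last Sender closes the channel, so the receiving side unblocks with an error instead of deadlocking.

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        let (tx, rx) = mpsc::channel::<u32>();

        let producer = thread::spawn(move || {
            for i in 0..3 {
                tx.send(i).unwrap();
            }
            // `tx` is dropped here, which closes the channel.
        });

        // recv() returns Err once the channel is closed and drained.
        while let Ok(v) = rx.recv() {
            println!("got {v}");
        }
        producer.join().unwrap();
    }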


ok lol


I think they are referring to channels, which came with the tagline "share memory by communicating."


Rust has had channels since before Go was even publicly announced. Remember that Rust, like Go, was inspired by Pike's earlier language Limbo, which uses CSP. https://en.wikipedia.org/wiki/Limbo_(programming_language)


Rust has had OS-thread channels since forever, and async channels for 5 years.

Rust has changed a lot in the past 5 years; people just haven't noticed, so they assume it's still an old, outdated language.


yes


We need a way to bridge the gap. Having a runtime may not be suitable for all apps, but it can easily get you to 95%+ of peak concurrency performance. The async compile-to-state-machine model is only necessary for the last 5%. Most userland apps rarely need to maximize concurrency efficiency. They need concurrency, yes, but performance at the 95th percentile is more than sufficient.


I really don't buy this argument that only some small "special" fraction of apps "actually" need async, and that the rest of us "plebs" should be relegated to blocking.

Async is just hard. That’s it. It’s fundamentally difficult.

In my experience, language implementations of async fall along two axes: clarity and control. C# is straightforward enough (having cribbed its async design from functional languages), but I find it scores low on the "clarity" scale and moderate-high in control, because you could control it, but it wasn't always clear.

JS is moderate-high clarity, low control: easy to understand, because all the knobs are set for you. Before it got async/await sugar, I’d have said it would have been low clarity, because I’ve seen the promise/callback hell people wrote when given rope.

Python is the bottom of the barrel for both clarity and control. It genuinely has to have the most awful and confusing async design I’ve ever seen.

I personally find Rust scores high in both clarity and control. Playing with the Glommio executor, however, was what really solidified my understanding of how async works.


I learned concurrency and parallelism by confronting blocking behavior: waiting on a network or filesystem request stops the world, so we need a new execution context to keep things moving.

What I realized, eventually, is that blocking is a beautiful thing. Embrace the thread of execution going to sleep, as another thread may now execute on the (single core at the time) CPU.

Now you have an organization problem: how to distribute threads across different tasks, some sequential, some parallel, some blocking, some nonblocking. Thread-per-request? Thread-per-connection?

And now a management problem. Spawning threads. Killing threads. Thread pools. Multithreaded logging. Exceptions and error handling.

Totally manageable in mild cases, and big wins in throughput, but scaling limits will present themselves.

I confront many of these tradeoffs in a fun little exercise I call "Miner Mover", implemented in Ruby using many different concurrency primitives here: https://github.com/rickhull/miner_mover


Maybe "add a runtime that switches execution contexts on behalf of the user" and "force the programmer to reimplement everything" are not the only options.


in the sense that sharing memory by communicating is the right approach


Go: it turns out that generics are actually useful.

Rust: it turns out that not every concurrency feature needs to be a zero-cost abstraction.


Except that Rust hasn’t yet realised it


If Rust had gone with green threads as the core async strategy (I know it was a thing pre-1.0), that would have been terrible. You're not understanding Rust's design goals. Rust's async model, while it has several major pain points at present, is still undoubtedly superior for what Rust was made for. It would be a shame to throw all that away. Go can go do its own thing (it has, evidently).



