
Well, actually he says that DNS lookups etc. need to be asynchronous, but aren't Go's channels blocking?

I mean, he also says that Rust has no good epoll/select abstraction, which is wrong. And as for "a welter of half-solutions in third-party crates but describe no consensus about which to adopt": I think he didn't research this well enough.

I mean, it would be the same as saying that there is no good way to create a custom non-blocking DNS server in pure Java/Scala (no external library used). Rust has https://tokio.rs/ and everybody who has even looked into network services with Rust has stumbled across it.

Well, maybe for his goals and contributor base Go is preferred, and that's OK, but this article reads more or less like a rant against Rust: the author simply does not agree with the general design/documentation/whatever of Rust.

In fact I also disagree that the tooling of Rust sucks: Rust is a language that shines with extremely good tooling. It has a godlike package manager, and the tooling keeps getting better and better.

Compare that to Go, where the package management just sucks, though the rest of the tooling, even IDE completion, is extremely good.

P.S.: I would not want a garbage-collected NTP server. But for a lot of things I think Go would be a good fit, even if it would not be my tool of choice.



> has no good epoll/select abstraction, which is wrong

I can't reiterate this enough. Rust has one of the best abstractions I've ever used over epoll/kqueue and MS's variant (IOCP): mio. mio picked not the crappy POSIX select, which every major OS now has a better alternative to, but each OS's preferred event system. The 10-year stability requirement is awkward; I don't think anyone can predict the future, and in 10 years, who knows what will change.

As you mention, tokio is an awesome futures-based abstraction on top of that, making it a breeze to write complex, fast, event-driven logic.

I'm sad that after four days he didn't like Rust; I had a completely different experience. After one day I fell in love with nearly every aspect of the language, and it was specifically because of state machines (which they mention as one of their core desires). My biggest complaint about the language hasn't changed, which is dealing with different Error types, but even that has become easier after a lot more experience with Rust.

Anyway, for NTPSec it's sad that this didn't suit their desires, but that doesn't mean that we can't write our own.


Go's epoll abstraction is baked into the runtime: goroutines. Goroutines blocked on FDs automatically use epoll, and the scheduler acts on its results: https://github.com/golang/go/blob/master/src/runtime/netpoll...


I wasn't commenting on Go, just pointing out the library most people in Rust are using for async I/O.


> My biggest complaint against the language hasn't changed, which is dealing with different Error types

Do you mean syntactic sugar? Because there's try! and ?

Or do you mean dealing with Option and Result in a more uniform way?


So error_chain! is actually the answer to most of my issues with it, but error handling was the largest hurdle for me (coming mainly from Java): you need to convert from one error type to the next up the return stack. It took me a while to get used to, and I'd say it's a large hurdle when learning the language. And while error-chain is great, adopting it early on means being introduced to macros and such.

Mind you, this is my biggest complaint, and it's trivial for me now; but even after you grok the other major features of the language, properly dealing with errors/Results is a hurdle for people new to it.

link: https://docs.rs/error-chain/0.7.2/error_chain/


Best abstraction over epoll/kqueue? Compared to what? For all the things Rust does right, the mio event loop seems lacking compared to boost::asio, for example.


It strikes a good balance of portability and features, and where MIO leaves off, tokio picks up.

When comparing to other libraries in other languages, MIO probably looks like it's devoid of features, but that's because it's trying to be an abstraction layer only, and tokio is probably where most people should turn for higher level semantics...


It might be that mio is a good epoll abstraction, but in Go you don't have to care whether your code blocks on I/O or is computationally intensive and requires a separate, costlier thread. You just spawn a goroutine. Go's runtime does the rest.

And for me, that is the way I want to do concurrency and parallelism for pretty much everything I'll ever write.


> And for me, that is the way I want to do concurrency and parallelism for pretty much everything I'll ever write.

1. You can get exactly this concurrency by spawning one thread per request. The only difference between Go and the pthread model is that Go has a userland scheduler.

2. If I tried to do my parallelism work using this model, the amount of multicore speedup I would get would be zero (actually, it would be negative). Sometimes you're compiling multiple .o files--you get lucky and the overhead of spawning a thread per work item is not restrictive. Sometimes you're shading individual pixels--spawn a goroutine per pixel and you're going to be laughably slow.


  The only difference between Go and the pthread model is that Go has a userland scheduler.

That's a _huge_ difference in the context of a network service, and it also obscures some other details.

If I have several thousand long-polling clients, one kernel thread per client is simply not realistic. It uses up a lot of memory, and the context switching can be costly. Throwing more hardware at the problem is just throwing money down the drain. And if throwing money down the drain is fine, one might as well use one process per connection with blocking I/O.

In order to avoid that in Rust, Node, or most other languages, one needs to use a callback or future. But both a callback and a future require a dynamic allocation and deallocation per invocation of each function that might block. Especially under heavy load (lots of broadcasts), that's a lot when you consider chaining invocations, as you must when composing non-blocking I/O interfaces. Chaining futures hinders compiler optimizations, and in general it hinders function composition. (I commonly see claims that the reactor pattern is better for composition, yet I never see such people writing _all_ their code that way, e.g. for things like string manipulation. In reality most concurrency, even in network services, is effectively sequential within a wider context beyond the immediate operation. Have you ever tried to implement, e.g., a non-blocking MySQL protocol parser with callbacks/futures? It's ugly.)

People underestimate how efficient and useful storing function invocation state on a stack is. Goroutines (or contiguous, dynamically-sizeable stacks like in Lua) give you the best of all worlds--efficiency of a contiguous stack without the high fixed costs. That means performance _and_ scalability for massively concurrent network services, but also for other patterns--like being able to convert a callback-based push interface into a pull interface, without having to refactor the intermediate code, which can be amazingly powerful for things like parsers.

But for a language like Rust it's understandably difficult to implement stackful coroutines, let alone goroutines. I just wish people wouldn't whitewash the issue.


> If I have several thousand long-polling clients, one kernel thread per client is simply not realistic. It uses up alot of memory, and the context switching can be costly.

The memory cost of pthreads that people generally refer to is the stack. Userland scheduling doesn't have anything to do with the stack size. You can get the stack size down very low in a 1:1 thread model: 10kB (or, in future Linux kernels, 6kB). That's comparable to the stack size of a goroutine.

Context switching cost depends on your use case. For I/O, keep in mind that you have a round trip through the kernel either way.

> being able to convert a callback-based push interface into a pull interface, without having to refactor the intermediate code, which can be amazingly powerful for things like parsers.

You can efficiently do that with threads too. In fact, that is exactly how HTML parsing works in every modern browser: the parser runs off a separate thread and forwards DOM create events to the main thread. HTML parsing is about the most performance-critical parsing setting you can think of.


This is a big advantage of Go, you're totally correct, but it does come at a cost: the GC and a runtime. For some this is acceptable; for others it is not. But this is not the reason I personally decided to bet on Rust and not Go, though it was part of it.

The three main reasons Go doesn't fit the work I want to do on the network are all about compile-time guarantees:

1) Strong type checking on null

2) Required handling of errors

3) Generics

These all help me write stable software that I have higher confidence in than what I've written in Go.

In terms of the debate on futures, etc., you're right that it's simpler to write pure stack-based functions, but the overhead of tokio and futures in Rust is not as high as that of a goroutine; it does take more thought, though. It's a trade-off of productivity vs. stability. In your opinion it's ugly and hurts productivity, but to me it is pure elegance with zero overhead at runtime (and if you preallocate an arena/slab it can even have known memory costs). After working with tokio over the last half year or so, I find that I'm as productive as when writing single-threaded blocking I/O code (granted, there was an initial ramp-up before I became as comfortable as I am now).


From https://tokio.rs/blog/tokio-0-1/:

"Announcing Tokio 0.1 ... The 0.1 release is a beta quality release. ... This release also represents a point of relative stability for the library, which has been undergoing frequent breaking changes up until now. While we do intend to eventually publish a 0.2 release with breaking changes ..."

I'm not trying to take a dump on the project (breaking changes are expected and welcome in new projects), but it's obvious that a lot of the Rust ecosystem is in a very immature state. Probably much too immature to base an NTP server on.


Rust has had https://github.com/carllerche/mio for two years now, longer than Rust itself has been stable. tokio is just a higher-level, nicer API on top of it.

ESR knows this. It was there in the comments of his previous issue. Yet he chooses to link to a two-year old issue with one comment that says that Rust doesn't have epoll as a source of truth.

Mio can't compare to the baked-in goroutine goodies (neither can epoll in C!), but this blog post is outright dishonest.


mio is currently at version 0.6.2. A version number with a leading zero usually indicates that the authors deem the software to be of alpha or beta quality. Its API has also evolved significantly in a backwards-incompatible manner recently: https://www.reddit.com/r/rust/comments/50srx0/mio_api_has_ch...

Again, not taking a dump on mio because its authors are doing the right thing. 0.x means bugs and api breaks are to be expected. Use the software at your own risk. Basing an ntp server on it might not be appropriate (atm).


Right, mio isn't marked stable yet, and is changing the API (mostly for tokio), but the project is mature in the sense that you wouldn't expect many bugs in it, and you can pin to a version and just use that for a while if you are worried about API stability.

But yes, the async I/O space in Rust isn't amazing. But it's not as bad a picture as he paints.


Yes, you could, but then you run into the potential problem where dependency A uses version 0.6.x, but dependency B starts using 0.7.x, because there's no stability. It becomes an increasing maintenance nightmare, where every dependency adds new potential headaches because no one's willing to create a version that's supported for more than a few months. Async I/O is as bad as he paints it because every component that you can use isn't stable and has no plans toward stability.


Yeah, I'm aware that's a problem, but it's not a dealbreaking one usually IMO. Especially for libraries like mio and tokio with buy-in from the whole I/O side of the ecosystem.


Why wouldn't I expect many bugs in it? The 0.x version number implies exactly that! In this matter, I'd rather trust the mio developers themselves than you. By holding the version number at 0.x, they are saying "stay away unless you like to tinker", i.e., "don't base your ntpd software on this library".

Pinning a version number is a hack. Not something you should rely on for long-term stability.


So the Rust community in general is pretty reluctant to move crates to 1.0, even when stable/bugfree. This is a problem, and is slowly improving, but that's currently the case.

0.x can mean "Not stable" and it can also mean "buggy", it need not mean both. Regardless of how bug-free the authors consider it to be, they won't mark it 1.0 until they think that they're happy with the API. IIRC they're waiting for tokio to be finished and for it to sit for a while to ensure the API is the right one, so it should be relatively free from bugs.

> Pinning a version number is a hack. Not something you should rely on for long-term stability.

Agreed. I'm saying that this shouldn't be a dealbreaker. It's usually very little work to upgrade to the latest version of a library.

Basically, mio's stability is a problem. Totally agree with that. But it's not as bad a picture as the article paints.


> So the Rust community in general is pretty reluctant to move crates to 1.0, even when stable/bugfree. This is a problem, and is slowly improving, but that's currently the case.

Keeping a package at 0.x means (to paraphrase) "I don't take responsibility. The code may or may not work and if it doesn't, then don't complain." (a) Putting it at 1.x means (to paraphrase) "I (the author) will at least respond to bug reports. I might even try to fix them if I have the time." (b)

Free software authors have this choice.

> so it should be relatively free from bugs.

And if it isn't? For example, I looked at the tracker and found a number of open bugs which looked troublesome for the Windows backend. I think you are doing the mio project a disservice by promising more than the developers themselves have said that they are willing to deliver.


Yes, and while that's what 0.x usually means, in the Rust community (and in other communities) it's usually more complicated than that. For mio in particular it's been a crate that a lot of folks have been using and it's been working fine. I'm not saying that that's the message that the version number broadcasts. You're totally right that it broadcasts a message that the crate is immature. I'm saying that the actual situation is not as bad as it sounds.

Fair point about the bugs :) I wasn't aware of them, and I've only heard really good things about mio.


Being 0.x could also just mean "we're not 100% certain we don't want to change any API yet". Semver is about stability guarantees (promises); bugs are somewhat independent for a widely used and well-maintained library.


I can't find support for that interpretation in the Semantic Versioning spec: "Major version zero (0.y.z) is for initial development. Anything may change at any time. The public API should not be considered stable."

That means 0.x software is in "initial development", implying no guarantee on how many bugs it may contain and no guarantees regarding maintainership. Other parts of the spec deal with backwards-incompatible API changes.

I know where your interpretation comes from. It comes from users of software who believe that a 0.x version can be relied upon, because it works for them and they haven't found any bugs. Software publishers, on the other hand, never make that claim, and users to whom stability is important should stay away from 0.x software.


Agreed that the publishers don't make that claim. However, I know that many publishers are hesitant to go 1.0. Not because they think there are too many bugs, but because they are afraid of committing to not changing the API.


Note that there is nothing at all in the semver spec about bugs.


tokio 0.1 was released concurrently with (or slightly after) the initial investigations behind this post, so missing it/passing over seems reasonable.

That said, as pointed out in the previous discussion about this, C also doesn't have epoll in the core language (it also isn't in the standard library there, either), and Rust can call the C epoll functions directly on the platforms that have it.


It takes a fair bit of effort to call into the epoll system in a sane and safe manner from Rust. It's more than just slapping a thin unsafe-Rust wrapper on the C epoll API; or at least it is in anything but, say, a toy Rust equivalent of the examples in the epoll man page.


That's fair, but the complaint seems to be epoll not being possible at all, rather than it being annoying to call. (Obviously the latter is important, but the former shows a possible deeper misunderstanding of the language.)


It also takes a fair bit of effort to call into epoll from C in a safe manner, though.


Much less than from any FFI.


Go has both buffered channels, which accept sends without blocking until the buffer is full, and unbuffered channels, where every send blocks until a receiver is ready. It's just slightly different syntax (a capacity argument to make) to pick between the two.


> Rust has https://tokio.rs/ and everybody who has even looked into network services with Rust has stumbled across it.

didn't it launch a week ago?


Well, I think he has a point about epoll/select. While I'm not sure how good it is, the tokio library is relatively new: it has only just reached its 0.1 release.

I do think that the majority of the resistance, though, is that the author could not get productive with Rust in 4 days. For that reason, even if the epoll and documentation issues clear up, I don't think the author would ever select Rust, no matter how much time has passed.

Which, as I write this, makes me realize that I think Rust would be the better fit, lack of easy onboarding aside.



