1) Give the illusion that control has been returned to the caller. It hasn't, but it kinda looks like it. This is soothing for people who don't really like JavaScript all that much.
2) Provide a structured way of dealing with callback-oriented code. You don't have to think about whether a function's callback should be the last parameter or the only parameter. You don't have to think about whether it should take a separate errback or whether the error should instead be the first parameter of the callback. You just do it the way the promises library describes, so the interface is predictable.
#1 is of no concern to me. #2 I feel pretty meh about. A consistent interface is nice, but I don't lose sleep over a library using callbacks in a slightly different way than I like; I just write an extra function or two to wrap it into the way I like and move on with my life. I don't have a problem with Promises per se, I just don't want a library I'm using to pick a winner and tack on an extra few k.
Doesn't look like ES6 is going to pick a promises winner, so I probably will largely ignore it until ES7.
I wrote a ton of callback-oriented code, thinking similarly to you. Promises have come a long way since then, and when I tried out Q.js I discovered that I was missing out on a significant and powerful abstraction. The two things you're missing out on here are:
#1 Promises have strong guarantees. A promise will be resolved at most once, and it will be resolved either successfully or unsuccessfully. This makes reasoning about them and composing them together much easier. For example, whenever I need to perform multiple async calls with callbacks, I've always got to write a bit of repetitive, error-prone boilerplate to wait for and combine their results. With a good promise library this is a one-liner using a well-tested library function.
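A minimal sketch with Q, assuming getUser and getPosts are hypothetical promise-returning calls (render and handleError likewise):

  Q.all([getUser(id), getPosts(id)])
    .spread(function (user, posts) {   // Q.all waits for both; .spread unpacks the results
      render(user, posts);
    })
    .fail(handleError);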
#2 Error propagation. With callbacks, most exceptions vanish into the ether, only catchable by window.onerror and friends. Propagating an error up to a caller who can handle it is an enormous pain in the ass and requires repetitive error-handling code everywhere if you want to be resilient. With promises, an exception raised in a handler counts as the resulting promise failing, which can propagate up similarly to the way exceptions propagate with try/catch. Again, this makes writing reliable abstractions so much easier.
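Roughly, with Q (getUser is made up; the point is the propagation):

  getUser(id)
    .then(function (user) {
      return JSON.parse(user.profile);   // may throw
    })
    .then(function (profile) {
      // skipped entirely if the parse above threw
    })
    .fail(function (err) {               // Q's .fail: one handler catches the whole chain
      console.error('something in the chain failed:', err);
    });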
These two points seem to track closely with the original article. They're certainly important characteristics of promises, but the fact that a promise is an object standing for the future, that promises can be composed and their exception handling can be composed, is what speaks to me of greatness. The things you mention are technical bullet points emphasizing components of that greater greatness.
Your casual ambivalence for #1 makes me think you don't appreciate the greatest value of promises: that they are values, objects.
Returning control to the caller is half the story. Along with returning control, which is indeed what happens, no illusion, when the promise wraps a non-blocking routine, the promise itself is returned: that object is what makes promises great.
We have no other object for an execution or process in JavaScript: a promise, the value it returns, is an object. Objects can be used and talked about inside JS. If there is no object, there's nothing to compute about. A promise is a first class execution that we can compute about.
timeout IDs are similar in a sense. By having the promise as an object, you can do neat things to modify program control flow as a separate concern from resolving a particular asynchronous operation. Sure, you can compose promises, just like you might with callback-oriented code. But you can also do neat things like await an object graph of mixed synchronous and promised values, eg: https://npmjs.org/package/resolved
An emergent property of #2 is that promises provide composition. In purely callback-oriented code, composition tends to be a pain, whether it's chaining operations or synchronizing on multiple concurrent ones.
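For instance, a rough sketch of chaining, assuming each of these made-up functions returns a promise:

  openDb()
    .then(loadUser)
    .then(loadPreferences)
    .then(applyTheme)
    .fail(showError);   // a single rejection handler covers the whole chain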
> Doesn't look like ES6 is going to pick a promises winner
Why would ES6 "pick a promises winner"? Promises don't need to be a language feature.
(ES6 will probably lead to people using coroutines with promises mostly being an implementation detail of that, e.g. http://taskjs.org/)
If they don't, the surrounding ecosystem is less likely to standardize on a single promise implementation. That means you'll have fragmentation where foo uses Futures and bar uses Promises so you have to use a bunch of Future<->Promise shims to get them to play together.
This is actually something I think is really nice about Dart: Futures are baked into the core library so they are The Way(tm) to pass around asynchronous computations.
Promise consumers have a very small interface to worry about, and in fact there are many interoperable implementations (Q, when, rsvp, etc). Unfortunately, jQuery is not one of those implementations, because rejections don't propagate.
At any rate, JavaScript is not built around having a monolithic base library, and as others have pointed out, it would not make sense to have language level support for promises per se.
However, I do agree that it would be great to have host APIs, for example, DOM, to make more widespread use of promises. In fact, WinJS for Windows 8 _does_ make use of promises in their async APIs.
> If they don't, the surrounding ecosystem is less likely to standardize on a single promise implementation.
There's no need for a single promise implementation as long as they all implement the same API, in this case CommonJS Promises/A. And for the most part they do and are interoperable.
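For example, because every conforming library exposes the same then(onFulfilled, onRejected) interface, one library can assimilate a promise produced by another. A sketch with Q (otherLibraryCall is made up):

  Q.when(otherLibraryCall()).then(function (value) {
    console.log('got', value);
  }, function (err) {
    console.error(err);
  });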
I am also rather confused by this concept of 'real' objects you may 'pass around'. Those are wrapped callbacks, not real objects; you have to keep that in mind anyway.
So, as far as I can tell, there are two benefits this article outlines: avoiding highly nested code, and handling certain types of errors better. Are there other benefits?
In my experience, promises can be more difficult to debug (once the promise library takes my callback, I can't follow the flow of execution until it is called, unless I crack open the library itself), and are less intuitive. While those shouldn't disqualify the idea immediately, it does make me hesitate.
And I'm not sold on the benefits to error handling either. In the author's example, yes, all those error handlers could be grouped together - but how often do you have that many async function calls with identical error handlers? In most situations, that is a warning sign that you aren't handling errors properly and with enough "resolution".
All that said, I don't think promises are useless. There may be times when it makes sense to use them. But calling them the "next great paradigm in JavaScript programming" seems like a bit of a stretch to me.
The article immediately dives into what promises actually are, and you've done an excellent job of summarizing possible benefits from the examples listed.
I would advise most strongly against this exercise in extrapolation: it is not a productive task to introduce a new technology via use cases without establishing some grounds for what the technology's role is. You've attempted to define, after reading, what that role is, and that to me bespeaks a horrible exercise that will draw bad results, and it makes for a poor, likely fruitless introduction for people who don't already know.
The net is this: a promise is a value you have in hand that describes something of the future; you can pass it around, chain further promissory stages of computation onto it, or combine it with other promises. You have a symbol for the future that you can continue to operate on. That is the value of promises: it's just an object. There are no other JS constructs we have for a first-class future: the promise is the sum of taking as much away from the asynchronous as we can, of boiling the asynchronous future down into a mundane, primitive value. Its surpassing notability is that a promise is just an object, unlike everything else we have for reasoning about the future.
In callback style, one has to know ahead of time what one is doing in the future and have a function fully specified to deal with it. I see promises as fairly close to event-style programming using only .once handlers, but promise libraries also focus on tasks like combining promises (Q.all, Q.allResolved), and there is chainability and error handling built in: these add up to making promises far easier to use in practice than events have been for me.
At least in node, you usually pass errors all the way back up the call chain. With normal callback style error handling, you have to put "if (err) return cb(err)" after each async call. It's crazy.
With promises, you can attach an error handler once. If you think of errors as only unexpected conditions, you can have a single error handler for each set of operations and not have to worry about checking at each step.
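A sketch of the contrast, with step1 and step2 standing in for hypothetical async functions (callback-taking in the first version, promise-returning in the second):

  // Callback style: every level repeats the error check.
  step1(function (err, a) {
    if (err) return cb(err);
    step2(a, function (err, b) {
      if (err) return cb(err);
      cb(null, b);
    });
  });

  // Promise style: one rejection handler covers the whole set of operations.
  step1()
    .then(step2)
    .then(function (b) { cb(null, b); }, cb);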
> With normal callback style error handling, you have to put "if (err) return cb(err)" after each async call. It's crazy.
Or use a sane async library. As far as I can tell, something like https://github.com/caolan/async gives you a strict superset of Promise features, unless you're for some reason tied to how code with promises looks.
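For example, async.waterfall lets you chain node-style callbacks and handle any error in one final callback (readConfig and connect are made up):

  var async = require('async');   // https://github.com/caolan/async

  async.waterfall([
    function (cb) { readConfig(cb); },               // cb(err, config)
    function (config, cb) { connect(config, cb); }   // cb(err, conn)
  ], function (err, conn) {
    if (err) return console.error(err);              // single place to handle errors
    // use conn
  });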
I've tried stuff like this before, but it clutters your code pretty bad anyway, doesn't it? You have to do something like this:
  function myAsyncThing(cb) {
    var handleErrors = errHandler(cb)
    doSomethingElse(handleErrors(function(data) {
      // repeat: use handleErrors at each callback
    }))
  }
Am I missing something? Can you get it cleaner than that?
If you're using Node you should be writing small(ish) modules anyway, so it doesn't lead to much code bloat, and it's advantageous to consolidate your error handling in one or two spots.
Promises are not great; I'd much rather have real coroutine support. But given that we don't have real coroutine support, promises are absolutely essential to writing sane, fault-tolerant JavaScript.
Promises make a handy wrapper for any kind of value-in-the-future, be it the mundane case of invoking non-blocking functions with callbacks, or the case of getting values from remote systems (https://github.com/kriskowal/q-connection), and, I dare say, if we had coroutines they'd make a great first-class object for pulling result values out of those too.
A promise is agnostic to your concern. Thanks for writing.
If you have coroutines, I don't think promises/futures are worth much. You don't need a value that represents "something I don't have yet" because an API can just block the current coroutine until it has the value.
There may be some utility in being able to pass them around so that the code that calls the API isn't the place where the block occurs, but I'm not convinced it's worth the extra complexity and API fragmentation to care about that.
  val resultA = something()
  val resultB = somethingElse()
  return resultA + resultB
With Futures, the processing for resultA and resultB would be done in parallel, because the call to something() does not block. To combine them you don't even need to wait for the processing to finish, because you just create another Future.
Also, being able to pass them around is a really important use-case and Futures are also all about error handling. With Scala it's really trivial to wrap the async support in Servlets 3.0 and make your controllers return Future responses. Then in the servlet you just attach onComplete and onFailure events to return the request when it's ready or when it fails. Works like a charm.
The greatest thing about Futures is that the concept is entirely agnostic to the underlying implementation details. You could have a Future result that's being processed asynchronously by an IO loop (e.g. ning's AsyncHttpClient), you could have a Future result that's being processed by a thread and you can combine the results.
If anything, I don't think you worked with Futures much. You should try it out in Scala, where Futures (from Akka or Scala 2.10) are modelled as monadic types, making them highly composable. Doing multi-threading with Futures in Scala is like working with Lego blocks.
I'm curious what language you have used coroutines in where it works out that way.
In my experience of using fibers in Ruby (which I _think_ are substantially/exactly the same as coroutines?), it hasn't been like you describe at all: promises, or some other higher-level abstraction, are still pretty necessary to do useful and comprehensible things with them. So my experience colors my understanding and I think, "What's he talking about? That doesn't make any sense."
More likely, you have experience in a language where things are done differently enough that it all comes out different.
If for some reason you end up writing javascript that only needs to run on Firefox, you do get coroutines via the yield keyword. Pretty sure it will also appear in a future version of javascript.
An analogy I like for the "language support" part is Python generators. All you need to do to turn a regular function into an async one is put the `yield` keyword where appropriate. Promises are less bad than continuation-passing style, but they still require you to rewrite all your code against the promise library instead of reusing existing language constructs.
If you want to carry the analogy even further generators can also implement coroutine patterns, if you use yield as an expression (for communication) and yield from to do nested generators.
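A rough JavaScript sketch of the same idea, driving an ES6-style generator with promises (roughly what task.js-style libraries do; run, getUser, getPosts, and render are made up):

  function run(genFn) {
    var gen = genFn();
    function step(result) {
      if (result.done) return;
      result.value.then(
        function (value) { step(gen.next(value)); },   // resume with the resolved value
        function (err)   { step(gen.throw(err)); }     // throw rejections into the generator
      );
    }
    step(gen.next());
  }

  run(function* () {
    var user  = yield getUser(id);   // reads like synchronous code, never blocks
    var posts = yield getPosts(user);
    render(user, posts);
  });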
| "An await expression does not block the thread on which it is executing. Instead, it causes the compiler to sign up the rest of the async method as a continuation on the awaited task. Control then returns to the caller of the async method. When the task completes, it invokes its continuation, and execution of the async method resumes where it left off."
The implementation detail is that your caller's state somehow needs to be re-established when the asynchronous wait finishes evaluating. Anyone mildly versed in computer hardware knows that to be, in most situations (some green-threading environments aside), a very onerous, scary, intensive thing, to say nothing of the task of capturing and storing the continuation, which is likely fairly deep in a call stack somewhere.
It's great magic, it's certainly putting a lot of magic programmers like to use at their fingertips. But be advised that it is somewhat scary, handing out magic wands like candy to newcomers and telling them it's OK and good.
It's sorta unfortunate magic because it's a special case in the compiler. Instead of having a general use monad-like syntax, "await" gets special treatment. Compare this to F#'s approach, where the equivalent feature (async in F#) is just a library.
The actual transformations should be quite straightforward and give you the "pyramid" code the article mentions.
The magic that will bite you isn't the transformation as much as the runtime library that handles threading and the hidden choices made for you there. For instance, in ASP.NET, doing fooAsync().Result causes a deadlock.
It's only magic if it's poorly documented or not documented at all. I'm aware of many of the internals, and have written my own SynchronizationContext before. Execution actually can (and often does) continue using the same thread.
I still strongly believe it's better to make cool things available rather than try and protect people from themselves.
I'd fallen out of the .NET world before Async/Await arrived, but were someone to put some under-the-hood, how-to-monkey-around-with-it docs under my nose, I'd love to brush up. I seem to have osmosed that there is some compiler transformation going on, but I've not been exposed to ways to monkey around with it and did not know there was more than a use-only black box: it would be lovely to get an engine-bay tour. Have you any recommendations?
For what it's worth, this is exactly how Parse's .NET SDK works. Promises in JavaScript go a long way toward improving the experience of async development, but I thoroughly agree that language-level support for this type of construct is immensely helpful.
Funnily, I was just doing some comparison shopping between different promise libraries as well.
Besides avoiding highly nested code, and handling certain types of errors better, I've found that they also provide an optional progress callback, in addition to the success and error callbacks in then(). This is highly useful in reporting the progress of a long running process, to help give the user some update or stream the results as it's being completed. That cuts down on both the actual and perceived response time.
In normal callback-style, it's awkward to have two callbacks, one for success/err, and one for progress. In addition, it's not the canonical form for node.js, so we often skip doing it.
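A quick sketch with Q, where the producer reports progress via deferred.notify and the consumer passes a third handler to .then (longRunningJob and updateProgressBar are made up):

  function longRunningJob() {
    var deferred = Q.defer();
    var done = 0;
    var timer = setInterval(function () {
      done += 10;
      deferred.notify(done);            // progress notification
      if (done >= 100) {
        clearInterval(timer);
        deferred.resolve('finished');
      }
    }, 100);
    return deferred.promise;
  }

  longRunningJob().then(
    function (result) { console.log('done:', result); },
    function (err)    { console.error(err); },
    function (pct)    { updateProgressBar(pct); }   // the optional progress callback
  );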
>That’s getting pretty ridiculous...but because of the way promise chaining works, the code can now be much flatter.
In what way is the indented code ridiculous, and why is flatter better?
Conceptually, you still have to think of the callbacks operating the same way in order to properly understand the code. Flattening it out and changing the shape so that it isn't nested doesn't really help change what is going on. Maybe I'm missing the bigger picture.
Indenting is a symptom of having to write code structurally, of having to tightly couple the call site with its handler.
Promises break that structural linkage: we have the only thing in JS that matters, an object, a first class plain old object, that we can reason with and perform operations on as if it were any other object. It happens to be an object about a future which will one day resolve, but it's just an object.
And because it's an object that can, after its inception, have handlers added to it, we break the structural linkage that callbacks necessarily mandate: we can add callbacks later, we can decide conditionally which callbacks to attach, we can compose or chain more promises after our first promise and put our handler on that... we have a first-class thing to work with, not just a function call that expects a fully-readied handler. It defers the need to define handling.
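A tiny sketch of what that deferral looks like (fetchUser, render, logError, and debugMode are made up):

  var userPromise = fetchUser(id);     // the work kicks off now

  // ...elsewhere, possibly much later and in another module:
  userPromise.then(render);
  if (debugMode) {
    userPromise.then(null, logError);  // attach an extra handler only when we want it
  }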
Read more of my comments to get a better sense of why A) I adore promises and B) I feel this article is injurious to the case for promises.
It might not matter to you with two levels, but once you get to three levels of nesting, I bet it'd be more clear how much nicer the flat version is.
> you still have to think of the callbacks operating the same way in order to properly understand the code.
It depends on your conception. ;) Maybe it's my Haskell/monad experience, but to me, there's a difference between "here's the control flow between these bits of computation" and "where does this level of nesting end, again?"
I'll echo what rektide said, but add that the flat code is easier to modify. It's easy to add another promise anywhere in the flat chain, and it's easy to add conditional logic if you need to. In addition, I think we'll see easy-to-reason-about operators built on top of the promise "primitive" that you couldn't even imagine for the nested code.
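For example, a sketch of splicing conditional logic into a flat chain (the names are made up):

  fetchUser(id)
    .then(function (user) {
      return user.needsMigration ? migrateUser(user) : user;   // easy to add or remove this step
    })
    .then(render)
    .fail(showError);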
I'm glad promises are gaining traction in JavaScript, but it's a shame they chose to use jQuery promises rather than CommonJS Promises/A. The differences in error handling are significant.
See Domenic Denicola's comment at the bottom of the article, and his excellent "You're Missing the Point of Promises" post: https://gist.github.com/3889970
Most of the time I'm processing data that is on a queue, and I have some kind of event loop. Promises are great for simple things, I guess. But they just seem like extra, unnecessary syntax. I tried a while ago looking at the different libraries to see if they would give me a "resolveAll/sequential/parallel" for a list of promises, but none of the libraries even seemed to do something very basic like that, which is where I would get any actual utility. I guess most promises are told to "execute" (do their asynchronous thing) as soon as the promise is created. I'm more interested in being able to have more fine grained control about when the execution happens, too.
All in all, I find promises to be pretty pointless in javascript. Until ES6, that is. I also experimented with the "yield" expression in google's traceur compiler, but they were still working out their issues last time I tried.
I think I'm the only one in this world who doesn't mind the async pyramid. To me, it is very intuitive. Also, if your pyramid is too deep, chances are you need to extract some of it into functions.
I hate the async pyramid because explicit continuation passing ought to be a low-level representation generated exclusively by machines, instead of by programmers.
For example, one of the great things about high level programming languages is that you can write composable expressions such as
f(g() + h())
but you can't do that if f and g are written in CPS style, since then you need to give explicit names to all the intermediate "registers" used in the computation, as well as bake in a specific order of evaluation:
  g(function(a){
    return h(function(b){
      return f(a + b)
    })
  })
> Also, if your pyramid is too deep, chances are you need to extract some of it into functions.
This sounds like that "if your method is more than 15 lines you should split it into subfunctions" advice. I never liked this line of reasoning, because the complexity (number of possible interactions) in a module increases quadratically with the number of functions in it. You fight this complexity by only breaking down a function on "natural" boundaries dictated by encapsulation and reuse, rather than by code size.
Promises are also pretty essential when doing RPC-esque back-and-forth with web workers, when you need to distribute a workload and wait for all the results to come back before continuing a chain of ops on the main thread.
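A rough sketch of that pattern with Q and plain Workers, assuming one outstanding job per worker at a time (runOnWorker, workers, and chunks are made up):

  function runOnWorker(worker, payload) {
    var deferred = Q.defer();
    worker.onmessage = function (e) { deferred.resolve(e.data); };
    worker.onerror   = function (e) { deferred.reject(e); };
    worker.postMessage(payload);
    return deferred.promise;
  }

  // Distribute the chunks across the workers and wait for every result:
  Q.all(chunks.map(function (chunk, i) {
    return runOnWorker(workers[i % workers.length], chunk);
  })).then(function (results) {
    // continue the chain of ops on the main thread with all results in hand
  });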
Promises actually aren't far off from functional reactive programming (FRP). The idea in common is to take the abstract notion of an entry point and make it an actual object to be manipulated.