Of Lisp Macros and Washing Machines (loper-os.org)
99 points by MrJagil on May 13, 2011 | hide | past | favorite | 36 comments


I like macros and have used them heavily for most of my career. But in practice, macros have some limitations:

1. Macros allow you to easily improve your syntax, but offer less guidance for improving your semantics. To use a mathematical analogy, it's nice to have a better notation, but what you really want are better definitions and theorems. Most macros are thin syntactic wrappers; few offer profound semantic insights. (Although there are some lovely examples of the latter in PG's excellent On Lisp: http://www.paulgraham.com/onlisp.html .)

2. Macros almost never call macroexpand recursively, except inside of code bodies. This means that macros like defclass usually cannot be extended in an incremental or modular fashion. Instead, you need to declare a def-your-class macro that can't usefully compose with any other def-her-class macro that you might encounter.

3. Weaker mechanisms than macros can accomplish nearly as much, with fewer issues similar to (2). For example, you can re-implement many Lisp macros using Ruby's block syntax, metaobject protocol, and ability to call class methods from inside a class body. And you can usually compose the resulting DSLs freely, even when they're analogous to defclass extensions—witness all the ActiveRecord add-ons.
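To make that concrete, here's a minimal Ruby sketch (the `has_flag` helper is invented for illustration): plain method calls in a class body run at class-definition time and can generate methods, which covers much of what a thin defclass-wrapping macro would do:

```ruby
# Hypothetical sketch: a class-level DSL in plain Ruby, no macros.
# `has_flag` is an invented helper, not a real library API.
class Model
  def self.has_flag(name)
    # Runs while the subclass body is being evaluated, and
    # generates a reader and a toggler method on the fly.
    define_method(name) { instance_variable_get("@#{name}") || false }
    define_method("toggle_#{name}!") do
      instance_variable_set("@#{name}", !send(name))
    end
  end
end

class Post < Model
  has_flag :published   # composes freely with other class-body helpers
end

post = Post.new
post.toggle_published!
post.published  # => true
```

Because `has_flag` is just a method, two independently written helpers can appear in the same class body without any coordination--which is the composability point above.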

So I like macros, but I no longer think that they're the ultimate high-level abstraction. They're mostly a nice way to incrementally improve your notation, except in the hands of programming language designers who are prepared to pull out all the stops.

Vladimir Sedach: The Haskell gang is primarily interested in advancing applied type theory.

If we're talking about high-level abstractions, this is probably not the best way to describe Haskell. (In fact, Haskell's type system is notoriously ad hoc, compared to languages like ML.)

Originally, Haskell started out as a project to explore lazy evaluation and programming without side effects. But in recent years, the Haskell community has been investigating ways to build better domain-specific languages using ideas from mathematics: combinators, monads, co-monads, arrows, derivatives of types, and so on.

So far, many of these Haskell ideas have a steep learning curve, but the results are nonetheless impressive. If I were trying to design a "hundred-year language", I would definitely pay close attention to the Haskell community—they just bubble with fascinating ideas. And mathematical ideas tend to endure.


> Macros allow you to easily improve your syntax, but offer less guidance for improving your semantics.

This IMO is the biggest problem with macro usage today and needs to be overcome. I've started a campaign against the "macros are syntactic sugar, and I have a sweet tooth!" camp (http://carcaddar.blogspot.com/2010/08/input-needed.html). I recommend reading this excellent parody blog post (written at the height of PG-inspired Lisp mania) for background: http://classic-web.archive.org/web/20070706135848/brucio.blo...

> Macros almost never call macroexpand recursively, except inside of code bodies.

This is by design. The expansion of a macro needs to be opaque - otherwise where is the abstraction? When you macroexpand, you're opening the black box. You can only rely on macroexpanding macros that you have some control over.

> This means that macros like defclass usually cannot be extended in an incremental or modular fashion.

This is because it's the same problem as extending an arbitrary grammar. You can't compose two grammars and expect the result to be non-ambiguous.

> Instead, you need to declare a def-your-class macro that can't usefully compose with any other def-her-class macro that you might encounter.

This is completely, totally wrong. Things like def-your-class is the reason I started my education campaign. You should never write a def-your-class macro. Macros are the wrong way to solve this problem, which is why CLOS and the CLOS metaobject protocol include all the facilities you could want for extending defclass in a composable way (Art of the Metaobject Protocol is a comprehensive, but confusing, reference).


You can only rely on macroexpanding macros that you have some control over.

I think we're looking at this from exactly opposite directions. I'm not talking about code-walking pre-existing macros, but rather providing hooks for other people to extend yours:

  ;; Completely hypothetical package-management system.
  (define-module foo
    (require 'bar)
    (file "foo.ss")
    (file "foo-tests.ss" :in-mode 'test))
There are a lot of macros like this in the Lisp world, and they define non-extensible DSLs. For example, you can't define a macro tested-file to eliminate the duplication above:

  (define-module foo
    (require 'bar)
    ;; Doesn't work:
    (tested-file "foo"))
This could be supported if define-module internally called macroexpand-1 on unrecognized child forms until it saw either require or file. Then you could define a tested-file macro to extend define-module.

Note that the actual advisability of this approach varies greatly between Lisp and Scheme dialects.

Contrast the following ActiveRecord example:

  class Person < ActiveRecord::Base
    belongs_to :organization
    
    state_machine :initial => :starting do
      state :starting
      state :online
      state :stopping
      state :offline
    end
  end
Here, we have a third-party state machine library extending ActiveRecord. This is just one of several places where many Lisp macros tend to be broken or inflexible.
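The mechanism behind that composability needs no macro machinery at all. Here's a rough Ruby sketch of the idea, paralleling the hypothetical define-module above (all names are invented): the "child forms" are ordinary method calls, so third parties can layer new ones on top without the module system knowing about them in advance.

```ruby
# Invented sketch of an extensible builder DSL in plain Ruby.
class ModuleDef
  attr_reader :files

  def initialize(&block)
    @files = []
    instance_eval(&block)   # evaluate the DSL body against this object
  end

  def file(name, mode: :normal)
    @files << [name, mode]
  end
end

# A user extension, added by reopening the class: it composes with
# `file` for free -- exactly what tested-file could not do above.
class ModuleDef
  def tested_file(base)
    file("#{base}.ss")
    file("#{base}-tests.ss", mode: :test)
  end
end

m = ModuleDef.new { tested_file "foo" }
m.files  # => [["foo.ss", :normal], ["foo-tests.ss", :test]]
```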

Of course, macros are a very useful tool. But they have limitations, and other languages also provide interesting mechanisms for "writing programs that write programs", with different tradeoffs.

You should never write a def-your-class macro.

Well, even if the CLOS metaobject protocol does solve this problem, it doesn't fix defsystem, or any of the other thousand non-extensible defblah forms in the Lisp community.


The amount of OO runtime metaprogramming crap out there far outweighs the amount of macro metaprogramming crap, for the simple fact that hardly anyone knows anything about macro metaprogramming. I've seen only a few non-extensible defblah and a metric ton of non-extensible OO bullshit in major projects.

I agree that you have some valid points but your overall argument holds little water. In good Lisps you have access to both runtime and compile time abstractions. Used wisely they will always trump a system that can only lean on runtime abstractions.

I'll also note that in my experience runtime metaprogramming is horribly painful. And many, many, many others have pointed out this fact. There are numerous cases where structural transformation is far simpler to reason about.


Oh, I absolutely agree that OO languages do gross things, too. If somebody gets me started on AbstractRequestProcessorFactoryFactory, or inappropriate use of instance_eval, I can keep going for hours. :-)

Overall, macros are a very useful tool. But there's a school of thought that sees them as the ultimate, universal abstraction. I'm no longer convinced by that argument.

However, I do agree with your remark, elsewhere in this discussion, that The Reasoned Schemer is an extraordinarily fine example of what macros can be used for. It's one of my all-time favorite programming books.


> This could be supported if define-module internally called macroexpand-1 on unrecognized child forms until it saw either require or file. Then you could define a tested-file macro to extend define-module.

If tested-file expanded to just file, it means that the only thing it can do is have side-effects at compile-time. Those side-effects won't be preserved in compiled files. So tested-file has to expand to something that includes file and the extra code, and define-module has to specify what that expansion can look like and how it will be handled. The design of this protocol can't be automated, because code-walking the expansion of tested-file is potentially Turing-complete.

So now you have two problems:

1. every define-foo macro needs a protocol to specify how it can be customized

2. this only works at compile-time

The need for this kind of customization is where the meta-object protocol arose from. Read up on Gregor Kiczales' work on open implementations (http://www2.parc.com/csl/groups/sda/projects/oi/ieee-softwar...). Putting in the required hooks into the interpreter (object system) using a restricted protocol provides a common system that all customizations can share (no need to design your own define-foo protocol) and is available at all stages (compile and run-time).


> If we're talking about high-level abstractions, this is probably not the best way to describe Haskell. (In fact, Haskell's type system is notoriously ad hoc, compared to languages like ML.)

In what sense is Haskell's type system any more ad-hoc than ML's?

They're almost equivalent, except for Haskell's type-class extension, which is used for ad-hoc polymorphism. Support for ad-hoc polymorphism does not make the type system ad-hoc. It means you can use (*) for multiplication between any type of number, which is very useful.

Many GHC extensions to the type system are also based on sound theory (Rank 2/N types, type families, GADTs) and not ad-hoc. ML lacks these too.

Haskell is at the forefront of applied type theory; AFAIK, ML seems to stagnate in this area.


In what sense is Haskell's type system any more ad-hoc than ML's?

According to Mark P Jones, the author of "Typing Haskell in Haskell":

Haskell benefits from a sophisticated type system, but implementors, programmers, and researchers suffer because it has no formal description. http://web.cecs.pdx.edu/~mpj/thih/

Haskell's type system was designed to make programming delightful. I vastly prefer it to some of the contortions of ML's type system, such as the separate + and +. operators. But by functional programming standards, I think that any type system without an official description can be fairly described as ad hoc.


I posted the following comment earlier today, but it was moderated by the infantile censorship police on HN:

Before muddying the water for future generations of potential Lisp programmers in this forum please go and master macros first instead of spewing unfounded bullshit about their limitations.

1. Macros were never meant to 'offer guidance for improving your semantics' however they are the mechanism that makes it possible to create your own new semantics (which is just not possible in any other family of languages). The semantics that you want to create are often specific to the domain you may be working in. Learn from those who have mastered the domain and created their own semantics and then come up with your own.

2. It's clear that you've been drinking the OO koolaid for a while. Let it go and try really learning macros. Leave the baggage at the door, it's just not required where you can go with macros.

3. Ruby is a badly diluted hack that borrowed some ideas from Lisp and other languages. No, you cannot accomplish in Ruby what is possible with Lisp macros. For a start, all the Ruby mechanisms that you suggest execute at runtime. Ruby does not have direct access to its AST the way that Lisp does. Lisp and Ruby are not even in the same class of languages. Contrast Ruby with Java, and then yes, it looks great.

If you're talking about the "hundred-year language" then just forget Haskell. Lisp is the future; go and learn Qi. Qi has a Turing-complete type system, and this is already light-years beyond what Haskell's type system will ever have.


If you want to understand the beauty of macros don't read silly trollish blogs. I've said it a million times and I'll say it a million more - look at very good Lisp code that wisely unites elegant runtime code with just enough macro sugar to have something beautiful and idiomatic. So far I can think of no better text than The Reasoned Schemer, http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&.... You get the soul of Prolog in 200 lines of Scheme R5RS. The entire system fits on 2 pages.

This book is written by three true masters of macros and functional programming: Oleg Kiselyov, William Byrd, and Daniel P. Friedman.


Thank you for suggesting good code to read, so many comments like this just leave people hanging. :)

I know great C code to suggest, but draw a blank when it comes to Lisp macros done right.


"Wadler created ML to help automate proofs". Minor nitpick: actually, ML was designed by Robin Milner: http://en.wikipedia.org/wiki/ML_(programming_language)


What do CRUD apps have to do with Lisp macros? The points are disconnected, and they are introduced with no evidence or logic to back them.

The only thing I can gather is this guy is about to become a billionaire, because he has found a way to rid the world of spreadsheets and CRUD apps... apparently with steel mills.


I try to think the best of people. With no code to Loper OS that I can find, and a writing style that perpetuates Lisp elitism using peculiar analogies to the Industrial Revolution without any evidence that this elitism is warranted--I use Lisp, I like Lisp, but it's one tool in a bag--I find it hard to believe that Mr. Datskovskiy is serious. Surely this is brilliant satire?


> without any evidence that this elitism is warranted

What kind of evidence would you accept?

And if you won't accept it from R. Gabriel, P. Graham, P. Greenspun, and other "household" names, why should the rest of us bother sweating to assemble a watertight case for your wastebasket's eyes?


> What kind of evidence would you accept?

Code. Every author takes on himself the onus of justifying his claims. I am familiar with the works of the authors you mention; they present a good argument that Lisp is a fine language for fine works. It is an entirely different argument to assert that, merely through the use of Lisp, you will revolutionize the world.

That, as I read it, is the claim made by the original author. It is that claim which has gone unsubstantiated, for years. The rest of you--incidentally, no need to use exclusionary language: we are all friends here--need do nothing. Mr. Datskovskiy has some code to deliver. A bootloader in two years is no accomplishment, though Mr. Datskovskiy has excelled in punditry in this period.


Original author speaking.

If you actually read the blog (unbearable ordeal, I understand) you would know that Lisp (in available incarnations on available hardware) is not enough:

http://www.loper-os.org/?p=284

Let's see how much of an OS and hardware architecture you can design in several years - without using a single line of code written by another person. Starting with the NAND gate - because that is where you will have to start if you truly intend to avoid recapitulating the mistakes of the past three decades.

There was no sequel to the bootloader - I have been occupied with reverse-engineering the bitstream format of a major brand of FPGA (no naming names) in order to dispense with the proprietary x86 toolchain (and eventually with the abomination called Verilog.)

Publishing incremental progress in bite-sized chunks is vastly over-rated. It wastes everyone's time and contributes to the proliferation of hideously ragtag systems reminiscent of "Junkyard Wars" (aka much of the software industry.)

Punditry, on the other hand, is a relaxing and inexpensive hobby. The hosting bill for the past year of the blog was around $30.

Most of my time is spent working on unrelated efforts which I am not at liberty to make public. Currently, they leave precious little time and energy for my long-term (think lifetime) project.

Feel free, of course, to imagine that I am an idler sitting in an armchair drinking the days away, if it makes you happy.


Clearly you are a man of strong opinion. You are also needlessly rude and insufferably self-important. I apologize that you took my critique so personally, without, of course, apologizing for the critique itself. May you succeed in your ambition.


> I have been occupied with reverse-engineering the bitstream format of a major brand of FPGA (no naming names) in order to dispense with the proprietary x86 toolchain (and eventually with the abomination called Verilog.)

I sincerely hope you succeed. Verilog needs to be killed and replaced with something saner.


I told a friend how I was going to implement my next project in Lisp. He immediately thought I was nuts to do so -- that it would never get acquired or it would be too hard to hire people to work on when it grows. All of the tried-and-true objections to using Lisp for a "real" project.

Then I read an article like this and realize I've been explaining it all wrong.

This is a great read and a real eye-opener.


Give your programmers a copy of _Practical Common Lisp_, it's not difficult to learn.



Eh, in lazily-evaluated languages such as Haskell, there is much less need for macros.


True, but can't we say that about every non-trivial PL feature? For example, we could say that:

In OOP languages such as Smalltalk, there is much less need for macros.

The author's point is that programming languages are great when you're cutting with the grain, when you're working in the domain the author optimized the language around. But when you leave the domain, macros are an important tool.

Don't get me wrong, I think that lazy evaluation is an incredibly valuable tool for separating what from how. But I wonder how it obviates the need for macros any more so than a well-defined OO system or any other non-trivial language feature.


sedachv actually got my point. The standard need for macros rather than functions is to handle staged computations, distinguishing between names and objects. Lazy evaluation really does strike right at the heart of this, unlike something like "object system".


Closures give you this (staged computations). Macros really give syntactic abstraction. That's it. That's nice, but you get the rest via other mechanisms.


Closures can't do partial evaluation, macros can.


I wasn't trying to imply you can do everything with closures, but rather that you could do lazy evaluation with them (I guess staged computations could include various forms of pre-runtime evaluation).
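For instance, in Ruby (a language already discussed in this thread), wrapping an expression in a closure delays its evaluation; `my_if` below is an invented helper, not part of the language:

```ruby
# Sketch: emulating lazy evaluation with thunks (closures).
# Only the chosen branch is ever evaluated.
def my_if(cond, then_thunk, else_thunk)
  cond ? then_thunk.call : else_thunk.call
end

result = my_if(1 < 2,
               -> { "then branch" },
               -> { raise "never evaluated" })
result  # => "then branch"
```

Strictly this is call-by-name rather than true call-by-need (no memoization), but it covers the conditional-evaluation use case that macros are often reached for.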


In my opinion, Smalltalk is a prime example of a language that is desperately missing macros. Most Smalltalk code is boilerplate that is automatically generated by the IDE; replacing this automatic generation with macros seems like a natural thing to do. The Smalltalk community seems to prefer to build complex tools to automatically generate and modify code in its textual form, which I think is generally solving the wrong problem.


Lazy evaluation gets rid of the need for one common macro pattern (conditional evaluation of an expression) and makes the macro/function mismatch go away (the (reduce #'macro arguments) kind of thing - although fexprs also avoid this problem).

However, you need monads to express other patterns of control flow (exceptions and dynamic scoping, for example) that are possible with macros. The problem is that you have to do everything through bind. Macros don't have that restriction.

But the one thing that makes macros irreplaceable is the fact that they can operate at compile-time. This is what I meant to emphasize when I wrote about automating programming.


I was in fact thinking of such common uses as the loop macro.

> But the one thing that makes macros irreplaceable is the fact that they can operate at compile-time.

"Sufficiently smart compiler" is the standard strawman argument. It's not a sufficient response, but it is actually pretty astounding what can be handled at compile time.


"I was in fact thinking of such common uses as the loop macro."

Smalltalk has always done iteration this way with blocks (closures), and because of the syntax it works and looks nice.
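Ruby inherited this style from Smalltalk, so the same point can be shown there: the "control structure" is an ordinary method taking a closure, with no macro involved.

```ruby
# Block-based iteration: `each` and `inject` are plain methods that
# receive closures -- the Smalltalk style described above.
squares = []
[1, 2, 3].each { |n| squares << n * n }
squares                     # => [1, 4, 9]

sum = [1, 2, 3].inject(:+)  # => 6
```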

Of course this needs the "sufficiently smart compiler" to be efficient. But there's no way to program domain-specific knowledge into the compiler like you can with macros. For example Gerald Sussman has done a lot of work in optimizing numerical code symbolically (http://repository.readscheme.org/ftp/papers/berlinsurati-pep...), I think _On Lisp_ has a chapter about doing partial evaluation on Bezier curves, and there's some regex-specific tricks in CL-PPCRE that I don't think can be matched by generic partial evaluation.


But there's no way to program domain-specific knowledge into the compiler like you can with macros.

You can. It just won't be as syntactically elegant. Use the expression trees and code-emission facilities the language exposes. The optimization happens at run-time rather than compile time. But you'll actually, in most cases, get more optimization opportunity.


I will have to take a look at these.

I loved the flexibility they allowed when I first used them, but these days they often feel to me like going down a level in that more needs to be expressed. Yeah, if your tools can't efficiently handle your current case, that escape hatch is incredibly valuable.


In eagerly-evaluated languages such as almost-anything-other-than-Haskell, there is much less need for monads or other weird contraptions that make it difficult to reason about the behavior of the program. When Haskell fits the problem, it's very good; when it doesn't, you probably need to employ non-portable compiler tricks.


Monads are not related to laziness in any meaningful way.

Monads are just a generalization of a commonly recurring pattern.

All imperative languages can be said to have "monads" in the sense of having a "semicolon operator" (in some languages, that's just newline) that is the monadic bind. You're programming in one "ambient monad" whose power is fixed by the language itself.

Haskell has an overridable semicolon, and that gives you a lot of DSL power. Instead of one "ambient monad", you have lots of user-defined monads useful for different purposes.

Besides overridable semicolons, monads are just a plain useful abstraction of data structures.

Monads are incredibly useful in a huge variety of programming tasks (non-determinism, effects, exceptions, parsing, and much more). They make Haskell code utilizing them far more broadly useful than code I've seen in any other language.
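A minimal sketch of that "overridable semicolon" in Ruby, since this thread already uses it; `Maybe` and `and_then` (playing the role of monadic bind) are invented names, not a real library:

```ruby
# A toy Maybe monad: `and_then` is bind, and it short-circuits the
# rest of the chain as soon as a nil shows up.
class Maybe
  attr_reader :value

  def initialize(value)
    @value = value
  end

  def and_then(&block)
    @value.nil? ? self : block.call(@value)
  end
end

# Division that yields "nothing" instead of raising on zero.
def safe_div(a, b)
  Maybe.new(b.zero? ? nil : a / b)
end

safe_div(10, 2).and_then { |x| safe_div(x, 5) }.value  # => 1
safe_div(10, 0).and_then { |x| safe_div(x, 5) }.value  # => nil
```

Each `and_then` is the user-defined "semicolon" between steps: what happens between statements is decided by the monad, not fixed by the language.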



